\section{Introduction} Relaying has been shown to benefit a conventional point-to-point communication channel through cooperation with the transmitter \cite{Cover1979}. It has also been proved that relaying can improve the achievable sum rates of a multiple access channel \cite{Kramer2005} and of a compound multiple access channel \cite{Gunduz2010}. One of the fundamental relaying schemes proposed in \cite{Cover1979} is compress-and-forward (CF). Compared to decode-and-forward (DF) based schemes, CF based schemes are not limited by the decoding capability of the relay, as studied in \cite{Cover1979} and \cite{Gunduz2010}. Different variations of CF based schemes have been investigated in \cite{Cover2007,Razaghi2013,Avestimehr2011,Lim2011}. In \cite{Lim2011}, the noisy network coding (NNC) scheme was proposed. NNC recovers the rate achieved by the classic CF scheme in a three-node relay channel and generally outperforms other CF based schemes in multiuser channels. On the other hand, NNC requires ``long'' source messages to be repeated several times, which significantly increases the decoding delay.
In order to overcome this drawback, short message noisy network coding (SNNC) was introduced by Wu and Xie \cite{Wu2010,Wu2013}. In SNNC, each source transmits an independent short message in each block. In \cite{Hou2012} and \cite{Kramer2011}, SNNC was also shown to achieve the same rates as NNC in multiple multicast sessions. SNNC is usually applied in the full-duplex relay channel. Motivated by the practical constraint that a relay cannot transmit and receive simultaneously in wireless communications \cite{Laneman2003}, a quantize-and-forward (QF) scheme, which originates from NNC \cite{Lim2011}, has been studied for a fading half-duplex relay channel (HDRC) in \cite{Yao2013}. A QF scheme based on single-block, two-slot coding was proposed in \cite{Yao2013} to derive the achievable rates for the HDRC. In this paper, the ``cheap'' half-duplex relay \cite{Khojastepour2003} is also considered. Specifically, the achievable rate regions of the half-duplex MARC and cMACr, shown in Fig.~\ref{fig:MARC-phases} and Fig.~\ref{fig:phases}, respectively, are investigated based on a variation of the QF scheme, namely the GQF scheme. Compared with the QF scheme, the proposed GQF scheme not only adopts the single-block two-slot coding structure but also takes into account the effect of the coexisting interfering message signal at the relay. Compared with classic CF based schemes, the proposed GQF scheme simplifies the operation at the relay (a ``cheaper'' relay) while keeping the advantages of CF based schemes: no Wyner-Ziv binning is necessary at the relay, and the destinations perform joint decoding instead of sequential decoding. The GQF scheme can thus be implemented in any situation where a low-cost half-duplex relay is needed.
\begin{figure}[htpb] \centering \subfloat[MARC Slot-1] {\label{fig:MARC-phase1} \includegraphics[width=1.5in]{MARC-phase1} } \qquad \subfloat[MARC Slot-2]{\label{fig:MARC-phase2} \includegraphics[width=1.5in]{MARC-phase2} } \caption{Message flow of the half-duplex multiple access relay channel. $R$ listens to the channel in Slot-1 and transmits to $D_1$ in Slot-2.} \label{fig:MARC-phases} \end{figure} \begin{figure}[htpb] \centering \subfloat[Slot-1] {\label{fig:phase1} \includegraphics[width=1.5in]{phase1} } \qquad \subfloat[Slot-2]{\label{fig:phase2} \includegraphics[width=1.5in]{phase2} } \caption{Message flow of the cMACr. $R$ listens to the channel in Slot-1 and transmits to both $D_1$ and $D_2$ in Slot-2.} \label{fig:phases} \end{figure} For the channels shown in Fig.~\ref{fig:MARC-phases} and Fig.~\ref{fig:phases}, with the GQF scheme the relay quantizes the signal received from both sources and forwards the quantization index without using Wyner-Ziv coding (binning). Each destination decodes both messages by checking the joint typicality of the signals received in both slots, without explicitly decoding the quantization index. The performance comparison between the GQF scheme and the CF scheme is also discussed in this paper; in order to fit the half-duplex MARC and cMACr, the classic CF scheme has been modified. Furthermore, the rate regions based on the GQF scheme are extended from the discrete memoryless channel to the Gaussian channel, and the performance of the schemes is shown through numerical examples.
\section{System Model} Two discrete memoryless channel models, a half-duplex multiple access relay channel (HD-MARC) and a half-duplex compound multiple access channel with a relay (HD-cMACr), are considered in this work. Since a cMACr naturally reduces to a MARC when the second destination is absent, only the description of the HD-cMACr is given in the following; the HD-MARC is obtained by adjusting the random variables to account for the absence of the second destination. As shown in Fig.~\ref{fig:phases}, a cMACr consists of two sources $S_{1}$, $S_{2}$ and two destinations $D_{1}$ and $D_{2}$. The relay $R$ assists the information propagation from the sources to the destinations by cooperating with the sources. The relay operates in half-duplex mode: $R$ is either receiving signals from the source nodes ($S_{1}$ and $S_{2}$) or transmitting to the destinations ($D_{1}$ and $D_{2}$). Assume that each block spans $l$ channel uses in total and is divided into two slots. The first slot (in which $R$ listens to the channel) and the second slot (in which $R$ transmits to the channel) consist of $n$ and $m$ channel uses, respectively. Hence, the number of channel uses in one block is $l=n+m$.
Each source $S_i$, $i=1,2$, chooses a message $W_i$ from a message set $\mathcal{W}_{i}=\{1,2,\dots,2^{lR_i}\}$, encodes this message into a length-$n$ codeword with an encoding function $f_{i1}(W_i)=X_{i1}^{n}$ and a length-$m$ codeword with an encoding function $f_{i2}(W_i)=X_{i2}^{m}$, and finally sends these two codewords in the corresponding slots. The relay $R$ employs an encoding function based on its reception $Y_{R}^{n}$ in the first slot. Each destination uses a decoding function $g_{i}(Y_{i1}^{n},Y_{i2}^{m})=(\hat{W}_1,\hat{W}_2)$ that jointly decodes the messages from the receptions in both slots. The channel is memoryless, so the channel transition probabilities can be represented by \begin{equation} \begin{split} & p_{Y_{R}^{n}Y_{11}^{n}Y_{21}^{n}| X_{11}^{n}X_{21}^{n}}(y_R^n,y_{11}^n,y_{21}^n| x_{11}^n,x_{21}^n)\\ &=\prod_{i=1}^{n}p_{Y_{R}Y_{11}Y_{21}| X_{11}X_{21}}(y_{R,i},y_{11,i},y_{21,i}| x_{11,i},x_{21,i})\\ \end{split} \end{equation} and \begin{equation} \begin{split} & p_{Y_{12}^{m}Y_{22}^{m}| X_{12}^{m}X_{22}^{m}X_{R}^{m}}(y_{12}^m,y_{22}^m| x_{12}^m,x_{22}^m,x_{R}^m)\\ & =\prod_{i=1}^{m}p_{Y_{12}Y_{22}| X_{12}X_{22}X_{R}}(y_{12,i},y_{22,i}| x_{12,i},x_{22,i},x_{R,i}).\\ \end{split} \end{equation} A rate pair $(R_1,R_2)$ is called achievable if there exist a message set and encoding and decoding functions as stated above such that $Pr(\hat{W}_1\neq W_1 \cup \hat{W}_2\neq W_2)\rightarrow 0$ as $l \rightarrow \infty$.
\section{Main Result} In this section, the achievable rate regions based on the GQF scheme for the discrete memoryless half-duplex MARC and cMACr are presented first. As a reference, the achievable rates based on a modified CF scheme are shown in the second subsection. In the last part of this section, it is shown that the achievable rate region of the three-node HDRC \cite{Yao2013} can be treated as a special case of the result for our five-node cMACr. \subsection{Achievable Rate Region Based on the GQF Scheme} In this subsection, the achievable rate regions of the discrete memoryless HD-MARC and HD-cMACr are derived based on the GQF scheme. The GQF scheme is an essential variation of the classic CF scheme. In the GQF scheme, the relay quantizes its observation after the first slot and then sends the quantization index in the second slot. Unlike conventional CF, no Wyner-Ziv binning is applied at the relay, which simplifies the relay operation. Decoding at the destination is also different: in the GQF scheme, the messages from both slots are jointly decoded without explicitly decoding the quantization index. \subsubsection{Achievable Rate Region for the discrete memoryless HD-MARC} In the HD-MARC, there is only one destination $D_1$, in contrast to the HD-cMACr. $D_1$ receives signals from $S_1$ and $S_2$ in the first slot, and from $S_1$, $S_2$ and $R$ in the second slot. Decoding is complete when $D_1$ finds both messages $W_1$ and $W_2$ sent from the sources.
The following theorem describes the achievable rate region of this discrete memoryless HD-MARC: \begin{theorem}\label{th-QFD-MARC} The following rate regions are achievable over the discrete memoryless HD-MARC based on the GQF scheme: {\setlength\arraycolsep{0.1em} \begin{eqnarray} R_i & < & min\{a_1(i),b_1(i)\} \label{eqn-QFD111-MARC} \\ R_1+R_2 & < & min \{c_1,d_1\} \label{eqn-QFD222-MARC} \end{eqnarray} } where $\beta=n/l$ is fixed, {\setlength\arraycolsep{0.1em} \begin{eqnarray} &a_k(i)=&\beta I(X_{i1};X_{j1},Y_{k1},\hat{Y}_R) + (1-\beta)I(X_{i2};X_{j2},X_R,Y_{k2}) \nonumber \\ &b_k(i)=&\beta[I(X_{i1};X_{j1},Y_{k1})-I(\hat{Y}_R;Y_R | X_{i1},X_{j1},Y_{k1})] \nonumber \\ && \quad + (1-\beta)I(X_{i2},X_{R};X_{j2},Y_{k2}) \nonumber\\ &c_k=&\beta I(X_{11},X_{21};Y_{k1},\hat{Y}_R) + (1-\beta)I(X_{12},X_{22};X_R,Y_{k2}) \nonumber \\ &d_k=&\beta[I(X_{11},X_{21},\hat{Y}_{R};Y_{k1})+I(X_{11},X_{21};\hat{Y}_R)\nonumber \\ &&-I(Y_R;\hat{Y}_R)]+(1-\beta)I(X_{12},X_{22},X_{R};Y_{k2}),\label{set-abcd-MARC} \end{eqnarray} } $i,j,k\in \{1,2\}$ and $i\neq j$, for all input distributions \begin{equation} p(x_{11})p(x_{21})p(x_{12})p(x_{22})p(x_R)p(\hat{y}_R|y_R). \label{inputD-MARC} \end{equation} \end{theorem} \begin{proof} As stated in Section \MakeUppercase{\romannumeral 2}, the HD-MARC can be treated as a reduced case of the HD-cMACr. Thus, the above result can be obtained by simplifying the proof of Theorem \ref{th-QFD} under the assumption that the received random variables at $D_2$ satisfy $Y_{21}=Y_{22}=\phi$. The details of the proof of Theorem \ref{th-QFD} can be found in the first part of the Appendix. \end{proof} \subsubsection{Achievable Rate Region for the discrete memoryless HD-cMACr} In the HD-cMACr, each destination $D_i$, $i=1,2$, decodes both messages $W_1$ and $W_2$. Decoding is complete when each destination finds both messages.
The following theorem describes the achievable rate region of this discrete memoryless HD-cMACr: \begin{theorem}\label{th-QFD} The following rate regions are achievable over the discrete memoryless half-duplex cMACr with the GQF scheme: {\setlength\arraycolsep{0.1em} \begin{eqnarray} R_i & < & min\{a(i),b(i)\} \label{eqn-QFD111} \\ R_1+R_2 & < & min \{c,d\} \label{eqn-QFD222} \end{eqnarray} } where $\beta=n/l$ is fixed, $a(i)=min \{a_{1}(i),a_{2}(i)\}$, $b(i)=min \{b_{1}(i),b_{2}(i)\}$, $c=min\{c_1,c_2\}$, $d=min\{d_1,d_2\}$, {\setlength\arraycolsep{0.1em} \begin{eqnarray} a_k(i)&=&\beta I(X_{i1};X_{j1},Y_{k1},\hat{Y}_R) + (1-\beta)I(X_{i2};X_{j2},X_R,Y_{k2}) \nonumber \\ b_k(i)&=&\beta[I(X_{i1};X_{j1},Y_{k1})-I(\hat{Y}_R;Y_R | X_{i1},X_{j1},Y_{k1})] \nonumber \\ &+& (1-\beta)I(X_{i2},X_{R};X_{j2},Y_{k2}) \nonumber\\ c_k&=&\beta I(X_{11},X_{21};Y_{k1},\hat{Y}_R) + (1-\beta)I(X_{12},X_{22};X_R,Y_{k2}) \nonumber \\ d_k&=&\beta[I(X_{11},X_{21},\hat{Y}_{R};Y_{k1})+I(X_{11},X_{21};\hat{Y}_R)\nonumber \\ &-&I(Y_R;\hat{Y}_R)] + (1-\beta)I(X_{12},X_{22},X_{R};Y_{k2}), \label{set-abcd} \end{eqnarray} } $i,j,k\in \{1,2\}$ and $i\neq j$, for all input distributions \begin{equation} p(x_{11})p(x_{21})p(x_{12})p(x_{22})p(x_R)p(\hat{y}_R|y_R). \label{inputD} \end{equation} \end{theorem} \begin{proof} The details of the proof are given in the first part of the Appendix. \end{proof} \textit{Remark 1:} The major difference between the GQF scheme and the CF scheme applied in \cite{Gunduz2010} is that the relay does not perform binning after quantizing its observation of the source messages. Moreover, in GQF the two destinations perform one-step joint decoding of both messages instead of sequentially decoding the relay bin index and then the source messages.
\subsection{Achievable Rate Region Based on the Modified CF Scheme} In this subsection, as a reference, the achievable rate regions based on the modified CF scheme are shown for the HD-MARC and the HD-cMACr, respectively. The modification consists of two parts: first, the relay of the classic CF scheme is now half-duplex; second, the encoding and decoding at the sources and destinations now use a single-block two-slot structure. In this CF scheme, the relay quantizes its observation at the end of the first slot. It then applies Wyner-Ziv binning and sends the bin index in the second slot. The destination sequentially decodes the bin index $\hat{s}\in \mathcal{S}$ and the quantization index $\hat{u} \in B(\hat{s})$ with the side information, and finally decodes the source messages $\hat{w}_1\in \mathcal{W}_1$ and $\hat{w}_2\in \mathcal{W}_2$ jointly from the receptions in both slots. The achievable rate regions of the discrete memoryless HD-MARC and HD-cMACr based on the modified CF scheme are summarized in the following: \begin{theorem}\label{th-CFD-MARC} The following rate regions are achievable over the discrete memoryless half-duplex MARC based on the modified CF scheme: {\setlength\arraycolsep{0.1em} \begin{eqnarray} R_i & < & a_{1}(i) \label{eqn-CFD111-MARC} \\ R_1+R_2 & < & c_1 \label{eqn-CFD222-MARC} \end{eqnarray} } subject to \begin{equation} \beta [I(Y_R;\hat{Y}_R)-I(Y_{11};\hat{Y}_R)] <(1-\beta)I(X_R;Y_{12}) \label{eqn-CFDcon-MARC} \end{equation} where $i \in\{1,2\}$ and $a_{1}(i)$, $c_1$ are as defined in (\ref{set-abcd}), for all input distributions as in (\ref{inputD}). \end{theorem} \begin{proof} Suppose that one destination, say $D_2$, does not receive any signal from the HD-cMACr. Then the above result can be obtained following the proof of Theorem \ref{th-CFD}.
\end{proof} \begin{theorem}\label{th-CFD} The following rate regions are achievable over the discrete memoryless HD-cMACr with the modified CF scheme: {\setlength\arraycolsep{0.1em} \begin{eqnarray} R_i & < & min\{a_{1}(i),a_{2}(i)\} \label{eqn-CFD111} \\ R_1+R_2 & < & min \{c_1,c_2\} \label{eqn-CFD222} \end{eqnarray} } subject to \begin{equation} \begin{split} max \{\beta [I(Y_R;\hat{Y}_R)-I(Y_{11};\hat{Y}_R)], \beta[I(Y_R;\hat{Y}_R)-I(Y_{21};\hat{Y}_R)]\} \\ < min \{(1-\beta)I(X_R;Y_{12}), (1-\beta)I(X_R;Y_{22})\} \end{split} \label{eqn-CFDcon} \end{equation} where $i\in\{1,2\}$ and $a_{1}(i)$, $a_{2}(i)$, $c_1$, $c_2$ are as defined in (\ref{set-abcd}), for all input distributions as in (\ref{inputD}). \end{theorem} \begin{proof} The outline of the proof can be found in the second part of the Appendix. \end{proof} \textit{Remark 2:} The GQF and the modified CF schemes provide different achievable rate regions.
Note that results based on the modified CF scheme require (\ref{eqn-CFDcon-MARC}) and (\ref{eqn-CFDcon}) to hold, which means the relay-destination link must be good enough for the compression at the relay to be recovered at the destination(s). If this condition holds, the achievable rates based on the modified CF scheme are no less than those based on the GQF scheme; in other words, the GQF scheme cannot provide a larger achievable rate region. Therefore, when a higher sum rate is desired in the HD-MARC and HD-cMACr, (\ref{eqn-CFDcon-MARC}) and (\ref{eqn-CFDcon}) hold, and a simplified relay is not required, the CF based scheme is suggested. On the other hand, if a low-cost simplified relay is preferred or (\ref{eqn-CFDcon-MARC}) and (\ref{eqn-CFDcon}) do not hold, the GQF scheme is the superior choice. \subsection{Special Case of the Achievable Rate Results} In this subsection, we show that the achievable rate region of the three-node HDRC \cite{Yao2013} can be deduced from the aforementioned achievable rate region of the five-node HD-cMACr. Specifically, by taking $R_2=0$ and $X_{21}=X_{22}=Y_{21}=Y_{22}=\phi$ in the HD-cMACr, a reduced three-node HDRC consisting of $S_1$, $R$ and $D_1$ is obtained.
\subsubsection{Special case of the GQF scheme} For the GQF scheme, since $Y_{21}=Y_{22}=\phi$ and $R_2=0$, the individual rate (\ref{eqn-QFD111}) becomes {\setlength\arraycolsep{0.1em} \begin{eqnarray} R_1 & < & min \{\beta I(X_{11};Y_{11},\hat{Y}_R) + (1-\beta)I(X_{12};Y_{12}|X_R), \nonumber \\ & & \qquad \beta[I(X_{11};Y_{11})-I(Y_R;\hat{Y}_R | X_{11},Y_{11})] \nonumber \\ & & \qquad + (1-\beta)I(X_{12},X_R;Y_{12})\} \label{eqn-conQFD1} \end{eqnarray} } where (\ref{eqn-conQFD1}) follows from the Markov chain $(X_{11},Y_{11})\rightarrow Y_R\rightarrow \hat{Y}_R$, which implies $H(\hat{Y}_R|Y_R)=H(\hat{Y}_R|Y_R,X_{11},Y_{11})$. The sum rate (\ref{eqn-QFD222}) can also be rewritten as {\setlength\arraycolsep{0.1em} \begin{eqnarray} R_1 & < & min \{\beta I(X_{11};Y_{11},\hat{Y}_R) + (1-\beta)I(X_{12};Y_{12}|X_R), \nonumber \\ & & \qquad \beta[I(X_{11},\hat{Y}_R;Y_{11}) + I(X_{11};\hat{Y}_R)-I(Y_R;\hat{Y}_R)] \nonumber \\ & & \qquad + (1-\beta)I(X_{12},X_R;Y_{12})\} \label{eqn-conQFD2}. \end{eqnarray} } Observe that (\ref{eqn-conQFD2}) is the same as (\ref{eqn-conQFD1}). By changing the variable names accordingly, the bound on $R_1$ from both the individual rate and the sum rate becomes the same as in Theorem 1 of \cite{Yao2013}. Therefore, the achievable rates based on the QF scheme of \cite{Yao2013} can be treated as a special case of \textit{Theorem \ref{th-QFD}} of this work. \subsubsection{Special case of the modified CF scheme} Since in the three-node HDRC $Y_{21}=Y_{22}=\phi$, the individual rate $R_1$ from (\ref{eqn-CFD111}) can be rewritten as {\setlength\arraycolsep{0.1em} \begin{eqnarray} R_1 & < & \beta I(X_{11};Y_{11},\hat{Y}_R) + (1-\beta)I(X_{12};Y_{12}|X_R) \label{eqn-conCFD3} \end{eqnarray} } where (\ref{eqn-conCFD3}) follows from the chain rule of mutual information and $X_{21}=X_{22}=\phi$. Similarly, using $R_2=0$, the sum rate inequality (\ref{eqn-CFD222}) can be rewritten in the same form as (\ref{eqn-conCFD3}). Also note that the condition for the achievable rate region (\ref{eqn-CFDcon}) becomes \begin{equation} (1-\beta)I(X_R;Y_{12})>\beta I(Y_R;\hat{Y}_R)-\beta I(Y_{11};\hat{Y}_R). \end{equation} By changing the variable names accordingly, the achievable rate region based on the CF scheme of \cite{Yao2013} can also be considered a special case of the result of \textit{Theorem \ref{th-CFD}} based on the modified CF scheme of this work. \section{Gaussian Channels and Numerical Examples} In this section, we extend the proposed GQF scheme and the modified CF scheme to Gaussian channels. Numerical results are shown to compare the performance of the two schemes in the half-duplex Gaussian MARC. Notice that in a cMACr both destinations need to decode both messages from the sources, whereas in an interference relay channel \cite{Tian2011} each destination may not be interested in decoding the message from the interfering source.
The achievable sum rates in a cMACr are always the minimum of two terms, obtained from the achievability conditions at each destination. Therefore, for clarity and simplicity of exposition, the extension to the half-duplex Gaussian cMACr is not shown, as it exhibits similar behavior when comparing GQF and CF. Consider a Gaussian HD-MARC as shown in Fig.~\ref{fig:MARC-phases}. Following notation similar to \cite{Yao2013}, the channel transition probabilities are specified by the following relationships: {\setlength\arraycolsep{0.1em} \begin{eqnarray} y_{11}^n & = & h_{11}x_{11}^n+h_{21}x_{21}^n+z_{11}^n \nonumber \\ y_{R}^n & = & h_{1R}x_{11}^n+h_{2R}x_{21}^n+z_{R}^n \nonumber \\ y_{12}^m & = & h_{11}x_{12}^m+h_{21}x_{22}^m+h_{R1}x_R^m+z_{12}^m \nonumber \end{eqnarray} } where the $h_i$, $i\in\{11,21,1R,2R,R1\}$, are real constants representing the channel gains, and the channel noises $z_{11}^n,z_{R}^n$ and $z_{12}^m$ are generated independently and identically according to Gaussian distributions with zero mean and unit variance. They are independent of all other random variables in the model. The transmitters at the sources and the relay have power constraints on the transmitted sequences in each slot: \begin{eqnarray} \frac{1}{n}\sum_{i=1}^{n}|x_{j,i}|^2 & \leq & P_j, \quad \text{for}\:j\in\{11,21\}; \\ \frac{1}{m}\sum_{i=1}^{m}|x_{k,i}|^2 & \leq & P_k, \quad \text{for}\:k\in\{12,22,R\}, \end{eqnarray} where $|x|$ denotes the absolute value of $x$. In this work, it is assumed that all codebooks used in the Gaussian channels are generated according to zero-mean Gaussian distributions. Notice that these input distributions are not necessarily the optimal distributions that maximize the achievable rates. Nevertheless, Gaussian codebooks are used since they are the most common assumption in the literature and make the characterization of the achievable rates tractable for illustration purposes.
Let $X_i$ for $i\in\{11,21,12,22,R\}$, $Z_j$ for $j\in\{11,12,R\}$ and $Z_Q$ be generic random variables that are Gaussian with zero mean and mutually independent. The variances of $X_i$, $Z_j$ and $Z_Q$ are $P_i$, $1$ and $\sigma_Q^2$, respectively. The random variable $Y_k$ denotes the channel output, where $k\in\{11,12,R\}$, and $\hat{Y}_R$ is the estimate of $Y_R$. The following equations show the relationships between the introduced random variables: {\setlength\arraycolsep{0.1em} \begin{eqnarray} Y_{11} & = & h_{11}X_{11}+h_{21}X_{21}+Z_{11}; \\ Y_{12} & = & h_{11}X_{12}+h_{21}X_{22}+h_{R1}X_R+Z_{12}; \\ Y_{R} & = & h_{1R}X_{11}+h_{2R}X_{21}+Z_{R}; \\ \hat{Y}_R &=& Y_R+Z_Q. \label{eqn-RtoRhat} \end{eqnarray} } In the following, the achievable rate region of the Gaussian setup with the GQF scheme is characterized.
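As a quick numerical sanity check of the relationships above, the following Python sketch (our own illustration, not part of the paper's analysis; the gains, powers and $\sigma_Q^2$ value are assumptions taken from the later numerical examples) simulates the first-slot variables and confirms that the sample variances of $Y_{11}$ and $\hat{Y}_R$ match $1+h_{11}^2P_{11}+h_{21}^2P_{21}$ and $1+h_{1R}^2P_{11}+h_{2R}^2P_{21}+\sigma_Q^2$, as implied by the independence assumptions.

```python
import random
import statistics

# Assumed example parameters (from the numerical section of the paper)
h11, h21, h1R, h2R = 1.0, 1.0, 3.0, 0.5
P11, P21, sigma_Q2 = 1.0, 1.0, 2.0
N = 200_000
rng = random.Random(7)

y11, yr_hat = [], []
for _ in range(N):
    x11 = rng.gauss(0, P11 ** 0.5)       # X_11 ~ N(0, P_11)
    x21 = rng.gauss(0, P21 ** 0.5)       # X_21 ~ N(0, P_21)
    z11 = rng.gauss(0, 1)                # unit-variance channel noise at D_1
    zr = rng.gauss(0, 1)                 # unit-variance channel noise at R
    zq = rng.gauss(0, sigma_Q2 ** 0.5)   # quantization noise Z_Q
    y11.append(h11 * x11 + h21 * x21 + z11)           # Y_11
    yr_hat.append(h1R * x11 + h2R * x21 + zr + zq)    # Y_R + Z_Q = Yhat_R

var_y11 = statistics.pvariance(y11)      # should be close to 1 + 1 + 1 = 3
var_yrh = statistics.pvariance(yr_hat)   # should be close to 9 + 0.25 + 1 + 2 = 12.25
```

Since all terms are independent zero-mean Gaussians, the variances simply add, which is exactly what the signal-to-noise ratios inside the rate expressions of the next propositions rely on.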
\begin{proposition} The following rates are achievable for the Gaussian HD-MARC by using the GQF scheme: {\setlength\arraycolsep{0.1em} \begin{eqnarray} R_i &<& \underset{\sigma_Q^2,\beta}{max} \;\; min \{\frac{\beta}{2}log(1+h_{i1}^2P_{i1}+\frac{h_{iR}^2P_{i1}}{1+\sigma_Q^2}) \nonumber \\ &&+\frac{1-\beta}{2}log(1+h_{i1}^2P_{i2}), \nonumber \\ &&\frac{\beta}{2}log(\frac{(1+h_{i1}^2P_{i1})\sigma_Q^2}{1+\sigma_Q^2}) \nonumber \\ &&+\frac{1-\beta}{2}log(1+h_{i1}^2P_{i2}+h_{R1}^2P_R)\} \label{eqn-GQF-GauMARCindi}\\ R_1 &+& R_2 < \underset{\sigma_Q^2,\beta}{max} \;\; min \{\frac{\beta}{2}log(1+h_{11}^2P_{11}+h_{21}^2P_{21} \nonumber \\ &&+\frac{(h_{11}h_{2R}-h_{1R}h_{21})^2P_{11}P_{21}+h_{1R}^2P_{11}+h_{2R}^2P_{21}}{1+\sigma_Q^2}) \nonumber \\ &&+\frac{1-\beta}{2}log(1+h_{11}^2P_{12}+h_{21}^2P_{22}), \nonumber \\ &&\frac{\beta}{2}log(\frac{(1+h_{11}^2P_{11}+h_{21}^2P_{21})\sigma_Q^2}{1+\sigma_Q^2}) \nonumber \\ &&+\frac{1-\beta}{2}log(1+h_{11}^2P_{12}+h_{21}^2P_{22}+h_{R1}^2P_R)\} \label{eqn-GQF-GauMARCsum} \end{eqnarray} } where $i=1,2$ and $\sigma_Q^2$ is the variance of the relay quantization noise. \end{proposition} \emph{Remark 3}:\; Within the achievable sum rate (\ref{eqn-GQF-GauMARCsum}), the two min terms can be treated as two functions of $\sigma_Q^2$. Similarly to \cite{Yao2013}, denote the first term by $I_1(\sigma_Q^2)$ and the second term by $I_2(\sigma_Q^2)$. The effect of different values of $\sigma_Q^2$ on the achievable sum rate is shown in Fig.~\ref{fig:Sigma-q-GQF}. It can be seen that, for fixed $\beta$, $I_1(\sigma_Q^2)$ is a decreasing function while $I_2(\sigma_Q^2)$ is an increasing function, i.e., the first-order derivative of $I_1(\sigma_Q^2)$ is always negative and that of $I_2(\sigma_Q^2)$ is always positive. Setting $I_1(\sigma_Q^2)=I_2(\sigma_Q^2)$ therefore yields the value of $\sigma_Q^2$ that maximizes the sum rate.
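Remark 3 can be turned into a small numerical routine. The sketch below (our own illustration, not the paper's code; it assumes base-2 logarithms, i.e., rates in bits per channel use, and function names are ours) evaluates the two min-terms of the sum rate and locates the crossing $I_1(\sigma_Q^2)=I_2(\sigma_Q^2)$ by bisection, since $I_1$ is decreasing and $I_2$ increasing in $\sigma_Q^2$. The parameters are those of the numerical example.

```python
import math

def sum_rate_terms(sq, beta, P11, P21, P12, P22, PR,
                   h11, h21, h1R, h2R, hR1):
    """Return (I1, I2): the two min-terms of the GQF achievable sum rate
    (eqn-GQF-GauMARCsum) for quantization noise variance sq."""
    cross = (h11 * h2R - h1R * h21) ** 2 * P11 * P21
    I1 = (beta / 2) * math.log2(
            1 + h11**2 * P11 + h21**2 * P21
            + (cross + h1R**2 * P11 + h2R**2 * P21) / (1 + sq)) \
         + ((1 - beta) / 2) * math.log2(1 + h11**2 * P12 + h21**2 * P22)
    I2 = (beta / 2) * math.log2(
            (1 + h11**2 * P11 + h21**2 * P21) * sq / (1 + sq)) \
         + ((1 - beta) / 2) * math.log2(
            1 + h11**2 * P12 + h21**2 * P22 + hR1**2 * PR)
    return I1, I2

def optimal_sigma_q(params, lo=1e-6, hi=1e6, iters=200):
    """Bisection on I1(sq) - I2(sq): I1 decreases and I2 increases in sq,
    so min(I1, I2) is maximized where the two curves cross."""
    for _ in range(iters):
        mid = math.sqrt(lo * hi)   # geometric mean, since sq spans decades
        I1, I2 = sum_rate_terms(mid, **params)
        if I1 > I2:
            lo = mid               # still left of the crossing
        else:
            hi = mid
    return math.sqrt(lo * hi)

# Channel parameters of the numerical example (Fig. on sigma_Q^2)
params = dict(beta=0.5, P11=1, P21=1, P12=1, P22=1, PR=1,
              h11=1.0, h21=1.0, h1R=3.0, h2R=0.5, hR1=3.0)
sq_opt = optimal_sigma_q(params)
I1, I2 = sum_rate_terms(sq_opt, **params)
```

For these parameters the crossing can also be solved in closed form (the logs cancel into a linear equation in $\sigma_Q^2$), giving $\sigma_Q^2 = 18.5/9 \approx 2.056$, which the bisection reproduces.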
\begin{figure}[t] \centering \includegraphics[scale=0.225]{Sigma-q} \caption{Achievable rates of the GQF and CF based schemes for varying $\sigma_Q^2$, with $P_{11}=P_{12}=1$, $P_{21}=P_{22}=1$, $P_R=1$ (source powers $P_1=P_2=1.5$ when no relay is present), channel gains $h_{11}=h_{21}=1$, $h_{1R}=3$, $h_{2R}=0.5$, $h_{R1}=3$, and $\beta=0.5$} \label{fig:Sigma-q-GQF} \end{figure} Next, the achievable rate region for the Gaussian setup with the modified CF scheme will be described for the HD-MARC. As stated before, a Gaussian quantization codebook is assumed for illustration purposes and optimality is not claimed here. \begin{proposition} The following rates are achievable for the Gaussian HD-MARC by using the modified CF scheme: {\setlength\arraycolsep{0.1em} \begin{eqnarray} R_i&<&\frac{\beta}{2}\log(1+h_{i1}^2P_{i1}+\frac{h_{iR}^2P_{i1}}{1+\sigma_Q^2}) \nonumber \\ &&+\frac{1-\beta}{2}\log(1+h_{i1}^2P_{i2}), \label{eqn-CF-GauMARCindi} \\ R_1 &+& R_2 < \frac{\beta}{2}\log(1+h_{11}^2P_{11}+h_{21}^2P_{21} \nonumber \\ &+&\frac{(h_{11}h_{2R}-h_{1R}h_{21})^2P_{11}P_{21}+h_{1R}^2P_{11}+h_{2R}^2P_{21}}{1+\sigma_Q^2}) \nonumber \\ &+&\frac{1-\beta}{2}\log(1+h_{11}^2P_{12}+h_{21}^2P_{22}) \label{eqn-CF-GauMARCsum} \end{eqnarray} } where $i=1,2$ and \begin{eqnarray} \sigma_Q^2 > \frac{1+\frac{h_{1R}^2P_{11}+h_{2R}^2P_{21}+(h_{11}h_{2R}-h_{1R}h_{21})^2P_{11}P_{21}}{1+h_{11}^2P_{11}+h_{21}^2P_{21}}}{(1+\frac{h_{R1}^2P_R}{1+h_{11}^2P_{12}+h_{21}^2P_{22}})^{\frac{1-\beta}{\beta}}-1} \label{eqn-CF-GauMARC} \end{eqnarray} \end{proposition} \emph{Remark 4}:\; The constraint condition (\ref{eqn-CFDcon-MARC}) in the discrete memoryless channel guarantees that the quantized observation at the relay can be recovered at the destination. In the Gaussian setup, it translates into condition (\ref{eqn-CF-GauMARC}). It can be seen that a minimum value of $\sigma_Q^2$ is required for the modified CF scheme.
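The minimum quantization noise variance required by (\ref{eqn-CF-GauMARC}) can be evaluated directly; the sketch below uses the same illustrative parameters as in the figures:

```python
import math

# Right-hand side of the modified-CF constraint (eqn-CF-GauMARC) for the
# figure parameters (all powers 1, h11=h21=1, h1R=3, h2R=0.5, hR1=3, beta=0.5).
P11 = P12 = P21 = P22 = PR = 1.0
h11 = h21 = 1.0
h1R, h2R, hR1 = 3.0, 0.5, 3.0
beta = 0.5

cross = (h11*h2R - h1R*h21)**2 * P11 * P21
num = 1 + (h1R**2*P11 + h2R**2*P21 + cross) / (1 + h11**2*P11 + h21**2*P21)
den = (1 + hR1**2*PR / (1 + h11**2*P12 + h21**2*P22))**((1 - beta)/beta) - 1
sigma_Q2_min = num / den
print(sigma_Q2_min)   # minimum quantization noise variance for the CF scheme
```

For these values the bound evaluates to $18.5/9 \approx 2.06$.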
This is essentially due to the characteristic of $Z_Q$ in (\ref{eqn-RtoRhat}): a larger value of $\sigma_Q^2$ will cause $\hat{Y}_R$ to be a more degraded version of $Y_R$, or in other words a more coarsely compressed signal at the relay. The achievable sum rate (\ref{eqn-CF-GauMARCsum}) based on the CF scheme is the same as the first min term of (\ref{eqn-GQF-GauMARCsum}) based on the GQF scheme when the constraint on $\sigma_Q^2$ in (\ref{eqn-CF-GauMARC}) is satisfied. Fig.~\ref{fig:Sigma-q-GQF} also shows the sum rates based on the modified CF scheme in the HD-MARC for different values of $\sigma_Q^2$. Notice that if the relay uses a good quantizer, i.e., it has a good estimate $\hat{Y}_R$ of $Y_R$ such that $\sigma_Q^2$ is less than the right-hand side of (\ref{eqn-CF-GauMARC}), then a higher sum rate still cannot be achieved. This is because the channel between the relay and the destination limits the rate at which the compressed observation at the relay can be recovered at the destination; in other words, $\hat{Y}_R$ has a higher rate than the relay-destination channel can support. Thus, for smaller values of $\sigma_Q^2$, the sum rate remains the same as the one obtained when the constraint condition holds with equality. \begin{figure}[t] \centering \includegraphics[scale=0.23]{beta-factor} \caption{Achievable rates of the GQF and CF based schemes for varying $\beta$} \label{fig:beta-factor} \end{figure} As defined previously, $\beta$ is the ratio of the two slots in each block. The impact of the factor $\beta$ on the achievable rates based on the GQF scheme in the HD-MARC is shown in Fig.~\ref{fig:beta-factor}, where we assume the same powers and channel gains as in Fig.~\ref{fig:Sigma-q-GQF}. It can be seen that, under such a channel state, the length of each slot should be carefully chosen in order to maximize the achievable sum rate.
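The slot-ratio optimization can be sketched as follows: for each $\beta$ on a grid, the GQF sum rate $\min\{I_1,I_2\}$ is maximized over a grid of $\sigma_Q^2$ and compared against the no-relay baseline (source powers $P_1=P_2=1.5$, as in the figures). This is an illustrative sketch, not the paper's plotting code:

```python
import math

# Sweep the slot ratio beta; for each beta, optimize min{I1, I2} over a
# sigma_Q^2 grid.  Parameters follow the figure captions; rates in bits.
P11 = P12 = P21 = P22 = PR = 1.0
h11 = h21 = 1.0
h1R, h2R, hR1 = 3.0, 0.5, 3.0
cross = (h11*h2R - h1R*h21)**2 * P11 * P21

def sum_rate(beta, s):
    I1 = beta/2*math.log2(1 + h11**2*P11 + h21**2*P21
                          + (cross + h1R**2*P11 + h2R**2*P21)/(1 + s)) \
         + (1-beta)/2*math.log2(1 + h11**2*P12 + h21**2*P22)
    I2 = beta/2*math.log2((1 + h11**2*P11 + h21**2*P21)*s/(1 + s)) \
         + (1-beta)/2*math.log2(1 + h11**2*P12 + h21**2*P22 + hR1**2*PR)
    return min(I1, I2)

sigmas = [0.05*k for k in range(1, 400)]      # sigma_Q^2 grid
betas = [0.05*k for k in range(1, 20)]        # beta grid
best_rate, best_beta = max((max(sum_rate(b, s) for s in sigmas), b)
                           for b in betas)

# No-relay baseline with source powers P1 = P2 = 1.5 (as in the figures):
no_relay = 0.5*math.log2(1 + h11**2*1.5 + h21**2*1.5)
print(best_beta, best_rate, no_relay)
```

For these parameters the best slot ratio is an interior point (neither slot should be made vanishingly short), and the optimized GQF sum rate exceeds the no-relay baseline of $1$ bit.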
Notice that if the relay quantization noise variance $\sigma_Q^2$ is chosen to satisfy the constraint (\ref{eqn-CF-GauMARC}), then the achievable sum rates based on CF are the same as those based on GQF. As also shown in Fig.~\ref{fig:beta-factor}, both the GQF and CF schemes outperform the case where no relay is available in the channel. Comparing the achievable sum rates of (\ref{eqn-GQF-GauMARCsum}) with optimized $\sigma_Q^2$ and (\ref{eqn-CF-GauMARCsum}) for the HD-MARC under the Gaussian setup, the GQF and CF sum rates are the same. In general, without optimizing the relay quantization factor $\sigma_Q^2$, the CF scheme outperforms the GQF scheme. However, by choosing the optimized value of $\sigma_Q^2$, the GQF scheme, in which a low-cost simplified relay is used, is able to provide similar sum rate performance as the more sophisticated CF scheme. \section{Conclusion} In this paper, half-duplex relaying in the Multiple Access Relay Channel and the compound Multiple Access Channel with a relay has been studied. A variation of the QF scheme, the GQF scheme, based on single-block coding has been proposed. The GQF scheme employs joint decoding at the destinations and uses a low-cost relay that only quantizes the received signal after the first slot and forwards it in the second slot without Wyner-Ziv binning. For comparison purposes, a modified CF scheme was also introduced. The achievable rate regions were obtained based on the GQF scheme and the CF scheme for the HD-MARC and HD-cMACr, respectively. It is also shown that the achievable rate regions for the three-node HDRC can be treated as special cases of our results obtained for the five-node channel. As a further development, the achievable rate results for discrete memoryless channels were also extended to the Half-Duplex Gaussian MARC. Some numerical examples were provided for the purpose of performance comparison.
The results indicate that the proposed GQF scheme can provide performance similar to that of the CF scheme with only a simplified low-cost relay. \section*{Appendix} \subsection{Proof of Theorem \ref{th-QFD}} Assume the source messages $W_{1}$ and $W_{2}$ are independent of each other. Each message $W_{i}$, $i=1,2$, is uniformly distributed in its message set $\mathcal{W}_i = [1 : 2^{lR_i}]$. \subsubsection{Codebook Generation} Assume the joint pmf factors as \begin{equation} \begin{split} &p(x_{11})p(x_{21})p(x_{12})p(x_{22})p(x_R)p(\hat{y}_R|y_R)\\ &p(y_{11},y_{21},y_R|x_{11},x_{21})p(y_{12},y_{22}|x_{12},x_{22},x_R). \end{split} \end{equation} Fix any input distribution $$p(x_{11})p(x_{21})p(x_{12})p(x_{22})p(x_R)p(\hat{y}_R|y_R).$$ Randomly and independently generate \begin{itemize} \item $2^{lR_1}$ codewords $x_{11}^{n}(w_1)$, $w_1\in\mathcal{W}_1$, each according to $\prod _{i=1}^{n} p_{X_{11}} (x_{11,i}(w_1))$; \item $2^{lR_2}$ codewords $x_{21}^{n}(w_2)$, $w_2\in\mathcal{W}_2$, each according to $\prod _{i=1}^{n} p_{X_{21}} (x_{21,i}(w_2))$; \item $2^{lR_1}$ codewords $x_{12}^{m}(w_1)$, $w_1\in\mathcal{W}_1$, each according to $\prod _{i=1}^{m} p_{X_{12}} (x_{12,i}(w_1))$; \item $2^{lR_2}$ codewords $x_{22}^{m}(w_2)$, $w_2\in\mathcal{W}_2$, each according to $\prod _{i=1}^{m} p_{X_{22}} (x_{22,i}(w_2))$; \item $2^{lR_U}$ codewords $x_{R}^{m}(u)$, $u\in\mathcal{U}=\{1,2,\dots 2^{lR_U}\}$, each according to $\prod _{i=1}^{m} p_{X_{R}} (x_{R,i}(u))$. \end{itemize} Calculate the marginal distribution $$p(\hat{y}_R)=\sum_{x_{11}\in \mathcal{X}_{11},x_{21}\in \mathcal{X}_{21},y_{11}\in \mathcal{Y}_{11},y_{21}\in \mathcal{Y}_{21},y_{R}\in \mathcal{Y}_R} p(\hat{y}_R|y_R)$$ $$\qquad p(y_R,y_{11},y_{21}|x_{11},x_{21})p(x_{11})p(x_{21}).$$ Randomly and independently generate $2^{lR_U}$ codewords $\hat{y}_{R}^{n}(u)$, each according to $\prod _{i=1}^{n} p_{\hat{Y}_{R}} (\hat{y}_{R,i}(u))$.
\subsubsection{Encoding} To send message $w_i$, the source node $S_i$ transmits $x_{i1}^{n}(w_i)$ in the first slot and $x_{i2}^{m}(w_i)$ in the second slot, where $i=1,2$. Let $\epsilon' \in (0,1)$. After receiving $y_R^n$ at the end of the first slot, the relay tries to find a unique $u\in\mathcal{U}$ such that \begin{equation} (y_R^n,\hat{y}_R^n(u))\in \mathcal{T}_{\epsilon'}^n(Y_R,\hat{Y}_R) \end{equation} where $\mathcal{T}_{\epsilon'}^n(Y_R,\hat{Y}_R)$ is the strongly typical set as defined in \cite{Lim2011}. If there is more than one such $u$, one of them is chosen at random from $\mathcal{U}$. The relay then sends $x_R^m(u)$ in the second slot. \subsubsection{Decoding} The destinations start decoding the messages after the second slot ends. Let $\epsilon'<\epsilon<1$. Upon receiving the signals from both slots, $D_1$ and $D_2$ try to find a unique pair of messages $\hat{w}_1\in\mathcal{W}_1$ and $\hat{w}_2\in\mathcal{W}_2$ such that \begin{eqnarray} (x_{11}^n(\hat{w}_1),x_{21}^n(\hat{w}_2),y_{11}^n,\hat{y}_R^n(u)) \in \mathcal{T}_\epsilon^n(X_{11},X_{21},Y_{11},\hat{Y}_R)\\ (x_{12}^m(\hat{w}_1),x_{22}^m(\hat{w}_2),x_R^m(u),y_{12}^m) \in \mathcal{T}_\epsilon^m(X_{12},X_{22},X_R,Y_{12}) \end{eqnarray} and \begin{eqnarray} (x_{11}^n(\hat{w}_1),x_{21}^n(\hat{w}_2),y_{21}^n,\hat{y}_R^n(u)) \in \mathcal{T}_\epsilon^n(X_{11},X_{21},Y_{21},\hat{Y}_R)\\ (x_{12}^m(\hat{w}_1),x_{22}^m(\hat{w}_2),x_R^m(u),y_{22}^m) \in \mathcal{T}_\epsilon^m(X_{12},X_{22},X_R,Y_{22}) \end{eqnarray} for some $u\in\mathcal{U}$, respectively. \subsubsection{Probability of Error Analysis} Let $W_i$ denote the message sent from source node $S_i$, $i=1,2$, and let $U$ denote the index chosen by the relay $R$.
The probability of error averaged over $W_1$, $W_2$, $U$ and over all possible codebooks is defined as {\setlength\arraycolsep{0.1em} \begin{align} Pr(\mathcal\epsilon) &= Pr(\hat{W}_1\neq 1 \cup \hat{W}_2\neq 1 | W_1=1, W_2=1)\label{poe}. \end{align} } Equation (\ref{poe}) follows from the symmetry of the codebook construction and the fact that the messages $W_1$ and $W_2$ are chosen uniformly from $\mathcal{W}_1$ and $\mathcal{W}_2$; hence the overall probability of error equals the probability of error when $W_1=1$ and $W_2=1$ were selected as the message indices. Define the three events $\mathcal{E}_{0}$, $\mathcal{E}_{1,(w_1,w_2)}$ and $\mathcal{E}_{2,(w_1,w_2)}$, described in the following with $i=1,2$: {\setlength\arraycolsep{0.1em} \begin{align} \mathcal{E}_{0} &:= \{((Y_R^n,\hat{Y}_R^n(u))\notin \mathcal{T}_{\epsilon'}^n(Y_R\hat{Y}_R)), \text {for all} \: u \} \\ \mathcal{E}_{i,(w_1,w_2)} &:= \{ (X_{11}^n(w_1),X_{21}^n(w_2),Y_{i1}^n,\hat{Y}_R^n(u)) \nonumber \\ & \qquad \in \mathcal{T}_{\epsilon}^n(X_{11}X_{21}Y_{i1}\hat{Y}_R) \:\: \text{and} \nonumber \\ &\qquad (X_{12}^m(w_1),X_{22}^m(w_2),X_R^m(u),Y_{i2}^m) \nonumber \\ &\qquad \in \mathcal{T}_{\epsilon}^m(X_{12}X_{22}X_RY_{i2}) \; \text{for some}\: u \}.
\iffalse {\\ \mathcal{E}_{2,(w_1,w_2)} &:= \{ (X_{11}^n(w_1),X_{21}^n(w_2),Y_{21}^n,\hat{Y}_R^n(u)) \nonumber \\ &\qquad \in \mathcal{T}_{\epsilon}^n(X_{11}X_{21}Y_{21}\hat{Y}_R) \:\: \text{and} \nonumber \\ &\qquad (X_{12}^m(w_1),X_{22}^m(w_2),X_R^m(u),Y_{22}^m) \nonumber \\ &\qquad \in \mathcal{T}_{\epsilon}^m(X_{11}X_{21}X_{R}Y_{22}) \: \text{for some}\: u \} }\fi \end{align} } Then the probability of error can be rewritten as {\setlength\arraycolsep{0.1em} \begin{eqnarray} Pr(\mathcal\epsilon) \iffalse{ & = & Pr( (\mathcal{E}_{1,(1,1)} \cap \mathcal{E}_{2,(1,1)})^c \nonumber \\ & & \qquad \cup_{(w_1,w_2)\in\mathcal{A}} \mathcal{E}_{1,(w_1,w_2)} \nonumber \\ & & \qquad \cup_{(w_1,w_2)\in\mathcal{A}} \mathcal{E}_{2,(w_1,w_2)}|W_1=1,W_2=1 ) \nonumber \\ & \leq & Pr( (\mathcal{E}_{1,(1,1)} \cap \mathcal{E}_{2,(1,1)})^c|W_1=1,W_2=1) \nonumber \\ & & + Pr(\cup_{(w_1,w_2)\in\mathcal{A}} \mathcal{E}_{1,(w_1,w_2)}|W_1=1,W_2=1) \nonumber \\ & & \;+ Pr(\cup_{(w_1,w_2)\in\mathcal{A}} \mathcal{E}_{2,(w_1,w_2)}|W_1=1,W_2=1) \nonumber \\ & = & Pr( (\mathcal{E}_{1,(1,1)} \cap \mathcal{E}_{2,(1,1)})^c\cap\mathcal{E}_{0}|W_1=1,W_2=1) \nonumber \\ & & + Pr( (\mathcal{E}_{1,(1,1)} \cap \mathcal{E}_{2,(1,1)})^c\cap\mathcal{E}_{0}^c|W_1=1,W_2=1) \nonumber \\ & & + Pr(\cup_{(w_1,w_2)\in\mathcal{A}} \mathcal{E}_{1,(w_1,w_2)}|W_1=1,W_2=1) \nonumber \\ & & + Pr(\cup_{(w_1,w_2)\in\mathcal{A}} \mathcal{E}_{2,(w_1,w_2)}|W_1=1,W_2=1) \nonumber \\ }\fi & \leq & Pr (\mathcal{E}_{0}|W_1=1,W_2=1) \nonumber \\ & & + Pr( (\mathcal{E}_{1,(1,1)} \cap \mathcal{E}_{2,(1,1)})^c\cap\mathcal{E}_{0}^c|W_1=1,W_2=1) \nonumber \\ & & + Pr(\cup_{(w_1,w_2)\in\mathcal{A}} \mathcal{E}_{1,(w_1,w_2)}|W_1=1,W_2=1) \nonumber \\ & & + Pr(\cup_{(w_1,w_2)\in\mathcal{A}} \mathcal{E}_{2,(w_1,w_2)}|W_1=1,W_2=1), \label{ineqn-err} \end{eqnarray} } where $\mathcal{A}:=\{(w_1,w_2)\in\mathcal{W}_1\times \mathcal{W}_2:(w_1,w_2)\neq (1,1)\}$. 
Assume $\beta$ is fixed. Then, by the covering lemma \cite{Gamal2010}, $Pr(\mathcal{E}_{0}|W_1=1,W_2=1)\rightarrow 0$ when $l\rightarrow \infty$, if \begin{equation} R_U > \beta I(Y_R;\hat{Y}_R) + \delta(\epsilon') \label{eq-rfindu} \end{equation} where $\delta(\epsilon')\rightarrow 0$ as $\epsilon'\rightarrow 0$. By the conditional typicality lemma \cite{Gamal2010}, $Pr( (\mathcal{E}_{1,(1,1)} \cap \mathcal{E}_{2,(1,1)})^c\cap\mathcal{E}_{0}^c|W_1=1,W_2=1) \rightarrow 0$ as $l\rightarrow \infty$. Due to space limitations, and since the third and fourth line terms of (\ref{ineqn-err}) can be analyzed in a similar fashion, only the analysis of the third line term $Pr(\cup_{(w_1,w_2)\in\mathcal{A}} \mathcal{E}_{1,(w_1,w_2)}|W_1=1,W_2=1)$ is shown in this proof. In addition, some standard steps of the probability of error analysis are omitted here and only the important steps are kept in the following.
For fixed $\beta = \frac{n}{l}$, $1-\beta = \frac{m}{l}$, if $l\rightarrow\infty$, $\epsilon\rightarrow 0$ and the following inequalities hold: {\setlength\arraycolsep{0.1em} \begin{eqnarray} R_1 & < & \beta I(X_{11};X_{21},Y_{11},\hat{Y}_{R}) \nonumber \\ & & + (1-\beta)I(X_{12};X_{22},X_{R},Y_{12}) \\ R_1 + R_U & < & \beta[I(X_{11},\hat{Y}_{R};X_{21},Y_{11})+I(X_{11};\hat{Y}_R)] \nonumber \\ & & + (1-\beta)I(X_{12},X_{R};X_{22},Y_{12}) \\ R_2 & < & \beta I(X_{21};X_{11},Y_{11},\hat{Y}_{R}) \nonumber \\ & & + (1-\beta)I(X_{22};X_{12},X_{R},Y_{12}) \\ R_2 + R_U & < & \beta[I(X_{21},\hat{Y}_{R};X_{11},Y_{11})+I(X_{21};\hat{Y}_R)] \nonumber \\ & & + (1-\beta)I(X_{22},X_{R};X_{12},Y_{12}) \\ R_1 + R_2 & < & \beta [I(X_{11},X_{21};Y_{11},\hat{Y}_{R})+I(X_{11};X_{21})] \nonumber \\ & & + (1-\beta)I(X_{12},X_{22};X_{R},Y_{12}) \\ R_1 + R_2 + R_U & < & \beta [I(X_{11},X_{21},\hat{Y}_{R};Y_{11})+I(X_{11},X_{21};\hat{Y}_{R})] \nonumber \\ & & + (1-\beta)[I(X_{12},X_{22},X_{R};Y_{12}) \nonumber \\ & & \qquad + I(X_{12},X_{22};X_{R})], \end{eqnarray} } then $Pr(\cup_{(w_1,w_2)\in\mathcal{A}} \mathcal{E}_{1,(w_1,w_2)}|W_1=1,W_2=1)\rightarrow 0$. 
Note that since the messages and the codebooks have been independently generated, the above inequalities can be further simplified by substituting $I(X_{22};X_R)=0$, $I(X_{12};X_R)=0$, $I(X_{11};X_{21})=0$, $I(X_{12};X_{22})=0$ and $I(X_{12},X_{22};X_R)=0$. Finally, by eliminating $R_U$ according to (\ref{eq-rfindu}), the following inequalities define the achievable rates corresponding to $D_1$: {\setlength\arraycolsep{0.1em} \begin{eqnarray} R_i & < & \min \{ \beta I(X_{i1};X_{j1},Y_{11},\hat{Y}_{R}) \nonumber \\ & & \qquad + (1-\beta)I(X_{i2};X_{j2},X_{R},Y_{12}), \nonumber \\ & & \qquad\beta[I(X_{i1},\hat{Y}_{R};X_{j1},Y_{11})+I(X_{i1};\hat{Y}_R) \nonumber \\ & & \qquad -I(Y_R;\hat{Y}_R)] + (1-\beta)I(X_{i2},X_{R};X_{j2},Y_{12}) \} \nonumber \end{eqnarray} \begin{eqnarray} R_1 + R_2 & < & \min \{ \beta I(X_{11},X_{21};Y_{11},\hat{Y}_{R}) \nonumber \\ & & \qquad + (1-\beta)I(X_{12},X_{22};X_{R},Y_{12}), \nonumber \\ & & \qquad\beta [I(X_{11},X_{21},\hat{Y}_{R};Y_{11})+I(X_{11},X_{21};\hat{Y}_{R})\nonumber \\ & & \qquad -I(Y_R;\hat{Y}_R)] + (1-\beta)I(X_{12},X_{22},X_{R};Y_{12}) \} \nonumber \end{eqnarray} } where $i=1,2$ and $j\in\{1,2\}$ with $j\neq i$. Similarly, the achievable rate results for $D_2$ can be obtained. Therefore, the probability of error $P(\mathcal\epsilon) \rightarrow 0$ if all the rate inequalities corresponding to $D_1$ and $D_2$ hold. This completes the proof; the resulting inequalities are those shown in Theorem 2. \iffalse \section*{Appendix: Proof of Independence} In this appendix, we will show the independence of two conditional probabilities.
Specifically the following probability $p(\{x_{11}^n\},\{x_{21}^n\},y_{11}^n,\{\hat{y}_{R}^n\}|u,w_1,w_2)$ is independent of $p(\{x_{21}^m\},\{x_{22}^m\},\{x_{R}^m\},y_{12}^m,|u,w_1,w_2)$. Note that here the random variables inside the brace$\{\}$ denotes the corresponding codebook and other random variables which uses capital font represents the codewords we have chosen. For instance, $x_{11}^n$ means the codebook of $x_{11}$ or all the codewords that can be chosen, while $X_{11}^n$ denotes a specific sequence of codeword that has been selected to transmit according to the message $W_1$. The following markov chain exists in our model: {\setlength\arraycolsep{0.1em} \begin{eqnarray} &(W_1,W_2,\{x_{11}^n\},\{x_{21}^n\})\rightarrow(X_{11}^n,X_{21}^n) \nonumber \\ &\rightarrow(Y_{11}^n,Y_R^n,\{\hat{Y}_R^n\})\rightarrow(U,\{X_R^m\}) \nonumber \\ &\rightarrow(X_R^m,W_1,W_2,\{x_{12}^m\},\{x_{22}^m\},X_{12}^m,X_{22}^m)\rightarrow Y_{12}^m \nonumber \end{eqnarray} } Then the following probability can be written as: {\setlength\arraycolsep{0.1em} \begin{eqnarray} && p(\{x_{11}^n\},\{x_{21}^n\},y_{11}^n,\{\hat{y}_R^n\},\{x_{12}^m\},\{x_{22}^m\},\{x_R^m\},y_{12}^m|u,w_1,w_2) \nonumber \\ =&& p(\{x_{12}^m\},\{x_{22}^m\},\{x_R^m\},y_{12}^m|u,w_1,w_2,\{x_{11}^n\},\{x_{21}^n\},y_{11}^n,\{\hat{y}_R^n\}) \nonumber \\ &&\quad p(\{x_{11}^n\},\{x_{21}^n\},y_{11}^n,\{\hat{y}_R^n\}|u,w_1,w_2) \nonumber \\ =&&p(\{x_{12}^m\},\{x_{22}^m\},y_{12}^m | u,\{x_R^m\},w_1,w_2,\{x_{11}^n\},\{x_{21}^n\},y_{11}^n,\{\hat{y}_R^n\}) \nonumber \\ &&\quad p(\{x_R^m\} | u,w_1,w_2,\{x_{11}^n\},\{x_{21}^n\},y_{11}^n,\{\hat{y}_R^n\} ) \nonumber \\ &&\quad p(\{x_{11}^n\},\{x_{21}^n\},y_{11}^n,\{\hat{y}_R^n\}|u,w_1,w_2) \nonumber \\ =&&p(\{x_{12}^m\},\{x_{22}^m\},y_{12}^m | u,\{x_R^m\},w_1,w_2)p(\{x_R^m\} | u,w_1,w_2) \nonumber \\ &&\quad p(\{x_{11}^n\},\{x_{21}^n\},y_{11}^n,\{\hat{y}_R^n\}|u,w_1,w_2) \nonumber \\ =&&p(\{x_{12}^m\},\{x_{22}^m\},y_{12}^m,\{x_R^m\} | u,w_1,w_2) \nonumber \\ && \quad 
p(\{x_{11}^n\},\{x_{21}^n\},y_{11}^n,\{\hat{y}_R^n\}|u,w_1,w_2) \nonumber \end{eqnarray} } Therefore the conditional independence holds for the above mentioned two probabilities. \fi \subsection{Outline of Proof for the CF based Achievable Rate Region} Due to space limitations and the similarity between the proofs of the CF and GQF based achievable rate regions, the detailed proof is omitted in this subsection. Note that there are two major differences between the CF scheme and the GQF scheme. First, after $R$ quantizes the received signal from the first slot with a rate $R_U$, it applies Wyner-Ziv binning to further partition the set $\mathcal{U}$ into $2^{lR_S}$ equal-size bins and sends the bin index $s$ with $X_{R}(s)$ in the second slot. Second, each destination decodes the bin index $\hat{s}$, the quantization index $u$ and the message pair $(\hat{w}_1,\hat{w}_2)$ sequentially. In the last decoding step of the CF scheme, the decoder jointly decodes both messages from the signals received in both slots. \bibliographystyle{IEEEtran}
\section{Introduction} \label{sec:intro} The authors of~\cite{C} allege to have shown that the conclusions of~\cite{HARDY} regarding the inconsistency of Time Asymmetric Quantum Theory (TAQT) with quantum mechanics are false. In this reply, we will show that the arguments of~\cite{C} are missing essential aspects of~\cite{HARDY}, and that therefore the conclusions of~\cite{HARDY} still stand. The most important claims of~\cite{C} are the following: \begin{enumerate} \item[{\bf 1}.] There are many examples of TAQT, and the present author has inadvertently constructed another one. \item[{\bf 2}.] The flaws of the Quantum Arrow of Time (QAT) pointed out in~\cite{HARDY} are actually not flaws, because the original derivation of the QAT was misquoted from its source~\cite{JMP95}. \item[{\bf 3}.] The crucial argument of~\cite{HARDY} regarding the exponential blowup of the test functions $\widehat{\varphi}^{\pm}(z)$ does not prevent $\widehat{\varphi}^{\pm}(z)$ from being of Hardy class. \end{enumerate} As we shall see, all these claims do not stand close scrutiny. In order to show why, in Sec.~\ref{sec:stamet} we will outline the method to construct rigged Hilbert spaces in quantum mechanics based on the theory of distributions~\cite{GELFAND}. We shall refer to this method as the ``standard method'' and show that the resulting rigged Hilbert spaces are not of Hardy class. We shall also explain the meaning of the exponential blowup of $\widehat{\varphi}^{\pm}(z)$ and why it implies that the spaces of test functions are not of Hardy class. In Sec.~\ref{sec:TAQTvsSQM}, we briefly outline the method to introduce rigged Hilbert spaces of Hardy class in TAQT and compare such method with the ``standard method.'' It will then be apparent that using the method of TAQT, one can introduce any arbitrary rigged Hilbert space for the Gamow states. 
In order to address claim~{\bf 2}, we show (again) in Sec.~\ref{seec:QAT} that no matter how one introduces it, the Quantum Arrow of Time has little to do with the actual time evolution of a quantum system. To address claim~{\bf 3}, in Sec.~\ref{sec:clasins} we use classic results of Paley and Wiener and of Gelfand and Shilov to show that the ``standard method'' of dealing with the Lippmann-Schwinger equation leads to rigged Hilbert spaces that are {\it not} of Hardy class. Section~\ref{sec:con} concludes that the arguments of~\cite{HARDY} still stand. \section{The ``standard method''} \label{sec:stamet} In this section, we illustrate the main features of the ``standard method'' to construct rigged Hilbert spaces in quantum mechanics~\cite{RELBO}. Such ``standard method'' is based on the theory of distributions~\cite{GELFAND}. For the sake of clarity, we shall use the spherical shell potential of height $V_0$, \begin{equation} V(\vec{x})=V(r)=\left\{ \begin{array}{ll} 0 &0<r<a \\ V_0 &a<r<b \\ 0 &b<r<\infty \, . \end{array} \right. \label{potential} \end{equation} For $l=0$, the Hamiltonian acts as (we take $\hbar ^2/2m=1$) \begin{equation} H = -\frac{\rmd ^2}{\rmd r^2} + V(r) \, . \label{doh} \end{equation} The regular solution is \begin{equation} \chi (r;E)=\left\{ \begin{array}{lll} \sin (\sqrt{E \,} r) \quad &0<r<a \\ {\cal J}_1(E)\rme ^{\rmi \sqrt{E-V_0 \,} r} +{\cal J}_2(E)\rme ^{-\rmi \sqrt{E-V_0 \,} r} \quad &a<r<b \\ {\cal J}_3(E) \rme ^{\rmi \sqrt{E \,} r} +{\cal J}_4(E)\rme ^{-\rmi \sqrt{E \,} r} \quad &b<r<\infty \, . \end{array} \right. \label{chi} \end{equation} The Jost functions and the $S$ matrix are given by \begin{equation} {\cal J}_+(E)=-2\rmi {\cal J}_4(E) \, , \quad {\cal J}_-(E)=2\rmi {\cal J}_3(E) \, , \label{josfuc1} \end{equation} \begin{equation} S(E) =\frac{{\cal J}_-(E)}{{\cal J}_+(E)} \, . 
\label{smatrix1} \end{equation} The solutions of the Lippmann-Schwinger equation can be written as \begin{equation} \langle r|E^{\pm}\rangle \equiv \chi ^{\pm}(r;E)= \sqrt{\frac{1}{\pi} \frac{1}{\sqrt{E \,}\,}\,} \, \frac{\chi (r;E)}{{\cal J}_{\pm}(E)} \, . \label{LSdnesb} \end{equation} When $V$ tends to zero, these eigensolutions tend to the ``free'' eigensolution: \begin{equation} \langle r|E\rangle \equiv \chi _0(r;E)= \sqrt{\frac{1}{\pi} \frac{1}{\sqrt{E \,}\,}\,} \, \sin (\sqrt{E\,}r) \, . \label{LSdnesb0} \end{equation} These eigenfunctions are delta-normalized and therefore their associated unitary operators, \begin{equation} (U_{\pm}f)(E)=\int_0^{\infty}\rmd r \, \overline{\chi ^{\pm}(r;E)}\, f(r) \equiv \widehat{f}_{\pm}(E) \, , \quad E\geq 0 \, , \label{Upm} \end{equation} \begin{equation} (U_0f)(E)=\int_0^{\infty}\rmd r \, \overline{\chi _0(r;E)}\, f(r) \equiv \widehat{f}_0(E) \, , \quad E\geq 0 \, , \label{Upm0} \end{equation} transform from $L^2([0,\infty),\rmd r)$ onto $L^2([0,\infty),\rmd E)$. The Lippmann-Schwinger and the ``free'' eigenfunctions can be analytically continued from the scattering spectrum into the whole complex plane. We shall denote such analytically continued eigenfunctions by $\chi ^{\pm}(r;z)$ and $\chi _0(r;z)$. Whenever they exist, the analytic continuations of~(\ref{Upm}) and (\ref{Upm0}) are denoted by \begin{equation} \widehat{f}_{\pm}(z)=\int_0^{\infty}\rmd r \, \overline{\chi ^{\pm}(r;\overline{z})}\, f(r) \, , \label{Upmac} \end{equation} \begin{equation} \widehat{f}_{0}(z)=\int_0^{\infty}\rmd r \, \overline{\chi _0(r;\overline{z})}\, f(r) \, , \label{Upm0ac} \end{equation} where here and in the following $z$ belongs to a two-sheeted Riemann surface. 
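The regular solution and the coefficients ${\cal J}_3$, ${\cal J}_4$ above can be recovered numerically by integrating the radial equation outward. The following sketch uses assumed illustrative values ($a=1$, $b=2$, $V_0=5$, $E=3$, not taken from the text), extracts $J_3$, $J_4$ in the outer region, and checks that $|S(E)|=1$ on the positive real axis:

```python
import cmath
import math

# Integrate the l = 0 radial equation -chi'' + V(r) chi = E chi outward and
# read off chi(r) = J3 e^{i sqrt(E) r} + J4 e^{-i sqrt(E) r} for r > b.
# The values a = 1, b = 2, V0 = 5, E = 3 are illustrative assumptions.
a, b, V0, E = 1.0, 2.0, 5.0, 3.0
k = math.sqrt(E)

def V(r):
    return V0 if a < r < b else 0.0

def rk4_step(r, chi, dchi, h):
    # One Runge-Kutta step for chi'' = (V(r) - E) chi, state y = (chi, chi').
    def f(r, y):
        return (y[1], (V(r) - E) * y[0])
    y = (chi, dchi)
    k1 = f(r, y)
    k2 = f(r + h/2, (y[0] + h/2*k1[0], y[1] + h/2*k1[1]))
    k3 = f(r + h/2, (y[0] + h/2*k2[0], y[1] + h/2*k2[1]))
    k4 = f(r + h,   (y[0] + h*k3[0],  y[1] + h*k3[1]))
    return (y[0] + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            y[1] + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

# chi(r) = sin(k r) exactly on [0, a]; start the integration at r = a.
h = 1e-3
chi, dchi = math.sin(k*a), k*math.cos(k*a)
traj = {}
n_steps = int(round((4.0 - a) / h))
for i in range(1, n_steps + 1):
    chi, dchi = rk4_step(a + (i - 1)*h, chi, dchi, h)
    if i % 1000 == 0:
        traj[round(a + i*h)] = (chi, dchi)   # store chi, chi' at r = 2, 3, 4

def jost34(r, chi, dchi):
    # Invert chi = J3 e^{ikr} + J4 e^{-ikr}, chi' = ik (J3 e^{ikr} - J4 e^{-ikr}).
    J3 = cmath.exp(-1j*k*r) * (chi + dchi/(1j*k)) / 2
    J4 = cmath.exp( 1j*k*r) * (chi - dchi/(1j*k)) / 2
    return J3, J4

J3a, J4a = jost34(3.0, *traj[3])
J3b, J4b = jost34(4.0, *traj[4])
S = -J3a / J4a    # S = J_-/J_+ = -J3/J4 from (josfuc1)
print(abs(S))     # unitarity: |S(E)| = 1 for real positive E
```

For real $E$ the regular solution is real, so $J_4=\overline{J_3}$ and $|S(E)|=1$ follows; agreement of the extraction at two radii confirms that the solution is a pure superposition of $\rme^{\pm\rmi\sqrt{E}r}$ for $r>b$.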
The resonant energies are given by the poles $z_n$ of the $S$ matrix, and their associated Gamow states are \begin{equation} u(r;z_n) = N_n\left\{ \begin{array}{ll} \frac{1}{{\mathcal J}_3(z_n)}\sin(\sqrt{z_n\,}r) &0<r<a \\ [1ex] \frac{{\mathcal J}_1(z_n)}{{\mathcal J}_3(z_n)} \rme ^{\rmi \sqrt{z_n-V_0\,}r} +\frac{{\mathcal J}_2(z_n)}{{\mathcal J}_3(z_n)} \rme ^{-\rmi \sqrt{z_n-V_0\,}r} &a<r<b \\ [1ex] \rme ^{\rmi \sqrt{z_n\,}r} &b<r<\infty \, , \end{array} \right. \label{dgv0p} \end{equation} where $N_n$ is a normalization factor. The theory of distributions~\cite{GELFAND} says that a test function $\varphi (r)$ on which a distribution $d(r)$ acts is such that the following integral is finite:\footnote{In quantum mechanics, we need to impose a few more requirements, but we will not need to go into such details here.} \begin{equation} \langle \varphi|d\rangle \equiv \int_0^{\infty}\rmd r \, \overline{\varphi (r)} d(r) <\infty \, , \label{basic} \end{equation} where $\langle \varphi |d\rangle$ represents the action of the functional $|d\rangle$ on the test function $\varphi$. With some variations, this is the ``standard method'' followed by~\cite{SUDARSHAN,BOLLINI,FP02,JPA02,IJTP03,JPA04,EJP05,LS1,LS2} to introduce spaces of test functions in quantum mechanics. Thus, contrary to what the authors of~\cite{C} assert, the method followed by the present author runs (somewhat) parallel to~\cite{BOLLINI}, not to~TAQT. In order to use~(\ref{basic}) to construct the rigged Hilbert spaces for the analytically continued Lippmann-Schwinger eigenfunctions and for the Gamow states, we need to obtain the growth of $\chi ^{\pm}(r;z)$, $\chi _0(r;z)$ and $u(r;z_n)$. 
Because the regular solution blows up exponentially~\cite{TAYLOR}, \begin{equation} \left| \chi (r; z)\right| \leq C \, \frac{\left|z\right|^{1/2}r}{1+\left|z\right|^{1/2}r} \, \rme ^{|{\rm Im}\sqrt{z\,}|r} \, , \label{boundrs} \end{equation} the growth of the eigenfunctions~(\ref{LSdnesb}), (\ref{LSdnesb0}) and (\ref{dgv0p}) blows up exponentially: \begin{equation} |\chi ^{\pm}(r;z)| \leq C \, \frac{1}{{\cal J}_{\pm}(z)} \, \frac{\left|z\right|^{1/4}r}{1+\left|z\right|^{1/2}r} \, \rme ^{|{\rm Im}\sqrt{z\,}|r} \, , \label{boundpms} \end{equation} \begin{equation} |\chi _0(r;z)| \leq C \, \frac{\left|z\right|^{1/4}r}{1+\left|z\right|^{1/2}r} \, \rme ^{|{\rm Im}\sqrt{z\,}|r} \, , \end{equation} \begin{equation} |u(r;z_n)| \leq C_n \, \frac{\left|z _n\right|^{1/2}r}{1+\left|z_n \right|^{1/2}r} \rme ^{|{\rm Im}\sqrt{z_n\,}|r} \, . \end{equation} When we plug this exponential blowup into the basic requirement~(\ref{basic}) of the ``standard method,'' we see that the test functions on which those distributions act must fall off at least exponentially. By using the Gelfand-Shilov theory of $M$ and $\Omega$ functions~\cite{GELFAND}, it was shown in~\cite{LS2} that when $a$ and $b$ are positive real numbers satisfying \begin{equation} \frac{1}{a}+\frac{1}{b} = 1 \, , \label{ab} \end{equation} and when $\varphi ^+(r)$ is an infinitely differentiable function whose tails fall off like $\rme ^{-r^a/a}$, then $\widehat{\varphi}^+(z)$ grows like $\rme ^{|{\rm Im}(\sqrt{z})|^b/b}$ in the infinite arc of the lower half-plane of the Riemann surface: \begin{equation} \hskip-1cm {\rm If} \ |\varphi ^+(r)| < C \rme ^{-\frac{\, r^a}{a}} \ {\rm as} \ r \to \infty , \ {\rm then} \ |\widehat{\varphi}^+(z)| \leq C \rme ^{\frac{\, |{\rm Im}(\sqrt{z})|^b}{b}} \ {\rm as} \ |z| \to \infty \,.
\label{blowupfex} \end{equation} It was shown in~\cite{HARDY} that when $\varphi ^+(r) \in C_0^{\infty}$, $\widehat{\varphi}^+(z)$ blows up exponentially in the infinite arc of the lower half-plane of the Riemann surface: \begin{equation} {\rm If} \ |\varphi ^+(r)| = 0 \ {\rm when} \ r>A , \ {\rm then} \ |\widehat{\varphi}^+(z)| \leq C \rme ^{A|{\rm Im}\sqrt{z\,}|} \ {\rm as} \ |z| \to \infty \, . \label{grofainf} \end{equation} From the above estimates, we concluded in~\cite{HARDY} that the $\varphi ^+$'s obtained from the ``standard method'' cannot be Hardy functions, since $\widehat{\varphi}^+(z)$ does not tend to zero as $|z|$ tends to infinity. The authors of~\cite{C} argue that one cannot draw any conclusion on the limit $|z|\to \infty$ from estimates such as~(\ref{blowupfex}) or (\ref{grofainf}), and therefore they conclude that nothing prevents $\widehat{\varphi}^+(z)$ from tending to zero and therefore from being Hardy functions. Their conclusion is not true, because their argument does not take the nature of~(\ref{blowupfex}) and (\ref{grofainf}) into account. After we explain the meaning of those estimates, it will be clear why they prevent $\widehat{\varphi}^{\pm}(z)$ from tending to zero in any infinite arc of the Riemann surface. In order to understand what~(\ref{blowupfex}) and (\ref{grofainf}) mean, we start with the simple sine function $\sin (\sqrt{E\, }r)$. When $E\geq 0$, the sine function oscillates between $+1$ and $-1$: \begin{equation} |\sin (\sqrt{E\, }r)| \leq 1 \, , \quad E\geq 0 \, . \end{equation} As $E$ tends to infinity, such oscillatory behavior remains, and in such limit the sine function does not tend to zero. When we analytically continue the sine function, \begin{equation} \sin (\sqrt{z\, }r) \, , \label{sineac} \end{equation} the oscillations are bounded by \begin{equation} |\sin (\sqrt{z\, }r)| \leq C \, \frac{\left|z\right|^{1/2}r}{1+\left|z\right|^{1/2}r} \, \rme ^{|{\rm Im}\sqrt{z\,}|r} \, . 
\label{sineabv} \end{equation} Thus, as $|z|$ tends to infinity, $\sin (\sqrt{z\,}r)$ oscillates wildly, and the magnitude of its oscillation is tightly bounded by the exponential function. It is certain that as $|z|$ tends to infinity, $\sin (\sqrt{z\,}r)$ does not tend to zero, even though the function vanishes when $\sqrt{z\, }r = \pm n \pi$, $n=0,1,\ldots$ It just happens that the solutions of the Lippmann-Schwinger equation follow the same pattern. When $E$ is positive, the eigensolutions are oscillatory and bounded by \begin{equation} |\chi ^{\pm}(r;E)| \leq C \, \frac{1}{{\cal J}_{\pm}(E)} \, \frac{\left|E\right|^{1/4}r}{1+\left|E\right|^{1/2}r} \, . \end{equation} When the energy is complex, their oscillations get wild and are bounded by Eq.~(\ref{boundpms}).\footnote{The points at which ${\cal J}_{\pm}(z)=0$ do not affect the essence of the argument.} Thus, the analytic continuations of the Lippmann-Schwinger eigenfunctions oscillate wildly, and the magnitude of their oscillation is tightly bounded by an exponential function (multiplied by factors that do not cancel the exponential blowup when $|z|\to \infty$). Because in Eqs.~(\ref{Upmac}) and (\ref{Upm0ac}) we are integrating over $r$, the exponentially-bounded oscillations of $\chi ^{\pm}(r;z)$ get transmitted into $\widehat{\varphi}^{\pm}(z)$. The estimates~(\ref{blowupfex}) and (\ref{grofainf}) bound the oscillation of the test functions of the ``standard method,'' except for factors that do not cancel the exponential blowup. It is the exponentially-bounded oscillations of $\widehat{\varphi}^{\pm}(z)$ that prevent $\widehat{\varphi}^{\pm}(z)$ from tending to zero in any infinite arc of the Riemann surface and therefore from being of Hardy class. A somewhat simpler way to understand the above estimates is by looking at the ``free'' incoming and outgoing wave functions ${\varphi}^{\rm in}$ and $\varphi ^{\rm out}$.
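Both the saturation of the exponential bound by $\sin(\sqrt{z\,}r)$ and the Gelfand-Shilov-type growth~(\ref{blowupfex}) can be checked numerically. The sketch below is an illustration under assumed choices (the ray $\arg z=-\pi/2$ with $r=1$, and the test function $\varphi(r)=\rme^{-r^2/2}$, i.e. $a=b=2$); it is not code from the paper:

```python
import cmath
import math

# (1) Along the ray arg(z) = -pi/2 with r = 1, |sin(sqrt(z) r)| saturates
#     the exponential bound: |sin(sqrt(z))| ~ exp(|Im sqrt(z)|)/2, so the
#     sine cannot tend to zero as |z| -> infinity off the positive axis.
r = 1.0
for R in (10.0, 100.0, 1000.0):
    w = cmath.sqrt(R * cmath.exp(-1j * math.pi / 2))   # sqrt(z), principal branch
    val = abs(cmath.sin(w * r))
    bound = math.exp(abs(w.imag) * r)
    print(R, val, val / (bound / 2))                   # ratio tends to 1

# (2) Growth estimate with a = b = 2: for phi(r) = exp(-r^2/2), the kernel
#     integral F(c) = int_0^inf |sin(sqrt(z) r)| phi(r) dr along sqrt(z) = i c
#     grows like exp(c^2/2), since |sin(i c r)| = sinh(c r).
def F(c, R_max=40.0, n=40000):
    h = R_max / n
    total = 0.0
    for i in range(n + 1):
        x = i * h
        wgt = 0.5 if i in (0, n) else 1.0              # trapezoidal weights
        total += wgt * math.sinh(c * x) * math.exp(-x * x / 2)
    return h * total

for c in (2.0, 4.0, 6.0):
    print(c, math.log(F(c)) / (c * c / 2))             # tends to 1 from above
```

The first loop shows that the magnitude of the analytically continued sine tracks $\rme^{|{\rm Im}\sqrt{z\,}|r}/2$ ever more closely as $|z|$ grows; the second shows the transform-like integral of a Gaussian test function growing as $\rme^{|{\rm Im}\sqrt{z}|^2/2}$, consistent with $b=2$ in~(\ref{blowupfex}).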
Because in the energy representation such wave functions are the same as the ``in'' and ``out'' wave functions, \begin{equation} \widehat{\varphi}^{\rm in}(E) = \langle E|{\varphi}^{\rm in}\rangle =\langle ^+E|{\varphi}^{+}\rangle = \widehat{\varphi}^{+}(E)\, , \label{asstrco1} \end{equation} \begin{equation} \widehat{\varphi}^{\rm out}(E) = \langle E|{\varphi}^{\rm out}\rangle =\langle ^-E|{\varphi}^{-}\rangle =\widehat{\varphi}^{-}(E) \, , \label{asstrco2} \end{equation} in TAQT the analytic continuations of $\widehat{\varphi}^{\rm in}(E)$ and $\widehat{\varphi}^{\rm out}(E)$ are also of Hardy class. Since \begin{equation} \widehat{\varphi}^{\rm in, out}(z) =\int_0^{\infty}\rmd r \, \frac{1}{\sqrt{\pi}} \frac{1}{z^{1/4}} \, \sin (\sqrt{z}\,r) \,\varphi ^{\rm in, out}(r) \, , \label{Upm0in} \end{equation} it is evident that the exponential blowup~(\ref{sineabv}) of $\sin (\sqrt{z\,}r)$ will prevent $\widehat{\varphi}^{\rm in, out}(z)$ from tending to zero as $|z| \to \infty$ in any half-plane of the Riemann surface. Thus, $\widehat{\varphi}^{\rm in, out}(z)$ are not of Hardy class, contrary to what TAQT assumes. Strictly speaking, the bounds~(\ref{blowupfex}) and (\ref{grofainf}) are not the tightest ones. We should include polynomial corrections, see Eq.~(B.15) in~\cite{LS2}, and the effect of $\frac{\left|z\right|^{1/4}r}{1+\left|z\right|^{1/2}r}$ and $\frac{1}{{\cal J}_{\pm}(z)}$ to obtain the tightest bounds. We shall not obtain those corrections here, because they do not cancel the exponential blowup at infinity, and because in this reply we shall use instead other classic bounds, see Sec.~\ref{sec:clasins}. Let us summarize this section. In standard quantum mechanics, once the Lippmann-Schwinger equation is solved, the properties of $\widehat{\varphi}^{\pm}(z)$ are already determined by Eqs.~(\ref{Upmac}) and (\ref{Upm0ac}), and there is no room for any extra assumption on their properties. This means, in particular, that the Hardy axiom cannot be simply assumed. 
Rather, the Hardy axiom must be proved using Eqs.~(\ref{Upmac}) and (\ref{Upm0ac}).\footnote{This is what was meant in~\cite{HARDY} by the assertion that the Hardy axiom is not a matter of assumption but a matter of proof.} It simply happens that the ``standard method'' yields $\widehat{\varphi}^{\pm}(z)$ and $\widehat{\varphi}^{\rm in,out}(z)$ that oscillate wildly. Because these oscillations are bounded by exponential functions, $\widehat{\varphi}^{\pm}(z)$ and $\widehat{\varphi}^{\rm in,out}(z)$ do not tend to zero as $|z|$ tends to infinity in any half-plane of the Riemann surface---hence they are not of Hardy class. \section{TAQT vs.~the ``standard method''} \label{sec:TAQTvsSQM} In TAQT, one doesn't solve the Lippmann-Schwinger equation in order to afterward obtain the properties of $\widehat{\varphi}^{\pm}(z)$ using Eq.~(\ref{Upmac}). Instead, one transforms into the energy representation (using $U_{\pm}$ in our example) and then imposes the Hardy axiom. If ${\cal H}_{\pm}^2$ denote the spaces of Hardy functions from above ($+$) and below ($-$), $\cal S$ denotes the Schwartz space, and $\tilde{\Phi}_{\pm}$ denote their intersections restricted to the positive real line, \begin{equation} \tilde{\Phi}_{\pm} = {\cal H}_{\pm}^2\cap {\cal S}|_{{\mathbb R}^+} \, , \end{equation} then the Hardy axiom states that the functions $\widehat{\varphi}^{\pm}(z)$ belong to $\tilde{\Phi}_{\mp}$: \begin{equation} \widehat{\varphi}^{\pm}(z) \in \tilde{\Phi}_{\mp} \, . \label{sssusm} \end{equation} This means that in the position representation, the Gamow states and the analytic continuation of the Lippmann-Schwinger eigenfunctions act on the following spaces: \begin{equation} \Phi _{{\rm BG}\mp}= U_{\pm}^{-1} \tilde{\Phi}_{\mp} \, . \label{BGchoice} \end{equation} It is obvious that the choices~(\ref{sssusm})-(\ref{BGchoice}) are arbitrary. 
One may as well choose another dense subset of $L^2([0,\infty ),\rmd E)$ with different properties and obtain a different space of test functions for the Gamow states. What is more, $\Phi _{{\rm BG}\pm}$ are different from the spaces of test functions obtained through the ``standard method,'' because the functions $\widehat{\varphi}^{\pm}(z)$ of the ``standard method'' are not of Hardy class. The authors of~\cite{C} claim that the present author has inadvertently constructed an example of TAQT. That such is not the case can be seen not only from the differences between the ``standard method'' and the method used in TAQT to introduce rigged Hilbert spaces, but also from the outcomes. For example, whereas in the position representation the ``standard method'' calls for just {\it one} rigged Hilbert space for the Gamow states and for the analytically continued Lippmann-Schwinger eigenfunctions~\cite{LS2}, TAQT uses {\it two} rigged Hilbert spaces \begin{equation} \Phi _{{\rm BG}\pm} \subset L^2([0,\infty ),\rmd r) \subset \Phi _{{\rm BG}\pm}^{\times} \,. \label{BGchoicetwo} \end{equation} One of the rigged Hilbert spaces is used for the ``in'' solutions and for the anti-resonant states, whereas the other one is used for the ``out'' solutions and for the resonant states. Another difference is that in TAQT, the solutions of the Lippmann-Schwinger equation for scattering energies have a time-asymmetric evolution~\cite{BLUNDER}, whereas the ``standard method'' yields a time evolution that runs from $t=-\infty$ to $t=+\infty$, see~\cite{LS1}. Incidentally, this is an instance where TAQT differs not only mathematically but also physically from standard quantum mechanics, because in standard scattering theory, the time evolution of a scattering process goes from the asymptotically remote past ($t \to -\infty$) to the asymptotically far future ($t \to +\infty$). This is not so in TAQT~\cite{BLUNDER}. 
It seems hardly necessary to clarify what the present author means by ``standard quantum mechanics.'' Standard quantum mechanics means the Schr\"odinger equation, and standard scattering theory means the Lippmann-Schwinger equation. In standard quantum mechanics, one assumes that these equations describe the physics and then solves them. Because of the scattering and resonant spectra, their solutions lie within rigged Hilbert spaces. The construction of such rigged Hilbert spaces follows by application of the ``standard method.'' By contrast, TAQT simply assumes that the solutions of the Schr\"odinger and the Lippmann-Schwinger equations comply with the Hardy axiom, without ever showing that the actual solutions of those equations comply with such axiom. It was claimed in~\cite{HARDY} that there is no example of TAQT. The authors of~\cite{C} dispute such claim and assert that there are many examples. The present author disagrees with their assertion, because {\it assuming} that for a large class of potentials the solutions of the Lippmann-Schwinger equation comply with the Hardy axiom is not the same as having an example where it is shown that the {\it actual} solutions of the Lippmann-Schwinger equation comply with the Hardy axiom. In fact, to the best of the present author's knowledge, no advocate of TAQT has ever used Eq.~(\ref{Upmac}) to discuss the analytic properties of $\widehat{\varphi}^{\pm}(E) = \langle ^{\pm}E|\varphi ^{\pm}\rangle$ in terms of the actual solutions $\chi^{\pm}(r;E)$ of the Lippmann-Schwinger equation. The authors of~\cite{C} inadvertently acknowledge that there is no example of TAQT when they say that they still need ``{\it to identify the form and properties}'' of the functions of~(\ref{BGchoice}), see the last paragraph in section~2 of~\cite{C}. 
By saying so, they are acknowledging that they don't know whether the standard Gamow states defined in the position representation are well defined as functionals acting on $\Phi _{{\rm BG}\pm}$. If TAQT had an example, it would be known. \section{The Quantum Arrow of Time (QAT)} \label{seec:QAT} Advocates of TAQT argue that their choice~(\ref{BGchoice}) is not arbitrary but rather is rooted in a causality principle. Such a causality principle is the ``preparation-registration arrow of time,'' sometimes referred to as the ``Quantum Arrow of Time'' (QAT). For the ``in'' states $\varphi ^+$, the causal statement of the QAT is written as \begin{equation} \widetilde{\varphi}^+(t) \equiv \int_{-\infty}^{+\infty}\rmd E \, \rme ^{-\rmi Et} \widehat{\varphi}^{+}(E) =0 \, , \quad \mbox{for} \ t>0 \, . \label{ferFtvph+} \end{equation} By one of the Paley-Wiener theorems, Eq.~(\ref{ferFtvph+}) is equivalent to assuming that $\widehat{\varphi}^+(E)$ is of Hardy class from below. The corresponding causal statement for the ``out'' wave functions $\varphi ^-$ implies that $\widehat{\varphi}^-(E)$ is of Hardy class from above. Hence, in TAQT, the choice~(\ref{BGchoice}) is not arbitrary but a consequence of causality. It was pointed out in~\cite{HARDY} that the QAT is flawed. The argument was twofold. First, it was pointed out that the original derivation~\cite{JMP95} of Eq.~(\ref{ferFtvph+}) made use of the following flawed assumption: \begin{equation} 0=\langle E|\varphi ^{\rm in}(t)\rangle = \langle ^+E|\varphi ^{+}(t)\rangle = \rme ^{-\rmi Et} \widehat{\varphi}^{+}(E) \, , \quad {\rm for \ all \ energies,} \label{flawassum1} \end{equation} which can happen only when $\varphi ^+$ and $\varphi ^{\rm in}$ are identically 0. 
It was then pointed out that even though one may simply assume the causal statement~(\ref{ferFtvph+}) and forget about how it was derived, such causal statement says little about the actual time evolution of a quantum system, because the quantum mechanical time evolution of $\varphi ^+$ is not given by Eq.~(\ref{ferFtvph+}): \begin{equation} \varphi ^+(t)= \rme ^{-\rmi Ht} \varphi ^+ \ \neq \ \widetilde{\varphi}^+(t) \, . \label{neoe} \end{equation} To counter this argument, the authors of~\cite{C} claim that the derivation of the QAT was misquoted from the original source~\cite{JMP95}, and that the flawed assumption~(\ref{flawassum1}) was never used to derive the QAT~(\ref{ferFtvph+}). It seems therefore necessary to quote the original derivation (see~\cite{JMP95}, page~2597):\footnote{In this quote, $\phi ^{\rm in}$, $\phi ^+$, ${\cal F}(t)$ and Eq.~(\ref{Ftdkd}) correspond, respectively, to $\varphi ^{\rm in}$, $\varphi ^+$, $\widetilde{\varphi}^+(t)$ and Eq.~(\ref{ferFtvph+}).} \begin{quote} {\small \it ``We are now in the position to give a mathematical formulation of the QAT: we choose $t=0$ to be the time before which all preparations of $\phi ^{\rm in}(t)$ are completed and after which the registration of $\psi ^{\rm out}(t)$ begins. This means that for $t>0$ the energy distribution of the preparation apparatus must vanish: $\langle E,\eta |\phi ^{\rm in}(t) \rangle =0$ for all values of the quantum numbers $E$ and $\eta$ ($\eta$ are the additional quantum numbers which we usually suppress). 
As the mathematical statement for `no preparations for $t>0$' we therefore write (the slightly weaker condition) \begin{equation} \hskip-0.5cm 0= \int \rmd E \, \langle E|\phi ^{\rm in}(t)\rangle = \int \rmd E \, \langle ^+E|\phi ^{+}(t)\rangle = \int \rmd E \, \langle ^+E|\rme ^{-\rmi Ht}|\phi ^{+}\rangle \end{equation} or \begin{equation} \hskip-0.5cm 0= \int_{-\infty}^{+\infty} \rmd E \, \langle ^+E|\phi ^{+}\rangle \rme ^{-\rmi Et} \equiv {\cal F}(t) \quad {\rm for \ } t>0 \, . \label{Ftdkd} \end{equation} } \end{quote} The readers can decide whether or not the flawed hypothesis~(\ref{flawassum1}) was used to derive the QAT~(\ref{Ftdkd}). Nevertheless, it is actually not very relevant whether the authors of~\cite{JMP95} used~(\ref{flawassum1}) to derive~(\ref{ferFtvph+}). As pointed out in~\cite{HARDY}, and as mentioned above, even though one can forget~(\ref{flawassum1}) and simply assume~(\ref{ferFtvph+}) as the causal condition to be satisfied by $\varphi ^+$, such causal condition has little to do with the time evolution of a quantum system, see again Eq.~(\ref{neoe}). In particular, as even the author of~\cite{BAUMGARTEL} has asserted, the $t$ that appears in Eq.~(\ref{ferFtvph+}) is not the same as the parametric time $t$ that labels the evolution of a quantum system.\footnote{All this shows that the new term TAQT is a misnomer. A better name is Bohm-Gadella theory, because it was these two authors who proposed the theory and summarized it in~\cite{BG}.} Thus, as far as standard quantum mechanics is concerned, the causal content of the QAT is physically vacuous, and therefore, regardless of how one motivates it, there is no physical justification for the choice~(\ref{BGchoice}). 
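The Paley-Wiener equivalence between the causal condition~(\ref{ferFtvph+}) and Hardy-class analyticity can itself be illustrated numerically. In the following Python sketch (our own toy example, added for illustration and not taken from TAQT), the function $f(E)=1/(E-\rmi)$ is analytic and decaying in the lower half-plane, i.e.~Hardy from below, and its Fourier transform indeed vanishes for $t>0$ while equalling $2\pi\rmi\,\rme^{t}$ for $t<0$:

```python
import numpy as np

# Toy example (not from the paper): f(E) = 1/(E - i) has its only pole at
# E = +i, so it is analytic and decaying in the lower half-plane ("Hardy
# from below").  By Paley-Wiener, int dE e^{-iEt} f(E) must vanish for t > 0;
# closing the contour below/above gives 0 (t > 0) and 2*pi*i*e^t (t < 0).
dE = 0.01
E = np.arange(-5000.0, 5000.0, dE)         # truncated real-energy grid
f = 1.0 / (E - 1j)

def ftilde(t):
    """Truncated Fourier integral of f by a simple Riemann sum."""
    return dE * np.sum(np.exp(-1j * E * t) * f)

print(abs(ftilde(+1.0)))                   # ~ 0 : "no preparation" for t > 0
print(abs(ftilde(-1.0)), 2 * np.pi / np.e) # both ~ 2.31 : the residue value
```

Moving the pole into the lower half-plane shifts the nonzero response to $t>0$; the causal statement~(\ref{ferFtvph+}) is exactly this Hardy-type analyticity, and nothing more.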
\section{TAQT vs.~the ``classic results''} \label{sec:clasins} In this section, we are going to compare the Hardy axiom of TAQT with some classic results of Paley and Wiener, of Gelfand and Shilov and of the theory of ultradistributions, which we shall collectively refer to as the ``classic results.'' More precisely, we will see that the spaces of test functions $\widehat{\varphi}^{\pm}$ obtained by the ``standard method'' would be of Hardy class only if the ``classic results'' were wrong. The direct comparison with the ``classic results'' is more easily done in one dimension, and therefore we shall use the example of the one-dimensional rectangular barrier potential: \begin{equation} V(x)=\left\{ \begin{array}{ll} 0 &-\infty <x<a \\ V_0 &a<x<b \\ 0 &b<x<\infty \, . \end{array} \right. \label{sbpotential1} \end{equation} For this potential, the ``in'' and ``out'' eigensolutions are well known and can be found for example in~\cite{JPA04}. We shall denote them by $\chi _{\rm l,r}^{\pm}(x;E)$, where the labels l,r denote left and right incidence. When we analytically continue these eigenfunctions, or when we consider the Gamow states for this potential, the ``standard method'' calls for test functions $\varphi _{\rm l,r}^{\pm}(x)$ for which the following integrals are finite: \begin{equation} \widehat{\varphi}_{\rm l,r}^{\pm}(z)=\int_{-\infty}^{\infty}\rmd x \, \overline{\chi _{\rm l,r}^{\pm}(x;\overline{z})}\, {\varphi}(x) \, . \label{Upmac1D} \end{equation} Just as in the example discussed in Sec.~\ref{sec:stamet}, the test functions $\varphi (x)$ must at least fall off faster than exponentials. To further simplify the discussion, we need to recall that, because of Eqs.~(\ref{asstrco1}) and (\ref{asstrco2}), the Hardy axiom assumes that the ``free'' wave functions $\widehat{\varphi}_{\rm l,r}^{\rm in}(E)$ and $\widehat{\varphi}_{\rm l,r}^{\rm out}(E)$ are also of Hardy class. 
These ``free'' functions are given by (hereafter, we just consider $\varphi _{\rm l,r}^{\rm in}$, since the analysis for $\varphi _{\rm l,r}^{\rm out}$ is the same) \begin{equation} \widehat{\varphi}_{\rm l}^{\rm in}(E)= \frac{1}{\sqrt{4\pi k \,}} \int_{-\infty}^{\infty}\rmd x \, \rme ^{-\rmi kx} \, {\varphi}^{\rm in}(x) \, , \label{Upmac1D0in} \end{equation} \begin{equation} \widehat{\varphi}_{\rm r}^{\rm in}(E)= \frac{1}{\sqrt{4\pi k \,}} \int_{-\infty}^{\infty}\rmd x \, \rme ^{\rmi kx} \, {\varphi}^{\rm in}(x) \, , \label{Upmac1D0inr} \end{equation} where $k=\sqrt{E}$ is the wave number. The total wave function is given by the sum of left and right components: \begin{equation} \widehat{\varphi}^{\rm in}(E)= \widehat{\varphi}_{\rm l}^{\rm in}(E) + \widehat{\varphi}_{\rm r}^{\rm in}(E) \, . \label{lrimoc} \end{equation} It is simpler to work with $k$ rather than with $E$ and define \begin{equation} \widehat{\varphi}_{\rm l,r}^{\rm in}(k) \equiv \sqrt{2k \,} \, \widehat{\varphi}_{\rm l,r}^{\rm in}(E) \, ; \label{Upmac1D0in2} \end{equation} that is, \begin{equation} \widehat{\varphi}_{\rm l}^{\rm in}(k)= \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty}\rmd x \, \rme ^{-\rmi kx} \, {\varphi}^{\rm in}(x) \, , \quad k\geq 0 \, , \label{Upmac1D0ink} \end{equation} \begin{equation} \widehat{\varphi}_{\rm r}^{\rm in}(k)= \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty}\rmd x \, \rme ^{\rmi kx} \, {\varphi}^{\rm in}(x) \, , \quad k\geq 0 \, . \label{Upmac1D0irdk} \end{equation} The ``total'' wave function in the wave-number representation, $\widehat{\varphi}^{\rm in}(k)= \widehat{\varphi}_{\rm l}^{\rm in}(k) + \widehat{\varphi}_{\rm r}^{\rm in}(k)$, is thus the Fourier transform of $\varphi (x)$, \begin{equation} \widehat{\varphi}^{\rm in}(k)= \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty}\rmd x \, \rme ^{-\rmi kx} \, {\varphi}^{\rm in}(x) \, , \quad k \in {\mathbb R} \, . 
\label{lrimockks} \end{equation} Its analytic continuation will be denoted as \begin{equation} \widehat{\varphi}^{\rm in}(q)= \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty}\rmd x \, \rme ^{-\rmi qx} \, {\varphi}^{\rm in}(x) \, , \quad q \in {\mathbb C} \, . \label{Upmac1D0inlrqFT} \end{equation} At this point, we are ready to introduce two classic theorems. The first one is due to Paley and Wiener (see Theorem IX.11 in~\cite{SIMON}): \vskip0.5cm \theoremstyle{plain} \newtheorem*{Th1}{Theorem~1 (Paley-Wiener)} \begin{Th1} An entire analytic function $\widehat{\varphi}(q)$ is the Fourier transform of a $C_0^{\infty}({\mathbb R})$ function $\varphi (x)$ with support in the segment $\{ x \, | \ |x|<A \}$ if, and only if, for each $N$ there is a $C_N$ so that \begin{equation} |\widehat{\varphi}(q)|\leq \frac{C_N \, \rme ^{A|{\rm Im}(q)|}}{ (1+|q|)^N} \label{PWbound} \end{equation} for all $q \in {\mathbb C}$. \end{Th1} \vskip0.5cm This theorem says that the Fourier transform of a $C_0^{\infty}$ function is an analytic function that grows exponentially, and that such exponential growth is mildly corrected (but not canceled) by a polynomial falloff. The second theorem we shall use is due to Gelfand and Shilov~\cite{GELFAND}. Before stating it, we need some definitions. Let $a$ and $b$ denote two positive real numbers satisfying~(\ref{ab}). Let us define $\Phi _{a,b}$ as the set of all differentiable functions $\varphi (x)$ ($-\infty <x <\infty$) satisfying the inequalities \begin{equation} \left| \frac{\rmd ^n \varphi (x)}{\rmd x ^n} \right| \leq C_n \rme ^{-\alpha \frac{|x|^a}{a}} \end{equation} with constants $C_n$ and $\alpha >0$ which may depend on the function $\varphi$. 
Let us define the space $\widehat{\Phi}_{a,b}$ as the set of entire analytic functions $\widehat{\varphi}(q)$, $q={\rm Re}(q)+\rmi \,{\rm Im}(q)$, which satisfy the inequalities \begin{equation} |q^n\widehat{\varphi}(q)|\leq C_n \rme ^{+\beta \frac{|{\rm Im}(q)|^b}{b}} \, , \label{GSbound} \end{equation} where the constants $C_n$ and $\beta >0$ depend on the function $\varphi$. It is obvious that the elements of $\Phi _{a,b}$ are functions that, together with their derivatives, decrease at infinity faster than $\rme ^{-\frac{|x|^a}{a}}$, whereas the elements of $\widehat{\Phi}_{a,b}$ are analytic functions that grow exponentially at infinity as $\rme ^{+\frac{|{\rm Im}(q)|^b}{b}}$, except for a polynomial correction that doesn't cancel the exponential blowup. \vskip0.5cm \theoremstyle{plain} \newtheorem*{Th2}{Theorem~2 (Gelfand-Shilov)} \begin{Th2} The space $\widehat{\Phi}_{a,b}$ is the Fourier transform of $\Phi _{a,b}$. \end{Th2} \vskip0.5cm This theorem means that the smooth functions that fall off at infinity faster than $\rme ^{-|x|^a/a}$ are, in Fourier space, analytic functions that grow exponentially like $\rme ^{+|{\rm Im}(q)|^b/b}$. The bounds~(\ref{PWbound}) and (\ref{GSbound}) are to be understood in the same way as the bounds~(\ref{blowupfex}) and (\ref{grofainf}). That is, the bounds~(\ref{PWbound}) and (\ref{GSbound}) mean that $\widehat{\varphi}(q)$ is an oscillatory function that grows exponentially in the infinite arc of the $q$-plane, the oscillation being tightly bounded by Eqs.~(\ref{PWbound}) and (\ref{GSbound}) when $\varphi (x)$ belongs to $C_0^{\infty}$ and $\Phi _{a,b}$, respectively. Note that after the addition of the corresponding polynomial corrections, the bounds~(\ref{blowupfex}) and (\ref{grofainf}) are entirely analogous to the bounds~(\ref{PWbound}) and (\ref{GSbound})---the operators $U_{\pm}$ are after all Fourier-like transforms~\cite{JPA04}. 
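A concrete instance of Theorem~2 may help. For the Gaussian $\varphi(x)=\rme^{-x^2/2}$, which belongs to $\Phi_{a,b}$ with $a=b=2$, the Fourier transform is $\rme^{-q^2/2}$: it decays along the real axis but grows like $\rme^{+|{\rm Im}(q)|^2/2}$ along the imaginary axis. The following Python sketch (our own numerical check, not part of the original text) evaluates the transform at $q=3$ and $q=3\rmi$:

```python
import numpy as np

# Numerical check (not from the paper): the Gaussian e^{-x^2/2} lies in the
# Gelfand-Shilov space Phi_{2,2}; its Fourier transform e^{-q^2/2} decays on
# the real axis but blows up like e^{+|Im q|^2 / 2} on the imaginary axis.
dx = 0.001
x = np.arange(-30.0, 30.0, dx)
phi = np.exp(-x**2 / 2.0)

def phihat(q):
    """(1/sqrt(2*pi)) * int dx e^{-iqx} phi(x), for complex q."""
    return dx * np.sum(np.exp(-1j * q * x) * phi) / np.sqrt(2.0 * np.pi)

print(abs(phihat(3.0)), np.exp(-4.5))      # decay on the real axis
print(abs(phihat(3.0j)), np.exp(+4.5))     # exponential blowup at q = 3i
```

This is the behaviour that the bound~(\ref{GSbound}) encodes, here with $a=b=2$ and $\alpha=\beta=1$.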
Let us now apply the above theorems to the functions $\varphi ^{\rm in}(x)$ obtained by the ``standard method.'' In order for Eq.~(\ref{Upmac1D0inlrqFT}) to make sense, $\varphi ^{\rm in}(x)$ must fall off faster than exponentials. If we choose $\varphi ^{\rm in}(x)$ to fall off like $\rme ^{-|x|^a/a}$, then the Gelfand-Shilov theorem tells us that $\widehat{\varphi}^{\rm in}(q)$ grows like $\rme ^{+|{\rm Im}(q)|^b/b}$. Even when we impose that $\varphi ^{\rm in}(x)$ is $C_0^{\infty}$, which is already a very strict requirement, the Paley-Wiener theorem says that $\widehat{\varphi}^{\rm in}(q)$ grows exponentially. This means, in particular, that the $\widehat{\varphi}^{\rm in}(q)$ do {\it not}, in general, tend to zero in the infinite arc of the $q$-plane, because if they did, the Paley-Wiener and the Gelfand-Shilov theorems would be wrong. Because of Eq.~(\ref{Upmac1D0in2}), $\widehat{\varphi}^{\rm in}(z)$ does not, in general, tend to zero as $|z|$ tends to infinity in the lower half-plane of the second sheet. Hence the space of $\widehat{\varphi}^{\rm in}$'s is not of Hardy class from below. The space of $\widehat{\varphi}^{+}$'s cannot be of Hardy class from below either, because if it were, then \begin{equation} \lim _{|z|\to \infty} \widehat{\varphi}^{+}(z) =0 \, , \label{sskdks} \end{equation} where the limit is taken in the lower half plane of the second sheet. By Eq.~(\ref{asstrco1}), this implies that the space of $\widehat{\varphi}^{\rm in}$'s would also be of Hardy class and comply with this limit, which we know is not possible due to the ``classic results.'' Thus, the ``standard method'' yields spaces of test functions that do {\it not} comply with the Hardy axiom. This is precisely what was meant in~\cite{HARDY} by the assertion that TAQT is inconsistent with standard quantum mechanics. To finish this section, we note that if we chose the test functions as in~\cite{BOLLINI}, then we would be dealing with ultradistributions. 
In Fourier space, the test functions for ultradistributions grow faster than any exponential as we follow the imaginary axis, see~\cite{BOLLINI} and references therein. Thus, if the ``standard method'' yielded spaces of Hardy functions, that property of ultradistributions would be false. \section{Further remarks} \label{sec:further remarks} The authors of~\cite{C} claim that it is inaccurate to state that the proponents of TAQT dispense with asymptotic completeness. This statement should be compared with the first quote in section~6 of~\cite{HARDY}. The authors of~\cite{C} also claim that TAQT obtains the resonant states by solving the Schr\"odinger equation subject to purely outgoing boundary conditions. This claim should be compared with the second quote in section~6 of~\cite{HARDY}. The authors of~\cite{C} also dispute the assertion of~\cite{HARDY} that TAQT sometimes uses the whole real line as though it coincided with the scattering spectrum of the Hamiltonian. A glance at, for example, the QAT~(\ref{ferFtvph+}) seems to support such assertion. \section{Conclusions} \label{sec:con} In standard scattering theory, one assumes that the physics is described by the Lippmann-Schwinger equation. When one solves such equation, one finds that its solutions must be accommodated by a rigged Hilbert space, and that its time evolution runs from $t=-\infty$ till $t=+\infty$~\cite{LS1}. When one analytically continues the solutions of the Lippmann-Schwinger equation, one finds that they must be accommodated by {\it one} rigged Hilbert space, which also accommodates the resonant (Gamow) states. The construction of such rigged Hilbert space is determined by standard distribution theory. By contrast, TAQT assumes that the solutions of the Lippmann-Schwinger equations belong to {\it two} rigged Hilbert spaces of Hardy class. In TAQT, one never explicitly solves the Lippmann-Schwinger equation for specific potentials in the position representation. 
Instead, one assumes that its solutions satisfy the Hardy axiom. Unlike in standard scattering theory, in TAQT the time evolution of the solutions of the Lippmann-Schwinger equation does not run from $t=-\infty$ till $t=+\infty$. By comparing the properties of the actual solutions of the Lippmann-Schwinger equation with the Hardy axiom, we have seen that such actual solutions would comply with the Hardy axiom only if classic results of Paley and Wiener, of Gelfand and Shilov, and of the theory of ultradistributions were wrong. We have (again) stressed the fact that the Quantum Arrow of Time, which is the justification for using the rigged Hilbert spaces of Hardy class, has little to do with the time evolution of a quantum system. We have stressed that using the method of TAQT to introduce rigged Hilbert spaces, we could accommodate the Gamow states in a landscape of arbitrary rigged Hilbert spaces, see also~\cite{RELBO}. Our claim of inconsistency should not be taken as a claim that TAQT is mathematically inconsistent or that TAQT doesn't have a beautiful mathematical structure. What the present author claims is that TAQT is not applicable in quantum mechanics and is in fact a different theory. To finish, we would like to mention that the ``classic theorems'' are not in conflict with using Hardy functions in quantum mechanics. They are in conflict only with the Hardy axiom. Thus, our results do not apply to other works that use Hardy functions in a different way~\cite{YOSHI}. \ackn This research was supported by MEC and DOE. \section*{References}
\section{Introduction} Galaxy mergers, which take place more frequently in the dense environments of galaxy clusters, play a pivotal role in the evolution of galaxies in the Universe across cosmic time. The merger dynamics is influenced by various factors such as size, mass, impact parameter, relative velocity, gas content and the relative inclination of the participating galaxies. Mergers have profound effects on the properties of galaxies on various physical scales. On galactic scales ($\sim10-100$ kpc), mergers may result in ram pressure stripping of gas, the formation of long tidal tails and enhanced star formation. On smaller scales ($< 1$ pc), mergers may drive the growth of black holes (BHs) and trigger AGN activity with occasional relativistic jet ejection, transporting matter and energy from the galaxy interiors to the surroundings through AGN feedback processes \citep{MN12}. On the smallest scales ($\ll 1$ pc), the final inspiral stage of merging BHs results in powerful gravitational wave emission. However, the formation and merger rates of galactic BHs are still highly uncertain. Finally, on very large scales ($\sim100-1000$ kpc), the shocks and turbulence induced into the intra-cluster medium (ICM) during mergers may inject large amounts of nonthermal energy and shock-heat the ICM to X-ray temperatures, disrupting cooling cores \citep{kandu2006,b64,spaul_01}. Therefore, cluster centers are fascinating laboratories to study galaxy formation and evolution. 
\begin{table*} \centering \caption{Galaxy properties in the central region of the Abell 407 cluster.} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline Galaxy$^a$ & Coordinates (J2000) & Redshift$^b$ & \multicolumn{5}{c|}{SDSS Magnitudes} & \multicolumn{3}{c|}{SDSS Colours}\\ \cline{4-8} \cline{9-11} & (from SDSS) & & $u$ & $g$ & $r$ & $i$ & $z$ & $g-r$ & $r-i$ & $i-z$\\ \hline \hline G1 (B) & 03h 01m 51.5s +35d 50m 30s & 0.0483 & 18.23 & 15.84 & 14.72 & 14.23 & 13.84 & 1.12 & 0.49 & 0.39\\ G2 (A) & 03h 01m 51.2s +35d 50m 22s & 0.0454 & 21.09 & 19.04 & 18.46 & 17.95 & 17.54 & 0.58 & 0.51 & 0.41\\ G3 (D) & 03h 01m 51.8s +35d 50m 20s & 0.0471 & 18.11 & 16.00 & 15.00 & 14.44 & 13.97 & 1.00 & 0.56 & 0.47\\ G4 (C) & 03h 01m 51.7s +35d 50m 12s & 0.0501 & 20.91 & 18.81 & 17.76 & 17.37 & 16.90 & 1.05 & 0.39 & 0.47\\ G5 (F) & 03h 01m 51.5s +35d 50m 12s & 0.0478 & 20.02 & 17.93 & 16.86 & 16.31 & 15.85 & 1.07 & 0.55 & 0.46\\ G6 (E) & 03h 01m 52.4s +35d 50m 29s & 0.0444 & 21.33 & 19.93 & 19.11 & 18.70 & 18.18 & 0.82 & 0.41 & 0.52\\ G7 (G) & 03h 01m 53.2s +35d 50m 26s & 0.0471 & 18.37 & 16.34 & 15.31 & 14.80 & 14.39 & 1.03 & 0.51 & 0.41\\ G8 (H) & 03h 01m 53.7s +35d 50m 28s & - & - & - & - & - & - & - & - & -\\ G9 (I) & 03h 01m 54.5s +35d 50m 18s & 0.0451 & 20.27 & 18.68 & 17.59 & 17.23 & 16.68 & 1.09 & 0.36 & 0.55\\ \hline \multicolumn{11}{l}{$^a$ Zwicky's original nomenclature is shown within brackets.}\\ \multicolumn{11}{l}{$^b$ From \cite{b26}.} \end{tabular} \label{tab1} \end{table*} Rich galaxy clusters are usually dominated by massive, luminous central elliptical galaxies known as the Brightest Cluster Galaxies (BCGs). 
There are some interesting examples where extremely massive ($> 10^{12} M_{\odot}$) and even brighter ellipticals, called cD galaxies, form in the densest regions near the spatial and kinematical center of their host clusters \citep{tov_01,b62}. The distinguishing property of cD galaxies is the presence of a diffuse, faint stellar halo that may extend over hundreds of kpc, well into the intracluster medium \citep{b62,tov_01}. These facts seem to suggest that the formation of cD galaxies is unique to the cluster environment and is linked closely to its dynamical history. It is still far from clear how cD galaxies originate in such a galactic environment, because very few clusters with cD galaxies in the process of formation have been identified. \begin{figure} \centering \includegraphics[width=3.5in,height=4in,keepaspectratio]{A407_3.jpg} \caption{Colour image ($5.6^{\prime} \times 4.7^{\prime}$) of the galaxy cluster Abell~407 taken from the Sloan Digital Sky Survey (SDSS). The central region is host to a striking group of nine close-packed, galaxy-like condensations, embedded within a diffuse stellar halo of intra-cluster light (the `Zwicky's Nonet'). This system possibly represents an exceptionally rare site of a multiple-nucleus cD galaxy precursor assembling in a rich galaxy cluster environment. } \label{fig1} \end{figure} \begin{figure*} \centering \includegraphics[width=3.0in,height=3.5in,keepaspectratio]{JHK_composite_MOD.png}\quad \includegraphics[width=2.9in,height=3.5in,keepaspectratio]{3C3506_150MHz+RASS.png} \caption{Left panel: A pseudo-colour near-infrared image ($2.1^{\prime} \times 2.1^{\prime}$) of the compact group `Zwicky's Nonet' obtained by combining the J, H and K-band images available in the 2MASS 6X deep survey. The nine central galaxies of the nonet are marked on the image, while Table~\ref{tab1} and Table~\ref{tabk} show their positions, magnitudes, colours and estimated central black hole masses. 
Right panel: GMRT 150 MHz radio image ($11^{\prime} \times 11^{\prime}$) of 4C 35.06, shown superposed on contours of the smoothed ROSAT 0.5--2.4 keV band X-ray data.} \label{2mass_rass+150} \end{figure*} cD galaxies are almost always radio loud and often eject powerful radio jets from accreting supermassive black holes \citep{bagchi_94,b64}. Galaxy interactions and major mergers remove significant amounts of angular momentum by gravitational torque and drive a part of the constituent gas towards nuclear supermassive black holes (SMBHs), thereby triggering the activity of the central engine \citep{b72}. As a result, cD galaxies are more likely to be radio-loud above radio luminosity $\sim 10^{24.5}$ W Hz$^{-1}$ at 1.4 GHz compared with other galaxies in a cluster \citep{bagchi_94}. The multiple galactic nuclei observed in cDs and BCGs provide evidence supporting the merger scenario. In addition, the SMBHs of these galaxies are believed to grow by multiple galactic mergers \citep{volo_2003,kul_2012,b51}. Thus, gravitational perturbations in the accretion disc in the presence of closely spaced massive black hole pairs may give rise to distorted radio jets. The inversion symmetry found in `S'- or `Z'-shaped radio sources is ascribed to the precession of a spinning black hole \citep{b51,b76} and the associated tilting/perturbation of the accretion disc \citep{b77,b78}. The precessing radio jets in this situation are likely to trace a characteristic helical pattern on the sky \citep{b96}. One prominent example of this rare phenomenon is PKS 2149-158, a pair of radio-loud elliptical galaxies in the center of the cluster Abell 2382 that forms a pair of twisted jet systems \citep{b1}. The well-known `C'-shaped wide-angle tail (WAT) source 3C~75 is another striking example of a twin AGN producing two pairs of jets that show oscillations and interactions \citep{b2}. 
In this article, we report radio, optical and near-IR studies of the extraordinary radio source 4C 35.06 (B2 0258+35B), located near the center of the cluster Abell 407, which, interestingly, also harbors a remarkably compact group of nine galaxies embedded inside a diffuse stellar halo of faint intra-cluster light. We make a detailed study of this remarkable system, which possibly provides unique and compelling evidence for the ongoing formation of a giant cD galaxy at the cluster center, and connects it to the evolution of its central black hole by mergers. The radio source 4C 35.06 clearly shows a very peculiar twisted jet system on a 100 kpc scale, which we investigate further with multi-frequency radio data. This paper is organised as follows. In Section 2 we discuss the optical, near-IR, radio and X-ray properties of the system. In Section 3 we discuss the observations and the data reduction procedure. In Section 4 the main results obtained in the present study are discussed in seven subsections. In the concluding Section 5, we summarise our findings. Throughout this article, a Hubble constant $H_{0} = 73$~km\,s$^{-1}$\,Mpc$^{-1}$ and cosmological parameters $\Omega_{M} = 0.27$ and $\Omega_{\Lambda} = 0.73$ were used. At the cluster redshift $z = 0.047$, this implies a linear scale of 0.885 kpc arcsec$^{-1}$ and a luminosity distance of 200 Mpc \citep{b68}. We define the synchrotron emission spectral index $\alpha$ by $S(\nu) \propto \nu^{\alpha}$, where $S(\nu)$ is the flux density at frequency $\nu$. \section[]{`Zwicky's Nonet': Previous Optical, X-ray and Radio Observations} Abell~407 is a rich galaxy cluster of Bautz-Morgan class II, at a redshift of 0.047 \citep{b56}.
The optical image of the central region of this cluster from the Sloan Digital Sky Survey (SDSS) shows a complex ensemble of at least nine galaxy-like condensations $\sim 1 \arcmin$ ($\sim50$ kpc) across, embedded in a low surface brightness, diffuse stellar halo reminiscent of a giant cD galaxy (Figure~\ref{fig1}). Table~\ref{tab1} lists the SDSS optical magnitudes, redshifts and colours of these nine galaxies. The $g-r$ colour index of $\sim1$ indicates that they are passive, early-type red galaxies. Historically, Fritz Zwicky first noticed this extraordinary galactic configuration (V Zw 311) in 1971 \citep{zwicky71}; it was later studied in more detail by \cite{b26} using the Palomar 200-inch and 60-inch telescopes. \cite{b26} described it as the ``most nightmarish known multiple-nucleus system" and concluded that this puzzling galactic system possibly represents an extremely rare and unique snapshot of a giant cD galaxy caught in its formative stages. Since then, there have been no further investigations of this highly unusual galactic system to understand its nature. Here we propose to name this extraordinary group of nine galaxies `Zwicky's Nonet', honouring Fritz Zwicky, who first noticed it. To our knowledge, this is the most compact and rich system of multiple galaxies known to date. Some other famous compact groups with multiple members are `Stephan's Quintet' and `Seyfert's Sextet', and the lesser known `Zwicky's Triplet' (Arp 103). In the Uppsala General Catalogue, this multi-galactic system is confusingly listed as a single galaxy, UGC~2489 (G1 in our nomenclature), positioned at $03^h01^m51.5^s,$ $+35^d50^m30^s$ \citep{b57}. The SDSS optical and 2MASS near-IR images of the central region of Abell~407 are shown in Figs~\ref{fig1} and \ref{2mass_rass+150} (left panel), and a zoomed version is shown in Fig.~\ref{gmrt610} (right panel), where we have also labeled the nine galactic condensations by letters G1 to G9 (see Table~\ref{tab1}).
The first column in Table~\ref{tab1} also gives Zwicky's original nomenclature (within brackets) for these nine galaxies. \begin{figure*} \centering \includegraphics[width=5.5in,height=5in,keepaspectratio]{610_SDSS_i.png} \caption{Left panel: 4C~35.06 at 610 MHz mapped ($5.8^{\prime} \times 5.4^{\prime}$) with the GMRT at $5\arcsec$ resolution, with the positions of the central galaxies G1 (the brightest member), and G2, G5 and G6 (possible radio sources) marked on it. Right panel: the zoomed SDSS i-band image ($1.4^{\prime} \times 1.3^{\prime}$) of the central region, with the GMRT 610 MHz radio contours plotted on it. In this figure, all nine galaxies comprising `Zwicky's Nonet' have been marked. North is up, and east is to the left.} \label{gmrt610} \end{figure*} \subsection{X-ray detection} The Abell 407 cluster is detected in the ROSAT All Sky Survey (RASS), with an estimated X-ray luminosity and gas temperature of $5 \times 10^{44}$ erg\,s$^{-1}$ (0.1--2.4 keV band) and 2.5 keV, respectively \citep{ebel_98}. In the MCXC (Meta Catalogue of X-ray Galaxy Clusters) it is listed as MCXC J0301.8+3550 \citep{Piffa2011}, with an estimated cluster mass $M_{500} = 9.16 \times 10^{13}$ M$_{\odot}$, where $M_{500}$ is the total mass enclosed inside a radius $R_{500} = 675.4$ kpc, within which the mean over-density of the cluster is 500 times the critical density at the cluster redshift. From archival X-ray data in the RASS hard band (0.5--2.4 keV) we obtained surface brightness contours that are overlaid on the GMRT 150 MHz image (Figure~\ref{2mass_rass+150}, right panel). Here the projected separation of the optical center (taken as G1, the brightest member) and the brighter X-ray peak to the southwest is $\sim 1.7\arcmin$, or $\sim 90$ kpc. Similar offsets between the optical and X-ray emission peaks have been observed in systems with ongoing mergers or in dynamically active clusters \citep{ry_08,man_12}.
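The adopted angular-to-linear scale, the luminosity distance and the MCXC mass quoted above can be cross-checked with a short, standard-library-only script. This is a sketch: the distance calculation uses the paper's cosmology ($H_0 = 73$, $\Omega_M = 0.27$, $\Omega_\Lambda = 0.73$), while the $M_{500}$ check assumes an MCXC-style $H_0 = 70$, $\Omega_M = 0.3$ cosmology, which is our assumption since the MCXC parameters are not restated here.

```python
import math

C_KMS = 299792.458  # speed of light, km/s

def comoving_distance_mpc(z, h0, om, ol, steps=10000):
    """Flat-LCDM comoving distance via trapezoidal integration of dz/E(z)."""
    f = lambda zz: 1.0 / math.sqrt(om * (1 + zz) ** 3 + ol)
    dz = z / steps
    s = 0.5 * (f(0.0) + f(z)) + sum(f(i * dz) for i in range(1, steps))
    return (C_KMS / h0) * s * dz

z = 0.047
dc = comoving_distance_mpc(z, 73.0, 0.27, 0.73)
dl = dc * (1 + z)                      # luminosity distance
da = dc / (1 + z)                      # angular-diameter distance
kpc_per_arcsec = da * 1e3 * math.pi / (180 * 3600)
print(f"D_L = {dl:.0f} Mpc, scale = {kpc_per_arcsec:.3f} kpc/arcsec")
# The ~1.7 arcmin optical/X-ray offset then corresponds to ~90 kpc:
print(f"offset = {1.7 * 60 * kpc_per_arcsec:.0f} kpc")

# M_500 = (4/3) pi R_500^3 * 500 rho_crit(z), assuming H0 = 70, Om = 0.3
G = 6.674e-11                          # m^3 kg^-1 s^-2
MPC_M, MSUN = 3.0857e22, 1.989e30
h = 70.0 * 1e3 / MPC_M                 # H0 in s^-1
rho_c = 3 * h * h / (8 * math.pi * G) * (0.3 * (1 + z) ** 3 + 0.7)
r500 = 675.4e-3 * MPC_M                # R_500 = 675.4 kpc in metres
m500 = 4.0 / 3.0 * math.pi * r500 ** 3 * 500 * rho_c / MSUN
print(f"M_500 = {m500:.2e} Msun")      # ~9.2e13, close to the MCXC value
```

The recovered values (200 Mpc, 0.885 kpc arcsec$^{-1}$, $\sim 9.2 \times 10^{13}$ M$_\odot$) agree with those quoted in the text.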
\begin{figure} \centering \includegraphics[width=3.5in,height=4in,keepaspectratio]{gmrt_610_color.png} \caption{Optical i-band image ($5.7^{\prime} \times 2.9^{\prime}$) of the galaxy cluster Abell~407 taken from the Sloan Digital Sky Survey (SDSS). The white contours show the 610 MHz radio emission morphology of 4C~35.06 as imaged with the GMRT at $5\arcsec$ resolution. Both the radio and optical images have been rotated for convenience.} \label{610_color} \end{figure} \subsection{Previous radio observations of 4C~35.06} One of the earliest detections of this source was at 1.4 GHz with the Cambridge One-Mile Telescope, at fairly poor resolution \citep{Riley75}. Subsequently, this source was studied with the VLA at 1.4 GHz and 5 GHz by \cite{b27}, revealing a bi-lobed structure at $\sim5\arcsec$ resolution. At 1.4 GHz the total flux density of 4C~35.06 is 728 mJy, with a core flux density of 10 mJy, an eastern lobe of 305 mJy and a western lobe of 416 mJy. At 5 GHz, the total flux density was 170 mJy, with a core flux of 4 mJy, an eastern lobe of 55 mJy and a western lobe of 114 mJy. A 5 GHz VLBA observation at $4.87 \times 2.23$ mas$^{2}$ resolution detected a compact radio core of 2.6 mJy peak flux (3.5 mJy integrated) associated with the galaxy G3, the second brightest member of the nonet \citep{b28}. However, the VLBA image covers only about 3~pc, while the large-scale radio structure extends over 200 kpc, leaving a vast gap in our understanding of the connection between the compact AGN and the larger radio morphology. Moreover, the previous high frequency VLA maps all miss the large-scale jet structure and the ultra-steep spectrum outer regions of this source. Recently, using high resolution ($5\arcsec$ FWHM) and high sensitivity (70 $\mu$Jy/beam rms) GMRT observations at 610 MHz, \cite{b65} drew attention to the complete radio structure of the unusual, helically twisted, kinked-jet system of the radio source 4C~35.06.
\cite{b97} have also studied this source at the very low radio frequency of 62 MHz with LOFAR, at an angular resolution of $50\arcsec$ FWHM. We discuss these observations along with our new GMRT observations in the sections below. \section[]{Observations and Data Reduction} \subsection{Multi-wavelength GMRT radio observations} We observed 4C~35.06 with the GMRT at three frequencies: 610, 235 and 150 MHz (Project codes $21_{-}066$ and $26_{-}037$ \footnote[1]{https://naps.ncra.tifr.res.in/goa/mt/search/basicSearch}). Table~\ref{tab2} shows the log of radio observations. For flux and bandpass calibrations, 3C 48 was observed at the beginning and end of the observing runs for 10 minutes. Thirty-minute scans on the target source were interleaved with 5-minute scans on the phase calibrator. The data were reduced using the NRAO AIPS software package. The AIPS tasks SETJY and GETJY were used for flux density calibration \citep{b30}. The visibility data were flagged for Radio Frequency Interference (RFI) using AIPS tasks. The clean, calibrated solutions for the flux calibrator were used to calibrate the phase calibrator. The bandpass solutions were computed using the flux calibrator and applied to the data, and the frequency channels were then averaged to increase the signal-to-noise ratio. The collapsed channel data were recalibrated for phase and amplitude solutions, which were then applied to the target source. The AIPS task IMAGR was used for wide-field (3D) imaging, correcting for the w-term effects at low frequencies. For imaging, the Briggs ROBUST weighting parameter was adjusted to better detect low surface brightness diffuse emission regions. Before the final imaging, several rounds of phase self-calibration and one round of amplitude self-calibration were applied to the data. At 610 and 235 MHz, rms noise levels of 70 $\mu $Jy/beam and 0.90 mJy/beam were achieved, respectively. The rms noise measured in the 150 MHz image was $\sim1$ mJy/beam.
In Fig.~\ref{610_color}, the GMRT 610 MHz radio image of 4C~35.06 is shown overlaid on the SDSS i-band optical image. From the VLA archives we also created high frequency radio maps using data in the D and C scaled arrays, observed at 6 cm (4.8 GHz) and 20 cm (1.4 GHz) wavelengths, respectively. Standard routines in AIPS were used for calibration and imaging. For spectral index mapping, we imaged both data sets at identical $15\arcsec$ (FWHM) angular resolutions. \begin{center} \begin{table*} \centering \caption{Details of radio observations.} \begin{tabular}{@{}llllll@{}} \hline\hline Telescope&Observed & Band &Obs. time & Beam & Map \\ &frequency & width & & (arc sec)& rms \\ \hline GMRT&610 MHz&32 MHz & 9hrs & 5.83 $\times $ 4.78 & 0.07 mJy/b \\ GMRT&235 MHz&6 MHz & 9hrs & 20.86 $\times $16.68 & 0.9 mJy/b \\ GMRT&150 MHz&16 MHz & 10hrs &19.87 $\times $ 15.77 & 1 mJy/b \\ VLA(NVSS)$^a$&1.4 GHz &100 MHz &survey data &45 $\times $ 45 & 0.4 mJy/b \\ VLA(VLSS)$^b$&74 MHz&1.56 MHz &survey data &80 $\times $ 80 & 100 mJy/b \\ \hline $^a$ \citep{b69} & $^b$ \citep{b70} \end{tabular} \label{tab2} \end{table*} \end{center} \begin{figure*} \includegraphics[width=6.5in,height=6in,keepaspectratio]{A407_GMRT.pdf} \caption{Colour scale GMRT images (top panel, left to right): at 610 MHz at resolution $5.83\arcsec \times 4.78\arcsec$, 235 MHz at resolution $20.86\arcsec \times 16.68\arcsec$, and 150 MHz (contour plot) with levels $[1, 2, 4, 8, 16, \ldots] \times 4$ mJy/beam and resolution $23.9\arcsec \times 19.36\arcsec$. Bottom panel: colour scale images showing 150 MHz GMRT contours plotted over the 1400 MHz NVSS image (bottom left) and the 74 MHz VLSS image (bottom right).} \label{gmrt} \end{figure*} \subsection{Spectroscopic Observations} We attempted to obtain good quality spectra for all nine galaxies comprising `Zwicky's Nonet'.
The optical spectroscopic observations of the brighter galaxy members G1, G3, G4, G7 and G9 were taken with the IUCAA Girawali Observatory (IGO) 2m telescope, and those of the fainter galaxies G2, G5 and G6 with the Palomar 200-inch telescope. The aim was to characterize their AGN and star forming activity, and to estimate the central velocity dispersions ($\sigma$) and black hole masses ($M_{BH}$) using the well known $M_{BH} - \sigma$ correlations \citep{b3,b4}. Optical long-slit spectroscopic data were taken on 20--22 November 2011 with the IUCAA 2m telescope (IGO). The spectra were obtained using the IUCAA Faint Object Spectrograph and Camera (IFOSC) \footnote[2]{http://igo.iucaa.in}. We used two IFOSC grisms, no. 7 and no. 8, in combination with a 1.5-arcsec slit. These grisms provide wavelength coverages of 3800--6840~\AA\ and 5800--8350~\AA, respectively. Standard stars were observed during the same nights for flux calibration. Wavelength calibration was done using standard Helium-Neon lamp spectra. The Palomar 200-inch observations were carried out on 2014 January 23. The data were taken covering the wavelength range 3800~\AA\ to 9500~\AA, using the double spectrograph (blue and red arms). Wavelength calibration was done using standard Fe-Ar arc lamp spectra. Standard routines in the Image Reduction and Analysis Facility (IRAF) were used for data reduction, and one-dimensional spectra were extracted using the {\em doslit} task in IRAF. The analysed data are included as supplementary material. \section[]{Results and Discussion} \subsection{The GMRT images of 4C 35.06: steep spectrum emission and source parameters} The upper row of Figure~\ref{gmrt} presents the GMRT low frequency radio images at 610, 235 and 150 MHz. The regions marked A1, B1 and A2, B2 represent features on the jets, while C denotes the core region. The regions D1 and D2 indicate the outermost diffuse structures of the source. The following sections give a detailed discussion of these regions.
We also show the radio images from the NRAO VLA Sky Survey (NVSS, 1420 MHz) and the VLA Low-frequency Sky Survey (VLSS, 74 MHz) in the bottom panel of Figure~\ref{gmrt}. The contours of the GMRT 150 MHz image are plotted over the NVSS and VLSS images for size comparison. The highest resolution ($5\arcsec$ FWHM) and deepest GMRT image to date, at 610 MHz, shows a bright, complex core region and an associated double sided twisted/helical jet structure, while the NVSS 1.4 GHz image does not resolve these structures due to its poor resolution ($45\arcsec$ FWHM). The VLA 1.4 GHz image at $15\arcsec$ resolution detects the extended, twisting-turning jet structure (Fig.~\ref{gmrt}, bottom left panel). The low frequency 235 and 150 MHz GMRT maps show extended, steep spectrum `relic' plasma emission features (see Figure~\ref{gmrt}) at the extremities of the helically twisted jet structure. These features were also detected in the 62 MHz LOFAR map \citep{b97}. At 610 MHz the source is found to have a total flux density of $1.7\pm 0.12$ Jy. The flux densities along the western and eastern jets are observed to be 580 mJy and 193 mJy, respectively. The angular size of the source at 610 MHz is $260\arcsec$, corresponding to a linear size of $\sim220$ kpc. The maximum extent of the source at 235 MHz is $430\arcsec$, or $\sim380$ kpc. At 150 MHz the source is found to have the largest size of $460\arcsec$, or $\sim400$ kpc. Thus the linear extent of the source grows with decreasing frequency, which is suggestive of steep-spectrum emission regions present at the extremities. At these frequencies, the western jet is brighter than the eastern jet (2.37 Jy and 1.18 Jy at 150 MHz, and 1.72 Jy and 800 mJy at 235 MHz, respectively). The total source flux densities at 150 MHz and 235 MHz are 6.0$\pm$0.18 Jy and 4.7$\pm$0.13 Jy, respectively. The excellent quality GMRT images enabled us to make high resolution ($\sim 25 \arcsec$) spectral index maps down to 150 MHz.
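The frequency dependence of the integrated flux densities quoted above can be summarised with two-point spectral indices, $\alpha = \log(S_1/S_2)/\log(\nu_1/\nu_2)$ in our $S(\nu) \propto \nu^{\alpha}$ convention. A quick consistency sketch using the GMRT totals (the broad-band power-law fit over 26 MHz to 4.9 GHz is discussed below):

```python
import math

def two_point_alpha(s1, nu1, s2, nu2):
    """Two-point spectral index for the convention S(nu) ~ nu**alpha."""
    return math.log(s1 / s2) / math.log(nu1 / nu2)

# Integrated GMRT flux densities of 4C 35.06 (Jy), keyed by frequency (MHz)
flux = {150: 6.0, 235: 4.7, 610: 1.7}

a_low = two_point_alpha(flux[150], 150, flux[235], 235)
a_high = two_point_alpha(flux[235], 235, flux[610], 610)
print(f"alpha(150-235 MHz) = {a_low:.2f}")   # flatter at low frequency
print(f"alpha(235-610 MHz) = {a_high:.2f}")  # steeper at high frequency
```

The steepening of the two-point index towards higher frequency is consistent with the ageing, steep-spectrum nature of the source discussed in the spectral index analysis below.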
These maps are discussed further in the following sections. \subsection{Where is the AGN radio core?} On the $5\arcsec$ resolution GMRT 610 MHz map of 4C 35.06, there are three radio peaks near the center, of which two are shifted south from the jet direction (Figure~\ref{gmrt610}). A previous VLBA observation \citep{b28} detected a compact radio core in the galaxy G3 on parsec scales. Two of the GMRT 610 MHz radio peaks are centered near the optical galaxies G5 and G2, while the third radio peak, on the jet axis, is close to the faint galaxy G6. Figure~\ref{gmrt610} (left panel) shows the positions of G2, G5, G6 and the brightest galaxy G1 with `+' signs. \cite{b65} suggested that the probable host AGN emitting the bipolar jet could be the galaxy G6 rather than G3, which is clearly offset southward from the principal jet direction. The galaxy G6 is very faint in both optical and infrared light (Tables~\ref{tab1} and~\ref{tabk}). From spectroscopy, we obtained a velocity dispersion of (143$\pm$40) km~s$^{-1}$ for G6, which yields a relatively small black hole mass of $(0.52\pm0.65)\times 10^8 M_{\odot}$ (see Section 4.7). However, the error margin is large owing to the low SNR of the spectrum. Even though it is improbable (but not impossible) for a faint galaxy like G6 to produce such a large scale radio jet, it is possible that this now faint galaxy has been stripped of the majority of its outer halo stars in multiple tidal encounters, while still retaining a dense stellar core and black hole at the centre. Possibly this is also reflected in the even smaller black hole mass, $(1.5\pm0.07)\times 10^7 M_{\odot}$, derived from its faint K-band luminosity of $M_K = -21.16$ (Table~\ref{tabk}). In an alternative scenario, \cite{b97} have suggested that the rapid movement of the galaxy G3 and its episodic AGN radio activity are the reason for the observed peculiar radio morphology of 4C~35.06.
In their interpretation, the large scale jet morphology is due to an earlier phase of activity of G3, which switched off its radio emission and then restarted while it was moving to its current position, resulting in the offset inner double-lobed morphology and the steep-spectrum larger jet structure to the north, similar to dying radio galaxies \citep{b98}. Thus, at present, we are observing an aged FRI-like large scale structure with an embedded restarted radio source. \cite{b97} also argue that this AGN core is less likely to be G6, as proposed by \cite{b65}, because of the lower mass of this galaxy and hence the lesser likelihood of it containing an SMBH. However, they have not considered the possibility of the stripping of stars from the outer halo during tidal encounters in a dense environment, with the SMBH still retained at the center. In our opinion, the canonical black hole mass--IR bulge luminosity correlation applicable to normal ellipticals in relaxed environments may not hold in the case of galaxies subjected to violent mergers. We suggest that the close coincidence of the optical positions of G2 and G5 with the radio peaks, and the location of G6 near the center of symmetry of the large scale jets, lend support to the conclusion of \cite{b65}. However, much higher resolution radio or X-ray data are still necessary to firmly identify the compact AGN core and the progenitor galaxy of the large scale jet structure. \begin{figure} \centering \includegraphics[width=3.5in,height=5.0in,keepaspectratio]{Int_spec_RVII_01.pdf} \caption{The integrated radio spectrum of 4C 35.06. A power law fit is shown by a red line. Red points are data taken from the literature and green points the GMRT observations.
A pair of spectra shown with blue and pink data points represent the diffuse relic regions D1 and D2 detected in the GMRT low frequency images (Figure~\ref{gmrt}).\vspace{-0.5cm}} \label{int_spect} \end{figure} \subsection{Spectral Index Maps and Spectral Ageing Results} The integrated spectrum and spectral index maps are derived from the data available in the literature and our present GMRT observations. The broad-band spectrum shows an overall steep power-law from 26 MHz up to 4.9 GHz (Figure~\ref{int_spect}). The GMRT data points fit well with a power-law spectral index $\alpha = -0.99$, indicating the overall steep spectrum nature of the source compared with typical radio galaxies with jets and lobes. \begin{figure*} \centering \includegraphics[width=6in,height=5in,keepaspectratio]{spix_a407.png} \caption{The spectral index maps obtained from the 235 MHz vs. 610 MHz (left panel) and 235 MHz vs. 150 MHz GMRT images (right panel) using matched resolutions. The spectral index values are shown with a colour bar on the right edge. In both plots, the spectral index errors in the central region ($\sim 0.02$ and $\sim 0.1$) are much lower than those near the jet extremities ($\sim 0.2$ and $\sim 1$). } \label{spix} \end{figure*} To understand the energetics of this radio source better, we created spectral index maps at the low and high frequency ends, using radio maps convolved to the same resolution. Figure~\ref{spix} shows dual-frequency spectral index maps for the 235 vs. 610 MHz and 150 vs. 235 MHz bands. The spectral index maps clearly show that the radio emission in the central core region has flatter spectra in the range $\alpha = -0.5$ to $-0.8$, whereas ultra-steep spectrum emission dominates at the outer extremities, with $\alpha \sim -2$ in both the 235--610 MHz and 150--235 MHz maps.
The spectral indices for the diffuse, outermost relic regions marked D1 and D2 are estimated to be $-1.79$ and $-2.10$ respectively, indicating their ultra steep spectral nature (Figures \ref{gmrt} and \ref{int_spect}). This suggests that the radio emission in regions D1 and D2 originates from an ageing radio plasma subjected to heavy energy losses, possibly resulting from a previous phase of energy injection in the region. The LOFAR observations at 62 MHz \citep{b97} also detect these ultra-steep spectral regions, though less clearly than the higher sensitivity GMRT images. The central region shows a flat spectrum, possibly due to the superposition of emission from a few radio-loud AGNs, while the steep spectrum towards the extremities can be attributed to the lack of fresh injection of accelerated particles and to radiative energy losses. A high frequency spectral index map (Figure~\ref{vla}) was created by combining VLA D and C array maps at 6 cm (4.8 GHz) and 20 cm (1.4 GHz) wavelengths, both imaged at the same $15\arcsec$ angular resolution. This figure shows a flat spectral index central region with $\alpha \approx -0.5$ (dark shades), which steepens progressively away from the center along the twisting radio jets, reaching $\alpha \sim -2$ (light shades) at the extremities of the jets. Although the angular resolution of the VLA maps is lower than that of the GMRT, the spectral index trend is similar to that in the GMRT maps. \begin{figure} \centering \includegraphics[width=3.5in,height=5in,keepaspectratio]{A407_spix-new.pdf} \caption{The spectral index map of 4C 35.06 shown in grayscale, obtained by combining D and C scaled-array VLA data at 6 cm and 20 cm wavelengths at $15\arcsec$ resolution. The contours are the VLA 20 cm (1.4 GHz) radio contours at the same resolution, and contour levels in mJy/beam are shown at the bottom of the image.
The optical galaxies in the center are denoted with `+' symbols.} \label{vla} \end{figure} \begin{figure} \centering \includegraphics[width=3in,height=3in,keepaspectratio]{D1t1.pdf}\\ \includegraphics[width=3in,height=3in,keepaspectratio]{D2t1.pdf} \caption{ The second order polynomial spectral fit (blue line) for the outermost regions D1 (top panel) and D2 (bottom panel). The flux values are taken at frequencies of 610, 235, 150 (GMRT) and 74 MHz (VLSS). The dashed red lines represent the tangents drawn at the higher (610 MHz) and lower frequency (150 MHz) ends of the spectra, and the intersection point denotes the estimated break frequency (green dotted line). \vspace{-0.5cm} } \label{breaks} \end{figure} The radiative age of the non-thermal plasma in regions D1 and D2 was obtained from the spectral breaks in the integrated radio spectra of the regions shown in Figure~\ref{breaks}. The spectra are fitted with second order polynomials and tangents are drawn at the 610 MHz and 150 MHz frequencies. The intersection point of these tangents gives the break frequency $\nu_{b}$, obtained as 308 MHz and 302 MHz for regions D1 and D2, respectively. Beyond the breaks, a steepening or a possible cut-off in the spectra is suggested, consistent with the scenario of radio emission emanating from a rapidly cooling electron population. The electron spectral age $t_{sp}$ (or cooling time scale) is then estimated from the synchrotron radio spectrum using the formula \citep{b98} \begin{equation} t_{sp} = 1.59 \times 10^{9} \left[\dfrac{B^{1/2}}{\left[ B^2+B_{IC}^2\right] \left[\nu_{b}(1+z)\right]^{1/2}}\right] \;\rm{yr} \end{equation} This formula is obtained for a uniform magnetic field, neglecting expansion losses over the radiative age. Here $B$ is the magnetic field in $\mu$G, $z$ is the redshift, $B_{IC} = 3.25 (1+z)^2$ $\mu$G is the inverse Compton equivalent magnetic field and $\nu_{b}$ is the cooling break frequency in GHz.
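Two numerical points follow from this procedure. First, for a second-order polynomial in $\log S$--$\log \nu$, the tangents drawn at any two frequencies always intersect at the midpoint in log frequency, so the graphical break estimate reduces to the geometric mean, $\nu_b \simeq \sqrt{610 \times 150}$ MHz $\approx 302$ MHz, consistent with the fitted values of 308 and 302 MHz. Second, the spectral age formula can be evaluated directly; a minimal sketch, assuming the equipartition field values of $\approx 5$ and $\approx 16\,\mu$G derived for the relic regions in the equipartition analysis below:

```python
import math

def t_sp_myr(b_uG, nu_b_ghz, z):
    """Synchrotron spectral age (Myr) for a field B in microgauss,
    break frequency nu_b in GHz and redshift z."""
    b_ic = 3.25 * (1 + z) ** 2          # inverse-Compton equivalent field, uG
    t_yr = 1.59e9 * math.sqrt(b_uG) / (
        (b_uG ** 2 + b_ic ** 2) * math.sqrt(nu_b_ghz * (1 + z)))
    return t_yr / 1e6

z = 0.047
# tangent-intersection break = geometric mean of the two tangent frequencies
nu_b = math.sqrt(0.610 * 0.150)         # ~0.302 GHz
print(f"nu_b = {nu_b * 1e3:.0f} MHz")
for b in (5.0, 16.0):                   # equipartition fields for k=1, k=100
    print(f"B = {b:4.1f} uG -> t_sp ~ {t_sp_myr(b, nu_b, z):.0f} Myr")
```

The resulting ages, roughly 170 and 40 Myr for the two field values, match those quoted for the relic regions.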
An independent estimate of the magnetic field in the relic regions is needed for a robust estimate of $t_{sp}$. Assuming the minimum energy condition, we calculated the energy density and magnetic field in the regions A1, A2 (the inner loop structures), and D1, D2 (the outer relic regions) (see Figure~\ref{gmrt}). The minimum energy density $u_{min}$ is given by: \begin{equation} u_{min} = \xi(\alpha,\nu_{1},\nu_{2}) (1+k)^{4/7} (\nu_o)^{4\alpha/7} (1+z)^{(12+4\alpha)/7} {(I_o/ d)}^{4/7} \end{equation} where $k$ is the ratio of the energy of the relativistic protons to that of the electrons, $\alpha$ is the spectral index, $\nu_{1}$ and $\nu_{2}$ are the lower and upper frequency limits, $\nu_o$ is the frequency at which the surface brightness $I_o$ is measured, and the function $\xi(\alpha,\nu_{1},\nu_{2})$ is tabulated in \cite{GF04}. Here we assume the filling factor to be 1, $\nu_{1}$ = 10 MHz, $\nu_{2}$ = 10 GHz, $\nu_o$ = 150 MHz and $z$ = 0.047. The equipartition magnetic field can be expressed as: \begin{equation} B_{eq} = (\frac{24\pi}{7} \, u_{min})^{1/2} \end{equation} In this way, the magnetic fields for the relic regions are obtained as $\approx 5\mu $G and $\approx 16\mu $G for $k = 1$ and $k = 100$, respectively. The corresponding spectral ages are $\sim 170$ and $\sim 40$ Myr, respectively. We point out that \cite{b97} calculated the spectral age by estimating the time taken by the galaxy G3 (the putative central AGN) to move from its former location (at the center of the east-west jet direction), approximating the radial velocity difference between the galaxy and the stellar envelope \citep{b26} as the velocity of the source in the sky plane. It was assumed that the source was shut off before the translation and restarted once it reached the new position. This approximation enabled them to adopt the translation time of G3 as the shutdown period ($t_{off}$) of the AGN.
An estimate of the spectral age was made assuming equal time scales for the active ($t_{on}$) and quiescent ($t_{off}$) phases of the AGN and taking the sum ($t_{on} + t_{off} = 70$ Myr) as the age of the radio plasma. Assuming the lowest Lorentz factor values, they obtained a magnetic field of $10 \mu$G and a corresponding break frequency of $\sim 380$ MHz, which is very close to the break frequency ($\sim 300$ MHz) we obtained from the GMRT data points. Even though the restarted AGN activity model and the present model yield almost the same break frequency, many assumptions had to be invoked to substantiate the former model.\\ \subsection{Twists and kinks in the jet: Dynamical signatures of a perturbed AGN?} The GMRT 610 MHz maps (Figures~\ref{gmrt610} and~\ref{gmrt}) reveal that the bipolar radio jet undergoes helical twists, with inversion symmetry, on either side of the core C at the points marked A1 and A2 in Figures~\ref{gmrt} and~\ref{ss433}. Moreover, the north-west arm of the jet is observed to be brighter, and bends to form a prominent loop/arc starting from A2 up to the point B2. A similarly twisted feature between points A1 and B1 is observed in the south-east arm as well, but it is fainter and more diffuse than its north-western counterpart. The region D2 in the western arm shows a peculiar, upward bent jet-like structure in the 150 MHz GMRT map, capped by a `mushroom'-like feature at the top, the nature of which is not clear at the moment (Figure~\ref{gmrt}, upper panel). The symmetric counterpart of this knot/mushroom structure could be the downward bent feature D1 in the south-eastern section of the source. The extremely steep high-frequency radio spectra of the mushroom-like feature at D2 (and also D1), located $\sim 200$ kpc from the center, indicate significant energy losses, suggesting an absence of freshly injected particles.
One possibility is that they are buoyant non-thermal plasma bubbles rising into the hot intra-cluster medium, inflated by the radio jet in the past, as observed around some BCG/cD galaxies residing in the centers of clusters or groups \citep{Bagchi09,MN12}. In addition to these interesting features, there are a few sharp kinks or steps in the western arm of the jet at the points marked C1, C2 and C3 (Figure~\ref{ss433}). On the eastern side of the jet these kinks/steps are not discernible, possibly due to projection effects or because they are absent. In the present work, lacking detailed modelling, it is difficult to decipher what these kinks in the jet flow represent physically, but they are discussed further below (Sections 4.5 and 4.6). \begin{figure} \centering \includegraphics[width=3.5in,keepaspectratio]{composite1.pdf} \caption{The high resolution (5$^{\prime\prime}$) grayscale image of the source 4C~35.06 at 610 MHz (GMRT) showing the twisted, helical jet structure. Different regions of the source are marked, and the end extensions D1 and D2, detected at 235 MHz and 150 MHz, are indicated by dotted lines. For comparison, the colour image shows the cork-screw shaped precessing jets observed in the galactic XRB `microquasar' SS~433; this is the total intensity image at 4.85 GHz \citep{b52}. Note that the linear size of the SS~433 jet system is only 0.26 pc, while that of 4C~35.06 at 610 MHz is 230 kpc. } \label{ss433} \vspace{-0.5cm} \end{figure} \subsection{Precessing radio jet structure: comparison with the galactic microquasar SS~433} The interpretation of the peculiar radio jet morphology of 4C 35.06 gains added impetus when it is compared with the precessing radio jet structure observed in the galactic `microquasar' SS~433 (see Figure~\ref{ss433}). SS~433 is an X-ray binary (XRB) system at the center of the supernova remnant W50, consisting of a stellar mass black hole or neutron star accreting matter from an A-type supergiant donor star \citep{b31,b32,b52}.
The most unusual aspect of this object, modelled through radial velocity measurements of `shifting' $H_{\alpha}$ lines and high resolution radio imaging, is that the accretion disc around the compact object precesses with a regular period of $\sim 164$ days \citep{b31,b32}. Consequently, the axis of the jet-ejection nozzle also precesses with the same period, and the ejected radio plasma traces out a dynamically changing `corkscrew' pattern (see Figure~\ref{ss433} and \cite{b52}). Even though a detailed modelling of the precession geometry of 4C 35.06 is beyond the scope of the present study, it is noticeable that its large-scale jet structure is analogous to that of SS 433, if we ignore the kinks (C1, C2 and C3) for the time being. The loop portions A2--B2 and A1--B1 are suggestive of the corkscrew pattern resulting from a continuous change of the jet axis, possibly due to precessing motion. Further, flat spectrum terminal hot spots are absent at the jet extremities of 4C 35.06, as in SS~433, which is another indication of the continuous shifting of the jet direction. Moreover, the brightness of the western arm of the jet system in 4C~35.06 is nearly double that of the eastern arm. Depending on the inclination of the jet axis to our line of sight, the precession cone angle and the plasma bulk velocity, the projected radio morphology and brightness can appear quite different on the approaching and receding sides. This has been shown by \cite{b96} in numerical simulations of relativistic effects in precessing jets. The differences in the observed jet morphology on the two sides can be attributed partly to this effect.
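The corkscrew locus traced by such a precessing jet can be illustrated with a simple ballistic kinematic model: each plasma blob keeps the velocity direction it had at ejection, while the nozzle direction sweeps around a cone. This is a purely illustrative sketch (it omits light-travel-time and relativistic effects treated in the detailed simulations cited above, and the period, cone angle and speed are arbitrary values, not fitted to 4C 35.06 or SS 433):

```python
import math

def jet_locus(n_blobs, period, cone_deg, speed, t_max):
    """Sky-plane (x, y) positions of blobs ejected ballistically from a
    nozzle precessing on a cone of half-angle cone_deg about the x-axis."""
    psi = math.radians(cone_deg)
    pts = []
    for i in range(n_blobs):
        t_ej = t_max * i / n_blobs           # ejection time of blob i
        age = t_max - t_ej                   # time spent coasting since then
        phase = 2 * math.pi * t_ej / period  # precession phase at ejection
        # velocity direction frozen in at ejection, on the precession cone
        vx = math.cos(psi)
        vy = math.sin(psi) * math.cos(phase)
        # (the third component, sin(psi)*sin(phase), lies along the sight line)
        pts.append((speed * age * vx, speed * age * vy))
    return pts

# e.g. 200 blobs ejected over two precession periods
pts = jet_locus(200, period=1.0, cone_deg=20.0, speed=1.0, t_max=2.0)
# y oscillates about the mean jet axis, giving the corkscrew seen in projection
```

Older blobs (larger `age`) lie farther out along the mean axis while their transverse offsets alternate in sign, reproducing the helical appearance of a one-sided arm in projection.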
The precession of radio jets may be attributable to two mechanisms. The first is the presence of a binary black hole system \citep{b76,b51}, where the torque exerted by the companion black hole precesses the accretion disk of the primary, leading to jet precession. The second is the Lense-Thirring frame-dragging effect \citep{bp75}: if the angular momentum vector of the accretion disk is misaligned with that of a fast-spinning Kerr black hole, the black hole drags the inner accretion disk so as to align it with its spin vector. This leads to the precession of the accretion disk and of the radio jets orthogonal to it. If this is the reason for the helically twisted jets in 4C~35.06, an interesting corollary is that the mass-accreting supermassive black hole must be spinning. Moreover, the observed resemblance of the morphology of 4C~35.06 to the precessing relativistic jet system of the X-ray binary SS~433 supports the fundamental paradigm that, in spite of a vast difference in the black hole masses, length and time-scales involved, almost all relativistic disk-jet coupled phenomena happen in a scale-invariant manner in radio-loud AGNs and galactic microquasars. \subsection{Jet energetics and interaction of jets with the ambient intracluster medium} In Figure~\ref{2mass_rass+150} (right panel), the GMRT 150 MHz radio image of 4C~35.06 is shown superposed on the ROSAT X-ray map in the 0.5 - 2.4 keV band. Here we observe that the energetic jet system of 4C~35.06 has expanded preferentially in the direction of lower gas pressure in the dense intra-cluster medium. This might affect the ambient X-ray medium by uplifting the gas along the mean jet flow direction. This radio jet feedback effect could be analogous to the distorted, extended lobes of the supernova remnant W50 (the `Manatee' nebula), which are shaped by the interaction of the powerful jets of microquasar SS~433 with the ambient ISM \citep{Dubner98}.
We need much deeper X-ray observations of the A407 cluster to investigate the AGN feedback signatures better. Observationally, the kinetic power of a jet ($\bar{Q}$) is a key descriptor of the state of an accreting SMBH system: its mass, spin and the magnetic field of the accretion disk. Correlation of the low-frequency ($\nu \sim 150$~MHz) radiative power of radio sources against their jet power shows that the radio luminosity of the jet constitutes only a small fraction ($<1\%$) of the total kinetic power \citep{b66,Daly2012}. Using the 150 MHz radio flux density of $6.0 \pm 0.18$ Jy from the GMRT map, the time-averaged kinetic power of the jets in 4C~35.06 is computed as $\bar{Q} \approx 3 \times 10^{43}$~erg~s$^{-1}$ \citep{b66}. We have not corrected for the (unknown) loss of energy in the outer radio lobes and thus $\bar{Q}$ is likely to be a lower limit. This power is below the transition value $5.0 \times 10^{43}$ erg s$^{-1}$ between the FR I and II classes. However, if the jet continues to operate for $10^{7} - 10^{8}$ yr, the injected mechanical energy is $\sim10^{58} - 10^{59}$ erg, which is large enough to strongly affect or even quench any cooling flow and to drive large-scale outflows that redistribute and heat the gas on cluster-wide scales. If we further assume that this jet power is derived from an accretion flow onto a black hole at the rate $\dot M$ with $\bar{Q} \sim 0.1 {\dot M} c^{2}$, we obtain ${\dot M} \sim 5.3 \times 10^{-3}$ M$_{\odot}$ yr$^{-1}$. This number is only representative, but it suggests accretion at a sub-Eddington rate $\lambda = \bar{Q}/L_{\rm Edd} = 0.0024 \times (10^{8}/M_{BH})$, where $L_{\rm Edd}$ is the Eddington luminosity and $M_{BH}$ is the black hole mass in solar units. This low accretion rate signifies a radiatively inefficient accretion flow (RIAF) in a low-luminosity active galactic nucleus (so-called LLAGN or LINER). The optical spectra of the galaxies in Zwicky's Nonet (see Section 4.7) also confirm this nature.
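The arithmetic chain above (jet power, injected mechanical energy, accretion rate, Eddington ratio) can be cross-checked with a short script. This is only a sketch of the order-of-magnitude estimates quoted in the text; the CGS constants and variable names are our own, and the 10\% accretion efficiency is the assumption stated above:

```python
# Cross-check of the jet energetics quoted in the text (CGS units).
C_LIGHT = 2.998e10        # speed of light, cm/s
YR = 3.156e7              # one year, s
M_SUN = 1.989e33          # solar mass, g
L_EDD_PER_MSUN = 1.26e38  # Eddington luminosity per solar mass, erg/s

Q_jet = 3e43  # erg/s, time-averaged jet kinetic power derived in the text

# Mechanical energy injected over a 1e7 -- 1e8 yr jet lifetime
E_low = Q_jet * 1e7 * YR    # ~1e58 erg
E_high = Q_jet * 1e8 * YR   # ~1e59 erg

# Accretion rate assuming Q = 0.1 * Mdot * c^2
mdot = Q_jet / (0.1 * C_LIGHT**2)    # g/s
mdot_msun_yr = mdot * YR / M_SUN     # ~5.3e-3 Msun/yr

# Eddington ratio for a 1e8 Msun black hole
lam = Q_jet / (L_EDD_PER_MSUN * 1e8)  # ~2.4e-3
```

The script reproduces the quoted values $\dot M \sim 5.3 \times 10^{-3}$ M$_{\odot}$ yr$^{-1}$ and $\lambda \approx 0.0024$ for $M_{BH} = 10^8$ M$_{\odot}$.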
However, a complete picture of the launching of the radio jet in 4C~35.06 and its energetic feedback effects requires much deeper and higher resolution X-ray and radio data. Systems hosting helically modulated symmetric jets are the most promising sites for finding close black hole systems (triple or binary) \citep{b51,b76}. In Zwicky's Nonet, it is observed that seven galaxy pairs are separated by distances of $\sim 10$ kpc in projection. Their redshift values are also close, with a mean $\bar z = 0.0469$ and standard deviation $\sigma_{z} = 0.00176$, or $v \sim 520$ km s$^{-1}$ (see Table~\ref{tab1}). The number of binary or triple black hole systems discovered with projected separations less than a few kpc is very small \citep{b51}. In the present dense system of nine galaxies, all packed within a radius of only 25 kpc, the smallest projected separation between galaxy pair combinations is about 5 kpc (between G7 and G8). The extreme closeness of the galactic members, coupled with the helically twisted, large-scale jet structure, highlights the prime importance of this galaxy group in the search for multiple supermassive black hole systems and their gravitational and electromagnetic merger signatures. Moreover, merging SMBHs would also lead to an enhanced rate of tidal disruption of stars and possible gravitational wave recoil (slingshot) ejection of black holes from galaxies at speeds in excess of $1000$ km~s$^{-1}$. Even though the symmetric helical pattern observed in the jet structure might be explained with a precession model, there are a few anomalies, like the presence of the distinct kinks or steps denoted by C1, C2 and C3 in the north-western arm, that pose a challenge. The precession model alone may be insufficient to explain these features. We note that the C-shaped twisted paired jet system in the radio source 3C~75 is associated with a binary black hole pair separated by only 7~kpc \citep{b2}.
Both the jets in 3C~75 show prominent wiggled and kinked structures, which have been modelled as due to the combined linear and orbital motion of the bound binary black hole pair \citep{Yokosawa85}. In 4C~35.06, the observed kinks could arise from a similar system, where a black hole with radio jets is orbiting another one at high speed and with large orbital eccentricity \citep{Yokosawa85}. However, in this model, we cannot easily explain why these kinks or steps are absent in the south-eastern arm of the jet. \subsection{Optical spectroscopic results: AGN signature and black hole mass estimation} Previously, \cite{b26} measured the redshifts and stellar velocity dispersions ($\sigma$) for a few of the galaxies in Zwicky's Nonet, covering a wavelength range from $3700$ to $5250$~\AA. In our present study, we have obtained good S/N spectra over the wavelength range of $3800$ to $8500$~\AA\ for eight out of the nine galaxies comprising Zwicky's Nonet (the spectra are included as supplementary material). However, our attempt to obtain a fair spectrum of galaxy G8 failed because it is very faint. Our main aim was to search for signs of AGN or star-forming activity in the optical spectra of these galaxies and to estimate their black hole masses from the stellar velocity dispersions. The spectra of these galaxies resemble those of passive, early-type red ellipticals, devoid of any major emission lines. This is not unusual, as optical emission lines are found to be absent in many AGNs showing radio emission and large-scale radio jets. It has been observed that many FR I radio sources in galaxy clusters are hosted by galaxies showing very weak or no optical emission lines \citep{b37,b67}. \begin{center} \begin{table} \centering \caption[caption]{K band absolute magnitudes and masses of the \\\hspace{\textwidth} SMBHs associated with the nine galaxies of `Zwicky's Nonet'. } \begin{tabular}{@{}|c|c|c|@{}} \hline Source & K Band absolute & Mass of the \\ & magnitude & SMBH
\\ & & ($ 10^8 M_{\odot} $) \\ \hline \hline G1 &-23.46$\pm$0.039 &1.13 $\pm$0.30\\ G2 &-21.99$\pm$0.085 &0.31$\pm$0.12\\ G3 &-23.49$\pm$0.034 &1.16$\pm$0.30 \\ G4 &-22.60$\pm$0.028 & 0.53$\pm$0.17\\ G5 &-22.73$\pm$0.031 &0.60$\pm$0.18 \\ G6 & -21.16$\pm$0.089 &0.15$\pm$0.07 \\ G7 &-23.56$\pm$0.031 &1.23$\pm$0.30 \\ G8 & -20.06$\pm$0.113 &0.06$\pm$0.03 \\ G9 &-22.50$\pm$0.039 &0.49$\pm$0.16 \\ \hline \end{tabular} \label{tabk} \end{table} \end{center} Internal properties of a galaxy, such as the mass and accretion rate of an SMBH, are better estimated from the nuclear emission lines \citep{b39}. The observed spectra show that all the suspected radio-loud galaxies (G2, G3, G5 and G6) belong to the class of low excitation radio galaxies (LERGs) \citep{b41,b43}. LERGs are mostly found to be hosted by BCGs having extended cD-like light profiles \citep{b67}, similar to what we find in Zwicky's Nonet. The well-known tight correlation which connects the mass of the central black hole $M_{BH}$ to the galaxy's bulge stellar velocity dispersion $\sigma$ \citep{b3,b4} is given by \begin{equation} \log_{10}\left(\dfrac{M_{BH}}{M_{\odot}}\right) = \alpha + \beta \log_{10}\left(\frac{\sigma}{200~\mathrm{km\,s^{-1}}} \right) \end{equation} where $\sigma$ is expressed in km s$^{-1}$. Here we have used $\alpha = 8.38$ and $\beta= 4.53$, as derived in \cite{b95}. The estimated SMBH masses are tabulated in Table~\ref{tab0}. The black hole masses were also calculated using the slightly different $\alpha$ and $\beta$ values taken from \cite{b5} and \cite{b6}; these masses are consistent within one-sigma limits with the numbers given in Table~\ref{tab0}. These results show that the galaxies G1, G3, G5, G7, and G9 all host supermassive black holes of mass $M_{BH}$ of a few $\times 10^{8}\, M_{\odot}$. For the other three galaxies, G2, G4 and G6, the estimate of $M_{BH}$ has large errors.
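As a sketch, the $M_{BH}$-$\sigma$ relation above with the quoted coefficients reproduces the central mass values of Table~\ref{tab0}; the function name is our own and the uncertainties are ignored:

```python
import math

def mbh_from_sigma(sigma_kms, alpha=8.38, beta=4.53):
    """M-sigma relation with the coefficients adopted in the text (b95).
    sigma_kms: stellar velocity dispersion in km/s.
    Returns the black hole mass in solar masses."""
    return 10 ** (alpha + beta * math.log10(sigma_kms / 200.0))

# Reproduce two entries of Table 2 (central values only)
m_g3 = mbh_from_sigma(273)  # ~9.8e8 Msun for G3
m_g1 = mbh_from_sigma(222)  # ~3.9e8 Msun for G1
```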
Interestingly, the most massive black hole, of mass $M_{BH} \approx 10^{9} \, M_{\odot}$, resides in the galaxy G3, which showed a radio-loud AGN core in previous VLBA observations \citep{b28}. The galaxy black hole masses are also calculated from their K-band magnitudes, using six times deeper data on cluster A407 available from the 2MASS survey. The relation connecting the K-band absolute magnitude ($M_K$) of the bulge component to the central black hole mass, given by \cite{graham_2007}, is \begin{equation} \log_{10}\left(\dfrac{M_{BH}}{M_{\odot}}\right) = -0.38(\pm 0.06)\left(M_K+24\right) +8.26(\pm 0.11) \end{equation} Table~\ref{tabk} lists the black hole masses estimated with this method for the nine galaxies. The following caveats are worth mentioning here: it is unclear whether the canonical $M_{BH}$-$\sigma$ relation will suffice for galaxies in such a hostile environment, undergoing violent mergers and stripping of stars in multiple tidal encounters. This is clearly evidenced by the formation of a large-scale stellar halo of stripped matter in Zwicky's Nonet. The same concern applies if one were to obtain $M_{BH}$ from the K-band magnitude of the bulge using the $M_{BH}$-$M_{K}$ correlation. Moreover, the effects of the gravitational potential of the background stellar halo (which is highly dark-matter dominated; \citealt{b26}) and of close merging galaxies on the bulge stellar velocity dispersion of a galaxy are also possible factors that need to be accounted for in black hole mass calculations; in this article, we have not attempted to do so. However, as a check on this issue, the last column of Table~\ref{tab0} shows the ratio of the black hole mass from the $M_{BH}$-$\sigma$ correlation ($M_{BH,\sigma}$) to that obtained from the $M_{BH}$-$M_{K}$ method ($M_{BH,K}$).
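The K-band relation above can be checked the same way; again the helper name is our own and only the central values of the coefficients are used:

```python
def mbh_from_mk(M_K):
    """Graham (2007) K-band bulge magnitude -- black hole mass relation,
    central coefficient values only. Returns mass in solar masses."""
    return 10 ** (-0.38 * (M_K + 24.0) + 8.26)

# Reproduce two entries of Table 1 from the K-band magnitudes
m_g3_k = mbh_from_mk(-23.49)  # ~1.16e8 Msun for G3
m_g7_k = mbh_from_mk(-23.56)  # ~1.24e8 Msun for G7
```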
The ratio $M_{BH,\sigma}/M_{BH,K}$ is $> 2 $ for galaxies with well determined black hole masses, which suggests that the $M_{BH}$-$M_{K}$ method possibly gives smaller black hole masses because the truncation of the outer envelopes of the galaxies reduces their K-band luminosity. Alternatively, the black hole masses from the $M_{BH}$-$\sigma$ relation are overestimated. The matter stripped away from the presently observed nuclei must provide a large fraction of the total luminosity of the observed stellar halo. The main parameters of the stellar halo, which is detectable up to the r-band surface brightness limit of $\sim 24$ mag arcsec$^{-2}$ (and possibly beyond), quoted by \cite{b26}, are as follows: central mass density $\rho(0) = 0.63\pm 0.25\, M_{\odot}$ pc$^{-3}$, mass-to-light ratio in the r band $M/L = 90\pm35$, and halo radial velocity dispersion $\sigma_{halo} = 610 \pm 200$ km s$^{-1}$. From this value of $\sigma_{halo}$, and taking the halo radius $r \approx 30\arcsec$ ($\sim 26.5$ kpc), we obtain the total dynamical mass of the halo as $2.2 \times 10^{12}$ M$_{\odot}$, which interestingly is of the same order as that of a supergiant cD galaxy.
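The quoted halo mass follows from the simple dynamical estimate $M \approx \sigma_{halo}^{2}\, r / G$. We note as an assumption that this is the estimator used; with the central value of $\sigma_{halo}$ it reproduces the quoted number:

```python
# Dynamical mass of the stellar halo from sigma^2 * r / G (CGS units).
G_CGS = 6.674e-8   # gravitational constant, cm^3 g^-1 s^-2
KPC = 3.086e21     # kiloparsec, cm
M_SUN = 1.989e33   # solar mass, g

sigma_halo = 610e5   # cm/s (610 km/s, central value)
r_halo = 26.5 * KPC  # ~30 arcsec at the cluster distance

M_dyn = sigma_halo**2 * r_halo / G_CGS / M_SUN  # ~2.3e12 Msun
```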
\begin{center} \begin{table} \centering \caption[caption]{The redshifts and masses of the SMBHs associated \\\hspace{\textwidth} with the galaxy-like condensations in `Zwicky's Nonet'.\\\hspace{\textwidth} The last column shows the ratio of the SMBH masses \\\hspace{\textwidth} obtained from stellar velocity dispersions and K band magnitudes.} \begin{tabular}{@{}|c|c|c|c|c|@{}} \hline Galaxy &{\hskip -0.5cm} Red & Velocity & Mass of the &SMBH\\ &{\hskip -0.50cm} shift& dispersion & SMBH & mass ratio\\ & & (km\,s$^{-1}$) & ($ 10^8 M_{\odot} $)&$M_{BH,\sigma} /M_{BH,K}$ \\ \hline \hline G1 &{\hskip -0.50cm}0.0473 &222 $\pm$16 &3.88$\pm$ 1.23&2.76\\ G2 &{\hskip -0.50cm}0.0476 &143 $\pm$27&0.53 $\pm$ 0.44&1.71\\ G3 &{\hskip -0.50cm}0.0470 &273$\pm$18& $9.83\pm 2.96 $&8.47 \\ G4 &{\hskip -0.50cm}0.0503 & 135$\pm$33 & 0.40 $\pm$ 0.45&0.76\\ G5 &{\hskip -0.50cm}0.0476 &230$\pm$8 &4.52$\pm$ 0.74&7.5\\ G6 &{\hskip -0.50cm}0.0454 &143$\pm$40 &0.52$\pm$ 0.65&3.47\\ G7 &{\hskip -0.5cm}0.0468 &211$\pm$16 &3.08$\pm$1.00&2.50\\ G9 &{\hskip -0.50cm}0.0445 &176$\pm$20 &1.34$\pm$0.68 &2.74\\ \hline \end{tabular} \label{tab0} \end{table} \end{center} \section{Conclusions} We have presented the results of our radio, optical and infra-red observations of the radio source 4C~35.06, located in the central region of the galaxy cluster Abell 407. The cluster center hosts a compact ensemble of nine passive, red elliptical galaxies embedded within a faint, diffuse stellar halo; we propose to name this galactic system `Zwicky's Nonet'. GMRT observations at 150, 235 and 610 MHz clearly reveal the complete radio structure of 4C~35.06, with a complex central core region and helically twisted and kinked bipolar radio jets extending up to $\sim 400$~kpc. The radio jets terminate in diffuse, ultra-steep spectrum `relic/fossil' plasma lobes D1 and D2. In D2, a peculiar, very steep spectrum ($\alpha < -2$) mushroom-like feature is discovered in the GMRT 150 MHz map.
In regions D1 and D2 of 4C~35.06, the average minimum energy magnetic field is $B \sim 5$~$\mu$G for $k =1$ and $B \sim 16$~$\mu$G for $k =100$. The corresponding spectral ages of the electrons are obtained as $ 170 \times 10^6$ and $ 40 \times 10^6$ yr, respectively. The time-averaged kinetic power of the jets is estimated to be $\approx 3 \times 10^{43}$~erg~s$^{-1}$, indicating that the source is an FR I type radio galaxy. The unique helical jet system and the very compact configuration of nine galactic nuclei point to the possibility of precessional and orbital motion of the AGN. This also suggests possible gravitational perturbation effects of multiple black holes residing in the extremely dense central region of the cluster. In such an environment, orbital decay assisted by dynamical friction causes the central binary black holes of galaxies to merge, while the gravitational torque in the binary phase may cause the accretion disk of the AGN to precess, resulting in a helical jet pattern. The morphological similarity of this jet system with that of the galactic microquasar SS~433 also supports a precessional scenario. The absence of terminal hot spots and the presence of ultra-steep spectrum regions on both ends of the jet strongly suggest the continuous shifting of the jet direction, further supporting the precessional model. Our study points to the possibility of the fainter member (G6) of Zwicky's Nonet hosting the large-scale radio jets. The faintness of this galaxy is attributed to the stripping of its major stellar envelope due to tidal interactions in galactic mergers, retaining the SMBH at the center. The high ratio ($> 2 $) of the black hole masses from stellar velocity dispersion and K-band luminosity, i.e. $M_{BH,\sigma}/M_{BH,K}$, for galaxies with well determined black hole masses corroborates the diminution of the stellar envelope. The observation of a diffuse stellar halo of stripped matter in the system supports this scenario.
The optical spectra of eight galaxies in Zwicky's Nonet fail to show any prominent emission lines, indicating a radiatively inefficient accretion flow onto the central black holes at sub-Eddington rates. No strong star-formation/starburst activity is detected in any of the galaxy spectra. Further high-sensitivity, higher-resolution radio observations are needed to provide a complete spectral analysis and to obtain the detailed resolved central morphology of this complex source. A deep X-ray observation of the hot intra-cluster gas around the cD galaxy precursor, and detection of the AGNs and their X-ray spectra, would be very beneficial in deciphering the nature of this puzzling radio galaxy. \section*{Acknowledgments} We thank the staff of GMRT, IUCAA/IGO and Palomar Observatory for their help during the observations. GMRT is run by the National Centre for Radio Astrophysics of the Tata Institute of Fundamental Research. JB, JJ and PD acknowledge generous support from the Indo-French Center for the Promotion of Advanced Research (Centre Franco-Indien pour la Promotion de la Recherche Avanc\'{e}e) under programme no. 5204-2. KGB and JJ acknowledge IUCAA's support under the Visiting Associate program. KGB gratefully acknowledges the support received through the Faculty Development Programme of the UGC, India. AAM and SGD acknowledge partial support from the NSF grants AST-1413600 and AST-1518308. MBP gratefully acknowledges support by the DST INSPIRE Faculty Scheme, New Delhi. We have used images and results from SDSS; funding for SDSS has been provided by the Alfred P. Sloan Foundation, the participating institutions, the National Science Foundation, and the U.S. Department of Energy's Office of Science. This research has made use of the NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with NASA. We have also used VLA data.
The VLA is a facility of the National Radio Astronomy Observatory (NRAO). \def\aj{AJ}% \def\actaa{Acta Astron.}% \def\araa{ARA\&A}% \def\apj{ApJ}% \def\apjl{ApJ}% \def\apjs{ApJS}% \def\ao{Appl.~Opt.}% \def\apss{Ap\&SS}% \def\aap{A\&A}% \def\aapr{A\&A~Rev.}% \def\aaps{A\&AS}% \def\azh{AZh}% \def\baas{BAAS}% \def\bac{Bull. astr. Inst. Czechosl.}% \def\caa{Chinese Astron. Astrophys.}% \def\cjaa{Chinese J. Astron. Astrophys.}% \def\icarus{Icarus}% \def\jcap{J. Cosmology Astropart. Phys.}% \def\jrasc{JRASC}% \def\mnras{MNRAS}% \def\memras{MmRAS}% \def\na{New A}% \def\nar{New A Rev.}% \def\pasa{PASA}% \def\pra{Phys.~Rev.~A}% \def\prb{Phys.~Rev.~B}% \def\prc{Phys.~Rev.~C}% \def\prd{Phys.~Rev.~D}% \def\pre{Phys.~Rev.~E}% \def\prl{Phys.~Rev.~Lett.}% \def\pasp{PASP}% \def\pasj{PASJ}% \def\qjras{QJRAS}% \def\rmxaa{Rev. Mexicana Astron. Astrofis.}% \def\skytel{S\&T}% \def\solphys{Sol.~Phys.}% \def\sovast{Soviet~Ast.}% \def\ssr{Space~Sci.~Rev.}% \def\zap{ZAp}% \def\nat{Nature}% \def\iaucirc{IAU~Circ.}% \def\aplett{Astrophys.~Lett.}% \def\apspr{Astrophys.~Space~Phys.~Res.}% \def\bain{Bull.~Astron.~Inst.~Netherlands}% \def\fcp{Fund.~Cosmic~Phys.}% \def\gca{Geochim.~Cosmochim.~Acta}% \def\grl{Geophys.~Res.~Lett.}% \def\jcp{J.~Chem.~Phys.}% \def\jgr{J.~Geophys.~Res.}% \def\jqsrt{J.~Quant.~Spec.~Radiat.~Transf.}% \def\memsai{Mem.~Soc.~Astron.~Italiana}% \def\nphysa{Nucl.~Phys.~A}% \def\physrep{Phys.~Rep.}% \def\physscr{Phys.~Scr}% \def\planss{Planet.~Space~Sci.}% \def\procspie{Proc.~SPIE}% \let\astap=\aap \let\apjlett=\apjl \let\apjsupp=\apjs \let\applopt=\ao \bibliographystyle{mn2e}
\section{Introduction} Quantum statistical inference is of fundamental importance not just for the foundations of quantum information theory but also in view of practical applications. For example, at a certain stage of any quantum information processing protocol, one has to know the state precisely in order to proceed with the protocol. Typically, the quantum states to be estimated are not completely unknown; rather, we have partial information about them. This is in contrast to quantum tomography, where one has to identify a quantum state by informationally complete measurements. The quantum parameter estimation problem, which is a subclass of quantum statistical inference problems, assumes that a given quantum state is parameterized by a finite number of continuous parameters. One wishes to infer the value of these parameters by performing a measurement and making an estimate from the measurement outcomes. The parameter estimation problem in classical statistics is a well-established subject, and a large body of literature is available, ranging from rigorous mathematical formulations to very practical applications. Quantum parameter estimation was initiated by Helstrom in the 60s and developed by Holevo, Yuen-Lax, and others \cite{helstrom, holevo, yl73}. New insight into this problem was triggered by Nagaoka in the late 80s, who developed a new language based on information geometry in classical statistics \cite{ANbook} and opened the asymptotic analysis of estimation. Some of his contributions are reprinted in Ref.~\cite{hayashi}. The field of quantum estimation theory has recently gained great attention also from the physics community. One important motivation is the study of quantum metrology, that is, high precision measurements which go beyond the existing classical precision limit \cite{glm11}. The aim of this paper is to discuss some unexplored aspects of the quantum parameter estimation problem.
We analyze an estimation problem in the presence of an unknown parameter, called a {\it nuisance parameter} in statistics, and discuss the effects of this nuisance parameter. This problem is well known in classical statistics \cite{lc,bnc}, yet few results are known in the quantum case. For this purpose, we take the simplest quantum system, a qubit system, and apply known precision bounds to our estimation problem. We see that the effects of nuisance parameters are important in general. For a very special case, an asymptotically achievable bound can be obtained, as shown in this paper. The quantum parametric model studied in this paper is \[ \rho_{\theta}=\frac{1}{2}\left(\begin{array}{cc}1+\theta_2 & \theta_1 \Exp{-\mathrm{i}\theta_3} \\[.5ex] \theta_1 \Exp{\mathrm{i}\theta_3} & 1-\theta_2\end{array}\right), \] where the parameters $\theta_1,\theta_2$ are of interest and the phase parameter in the off-diagonal component is not. This model is physically motivated from wave-particle duality, where one discusses the trade-off between the fringe visibility $\Leftrightarrow\ |\tr{\rho_{\theta}\sigma_+}|=|\theta_1|$ and the which-way information $\Leftrightarrow\ |\tr{\rho_{\theta}\sigma_3}|=|\theta_2|$, whereas the value of the phase $\theta_3$ itself is irrelevant. We should not forget to mention several works related to the present one. Similar parametric models for mixed qubit states were discussed by several authors. Among them, Gill and Massar derived a very general trade-off relation known as the Gill-Massar inequality and derived an achievable bound for the qubit case \cite{GM00}. Bagan {\it et al} studied two- and three-parameter models with different parametrizations and different figures of merit \cite{bbgmm06}. Hayashi and Matsumoto performed a general analysis of the asymptotic performance in a qubit system and analyzed two- and three-parameter models. Our model differs from the previous results in three aspects.
Firstly, the parametrization is different: we use neither Cartesian nor spherical coordinates in the Bloch vector representation, as were analyzed in the literature. Secondly, we shall not assume that the value of the phase is known. When this value is completely known, the model reduces to a two-parameter model which lies on an equatorial plane of the Bloch sphere. In contrast, we are interested in analyzing the errors in estimating the two parameters without knowing the value of the phase. Lastly, in many studies on quantum metrology, one is interested in estimating the value of the phase, whereas the amplitude damping or dephasing caused by external noise is not of interest \cite{emfd11,ddkg12}. In recent publications \cite{cdbw14,vdgjkkdbw14}, the authors point out that one cannot estimate the value of the phase in the presence of noise in a particular noise model. They derive a trade-off relation between the errors in these parameters together with an experimental demonstration. Instead, this paper aims to shed light on the parameter estimation problem in the presence of an unknown parameter from a quantum parameter estimation perspective. This paper is organized as follows. Section \ref{sec2} provides a brief summary of basic theorems in quantum estimation theory. Section \ref{sec3} discusses our parametric model and quantum estimation in the presence of an unknown phase parameter. Section \ref{sec4} shows the ultimate bound based on the Holevo bound. We also discuss the general structure of quantum estimation with nuisance parameters. We close the paper with a conclusion and outlook in Section \ref{sec5}. \section{Preliminaries}\label{sec2} In this section we summarize definitions for the basic terminology and the quantities used to analyze quantum parametric models. Previously known results are listed without proofs. For more details, readers are referred to the books \cite{holevo, ANbook, hayashi, petz} and the concise summary by Hayashi and Matsumoto \cite{HM08}.
\subsection{Estimation problems and Fisher information in classical and quantum cases} Let ${\cal H}$ be a finite dimensional complex Hilbert space and let ${\cal L}({\cal H})$ denote the set of all linear operators from ${\cal H}$ to itself. A quantum state $\rho$ is an element of ${\cal L}({\cal H})$ which is non-negative and has unit trace. The totality of quantum states on ${\cal H}$ is written as ${\cal S}({\cal H}):= \{\rho\in{\cal L}({\cal H}) \, |\, \rho\ge 0, \tr\rho=1\}$. A measurement on a given quantum state $\rho$ is described by a positive operator-valued measure (POVM), or probability operator measurement, which is a set of non-negative operators summing to the identity operator on ${\cal H}$. In this paper, we shall only consider discrete POVMs, whose elements are countable. We denote the label set by ${\cal X}$. The POVM is expressed as $\Pi=\{ \Pi_x\in{\cal L}({\cal H})\,|\, \Pi_x\ge0,\sum_{x\in{\cal X}}\Pi_x=I\}_{x\in{\cal X}}$. One of the axioms of quantum mechanics (Born's rule) provides a simple prescription: the probability distribution for detecting the measurement outcome $x$ when a POVM $\Pi$ is performed on a given quantum state $\rho$ is $p_\rho(x)=\tr{\rho\Pi_x}$. This is a conditional probability distribution, and the condition $\rho$ is omitted when it is clear from the context. A quantum parametric model is given as a family of quantum states on ${\cal H}$, which is parametrized by a $k$-dimensional parameter $\theta=(\theta_1,\theta_2,\dots,\theta_k)\in{\mathbb R}^k$ and is denoted by \begin{equation} {\cal M}_Q=\{\rho_\theta\,|\,\theta\in\Theta \}. \end{equation} Here the parameter set $\Theta$ is assumed to be an open subset of ${\mathbb R}^k$, and we also assume that $\rho_\theta$ varies smoothly enough that no singular behavior arises for the information quantities defined later.
Given a quantum model ${\cal M}_Q$, the aim of the quantum statistician is two-fold: first she performs a good measurement $\Pi$, and then she makes a good estimate $\hat\theta=(\hat\theta_1,\hat\theta_2,\dots,\hat\theta_k)$ based on her measurement outcomes. The quality of her estimation is measured according to a given figure of merit such as the mean square error, minimax error, Bayesian criterion, etc. In the following discussion, we choose the figure of merit as the mean square error (MSE) defined by \begin{equation} v_{\theta,ij}[\Pi,\hat{\theta}] := \sum_{x\in{\cal X}} (\hat{\theta}_i(x)-\theta_i) (\hat{\theta}_j(x)-\theta_j)\tr{\rho_{\theta}\Pi_x}, \end{equation} where the indices are $i,j=1,2,\dots,k$, and the $k\times k$ real matrix $V_{\theta}[\Pi,\hat{\theta}]:=[v_{\theta,ij}]_{i,j\in\{1,2,\dots,k\}}$ is called the MSE matrix. It is straightforward to see that the MSE matrix is a symmetric, non-negative matrix. The second process above is the same as in (classical) statistics and is described by a function $\hat{\theta}$: ${\cal X}\rightarrow {\mathbb R}^k$ (in general, the range satisfies $\hat{\theta}(\Theta)\supset \Theta$; we take $\hat{\theta}(\Theta)= \Theta$ without loss of generality). The pair $(\Pi,\hat{\theta})$ is called a quantum estimator, or simply an estimator, and is denoted as $\hat{\Pi}=(\Pi,\hat{\theta})$. Throughout our discussion, we restrict ourselves to finding the best estimator satisfying the locally unbiased condition; that is, for a given true value $\theta\in\Theta$, the estimator $\hat{\Pi}$ needs to satisfy the following condition for all $i,j=1,2,\dots,k$: \begin{equation}\label{lucond} \sum_{x\in{\cal X}} \hat{\theta}_i(x) \tr{\rho_{\theta}\Pi_x}=\theta_i,\ \sum_{x\in{\cal X}} \hat{\theta}_i(x)\tr{\partial_j\rho_{\theta}\Pi_x}=\delta_{ij}, \end{equation} where $\partial_i=\partial/\partial\theta_i$ is the partial derivative with respect to $\theta_i$ and $\delta_{ij}$ is the Kronecker delta.
We remind the reader that this locally unbiased condition is much weaker than the unbiased condition, where one demands that $\sum_{x\in{\cal X}} \hat{\theta}_i(x) \tr{\rho_{\theta}\Pi_x}=\theta_i$ holds for all values of $\theta\in\Theta$. The problem of finding an optimal (locally unbiased) quantum estimator $\hat{\Pi}$ is to minimize the MSE matrix $V_{\theta}[\hat{\Pi}]$ for a given model. In contrast to the (classical) parameter estimation problem, however, the quantum problem does not admit a general solution in the form of a matrix inequality except for special cases. One tractable formulation of the problem is to minimize a weighted trace of the MSE matrix, which is a scalar quantity: $\mathrm{Tr}\{W V_\theta[\hat{\Pi}]\}$. Here the $k\times k$ positive matrix $W$ is called a weight matrix and can be chosen arbitrarily. To distinguish traces of density matrices from traces of MSE matrices, we use the lower-case symbol for quantum states and the upper-case symbol for the latter. Thus, our task in the quantum parameter estimation problem is to find the precision bound, which is defined as \begin{equation} \label{opt} C_\theta[W]=\min_{\hat{\Pi}:\mathrm{l.u. at}\, \theta} \mathrm{Tr}\{W V_\theta[\hat{\Pi}]\}, \end{equation} where $\mathrm{l.u. at}\, \theta$ indicates that the optimization is carried out under the locally unbiased condition \eqref{lucond}, and the optimal quantum estimator is denoted as $\hat{\Pi}_{opt}[W]$. As in (classical) estimation problems, we may be given $n$ copies of the quantum state, mathematically represented by the tensor product $\rho_\theta^{\otimes n}=\bigotimes_{i=1}^n\rho_\theta$. This is analogous to the independent and identically distributed (i.i.d.) scenario in probability theory, and the state $\rho_\theta^{\otimes n}$ is referred to as an i.i.d. quantum state. Upon estimating the parameter $\theta$ for given i.i.d. states, a significant difference arises in the quantum case.
A quantum statistician can choose different strategies: one is to perform a POVM written as a tensor product $\Pi^{(n)}=\{ \Pi^{(1)}_x\otimes\Pi^{(2)}_x\otimes\dots \Pi^{(n)}_x\}_{x\in{\cal X}} $, and the other is a general POVM on the joint Hilbert space ${\cal H}^{\otimes n} $ which cannot be expressed as a tensor product. The former is called a separable measurement, and the latter a collective measurement in the literature. It is known that collective measurements are more powerful than separable ones in general. In the following, we mainly consider separable measurements; the collective measurement scheme will be discussed in Section \ref{sec4}. One way to see why a quantum estimation problem is non-trivial is as follows. For (classical) estimation problems, where one estimates the probability distribution $p_{\theta}$, the fundamental precision bound for the MSE is given by the Cram\'er-Rao (CR) inequality, which states that for any locally unbiased estimator the MSE matrix is bounded as \begin{equation} V_{\theta}[\hat{\theta}]\ge (J_\theta[p_\theta])^{-1}. \end{equation} In this inequality, $J_\theta[p_\theta]$ denotes the Fisher information matrix for a given probability distribution $p_\theta$, whose $(i,j)$ component is defined by \begin{equation} \label{clfisher} J_{\theta,ij}:=\sum_{x\in{\cal X}} p_{\theta}(x) \partial_i \ell_{\theta}(x) \partial_j \ell_{ \theta}(x), \end{equation} with $\partial_i \ell_{\theta}(x)=\partial_i\log p_{\theta}(x)$ the $i$th logarithmic derivative. This bound can be achieved asymptotically, for example, by the maximum likelihood estimator. For the quantum case, let us fix a measurement $\Pi$ on $\rho_\theta$; then the best estimator $\hat{\theta}$ is governed by the above CR bound as $V_{\theta}[\hat{\Pi}]\ge (J_\theta[\Pi])^{-1}$. Here the Fisher information matrix is calculated from the probability distribution $p_\theta(x)=\tr{\rho_{\theta}\Pi_x}$ and is solely determined by the choice of a POVM.
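As an illustration (not part of the original analysis), the Fisher information matrix \eqref{clfisher} can be evaluated numerically for the qubit model introduced above, here under a projective measurement of $\sigma_1$. The finite-difference step and the function names are our own choices:

```python
import numpy as np

def rho(t1, t2, t3):
    """Qubit model of the paper: rho_theta with parameters (theta1, theta2, theta3)."""
    return 0.5 * np.array([[1 + t2, t1 * np.exp(-1j * t3)],
                           [t1 * np.exp(1j * t3), 1 - t2]])

def fisher_info(theta, povm, eps=1e-6):
    """Classical Fisher information J_theta[Pi] for p_theta(x) = tr(rho_theta Pi_x),
    with parameter derivatives taken by central finite differences."""
    theta = np.asarray(theta, float)
    k = len(theta)
    p = np.array([np.trace(rho(*theta) @ Pi).real for Pi in povm])
    dp = np.zeros((k, len(povm)))
    for i in range(k):
        tp, tm = theta.copy(), theta.copy()
        tp[i] += eps
        tm[i] -= eps
        pp = np.array([np.trace(rho(*tp) @ Pi).real for Pi in povm])
        pm = np.array([np.trace(rho(*tm) @ Pi).real for Pi in povm])
        dp[i] = (pp - pm) / (2 * eps)
    # J_ij = sum_x (d_i p)(d_j p) / p, equivalent to the definition with log-derivatives
    return np.array([[np.sum(dp[i] * dp[j] / p) for j in range(k)] for i in range(k)])

# Projective measurement of sigma_1: Pi_pm = (I +/- sigma_1)/2
I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], complex)
J = fisher_info([0.5, 0.3, 0.2], [(I2 + sx) / 2, (I2 - sx) / 2])
```

For this measurement $p_\theta(\pm)=(1\pm\theta_1\cos\theta_3)/2$, so the $(2,2)$ entry vanishes: measuring $\sigma_1$ alone gives no information about $\theta_2$.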
We remind ourselves that the partial differentiations $\partial_i$ act only on the state in the probability distribution $\tr{\rho_{\theta}\Pi_x}$. Finding the optimal estimator for a given quantum model is then reduced to minimizing the inverse of the Fisher information matrix $(J_\theta[ \Pi])^{-1}$ over all possible POVMs. This problem is rather difficult, simply because it is an optimization of a non-scalar quantity over matrix spaces with certain constraints. Thus, minimizing the weighted trace of the inverse of the Fisher information matrix provides another view of the quantum parameter estimation problem. Let us call \begin{equation}\label{mibound} C_\theta^{\mathrm{MI}}[W]:=\min_{\Pi: \mathrm{POVM}}\Tr{W (J_\theta[\Pi])^{-1}}, \end{equation} the most informative precision allowed by quantum mechanics. It is known that $ C_\theta[W]=C_\theta^{\mathrm{MI}}[W]$ holds in general \cite{nagaoka89}, and Fisher information thus plays an important role even in quantum parameter estimation theory. To define quantum versions of logarithmic derivatives and of Fisher information, we first introduce an inner product on linear operators and then define quantum Fisher information based on this inner product. It happens that there is no unique way to define such an inner product in the quantum case, meaning that we have many different quantum versions of Fisher information. In the following, we use two kinds of quantum Fisher information, based on the symmetric logarithmic derivative (SLD) and the right logarithmic derivative (RLD) operators. For a given quantum state $\rho_{\theta}$ and any (bounded) linear operators $X,Y$ on ${\cal H}$, we define the symmetric and right inner products by \begin{align} \nonumber \sldin{X}{Y}&:=\frac12\tr{\rho_{\theta}(YX^\dagger+X^\dagger Y)}, \\ \rldin{X}{Y}&:=\tr{\rho_{\theta}YX^\dagger}, \end{align} respectively.
The SLD operators $L_i$ and RLD operators $\tilde{L}_i$ are formally defined as the solutions to the operator equations: \begin{align}\nonumber \partial_i\rho_{\theta}&=\frac12 (\rho_{\theta}L_{\theta,i}+L_{\theta,i}\rho_{\theta}), \\ \partial_i\rho_{\theta}&=\rho_{\theta}\tilde{L}_{\theta,i}. \end{align} It is not difficult to see that the SLD operators are Hermitian, whereas the RLD operators are not in general. The SLD Fisher information matrix is defined by \begin{align} G_{\theta}&:= \left[ g_{\theta, ij}\right]_{i,j\in\{1,\dots,k\}} \\ \nonumber g_{\theta, ij}&:=\sldin{L_{\theta,i}}{L_{\theta,j}}=\tr{\rho_{\theta}\frac12 \big(L_{\theta,i}L_{\theta,j}+L_{\theta,j}L_{\theta,i} \big)}, \end{align} and the RLD Fisher information by \begin{align} \nonumber \tilde{G}_{\theta}&:= \left[ \tilde{g}_{\theta, ij} \right]_{i,j\in\{1,\dots,k\}}, \\ \tilde{g}_{\theta, ij}&:=\rldin{\tilde{L}_{\theta,i}}{\tilde{L}_{\theta,j}}=\tr{\rho_{\theta}\tilde{L}_{\theta,j}\tilde{L}_{\theta,i}^\dagger}. \end{align} The quantum versions of the CR inequality state that for any locally unbiased estimator the MSE matrix satisfies \begin{align}\nonumber V_{\theta}[\hat{\Pi}]&\ge G_\theta^{-1}, \\ V_{\theta}[\hat{\Pi}]&\ge \tilde{G}_\theta^{-1}. \end{align} These are referred to as the SLD CR inequality and the RLD CR inequality, respectively. For notational convenience, the $(i,j)$ component of the inverse of the SLD Fisher information is denoted as $g_{\theta}^{ij}$, i.e., $G_\theta^{-1}=[g_\theta^{ij}]_{i,j\in\{1,\dots,k\}}$. Unlike the classical CR bound, in general no estimator $\hat{\Pi}$ attains the equalities in the above inequalities. Combining the above considerations, one can show that for any POVM the following relation holds: \begin{equation}\label{qcmonotone} V_{\theta}[\hat{\Pi}]\ge (J_{\theta}[\Pi])^{-1}\ge G_\theta^{-1}, \end{equation} and similarly for the RLD Fisher information.
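The defining equations above are directly solvable numerically for a full-rank qubit state: the SLD equation is a continuous Lyapunov equation, and the RLD operator is $\tilde{L}_{\theta,i}=\rho_\theta^{-1}\partial_i\rho_\theta$. A hedged sketch (the Bloch-vector values are our own illustrative choices):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Sketch: SLD from rho L + L rho = 2 drho (a Lyapunov equation),
# RLD from drho = rho L~, i.e. L~ = rho^{-1} drho, for a full-rank qubit.

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def rho(t1, t2):            # qubit with Bloch vector (t1, 0, t2), illustrative
    return 0.5 * (I2 + t1 * sx + t2 * sz)

t1, t2, eps = 0.4, 0.3, 1e-6
r = rho(t1, t2)
drho1 = (rho(t1 + eps, t2) - rho(t1 - eps, t2)) / (2 * eps)   # = sx/2 up to roundoff

# solve_continuous_lyapunov(a, q) solves a x + x a^H = q
L1 = solve_continuous_lyapunov(r, 2 * drho1)    # SLD operator
Lt1 = np.linalg.solve(r, drho1)                 # RLD operator

assert np.allclose(r @ L1 + L1 @ r, 2 * drho1)  # SLD equation holds
assert np.allclose(L1, L1.conj().T)             # SLD is Hermitian
# RLD Fisher entry g~_11 = tr(rho L~ L~†); for this state it equals tr(rho^{-1})/4
print(np.round(np.trace(r @ Lt1 @ Lt1.conj().T).real, 6))   # 1.333333
```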
This inequality again emphasizes the importance of Fisher information, since the true bound lies between $(J_{\theta}[\Pi])^{-1}$ and $G_\theta^{-1}$. Before closing this subsection, we make several remarks regarding quantum Fisher information. First, quantum Fisher information should be used as a collective noun rather than a proper noun, since there are many quantum versions of Fisher information in general. Second, among the many existing quantum versions of Fisher information, the SLD and RLD Fisher information stand out as special ones \cite{petz}. The SLD Fisher metric is known as the minimum operator-monotone metric, whereas the RLD Fisher metric is the maximum one. This is a well-known result, but it does not imply the matrix inequality $\tilde{G}_\theta\ge G_\theta$ in general. That is, there is no ordering between $\tilde{G}_\theta$ and $G_\theta$ in general. The valid relation holds between the real part of the RLD Fisher information and the SLD Fisher information: \begin{equation}\label{rerldsld} \mathrm{Re}\, \tilde{G}_\theta\ge G_\theta, \end{equation} for any quantum parametric model. Third, the RLD Fisher information always dominates the SLD Fisher information when the number of parameters is equal to one. In this case, the SLD Fisher information is attainable by the projection measurement with respect to the spectral decomposition of the SLD operator, and the RLD Fisher information does not provide important information as far as state estimation is concerned. Fourth, as in classical statistics, we assume some regularity conditions for quantum parametric models in order to define quantum versions of Fisher information. Besides technical mathematical assumptions, the rank of a state is important. When the state is not full-rank, it is known that the SLD operators and the SLD Fisher information cannot be defined uniquely. However, a modification of the inner products by taking an equivalence class provides a well-defined and unique SLD Fisher information \cite{fn95}.
Last, the quantum Fisher information matrices are proper information quantities and satisfy important properties. To list a few: i) They are positive semi-definite matrices. ii) They do not increase when a quantum operation (completely positive map) is applied to the state. iii) They are convex with respect to quantum states. iv) They are additive for product states. \subsection{SLD CR, RLD CR, and Holevo bounds} Within our formulation of the problem, there are several bounds for the weighted trace of the MSE matrix \eqref{opt}. The first one is the SLD CR bound defined by \begin{equation}\label{sldcr} C_\theta^S[W]:=\Tr{WG_\theta^{-1}}, \end{equation} and this leads to the bound for any locally unbiased estimator: \begin{equation} \Tr{WV_\theta[\hat{\Pi}]}\ge C_\theta^S[W]. \end{equation} The second one utilizes the RLD Fisher information and the following relation: \[ V\ge X\ \Rightarrow\ \Tr{WV}\ge\Tr{W\mathrm{Re}X}+\Trabs{W \mathrm{Im}X}, \] for a positive matrix $W$, real symmetric matrix $V$, and Hermitian matrix $X$. Here, TrAbs$X$ denotes the trace of the absolute values of the eigenvalues of the matrix $X$, i.e., TrAbs$X=\sum_i|\lambda_i| $ with $X=\sum_i \lambda_i \ket{i}\bra{i}$ an eigenvalue decomposition of $X$. Since the RLD Fisher information is complex-valued in general, the above inequality gives the RLD CR bound: \begin{equation} C_\theta^R[W] :=\Tr{W\mathrm{Re}\,\tilde{G}_\theta^{-1}}+\mathrm{TrAbs}\{W\mathrm{Im}\,\tilde{G}_\theta^{-1}\} . \end{equation} The bound for quantum models which unifies the above bounds is due to Holevo and is referred to as the Holevo bound \cite{holevo}. Denote a $k$-array of Hermitian operators on ${\cal H}$ by \[ \vec{X}:=(X^1,X^2,\dots, X^k),\quad (X^\ell)^\dagger=X^\ell\ (\ell=1,2,\dots,k), \] and define the set of $\vec{X}$ by \begin{equation} \label{Xset} {\cal X}_\theta:=\{\vec{X}\,|\, \forall i\,\tr{\rho_\theta X^i}=0,\ \forall i,j\,\tr{\partial_i\rho_\theta X^j}=\delta_{ij} \}.
\end{equation} The Holevo function for quantum estimation is defined by \begin{equation} h_\theta[\vec{X}|W]:=\Tr{W\mathrm{Re}\,Z_\theta[\vec{X}]}+\Trabs {W\mathrm{Im}\,Z_\theta[\vec{X}] }, \end{equation} where the $k\times k$ matrix $Z_\theta[\vec{X}]$ is \begin{equation} Z_\theta[\vec{X}]:= [ \rldin{X^i}{X^j}]_{i,j\in\{1,\dots,k\}}. \end{equation} The Holevo bound is defined through the following optimization: \begin{equation} C_\theta^H[W]:=\min_{\vec{X}\in{\cal X}_\theta}h_\theta[\vec{X}|W]. \end{equation} Importantly, any locally unbiased estimator is bounded by the Holevo bound as \begin{equation} \Tr{WV_\theta[\hat{\Pi}]}\ge C_\theta^H[W]. \end{equation} The Holevo bound can be attained asymptotically by an asymptotically unbiased estimator with a collective POVM in the following sense \cite{HM08,GK06,KG09,YFG13}. Consider the $n$th i.i.d. quantum state $\rho_\theta^n=\rho_\theta^{\otimes n}$ for a given model and let $\hat{\Pi}^n$ be a sequence of estimators for the model ${\cal M}^n_Q=\{\rho^n_\theta\,|\,\theta\in\Theta \}$. An estimator is called asymptotically unbiased if the locally unbiased condition \eqref{lucond} holds for all values of $\theta$ in the $n\to\infty$ limit. Let us denote the MSE for the $n$th extension model as $V^n_\theta$; then the Holevo bound has the operational meaning: \begin{multline} C_\theta^H[W]=\inf\big\{ \limsup_{n\to\infty}\,n\Tr{W V^n_\theta[\hat{\Pi}^n]}\,\\ \big|\,\hat{\Pi}^n \mbox{ is asymptotically unbiased} \big\}. \end{multline} That is, the optimal MSE behaves as $\Tr{W V^n_\theta[\hat{\Pi}^n]}\simeq C_\theta^H[W]/n$ for very large $n$ upon performing the optimal sequence of collective POVMs. In this sense, the Holevo bound is considered as the {\it ultimate bound} in the quantum parameter estimation problem. Several remarks are in order regarding relations among the SLD CR, RLD CR, and Holevo bounds. First, there is no ordering in general between the SLD CR bound and the RLD CR bound, despite the relation \eqref{rerldsld}.
When the inverse of the RLD Fisher information matrix has no imaginary entries, \eqref{rerldsld} gives $C_\theta^R[W]\le C_\theta^S[W]$. This indicates the importance of the imaginary part of the RLD Fisher information. Second, the Holevo bound is always superior to both the SLD CR and RLD CR bounds, i.e., \begin{equation} C_\theta^H[W]\ge C_\theta^S[W]\ \mathrm{and}\ C_\theta^H[W]\ge C_\theta^R[W]. \end{equation} Third, when the number of parameters equals one, the Holevo bound is identical to the SLD CR bound. Thus, collective measurements do not help to improve the accuracy of estimation. Fourth, when a model is so-called D-invariant \cite{holevo, HM08}, the Holevo bound and the RLD CR bound coincide. In this case, the Holevo bound can be expressed as \begin{equation} C_\theta^H[W]=h_\theta[\vec{L}_\theta|W]\quad (\mbox{for D-invariant model}), \end{equation} where $\vec{L}_\theta=(L^1_\theta,L^2_\theta,\dots,L^k_\theta)$ is the cotangent vector of SLD operators, i.e., $L^j_\theta=\sum_{i=1}^k (G_\theta^{-1})_{ij}L_{\theta,i}$ ($j=1,2,\dots,k)$. Thus, a D-invariant model possesses a nice structure as a statistical model; this condition is satisfied, for example, when the set of SLD operators together with the identity operator spans the whole space of Hermitian operators, i.e., when $\mathrm{span}_{{\mathbb R}}\{ L_{\theta,1},L_{\theta,2},\dots,L_{\theta,k},I\}={\cal L}_h({\cal H})$ holds. \subsection{Nagaoka and Hayashi-Gill-Massar bounds} For a two-dimensional quantum system, the quantum estimation problem is completely solved and the attainable bound can be calculated for an arbitrary quantum statistical model. This problem was solved affirmatively for the two-parameter case by Nagaoka in the 80s \cite{nagaoka89}. Hayashi solved the three-parameter case by utilizing an infinite-dimensional linear programming method \cite{hayashi97}. Gill and Massar independently solved the same problem in a different manner \cite{GM00}. We recommend a compact proof by Yamagata \cite{yamagata}.
In the rest of the paper, we call the bound for the two-parameter case the Nagaoka bound and the one for the three-parameter case the Hayashi-Gill-Massar (HGM) bound for the sake of convenience, although the latter includes the former as a special case. Consider the complex two-dimensional Hilbert space ${\mathbb C}^2$ and a quantum parametric model on it. For a given weight matrix, the Nagaoka and the HGM bound for the weighted trace of the MSE are given by \begin{equation} \label{n1bd} \min_{\hat{\Pi}:\mathrm{l.u. at}\, \theta} \mathrm{Tr}\{W V_\theta[\hat{\Pi}]\} =\left(F(G_{\theta}^{-1},W)\right)^2=: C^{HGM}_{\theta}[W], \end{equation} where $F(A,B)=\Tr{\sqrt{\sqrt{A}B\sqrt{A}}}$ denotes the fidelity between two positive semi-definite operators $A,B$. Nagaoka proved that this bound is more informative than the other bounds, i.e., $C^{HGM}_\theta[W]\ge C^H_\theta[W]$ holds \cite{nagaoka89}. A necessary and sufficient condition on POVMs for the achievability of the HGM bound is known, which states that the minimum is attained if and only if a POVM satisfies \cite{yamagata}: \begin{equation} \label{optcond} J_{\theta}[\Pi_{opt}]=\frac{ \sqrt{G_{\theta}} \sqrt{F_{\theta}} \sqrt{G_{\theta}}}{\Tr{\sqrt{F_{\theta}}}}. \end{equation} One way to compute the fidelity between $A$ and $B$ is to calculate the eigenvalues of the Hermitian operator $\sqrt{A}B\sqrt{A}$ and to take the sum of the square roots of all eigenvalues. To proceed further, we introduce the following $k\times k$ real symmetric matrix and its diagonalized form: \begin{equation}\label{diagrep} F_{\theta}=\sqrt{G_{\theta}^{-1}}W\sqrt{ G_{\theta}^{-1}} =U_{\theta}\Lambda_{\theta}U_{\theta}^{-1}, \end{equation} where $U_{\theta}$ is a real orthogonal matrix and $\Lambda_{\theta}$ is a diagonal matrix whose elements are the eigenvalues of $F_{\theta}$, $\Lambda_{\theta}=\mathrm{diag}(\lambda_1,\lambda_2,\dots,\lambda_k)$.
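The fidelity recipe just described is easy to check numerically. The sketch below (with our own illustrative parameter values, using the qubit expression for $G_\theta^{-1}$ given later in \eqref{sldF2}) computes $F(G_\theta^{-1},W)^2$ by diagonalizing $\sqrt{A}B\sqrt{A}$ and verifies that for $2\times2$ matrices it reduces to the algebraic form $\Tr{WG_\theta^{-1}}+2\sqrt{\det WG_\theta^{-1}}$:

```python
import numpy as np
from scipy.linalg import sqrtm

# Sketch: HGM bound as squared fidelity F(G^{-1}, W)^2, Eq. (n1bd).

def fidelity(A, B):
    """F(A,B) = Tr sqrt(sqrt(A) B sqrt(A)) for positive semi-definite A, B."""
    sA = sqrtm(A)
    M = sA @ B @ sA
    ev = np.linalg.eigvalsh((M + M.conj().T) / 2)   # symmetrize for safety
    return np.sum(np.sqrt(np.clip(ev, 0, None)))

def hgm_bound(Ginv, W):
    return fidelity(Ginv, W) ** 2

t1, t2 = 0.5, 0.3                                   # illustrative values
Ginv = np.array([[1 - t1**2, -t1*t2], [-t1*t2, 1 - t2**2]])
W = np.eye(2)
# For 2x2 matrices, F(A,W)^2 = Tr(WA) + 2 sqrt(det(WA))
closed_form = np.trace(W @ Ginv) + 2 * np.sqrt(np.linalg.det(W @ Ginv))
print(np.isclose(hgm_bound(Ginv, W), closed_form))  # True
```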
With these notations, the HGM bound \eqref{n1bd} is expressed as \begin{equation}\label{n1bd2} C^{HGM}_\theta[W] =\sum_{i=1}^{k}\lambda_i+\sum_{i\neq j}\sqrt{\lambda_i\lambda_j}. \end{equation} Using the fact that the sum of the eigenvalues of a Hermitian matrix is equal to its trace, the first term on the right-hand side of Eq.~\eqref{n1bd2} is written as $\sum_{i=1}^{k}\lambda_i=\Tr{WG_{\theta}^{-1}}$. This term corresponds to the SLD CR bound, and since the eigenvalues of the matrix $F_{\theta}$ are positive in general, we see that the HGM bound is strictly larger than the SLD CR bound for any weight matrix $W$, i.e., \begin{equation} C^{HGM}_\theta[W]>\Tr{WG_{\theta}^{-1}}. \end{equation} This shows that the SLD CR bound cannot be attained for a generic qubit problem. Two well-known exceptions are the one-parameter case ($k=1$) and the case where all SLD operators commute with each other. An optimal POVM was explicitly constructed by Nagaoka \cite{nagaoka91,FN99}, and its general form is given as follows \cite{yamagata}. Let $\hat{L}_{\theta,i}$ ($i=1,2,\dots,k$) be the linear combinations of SLD operators defined by \begin{equation} \label{sldhat} \hat{L}_{\theta,i}:= \sum_{j=1}^k \left[ U_{\theta}^{-1}\sqrt{ G_{\theta}^{-1}}\right]_{ij}L_{\theta,j}, \end{equation} where $U_{\theta}$ is the matrix diagonalizing $F_{\theta}$ in Eq.~\eqref{diagrep}, and $G_{\theta}$ is the SLD Fisher information matrix. Let ${\Pi}^{(i)}$ ($i=1,2,\dots,k$) be the projection measurement, or projection valued measure (PVM), of the observable $\hat{L}_{\theta,i}$; then the optimal POVM which attains the HGM bound is to perform the PVMs ${\Pi}^{(i)}$ with the corresponding probabilities: \begin{equation} p_i=\frac{\sqrt{\lambda_i}}{\sum_{j=1}^k\sqrt{\lambda_j}}. \end{equation} Explicitly, writing ${\Pi}^{(i)}=\{\Pi_{i\pm}\}$ for binary-outcome PVMs, the optimal POVM consists of $2k$ elements and is given by \begin{equation} \Pi_{opt}=\left\{p_1\Pi_{1\pm},\dots,p_k\Pi_{k\pm} \right\}.
\end{equation} The optimal estimator $\hat{\theta}_i$ ($i=1,\dots,k$) assigns the following estimate upon the measurement outcomes: \begin{equation} \label{optest} \hat{\theta}_i(x)=\theta_i+\sum_{j=1}^{k}\left(J_{ \theta}^{-1}\right)_{ij}\partial_j \log p_{\theta}(x), \end{equation} with $x\in{\cal X}=\{1\pm,\dots,k\pm\}$ and $p_{\theta}(x)=\tr{\rho_{\theta}\Pi_x }$. We remark that there are other forms of optimal POVMs known in the literature \cite{GM00,hayashi97}. This optimal estimator $(\Pi,\hat{\theta})$ explicitly depends on the true value of the unknown parameter $\theta$. This might be considered a self-contradiction in the formalism. Indeed, some authors claim that finding an unbiased estimator is purely of mathematical interest and of no practical use. Here we mention that the formalism based on locally unbiased estimators needs an additional ingredient when applied to real problems. It was first proposed by Nagaoka that one should perform the above optimal estimation adaptively, namely, one starts with an unknown value of the parameters and then successively updates the values according to the measurement results \cite{nagaoka89-2}. The mathematically rigorous proofs of strong consistency and asymptotic efficiency of adaptive estimation are due to Fujiwara \cite{fujiwara06}. We take these mathematical justifications for granted and look for locally unbiased estimators. There is also an alternative way to achieve the bound obtained for the locally unbiased estimators, by using a two-step estimation strategy \cite{HM98,BNG00}. In this method, one takes a small fraction of the $n$ copies, say $\sqrt{n}$, to estimate the value of $\theta$ and then performs the optimal locally unbiased estimator on the remaining $n-\sqrt{n}$ copies. Finally, we remark that Yamagata showed that the adaptive estimation method works more efficiently than the standard quantum tomographic scheme in a qubit system \cite{yamagata}.
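The estimator \eqref{optest} is locally unbiased for any POVM with a nonsingular Fisher matrix, not only the optimal one, and this can be verified numerically. A minimal sketch (the qubit model and the four-outcome POVM below are our own illustrative choices):

```python
import numpy as np

# Sketch: check both locally unbiased conditions for the estimator of Eq. (optest)
# on an illustrative qubit model with a four-outcome POVM mixing two Pauli PVMs.

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
povm = [0.25 * (I2 + s * P) for P in (sx, sz) for s in (+1, -1)]  # sums to identity

def probs(theta):
    r = 0.5 * (I2 + theta[0] * sx + theta[1] * sz)
    return np.array([np.trace(r @ E).real for E in povm])

theta0, eps = np.array([0.4, 0.2]), 1e-6
p0 = probs(theta0)
dp = np.array([(probs(theta0 + eps*e) - probs(theta0 - eps*e)) / (2*eps)
               for e in np.eye(2)])                  # dp[j, x] = d_j p(x)
J = np.einsum('ix,jx->ij', dp / p0, dp)              # Fisher matrix of the POVM
est = theta0[:, None] + np.linalg.inv(J) @ (dp / p0) # estimator, Eq. (optest)

print(np.allclose(est @ p0, theta0))        # sum_x est_i(x) p(x) = theta_i
print(np.allclose(est @ dp.T, np.eye(2)))   # sum_x est_i(x) d_j p(x) = delta_ij
```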
An adaptive estimation scheme for the one-parameter case was experimentally demonstrated in Ref.~\cite{oioyift12}. \section{Estimation of qubit states in presence of unknown phase}\label{sec3} The quantum parametric model under consideration is given by the family of quantum states on the two-dimensional Hilbert space ${\mathbb C}^2$, i.e., qubit states: \begin{equation} \rho_{\theta}=\frac{1}{2}\left(\begin{array}{cc}1+\theta_2 & \theta_1 \Exp{-\mathrm{i}\theta_3} \\[.5ex] \theta_1 \Exp{\mathrm{i}\theta_3} & 1-\theta_2\end{array}\right) \in {\cal S}({\mathbb C}^2), \end{equation} where $\theta=(\theta_1,\theta_2,\theta_3)$ satisfies $\theta_1^2+\theta_2^2<1$ and $\theta_3\in[0,2\pi)$, and we exclude the point $\theta_1=0$ for the sake of mathematical convenience. The first condition guarantees that the state is full-rank for all values of $\theta\in\Theta$. It is useful to go from the matrix representation of a state to the three-dimensional vector representation, the so-called Bloch vector representation. This is given by the one-to-one mapping ${\cal S}({\cal H})\ni\rho \longmapsto \v{s}=\tr{\v\sigma \rho}\in \mathbb{R}^3$, where $\v \sigma=(\sigma_1,\sigma_2,\sigma_3)^T$ denotes the set of usual Pauli spin operators. The requirements of unit trace and positivity impose that the length of this vector be less than or equal to one. The space of all possible Bloch vectors is thus the surface and interior of the unit sphere: ${\cal B} =\left\{\v b\in\mathbb{R}^3\,\big|\, |\v b|\le1 \right\}$. The inverse mapping from a given Bloch vector $\v{s}$ to the matrix representation is ${\cal B}\ni \v{s}\longmapsto \rho=(\sigma_0+\v{s}\cdot\v{\sigma} )/2\in{\cal S}({\cal H})$, with $\sigma_0$ the $2\times2$ identity matrix and $\v{s}\cdot\v{\sigma}=\sum_{i=1,2,3}s_i\sigma_i$. Thus, the Bloch vector representation of our quantum model is \begin{equation} \v{s}_{\theta}=(\theta_1\cos\theta_3,\,\theta_1\sin\theta_3,\,\theta_2)^T.
\end{equation} For later convenience, we define the standard inner product and the outer product for three-dimensional vectors by \begin{align*} \vecin{\v a}{\v b}&=\sum_{i=1,2,3}a_ib_i,\\ \ket{\v a}\bra{\v b}&=[a_ib_j]_{i,j\in\{1,2,3\}}, \end{align*} respectively. The outer product is a $3\times3$ matrix whose action on a vector $\v c\in{\mathbb R}^3$ is $\ket{\v a}\bra{\v b} {\v c}=\vecin{\v b}{\v c}{\v a}$. In our problem, the phase parameter $\theta_3$ is of no interest; such a parameter is called a {\it nuisance parameter}. We wish to discuss how well we can estimate this parametric model in the presence of the nuisance parameter $\theta_3$. In the following, we first solve the problem when $\theta_3$ is known (no nuisance parameter) and then solve the case with an unknown phase parameter. In our analysis, we introduce an important concept, the {\it mean square error region}, defined as follows \cite{comment}. Given a quantum parametric model ${\cal M}_Q$ and the bound for the MSE matrix $ \mathrm{Tr}\{W V_\theta[\hat{\Pi}]\}\ge C_\theta[W]$, we define the set of all possible MSE matrices allowed by locally unbiased estimators: \begin{equation} D_{\mathrm{l.u.}}=\{V\in M_h\,\big|\, V=V_{\theta}[\hat{\Pi}] ; \hat{\Pi} \mbox{ is locally unbiased at }\theta \}, \end{equation} and the set of positive matrices allowed by the given bound: \begin{equation} D_C:= \{V\in M_{h}\, |\, \Tr{WV}\ge C_\theta[W],\ \forall W>0\}. \end{equation} Here $M_{h}$ denotes the set of all $2\times2$ symmetric matrices, i.e., $M_{h}=\left\{ V\in\mathbb{R}^{2\times 2}\,\Big|\, V^T=V\right\}$. It is not difficult to show that the two sets are equivalent, i.e., $D_{\mathrm{l.u.}}=D_C$, if the bound is achievable. We call $D_{\mathrm{l.u.}}$ the MSE region, and we shall analyze $D_C$ in the following based on the Nagaoka and the HGM bounds. \subsection{No nuisance parameter case} In this subsection, we assume that the phase parameter $\theta_3$ is known with infinite precision.
The number of parameters to be estimated is two, and a straightforward calculation shows that the inverse of the SLD Fisher information is given by \begin{equation} \label{sldF2} G_{\theta}^{-1} =\left(\begin{array}{cc}1-\theta_1^2 & -\theta_1\theta_2 \\[.5ex] -\theta_1\theta_2 & 1-\theta_2^2\end{array}\right). \end{equation} With this simple structure of the SLD Fisher information, we have $ \Tr{G_{\theta}^{-1}}-1=\det G_{\theta}^{-1}=1-s_{\theta}^2$, where $s_{\theta}=(\theta_1^2+\theta_2^2)^{1/2}$ denotes the length of the Bloch vector. The eigenvalues of the $2\times 2$ matrix \eqref{diagrep} defined for the Nagaoka bound are given by \begin{align} \label{lambda} &\lambda_{1,2}=\frac12\left(t_{\theta}\pm\sqrt{\Delta_{\theta}} \right),\\\nonumber &t_{\theta}=\Tr{WG_{\theta}^{-1}},\ \Delta_{\theta}= t_{\theta}^2-4\det (WG_{\theta}^{-1}). \end{align} The Nagaoka bound $C_\theta^N[W]$ is thus written as \begin{equation} \label{nhgm2} C^{N}_\theta[W]=\Tr{WG_{\theta}^{-1}}+2\sqrt{\det WG_{\theta}^{-1}}. \end{equation} An optimal POVM attaining this bound is given as follows. Consider the rank-1 projectors: \begin{equation} \label{optPVM2} P_{\theta,1(2)}=E_\theta\,\frac{{W}-{\lambda}_{2(1)}G_{\theta}}{\Tr{{W}}-{\lambda}_{2(1)}\Tr{G_{\theta}}}\,E_\theta^T, \end{equation} where $E_\theta$ is the $3\times2$ real matrix \begin{equation}\label{ematrix} E_\theta:= \left(\begin{array}{cc}\cos\theta_3 & 0 \\ \sin\theta_3 & 0 \\0 & 1\end{array}\right), \end{equation} and write them as $P_{\theta,i}=\ket{\v{n}_i}\bra{\v{n}_i}$ with $\v{n}_i$ unit vectors. We remark that $\v{n}_1$ and $\v{n}_2$ are not orthogonal in general. An optimal POVM is then written as \begin{align} \label{optPOVM2} \Pi_{opt}&=\left\{p_1\Pi_{1+},\, p_1\Pi_{1-},\,p_2\Pi_{2+},\, p_2\Pi_{2-} \right\},\\ \nonumber \Pi_{i\pm}&=\frac{1}{2}\left(\sigma_0\pm\v{n}_i\cdot \v\sigma \right) ,\ p_{1,2}=\frac12(1\pm \cos 2 q_\theta), \end{align} where $q_\theta= \arctan [(1-\Delta_\theta/t_\theta^2)^{1/4}]$.
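The closed form \eqref{sldF2} and the identities $\Tr{G_\theta^{-1}}-1=\det G_\theta^{-1}=1-s_\theta^2$ can be checked numerically from the defining SLD equations. A sketch with our own illustrative parameter values (and $\theta_3=0$, so that the two SLD directions are $\sigma_1$ and $\sigma_3$):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Sketch: 2x2 SLD Fisher matrix of the qubit model via Lyapunov solves,
# compared with the closed form of Eq. (sldF2).

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

t1, t2 = 0.5, 0.3                          # illustrative values, theta_3 = 0
r = 0.5 * (I2 + t1 * sx + t2 * sz)
drs = [0.5 * sx, 0.5 * sz]                 # d_1 rho, d_2 rho
Ls = [solve_continuous_lyapunov(r, 2 * d) for d in drs]   # SLD operators

G = np.array([[np.trace(r @ (Li @ Lj + Lj @ Li) / 2).real
               for Lj in Ls] for Li in Ls])
Ginv = np.linalg.inv(G)
closed = np.array([[1 - t1**2, -t1*t2], [-t1*t2, 1 - t2**2]])
print(np.allclose(Ginv, closed))           # True

s2 = t1**2 + t2**2                         # identities stated in the text
print(np.isclose(np.trace(Ginv) - 1, 1 - s2),
      np.isclose(np.linalg.det(Ginv), 1 - s2))   # True True
```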
With this optimal POVM and the estimator \eqref{optest}, the MSE matrix is given by \begin{equation} \nonumber V_{\theta}[\hat{\Pi}_{opt}]=J_{\theta}[\Pi_{opt}]\,^{-1}=G_{\theta}^{-1}+\sqrt{\det WG_{\theta}^{-1}}\, {W}^{-1}. \end{equation} As an example, let us consider the case of the identity weight matrix, which corresponds to estimating the two parameters on an equal footing. In this case, the optimal POVM takes a rather simple form: \begin{align*} \Pi_{1\pm}&=\frac12 (\sigma_0\pm \frac{\v{s}_{\theta}}{s_\theta}\cdot \v\sigma)\ \mbox{with}\ p_1=\frac{\sqrt{1-s_\theta^2}}{1+\sqrt{1-s_\theta^2}},\\ \Pi_{2\pm}&=\frac12 (\sigma_0\pm \frac{\v{s}^{\bot}_{\theta}}{s_\theta}\cdot \v\sigma)\ \mbox{with}\ p_2=\frac{1}{1+\sqrt{1-s_\theta^2}}, \end{align*} and the estimator $\hat{\theta}(\pm)=\left(\hat{\theta}_1(\pm),\hat{\theta}_2(\pm)\right)$: \begin{align*} \left(\hat{\theta}_1(\pm),\hat{\theta}_2(\pm)\right)&=\Big[1\pm\frac{1}{p_1}\frac{1-s_\theta}{s_\theta}\Big]({\theta}_1,{\theta}_2)\quad \mathrm{for}\ \Pi_1,\\ \left(\hat{\theta}_1(\pm),\hat{\theta}_2(\pm)\right)&=({\theta}_1,{\theta}_2)\pm\frac{1}{p_2s_\theta}(-\theta_2,\theta_1)\ \mathrm{for}\ \Pi_2. \end{align*} Since the state under estimation is given by $\rho_\theta=(\sigma_0+\v{s}_\theta\cdot\v{\sigma} )/2$, the first PVM $\Pi_1$ suggests measuring along the same direction as the Bloch vector of the unknown state. The second PVM, however, suggests measuring along the perpendicular direction $\v{s}^{\bot}_{\theta}=(\theta_2\cos\theta_3,\,\theta_2\sin\theta_3,\,-\theta_1)^T$. This seems rather counterintuitive, since the outcome probabilities of the measurement $\Pi_2$ on $\rho_\theta$ are both $1/2$, i.e., completely random, and hence this measurement alone does not seem to provide any useful information for estimating the value of $\theta$.
To understand this optimal estimator, we note that both the PVMs and the estimators do depend on the true values of the parameters $\theta$, and we can only attain this optimal quantum estimator adaptively in the $n\to\infty$ limit. As emphasized before, this is one formulation of quantum parameter estimation within locally unbiased estimators. To characterize the MSE region obtained from the Nagaoka bound for this problem, we note the following fundamental lemma: \begin{lemma}\label{lemma1} Let $c$ be a positive constant; then the following two sets are equivalent. \begin{align}\nonumber D_1&=\{V\in M_{h}\,|\,\Tr{XV}\ge 2c\sqrt{\det X},\ \forall X>0 \}, \\ \nonumber D_2&=\{V\in M_{h}\,|\,\det V \ge c^2,\ V>0 \}. \end{align} \end{lemma} \begin{proof} This lemma can be shown as follows. From $\Tr{XV}\ge 2c\sqrt{\det X}>0$ for all $X>0$, we have $V>0$. Changing the positive matrix as $X\to V^{-1/2} X V^{-1/2}> 0$, we have $\Tr{X}\ge 2c\sqrt{\det XV^{-1}}\Leftrightarrow \sqrt{\det{V}}\ge 2c \sqrt{\det{X}}/\Tr{X}$. Note that for a $2\times2$ matrix $X>0$ the functional $f(X):=\sqrt{\det X}/\Tr{X}$ satisfies $1/2\ge f(X)>0$, with the maximum value $1/2$ attained when $X\propto I$. With this we can show $D_1\subset D_2$. This argument can be reversed to show the converse inclusion. $\square$ \end{proof} With this lemma, we state our first result: \begin{proposition} The following sets are all equivalent. \begin{align*} D_{N}&=\{V\in M_{h}\,|\, \Tr{W V}\ge C^{N}_\theta[W],\ \forall W>0 \},\\ D_{GM}&=\{V\in M_{h}\ \,|\, \Tr{G_\theta^{-1} V^{-1}}\le 1,\ V>G_{\theta}^{-1} \}, \\ D&=\{V\in M_{h}\ \,|\, \det(V-G_{\theta}^{-1})\ge \det G_{\theta}^{-1},\ V>G_{\theta}^{-1}\}. \end{align*} \end{proposition} \begin{proof} The equivalence between $D_{N}$ and $D$ follows from Lemma \ref{lemma1}. The other relation $D=D_{GM}$ follows from a direct calculation which shows $\Tr{AV^{-1}}\le 1\Leftrightarrow \det (V-A)\ge \det A$ for all $A\in M_h$ and $V>0$.
$\square$ \end{proof} From the expression of $D$, we see that in general there is a trade-off relation between the errors in $\theta_1$ and $\theta_2$. We note that a similar trade-off relation was obtained in Ref.~\cite{wsu11} for any two observables in any finite-dimensional quantum system. However, their result heavily depends on the choice of parametrization for the quantum states, and they only consider the diagonal elements of the MSE matrix. We emphasize that all entries of the MSE matrix are important and that the most general trade-off is \begin{equation} \det(V_\theta[\hat{\Pi}]-G_{\theta}^{-1})\ge \det G_{\theta}^{-1}, \end{equation} whereas the SLD CR bound only gives $\det(V_\theta[\hat{\Pi}]-G_{\theta}^{-1})\ge0$ and $\Tr{V_\theta[\hat{\Pi}]-G_{\theta}^{-1}}\ge0$. \subsection{Nuisance parameter case} In this subsection, we treat the phase parameter $\theta_3$ as unknown and discuss how well we can estimate the parameters $\theta_1,\theta_2$. Let us first briefly recall classical parameter estimation theory with nuisance parameters \cite{lc,bnc}. Consider a probability distribution $p_{\theta}(x)$ on ${\cal X}$, $\theta=(\theta_1,\theta_2,\dots,\theta_k)$, where $\theta_{I}=(\theta_1,\theta_2,\dots,\theta_p)$ (parameters of interest) and $\theta_{N}=(\theta_{p+1},\theta_{p+2},\dots,\theta_k)$ (parameters of no interest, nuisance parameters). Let $J_{\theta}$ be the Fisher information matrix and consider the block matrix representations \begin{equation}\nonumber J_\theta=\left(\begin{array}{cc}J_{\theta_I\theta_I} & J_{\theta_I\theta_N} \\[0.1ex] J_{\theta_N\theta_I}& J_{\theta_N\theta_N}\end{array}\right), \ J_\theta^{-1}=\left(\begin{array}{cc}J^{\theta_I\theta_I} & J^{\theta_I\theta_N} \\[0.1ex] J^{\theta_N\theta_I}& J^{\theta_N\theta_N}\end{array}\right), \end{equation} in terms of the parameter grouping $\theta=(\theta_I,\theta_N)$.
When $\theta_{N}$ is completely known, the MSE of any unbiased estimator obeys \[ \displaystyle V_{\theta}\ge ( J_{\theta_I\theta_I})^{-1}, \] where the known values of $\theta_N$ are substituted into the $p\times p$ matrix $J_{\theta_I\theta_I}$. When $\theta_{N}$ is not known, on the other hand, the MSE satisfies \[ \displaystyle V_{\theta}\ge J^{\theta_I\theta_I}. \] It is well-known that the matrix inequality \begin{align}\nonumber J^{\theta_I\theta_I}&=(J_{\theta_I\theta_I}-J_{\theta_I\theta_N}J_{\theta_N\theta_N}^{-1}J_{\theta_N\theta_I})^{-1}\\ &\ge ( J_{\theta_I\theta_I})^{-1}, \label{nuiineq} \end{align} holds, where the equality holds if and only if the off-diagonal block matrix vanishes, $J_{\theta_I\theta_N} =0$. In this case we say that the two sets of parameters $\theta_I$ and $\theta_N$ are orthogonal with respect to the Fisher information. These two CR inequalities show that the MSE in general becomes worse in the presence of nuisance parameters. We now consider our problem in the quantum case. We can show that the parameters of interest $\theta_I=(\theta_1,\theta_2)$ are orthogonal to the nuisance parameter $\theta_3$ with respect to the SLD Fisher information, and the inverse of the SLD Fisher information matrix for the three-parameter case reads \begin{equation} \label{sldF3} G_{\theta}(3)^{-1}=\left(\begin{array}{cc}G_{\theta}^{-1} & \begin{array}{c}0 \\[-1mm] 0\end{array} \\[0ex] 0\ 0 & g_\theta^{33}\end{array}\right), \end{equation} where $G_{\theta}^{-1}$ is the same matrix given in \eqref{sldF2} and $g_\theta^{33}=1/\theta_1^2$. Clearly, $g^{33}_\theta$ diverges as $\theta_1\to0$, simply because we cannot obtain any information about $\theta_3$ at this point. Physically, this singularity is trivial, since there is no interference fringe at $\theta_1=0$. This justifies why we excluded the point $\theta_1=0$ from our model.
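The classical relation \eqref{nuiineq} is a statement about the Schur complement of the Fisher information matrix. The following sketch (with an arbitrary illustrative Fisher matrix of our own choosing) checks both the block-inverse identity and the matrix ordering numerically:

```python
import numpy as np

# Sketch of Eq. (nuiineq): the interest-block of the inverse Fisher matrix is the
# inverse Schur complement, and it dominates (J_II)^{-1} when J_IN is nonzero.

J = np.array([[4.0, 1.0, 0.5],
              [1.0, 3.0, 0.2],
              [0.5, 0.2, 2.0]])          # illustrative positive definite Fisher matrix
p = 2                                     # first p parameters are of interest
J_II, J_IN = J[:p, :p], J[:p, p:]
J_NI, J_NN = J[p:, :p], J[p:, p:]

schur = J_II - J_IN @ np.linalg.inv(J_NN) @ J_NI
bound_nuisance = np.linalg.inv(schur)            # J^{theta_I theta_I}
bound_known = np.linalg.inv(J_II)                # (J_II)^{-1}

# Block-inverse identity: equals the corresponding block of the full inverse
print(np.allclose(bound_nuisance, np.linalg.inv(J)[:p, :p]))               # True
# Matrix ordering: J^{II} - (J_II)^{-1} is positive semi-definite
print(np.all(np.linalg.eigvalsh(bound_nuisance - bound_known) >= -1e-12))  # True
```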
This structure of the SLD Fisher information matrix might suggest that the bound \eqref{nhgm2} shown in the previous subsection holds even in the presence of the nuisance parameter $\theta_3$. It is, however, not clear how to attain this bound without knowing the value of $\theta_3$. This is due to the fact that the optimal measurement \eqref{optPOVM2} explicitly depends on the unknown phase $\theta_3$, in particular through the projectors \eqref{optPVM2}. To treat the effect of the nuisance parameter in the quantum case, we need to study the three-parameter estimation problem and to discuss the trade-off between the errors in $\theta_I$ and $\theta_N$. The HGM bound for the generic three-parameter case can be written down as shown before. For a general $3\times3$ weight matrix, however, we have not obtained a simple expression for the HGM bound $C^{HGM}_\theta[W]$. To proceed with our analysis, we write the $3\times 3$ MSE matrix and consider a special class of weight matrices as follows: \begin{equation}\nonumber V_{\theta}^{(3)}=\left(\begin{array}{cc}V_{\theta} & \begin{array}{c}v_{13} \\[-1mm] v_{23}\end{array} \\[0ex] v_{31}\ v_{32} & v_{33}\end{array}\right),\ \label{bdform} W(3)=\left(\begin{array}{cc}W& \begin{array}{c}0 \\[-1mm] 0\end{array} \\ 0\ 0 & w_3\end{array}\right), \end{equation} where $V_{\theta}$ and $W$ are the $2\times2$ matrices analyzed before. For this specific choice of the weight matrix, the HGM bound can be expressed in terms of $C^{N}_\theta[W]$ \eqref{nhgm2} as \begin{align} \nonumber &\Tr{V_{\theta}^{(3)}W(3)}= \Tr{W V_\theta[\hat{\Pi}]}+w_3 v_{33} \ge C^{HGM}_\theta[W(3)],\\ & C^{HGM}_\theta[W(3)]=\Big(\sqrt{C^{N}_\theta[W]}+ \sqrt{w_3g_\theta^{33}}\Big)^2.
\label{nhgm3} \end{align} Let us denote the sets of all symmetric and of all positive $3\times 3$ matrices by $M_{h}^{(3)}$ and $M_{+}^{(3)}$, respectively, and define the following sets of matrices: \begin{align*} D_{HGM}&=\{V\in M_{h}^{(3)}\,|\, \Tr{W V}\ge C^{HGM}_\theta[W],\ \forall W>0 \},\\ \tilde{D}_{HGM}&=\{V\in M_{+}^{(3)} \,|\, \Tr{W(3) V}\!\ge\! C^{HGM}_\theta[W(3)], \forall W(3)>0 \},\\ D(3)&=\{V\in M_{+}^{(3)} \,|\, \det(V_2- \gamma_{\theta} G_{\theta}^{-1})\ge \det (\gamma_{\theta} G_{\theta}^{-1}),\\ &\hspace{3cm} V_2>\gamma_{\theta} G_{\theta}^{-1},\ v_{33}>g_{\theta}^{33}\}, \end{align*} where $\gamma_{\theta}$ is an important quantity defined by \begin{equation} \label{gamma1} \gamma_{\theta}[\hat{\Pi}]=\frac{v_{33}[\hat{\Pi}]}{v_{33}[\hat{\Pi}]-g_{\theta}^{33}}, \end{equation} and $V_2=[v_{ij}]_{i,j\in\{1,2\}}$ in $D(3)$ is the upper-left $2\times2$ block matrix. The inclusion $\tilde{D}_{HGM}\subset D_{HGM}$ is trivial from the definitions. Following the same line of logic as before, we obtain the following result: \begin{proposition} $D(3)=\tilde{D}_{HGM}\subset D_{HGM}$ holds. \end{proposition} Two consequences of the above result are emphasized here. First, even though the value of $\theta_3$ is unknown, the structure of the MSE region $D(3)$ is the same as that of the previous region $D$ for the two-parameter case. The change enters solely through the scaling factor $\gamma_{\theta}$, which depends on the MSE of $\theta_3$, i.e., $v_{33}[\hat{\Pi}]$, and this factor is strictly larger than $1$. This implies the relation \begin{multline} D\subsetneq D_2(3):=\{V\in M_h| \det(V- \gamma_{\theta} G_{\theta}^{-1})\ge \det (\gamma_{\theta} G_{\theta}^{-1}),\\ V>\gamma_{\theta} G_{\theta}^{-1}\}, \end{multline} for each given value of the error $v_{33}[\hat{\Pi}]$. Second, the trade-off between the errors in $\theta_I=(\theta_1,\theta_2)$ and $\theta_3$ is understood: a smaller error in $\theta_3$ gives a larger $\gamma_{\theta}$, resulting in a larger error in $\theta_I$. 
We then wish to make $v_{33}$ so large that $\gamma_{\theta}\simeq 1$. But this means that we cannot perform the optimal POVM \eqref{optPOVM2} precisely. This kind of trade-off is typical in quantum theory, and we think the general formalism for dealing with the effects of nuisance parameters in quantum estimation theory deserves further study. If the SLD CR bound is used instead of the HGM bound, we have the following MSE region obtained from the SLD CR bound: \begin{equation} D_{SLD}(3)=\{V\in M_{h}^{(3)} \,|\, V_2\ge G_{\theta}^{-1},\ v_{33}\ge g_{\theta}^{33}\}. \end{equation} This is different from the MSE matrix allowed by quantum mechanics. In particular, $D(3) \subsetneq D_{SLD}(3)$ holds. This shows that one should analyze an achievable bound when considering the effect of nuisance parameters in general. \subsection{Achievability of the bound for no nuisance parameter} In this subsection, we discuss the achievability of the bound \eqref{nhgm2}, which was derived for the case of no nuisance parameter. In particular, we show that the above bound with the nuisance parameter provides the same bound in the asymptotic limit. This is a direct consequence of the simple structure of the MSE region $D(3)$. It is well-known that the additivity of the SLD Fisher information gives $G_{\theta}(3) \to n G_{\theta} (3)$ for the $n$th i.i.d.\ extended model ${\cal M}_Q=\{\rho_\theta^{\otimes n}\,|\,\theta\in\Theta \}$. Let us consider an estimation strategy in which we use a fraction of the $n$ states, say $\sqrt{n}$ of them, to estimate $\theta_3$ and use the remaining $n-\sqrt{n}$ states to estimate the relevant parameters $\theta_1,\theta_2$. Since the MSEs scale as $V_{\theta_1,\theta_2}\propto (n-\sqrt{n})^{-1} $ and $v_{33}\propto n^{-1/2}$, we see that the factor $\gamma_\theta$ satisfies $\gamma_{\theta}\simeq 1$ for sufficiently large $n$. Therefore, the MSE region for $V_{\theta_1,\theta_2}$ converges to that of no nuisance parameter in the $n\to\infty$ limit. 
To translate the above picture into more formal language, let us consider the $n$th i.i.d.\ extended model and consider an estimator with separable POVMs. Denoting the MSE matrix for this extended model by $V_\theta[\hat{\Pi}^n_{\mathrm{sep}}]$, the relation \eqref{qcmonotone} and the general bound \eqref{mibound} provide \begin{equation} V_\theta[\hat{\Pi}^n_{\mathrm{sep}}]\ge \frac{1}{n}(J_\theta[\Pi])^{-1}\ge \frac{1}{n} C^{\mathrm{MI}}_\theta[W]. \end{equation} Writing the rescaled MSE matrix as $V_\theta[\hat{\Pi}^n_{\mathrm{sep}}]\simeq \overline{V_\theta}[\hat{\Pi}^n_{\mathrm{sep}}]/n$, the above inequality and the HGM bound for three parameters, i.e., with the nuisance parameter, give us $ \det(\overline{V_{\theta_I}}[\hat{\Pi}^n_{\mathrm{sep}}] -\overline{\gamma_{\theta}} G_{\theta}^{-1})\ge \det (\overline{\gamma_{\theta}} G_{\theta}^{-1})$, where $\overline{V_{\theta_I}}$ is the rescaled $2\times 2$ MSE matrix for $\theta_I=(\theta_1,\theta_2)$ and $\overline{\gamma_{\theta}}= {\overline{v_{33}}[\hat{\Pi}]}/{(\overline{v_{33}}[\hat{\Pi}]-g_{\theta}^{33})}$ is the rescaled factor. If we apply the estimation strategy considered above, we have \begin{align}\nonumber & \det\left(\frac{n}{n-\sqrt{n}}\overline{V_{\theta_I}}[\hat{\Pi}^n_{\mathrm{sep}}] -\overline{\gamma_{\theta}} G_{\theta}^{-1}\right)\ge \det (\overline{\gamma_{\theta}} G_{\theta}^{-1}), \\ & \overline{\gamma_{\theta}}= \frac{\overline{v_{33}}[\hat{\Pi}]}{\overline{v_{33}}[\hat{\Pi}]-n^{-1/2}g_{\theta}^{33}}. \end{align} We thus see that the MSE matrix $\overline{V_{\theta_I}}[\hat{\Pi}^n_{\mathrm{sep}}]$ for the relevant parameters satisfies \begin{equation} \det(\overline{V_{\theta_I}} -G_{\theta}^{-1})\ge \det (G_{\theta}^{-1}), \end{equation} in the $n\to\infty$ limit. We next discuss a more efficient way to achieve the previous bound, based on the optimal POVM for the Nagaoka bound \eqref{optPOVM2}. 
Given a sufficiently large number $n$ of copies of the quantum state $\rho_{\theta}$, we split the $n$ copies into $\sqrt{n}$ and the remaining $n-\sqrt{n}$. Let us use the first group to estimate the nuisance parameter $\theta_3$ and let the MSE be $v_{33}=c_{33} n^{-1/2}$. With this precision, we use the remaining $n-\sqrt{n}$ states to estimate $\theta_I=(\theta_1,\theta_2)$ with the optimal estimator described by \eqref{optPOVM2}. The limit $n\to\infty$ then leads to the bound \eqref{nhgm2}. To make this argument quantitative, let us assume that the true value of $\theta_3$ is $\theta^*_3$. We first make an estimate $\theta^*_3+\delta \theta_3$, with $\delta \theta_3$ a standard deviation ($\simeq$ square root of the MSE). With this estimate, let us perform the POVM of the form \eqref{optPOVM2}. We note that the error in $\theta_3$ enters solely in the matrix \eqref{ematrix}, and a straightforward calculation shows that the effect of this deviation gives rise to the change of parameters: \begin{equation} (\theta_1,\theta_2)\to(\theta_1\cos\delta\theta_3,\theta_2). \end{equation} Therefore, the classical Fisher information matrix for the outcomes of this sub-optimal measurement with this error in $\theta_3$ is expressed as \begin{equation} \Delta_\theta J_{\theta}[\Pi_{opt}]\, \Delta_\theta \mbox{ with } \Delta_\theta=\left(\begin{array}{cc} \cos\delta\theta_3&0 \\0 & 1 \end{array}\right). \end{equation} Clearly, for a small error $\delta\theta_3\simeq c_{33}/\sqrt{n}$ we can approximate $ \cos\delta\theta_3\simeq 1- c_{33}^2/2n$, and this correction decays fast enough to conclude that the Nagaoka bound \eqref{nhgm2} can be achieved at each $\theta_I=(\theta_1,\theta_2)$ asymptotically for a given weight matrix $W$. \section{Asymptotic bound: Holevo bound}\label{sec4} In this section we shall discuss the Holevo bound for our parametric model. As stated in Section \ref{sec2}, the Holevo bound can be achieved by a collective POVM $\hat{\Pi}^n$ acting on $\rho^{\otimes n}$ in the asymptotic limit. 
We will see that the Holevo bounds differ depending on whether the phase parameter is known or not. We first list the inverses of the SLD and RLD Fisher information matrices for the model. When the phase parameter is completely known, the model is two-dimensional and we have \begin{equation} \label{2sldrld} G_{\theta}^{-1} =\left(\begin{array}{cc}1-\theta_1^2 & -\theta_1\theta_2 \\[.5ex] -\theta_1\theta_2 & 1-\theta_2^2\end{array}\right),\quad \tilde{G}_\theta^{-1}=(1-s_\theta^2)\left(\begin{array}{cc}1 & 0 \\[0.5ex] 0 & 1\end{array}\right). \end{equation} Therefore, the RLD Fisher information is real, and we easily see that the SLD Fisher information is more informative than the RLD Fisher information, i.e., $G_{\theta}^{-1}\ge \tilde{G}_\theta^{-1}$. When the phase parameter $\theta_3$ is not known precisely and needs to be estimated, the inverse matrices of the two quantum Fisher information matrices are \begin{align}\nonumber G_{\theta}(3)^{-1}&= \left(\begin{array}{ccc}1- \theta_1^2& -\theta_1\theta_2 &0 \\[.5ex] -\theta_1\theta_2 & 1-\theta_2^2 & 0 \\[.5ex]0 & 0 & 1/\theta_1^2\end{array}\right),\\ \tilde{G}_\theta(3)^{-1}&= \left(\begin{array}{ccc}1- \theta_1^2& -\theta_1\theta_2 & -\mathrm{i}\theta_2/\theta_1 \\[.5ex] -\theta_1\theta_2 & 1-\theta_2^2 & \mathrm{i} \\[.5ex] \mathrm{i}\theta_2/\theta_1 & -\mathrm{i} & 1/\theta_1^2\end{array}\right). \end{align} It is easy to see that $\mathrm{Re}\,\{\tilde{G}_\theta(3)^{-1}\}=G_{\theta}(3)^{-1}$ and that the difference of the two matrices is neither positive nor negative; that is, there is no ordering between $\tilde{G}_\theta(3)$ and $G_{\theta}(3)$. \subsection{No nuisance parameter case} The Holevo bound can be evaluated by an optimization over the tangent space at $\theta$: $T_\theta=\mathrm{span}_{{\mathbb R}}\{ L_{\theta,1},L_{\theta,2},\dots,L_{\theta,k}\}$. A straightforward calculation shows that the Holevo bound coincides with the SLD CR bound. An alternative way to see this simple fact is as follows. 
Consider the cotangent vectors of the SLD operators defined by \begin{equation} L^j_\theta=\sum_{i=1}^2 (G_\theta^{-1})_{ij}L_{\theta,i}\ (j=1,2). \end{equation} By inserting $\vec{X}=(L^1_\theta,L^2_\theta)=:\vec{L_\theta}$ into the Holevo function $h_\theta[X|W]$, we see that the imaginary part of the matrix $Z_\theta[\vec{X}]$ vanishes. In this case the Holevo function coincides with the SLD CR bound, i.e., \begin{equation} C_\theta^H[W]=h_\theta[\vec{L}_\theta|W]=\Tr{WG_\theta^{-1}}=C_\theta^S[W] . \end{equation} Using the simple fact that $\Tr{WA}\ge 0,\ \forall W>0\Rightarrow A\ge0$ for any $k\times k$ real symmetric matrix $A$, we arrive at a rather remarkable result: for any locally unbiased estimator, the MSE matrix satisfies \begin{equation} V_\theta[\hat{\Pi}]\ge G_\theta^{-1}, \end{equation} where the equality can be attained by a sequence of collective POVMs which are asymptotically unbiased in the $n\to\infty$ limit, i.e., $\lim_{n\to\infty}n V_\theta[\hat{\Pi}^n]=G_\theta^{-1}$. Correspondingly, the MSE region allowed by the Holevo bound is \begin{equation} D_H:=\{ V\in M_h\,|\, V\ge G_\theta^{-1}\}. \end{equation} This proves that the SLD CR bound is achievable in the asymptotic limit, even though the two SLD operators do not commute. \subsection{Nuisance parameter case} Any qubit model of estimating three parameters becomes D-invariant if all SLD operators are linearly independent. This is true in our case as well, and the Holevo bound is identical to the RLD CR bound. Thus, we have \begin{equation} C^H_\theta[W]=\Tr{W G_\theta(3)^{-1}}+\mathrm{TrAbs}\{W\mathrm{Im}\,\tilde{G}_\theta(3)^{-1}\}. \end{equation} The second term can be simplified as follows. Let $A=W\mathrm{Im}\tilde{G}_\theta(3)^{-1}$ be the $3\times 3$ real matrix whose eigenvalues are to be calculated. This matrix $A$ has a good symmetry which gives \begin{equation} \Tr{A^\ell}=0\ \mbox{for odd }\ell, \end{equation} and $\det{A}=0$. 
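As a quick consistency check, using the standard expansion of the characteristic polynomial of a $3\times3$ matrix, the conditions $\Tr{A}=0$ and $\det A=0$ give \begin{equation*} \det(\lambda I-A)=\lambda^3+\frac{(\Tr{A})^2-\Tr{A^2}}{2}\,\lambda-\det A =\lambda\left(\lambda^2-\frac{\Tr{A^2}}{2}\right), \end{equation*} so the spectrum of $A$ is $\{0,\pm\sqrt{\Tr{A^2}/2}\}$.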
The Cayley-Hamilton theorem then gives that the eigenvalues of $A$ are $0$ and $\pm\sqrt{\Tr{A^2}/2 }$. The second term of the Holevo bound is thus written as \begin{equation} \mathrm{TrAbs}\{W\mathrm{Im}\,\tilde{G}_\theta(3)^{-1}\}=\sqrt{2\Tr{(W\mathrm{Im}\,\tilde{G}_\theta(3)^{-1})^2}}. \end{equation} If we set the weight matrix $W$ in the block-diagonal form \eqref{bdform}, the above term reads \begin{multline} \mathrm{TrAbs}\{W(3)\mathrm{Im}\,\tilde{G}_\theta(3)^{-1}\}\\ =2\sqrt{w_3g_\theta^{33}}\sqrt{\Tr{W(G_\theta^{-1}-\tilde{G}_\theta^{-1})}}. \end{multline} Here $W$ is the $2\times 2$ block matrix, and $G_\theta^{-1}$ and $\tilde{G}_\theta^{-1}$ are the inverses of the SLD and RLD Fisher information matrices for the known-phase case, i.e., Eqs.~\eqref{2sldrld}. The final form of the Holevo bound is \begin{multline}\label{holevow3} C^H_\theta[W(3)]=\Tr{W G_\theta^{-1}}+w_3g_\theta^{33}\\ +2\sqrt{w_3g_\theta^{33}}\sqrt{\Tr{W(G_\theta^{-1}-\tilde{G}_\theta^{-1})}}. \end{multline} By analyzing $\Tr{W(3)V_\theta}\ge C^H_\theta[W(3)]$ for all $W(3)>0$ as before, we obtain the MSE region allowed by the Holevo bound as \begin{align}\nonumber D_H(3)=\{ V\in M_{+}(3)\,|\, V_2&\ge\gamma_\theta G_\theta^{-1}-(\gamma_\theta-1) \tilde{G}_\theta^{-1} ,\\ & V_2> G_\theta^{-1},\ v_{33}>g_\theta^{33}\}, \end{align} where $\gamma_\theta$ is defined by Eq.~\eqref{gamma1}. From this result, we see that the first term corresponds to the case of no nuisance parameter with the scaling factor $\gamma_\theta$. The second term, which is a negative matrix, represents a non-trivial contribution from collective measurements. To see the structure of this Holevo bound, we rewrite the right-hand side as $V_2\ge G_\theta^{-1}+(\gamma_\theta-1)(G_\theta^{-1}- \tilde{G}_\theta^{-1})\ge G_\theta^{-1}$. The last matrix inequality follows from $\gamma_\theta>1$ and $G_\theta^{-1}\ge \tilde{G}_\theta^{-1}$. 
Clearly, this shows that the Holevo bound for the two-parameter case with no nuisance parameter cannot be attained exactly. However, by the same argument as before, one can find a sequence of POVMs such that $\gamma_\theta\to 1$ in the asymptotic limit. The MSE matrix for the relevant parameters $\theta_1,\theta_2$ then behaves as $V_{\theta_I} \simeq G_\theta^{-1}/n$. \subsection{Comparison and discussion} From the above results, we can expect that the ultimate bound for a quantum estimation problem can in general be quite different depending on whether there are nuisance parameters or not. This is because the errors in the nuisance parameters enter the bound on the MSE matrix for the relevant parameters. The model studied in this paper is a very special one in the sense that the quantum Fisher information and all the bounds do not depend on the value of the phase parameter (the nuisance parameter). When the precision bound depends on the nuisance parameter, one has to substitute a rough estimate or adopt the worst case in order to derive a reliable MSE bound for the relevant parameters. We also point out that our model satisfies the orthogonality condition with respect to the SLD Fisher information. This orthogonality condition plays an important role in the classical estimation problem, where it guarantees the equality in \eqref{nuiineq}; that is, the bounds become the same regardless of whether there are nuisance parameters or not. In the quantum case, on the other hand, our result indicates that the quantum version of the orthogonality condition does not by itself imply the same bound for the nuisance parameter case. In the first place, there are many quantum versions of the Fisher information, and we cannot say that $\theta_I$ and $\theta_N$ are orthogonal in general. Indeed, it happens in our model that they are orthogonal with respect to the SLD Fisher information but not with respect to the RLD Fisher information. 
Although our model is a very simple qubit system, it contains interesting and unique features of the quantum parameter estimation problem. Let us briefly compare the bounds for separable POVMs and collective POVMs. In our model, the case of no nuisance parameter shows that the Nagaoka bound \eqref{nhgm2}, which is strictly greater than the SLD CR bound, can be improved significantly, down to the SLD CR bound, by collective measurements. When the value of the phase $\theta_3$ is not known with infinite precision, the Holevo bound is strictly greater than the SLD CR bound. Collective POVMs improve the HGM bound, but we cannot reach the SLD CR bound for finite $n$. The analysis of this problem indicates: i) the importance of the imaginary part of the RLD Fisher information and ii) the need for a proper treatment of nuisance parameters in the quantum estimation problem. \section{Summary and outlook}\label{sec5} In this paper, we have discussed a simple quantum two-dimensional parametric model with an unknown (nuisance) parameter. It is clear that if the nuisance parameter is not orthogonal to the parameters of interest, one cannot ignore the effects of the nuisance parameter in general. The case of an unknown phase parameter was analyzed, and we have shown that the bound can be achieved asymptotically. The more detailed asymptotic behavior of the optimal estimator with a nuisance parameter should be studied in future work. The general structure of quantum parameter estimation theory with nuisance parameters needs to be explored. We do not know if orthogonal nuisance parameters can always be estimated similarly to what was done in this paper. It is clear that non-orthogonal parameters cannot be estimated with the same error even asymptotically. This conclusion directly follows from classical statistics, particularly the general inequality \eqref{nuiineq}. However, in the quantum case, the orthogonality condition does not guarantee efficient estimation, as discussed in the previous subsection. 
This is also because the optimal measurements in general depend on the unknown parameters, and a more detailed analysis is needed. There are many important examples where the effects of nuisance parameters matter. An immediate application is quantum metrology in the presence of unavoidable noise. The values of the noise parameters are, by definition, not known with infinite precision. Hence, one should take into account the fact that small errors in the knowledge of the noise parameters might spoil an efficient estimation that goes beyond the classical precision scaling. These are largely unexplored territories, and we shall make progress in due course. \section*{Acknowledgement} The author is indebted to Prof.~Hiroshi Nagaoka for invaluable discussions and suggestions. He thanks Huangjun Zhu for providing information about the manuscript \cite{hjz14}. \section*{Appendix. Formulas} We list useful formulas for computing the SLD and RLD operators and the corresponding Fisher information based on the Bloch vector. A given qubit model can also be regarded as a model described by a three-dimensional real vector: \begin{equation} \label{qbmodel} {\cal M}_{{\cal B}}=\left\{\v{s}_{\theta}\in{\cal B}\, |\, \theta\in\Theta\subset{\mathbb R}^k \right\}. \end{equation} Given a quantum statistical model \eqref{qbmodel}, the SLD and RLD operators are expressed as \begin{align} \label{sldbrep} L_{\theta,i}&=-\frac{\vecin{\partial_i \v{s}_{\theta}}{\v{s}_{\theta}}}{1-s_\theta^2} \sigma_0+\left(\partial_i\v{s}_{\theta} + \frac{\vecin{\partial_i \v{s}_{\theta}}{\v{s}_{\theta}}}{1-s_\theta^2}\v{s}_{\theta}\right)\cdot \v\sigma \\ \nonumber \tilde{L}_{\theta,i}&=\frac{1}{1-s_\theta^2}\left[-\vecin{\partial_i \v{s}_{\theta}}{\v{s}_{\theta}} \sigma_0 +\left(\partial_i\v{s}_{\theta}-\mathrm{i} \v{s}_{\theta}\times\partial_i\v{s}_{\theta}\right)\cdot \v\sigma\right]. 
\end{align} The SLD and RLD Fisher information matrices take rather simple expressions: \begin{align} g_{\theta,ij}&=\vecin{\partial_i\v{s}_{\theta}}{\partial_j\v{s}_{\theta}} +\frac{\vecin{\partial_i\v{s}_{\theta}}{\v{s}_{\theta}}\vecin{\v{s}_{\theta}}{\partial_j\v{s}_{\theta}}}{1-s_\theta^2} \\ \nonumber &=\frac{1}{1-s_\theta^2}\left(\vecin{\partial_i\v{s}_{\theta}}{\partial_j\v{s}_{\theta}}-\vecin{\partial_i\v{s}_{\theta}\times\v{s}_{\theta}}{\partial_j\v{s}_{\theta}\times\v{s}_{\theta}} \right), \\ \tilde{g}_{\theta,ij}&=\frac{1}{1-s_\theta^2}\left(\vecin{\partial_i\v{s}_{\theta}}{\partial_j\v{s}_{\theta}} +\mathrm{i}\vecin{\partial_i\v{s}_{\theta}\times\partial_j\v{s}_{\theta}}{\v{s}_{\theta}} \right). \end{align} In the above expressions, $s_\theta^2\equiv \vecin{\v{s}_{\theta}}{\v{s}_{\theta}}$ denotes the squared length of the Bloch vector $\v{s}_{\theta}$. The Holevo bound can also be expressed in terms of Bloch vectors as follows. Let $T_{\theta,i}^{\bot}=\{\v x\in{\mathbb R}^3\,|\, \vecin{x}{\partial_i \v{s}_{\theta}}=0 \}$ be the space orthogonal to the $i$th derivative of the Bloch vector. A linear operator which satisfies $\tr{\rho_\theta X}=0$ and $\partial_i\tr{\rho_\theta X}=0$ can be expressed as \begin{equation} X=-\vecin{\v{s}_{\theta}}{\v x}\sigma_0+{\v x}\cdot{\v \sigma}\mbox{ with } \v x\in T_{\theta,i}^{\bot}, \end{equation} and an element of the set appearing in the definition \eqref{Xset} takes the form $\vec{X}=(X^1,X^2,\dots,X^k)$ with \begin{align}\nonumber X^i&=-\vecin{\v{s}_{\theta}}{\v x}^i\sigma_0+{\v x}^i\cdot{\v \sigma},\\ \v{x}^i&\in\bigcap_{j\neq i} T_{\theta,j}^{\bot},\quad \vecin{\v{x}^i}{\partial_i\v{s}_{\theta}}=1. 
\end{align} Using this Bloch vector representation, the $Z_\theta[\vec{X}]$ matrix reads \begin{align}\nonumber \mathrm{Re} \,z_\theta^{ij}[X]&=\vecin{\v x^i}{\v x^j}-\vecin{\v x^i}{\v{s}_{\theta}}\vecin{\v{s}_{\theta}}{\v x^j},\\ \mathrm{Im} \,z_\theta^{ij}[X]&=-\vecin{\v x^i\times\v x^j}{\v{s}_{\theta}}. \end{align} As noted in the text, the Holevo bound coincides with the RLD CR bound when $k=3$ for any qubit system if all SLD operators are linearly independent. Thus, the Holevo bound for $k=2$ is of interest and needs to be analyzed. With straightforward calculations, we have \begin{multline} h_\theta[X|W]=\sum_{i,j=1}^2w_{ij}(\vecin{\v x^i}{\v x^j}-\vecin{\v x^i}{\v{s}_{\theta}}\vecin{\v{s}_{\theta}}{\v x^j}) \\ +2\sqrt{\det W}\big|\vecin{\v x^i\times\v x^j}{\v{s}_{\theta}} \big|, \end{multline} for a given weight matrix $W=[w_{ij}]_{i,j\in\{1,2\}}$. Note that this is a quadratic form with respect to $\v x^i$, and the Holevo bound can be obtained by a standard optimization procedure.
\section{Introduction} \par Recently, Machine Learning (ML) and Deep Learning (DL) models have been used in various fields of physics \cite{Carleo2019,sudhe1,sudhe3}. In the study of the dynamics of nonlinear systems, ML and DL algorithms are extensively used for the prediction and discovery of the behaviour of chaotic and complex systems. For example, they have been used to identify chimera states \cite{BARMPARIS2020,ganaie2020}, in the replication of chaotic attractors \cite{Pathak2017}, using symbolic time series for network classification \cite{panday2021}, separating chaotic signals \cite{Krishnagopal2020}, learning dynamical systems in noise \cite{santo1} and in the prediction of extreme events \cite{meiyazhagan2021,meiyazhagan2021-2,PYRAGAS2020,dibak1,dibak2,asch2021model}. Very recently, the authors of Ref.~\cite{lellep2020} considered the H\'enon map and used an ML algorithm, namely an Artificial Neural Network (ANN), to study the extreme events in it. The authors focussed on binary classification and classified the data points as extreme and non-extreme \cite{lellep2020}. \par In our studies, we consider the logistic map with quasiperiodic forcing and predict the time series of this discrete-time system. The system exhibits chaos in two different regimes. We predict both chaotic attractors of this system with the help of a DL framework, namely Long Short-Term Memory (LSTM). The logistic map with quasiperiodic forcing is described by the following equations \cite{prasad1998,heagy1994}: \begin{subequations}\label{map} \begin{equation} x_{n+1} = \alpha [1+\epsilon\cos(2\pi\phi_n)]x_n(1-x_n), \end{equation} \begin{equation} \phi_{n+1}=\phi_n+\omega\; (\textrm{mod 1}), \end{equation} \end{subequations} where $\epsilon$ and $\omega=(\sqrt{5}-1)/2$ are the forcing amplitude and the irrational driving frequency, respectively. 
The authors in Ref.~\cite{prasad1998} redefined the driving parameter as $\epsilon ' =\epsilon/(4/\alpha-1)$ to study the dynamics of the system in the regimes $0\leq x \leq 1$, $0\leq \phi \leq 1$ and $0\leq \epsilon \leq 1$. The schematic phase diagram \cite{prasad1998} of the system is given in Fig.~\ref{fig:phase}. \begin{figure}[!ht] \centering \includegraphics[width=0.8\linewidth]{phase250.pdf} \caption{\label{fig:phase} Schematic phase diagram of the quasiperiodically forced logistic map. $C_1$ and $C_2$ are two different chaotic regimes.} \end{figure} The system shows various dynamical behaviours, namely periodic, strange nonchaotic and chaotic attractors, which can be characterized by the nonzero Lyapunov exponent $\Lambda$ \cite{prasad1998}, where \begin{equation} \Lambda = \lim_{N\rightarrow\infty}\dfrac{1}{N}\sum_{i=1}^{N}\ln|\alpha[1+\epsilon\cos(2\pi\phi_i)](1-2x_i)|. \end{equation} From Fig.~\ref{fig:phase}, we can notice the interesting behaviour of the considered system, which has two chaotic regimes, namely $C_1$ and $C_2$. The $C_1$ regime is the continuation of the chaotic regime of the logistic map for $\epsilon=0$ at the end of the period-doubling cascade, at $\alpha=3.5699...$. The chaos in the $C_2$ regime is due to low nonlinearity and large-amplitude forcing \cite{prasad1998}. Our aim is to predict the chaotic attractors in both regimes using the LSTM model, since it is capable of forecasting data in the form of a sequence. \par We organize our work as follows. In Sec. 2, we discuss the generation of the training and testing data. In Sec. 3, we consider a DL framework called LSTM, train it using the training set data and predict the test set data. The performance of the LSTM model is discussed in Sec. 4. We present the conclusion in Sec. 5. \section{Data preparation} \par Generating data is the foremost task in prediction, because prediction is done only by learning the relationship between the given data. 
We calculate the value of $x$ for $10^5$ iterations using Eq.~\eqref{map} in both regimes $C_1$ and $C_2$. This discrete data is then converted into supervised learning data by taking $x_n$ as the input and $x_{n+1}$ as the output. The chaotic attractors in both regimes $C_1$ and $C_2$ are shown in Fig.~\ref{fig:tt}. Figs.~\ref{fig:tt} (a) \& (b) correspond to the regime $C_1$ and Figs.~\ref{fig:tt} (c) \& (d) correspond to the $C_2$ regime. \begin{figure}[!ht] \centering \includegraphics[width=1.0\textwidth]{train_test_big1.jpg} \caption{\label{fig:tt} Chaotic attractors in two different regimes $C_1$ and $C_2$. (a) $\alpha=3.6, \epsilon '=0.5$ and (b) $\alpha=3.9, \epsilon '=1.0$ correspond to $C_1$. (c) $\alpha=3.0, \epsilon '=1.0$ and (d) $\alpha=3.1, \epsilon '=0.8$ correspond to $C_2$. The points in blue (colour online) denote the training set data and those in red (colour online) denote the test set data.} \end{figure} The values of the parameters are taken as (a) $\alpha=3.6, \epsilon '=0.5$, (b) $\alpha=3.9, \epsilon '=1.0$, (c) $\alpha=3.0, \epsilon '=1.0$ and (d) $\alpha=3.1, \epsilon '=0.8$. We divide the data into two parts: (i) a training set and (ii) a test set. The training set data are used during the training process of the DL model, and the test set data are used to evaluate the ability of the DL model. In Fig.~\ref{fig:tt}, the blue dots are the data used for training and the red coloured data are used for testing. We use $6\times 10^4$ data points as the training set and $4\times 10^4$ data points as the test set. \par These two sets of data are rescaled using min-max normalization, which is given by the formula \cite{al2006data} \begin{equation} x_i^{rescaled} = a + \dfrac{(x_i-x_{min})(b-a)}{x_{max}-x_{min}}, \qquad i = 1,2,3,\hdots,n, \end{equation} where $x_{min}$ and $x_{max}$ are the minimum and maximum values of the data set, respectively. We fix $a=-1$ and $b=+1$ in order to scale the data between $-1$ and $+1$. 
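The data-generation and rescaling steps above can be sketched in NumPy as follows (a minimal sketch; the function names are our own and not from the paper):

```python
import numpy as np

def forced_logistic_series(alpha, eps_prime, n_steps, x0=0.4, phi0=0.0):
    """Iterate the quasiperiodically forced logistic map, Eq. (1),
    with the rescaled driving parameter eps' = eps / (4/alpha - 1)."""
    omega = (np.sqrt(5.0) - 1.0) / 2.0      # irrational driving frequency
    eps = eps_prime * (4.0 / alpha - 1.0)   # recover eps from eps'
    x = np.empty(n_steps)
    x[0], phi = x0, phi0
    for n in range(n_steps - 1):
        x[n + 1] = alpha * (1.0 + eps * np.cos(2.0 * np.pi * phi)) * x[n] * (1.0 - x[n])
        phi = (phi + omega) % 1.0
    return x

def minmax_scale(x, a=-1.0, b=1.0):
    """Min-max normalize to [a, b]; return the bounds so the step can be reversed."""
    x_min, x_max = x.min(), x.max()
    return a + (x - x_min) * (b - a) / (x_max - x_min), (x_min, x_max)

def minmax_inverse(x_scaled, bounds, a=-1.0, b=1.0):
    """Undo the min-max scaling after prediction."""
    x_min, x_max = bounds
    return x_min + (x_scaled - a) * (x_max - x_min) / (b - a)
```

The inverse transform is the one applied in the testing phase, when the network output is mapped back to the original range before comparison with the actual data.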
During the testing phase, this preprocessing scaling step is reversed after obtaining the output from the DL model, in order to compare the results with the actual data. \section{Deep Learning framework: Long Short-Term Memory} \par When the data is in a sequential form, one can make use of Recurrent Neural Networks (RNN) \cite{rumelhart1986}, which are a type of ANN. For the present study we consider a DL framework known as LSTM \cite{hochreiter1997}, which is a special kind of RNN. In recent years, the LSTM framework has proven capable of forecasting time series of chaotic systems, even when there are extreme events in the time series \cite{meiyazhagan2021,dibak1,dibak2}. The main feature that differentiates LSTM from other RNNs is that the latter have only one activation function for the neurons, namely $\tanh$, whereas in the former a sigmoid function is used for the recurrent activations and $\tanh$ is used for the activation of the neurons. The sigmoid activation function is defined by \cite{goodfellow2016} \begin{equation} \sigma (z) = \dfrac{1}{1+e^{-z}}. \end{equation} \par We construct the LSTM model in the following way. We consider two LSTM layers, each having 16 units, followed by a layer which has one neuron for the output. During training, we give both the input and the corresponding output to the model, that is, we give $x_n$ as the input and $x_{n+1}$ as the output. By doing this, the model learns the nonlinear relations between the given data. After training, the learned model is used to forecast future steps. \begin{figure}[!ht] \centering \includegraphics[width=1.0\linewidth]{major_result_big1.jpg} \caption{\label{fig:main_results} Plots of forecasted values over the actual values for four different sets of $\alpha$ and $\epsilon$ values as mentioned in Fig.~\ref{fig:tt}. Figs.~(a) \& (b) correspond to the $C_1$ regime and (c) \& (d) correspond to the $C_2$ regime. 
Black dots denote the actual values and green dots denote the predicted values.} \end{figure} During the testing phase, we feed only the input data and ask the model for the corresponding output. The values predicted by the LSTM model are compared with the actual values to determine the efficiency of the model in forecasting the chaotic attractors of the considered system. \begin{figure}[!ht] \centering \includegraphics[width=0.8\linewidth]{major_scatt.jpg} \caption{\label{fig:main_scatter} Scatter plots and RMSE values of the four different cases. The plots in green correspond to the regime $C_1$ and those in blue to $C_2$.} \end{figure} \section{Results and Discussion} \par To visualize the performance of the considered DL model, the predicted data are plotted over the actual data in Fig.~\ref{fig:main_results}. Black dots (colour online) denote the actual values and green dots (colour online) denote the predicted values. The data in Figs.~\ref{fig:main_results} (a) \& (b) correspond to the $C_1$ regime and Figs.~\ref{fig:main_results} (c) \& (d) correspond to the $C_2$ regime. From the plots, we can see that almost all predicted data coincide with the actual data. To gain a clearer understanding of the efficiency of the model, we calculate the Root Mean Square Error (RMSE) using the formula \begin{equation} RMSE = \sqrt{\sum_{i=1}^{N_{Test}}\dfrac{(\hat{Y}_i^{Test}-Y_i^{Test})^2}{N_{Test}}}, \end{equation} where $\hat{Y}_i^{Test}$, $Y_i^{Test}$ and $N_{Test}$ denote the predicted values, the actual values and the total number of data points in the test set, respectively. We make use of scatter plots, which are plotted by taking the actual values on the $x$-axis and the predicted values on the $y$-axis (see Fig.~\ref{fig:main_scatter}). From Figs.~\ref{fig:main_scatter} (a) \& (b) we can see that the RMSE values for the regime $C_1$ are $0.015$ and $0.012$, respectively, for the parameter values $\alpha=3.6, \epsilon '=0.5$ and $\alpha=3.9, \epsilon '=1.0$. 
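The RMSE formula above is straightforward to reproduce (a sketch; `rmse` is our own helper name, not from the paper):

```python
import numpy as np

def rmse(y_pred, y_true):
    """Root Mean Square Error between predicted and actual test-set values."""
    y_pred = np.asarray(y_pred, dtype=float)
    y_true = np.asarray(y_true, dtype=float)
    return float(np.sqrt(np.mean((y_pred - y_true) ** 2)))
```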
The scatter plots almost fit a straight line, thereby indicating that the difference between the predicted and actual values is very low. From Figs.~\ref{fig:main_scatter} (c) \& (d) we can see that the RMSE values for the second regime $C_2$ are calculated as $0.013$ and $0.004$, respectively, for the parameter values $\alpha=3.0, \epsilon '=1.0$ and $\alpha=3.1, \epsilon '=0.8$. The scatter plots for the test set data of regime $C_2$ also show very little scatter, thereby indicating the best fit of the predicted data with the actual data. \begin{figure}[!ht] \centering \includegraphics[width=0.9\linewidth]{units.pdf} \caption{\label{fig:units} RMSE values for various numbers of units in the LSTM layers. (a) and (b) correspond to the regimes $C_1$, $\alpha=3.6, \epsilon '=0.5$ and $C_2$, $\alpha=3.0, \epsilon '=1.0$, respectively.} \end{figure} \subsection{Effect of model architecture} \par To study the effect of the model architecture on the performance of the considered model, we vary the number of units and analyse the performance based on the RMSE values. For this purpose, we change the number of units in both LSTM layers and train the model. Each trained model is then evaluated using the test set data. The outcome is shown in Fig.~\ref{fig:units}. For the $C_1$ regime, we evaluate the model with the data corresponding to $\alpha=3.6, \epsilon '=0.5$ and plot the results in Fig.~\ref{fig:units} (a). For the $C_2$ regime, we evaluate the model with the data corresponding to $\alpha=3.0, \epsilon '=1.0$ and plot the results in Fig.~\ref{fig:units} (b). The RMSE value changes as the number of units in the LSTM layers is varied. \subsection{Multi-step forecasting} \par Now, we consider the task of multi-step forecasting. To do this, while preparing the supervised learning data, instead of having only one future value, we take more than one value at the output. 
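Building such multi-step supervised pairs from the scalar series can be sketched as follows (the helper name is our own, not from the paper):

```python
import numpy as np

def make_supervised(x, n_out=1):
    """Build supervised pairs from a scalar series: input x_n, output
    the next n_out values (x_{n+1}, ..., x_{n+n_out})."""
    n_pairs = len(x) - n_out
    X = np.asarray(x[:n_pairs], dtype=float).reshape(-1, 1)
    Y = np.array([x[n + 1 : n + 1 + n_out] for n in range(n_pairs)], dtype=float)
    return X, Y
```

Setting `n_out=1` recovers the one-step pairs used earlier; `n_out` between 2 and 5 gives the multi-step targets examined in this subsection.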
For this we consider the data in both the regimes $C_1$ ($\alpha=3.6, \epsilon '=0.5$) and $C_2$ ($\alpha=3.0, \epsilon '=1.0$). The results of multi-step forecasting are shown in Fig.~\ref{fig:sctt_step}. \begin{figure}[!ht] \centering \includegraphics[width=1\linewidth]{step_scatt_chaos.jpg} \caption{\label{fig:sctt_step} Scatter plots with RMSE values for the multi-step forecasting. (a)-(d) correspond to the $C_1$ regime $\alpha=3.6, \epsilon '=0.5$ and (e)-(h) correspond to the $C_2$ regime $\alpha=3.0, \epsilon '=1.0$.} \end{figure} From this figure we can infer that in forecasting two and three steps ahead, the considered model exceeded our expectations in the prediction task: the plots have few scattered points and the RMSE values are within an admissible range. But for steps four and five, the model failed to give accurate values in both regimes. As can be seen from Figs.~\ref{fig:sctt_step} (c), (d), (g) and (h), the points are widely scattered for the four- and five-step forecasts. \section{Conclusion} \par In this work, we have considered the logistic map with quasiperiodic forcing. The system exhibits chaos in two different regimes. We employed a DL framework, the LSTM, for the prediction of the chaotic dynamics in these two regimes. For this, we generated a total of $10^5$ data points and used $6\times 10^4$ of them for training and the remaining $4\times 10^4$ for testing. We forecast the chaos corresponding to the two regimes $C_1$ and $C_2$. The outcomes of the experiments are evaluated using the performance metric RMSE and are analyzed through scatter plots of the predicted values against the actual values. Further, we have checked the effect of the number of units of the LSTM layers on the performance of the model. In addition, we have performed multi-step forecasting in order to predict more than one future value of the considered map.
From the obtained results, we conclude that the developed LSTM framework can be used for forecasting the chaotic dynamics of the discrete system, namely quasiperiodically forced logistic map described by Eq.~\eqref{map}. \section*{Acknowledgements} JM thanks RUSA 2.0 project for providing a fellowship to carry out this work. MS acknowledges RUSA 2.0 project for providing financial support in procuring a high-performance GPU server which highly assisted this work.
\section{Introduction} An equation of state, such as the ideal gas law, is a mathematical relation between physical constants and macroscopically observable properties of a single phase of a system in equilibrium~\cite{LandauLifshitzStatMech}. Equations of state are path-independent, and so can be explored by changing a system along any convenient intraphase path in state space between equilibria. Interphase paths include a phase transition --- a discontinuous change in one or more system properties, such as the significant volume increase when liquid water evaporates. Non-equilibrium paths, whether intraphase or interphase, can also be used to infer an equation of state, but an assumption is required to link the non-equilibrium states to the equilibrium states. This is the case in shock-wave physics where otherwise unreachable high pressure and high density regions of state space are explored. Pressure--density curves from shock-wave experiments do not provide enough thermodynamic information to infer an equation of state~\cite{Cowperthwaite1965} {(because other state variables also vary)}, but can be used to fit parameters in an assumed equation of state. We explore parameter estimation in the ideal gas equation of state, applied to a two-dimensional complex plasma. We demonstrate that this strongly-coupled system can be described by the ideal gas law, which is strictly valid only for systems of weakly-interacting particles. A laboratory complex plasma consists of plastic microspheres suspended in a low-density ionized gas. The microspheres are often referred to as \textit{dust} particles in analogy with dusty plasmas observed in astronomy~\cite{Merlino2004,Newbury1997}. Fast-moving electrons and relatively slow-moving ions in the plasma deposit a net negative charge on the dust particles, which repel each other via a screened Coulomb force (Yukawa or Debye-H\"uckel) \cite{Shukla2009:RMP}.
Condensed-matter-like behavior results when the dust is confined electrostatically, with the dust mimicking microscopic constituents of a fluid (individual molecules or atoms), yet being observable on a macroscopic scale (even to the naked eye). The space between dust particles is occupied by a rarefied gas, so these dusty plasma structures experience weak damping, and are therefore considered to be representative models of liquids and solids~\cite{Morfill2009}. Dusty plasmas are an excellent vehicle for exploring the microscopic kinematics of melting processes and crystal formation. These kinematics are influenced by the local coupling constant $\Gamma$, which is the ratio of (interparticle interaction) potential energy to (thermal) kinetic energy for each particle. Ideal gases are weakly coupled with $\Gamma<1$. A thermodynamic description of the dust is provided by {state variables} which can be calculated from the kinematics of the individual particles. Individual particle positions extracted from images are used to determine both the dust density (via Voronoi analysis~\cite{Voronoi1908,Aurenhammer91voronoidiagrams}), and the coupling constant~\cite{Knapek2007}. Particle velocities are used to determine the kinetic temperature~\cite{Oxtoby2012a,Samsonov2008:IEEE}. {Fluctuations in these statistical properties are negligible in the thermodynamic limit, and at equilibrium. For finite systems out of equilibrium, the statistical description retains validity, but fluctuations will be non-negligible.} Dust kinematics are normally estimated using particle tracking velocimetry (PTV)~\cite{Stegeman1995PTV}, where \textit{average} velocity $\vec{v}_\mathrm{PTV}(t+T/2)=[\vec{x}(t+T)-\vec{x}(t)]/T$ is calculated from consecutive position measurements $\vec{x}(t+T)$, $\vec{x}(t)$, which are extracted from a sequence of images taken with a high-speed camera at a frame rate of $1/T$ (typically 500--1000 frames per second). 
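The PTV estimate above is just a finite difference of consecutive positions; as a sketch (the frame rate matches the text, but the positions below are made-up values):

```python
def ptv_velocity(x_prev, x_next, T):
    """Average velocity between two consecutive frames taken T seconds apart."""
    return [(b - a) / T for a, b in zip(x_prev, x_next)]

T = 1.0 / 500.0  # 500 frames per second
v = ptv_velocity([1.0, 2.0], [1.01, 2.02], T)  # 2-D positions in mm
print(v)  # velocity components in mm/s
```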
The velocity calculated in this way is subject to two sources of inaccuracy: position uncertainty in the measurement, and nonzero acceleration. For very high frame rates $T\rightarrow 0$, $v_\mathrm{PTV}$ is limited by position uncertainty, which is due to finite pixel size and noise in the camera sensor~\cite{Feng2007,FengRSI11}. These limitations can lead to artifacts in results calculated from PTV-estimated kinematics. Recursive state estimation (also known as \textit{object tracking}~\cite{BarShalom,JFRopaedia}) has been employed to estimate the kinematics of dusty plasma particles~\cite{hadziavdic2006,Oxtoby2012a}. Object tracking algorithms filter noisy measurements via a set of equations to produce estimates of the \textit{instantaneous} kinematics which are resilient to the limitations discussed above. The most ubiquitous recursive Bayesian estimator is the Kalman filter~\cite{Kalman1960}. In this work we employed object tracking using an interacting multiple model tracker~\cite{Oxtoby2012a} based on Kalman filtering (KF) to generate thousands of simultaneous particle tracks from shock-wave experiments on a two-dimensional (2D) dusty plasma. We used Rankine-Hugoniot relations~\cite{BondGasDynamics} to calculate Hugoniot curves arising from the estimated kinematics, and observed ideal gas behavior despite the strong coupling between the dust particles ($\Gamma\gg1$). Our experimental data fit the combined ideal gas/Rankine-Hugoniot model very well, but more complex models may be necessary for other regions of parameter space. We compared our KF results with those from PTV. The PTV results are unreliable due primarily to the significant particle acceleration in shock-wave experiments, and we observed a resulting systematic error that gave rise to a bias in the parameter estimation. Our object tracking algorithm avoids this bias by including particle acceleration, along with position and velocity, in the recursive estimation. 
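As a toy illustration of Kalman filtering with acceleration included in the state, the following one-dimensional sketch tracks scalar position measurements with a constant-acceleration model (this is a minimal illustration, not the interacting multiple model tracker used in this work):

```python
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

def kalman_track(measurements, dt, q=1e-4, r=1e-2):
    """Filter noisy positions with a constant-acceleration state model."""
    F = [[1.0, dt, 0.5 * dt * dt],
         [0.0, 1.0, dt],
         [0.0, 0.0, 1.0]]
    x = [0.0, 0.0, 0.0]  # state: position, velocity, acceleration
    P = [[10.0 if i == j else 0.0 for j in range(3)] for i in range(3)]
    for z in measurements:
        # Predict
        x = [sum(F[i][k] * x[k] for k in range(3)) for i in range(3)]
        P = mat_mul(mat_mul(F, P), transpose(F))
        for i in range(3):
            P[i][i] += q
        # Update with scalar position measurement z (H = [1, 0, 0])
        S = P[0][0] + r                      # innovation covariance (scalar)
        K = [P[i][0] / S for i in range(3)]  # Kalman gain
        y = z - x[0]                         # innovation
        x = [x[i] + K[i] * y for i in range(3)]
        P = [[P[i][j] - K[i] * P[0][j] for j in range(3)] for i in range(3)]
    return x

# Track exact measurements from x(t) = 0.5 * a * t^2 with a = 2
dt, a = 0.1, 2.0
zs = [0.5 * a * (k * dt) ** 2 for k in range(1, 101)]
x_est = kalman_track(zs, dt)
print(x_est)  # position ~ 100, velocity ~ 20, acceleration ~ 2
```

The key point illustrated is that the filter estimates instantaneous velocity and acceleration jointly with position, rather than differencing positions as PTV does.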
The ideal gas law is a thermodynamic relation between state variables. It can be written in terms of specific (per unit mass) pressure $p$, internal energy $e$ and density $n$ as \begin{equation} p(e,n) = (\gamma - 1) e n ~, \label{eq:EoS_ideal_gas} \end{equation} where $\gamma$ is the adiabatic index. Strictly speaking, the ideal gas law is a valid description for systems of non-interacting particles, but it can be applied to systems involving non-negligible particle interactions with sufficient accuracy in many cases~\cite{Pandey2004}. Deviations from the ideal gas law were first considered by van der Waals~\cite{VanDerWaals1873} to account for finite particle size and interactions. Here we explore the $p(e,n)$ relation in a non-perturbative manner by {generating a series of normal shock waves} of different magnitudes in the dust~\cite{Samsonov2003,Samsonov2004,Samsonov2008:IEEE}. A normal shock wave is one where the shock front is normal to the direction of propagation, and the bulk flow is one-dimensional. In the frame of a normal shock wave moving at speed $u_\mathrm{S}$, the Rankine-Hugoniot jump relations for conservation of mass, momentum and energy across the shock front are, respectively,~\cite{BondGasDynamics} \begin{subequations} \label{eqs:RankineHugoniot} \begin{align} \label{eq:RankineHugoniotMass_ShockFrame} n_2\tilde{u}_2 &= n_1\tilde{u}_1 \\ \label{eq:RankineHugoniotMomentum_ShockFrame} \tilde{p}_2 + n_2\tilde{u}_2^2 &= \tilde{p}_1 + n_1\tilde{u}_1^2 \\ \label{eq:RankineHugoniotEnergy_ShockFrame} e_2 + \frac{1}{2}\tilde{u}_2^2 + \frac{\tilde{p}_2}{n_2} &= e_1 + \frac{1}{2}\tilde{u}_1^2 + \frac{\tilde{p}_1}{n_1} ~, \end{align} \end{subequations} where $u=u_\mathrm{S}-\tilde{u}$ is particle speed in the laboratory frame, a tilde denotes the reference frame of the shock wave, and the downstream (upstream) region is denoted with subscript 1 (2) --- see Fig.~\ref{fig:image}. 
Number density $n$ and specific internal energy $e$ are equal in the laboratory and moving frames, but pressure has a kinetic component. Applying the technique of~\cite{Knapek2007} to the particle kinematics, we found $10^3 \lesssim \Gamma \lesssim 10^4$ in the crystal state ahead of the shock wave, implying negligible kinetic pressure ($\tilde{p}_1~\approx~p_1$). We observed a similar trend upstream, but the technique of~\cite{Knapek2007} cannot be applied in the wake of the shock wave due to the disorder, so we calculated $\tilde{p}_{1,2}$ in this work. Equation~(\ref{eq:RankineHugoniotEnergy_ShockFrame}) is known as \textit{the Hugoniot}~\cite{Henderson2001}. Using equation~(\ref{eq:EoS_ideal_gas}) to eliminate internal energy from equation~(\ref{eq:RankineHugoniotEnergy_ShockFrame}), and combining with equation~(\ref{eq:RankineHugoniotMomentum_ShockFrame}), we can write~\cite{BondGasDynamics} \begin{equation} \xi(\eta) = \frac{\eta\ro{\gamma+1} - (\gamma-1)}{\ro{\gamma+1} - \eta(\gamma-1)} ~, \label{eq:HugoniotEoS_Final} \end{equation} where $\xi\equiv \tilde{p}_2/\tilde{p}_1$ is the shock strength and $\eta\equiv n_2/n_1$ is the compression ratio across the shock front. An estimate of $\gamma$, and hence an approximate equation of state for the shocked dust in the form of equation~(\ref{eq:EoS_ideal_gas}), is obtained from a least-squares fit of the experimental results to equation~(\ref{eq:HugoniotEoS_Final}). With the ideal gas law as the equation of state, the polytropic index $g$ can be used to describe the physical nature of a process that changes an initial state (downstream $p_1, n_1$) to a final state (upstream $p_2, n_2$). Polytropic processes follow $p/n^g = C$~\cite{Newbury1997}, which is a curve in the pressure--density diagram, with $g$ and $C$ defining a solution for the changes linking initial and final states.
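The shock-strength relation $\xi(\eta)$ and the least-squares estimate of $\gamma$ can be sketched numerically as follows (a crude grid search on synthetic data, not the fitting code used for the experimental results):

```python
def shock_strength(eta, gamma):
    """xi = p2/p1 as a function of the compression ratio eta = n2/n1."""
    return (eta * (gamma + 1) - (gamma - 1)) / ((gamma + 1) - eta * (gamma - 1))

def fit_gamma(etas, xis, grid=None):
    """Least-squares estimate of the adiabatic index over a coarse grid."""
    if grid is None:
        grid = [1.0 + 0.001 * k for k in range(1, 1001)]  # 1.001 .. 2.0
    return min(grid, key=lambda g: sum((shock_strength(e, g) - x) ** 2
                                       for e, x in zip(etas, xis)))

# Synthetic (eta, xi) data generated with gamma = 5/3 is recovered by the fit
etas = [1.05, 1.1, 1.2, 1.3]
xis = [shock_strength(e, 5.0 / 3.0) for e in etas]
print(fit_gamma(etas, xis))  # → ~1.667
```

A vanishing shock ($\eta = 1$) gives $\xi = 1$ for any $\gamma$, a useful sanity check.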
Equating initial and final states (both equal to the constant $C$) then combining with $\xi$ and $\eta$ and solving for $g$ allows the polytropic index to be expressed as \begin{equation} g =\frac{\ln(\xi)}{\ln(\eta)} ~, \label{eq:Polytropic_index} \end{equation} where $g=0$ indicates an isobaric process, $g=1$ is an isothermal process and $g=\gamma$ is an adiabatic process. The experiment involves levitating a 2D cloud of microspheres 10mm above the floor of an Argon-filled chamber pressurized to $2.05\mathrm{Pa}$. The spheres (9.2$\mu$m diameter) were allowed to settle into a well-spaced crystalline structure, forming a ``plasma crystal"~\cite{Samsonov2004} which is visible to the naked eye when illuminated by a laser sheet (figure~\ref{fig:image}). The dust particles each hold an approximate charge of $Q=16000e$ and have a Debye length of $\lambda_D=1.0\mathrm{mm}$~\cite{Harvey2010,Durniak2010:IEEE}. Shock waves were created by an electrode located to one side of the field of view, which was pulsed for 2 seconds with a voltage selected from $-20$V to $-50$V in $5$V steps. The crystal was allowed to reset between each run (requiring approximately 100 seconds). The experiment was repeated at each voltage level to reduce the impact of local variation in crystal structure that can form on reset. The dust was imaged from above by a grayscale camera at 500 frames per second for 1.2 seconds, and the resulting images processed by PTV and our Kalman-filter-based tracking algorithm to obtain the dust kinematics. A sample image of the dust, enhanced for presentation with enlarged dots and false color, is shown in figure~\ref{fig:image} with a zoomed inset around the shock front. Further details of the experimental apparatus are described in~\cite{Samsonov2004} and~\cite{Samsonov2005}. \begin{figure}[ht] {\centering \includegraphics[width=0.9\columnwidth]{figure1.eps} \caption{Enhanced experimental image (enlarged dots, false color) with zoomed inset. 
The field of view is 32.8mm/1024 pixels square. Number density $n_{1,2}$ and specific pressure $p_{1,2}$ show the downstream and upstream regions (subscript 1/2).} \label{fig:image} } \end{figure} The symmetry inherent in normal shock waves permits a 1D description of the dynamics. \textit{Profile} values were calculated as robust average quantities (median) in each of 50 bins which were equally spaced along the $X$ axis {(the direction of propagation)}, and which spanned the $Y$ axis. Density and pressure profiles were used in our analysis. Density is the inverse of the Voronoi cell area~\cite{Voronoi1908,Aurenhammer91voronoidiagrams}, and the pressure is normal stress (in the direction of propagation), which here is the first diagonal component of the 2D stress tensor, $P_{XX}$. Our investigation proceeded as follows. The shock front was identified as a peak in the density profile evolution (figure~\ref{fig:time_evolution}), from which the shock front position and speed was determined. The upstream and downstream quantities in the Rankine-Hugoniot shock-jump relations (pressure, density, etc.)\ were selected from $0.656$mm (1 bin) behind the shock front and $3.28$mm (5 bins) ahead. We needed to look further ahead to overcome the finite width of the shock front (an ideal shock wave would have vanishing width). Results were also sensitive to the chosen upstream distance due to structure a few millimeters behind the shock wave (see multi-shock discussion below). Shock-wave (Hugoniot) investigations such as in this Letter require repeated shock-wave experiments of different magnitudes, sharing a common initial condition. Reliably reproducing the same initial condition in dust crystal {experiments} is extremely difficult, if not impossible. For this reason the data was post-selected from the densest cluster of initial conditions and constrained to lie within 1\% of the cluster centroid. 
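The binned-median profile construction described above can be sketched generically as follows (the bin count and data are illustrative):

```python
from statistics import median

def profile(xs, values, x_min, x_max, n_bins=50):
    """Median of `values` in each of n_bins equal-width bins along X.

    Empty bins yield None.
    """
    width = (x_max - x_min) / n_bins
    bins = [[] for _ in range(n_bins)]
    for x, v in zip(xs, values):
        k = min(int((x - x_min) / width), n_bins - 1)
        bins[k].append(v)
    return [median(b) if b else None for b in bins]

# Toy example: quantities measured at scattered X positions, 2 bins
print(profile([0.1, 0.4, 0.6, 0.9], [1.0, 3.0, 2.0, 4.0], 0.0, 1.0, n_bins=2))
# → [2.0, 3.0]
```

The median is used rather than the mean so that a few mistracked particles do not distort the profile.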
This is illustrated in figure \ref{fig:selected_points} where the post-selected initial conditions (downstream) are shown as blue dots and all others as red crosses. The inset of figure \ref{fig:selected_points} shows the final states corresponding to the post-selected initial conditions. From $13$ similar experimental runs, $118$ total data points were generated, of which $26$ were post-selected. The apparently small dynamic range of the post-selected data in figure~\ref{fig:xi_eta} is typical for shock-wave experiments in dusty plasmas~\cite{Melzer1994,Samsonov2000,Samsonov2004,Durniak2010:IEEE}. It is a consequence of the crystal softness (very strong shock waves completely destroy soft crystals), which is due to the large interparticle spacing relative to the particle size~\cite{FengSound2012}. The post-selected data was analyzed using equations (\ref{eqs:RankineHugoniot})--(\ref{eq:Polytropic_index}). \begin{figure}[ht] {\centering \includegraphics[width=0.9\columnwidth]{figure2.eps} \caption{The dust number density profile evolution $n(X,t)$ showing the shock wave (black circles), and trailing wave (white squares) with quadratic least-squares fits.} \label{fig:time_evolution} } \end{figure} {A typical experiment is visualized in} figure~\ref{fig:time_evolution}. Two number density peaks emerged following the applied voltage pulse: a shock wave (black line) and a trailing wave (white line). Such multi-shock structures~\cite{Duvall1977} can be described by a sequence of jump relations like equation (\ref{eqs:RankineHugoniot}). Wave speeds calculated from least-squares fits for the peak positions were $u_\mathrm{S}(t) = -17.2t + 43.7$ mm/s (shock, $t\geq0.24$s) and $u_\mathrm{T}(t) = -8.4 t + 33.0$ mm/s (trailing, $t\geq0.45$s). \begin{figure}[ht] {\centering \includegraphics[width=0.9\columnwidth]{figure3.eps} \caption{Initial pressure and density ($n_1,p_1$) for each run: blue dots survived post-selection (see text). 
Inset: Pressure--density diagram showing all post-selected data: initial states (blue dots) and the corresponding final states (black circles).} \label{fig:selected_points} } \end{figure} {The adiabatic index of the dust was estimated by least-squares fits to equation~(\ref{eq:HugoniotEoS_Final}).} Figure~\ref{fig:xi_eta} shows these fits of shock strength vs.\ compression for both PTV and KF (object tracking). We found $\gamma_\mathrm{KF} = 1.67~\pm~0.01$, which is consistent with that of a monatomic ideal gas $\gamma_\mathrm{Ideal}=5/3=1.6\dot{6}$. We found $\gamma_\mathrm{PTV} = 1.79~\pm~0.01$ using PTV. This is a biased overestimate, as we now explain. The PTV and KF results for the crystal-like downstream states were comparable, so the source of the PTV bias was the upstream estimates of $\tilde{p}_2$ and $n_2$. Dust pressure is dominated by the Yukawa interaction, which is non-linear in interparticle spacing $r$ (e.g., see~\cite{Oxtoby2012a}), and so sensitive to errors in $r$. These errors are greater for PTV than KF~\cite{Oxtoby2011Fusion,Oxtoby2012a} and, when averaged, propagate through the non-linearities to create a biased \textit{overestimate} of upstream pressure. This shifts erroneous results upward in the $\xi$--$\eta$ plane. Dust density is \textit{underestimated} when shocked dust particles intermittently leave the plane of illumination, shifting erroneous results left in the $\xi$--$\eta$ plane. Object tracking provides a robust way to maintain tracks for these particles, whereas PTV does not. Thus, as observed in figure~\ref{fig:xi_eta}, we expected the PTV result to lie above and to the left of the KF result (which itself would lie above the true result if biases were present). We determined the polytropic index of the shocked dust via the mean value of equation~(\ref{eq:Polytropic_index}), using KF results. 
We found $g_\mathrm{KF} = 1.71~\pm~0.07$ which satisfies $g \approx \gamma$, thereby demonstrating that shock waves in a dusty plasma crystal constitute an adiabatic process, as is the case for an ideal gas~\cite{Newbury1997}. This is further experimental evidence of ideal gas behavior in a 2D dusty plasma. \begin{figure}[ht] {\centering \includegraphics[width=0.9\columnwidth]{figure4.eps} \caption{Shock strength vs.\ compression ratio for PTV and KF. Least-squares fits to equation~(\ref{eq:HugoniotEoS_Final}) give the adiabatic index $\gamma$, with $3\sigma$ confidence regions in each case shown in gray. The KF fit overlaps with that of a monatomic ideal gas.} \label{fig:xi_eta} } \end{figure} Our final result is the shock Hugoniot~\cite{Rice1958,Nagayama2002} in figure~\ref{fig:shock_hugoniot}, where shock wave speed $u_\mathrm{S}$ is linearly related to upstream particle speed $u_2$: $u_\mathrm{S} = S u_2 + C_0$. Here $C_0$ is the zero-pressure bulk speed of sound (for an unshocked sample), and $S$ is a dimensionless constant of proportionality for the linear fit. The PTV and KF methods estimate $C_0$ to be $26.8$mm/s and $21.8$mm/s, respectively. These values are in line with the $25$mm/s and $28$mm/s speeds of sound observed in \cite{FengSound2012} and \cite{Schwabe2011} via different techniques. The very low speeds result from the extreme softness of the dust crystal. For the fits in figure~\ref{fig:shock_hugoniot}, the coefficient of determination $R^2$ showed the KF data ($R^2=0.64$) following the expected linear trend far better than the PTV data ($R^2 = 0.27$). This reinforces our conviction that object tracking methods should be used to analyze dusty plasma experiments, rather than PTV. 
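The linear Hugoniot fit $u_\mathrm{S} = S u_2 + C_0$ is an ordinary least-squares line; a self-contained sketch (the data points below are invented for illustration, not the experimental values):

```python
def linear_fit(x, y):
    """Closed-form ordinary least squares for y = slope * x + intercept."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical (u_2, u_S) pairs in mm/s: the intercept plays the role of C_0
S, C0 = linear_fit([2.0, 4.0, 6.0], [26.0, 30.0, 34.0])
print(S, C0)  # → 2.0 22.0
```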
\begin{figure}[htp] {\centering \includegraphics[width=0.9\columnwidth]{figure5.eps} \caption{Shock front speed vs.\ upstream particle speed (\textit{Shock Hugoniot}) for PTV and KF.} \label{fig:shock_hugoniot} } \end{figure} In this work we performed shock-wave experiments on a 2D dusty plasma crystal, a system of strongly-coupled particles with Coulomb coupling parameter $\Gamma \sim 10^3$. We calculated state variables for the dust (pressure, density) directly from the dust particle kinematics. The kinematics were estimated using two techniques: object tracking (recursive Bayesian state estimation), and particle tracking velocimetry (the standard approach in dusty plasma physics, which is less accurate~\cite{Oxtoby2012a}, and unreliable for shock-wave experiments). Conservation laws (Rankine-Hugoniot equations) were combined with the ideal gas law to estimate the adiabatic index of the dust, which revealed a significant finding: a strongly-coupled ($\Gamma \gg 1$) 2D dusty plasma behaves as an ideal gas. This is explained by the relatively low compression ratio tolerable by soft crystals, e.g.~dusty plasma crystals, which negates the need for higher-order density terms as found in equations of state for non-ideal gases. While the ideal gas law combined with the Rankine-Hugoniot equations produced a very good fit to the experimental data, more complex models may be required when accessing different regions of parameter space. We acknowledge financial support from the Engineering and Physical Sciences Research Council of the United Kingdom (grant number EP/G007918) and high-throughput computational resources provided by the eScience team at the University of Liverpool. We thank the anonymous referees for helpful critiques. The experiments were performed by D.S., who passed away during the final preparation stages of this paper. Dmitry will be sorely missed by the plasma physics community.
\section{Introduction} \label{sec:introduction} The link between bank and sovereign credit risk is often considered a key transmission mechanism of the European sovereign debt crisis. Starting in 2009, concerns over the sustainability of government debt in several peripheral European countries led to a sudden increase in banks' credit risk, which was largely due to banks' exposures to sovereign debt. The resulting decrease in asset values led to a tightening in banks' financing conditions and the subsequent contraction in bank lending (see, e.g., \cite*{Acharya2015}, \cite{Bocola2016}, and \cite{Brunnermeier2016}). The negative feedback loop between the sovereign and the real economy reinforced the recessionary impact of the credit crunch; weak economic activity further strained the fiscal position of the sovereign and put upward pressure on sovereign borrowing costs \citep{Brunnermeier2016}. Banks' exposure to sovereign debt transmits sovereign risk to the real economy in three ways. First, a decline in the value of banks' assets restricts the banks' access to funding, which leads to a reduction in private lending \citep*{Bocola2016, Gennaioli2018}. Second, a decline in sovereign bond prices may tighten banks' funding conditions, given that government debt is often used as collateral in interbank lending \citep{Engler2016}. Third, banks may reduce private lending by changing the composition of their balance sheets during a crisis \citep*{Acharya2018, Becker2017}. Although the effect of sovereign risk shocks on bank lending has garnered much attention in recent years (see, e.g., \cite{Acharya2015}, \cite{Bahaj2019}, \cite*{Bofondi2017}, and \cite{Popov2014}), the extent to which sovereign default risk accounts for the decrease in credit supply and the subsequent decrease in economic activity is not clear.
In this study, we re-examine the credit channel of sovereign default risk and provide cross-country evidence about its effect on real economic activity during the European sovereign debt crisis. We proceed by estimating a structural panel vector autoregressive (SVAR) model for Italy, Spain, Portugal, and Ireland, using monthly data covering the period 2003:M1-2018:M12. We identify a sovereign risk shock using sign restrictions that disentangle the origin of the shock from other shocks that affect the supply of credit. The use of macroeconomic data provides a new perspective on the negative effects of the sovereign risk shock on bank lending, which has been previously mainly examined using bank-level data. In comparison to studies using micro-level data, the panel SVAR approach helps to describe general equilibrium or substitution effects and is able to capture potential feedback effects between sovereign risk and economic activity. The results indicate that the macroeconomic effects of the sovereign risk shock varied across the crisis-hit countries during the sovereign debt crisis. A contractionary sovereign risk shock appears to have a large and persistent negative effect on bank lending and real economic activity. Moreover, the sovereign risk shock partly explains the decline in economic output during the sovereign debt crisis in Italy, Portugal and Spain. The findings also imply that the sovereign risk shock led to an increase in economic output prior to the crisis. Moreover, the protracted increase in the home bias of banks' government debt holdings also suggests that banks' portfolio reallocation from private lending to sovereign debt might have cut back the credit supply by crowding out lending. The results are also important from a policy perspective, given that the policies to prevent the spillover from sovereign risk to banks would seem to be effective in mitigating or preventing the crisis. 
Moreover, the fact that banks' holdings of sovereign debt largely contributed to the crisis implies the need for a European safe asset as a store of value, as proposed by \cite{Brunnermeier2016}. The rest of the paper is organized as follows. \cref{sec:literature} reviews the related literature. \cref{sec:methods} provides an overview of the methodology, the identification, and the data. \cref{sec:results} presents and discusses the results. \cref{sec:conclusion} concludes. \section{Literature review} \label{sec:literature} Previous studies document a strong link between the yields on government debt and business cycles in developing and, more recently, in developed economies. \cite{Uribe2006} find that international financial shocks explain a large share of the business cycle variation in emerging countries through their effect on domestic borrowing costs. They also find that the response of domestic interest rate spreads to domestic fundamentals further reinforces the shocks. \cite{Neri2015} use a large-dimensional factor augmented vector autoregression (FAVAR) model to identify the macroeconomic effects of increases in sovereign risk during the European sovereign debt crisis. They show that sovereign debt tensions had a sizeable effect on economic activity by tightening bank-lending conditions. \cite{Bahaj2019} uses high-frequency government bond yield movements around key events during the European sovereign debt crisis to study the macroeconomic impact of innovations to sovereign risk premia in Spain, Ireland, Italy and Portugal in a Bayesian panel SVAR. The results suggest that unanticipated changes in government bond risk premia can explain approximately a third of the variation in the unemployment rate and a sizeable share of the variation in borrowing costs, which lends support to the significant role of the banking sector in the transmission of the crisis.
Although previous research documents a large role of the banking sector, the transmission mechanism is not explicit. This paper contributes to the existing literature by examining the role of the banking sector in the transmission of sovereign risk to the real economy. This paper is also related to previous studies that use sign-identified SVAR models to study the macroeconomic effects of financial shocks. It deviates from previous research by distinguishing the financial shocks originating from unanticipated changes in sovereign risk\footnote{The method we use to determine the origin of financial shocks resembles the identification strategy in \cite*{Furlanetto2017} based on sign restrictions in a structural VAR model to decompose financial shocks into housing market, credit market, and uncertainty shocks.}. \cite{Gambetti2016} study the effects of loan supply shocks on the business cycle in the euro area, the UK and the US by estimating a VAR model with time-varying parameters and stochastic volatility. They find that loan supply shocks play a significant role in the variation of economic activity, especially during recessions. \cite{Bijsterbosch2015} find that a credit supply shock spurred economic growth before the financial crisis but worsened the recession during the crisis in all countries. However, from the beginning of the European sovereign debt crisis, the credit supply shock contributed negatively to the growth of output in crisis-hit countries, whereas it affected economic growth positively in the core countries. \cite*{Hristov2012} employ a panel SVAR for eleven euro-area member countries and find that credit supply shocks significantly affected real gross domestic product and loans issued by banks in all countries during the financial crisis.
This paper is also closely related to studies using micro-level data to examine the effect of the sovereign debt crisis on bank loan volumes (see, e.g., \cite{Acharya2015}, \cite{Acharya2018}, \cite{Bofondi2017}, \cite{DeMarco2019}, and \cite{Popov2014}). \cite{Acharya2018} study the effects of the sovereign debt crisis on bank lending using data on syndicated loans to European non-financial corporations. They find that the reduction in bank lending to firms in crisis-hit countries was largely explained by a reduction in the value of banks' holdings of sovereign debt and by the crowding out of private borrowing through further government debt purchases. In all, the credit crunch induced by the risk of government default can explain up to one-half of the negative real effects during the sovereign debt crisis. \cite{Popov2014} examine the cross-border transmission of sovereign debt tensions by studying the issuance of syndicated loans by banks domiciled in 11 European non-crisis countries during the sovereign debt crisis. They document a significantly smaller increase in loan issuance after 2009 by banks that were exposed to government debt of crisis-hit countries. This paper complements the micro-level evidence by taking into account both macroeconomic and general equilibrium effects of financial shocks. \section{The model and the identification strategy} \label{sec:methods} This section provides an overview of the econometric model, the data, and the identification approach. \subsection{The panel SVAR model} We estimate a Bayesian panel SVAR model for Italy, Ireland, Portugal and Spain; the setup is similar to that of \cite{Jarocinski2010}, who used it to study the effects of monetary policy. The partially pooled model is chosen to allow for cross-country heterogeneity in the dynamics of the model, while retaining similarities between the economic structures in each country.
Moreover, in comparison to individually estimated models, the use of panel data is likely to improve the quality of the estimates of the model. However, the expected increase in efficiency comes at the cost of losing information in the cross-sectional dimension by shrinking the estimates around a common mean. The model is identified using sign and exclusion restrictions on the impulse response functions by using the algorithm of \cite*{Arias2018}\footnote{The estimation of the model is done in Matlab using the Bayesian Estimation, Analysis and Regression (BEAR) toolbox \citep*{BEAR}.}. For each country $c$, the reduced-form VAR model is \begin{equation} \matr{y}_{c,t} = \matr{\Gamma}_{c}\matr{z}_{c,t} + \sum_{l=1}^{L} \matr{B}_{c,l} \matr{y}_{c,t-l} + \matr{u}_{c,t}, \label{var} \end{equation} where $\matr{y}_{c,t}$ is an $N \times 1$ vector of endogenous variables, $\matr{\Gamma}_{c}$ is an $N \times M$ coefficient matrix for the $M \times 1$ vector of deterministic variables $\matr{z}_{c,t}$, $\matr{y}_{c,t-l}$ is the $l$-th lag of the vector of endogenous variables, $\matr{B}_{c,l}$ are the $N \times N$ country-specific coefficient matrices for the $l$-th lag of the endogenous variables, and $\matr{u}_{c,t}$ is an $N \times 1$ vector of Gaussian white noise error terms with $\matr{u}_{c,t} \sim \mathcal{N}(\matr{0},\matr{\Sigma}_c)$. For each country, the model may be written in matrix notation by stacking $\matr{y}_{c,t}'$ for all observations $t$ such that \begin{equation} \matr{Y}_{c} = \matr{X}_{c}\matr{B}_{c} + \matr{Z}_{c}\matr{\Gamma}_{c} + \matr{U}_{c}, \label{matrix_var} \end{equation} where $\matr{Y}_{c} \equiv [\matr{y}_{c,1}' , \ldots, \matr{y}_{c,T}']'$, $\matr{X}_{c} \equiv [\matr{X}_{c,1}', \ldots, \matr{X}_{c,T}']'$, with $\matr{X}_{c,t} \equiv [\matr{y}_{c,t-1}', \ldots, \matr{y}_{c,t-L}']$ and $\matr{B}_{c} \equiv [\matr{B}_{c,1}', \ldots \matr{B}_{c,L}']' $.
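The stacking in \cref{matrix_var} is mechanical and can be illustrated with a minimal sketch (illustrative Python with synthetic data and a hypothetical helper `stack_var_data`; the actual estimation is done with the BEAR toolbox in Matlab):

```python
import numpy as np

def stack_var_data(y, L):
    """Stack a (T+L) x N series into Y and the lagged regressor matrix X
    for a VAR(L), so that Y = X B + U (deterministic terms omitted)."""
    T = y.shape[0] - L
    Y = y[L:, :]                                    # rows y_t', t = 1..T
    # row t of X is [y_{t-1}', ..., y_{t-L}']
    X = np.hstack([y[L - l: L - l + T, :] for l in range(1, L + 1)])
    return Y, X

rng = np.random.default_rng(0)
y = rng.standard_normal((100, 3))                   # N = 3 endogenous variables
Y, X = stack_var_data(y, L=4)
B_ols = np.linalg.lstsq(X, Y, rcond=None)[0]        # (N*L) x N coefficient matrix
```

The least-squares step stands in for the likelihood evaluation; in the Bayesian setup the same stacked matrices enter the Gibbs sampler.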
The likelihood for each country specified in \cref{matrix_var} may be expressed in vectorized form as \begin{equation} p\left(\matr{y}_c \vert \matr{\beta}_c, \matr{\gamma}_c, \matr{\Sigma}_c\right) = \mathcal{N}\left(\left( \matr{I}_N \otimes \matr{X}_c \right) \matr{\beta}_c + \left( \matr{I}_N \otimes \matr{Z}_c \right) \matr{\gamma}_c , \ \matr{\Sigma}_c \otimes \matr{I}_T \right), \end{equation} where $\matr{y}_c \equiv \text{vec}\left({\matr{Y}_c}\right)$, $\matr{\beta}_c \equiv \text{vec}\left({\matr{B}_c}\right)$, $\matr{\gamma}_c \equiv \text{vec}\left({\matr{\Gamma}_c}\right)$, and $\matr{u}_c \equiv \text{vec}\left({\matr{U}_c}\right)$. The prior for the coefficients of the deterministic variables is assumed to be non-informative such that $p(\matr{\gamma}_c) \propto 1$. We also assume a diffuse prior for the error variances $\matr{\Sigma}_c$ of the form \begin{equation} p(\matr{\Sigma}_c) \propto \vert \matr{\Sigma}_c \vert^{-\frac{1}{2}\left(N+1 \right)}. \end{equation} We specify an exchangeable prior distribution for the coefficients of the endogenous variables $\matr{\beta}_{c}$, which has a common mean and variance across countries: \begin{equation} \matr{\beta}_c \vert \matr{b}, \matr{\Lambda} \sim \mathcal{N}( \matr{b} ,\ \matr{\Lambda}). \end{equation} The prior for the mean is specified to be non-informative, $p(\matr{b}) \propto 1$. The covariance matrix $\matr{\Lambda}$ has a Minnesota-type prior $\matr{\Lambda} \equiv \lambda_1 \matr{\Omega}$, for which the off-diagonal elements of $\matr{\Omega}$ are zero and the diagonal elements may be positive and non-zero. For each lag $l$, equation $i$, and variable $j=1,\ldots,N$, the prior standard deviation of the coefficient is $1 / l^{\lambda_3}$ if $i=j$ and $\sigma_i \lambda_2/\sigma_j$ if $i \neq j$. The ratio $\sigma_i/\sigma_j$ is included to account for the different scales of the variables.
Because the variance is assumed to be the same for each country, in practice, $\sigma_i$ is calculated as the standard deviation of the residuals of a pooled regression of each variable on its $L$ lagged values. Following \cite{Jarocinski2010}, we set $\lambda_2 = 1 $ and $\lambda_3 = 0$. The overall tightness of the prior is determined by the parameter $\lambda_1$. As $\lambda_1$ approaches zero, the coefficients shrink toward the common mean $\matr{b}$, which results in a pooled model by reducing country-specific variation. The larger $\lambda_1$ is, the more the country-specific coefficients are allowed to vary. Therefore, $\lambda_1$ essentially determines the extent to which the dynamics of the model vary across countries. In practice, $\matr{\Omega}$ is treated as fixed, but the overall tightness of the prior $\lambda_1$ is considered to be random. We specify a weakly informative inverse-Gamma prior for $\lambda_1$, such that \begin{equation} p\left( \lambda_1 \vert s, \ \nu \right) \propto \lambda_1^{-(s+1)} \text{exp}\left(-\frac{\nu}{\lambda_1}\right). \end{equation} Following \cite{BEAR}, we set $s=0.001$ and $\nu=0.001$ to obtain a prior of the form $p(\lambda_1) \propto \lambda_1^{-\frac{1}{2}}$. The model is estimated using a Gibbs sampler, as detailed in \cite{Jarocinski2010}\footnote{The results are robust to a range of other weakly informative priors between 0.1 and 0.001, which have been used by \cite{Gelman2006} and \cite{Jarocinski2010}.}.
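The Minnesota-type structure of $\matr{\Omega}$ described above can be sketched as follows (illustrative Python with a hypothetical helper name; the actual prior is constructed inside the BEAR toolbox):

```python
import numpy as np

def minnesota_variances(sigma, L, lam2=1.0, lam3=0.0):
    """Diagonal elements of Omega: prior variances of the lag coefficients.
    Following the text, the prior s.d. is 1/l**lam3 for own lags (i == j)
    and sigma_i * lam2 / sigma_j for cross lags (i != j)."""
    sigma = np.asarray(sigma, dtype=float)
    N = sigma.size
    var = np.empty((N, N, L))          # indexed (equation i, variable j, lag l)
    for i in range(N):
        for j in range(N):
            for l in range(1, L + 1):
                sd = 1.0 / l ** lam3 if i == j else lam2 * sigma[i] / sigma[j]
                var[i, j, l - 1] = sd ** 2
    return var

# sigma would come from pooled AR regressions; these values are synthetic
omega_diag = minnesota_variances(sigma=[1.0, 0.5, 2.0], L=4)
```

With $\lambda_2=1$ and $\lambda_3=0$, as in the paper, own-lag variances are one and cross-lag variances depend only on the scale ratio $\sigma_i/\sigma_j$.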
\subsection{Data} The panel VAR model is estimated for Italy, Spain, Ireland, and Portugal using monthly observations covering the period from 2003:1 to 2018:12\footnote{The start of the sample is dictated by the availability of the balance-sheet data of the monetary and financial institutions provided by the European Central Bank (ECB).}\footnote{We exclude Greece from the analysis due to the break in the bond yield series following the debt restructuring, although it was one of the countries that was most severely affected by the sovereign debt crisis.}. The data includes the following variables for each country: a measure of real output, an index of consumer prices, the volume of loans to non-financial firms, a composite retail lending rate, the government bond spread, and a measure of the home bias of domestic banks' holdings of government debt. In addition, a shadow short rate for the euro area is included to indicate the monetary policy stance. The model thus includes variables that cover the financial and real sectors of the economy and that have been used in previous literature to identify shocks to the banking sector (see, e.g., \cite{Gambetti2016}, \cite{Hristov2012}, and \cite{Peersman2011b}). Industrial output is used as the measure of real economic output in the baseline model and enters as the log of a production index. The volume of loans is measured by the log of total loans to non-financial companies adjusted for loan sales and securitization. The government bond spread is defined as the difference between the benchmark 10-year yield and the corresponding 10-year zero-coupon swap rate, which is used as a measure of the risk-free rate. The home bias of banks' holdings of government debt is calculated as the ratio of domestic government bonds to foreign government debt held by the monetary and financial institutions in the domestic country, excluding the European System of Central Banks.
A shadow short rate by \cite{Krippner2013} is included as a measure of the monetary policy stance for the euro area. A detailed description of the data and its sources is presented in \Cref{app:data}. \subsection{Identification of the sovereign risk shock in the baseline model} \label{sec:identification} We use sign restrictions on the impulse responses to identify a sovereign risk shock. Sign restrictions are commonly used in the applied macroeconomic literature to identify economic shocks, as pioneered by \cite{Faust1998}, \cite{Canova2002}, \cite{Uhlig2005}, and \cite{Peersman2005}; more recently, sign restrictions have been used to identify credit supply and financial shocks (see, e.g., \cite{Hristov2012}, \cite{Gambetti2016}, and \cite{Furlanetto2017})\footnote{In their critique of sign-identified structural VAR models, \cite{Baumeister2015} caution that uninformative priors may be inadvertently informative in sign-identified structural VAR models. To alleviate some concerns about uninformative priors influencing the estimation, a number of robustness checks were conducted: alternative endogenous variables, a shorter sample length, and alternative restrictions on the timing of the effect of the shocks on the variables. While in principle identifying the importance of priors in influencing the results may have benefits, there is disagreement as to whether laying out explicit prior beliefs is relevant in practice \citep{Kilian2019}.}. In comparison to a recursive structural model, sign restrictions require less stringent assumptions about the contemporaneous relationships between the variables. For example, imposing exclusion restrictions on the impact multiplier matrix for financial variables may not be justifiable from a theoretical perspective, given that these variables respond quickly to new information (see the discussion in \cite{Peersman2005} and \cite{Bjornland2009}).
We proceed by identifying three different shocks: a sovereign risk shock, a credit supply shock, and a credit demand shock. The structural shocks are identified using a combination of sign and zero restrictions using the algorithm by \cite{Arias2018}. The details of the algorithm are provided in \Cref{algorithm}. The restrictions are summarized in \Cref{tab:restrictions}. \begin{table}[h!] \small \begin{threeparttable} \centering \caption{Sign restrictions in the baseline model} \label{tab:restrictions} \renewcommand{\arraystretch}{0.9} \begin{tabularx}{\linewidth}{lccc} \\ \toprule & \multicolumn{3}{c}{\textit{Shock}}\\ \textit{Variable} & Credit demand shock & Credit supply shock & Sovereign risk shock\\ \midrule Output & $-$ & $-$ & $-$ \\ Prices & $\bullet$ & $\bullet$ & $\bullet$ \\ Loan volume & $-$ & $-$ & $-$ \\ Retail lending rate & $-$ & $+$ & $+$ \\ Banks' home bias & $\bullet$ & $0$ & $+$ \\ Government bond spread & $\bullet$ & $\bullet$ & $+$ \\ Short rate & $\bullet$ & $\bullet$ & $-$ \\ \bottomrule \end{tabularx} \captionsetup{justification=centering} \begin{tablenotes} \small \item \textit{Notes:} $+$/$-$ denote the restricted sign of the impulse response of each variable to a contractionary shock. The sign is set to $0$ if the response to the shock is restricted to zero on impact. The responses of output, loan volume and the lending rate are restricted for periods three to six following the shock. The response is left unrestricted if the sign is indicated by $\bullet$. \end{tablenotes} \end{threeparttable} \end{table} The identification of the credit demand and the credit supply shock is standard in the literature (e.g., \cite{Peersman2011b}, \cite{Gambetti2016} and \cite{Hristov2012}) and is based on general macroeconomic models (see, e.g., \cite{Curdia2010}, \cite*{Gerali2010}, \cite*{Gertler2011})\footnote{See \cite{Gambetti2016} and \cite{Hristov2012} for a summary of sign restrictions used in theoretical models.}.
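The basic accept/reject step behind sign identification can be sketched as follows (illustrative Python with hypothetical names; the full algorithm of \cite{Arias2018} additionally imposes the zero restrictions through a projection step rather than plain rejection):

```python
import numpy as np

def draw_sign_identified_impact(Sigma, check_signs, rng, max_tries=10_000):
    """Accept/reject draw of a candidate structural impact matrix
    A0 = chol(Sigma) @ Q, where Q is a random orthogonal matrix obtained
    from the QR decomposition of a Gaussian draw. check_signs(A0) should
    return True when the sign restrictions hold."""
    P = np.linalg.cholesky(Sigma)
    for _ in range(max_tries):
        Q, R = np.linalg.qr(rng.standard_normal(Sigma.shape))
        Q = Q @ np.diag(np.sign(np.diag(R)))   # sign normalization for a Haar draw
        A0 = P @ Q
        if check_signs(A0):
            return A0
    raise RuntimeError("no draw satisfied the sign restrictions")

rng = np.random.default_rng(1)
Sigma = np.array([[1.0, 0.3], [0.3, 1.0]])
# toy restriction: the first shock raises both variables on impact
A0 = draw_sign_identified_impact(Sigma, lambda A: bool(np.all(A[:, 0] > 0)), rng)
```

Any accepted $\matr{A}_0$ reproduces the reduced-form covariance, $\matr{A}_0\matr{A}_0' = \matr{\Sigma}$, so the restrictions select among observationally equivalent structural models.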
The credit demand shock causes the volume of loans and the retail lending rate to move in the same direction. A positive credit demand shock increases the volume of loans and simultaneously increases the cost of lending due to the inelastic supply of loans. The credit demand shock may be interpreted as an unanticipated change in firms' and households' demand for loans, for example, following changes in aggregate demand. Positive co-movement between retail lending rates and loan volumes may also be associated with loosening borrowing constraints. The credit supply shock is related to exogenous changes in banks' lending to firms and households, for example, an unanticipated change in banks' capital, funding conditions, or new regulations. It also encompasses unanticipated changes in monetary policy, which show up as a change in the short rate. A positive credit supply shock increases the supply of loans to firms and households and decreases the cost of lending. Therefore, their distinct effects on the co-movement between loan volumes and retail lending rates distinguish the credit supply shock and the credit demand shock from each other. The sovereign risk shock affects the real economy by affecting the supply of credit. The theoretical literature finds that sovereign default risk affects the financial sector due to banks' large exposure to government debt (see, e.g., \cite{Bocola2016}). An increase in sovereign borrowing costs reduces the value of the banks' balance sheets, tightens their funding constraints, and makes loans to households and firms more risky. This subsequently reduces the amount of loans banks issue to domestic firms and households, resulting in a credit crunch. To disentangle the credit supply shock and the sovereign risk shock, we introduce additional restrictions on the impulse responses.
Namely, we restrict the response of the home bias of banks' holdings of sovereign debt to be positive following a sovereign risk shock, whereas the credit supply shock that is not associated with a change in sovereign risk is not assumed to affect banks' home bias. This assumption is consistent with the observation that banks in distressed countries have increased their holdings of domestic sovereign debt in response to sovereign default risk. \cite*{Battistini2014} study the effects of sovereign default risk on the domestic sovereign exposures of banks in the euro area. They find that banks in peripheral countries increased their domestic exposure to sovereign debt in response to a country-level shock, whereas banks in core countries did not. However, banks in both peripheral and core countries increased their domestic exposure in response to systemic shocks. Their findings support the idea that banks in peripheral countries increased their exposure for higher returns and in response to 'moral suasion' or pressure by domestic regulators. Moreover, domestic banks have a comparative advantage in bearing risk in the case of a break-up of the euro area, whereby liabilities and assets would be redenominated into a new currency. According to \cite*{Broner2014}, banks increase their holdings of domestic sovereign debt when sovereign default risk increases, because sovereign debt delivers a higher expected return to domestic investors than to foreign investors. Discrimination against foreign investors may arise due to regulations or moral suasion, whereby regulators induce domestic banks to take on risk to support the demand for bonds. Home bias of banks' sovereign debt holdings may also result from high-risk trades, or so-called 'carry trades', in which banks used low funding costs to invest in higher-yielding sovereign bonds.
Particularly weakly-capitalized banks increased their exposure to domestic high-yielding sovereign debt, because the bank would profit in all circumstances except when both the government and the bank were in default \citep{Acharya2015}. The sovereign risk shock is assumed to affect the home bias of banks' holdings of sovereign debt on impact, such that banks increase their share of domestic government bonds relative to foreign government bonds as the government bond spread increases. We restrict the response of banks' home bias on impact following a credit supply shock to zero. Home bias of government debt is defined as the ratio of domestic government bonds to foreign government bonds held by the domestic monetary and financial institutions, excluding the Eurosystem. We also restrict the response of the short rate to be negative following a contractionary sovereign risk shock to account for the negative impact on output, given that a country-specific sovereign risk shock poses a systemic risk to the whole monetary union\footnote{The results are not changed when the sign of the response of the short rate is left unrestricted.}. This is also consistent with the assumption that banks increase their domestic government bond holdings by exploiting lower borrowing costs. We remain agnostic about the sign of the effect of the credit supply and credit demand shocks on the short rate, due to the assumption that the European Central Bank considers the overall economic conditions in the whole euro area in making monetary policy decisions instead of the conditions of a single member state. All identified structural shocks are restricted to affect output, lending to firms and households, and the retail lending rate with the given signs three to six months following the shock, while the other restrictions only apply on impact\footnote{The results are robust to restrictions imposed only on impact.
Similar timing restrictions, which impose a lagged response of real economic variables to shocks originating in the financial sector, have been used in previous empirical studies (see e.g., \cite{Peersman2011b} and \cite{Barnett2014}). Restricting the response of output is necessary for identification.} Moreover, we leave the sign of inflation unrestricted due to uncertainty about whether aggregate supply or aggregate demand effects dominate following shocks to credit (see discussion in \cite*{Abbate2016}). A particular concern with disentangling the credit supply and sovereign risk shocks is related to the negative feedback loop between economic activity and government bond spreads. Independent of their origin, credit supply shocks have a recessionary impact on economic output, which may further deteriorate the fiscal position of the sovereign and limit the banks' ability to lend to the private sector \citep{Brunnermeier2016}. Moreover, a credit supply shock may be interpreted as a sovereign risk shock if the credit supply shock increases government bond spreads and banks' home bias of government debt within the period. However, given that we estimate the model using monthly data, it is reasonable to assume that it takes over one period for the feedback loop to complete. \section{Results} \label{sec:results} This section presents the results of the partially pooled panel VAR. In addition to country-specific estimates, we also present results for the fully pooled model for comparison. Each model is estimated using four lags for each country. The suggested lag length based on the Bayesian information criterion (BIC) ranges between one and two for individual countries. Given that the data is monthly, we account for possible seasonality in the data by including further lags. 
The results are based on 110000 draws from the Gibbs sampler with 10000 initial draws discarded as burn-in\footnote{The baseline results are robust to a shorter sample period, a range of alternative priors, and alternative measures of monetary policy. To save space, the results are not reported in detail, but they are available on request.}. \subsection{Real effects of sovereign risk shocks} Impulse response functions are used to illustrate the macroeconomic impact of the structural shocks. \Cref{irf_sr_pooled} traces out the effect of a contractionary sovereign risk shock on the endogenous variables in the pooled SVAR model. The shock is normalized such that the median response of the government bond spread is 10 basis points on impact. \begin{figure} \caption{Impulse response functions to a contractionary sovereign risk shock in the pooled panel SVAR model} \bigskip \includegraphics[scale=0.9]{epsFig_IRF_3_pooled} \smallskip \caption*{\textit{Notes}: The solid line is the posterior median and the shaded regions mark the 68 \% credible intervals. The shock is normalized such that the government bond spread increases by 10 basis points on impact.} \label{irf_sr_pooled} \end{figure} A contractionary sovereign risk shock leads to an immediate decrease in output and a permanent fall in loan volumes. The effect on output is negative with high posterior probability for almost 40 months following the shock. There is also a persistent and negative effect on loan volumes. The shock also has an immediate positive effect on the retail lending rate. A 10 basis point increase in the government bond spread is associated with a median increase in the retail lending rate of five basis points. The size of the response is close to the effect reported by \cite{Bahaj2019}, who finds a 30 basis point increase in private sector funding costs following a 100 basis point increase in the 2-year government bond spread.
The median response of the retail lending rate turns negative after more than a year following the shock. The short-lived increase and the subsequent fall in the retail lending rate may be caused by the accommodative stance of monetary policy. The short rate falls by 5 basis points on impact and continues to decline for two years following the shock. A contractionary sovereign risk shock also leads to a large and persistent increase in the home bias of banks' holdings of sovereign debt. The effect peaks shortly after the impact and dissipates largely after two years. The response of prices is less precisely estimated; however, a contractionary sovereign risk shock leads to a persistent decline in prices with high probability, after an immediate positive median response. The impulse responses of the country-specific model in \Cref{irf_sr} are generally consistent with the pooled estimates. The impulse response functions show that the impact effects of a contractionary sovereign risk shock on the retail lending rates vary by country. A 10 basis point increase in the government bond spread is associated with an increase in the median retail lending rate of 5 basis points on impact in Spain, Portugal, and Italy and 10 basis points in Ireland. A sovereign risk shock also results in a large and persistent increase in the home bias of banks' holdings of government debt in Italy and Spain and, to a lesser extent, in Ireland and Portugal.
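The impulse responses themselves follow mechanically from the posterior draws of the lag matrices and the identified impact matrix via the companion form; a minimal sketch (illustrative Python, hypothetical names):

```python
import numpy as np

def impulse_responses(B_list, A0, horizon):
    """Structural impulse responses of a VAR(L) via the companion form:
    Theta_h = J F^h J' A0, with F the companion matrix of the lag
    matrices B_1, ..., B_L and A0 the structural impact matrix."""
    N = A0.shape[0]
    L = len(B_list)
    F = np.zeros((N * L, N * L))
    F[:N, :] = np.hstack(B_list)                  # top block row: [B_1 ... B_L]
    F[N:, :-N] = np.eye(N * (L - 1))              # identity blocks below
    J = np.hstack([np.eye(N), np.zeros((N, N * (L - 1)))])
    Fh = np.eye(N * L)
    out = []
    for _ in range(horizon + 1):
        out.append(J @ Fh @ J.T @ A0)
        Fh = Fh @ F
    return np.stack(out)                          # shape (horizon + 1, N, N)

B = [0.5 * np.eye(2)]                             # toy VAR(1) with persistence 0.5
irf = impulse_responses(B, np.eye(2), horizon=3)
```

In the paper the reported bands come from repeating this computation over the retained posterior draws and taking pointwise quantiles.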
\begin{sidewaysfigure} \caption{Impulse response functions to a contractionary sovereign risk shock in the partially pooled panel SVAR model} \bigskip \bigskip \includegraphics[scale=0.9]{epsFig_IRF_3} \smallskip \caption*{\textit{Notes}: See the notes in \Cref{irf_sr_pooled}} \label{irf_sr} \end{sidewaysfigure} Next, we assess the relative importance of the sovereign risk shock for output during the sample period by using historical decompositions to construct a counterfactual time series of each variable that removes the effect of sovereign risk shocks. To that end, the methodology of \cite{Kilian2014} is used. The contribution of the sovereign risk shock to output in each country is presented in \Cref{cf13}, which shows the difference between the actual output and the median output in the absence of sovereign risk shocks in each country, together with the 68 \% credible intervals. The results suggest that sovereign risk shocks contributed to the decline in industrial output in Italy, Portugal and Spain after 2011. Based on the median estimates, at the height of the crisis, output would have been approximately 3--5 \% higher in these countries in the absence of sovereign risk shocks. The effect of sovereign risk shocks on output is negligible in Ireland. However, the estimations that use unemployment as an alternative measure of economic activity show that sovereign risk shocks also contributed to a median increase in unemployment in Ireland, as discussed in \Cref{sec:alt}. In Portugal, the negative effect on output is more short-lived compared to Italy and Spain, which coincides with the country's re-access to financial markets after the government bailout. \begin{figure} \caption{The difference between the actual output and the counterfactual in the absence of the sovereign risk shock} \bigskip \includegraphics[scale=0.9]{epsFig_Counterfactual_1_3} \smallskip \caption*{\textit{Notes}: The solid line is the median difference between the actual output and the counterfactual.
Shaded areas mark the 68 \% credible intervals.} \label{cf13} \end{figure} \begin{table} \small \begin{threeparttable} \centering \caption{Forecast error variance decomposition for the sovereign risk shock} \bigskip \label{tab:fevd} \renewcommand{\arraystretch}{1.2} \begin{tabularx}{\linewidth}{lp{5mm}ccccp{5mm}cccc} \\ \toprule & & \multicolumn{4}{c}{Spain} & & \multicolumn{4}{c}{Ireland}\\ & & \multicolumn{4}{c}{\textit{Horizon}} & & \multicolumn{4}{c}{\textit{Horizon}}\\ \textit{Variable} & & 1 & 6 & 12 & 24 & & 1 & 6 & 12 & 24\\ \cline{1-1} \cline{3-6} \cline{8-11} Output&&0.02&0.04&0.09&0.14&&0.01&0.02&0.02&0.03\\ Prices&&0.10&0.09&0.07&0.08&&0.09&0.07&0.05&0.05\\ Loan volume &&0.04&0.06&0.10&0.14&&0.02&0.03&0.04&0.06\\ Retail lending rate&&0.15&0.06&0.03&0.03&&0.19&0.07&0.04&0.04\\ Banks' home bias&&0.06&0.07&0.09&0.10&&0.05&0.09&0.12&0.14\\ Government bond spread&&0.16&0.13&0.12&0.11&&0.09&0.09&0.08&0.07\\ Short rate&&0.09&0.10&0.11&0.11&&0.06&0.21&0.25&0.24\\ \\ & & \multicolumn{4}{c}{Italy} & & \multicolumn{4}{c}{Portugal}\\ & & \multicolumn{4}{c}{\textit{Horizon}} & & \multicolumn{4}{c}{\textit{Horizon}}\\ \textit{Variable} & & 1 & 6 & 12 & 24 & & 1 & 6 & 12 & 24\\ \cline{1-1} \cline{3-6} \cline{8-11} Output&&0.02&0.04&0.09&0.14&&0.02&0.04&0.07&0.10\\ Prices&&0.09&0.09&0.08&0.08&&0.10&0.10&0.09&0.09\\ Loan volume &&0.05&0.05&0.07&0.10&&0.05&0.05&0.07&0.10\\ Retail lending rate&&0.16&0.07&0.04&0.04&&0.15&0.07&0.04&0.04\\ Banks' home bias&&0.05&0.05&0.07&0.08&&0.07&0.10&0.12&0.13\\ Government bond spread&&0.14&0.14&0.13&0.11&&0.18&0.17&0.16&0.14\\ Short rate&&0.11&0.11&0.12&0.13&&0.09&0.15&0.18&0.17\\ \bottomrule \end{tabularx} \captionsetup{justification=centering} \begin{tablenotes} \small \item \textit{Notes:} Contribution of the sovereign risk shock to the forecast error variance at the given horizon. The forecast horizon is in months. The forecast error decomposition is based on the median of the impulse responses. 
\end{tablenotes} \end{threeparttable} \end{table} \Cref{tab:fevd} reports the relative contribution of the sovereign risk shock to the forecast error variance of the variables at various forecast horizons. The sovereign risk shock accounts for between 3 and 14 percent of the forecast error variance of output at the 24-month horizon. The forecast error variance of output increases over time, which points to the persistence of the sovereign risk shock. The estimates are similar to estimates by \cite{Bahaj2019}, who reports that the sovereign risk shock accounts for 15 percent of the forecast error variance of industrial output at a horizon greater than 18 months. Moreover, the sovereign risk shock accounts for approximately 6 to 14 percent of the forecast error variance of loan volumes at the two-year horizon. \subsection{Sovereign risk shocks and government bond spreads} As discussed in the previous section, the impulse response functions show that the sovereign risk shock affecting the financial sector tends to have a persistent effect on the government bond spread in each country. Next, we turn to discuss the contribution of the sovereign risk shock to the evolution of government bond spreads by carrying out a similar counterfactual exercise that eliminates the effect of the sovereign risk shock, as for output above. \begin{figure} \caption{The difference between the actual government bond spread and the counterfactual in the absence of the sovereign risk shock} \bigskip \includegraphics[scale=0.9]{epsFig_Counterfactual_6_3} \smallskip \caption*{\textit{Notes}: The solid line is the median difference between the actual government bond spread and the counterfactual. Shaded areas mark the 68 \% credible intervals.} \label{cf63} \end{figure} The differences between the actual and counterfactual time series are shown in \Cref{cf63}. 
The figure shows that the sovereign risk shock contributed to higher government bond spreads in all countries during the sovereign debt crisis, although the effect on the government bond spread in Ireland is more muted. In 2012, the sovereign risk shocks contributed to an increase in government bond spreads, with the median ranging from 75 basis points in Italy and Spain to approximately 150 basis points in Portugal. These values are in most cases below the estimates of the overall effect of unanticipated changes in government bond spreads in previous studies. \cite{Bahaj2019} reports that shocks to government bond spreads unrelated to economic conditions increased sovereign risk premiums by 100 basis points in Spain and by 150 basis points in Italy\footnote{\cite{Bahaj2019} reports the government bond spread of the two-year yield over the comparable German bond.}. The reported increase is even larger for Ireland and Portugal, where government bond spreads increased by more than 400 and 700 basis points, respectively. The estimates are also below those in \cite{Neri2013}, who reported that the increase in sovereign borrowing costs raised private borrowing costs in the crisis-hit countries by 1.3 percentage points on average. \Cref{tab:fevd} shows that the sovereign risk shock accounts for between 8 and 16 percent of the forecast error variance of the government bond spread at the 12-month horizon. This suggests that a non-negligible share of the variation in government bond spreads is explained by the sovereign risk shock that is transmitted via the banking sector, lending support to the transmission of bank risk to the sovereign. The results are consistent with the historical account of the evolution of the crisis, which began in Greece and Ireland and subsequently spread to the rest of the peripheral euro area countries.
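The variance shares in \Cref{tab:fevd} follow directly from the structural impulse responses; a minimal sketch (illustrative Python with a hypothetical helper `fevd`; the paper's reported decompositions are based on the median impulse responses):

```python
import numpy as np

def fevd(irf):
    """Forecast error variance decomposition from structural impulse
    responses. irf[h, i, k] is the response of variable i to a
    unit-variance shock k at horizon h. Returns the share of each
    shock in the h-step-ahead forecast error variance."""
    contrib = np.cumsum(irf ** 2, axis=0)         # variance contributed per shock
    total = contrib.sum(axis=2, keepdims=True)    # total forecast error variance
    return contrib / total

# toy check: a diagonal impact matrix with no propagation means each
# variable's forecast error variance is fully due to its own shock
irf = np.zeros((3, 2, 2))
irf[0] = np.eye(2)
shares = fevd(irf)
```

Summing squared responses over horizons gives the mean squared forecast error attributable to each shock; dividing by the row total converts these into the shares reported in the table.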
The results could be interpreted as evidence that sovereign risk largely passes through to the economy via the banking sector in all countries, and that a feedback loop exists between economic conditions and sovereign risk. \subsection{Real effects of credit supply shocks} In this section, we discuss how the credit supply shock contributed to real economic activity in relation to the sovereign risk shock during the sovereign debt crisis. The impulse responses of the endogenous variables to the credit supply shock in the pooled and the partially pooled model are displayed in \Cref{irf_cs_pooled} and \Cref{irf_cs}, respectively. Each shock is normalized such that the median response of the retail lending rate is 10 basis points on impact. As \Cref{irf_cs_pooled} shows, a contractionary credit supply shock leads to a persistent decline in industrial output. The effect on output is negative with high posterior probability for around 10 months. The partially pooled model displays a less persistent effect of the credit supply shock on output, as is shown in \Cref{irf_cs}. The negative effect on output dissipates with high posterior probability within a year in each country. A contractionary credit supply shock also leads to an immediate and protracted decline in loan volumes; however, the long-run effect is not precisely estimated. The increase in the retail lending rate lasts with high posterior probability for around a year. Moreover, a credit supply shock does not have a significant effect on the government bond spread and is more likely to reduce the home bias of domestic banks. \begin{figure} \caption{Impulse response functions to a contractionary credit supply shock in the pooled panel SVAR model} \bigskip \includegraphics[scale=0.9]{epsFig_IRF_1_pooled} \smallskip \caption*{\textit{Notes}: The solid line is the posterior median and the shaded regions mark the 68 \% credible intervals.
The shock is normalized, such that the lending rate increases by 10 basis points on impact.} \label{irf_cs_pooled} \end{figure} \begin{sidewaysfigure} \caption{Impulse response functions to a contractionary credit supply shock in the partially pooled panel SVAR} \bigskip \bigskip \includegraphics[scale=0.9]{epsFig_IRF_1} \smallskip \caption*{\textit{Notes}: See the notes in \Cref{irf_cs_pooled}} \label{irf_cs} \end{sidewaysfigure} \begin{figure} \caption{The difference between the actual output and the counterfactual in the absence of the credit supply shock} \bigskip \includegraphics[scale=0.9]{epsFig_Counterfactual_1_1} \smallskip \caption*{\textit{Notes}: The solid line is the median difference between the actual output and the counterfactual. Shaded areas mark the 68 \% credible intervals.} \label{cf11} \end{figure} Although the estimates are not precise, the historical decompositions, shown in \Cref{cf11}, suggest that the credit supply shock had a positive median effect on economic output in Italy and Spain between 2006 and 2008, prior to the Global Financial Crisis. However, the countries were negatively affected to varying degrees by disturbances in the supply of credit after the onset of the financial crisis in 2008. The contraction in output starting in 2008 was especially pronounced in Spain, where the beginning of the downturn coincided with the bursting of the Spanish property bubble. Moreover, the figure shows that credit supply shocks negatively affected real economic activity, particularly in Portugal around 2015. The results suggest that the sovereign risk shock accounts for a considerable share of the negative effect on economic output at the height of the crisis in 2012, but the credit supply shock also partly explains the decrease in output after the financial crisis.
\subsection{Discussion of results} As discussed above, the results suggest that unanticipated changes in the sovereign risk reduced economic output in Italy, Portugal and Spain at the height of the crisis in 2012. This finding supports the arguments made in the literature that sovereign risk shocks affect real economic activity via the bank-lending channel primarily due to domestic banks' large exposure to sovereign debt \citep{Bocola2016} and the existence of a feedback loop between sovereign risk and the real economy \citep{Brunnermeier2016}. The results also corroborate the findings of recent empirical studies using micro-level \citep{Acharya2018, Bofondi2017, DeMarco2019, Gennaioli2018, Popov2014} and country-level \citep{Bahaj2019} data that document a tightening of financial conditions following an increase in government bond spreads during the European sovereign debt crisis. A notable exception is Ireland, where economic output was largely unaffected by the sovereign risk shock. This suggests that bank recapitalizations at the early stage of the crisis might have helped to break the vicious cycle between banks and sovereigns. The results also suggest that the sovereign risk shock contributed positively to output via looser funding conditions in Spain, Portugal and Italy before the financial crisis. The positive effect of the credit supply shock on economic growth during the pre-crisis period is also documented by \cite{Bijsterbosch2015}. Moreover, in Portugal the negative effect of the sovereign risk shock largely dissipated after the government financial assistance program in 2012. These observations are important from a policy perspective in two ways. First, low government borrowing rates may have contributed to the overheating of the economy prior to the crisis, which suggests that tighter monetary policy could have helped to contain the economic boom.
Second, government bailout programs and actions by the ECB to address financial market fragmentation appear to have been effective in subduing the crisis by supporting the functioning of the financial sector. This suggests that policy reforms to prevent spillovers from the banking sector to the sovereign and vice versa, and actions to strengthen the risk tolerance of the financial sector, would play an important role in preventing future crises. Although we do not explicitly disentangle the channels through which sovereign risk affects bank lending, the persistent increase in the domestic holdings of sovereign debt following increases in sovereign risk suggests that the reduction in bank lending in crisis-hit countries may be attributed to crowding out of corporate lending in favor of risky sovereign debt. \cite{Acharya2018} and \cite{Becker2017} also document changes in banks' asset composition in response to sovereign risk, which might be the result of banks' risk shifting or moral suasion by domestic regulators (see, e.g., discussion in \cite{Acharya2018}). In addition to crowding out lending, the increase in exposure to sovereign risk might have exacerbated the crisis. This finding suggests that the risk-weighting associated with banks' government bond holdings should be reassessed and implies that risky sovereign debt should be replaced by a European safe-asset as a store of value, as proposed by \cite{Brunnermeier2016}. The results lend support to the view that credit supply shocks play a significant role in business cycle fluctuations in European countries (see, e.g., \cite{Hristov2012}). The results also show that credit supply shocks supported growth prior to the financial crisis in Italy, Portugal, and Spain, and then contributed negatively to output growth from 2008 to 2014. \subsection{Alternative measure of real economic activity} \label{sec:alt} The results of the baseline model are largely unaffected by an alternative measure of real economic activity.
To assess the robustness of the main results, we estimate the panel SVAR model using the unemployment rate for each country as the measure of economic activity. As in the baseline model, the VAR model includes a consumer price index, volume of loans to non-financial corporations, ratio of banks' holdings of domestic to foreign government debt, the retail lending rate on new loans, the government bond spread for each country and a proxy for the policy rate. The sign restrictions used to identify the three shocks are the same as in the baseline model (\Cref{tab:restrictions}), with the exception that the effects on the unemployment rate are assumed to be positive. The impulse response functions and the historical decompositions of the identified shocks are qualitatively similar to those obtained in the baseline models and do not alter the conclusion drawn from the baseline model. The impulse responses to the sovereign risk shock and the credit supply shock are presented in \Cref{irf_sr_u} and \Cref{irf_cs_u}, respectively. The impulse response functions to a contractionary sovereign risk shock show a large and persistent increase in the unemployment rate. As in the baseline model, an increase in sovereign risk corresponds with a persistent decrease in the volume of loans. Moreover, the immediate increase in the lending rate is likewise short-lived. The sovereign risk shock also has a persistent effect on the government bond spread, which is similar across the countries. Sovereign risk shocks affect the home bias of domestic banks in varying degrees across the countries. The effect of a sovereign risk shock increasing the government bond spread by 10 basis points on the home bias ranges from 1 basis point for Ireland to 200 basis points for Italy. 
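The sign-restriction identification can be illustrated with the standard rotation-draw algorithm: draw an orthogonal matrix uniformly from the Haar measure, form a candidate impact matrix from the Cholesky factor of the reduced-form covariance, and keep the draw only if the impact responses match the imposed sign pattern. A toy two-variable sketch, in which the covariance matrix and the sign pattern are illustrative rather than the estimated ones:

```python
import numpy as np

def random_rotation(n, rng):
    """Draw an orthogonal matrix uniformly (Haar measure) via QR."""
    q, r = np.linalg.qr(rng.standard_normal((n, n)))
    return q @ np.diag(np.sign(np.diag(r)))

def fix_signs(impact, signs):
    """Flip each shock column if needed; return None if the pattern fails.
    signs: +1 positive, -1 negative, 0 unrestricted."""
    out = impact.copy()
    for j in range(impact.shape[1]):
        for cand in (impact[:, j], -impact[:, j]):
            if all(s == 0 or s * c > 0 for s, c in zip(signs[:, j], cand)):
                out[:, j] = cand
                break
        else:
            return None
    return out

rng = np.random.default_rng(1)
sigma = np.array([[1.0, 0.2], [0.2, 1.0]])   # toy reduced-form covariance
chol = np.linalg.cholesky(sigma)
signs = np.array([[ 1, 1],                    # shock 1: (+, -); shock 2: (+, +)
                  [-1, 1]])
accepted = []
for _ in range(10000):
    candidate = fix_signs(chol @ random_rotation(2, rng), signs)
    if candidate is not None:
        accepted.append(candidate)
    if len(accepted) == 5:
        break
```

In set-identified models of this kind, the collection of accepted draws rather than a single rotation is what underlies the reported credible bands.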
\begin{sidewaysfigure}[htpb] \caption{Impulse response functions to a contractionary sovereign risk shock in the alternative model} \bigskip \bigskip \includegraphics[scale=0.9]{epsFig_IRF_3_U} \smallskip \caption*{See the notes in \Cref{irf_sr_pooled}.} \label{irf_sr_u} \end{sidewaysfigure} \begin{sidewaysfigure}[htpb] \caption{Impulse response functions to a contractionary credit supply shock in the alternative model} \bigskip \bigskip \includegraphics[scale=0.9]{epsFig_IRF_1_U} \smallskip \caption*{See the notes in \Cref{irf_sr_pooled}.} \label{irf_cs_u} \end{sidewaysfigure} The impulse response functions to a contractionary credit supply shock (\cref{irf_cs_u}) also show a persistent increase in unemployment. The positive impact on unemployment dissipates after two years in the median response. The shock is also related to a persistent decrease in lending volumes across all countries. As in the case of the sovereign risk shock, the effect on the lending rate quickly dissipates following the shock. The impulse response functions also display a negligible effect on the government bond spread and the home bias of domestic banks. The historical decompositions support the results from the baseline model. \Cref{cf13_u} shows that sovereign risk shocks led to a median increase in unemployment between 2012 and 2015. Although the effects are not precisely estimated, the historical decompositions indicate that sovereign risk shocks led to a median increase of one percentage point in unemployment during the crisis. In comparison to the baseline results, sovereign risk shocks appear to have contributed to higher unemployment in Ireland. The difference from the baseline results might arise from differences in the structure of the economy.
\begin{figure} \caption{The difference between the actual unemployment and the counterfactual in the absence of the sovereign risk shock in the alternative model} \bigskip \includegraphics[scale=0.9]{epsFig_Counterfactual_1_3_U} \smallskip \caption*{\textit{Notes}: The solid line is the median difference between the actual unemployment and the counterfactual. Shaded areas mark the 68 \% credible intervals.} \label{cf13_u} \end{figure} \section{Conclusion}\label{sec:conclusion} The purpose of this paper is to re-examine the spillover of sovereign risk to the real economy via the banking sector. We evaluate the macroeconomic effects of sovereign risk shocks during the sovereign debt crisis by estimating a sign-restricted panel SVAR model for Italy, Ireland, Portugal and Spain. The results show that an increase in government bond spreads contributed to a fall in economic activity in Portugal, Italy, and Spain, and to a lesser extent in Ireland, during the sovereign debt crisis. Moreover, the sovereign risk shock largely accounted for the disruptions in the credit supply during the debt crisis. The findings in this paper point to several avenues for future research. First, identifying other transmission channels of sovereign risk would provide evidence of the relative importance of the bank-lending channel in the pass-through of sovereign risk in explaining fluctuations in real economic activity. To this end, estimation of SVAR models incorporating the housing market would be interesting to investigate the contribution of the real estate markets to bank and sovereign risk. Second, assessing the sensitivity of the transmission channel to banks' exposure to sovereign debt would be of interest from a macroprudential policy perspective. \newpage \section*{Acknowledgements} I gratefully acknowledge financial support from the Academy of Finland (grant no. 308628), the Yrj{\"o} Jahnsson Foundation (grant no.
6609, 7016), the Foundation for the Advancement of Finnish Securities Markets, and the Nordea Bank Foundation. I would like to thank Markku Lanne, Henri Nyberg, Joshua Aizenman and two anonymous referees for their helpful comments on the manuscript. I also thank the participants at the FDPE and Helsinki GSE Econometrics Workshops and the Freie Universit{\"a}t Berlin seminar on Topics in Time Series Econometrics for useful comments. \bibliographystyle{apalike}
\section{Introduction} Since the discovery of the inverse scattering transform (IST), a great number of nonlinear differential equations have been solved using this technique (see for instance ref.\cite{Ablowitz} and the references therein). Most of the scientific effort has been focused on finding soliton solutions for these equations. As first observed by Scott-Russell in 1834, solitons have the property that they maintain their shape over long time-scales \cite{Scott}. From this observation the conclusion can be drawn that solitons which are observed in nature are stable solutions. In this paper we focus on the KdV-equation. The IST approach, as introduced by Gardner, Greene, Kruskal and Miura (GGMT), uses the inverse problem of the Schr\"{o}dinger equation to solve the KdV-equation. In their approach a solution of the KdV-equation can be regarded as a time-dependent potential of the Schr\"{o}dinger equation \cite{Gardner}. GGMT have presented a method to compute the time-dependence of the S-matrix that is used to solve the inverse problem of the Schr\"{o}dinger equation. Furthermore, they have shown that soliton solutions can be constructed using the discrete part of the spectrum of the Schr\"{o}dinger equation. Recently, it was discovered (by solving the KdV-equation using the continuous part of the spectrum of the Schr\"{o}dinger equation) that the KdV-equation also has singular solutions \cite{Dorren0}. It is also shown in ref.\cite{Dorren0} that the time-evolution of these singular solutions can be associated with a positive Lyapunov exponent. This implies that singular solutions of the KdV-equation can exhibit unstable behavior. It is remarkable that although the KdV-equation is one of the best-studied equations in the field of mathematical physics, the matter of the stability of this equation has not been investigated.
The classical argument explaining why solitons maintain their shape over long time-scales is that the nonlinear term in the KdV-equation cancels the effects of dispersion. However, in nature solitons are always contaminated with noise, and it is not clear whether the soliton retains its identity when noise is added to a pure soliton solution. This matter is related to the stability properties of solutions of the KdV-equation. In this paper we show that for the KdV-equation solitons possess stable behavior if the propagation velocity of the perturbation differs from the propagation velocity of the unperturbed soliton. For the singular solution this also holds, but in contrast to the soliton case, the amplitude of the perturbation can grow dramatically. This implies that the shape of the singular solution changes. This paper has the following structure. In Sec.2 we discuss the balance between nonlinearity and dispersion for non-dispersive solutions of the KdV-equation. It is shown that for small perturbations this balance is disturbed, and that the perturbation has a different propagation velocity than the unperturbed solution. In Sec.3, we derive a general expression for a perturbation of a solution of the KdV-equation. It is shown that the time-evolution of the perturbation consists of two parts. The first part represents the time-evolution of the perturbation in absence of the unperturbed solution. The second part represents the interaction between the soliton and the perturbation. In Sec.4, we give an expansion of the time-evolution of the perturbation in a series solution. This enables us to formulate an expression for the stability of solutions of the KdV-equation. In Sec.5, a numerical example is given to illustrate that the results that are derived in previous sections are also valid in more general cases. The results are discussed in Sec.6.
Technical matters concerning the Gelfand-Levitan-Marchenko equations are presented in an appendix. \section{Nonlinearity versus dispersion} Consider the KdV-equation: \begin{equation} \begin{array}{l} u_{xxx} - 6 u u_{x} + u_{t} = 0 \\ u(x,t=0) = u_{0}(x) \label{kdv} \end{array} \end{equation} In Eq.(\ref{kdv}), $u_{0}(x)$ represents the initial condition. Non-dispersive solutions of Eq.(\ref{kdv}) can be obtained by searching for solutions $u(x,t) = u(x-ct) \equiv u(z)$. In this special case Eq.(\ref{kdv}) reduces to: \begin{equation} u^{\prime \prime \prime} - 6 u u^{\prime} - c u^{\prime} =0 \label{kdv_nd} \end{equation} In the special case of non-dispersive solutions, the KdV-equation (\ref{kdv}) is thus reduced to a third-order ordinary nonlinear differential equation. In Eq.(\ref{kdv_nd}), the derivative $u^{\prime}$ stands for $\frac{d}{dz} u(z)$. We can reformulate Eq.(\ref{kdv_nd}) in the following way: \begin{equation} \partial_{z} \left( u^{\prime \prime} - 3 u^{2} - c u \right) = 0 \label{kdv_nd1} \end{equation} By integrating Eq.(\ref{kdv_nd1}), we find that Eq.(\ref{kdv_nd}) is equivalent to: \begin{equation} u^{\prime \prime} - 3 u^{2} - cu +m =0 \label{kdv_in1} \end{equation} In Eq.(\ref{kdv_in1}), the constant $m \in I\!\!R$ is an arbitrary integration constant. We can perform a further simplification by multiplying both sides of Eq.(\ref{kdv_in1}) with a factor $u^{\prime}$. The total result can be formulated in the following way: \begin{equation} \partial_{z} \left( \frac{1}{2} ( u^{\prime} )^{2} - u^{3} - \frac{1}{2} c u^{2} + mu \right) = 0 \label{int1} \end{equation} By performing a further integration we find that Eq.(\ref{int1}) is equivalent to: \begin{equation} \frac{1}{2} ( u^{\prime} )^{2} - u^{3} - \frac{1}{2} c u^{2} + mu +n = 0 \label{kdv_in2} \end{equation} In Eq.(\ref{kdv_in2}), the constant $n \in I\!\!R$ is another arbitrary integration constant.
In the special case that $m=n=0$, Eq.(\ref{kdv_in2}) is equivalent to: \begin{equation} u^{\prime} = \pm u \sqrt{2u+c} \label{kdv_end} \end{equation} We have obtained that in the case of non-dispersive solutions the KdV-equation (\ref{kdv}) is equivalent to Eq.(\ref{kdv_end}). Eq.(\ref{kdv_end}) is an ordinary first-order nonlinear differential equation that can be solved directly. This leads to the following result: \begin{equation} u(z) = \frac{ \frac{4 c}{D_{0}} e^{ \pm z \sqrt{c} } }{ \left( 1 - \frac{2}{D_{0}} e^{ \pm z \sqrt{c} } \right)^{2} } \label{sol_rew} \end{equation} In Eq.(\ref{sol_rew}) the constant $D_{0} \in I\!\!R$ can be chosen arbitrarily and has to be determined by the initial condition. If the constant $D_{0}$ is negative, the solution (\ref{sol_rew}) can be written as a soliton solution: \begin{equation} u(z) = - \frac{c}{2} \mbox{sech}^{2} \left( \frac{\sqrt{c}}{2} z + x_{0} \right) \label{soluton} \end{equation} If the constant $D_{0}$ is positive, this reduction does not take place. It follows from Eq.(\ref{sol_rew}) that in this case the solution of the KdV-equation has a singularity, but the solution propagates without dispersion. It has been argued in ref.\cite{Dorren0} that these singular solutions can be associated with a time-evolution having a positive Lyapunov exponent. This leads to the conclusion that apart from the stable soliton solutions, the KdV-equation also has unstable solutions. Soliton solutions are understood to be stable solutions of the KdV-equation because the effects of dispersion and nonlinearity cancel each other. It is tempting to choose Eq.(\ref{kdv_end}) as a starting point for the stability analysis of the KdV-equation, since Eq.(\ref{kdv_end}) is an ordinary differential equation to which all the standard techniques of stability analysis can be applied.
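The reduction above can be spot-checked symbolically. The sketch below (using sympy) verifies that the traveling sech-squared pulse obtained from Eq.(\ref{sol_rew}) with negative $D_{0}$ satisfies the KdV-equation (\ref{kdv}); the numerical evaluation point is arbitrary:

```python
import sympy as sp

x, t, c, x0 = sp.symbols('x t c x0')
z = x - c*t
# the non-dispersive pulse obtained from Eq. (sol_rew) with negative D_0
u = -(c/2) * sp.sech(sp.sqrt(c)/2 * z + x0)**2

# KdV residual u_xxx - 6 u u_x + u_t, cf. Eq. (kdv)
residual = sp.diff(u, x, 3) - 6*u*sp.diff(u, x) + sp.diff(u, t)
# spot-check the residual at an arbitrary numerical point
val = residual.subs({x: 0.3, t: 0.2, c: 1.5, x0: 0.1}).evalf()
print(abs(val))  # numerically zero
```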
However, due to the fact that Eq.(\ref{kdv_end}) is only valid for non-dispersive solutions, the corresponding stability analysis is only valid for solutions of the KdV-equation (including a perturbation) that maintain their shape. For non-dispersive solutions of the KdV-equation, the effects of dispersion and non-linearity are in balance. If a perturbation is added to a non-dispersive solution of the KdV-equation, the balance between the dispersion and non-linearity is disturbed. In order to investigate this effect we rewrite the KdV-equation in the following way: \begin{equation} u_{t} = - c u_{x} + ( c u_{x} - u_{xxx} ) + 6 u u_{x} \label{disvnl} \end{equation} If the last two terms on the right-hand side of Eq.(\ref{disvnl}) are removed, we obtain: \begin{equation} u_{t} = -c u_{x} \label{eqnl} \end{equation} Eq.(\ref{eqnl}) has the non-dispersive solution $u(x,t)=g(x-ct)$. If the KdV-equation has non-dispersive solutions, the second and the third term on the right-hand side of Eq.(\ref{disvnl}) cancel each other. The $c u_{x} - u_{xxx}$ term on the right-hand side of Eq.(\ref{disvnl}) describes the dispersion of the solution $u(x,t)$. The $6 u u_{x}$ term on the right-hand side describes the effects of the nonlinearity. The solid line in Fig.1a represents a soliton at time $t=0$. The dashed line in Fig.1a represents the soliton that is contaminated with a $10 \%$ amplitude error. In Fig.1b, the balance between the effects of nonlinearity and the effects of dispersion is shown for the unperturbed soliton. The short-dashed line in Fig.1b represents the effect of dispersion as given by the $c u_{x} - u_{xxx}$ term on the right-hand side of Eq.(\ref{disvnl}). The long-dashed line describes the nonlinearity as given by the $6 u u_{x}$ term on the right-hand side of Eq.(\ref{disvnl}). The sum of these curves is given by the solid line in Fig.1b. It follows from Fig.1b that the dispersion and nonlinearity are in balance.
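The balance visible in Fig.1b can be reproduced numerically. Under the decomposition of Eq.(\ref{disvnl}), the dispersion term $c u_{x} - u_{xxx}$ and the nonlinearity $6 u u_{x}$ cancel for the exact pulse but not for an amplitude-perturbed one; in the finite-difference sketch below (grid and $10\%$ perturbation chosen for illustration) the residual jumps from finite-difference noise to order one:

```python
import numpy as np

c = 4.0
x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]

def balance(u):
    """Dispersion (c*u_x - u_xxx) plus nonlinearity (6*u*u_x), cf. Eq. (disvnl)."""
    ux = np.gradient(u, dx)
    uxxx = np.gradient(np.gradient(ux, dx), dx)
    return (c*ux - uxxx) + 6*u*ux

# exact non-dispersive pulse (traveling-wave solution at t = 0)
soliton = -(c/2) / np.cosh(np.sqrt(c)/2 * x)**2
print(np.max(np.abs(balance(soliton))))        # small: finite-difference error only
print(np.max(np.abs(balance(1.1*soliton))))    # order one: balance is destroyed
```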
This is consistent with the fact that the soliton propagates with velocity $c$ while maintaining its shape. If the soliton is contaminated with a small amplitude error, the dispersion and nonlinearity are no longer in balance. In Fig.1c, the examples of Fig.1b are repeated for the contaminated soliton shown by the dashed line in Fig.1a. As in Fig.1b, the short-dashed line in Fig.1c represents the effect of dispersion as given by the $c u_{x} - u_{xxx}$ term on the right-hand side of Eq.(\ref{disvnl}). The long-dashed line describes the nonlinearity as given by the $6 u u_{x}$ term on the right-hand side of Eq.(\ref{disvnl}). The solid line in Fig.1c represents the sum of the dispersion and the nonlinearity. An imbalance between the dispersion and the nonlinearity is introduced due to the amplification of the nonlinearity. As a result of the fact that the nonlinearity and the dispersion no longer cancel, Eq.(\ref{eqnl}) (which is only valid for non-dispersive solutions of the KdV-equation) does not describe the time-evolution of the perturbed solution. The propagation of the contaminated soliton is governed by Eq.(\ref{disvnl}). The imbalance between the effects of dispersion and nonlinearity is given by the solid line in Fig.1c, which is equal to the sum of the last two terms on the right-hand side of Eq.(\ref{disvnl}). In Fig.1d, the time-derivative of the perturbation is plotted. It turns out that the time-derivative of the perturbation is not equal to a constant times the space derivative of the perturbation. This implies that the time-evolution of the perturbation is dispersive. In the following sections this behavior is investigated analytically. Finally, in Fig.1e, the finite-difference solution for the contaminated soliton is shown at $t=2.2$ sec.
It is observed that as a consequence of the imbalance between the nonlinearity and dispersion, the perturbation propagates with a different velocity than the unperturbed soliton, and during the course of time, the contaminated soliton takes its most natural form. From this experiment, we can conclude that as a result of the imbalance between the effects of nonlinearity and dispersion, the perturbation of a solution of the KdV-equation propagates with a different velocity. As a result of this the soliton and the perturbation separate during the course of time. In Fig.2ab the examples of Fig.1 are repeated for the singular solution. The solid line in Fig.2a describes the singular solution of the KdV-equation at $t=0$. The short-dashed line in Fig.2a describes the effects of dispersion as given by the $c u_{x} - u_{xxx}$ term on the right-hand side of Eq.(\ref{disvnl}). The long-dashed line describes the effects of nonlinearity as described by the $6 u u_{x}$ term on the right-hand side of Eq.(\ref{disvnl}). Similarly as for the soliton, it turns out that for the singular solution the effects of dispersion and nonlinearity are also in balance. If a small amplitude error is made, this balance is no longer present. In Fig.2b, the effects of dispersion and nonlinearity are plotted. The solid line in Fig.2b represents the sum of the nonlinearity and dispersion in the contaminated case. It turns out that the effects of nonlinearity and dispersion are no longer in balance. As a result of this, the perturbation propagates in the opposite direction to the unperturbed wave. From the simple examples in this section, we can conclude that if non-dispersive solutions of the KdV-equation are contaminated with small perturbations, dispersion effects are introduced. In the coordinate frame that moves with the unperturbed solution, the imbalance between the dispersion and nonlinearity is very close to the $x$-derivative of the perturbation.
This implies that the perturbation has a non-zero velocity in this reference frame, so that the perturbation separates with time from the unperturbed solution. \section{The stability of localized solutions} The result in the previous section indicates that perturbations on the initial condition of the KdV-equation propagate with a different velocity than the unperturbed solution. In this section the effects of different perturbations are investigated in a more general fashion. If a perturbation $u(x,t) \rightarrow u(x,t) + f(x,t)$ is substituted in Eq.(\ref{kdv}), the following differential equation for the perturbation $f(x,t)$ can be derived: \begin{equation} \begin{array}{l} f_{xxx} - 6 \left( u f_{x} + f u_{x} + f f_{x} \right) + f_{t} = 0 \\ f(x,t=0) = f_{0}(x) \end{array} \label{d_pert} \end{equation} Eq.(\ref{d_pert}) represents a differential equation for the perturbation $f(x,t)$, which depends on the unperturbed solution of the KdV-equation $u(x,t)$. Eq.(\ref{d_pert}) can be solved using an inverse scattering technique if a suitable Lax pair is constructed. However, because the perturbed solution $u(x,t)+f(x,t)$ satisfies the KdV-equation, the solution $f(x,t)$ of Eq.(\ref{d_pert}) can be computed directly using the techniques described in ref.\cite{Dorren}. In Appendix A, a brief overview of these methods is given. As a starting point, we assume that the reflection coefficient corresponding to the unperturbed initial condition $u_{0}(x)$ undergoes a perturbation: \begin{equation} R(k,t=0) \rightarrow R(k,t=0) + \overline{R}(k,t=0) \label{ref_pert} \end{equation} In Eq.(\ref{ref_pert}), $R(k,t)$ describes the reflection coefficient corresponding to the initial condition.
Since the relation between the reflection coefficient and the potential function is nonlinear, the perturbation of the reflection coefficient $\overline{R}(k,t=0)$ cannot be associated with the spectral reflection coefficient corresponding to the initial condition $f_{0}(x)$. However, we can construct any initial condition $f_{0}(x)$ by imposing special conditions on the spectral reflection coefficient $\overline{R}(k,t)$. By this we mean that we can compute and analyze the time-evolution of different classes of perturbations $f(x,t)$ by changing the analytical structure of $\overline{R}(k,t)$. If both the unperturbed reflection coefficient and the perturbation of the reflection coefficient are rational functions of the wave-number, analytic expressions for $f(x,t)$ can be obtained. Suppose $R(k,t)$ is a spectral reflection coefficient that can be associated with the unperturbed solution $u(x,t)$. In Appendix A, analytical expressions for the solution $u(x,t)$ are derived if the reflection coefficient $R(k,t)$ is a rational function of the wavenumber. From the unperturbed reflection coefficient $R(k,t)$, we can construct a kernel $K(x,x,t)$ (Appendix A): \begin{equation} K(x,x,t) = \frac{ {\cal D}^{\prime}(x,t) }{ {\cal D}(x,t) } \label{kernel} \end{equation} In Eq.(\ref{kernel}), the prime stands for the derivative with respect to the space-coordinate $x$.
The determinant ${\cal D}(x,t)$ in Eq.(\ref{kernel}) is given by: \begin{equation} {\cal D}(x,t) = \mbox{det} \left\{ \delta_{ij} - (p_{i}+p_{j})^{-1} R_{j} e^{2i(p_{j}x + 4p_{j}^{3}t)} \right\} \label{deter} \end{equation} In Eq.(\ref{deter}), $p_{i}$ are the poles and $R_{i}$ the residues of the unperturbed reflection coefficient $R(k,t)$. Solutions of the KdV-equation can be derived by taking the following derivative: \begin{equation} u(x,t) = -2 \frac{d}{dx} K(x,x,t) \label{sol_kdv} \end{equation} If the reflection coefficient $R(k,t)$ is contaminated with a small perturbation $\overline{R}(k,t)$, Eq.(\ref{deter}) contains the poles and residues of both the unperturbed reflection coefficient $R(k,t)$ and the reflection coefficient corresponding to the perturbation $\overline{R}(k,t)$. It is shown in ref.\cite{Dorren} that if the reflection coefficient $R(k,t)$ undergoes a perturbation as given in Eq.(\ref{ref_pert}), the determinant ${\cal D}(x,t)$ can be expanded into the following series: \begin{equation} {\cal D}(x,t) \rightarrow {\cal D}(x,t) + {\cal E}(x,t) \label{detpert} \end{equation} In Eq.(\ref{detpert}), ${\cal D}(x,t)$ is the determinant (\ref{deter}) in absence of perturbations. The effect of the perturbation is expressed in the determinant ${\cal E}(x,t)$.
If Eq.(\ref{detpert}) is substituted in Eq.(\ref{kernel}) we obtain the following result: \begin{equation} K(x,x,t) \rightarrow \frac{ {\cal D}^{\prime}(x,t) + {\cal E}^{\prime}(x,t) }{ {\cal D}(x,t) + {\cal E}(x,t) } \label{kerpert} \end{equation} Using some basic algebra, the kernel associated with the unperturbed solution $u(x,t)$ can be separated from the right-hand side of Eq.(\ref{kerpert}): \begin{equation} K(x,x,t) \rightarrow \frac{ {\cal D}^{\prime}(x,t) }{ {\cal D}(x,t) } + \frac{ {\cal D}(x,t) {\cal E}^{\prime}(x,t) - {\cal D}^{\prime}(x,t) {\cal E}(x,t) }{ {\cal D}(x,t) [ {\cal D}(x,t) + {\cal E}(x,t) ] } \label{xxxx} \end{equation} The term ${\cal D}^{\prime}(x,t)/{\cal D}(x,t)$ in Eq.(\ref{xxxx}) can be identified with the time-evolution of the unperturbed problem. From Eq.(\ref{xxxx}), we can identify an expression for the perturbation $f(x,t)$: \begin{equation} f(x,t) = - 2 \frac{d}{dx} \left\{ \frac{ {\cal D}(x,t) {\cal E}^{\prime}(x,t) - {\cal D}^{\prime}(x,t) {\cal E}(x,t) }{ {\cal D}(x,t) [ {\cal D}(x,t) + {\cal E}(x,t) ] } \right\} \label{solupert} \end{equation} It should be realized that in the determinant ${\cal E}(x,t)$ the poles and residues of both the reflection coefficients $R(k,t)$ and $\overline{R}(k,t)$ are present. It follows from Eq.(\ref{xxxx}) that the perturbation $f(x,t)$ is large with respect to the unperturbed solution if the denominator ${\cal D}(x,t)[ {\cal D}(x,t) + {\cal E}(x,t)]$ of Eq.(\ref{solupert}) is small. As illustrated in the following examples, in this case the perturbation $f(x,t)$ can dominate the total solution of the KdV-equation. In the following examples, we consider the case in which the unperturbed solution of the KdV-equation has one single pole ($p=i \beta$) and one residue ($R=id$). The unperturbed solution of the KdV-equation has either soliton-like behavior as in Eq.(\ref{soluton}) or singular behavior depending on the position of the pole and the residue. 
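For the single-pole case, Eqs.(\ref{deter})--(\ref{sol_kdv}) can be evaluated directly. The sketch below builds the $1\times 1$ determinant for $p=i\beta$, $R=id$ with the illustrative values $\beta=1$, $d=-1$, and checks numerically that $u = -2\,\partial_{x}\!\left({\cal D}^{\prime}/{\cal D}\right)$ reproduces a one-soliton profile (the phase $\ln 2/2$ follows from these values of $d$ and $\beta$):

```python
import numpy as np

beta, d = 1.0, -1.0                      # pole p = i*beta, residue R = i*d
x = np.linspace(-10.0, 10.0, 2001)
t = 0.5
dx = x[1] - x[0]

# 1x1 determinant from Eq. (deter): for p = i*beta the exponent
# 2i(p x + 4 p^3 t) becomes -2*beta*x + 8*beta**3*t
D = 1.0 - (d / (2.0*beta)) * np.exp(-2.0*beta*x + 8.0*beta**3*t)
K = np.gradient(np.log(D), dx)           # K(x,x,t) = D'/D, Eq. (kernel)
u = -2.0*np.gradient(K, dx)              # Eq. (sol_kdv)

# explicit one-soliton profile with the phase fixed by these d and beta
u_exact = -2.0*beta**2 / np.cosh(beta*(x - 4.0*beta**2*t) + np.log(2.0)/2)**2
print(np.max(np.abs(u[5:-5] - u_exact[5:-5])))  # small finite-difference error
```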
In the following experiment, in the case of the soliton we choose $d=-1$ and $\beta =1$. We can illustrate the effects of perturbations on the soliton by adding a certain number of poles and residues to the unperturbed determinant ${\cal D}(x,t)$. In the lower panel of Fig.3a, the time-evolution of a contaminated soliton $u(x,t)$ is plotted. As a reference, in the upper panel of Fig.3a the time-evolution of the unperturbed soliton is given. It can be observed from Fig.3a that the effects of the contamination either spread out or travel with a different velocity. This implies that after a certain amount of time the unperturbed soliton and the effects of the perturbation separate. This is clearer in Fig.3b. The short-dashed line in Fig.3b indicates the initial condition and the solid line indicates the unperturbed initial condition. The long-dashed line indicates the solution at $t=1$. In the example of Fig.3, the positions of the poles and residues are chosen in such a way that the numerical value of the denominator of Eq.(\ref{solupert}) does not differ significantly from the numerical value of ${\cal D}(x,t)$. As a result of this the perturbation remains in the same order of magnitude as the unperturbed solution. The situation dramatically changes if we choose $d=0.01$ and $\beta =1$ and keep fixed the positions of all the other 9 poles and residues which specify the perturbation. By choosing $d=0.01$ and $\beta =1$, the unperturbed determinant ${\cal D}(x,t)$ can be equal to zero for certain values of $x$ and $t$. Because of the analytical structure of the determinant (\ref{detpert}), the determinant ${\cal E}(x,t)$ in this case also has zeros for certain values of $x$ and $t$. The time-evolution is in this case plotted in the lower panel of Fig.4. This result has to be compared with the time-evolution of the unperturbed case which is given in the upper panel in Fig.4. It follows that in this special case, the perturbed solution consists of two different branches.
The first branch propagates with a propagation velocity similar to that of the unperturbed solution, but it has large amplitude fluctuations on the characteristic of the unperturbed solution. The second part propagates with a different velocity than the unperturbed solution. In the following section, we derive analytical expressions for these different parts of the perturbation. We want to remark that a small perturbation of a singular solution at $t=0$ can always be constructed by choosing the positions of the poles and residues properly. It follows from the structure of equation (\ref{solupert}) that these small perturbations can grow when ${\cal D}(x,t)[{\cal D}(x,t) + {\cal E}(x,t)]$ is small. From the results in this section we can conclude that the soliton exhibits stable behavior whereas the singular solution is unstable. Eq.(\ref{solupert}) gives a general expression for the perturbation of a solution of the KdV-equation. The perturbation is small with respect to the unperturbed solution if the unperturbed solution has no poles close to the origin in the complex plane, since the denominator ${\cal D}(x,t) + {\cal E}(x,t)$ cannot be zero. In contrast to this, the perturbation is large with respect to the unperturbed solution if the denominator ${\cal D}(x,t) + {\cal E}(x,t)$ in Eq.(\ref{solupert}) is nearly singular. This is the case if the unperturbed problem has poles close to the origin in the complex plane. This means that the singular solution is sensitive to noise. Moreover, for both the soliton and the singular solution the propagation velocity of the perturbation, which determines whether the perturbation separates from the unperturbed solution, is a crucial parameter for the stability. The propagation velocity and the amplitude of the perturbation depend on the positions of the poles and residues of the perturbation. In the following section we study this in more detail.
\section{A series solution for $f(x,t)$} In order to analyze the behavior of the perturbation $f(x,t)$ it is convenient to formulate this function as a series solution. Our starting point is the Marchenko equation without bound states in the wave-number domain as given in Appendix A. If the reflection coefficient undergoes a perturbation (\ref{ref_pert}), we find the following relation: \begin{displaymath} F(k,x,t)= 1+ \frac{1}{2 \pi i} \lim_{\epsilon \rightarrow 0+} \int_{-\infty}^{\infty} dk^{\prime} \frac{ \left\{ R(k^{\prime},t=0) + \overline{R}(k^{\prime},t=0) \right\} F(k^{\prime},x,t)exp[2i(k^{\prime}x + 4 \{ k^{\prime} \}^{3} t)] }{ k^{\prime} + k + i \epsilon }= \end{displaymath} \begin{equation} 1+ \int_{-\infty}^{\infty} C(k,k^{\prime},t) F(k^{\prime},x,t) dk^{\prime}+ \int_{-\infty}^{\infty} \overline{C}(k,k^{\prime},t) F(k^{\prime},x,t) dk^{\prime} \label{fexp_c} \end{equation} The function $F(k,x,t)$ is related to the kernel $K(x,y,t)$ by the following Fourier transform: \begin{equation} K(x,y,t)=(2\pi)^{-1} \int_{-\infty}^{\infty} dk e^{-ik(y-x)}( F(k,x,t) -1 ) \label{kertran_c} \end{equation} Furthermore, the kernel $C(k,k^{\prime},t)$ in Eq.(\ref{fexp_c}) is given by: \begin{equation} C(k,k^{\prime},t) = \lim_{\epsilon \rightarrow 0+} \frac{1}{2i\pi} \frac{ R(k^{\prime},t=0) e^{2i(k^{\prime}x + 4 \{ k^{\prime} \}^{3} t )} }{ k^{\prime} + k + i \epsilon } \label{cker_c} \end{equation} The kernel $\overline{C}(k,k^{\prime},t)$ in Eq.(\ref{fexp_c}) is defined by: \begin{equation} \overline{C}(k,k^{\prime},t) = \lim_{\epsilon \rightarrow 0+} \frac{1}{2i\pi} \frac{ \overline{R}(k^{\prime},t=0) e^{2i(k^{\prime}x + 4 \{ k^{\prime} \}^{3} t )} }{ k^{\prime} + k + i \epsilon } \label{cker1_c} \end{equation} Eq.(\ref{fexp_c}) can be represented schematically using a Dyson representation as given in Fig.5. Using an iteration technique, the Dyson series of Fig.5 can be expanded. The result is given in Fig.6. 
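Before turning to the diagrams, the iteration itself can be made concrete with a small numerical sketch (our illustration: the Gaussian kernel, the grid, and the kernel norm are arbitrary stand-ins, not the kernels (\ref{cker_c}) and (\ref{cker1_c})). It discretizes an equation of the form $F = 1 + \int C\,F\,dk^{\prime}$ and checks that the iterated (Dyson) series converges to the direct solution:

```python
import numpy as np

# Discretize a toy integral equation F(k) = 1 + \int C(k,k') F(k') dk'
# on a uniform grid.  The smooth, small-norm Gaussian kernel below is an
# arbitrary stand-in, chosen so that the Neumann (Dyson) series converges.
n, half_width = 200, 5.0
k = np.linspace(-half_width, half_width, n)
dk = k[1] - k[0]
C = 0.05 * np.exp(-(k[:, None] ** 2 + k[None, :] ** 2))  # toy kernel
A = C * dk  # absorb the quadrature weight into the matrix

# Direct solution of the linear system (I - A) F = 1.
F_direct = np.linalg.solve(np.eye(n) - A, np.ones(n))

# Dyson iteration: F_{m+1} = 1 + A F_m, i.e. summing the series term by term.
F_iter = np.ones(n)
for _ in range(60):
    F_iter = 1.0 + A @ F_iter
```

Each pass of the loop adds one more order of diagrams; convergence requires the spectral radius of the discretized kernel to be below one, which mirrors the convergence condition of the series expansion.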
It is observed from Fig.6 that the total expression for the function $F(k,x,t)$ consists of three parts. The first part can be identified with all diagrams consisting of solid dots only. This series of diagrams represents the time-evolution of the unperturbed solution. The second series consists of diagrams having open dots only. This series of diagrams represents the time-evolution of the perturbation in the absence of the unperturbed solution. The remaining diagrams consist of a combination of both solid and open dots. This series of diagrams represents the interaction between the unperturbed solution and the perturbation. We can formally solve the Dyson equation by iteration. The solution is shown by the diagrams in Fig.6, and is given by: \begin{displaymath} F(k,x,t) = 1 + \int_{-\infty}^{\infty} C(k,k^{\prime},t) dk^{\prime} + \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} C(k,k^{\prime},t)C(k^{\prime},k^{\prime\prime},t) dk^{\prime} dk^{\prime \prime} + \cdots \end{displaymath} \begin{displaymath} + \int_{-\infty}^{\infty} \overline{C}(k,k^{\prime},t) dk^{\prime} + \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \overline{C}(k,k^{\prime},t) \overline{C}(k^{\prime},k^{\prime\prime},t) dk^{\prime} dk^{\prime \prime} + \cdots \end{displaymath} \begin{equation} + \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} C(k,k^{\prime},t)\overline{C}(k^{\prime},k^{\prime\prime},t) dk^{\prime} dk^{\prime \prime} + \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \overline{C}(k,k^{\prime},t)C(k^{\prime},k^{\prime\prime},t) dk^{\prime} dk^{\prime \prime} + \cdots \label{tot_ser} \end{equation} If both $R(k,t=0)$ and $\overline{R}(k,t=0)$ are rational functions of the wave-number, the integrations in the series (\ref{tot_ser}) can be carried out analytically by performing a contour integration in $\mbox{{\rm C}$\! \! \! $\raisebox{.45ex}{\tiny{$|$}}$\; $}^{+}$. 
This is justified by the fact that the reflection coefficient behaves as $R(k,t=0) \rightarrow {\cal O}(1/k)$ for $k \rightarrow \infty$. The poles of the denominators in Eq.(\ref{tot_ser}) are all situated in $\mbox{{\rm C}$\! \! \! $\raisebox{.45ex}{\tiny{$|$}}$\; $}^{-}$, so the only contribution to the integrals of Eq.(\ref{tot_ser}) comes from the poles of $R(k,t=0)$ which are situated in $\mbox{{\rm C}$\! \! \! $\raisebox{.45ex}{\tiny{$|$}}$\; $}^{+}$. We write Eq.(\ref{tot_ser}) in the following way: \begin{equation} F(x,k,t) - 1 = F_{un}(x,k,t) + F_{pe}(x,k,t) + F_{int}(x,k,t) \label{all} \end{equation} The function $F_{un}(x,k,t)$ only contains contributions of $R(k,t)$ and can be identified with the time-evolution of the unperturbed solution (solid-dot diagrams). The function $f(x,t)$ consists of contributions of both $F_{pe}(x,k,t)$ and $F_{int}(x,k,t)$. The term $F_{pe}(x,k,t)$ only contains contributions of $\overline{R}(k,t)$ and can be identified with the time-evolution of the perturbation in the absence of $u(x,t)$ (open-dot diagrams). The term $F_{int}(x,k,t)$ contains contributions of both $R(k,t)$ and $\overline{R}(k,t)$ and represents the interaction between the unperturbed solution $u(x,t)$ and the perturbation (diagrams consisting of solid dots and open dots). In the following we analyze these three terms separately. \subsection{The time-evolution of the non-interaction terms} Suppose the unperturbed reflection coefficient $R(k,t)$ has $N$ poles. 
It follows from Eq.(\ref{tot_ser}) and Eq.(\ref{all}) that the contribution to the function $F(k,x,t)$ from the terms containing only the reflection coefficient $R(k,t)$ (solid-dot diagrams) is equal to: \begin{displaymath} F_{un}(x,k,t) = \sum_{i=1}^{N} \frac{R_{i}}{k+p_{i}} e^{2i(p_{i}x +4p_{i}^{3}t)} + \sum_{i,j=1}^{N} \frac{R_{i}R_{j}} {(k+p_{j})(p_{i}+p_{j})} e^{2i \{ (p_{i}+p_{j})x + 4(p_{i}^{3} + p_{j}^{3})t \} } \end{displaymath} \begin{equation} + \sum_{i,j,l=1}^{N} \frac{R_{i}R_{j}R_{l}} {(k+p_{l})(p_{j}+p_{l})(p_{i}+p_{j})} e^{2i \{ (p_{i}+p_{j}+p_{l})x + 4( p_{i}^{3} + p_{j}^{3} +p_{l}^{3})t \} } + \cdots, \end{equation} where $p_{i}$ and $R_{i}$ are the poles and residues of the unperturbed reflection coefficient. If the Fourier transform (\ref{kertran_c}) is now performed on the unperturbed part of $F_{un}(x,k,t)$, we obtain the following expression for the kernel $K_{un}(x,y,t)$: \begin{displaymath} K_{un}(x,y,t) = i \sum_{i=1}^{N} R_{i} e^{ip_{i}(x+y) +8ip_{i}^{3}t} + i \sum_{i,j=1}^{N} \frac{R_{i}R_{j}}{p_{i}+p_{j}}e^{ip_{j}(x+y)} e^{2ip_{i}x} e^{8i(p_{i}^{3} + p_{j}^{3})t} + \end{displaymath} \begin{equation} i \sum_{i,j,l=1}^{N} \frac{R_{i}R_{j}R_{l}}{(p_{j}+p_{l})(p_{i}+p_{j})} e^{ip_{l}(x+y)}e^{2i(p_{i}+p_{j})x} e^{8i(p_{i}^{3} + p_{j}^{3} + p_{l}^{3} )t} + \cdots \label{ker_ser_c} \end{equation} After putting $y=x$ and taking the derivative: \begin{equation} u_{un}(x,t) = - 2 \frac{d}{dx} K_{un}(x,x,t), \end{equation} the following expression for the unperturbed solution is obtained. 
\begin{displaymath} u_{un}(x,t)= 4 \sum_{i=1}^{N} R_{i}p_{i} e^{2i(p_{i}x+4p_{i}^{3}t)} + 4 \sum_{i,j=1}^{N} R_{i}R_{j}e^{2i \{ (p_{i}+p_{j})x + 4(p_{i}^{3} + p_{j}^{3} ) t \} } + \end{displaymath} \begin{equation} 4 \sum_{i,j,l=1}^{N} \frac{(R_{i}R_{j}R_{l})(p_{i}+p_{j}+p_{l})}{(p_{i}+p_{j})(p_{j}+p_{l})} e^{2i \{ (p_{i}+p_{j}+p_{l})x + 4(p_{i}^{3} +p_{j}^{3} + p_{l}^{3})t \}} + \cdots \label{part_un} \end{equation} This result was already obtained in ref.\cite{Dorren0}. From this result we can conclude that a general solution of the KdV-equation can be expanded in an infinite series of exponential basis functions. In a similar manner we can derive the time-evolution of the contributions to the Dyson series in Fig.6 for all the terms that can be identified with the perturbation only (open-dot diagrams). Suppose the perturbation of the reflection coefficient $\overline{R}(k,t)$ has $M$ poles; it then follows, using a similar argument as for the evaluation of the term $u_{un}(x,t)$, that: \begin{displaymath} u_{pe}(x,t)= 4 \sum_{i=1}^{M} \overline{R}_{i} \overline{p}_{i} e^{2i(\overline{p}_{i}x+4\overline{p}_{i}^{3}t)} + 4 \sum_{i,j=1}^{M} \overline{R}_{i}\overline{R}_{j} e^{2i \{ (\overline{p}_{i}+\overline{p}_{j})x + 4(\overline{p}_{i}^{3} + \overline{p}_{j}^{3} ) t \} } + \end{displaymath} \begin{equation} 4 \sum_{i,j,l=1}^{M} \frac{ ( \overline{R}_{i} \overline{R}_{j} \overline{R}_{l} )( \overline{p}_{i}+ \overline{p}_{j}+ \overline{p}_{l}) }{ (\overline{p}_{i}+\overline{p}_{j}) (\overline{p}_{j}+\overline{p}_{l})} e^{2i \{ (\overline{p}_{i}+ \overline{p}_{j}+ \overline{p}_{l})x + 4(\overline{p}_{i}^{3} + \overline{p}_{j}^{3} + \overline{p}_{l}^{3})t \}} + \cdots \label{part_pert} \end{equation} We observe from this result that both the unperturbed solution and the time-evolution of the non-interaction elements have a similar analytical structure. 
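As a consistency check (our addition, not in the original text): for a reflection coefficient with a single pole $p$ and residue $R$, the series (\ref{part_un}) becomes geometric-like and can be resummed in closed form. Writing $E = e^{2i(px+4p^{3}t)}$,

```latex
\begin{displaymath}
u_{un}(x,t) = 4Rp\,E + 4R^{2}E^{2} + \frac{3R^{3}}{p}\,E^{3} + \cdots
= 2\frac{\partial^{2}}{\partial x^{2}}
\ln \left( 1 - \frac{R}{2p}\,E \right),
\end{displaymath}
```

as can be verified term by term by expanding the logarithm. For $p=i\beta$ the exponential reduces to $E=e^{-2\beta(x-4\beta^{2}t)}$, and for a suitable choice of $R$ the right-hand side is the familiar $\mathrm{sech}^{2}$ soliton profile travelling with velocity $4\beta^{2}$.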
We remark that the solution $u_{un}(x,t)$ corresponds to a series expansion of the terms in Eq.(\ref{xxxx}) in which only the determinant ${\cal D}(x,t)$ is present. We can conclude that both the unperturbed solution and the perturbation evolve as an infinite series of non-dispersive solutions in time. However, the spectral components of the perturbation $u_{pe}(x,t)$ generally travel with a different velocity than $u_{un}(x,t)$. This was already observed in Fig.1 and Fig.2. In Fig.1c it is shown that a perturbation of a non-dispersive solution of the KdV-equation introduces a dispersion effect, and in Fig.1d it is shown that this results in a different propagation velocity of the perturbation. If the unperturbed solution is a localized non-dispersive function, the unperturbed solution and the perturbation travel with different velocities, depending on the positions of the poles of the perturbation. The velocity of every spectral component of the perturbation is determined by its corresponding pole position. In the following we examine the behavior of the interaction terms. \subsection{The time-evolution of the interaction terms} The function $F_{int}(x,k,t)$ corresponding to the interaction term contains contributions of both the unperturbed reflection coefficient $R(k,t)$ and the perturbation of the reflection coefficient $\overline{R}(k,t)$. In Fig.6, it consists of all the diagrams having both solid and open dots. 
An analytic expression of all these diagrams is given by the following equation: \begin{eqnarray} F_{int}(x,k,t) & = & \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} C(k,k^{\prime},t)\overline{C}(k^{\prime},k^{\prime\prime},t) dk^{\prime} dk^{\prime \prime} + \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \overline{C}(k,k^{\prime},t)C(k^{\prime},k^{\prime\prime},t)dk^{\prime} dk^{\prime \prime} \nonumber \\ &+& \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} C(k,k^{\prime},t)\overline{C}(k^{\prime},k^{\prime\prime},t) \overline{C}(k^{\prime\prime},k^{\prime\prime\prime},t) dk^{\prime} dk^{\prime \prime} dk^{\prime \prime \prime} + \cdots \label{f_m} \end{eqnarray} Since both $R(k,t)$ and $\overline{R}(k,t)$ are rational functions of the wave-number, we can carry out the integrations in Eq.(\ref{f_m}) analytically. If we proceed in a similar manner as for the non-interaction terms, we obtain the following result for the time-evolution of the interaction terms: \begin{eqnarray} u_{int}(x,t) &=& 8 \sum_{i=1}^{N} \sum_{j=1}^{M} R_{i} \overline{R}_{j} e^{2i \{ (p_{i}+\overline{p}_{j})x + 4( p_{i}^{3} + \overline{p}_{j}^{3} ) t \} } \nonumber \\ &+& 12 \sum_{i=1}^{N} \sum_{j,l=1}^{M} \frac{ R_{i} \overline{R}_{j} \overline{R}_{l} (p_{i} + \overline{p}_{j} + \overline{p}_{l} ) }{ (p_{i}+\overline{p}_{j})(\overline{p}_{j} + \overline{p}_{l}) } e^{ 2i \{ (p_{i} + \overline{p}_{j} + \overline{p}_{l} )x + 4 ( p_{i}^{3} + \overline{p}_{j}^{3} + \overline{p}_{l}^{3} )t \} } \label{ker_ser_m1} \\ &+& 12 \sum_{i,j=1}^{N} \sum_{l=1}^{M} \frac{ R_{i} R_{j} \overline{R}_{l} (p_{i} + p_{j} + \overline{p}_{l} ) }{ (p_{i}+p_{j})(p_{j} + \overline{p}_{l}) } e^{ 2i \{ (p_{i} + p_{j} + \overline{p}_{l} )x + 4 ( p_{i}^{3} + p_{j}^{3} + \overline{p}_{l}^{3} )t \} } + \cdots \nonumber \end{eqnarray} As anticipated in the previous section, it follows from Eq.(\ref{ker_ser_m1}) that the interaction term is large with respect to the unperturbed solution if the 
unperturbed problem has poles close to the origin in the complex plane. The analytical structure of Eq.(\ref{part_un}), Eq.(\ref{part_pert}) and Eq.(\ref{ker_ser_m1}) enables us to formulate an expression for the amplitude behavior of the perturbation $f(x,t)$. If the perturbation does not have poles close to the origin in the complex plane and if the perturbation is small with respect to the unperturbed problem at $t=0$, it follows from Eq.(\ref{part_un}) and Eq.(\ref{part_pert}) that the term $u_{pe}(x,t)$ remains small with respect to $u_{un}(x,t)$. However, as already concluded in the previous section, if the unperturbed solution has singularities, the structure of the interaction term (\ref{ker_ser_m1}) necessarily introduces singularities in the time-evolution of the perturbed problem. The physical meaning of $u_{int}(x,t)$ is visualized in Fig.4. The example given in Fig.4 only differs from the example given in Fig.3 by the positions of the poles and residues that characterize the unperturbed solution. This implies that the function $u_{pe}(x,t)$ in Fig.4 does not differ from that in Fig.3. However, due to the large amplitudes in Fig.4, $u_{pe}(x,t)$ is small with respect to $u_{un}(x,t)$. In the upper panel of Fig.4, the unperturbed singular solution is plotted. In the lower panel of Fig.4, the perturbed solution is plotted. As remarked in Sec.3, in this special case, the perturbation consists of two parts. One part propagates along a different characteristic than $u_{un}(x,t)$. The second part propagates along the characteristic of $u_{un}(x,t)$ and is responsible for large amplitude fluctuations on the characteristic of the unperturbed solution. It is easy to see that an interaction term as given by equation (\ref{ker_ser_m1}) introduces large amplitude fluctuations. 
If we assume that the unperturbed solution consists of one pole $(p_{i}=p)$ and one residue $R$, then we find at the characteristic $x= -4 p^{2} t$: \begin{equation} u_{int}(x = -4 p^{2}t,t) = 8 \sum_{j=1}^{M} R \overline{R}_{j} e^{ 8i \overline{p}_{j} \{ \overline{p}_{j}^{2} - p^{2} \} t } + \mbox{h.o.t.} \end{equation} From this result it follows that the interaction term $u_{int}(x,t)$ introduces fluctuations at the characteristic of the unperturbed solution. This result explains the behavior of the perturbed singular solution in Fig.4. It is observed in this figure that certain spectral components of the noise travel with a different velocity, and that at the characteristic of the unperturbed solution large amplitude fluctuations occur. This is the result of the presence of the interaction term $u_{int}(x,t)$, which has the same magnitude as the unperturbed solution $u_{un}(x,t)$. The interaction term is small with respect to the unperturbed solution if $|\overline{R}_{i}| \ll |p_{j}|$ for all possible $\overline{R}_{i}$ and $p_{j}$. If this condition is satisfied, the amplitude of the perturbation $f(x,t)$ is small with respect to the amplitude of the unperturbed solution. \section{Numerical example} In this section, the results obtained analytically in the previous sections are illustrated numerically in the case of a reflection coefficient consisting of an infinite number of poles and residues. This numerical example illustrates the stability of the soliton. In a discrete representation the KdV-equation takes the following form: \begin{equation} u_{n}^{i+1} = u_{n}^{i} + \Delta t \left\{ \frac{ u_{n+3}^{i} - 3 u_{n+1}^{i} + 3 u_{n-1}^{i} - u_{n-3}^{i} }{ (2 \Delta x)^{3} } - \frac{ 6 u_{n}^{i} u_{n+1}^{i} - 6 u_{n}^{i} u_{n-1}^{i} }{ 2 \Delta x } \right\} \label{kdv_dis} \end{equation} In Eq.(\ref{kdv_dis}), the solution of the KdV-equation at time $i \Delta t$ and position $n \Delta x $ is given by $u_{n}^{i}$. 
In Eq.(\ref{kdv_dis}), $\Delta t$ represents the time-step and $\Delta x$ represents the distance between two grid-points. The discrete KdV-equation (\ref{kdv_dis}) is solved numerically using the fourth-order Runge-Kutta scheme as given in ref.\cite{Numres}. In the numerical example that follows, the KdV-equation is solved on a line-segment of a total length of $8 \pi$. The initial condition given in Fig.7a consists of the standard soliton which is used in the previous sections ($\beta=1$ and $d=-1$). The soliton is contaminated with a noise-function having a maximum amplitude of $10 \%$ of the maximum amplitude of the soliton. In Fig.7b, the soliton after a propagation time of $1.1$ sec is given; it can be seen that the contamination has started propagating out of the soliton. This process continues, and at $t=2.2$ sec the contamination has virtually propagated out of the soliton. This experiment is a numerical confirmation of the results of the previous sections. It reflects the case that small perturbations propagate out of the unperturbed soliton so that the noise separates from the unperturbed solution. This is also the reason why we observe solitons in nature. In the example of Fig.7, the spectral contents of the perturbation are chosen in such a way that the unperturbed solution and the perturbation separate during the course of time. In a real physical situation, solitons are always contaminated with noise at time $t=0$. If we observe solitons in nature, it follows from this paper that the spectral components of the perturbation have a small amplitude at low frequencies. As a result, the soliton and the perturbation separate during the course of time and the soliton is ``born''. 
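The discretization can be sketched as follows (our illustration; the grid size, the periodic boundary, and the reading of the nonlinear term of Eq.(\ref{kdv_dis}) as the central difference $6u_{n}(u_{n+1}-u_{n-1})/(2\Delta x)$ are our assumptions):

```python
import numpy as np

def kdv_rhs(u, dx):
    """Right-hand side of the discrete KdV-equation on a periodic grid.

    np.roll(u, -m) shifts the array so that entry n holds u_{n+m}.
    """
    # wide-stencil third derivative (u_{n+3} - 3u_{n+1} + 3u_{n-1} - u_{n-3}) / (2 dx)^3
    uxxx = (np.roll(u, -3) - 3 * np.roll(u, -1)
            + 3 * np.roll(u, 1) - np.roll(u, 3)) / (2 * dx) ** 3
    # nonlinear term 6 u_n (u_{n+1} - u_{n-1}) / (2 dx)
    nonlin = 6 * u * (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)
    return uxxx - nonlin

def rk4_step(u, dt, dx):
    """One fourth-order Runge-Kutta step, as used in the text."""
    k1 = kdv_rhs(u, dx)
    k2 = kdv_rhs(u + 0.5 * dt * k1, dx)
    k3 = kdv_rhs(u + 0.5 * dt * k2, dx)
    k4 = kdv_rhs(u + dt * k3, dx)
    return u + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
```

On a line segment of length $8\pi$ one would take, e.g., $\Delta x = 8\pi/N$ for $N$ grid points, with a time step small enough for the stiff third-derivative term.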
If the spectral components of the noise function are of the same magnitude as the spectral components of the soliton, the noise propagates with a velocity similar to that of the soliton, and hence we can conclude that no soliton is created. \section{Conclusions} In this paper the stability of solutions of the KdV-equation is investigated. In particular, attention is paid to the stability of localized solutions. In Sec.2, the effects of nonlinearity and dispersion are investigated. For non-dispersive solutions of the KdV-equation these effects have to be in balance. It is pointed out in Sec.2 that in the case of small perturbations the balance between the dispersion and the nonlinearity is disturbed. As a result, an additional dispersion effect is introduced, which generates the ``force'' that separates the noise from the unperturbed solution. In Sec.3, inverse scattering techniques are used to formulate an analytical expression which describes the behavior of perturbations of solutions of the KdV-equation. From the examples of Sec.3, it can be concluded that, for soliton-like solutions of the KdV-equation, stability implies that noise either propagates out of the unperturbed solution or spreads out so that its amplitude is reduced. Furthermore, it is observed for singular solutions that although the noise partly propagates out of the unperturbed solution, large amplitude variations contaminate the perturbed solution. This behavior is examined in Sec.4, by expanding the perturbation into a series solution. It is concluded in Sec.4 that the behavior of the perturbation $f(x,t)$ of the KdV-equation is strongly correlated with the behavior of the unperturbed solution. If the unperturbed problem has singularities, the amplitude of the perturbation is of the same order of magnitude as the amplitude of the unperturbed problem. This result explains the large amplitude variations from which the perturbed singular problem suffers. 
Furthermore, in Sec.4, a criterion on the positions of the poles and residues of the perturbation is given for the soliton to exhibit stable behavior. Lastly, in Sec.5 a numerical example is given to illustrate the stability of the soliton. The perturbation used in Sec.5 has an infinite number of poles and residues. We observe that in the numerical case the same conclusions can be drawn as in the case of analytical perturbations: the soliton possesses stable behavior if the noise propagates out of the soliton. \subsection*{Acknowledgments} This research was supported by the Netherlands Organization for Scientific Research (N.W.O.) through the Pionier project PGS 76-144. This is Geodynamics Research Institute (Utrecht University) publication 96.xxx \newpage \section*{Appendix A: The inverse problem for rational reflection coefficients} \renewcommand{\theequation}{\mbox{A-\arabic{equation}}} \setcounter{equation}{0} In this appendix a brief formulation of the inverse problem for rational reflection coefficients, based upon the formulation of Sabatier \cite{Sabatier}, is given. For a detailed treatment of the mathematics we refer to the book of Chadan and Sabatier \cite{Chadan}. Our starting-point is the following equation: \begin{eqnarray} F_{\pm} (k,x,t) -1 & = & \frac {1}{2 \pi i} \lim_{ \epsilon \rightarrow 0^{+} } \int_{- \infty}^{\infty} \frac{ 1-T (k^{\prime},t) F_{\mp}(k^{\prime},x,t) } { k^{\prime} + k + i \epsilon } dk^{\prime} \nonumber \\ & + & \frac {1}{2 \pi i} \lim_{ \epsilon \rightarrow 0^{+} } \int_{- \infty}^{\infty} \frac{ R_{\pm} (k^{\prime},t) F_{\pm}(k^{\prime},x,t) exp[ \pm 2 ik^{\prime} x ] } { k^{\prime} + k + i \epsilon } dk^{\prime} \label{eq:invcau.a_a} \end{eqnarray} In Eq.(\ref{eq:invcau.a_a}) $F_{+}(k,x,t)$ is defined for $ x > 0 $, and $F_{-}(k,x,t)$ for $ x < 0 $, $k \in \mbox{{\rm C}$\! \! \! $\raisebox{.45ex}{\tiny{$|$}}$\; $}$. 
The function $F_{\pm}(k,x,t)$ is defined by: \begin{equation} F_{\pm}(k,x,t)=exp[ {\mp} ikx] f_{\pm}(k,x,t) \label{eq:fad.a_a} \end{equation} The Jost solutions $f_{\pm}(k,x,t)$ are those solutions of the Schr\"{o}dinger equation satisfying the following boundary conditions: \begin{eqnarray} & f_{+}(k,x,t): & \lim_{x \rightarrow \infty} e^{-ikx} f_{+}(k,x,t)=1 \label{eq:jostright.a_a} \\ & f_{-}(k,x,t): & \lim_{x \rightarrow - \infty} e^{ikx} f_{-}(k,x,t)=1 \label{eq:jostleft.a_a} \end{eqnarray} They satisfy the following integral equations: \begin{eqnarray} f_{+}(k,x,t) & = & e^{ikx} - \int_{x}^{\infty} \frac { \sin k(x-y) }{k} V(y,t) f_{+}(k,y,t)dy \label{eq:josrrint.a_a} \\ f_{-}(k,x,t) & = & e^{-ikx} - \int_{-\infty}^{x} \frac { \sin k(x-y) }{k} V(y,t) f_{-}(k,y,t)dy \label{eq:jostlint.a_a} \end{eqnarray} It is well known that the functions $f_{\pm}(k,x,t)$, and therefore also the functions $F_{\pm}(k,x,t)$, are holomorphic in $\mbox{{\rm C}$\! \! \! $\raisebox{.45ex}{\tiny{$|$}}$\; $}^{+}$ \cite{Chadan}. The potential $V(x,t)$ has to be in the Faddeev class $L_{1}^{1}$: \begin{equation} \int_{ - \infty}^{\infty} (1+|x|)|V(x,t)| dx < \infty \label{eq:fadclass.a_a} \end{equation} The scattering coefficients $R_{+}(k,t), R_{-}(k,t), T(k,t)$ are defined by the asymptotic behavior of the physical solutions of the Schr\"{o}dinger equation: \begin{equation} \psi_{1}(k,x,t) \sim \left\{ \begin{array}{ll} e^{ikx}+R_{+}(k,t)e^{-ikx} & x \rightarrow - \infty \\ T(k,t)e^{ikx} & x \rightarrow + \infty \end{array} \right. \label{eq:asympl.a_a} \end{equation} \begin{equation} \psi_{2}(k,x,t) \sim \left\{ \begin{array}{ll} T(k,t)e^{-ikx} & x \rightarrow - \infty \\ e^{-ikx}+R_{-}(k,t)e^{ikx} & x \rightarrow + \infty \end{array} \right. 
\label{eq:asympr.a_a} \end{equation} In the case of rational reflection coefficients they take the following form \cite{Sabatier}: \begin{eqnarray} R_{+}(k,t=0) & = & \frac{ P(-k) }{ \prod_{j=1}^{q} (\lambda_{j} - k ) } \prod_{ \mu_{i} \in M^{+} } \frac{ \mu_{i} + k }{ \mu_{i} - k } \prod_{ \lambda_{l} \in L^{+} } \frac{ \lambda_{l} + k } { \lambda_{l} - k } \label{eq:regenl.a_a} \\ T(k,t=0) & = & \frac { \prod_{i=1}^{q}(\mu_{i} + k ) } { \prod_{j=1}^{q}(\lambda_{j} + k ) } \label{eq:transgen.a_a} \\ R_{-}(k,t=0) & = & \frac{ P(k) }{ \prod_{j=1}^{q} (\lambda_{j} - k )} \prod_{ \mu_{i} \in M^{-} } \frac{ \mu_{i} - k }{ \mu_{i} + k } \prod_{ \lambda_{l} \in L^{-} } \frac{ \lambda_{l} - k } { \lambda_{l} + k } \label{eq:regenr.a_a} \end{eqnarray} Following Sabatier \cite{Sabatier}, the degree of the polynomial $P(k)$ has to be smaller than $q$. Further, Im $ \mu_{i} > 0 $ (except if $\mu_{i} = 0 $) and Im $ \lambda_{l} < 0$. The transmission coefficient $T(k,t)$ is supposed to be an irreducible fraction, and the sets $M^{+}$, $M^{-}$, $L^{+}$, $L^{-}$ contain numbers $ \neq 0$. If the potential is real, then the $\mu_{k}$ and $\lambda_{k}$ are either purely imaginary or occur in pairs together with $ - \mu_{k}^{\ast}$ and $ - \lambda_{k}^{\ast}$, respectively. It can be shown that $T(k)$ is meromorphic in $\mbox{{\rm C}$\! \! \! $\raisebox{.45ex}{\tiny{$|$}}$\; $}^{+}$ and that its poles, if any, lie on the imaginary $k$-axis \cite{Chadan}. If there are no bound states, $T(k,t)F_{\mp}(k,x,t)$ is holomorphic in $\mbox{{\rm C}$\! \! \! $\raisebox{.45ex}{\tiny{$|$}}$\; $}^{+}$ and the first integral of (\ref{eq:invcau.a_a}) is zero. If $T(k,t)F_{\mp}(k,x,t)$ is holomorphic in $\mbox{{\rm C}$\! \! \! $\raisebox{.45ex}{\tiny{$|$}}$\; $}^{+}$ and all the poles $p_{i}$ of $R_{+}(k,t)$ are simple, the integral (\ref{eq:invcau.a_a}) can be solved by contour integration in the upper-half plane. 
The result is: \begin{equation} F_{+} (k,x,t) -1 = \sum_{p_{j} \in {\cal P } } \frac{ R_{j} F_{+}(p_{j},x,t) e^{2i(p_{j}x -4 p_{j}^{3} t)} } { p_{j} + k } \label{eq:invrat.a_a} \end{equation} The time-evolution of the residues used in equation (\ref{eq:invrat.a_a}) is given in ref.\cite{Dorren0}. Equation (\ref{eq:invrat.a_a}) can be solved by letting $k$ take the values of the discrete poles $p \in {\cal P}$. We then obtain a linear set of algebraic equations that determines $F_{+}(p_{j},x,t)$ for all values of $j$. This set of equations can be solved by making use of Cramer's rule. After resubstituting the result in equation (\ref{eq:invrat.a_a}), using equation (\ref{kertran_c}) and putting $x=y$, we obtain: \begin{equation} K_{+}(x,x,t)= \frac{ {\cal D}_{+}^{\prime}(x,t) }{ {\cal D}_{+}(x,t) } \label{eq:kernel.a_a} \end{equation} where: \begin{equation} {\cal D}_{+}(x,t) = \det \{ \delta_{ij} - ( p_{i} + p_{j} )^{-1} R_{j} e^{ 2i(p_{j}x-4p_{j}^{3}t) } \} \label{eq:det.a_a} \end{equation} and ${\cal D}_{+}^{\prime}(x,t)$ is the derivative of ${\cal D}_{+}(x,t)$ with respect to $x$. \newpage
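The determinant representation (\ref{eq:kernel.a_a})--(\ref{eq:det.a_a}) can be evaluated numerically along the following lines (our sketch; the finite-difference evaluation of the logarithmic derivative and the step size $h$ are our choices):

```python
import numpy as np

def cal_D(x, t, poles, residues):
    """D_+(x,t) = det{ delta_ij - R_j exp[2i(p_j x - 4 p_j^3 t)] / (p_i + p_j) }."""
    p = np.asarray(poles, dtype=complex)
    R = np.asarray(residues, dtype=complex)
    E = R * np.exp(2j * (p * x - 4 * p ** 3 * t))
    M = np.eye(len(p), dtype=complex) - E[None, :] / (p[:, None] + p[None, :])
    return np.linalg.det(M)

def u_from_D(x, t, poles, residues, h=1e-5):
    """u = -2 d^2/dx^2 ln D_+, evaluated with a central difference of ln D_+."""
    lnD = lambda s: np.log(cal_D(s, t, poles, residues))
    return float(np.real(-2.0 * (lnD(x + h) - 2.0 * lnD(x) + lnD(x - h)) / h ** 2))
```

For a single pole $p=i\beta$ with residue $R=-2i\beta$ this reproduces a soliton of amplitude $-2\beta^{2}$; perturbations as in Sec.3 are obtained by appending extra poles and residues to the two lists.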
\section{Conclusions}\label{s:conclusions} We have shown that recent neural rendering techniques can successfully be applied to the \emph{analysis} of egocentric videos. We have done so by introducing NeuralDiff\xspace, a triple-stream neural renderer separating the background, foreground and actor via appropriate inductive biases. Together with other improvements such as smooth dynamics and more principled color mixing, NeuralDiff\xspace significantly outperforms baselines such as NeRF-W for our task. We have used NeuralDiff\xspace to identify objects that move (\ie are detached) in long and complex video sequences from EPIC-KITCHENS, even if the objects are small and move only little and sporadically. We believe that these results will inspire further research on the use of neural rendering for unsupervised image understanding. \noindent\textbf{Acknowledgments.} Andrea Vedaldi was partially sponsored by ERC 638009-IDIU. \section{EPIC-Diff\xspace benchmark}\label{s:dataset} Our goal is to identify any object which moves independently of the camera in a video sequence. We create a suitable benchmark for this task by augmenting the well-known EPIC-KITCHENS dataset~\cite{Damen2020RESCALING} with new annotations. EPIC-KITCHENS is an egocentric dataset, with 100 hours of recording, 20M frames, and 90'000 actions performed. \paragraph{Data selection.} We selected 10 video sequences, each lasting 14 minutes on average. Then we uniformly extracted 1'000 frames from each, and preprocessed them with COLMAP~\cite{schoenberger16sfm} to obtain the camera calibration and extrinsics (motion). We refer to each video sequence as a \textit{scene}. To select the scenes, we considered two constraints. First, the videos should contain a diversity of viewpoints and manipulated objects. Second, COLMAP must successfully reconstruct the scene and register at least 600 frames. We only retain the frames where COLMAP succeeds, which results in an average of 900 frames per scene. 
\paragraph{Data annotation.} Since our algorithms are unsupervised, we do not need to collect extensive data annotations. We uniformly hold out on average 56 frames for validation (for setting parameters) and 56 for testing. We annotated the latter with segmentation masks to assess background/foreground segmentation. These frames are \emph{not} used to train the model, so they can also be used to assess new-view synthesis. A frame annotation consists of a pixel-level binary segmentation mask, where the foreground contains any pixel that belongs to an object that is observed moving \emph{at any point in time} during the video. Note that the fact that an object is marked as foreground in a frame does not mean it moves in \emph{that} frame; it only means that it moves at least once in the scene. Based on this definition, the foreground mask covers both foreground objects and the actor. We obtain about 560 manual image-level segmentation masks. Examples of these masks are given in~\Cref{fig:data}. \begin{figure}[t!] \centering \includegraphics[width=0.24\linewidth]{img/example_masks/row1col1.png} \includegraphics[width=0.24\linewidth]{img/example_masks/row1col2.png} \includegraphics[width=0.24\linewidth]{img/example_masks/row2col1.png} \includegraphics[width=0.24\linewidth]{img/example_masks/row2col2.png} \vspace{-0.5em} \caption{Two frames from EPIC-Diff\xspace and their corresponding manual foreground/background segmentation masks.} \label{fig:data} \end{figure} \paragraph{Evaluation.} We evaluate the wide-baseline background subtraction task. For this, we use standard segmentation metrics: for each test frame, we sort all pixels based on their foreground score as defined above, and use the ground truth binary mask to compute average precision (AP). We then report mAP by averaging across frames and scenes. We also evaluate the new-view synthesis quality by tasking our model with synthesizing each of the test views, and we measure PSNR. 
Specifically, we report PSNR for different parts of the scene using the ground-truth segmentation masks: the whole image, the background and the foreground regions. \section{Experiments}\label{s:experiments} \subsection{Experimental settings}\label{sec:exp_protocol} \paragraph{Implementation details.} All reported experiments are based on a PyTorch implementation of NeRF~\cite{mildenhall20nerf:} extended with several NeRF-W components~\cite{martinbrualla2020nerfw}. The architecture combines several interconnected MLPs to model density, color and uncertainty of background, foreground and actor. Architectural details are given in the supplementary material. We reuse the same motion coefficients $z^f_k=z^a_k=z_k\in\mathbb{R}^{17}$ for both foreground and actor (see~\cref{sec:foreground_modelling}). Positional encoding is used to encode position, motion coefficients, and viewing direction using respectively 10, 10 and 4 frequencies. All models are trained using the Adam optimizer with an initial learning rate of $5 \times 10^{-4}$ using a cosine annealing schedule \cite{loshchilov2017sgdr}. For more details regarding this implementation and differences to NeRF and NeRF-W, see the supplementary material. \input{qualitative_reconstruction} \paragraph{Baselines.} We compare against the following baselines: (1) \textbf{NeRF}~\cite{mildenhall20nerf:} uses a single stream. As it has been designed to only capture the static part of a scene (background), we use the per-pixel prediction error as a pseudo foreground score (the larger the error, the more likely the pixel is to belong to the foreground, because NeRF cannot explain it with its static model). (2) \textbf{NeRF-B\/F\xspace} trains two parallel NeRF models, one for the background (B) and one for the foreground (F). The F stream is further conditioned on time by passing a positionally-encoded version of the time variable $t$ (frame number) as input, instead of the learnable dynamic frame encoding of our method. 
(3) \textbf{NeRF-W}~\cite{martinbrualla2020nerfw} also contains two interlinked background and foreground streams. Because it was initially proposed for image collections and not videos, by default the foreground parameters $z^f_t$ are learned independently for each frame.\footnote{In NeRF-W, the foreground is designed to capture the transient part of the scene that should be ignored (\eg persons occluding a landmark).} This design limits the applicability of NeRF-W, as it is unable to render novel views for frames not available in the training set. We redesign NeRF-W as a baseline for our task by adjusting the frame-specific code such that it produces the foreground using the code of its closest neighboring frame from the training set. More precisely, given a test frame $t$, we set the code for this test view $z^f_t := z^f_j$ where $j$ is the closest frame to $t$ out of all training frames $\mathcal{I}$ (\ie $j = \arg\!\min_{i \in \mathcal{I}} |i - t|$). We follow a similar strategy for the frame-specific code for appearance (responsible for capturing the photometric variations). \input{tab_epicdiff} \subsection{Results} \label{sec:exp_results} We compare our method and the different baselines on the EPIC-Diff\xspace benchmark presented in the previous section. \paragraph{Quantitative results.} Table~\ref{tab:epic} compares the different methods according to the four evaluation metrics of EPIC-Diff\xspace. This allows us to evaluate their capacity not only to discover and segment 3D objects but also to reconstruct dynamic scenes. We make the following observations. First, all flavors of our approach largely outperform the existing NeRF and NeRF-W. They also outperform the naive NeRF-B\/F\xspace which simply combines two NeRF models respectively modeling the foreground and the background.
Second, including temporal information as an input to neural rendering (either with a low-rank expansion of the trajectory states as in NeuralDiff\xspace or using a two-stream architecture that processes time transformed with positional encoding as in NeRF-B\/F\xspace) proves to be essential for improving the performance over NeRF and NeRF-W. Third, we observe that each of the proposed improvements, NeuralDiff+A\xspace and NeuralDiff+C\xspace, outperforms the vanilla version of our approach. The third stream for modeling the actor brings +2.4 mAP points to the segmentation task while the better color model brings +0.7 mAP points. They also slightly improve the frame reconstructions (PSNR scores). Finally, we see that combining the two proposed improvements (NeuralDiff+C+A\xspace) further improves new-view synthesis but not the foreground segmentation quality. Note however that the segmentation metric does not reflect the ability of the full model to separate the foreground into objects and actor (since they are merged together in the annotated masks). This ability is illustrated in~\Cref{fig:splash}. \input{qualitative_seg} \paragraph{Qualitative results.} First, we verify the quality of the views reconstructed by our method. \Cref{fig:quali_comparisons} compares three frames from three different scenes, with their reconstruction by NeRF, NeRF-W, NeuralDiff\xspace and NeuralDiff+C+A\xspace. As already observed~\cite{park2020nerfies,li21neural}, NeRF struggles with the dynamic components in the scene and produces blurry reconstructions. NeRF-W obtains sharper reconstructions of the static regions, but does not handle the dynamic regions well. NeuralDiff\xspace produces sharper results, especially for the moving objects. See for example the plates. Finally NeuralDiff+C+A\xspace captures more details, such as the arms in the first scene, or the spoon in the second. Next, \Cref{fig:best5masks_2col} illustrates success and failure cases for the segmentation task.
In the best cases for our method (top), NeRF-W produces noisy predictions which barely capture the plate, the pasta colander, and the actor's body. NeuralDiff\xspace improves over these results and captures all detachable objects and more body parts, but classifies part of the floor as foreground. NeuralDiff+A\xspace successfully identifies the floor as static and better predicts the shape of the actor's body (second and third rows). For the failure cases (bottom), while the segmentation mAP scores are generally low, NeuralDiff\xspace outperforms NeRF-W even more significantly. Specifically, NeRF-W fails to identify foreground objects. NeuralDiff\xspace improves over that but incorrectly classifies some of the background as foreground (see \eg rows 6 to 8 where the drying rack, static in this scene, is predicted as dynamic). Such errors are reduced by NeuralDiff+C\xspace, NeuralDiff+A\xspace and NeuralDiff+C+A\xspace. \paragraph{Comparison to motion segmentation.} \input{qualitative_motionseg} \input{tab_motionseg} Finally, we compare to MoSeg \cite{OB14b} as a traditional motion segmentation method and MotionGroup \cite{Yang21a} as a recent one. We report intersection over union (IoU) scores, as these methods produce binary masks. Quantitative and qualitative results are shown in \Cref{tab:motionseg} and \Cref{fig:quali_motionseg}. We see that none of these approaches is suited for the task that our method is designed for, as these techniques are unable to capture objects that move infrequently and suffer from the heavy occlusions of the actor. The method of \cite{OB14b} is particularly ill-suited for the task, so we did not evaluate it quantitatively. \section{Introduction}\label{s:introduction} Given a video capturing a complex 3D scene, we consider the problem of segmenting the scene objects that move independently of the camera. Motion is a powerful cue for discovering and learning visual objects in an unsupervised manner. 
In fact, `detachability', namely the possibility of moving a body independently of the rest of the scene, is used by Gibson~\cite{gibson86the-ecological} to \emph{define} objects. However, \emph{measuring} detachment given only raw visual observations as input is not an easy task. If the video is taken from the viewpoint of a static camera, the problem of separating the static background from the moving foreground reduces to background subtraction. However, classic background subtraction techniques are inapplicable if the camera undergoes a motion that induces significant parallax. We may call this more challenging scenario \emph{wide-baseline background subtraction}. To understand this concept, consider for example an egocentric video of a person cooking. This actor intervenes in the scene by moving (and transforming) objects. However, egomotion is the dominant effect: by comparison, objects move only sporadically, and in a way that is hardly distinguishable from the much larger apparent motion induced by the viewpoint change. Extracting the moving objects automatically is thus very difficult, and essentially impossible for traditional background subtraction techniques. One may use motion segmentation techniques to separate a scene into different motion components. However, these techniques generally require correspondences (\eg, optical flow), reason locally across a handful of frames, and usually avoid explicit 3D reasoning. In short, they are difficult to apply to video sequences such as the ones in~\Cref{fig:splash}, comprising many small rigid objects that move only occasionally throughout a long sequence. In this paper, we propose to leverage recent progress in neural rendering techniques~\cite{mildenhall20nerf:} to develop a motion \emph{analysis} tool to achieve the desired segmentation.
We build on the ability of neural rendering to reconstruct accurately the appearance of a rigid 3D scene under a variable viewpoint, without requiring dense correspondences. Given the reconstruction of the background, it is then possible to measure the more subtle appearance `differences' induced by the objects that move \emph{independently} of the camera. We further note that the 3D objects manipulated in the video also contain significant structure. Specifically, they move in `bursts', changing their state as they are manipulated, but remaining otherwise rigidly attached to the background. We thus extend the neural renderer to also reconstruct the object appearance using a slowly-varying time encoding for them. In fact, we go one step further and introduce a third neural rendering stream that captures the actor observing and moving the objects. The intuition is that the actor moves continually, in a way that occludes the scene, with a motion linked to the camera and not to the scene (being the observer), leading to significantly different dynamics compared to the background and foreground objects. Our technical contribution is thus a three-stream neural rendering architecture, where the streams model respectively i) the static background, ii) the dynamic foreground objects, and iii) the actor. Those are then composed to explain the video as a whole (see \Cref{fig:architecture}). We design the streams differently, in order to incorporate inductive biases that match the statistics of each layer (background, foreground, actor). These inductive biases depart from previous neural rendering models because of the structure of the foreground, which is composed of several objects being manipulated at different intervals, and of the actor, a deformable body attached to the camera and not to the background. The resulting analysis-via-synthesis method shows that neural rendering techniques are not only useful for synthesis, but also for analysis. 
In particular, we are the first to demonstrate the effectiveness of these techniques in interpreting challenging egocentric videos, providing cues for the extraction of detached objects in scenes with a complex 3D structure and dynamics. We focus our empirical evaluation on egocentric videos because, with the emergence of AR, they are becoming increasingly popular and have the advantage of showing the interaction of actors with their environment. We expect this kind of video to provide an enormous wealth of information for computer vision, particularly with the recent introduction of Ego4D~\cite{grauman21ego4d}. Such videos are also particularly challenging to process, providing an excellent test scenario for this class of algorithms. For evaluation, we augment the EPIC-KITCHENS dataset~\cite{damen18epic} and manually segment all objects that move at some point during the scene, \ie that are thus detached. With these annotations, we can assess neural renderers not only in terms of new view synthesis quality, but also, and more to the point, in terms of their ability to separate videos into their various dynamic components. With this, we also define a new \emph{benchmark} for measuring progress in the challenging task of dynamic object segmentation in complex videos, thus inviting further research in the area. Using this data, which we call EPIC-Diff\xspace, we show that our model outperforms the direct application of existing neural rendering approaches such as NeRF~\cite{mildenhall20nerf:} or NeRF-W~\cite{martinbrualla2020nerfw} in the wide-baseline background subtraction problem. \section{Method} \label{sec:method} Given a video sequence $x$, we wish to extract a corresponding mask $m$ that separates the foreground objects from the background. We define as background the part of the scene that remains static throughout the entire video, generating an apparent motion only due to the camera viewpoint change.
We define as foreground any object that moves independently of the camera in at least one frame. Traditional background subtraction techniques solve a similar problem by predicting the appearance of each video frame \emph{as if} the foreground objects were removed; given this prediction, the foreground objects can be segmented by taking the difference between the measured and predicted images. Predicting the appearance of the background under an occluding foreground object is relatively easy for a static camera, where correspondences with other video frames in which the object is not present can be trivially established. However, the prediction is much more challenging if the viewpoint is also allowed to change. In order to solve this problem, we build on recent neural rendering techniques such as NeRF~\cite{mildenhall20nerf:} that can predict effectively the appearance of a static object, in our case the rigid background, under a variable viewpoint. In fact, we suggest that the foreground objects, which change over time, can \emph{also} be captured via a (distinct) neural rendering function, further promoting separation of background and foreground. Next, we first discuss neural rendering in general, and then introduce our model. \subsection{Neural rendering} \label{sec:basic_nerf} We base our method on NeRF~\cite{mildenhall20nerf:}, which we summarize here. A video $x$ is a collection $(x_t)_{t\in\{0,\dots,T-1\}}$ of $T$ video frames, each of which is an RGB image $x_t \in \mathbb{R}^{3\times H\times W}$. The video frames are a function $x_t=h(B,F_t,g_t)$ of the static background $B$, the variable foreground $F_t$ and a moving camera $g_t \in SE(3)$, where $SE(3)$ is the group of Euclidean transformations. The motion $g_t$ is assumed to be known, estimated using an off-the-shelf SfM algorithm such as COLMAP~\cite{schoenberger16sfm,schoenberger16mvs}.
The background and foreground components comprise the shape and reflectance of the 3D surfaces in the scene as well as the illumination. Rather than attempting to invert $h$ to recover $B$ and $F_t$, which amounts to inverse rendering, neural rendering \emph{learns} the mapping $h$ directly, as a neural network $f$, $h(B,F_t,g_t)\approx f(g_t,t)$, providing time and viewpoint to reconstruct the corresponding video frame $x_t$. By a careful design of the function $f$, the learning process can induce a factorization of viewpoint $g$ and time $t$, thus generalizing $f$ to new (unobserved) viewpoints. NeRF additionally assumes that the scene is static, meaning that the variable foreground $F_t$ is empty, so the function simplifies to $f(g_t)$. The model $f(g_t)$ is further endowed with a specific structure, which is key to successful learning~\cite{zhang20nerf:}. Specifically, the color $x_{ut} \in \mathbb{R}^3$ of pixel $u \in \Omega = \{0,\dots,H-1\}\times \{0,\dots,W-1\}$ is obtained by a volumetric sampling process that simulates ray casting. One `shoots' a ray $ r_k = \ell_k K^{-1}(u) $ along the viewing direction of pixel $u$, where $ K : \mathbb{R}^2\times \{1\} \rightarrow \Omega $ is the camera calibration function and $\ell_0 < \cdots < \ell_{M+1} \in \mathbb{R}_+$ are the sampled depths. The pixel color is obtained by averaging the color of the 3D points $g_t r_k$ along the ray, weighed by the probability that a photon emanates from the point and reaches the camera. A neural network, \newcommand{\operatorname{MLP}}{\operatorname{MLP}} $ (\sigma_k^b,c_k^b) = \operatorname{MLP}^b(g_t r_k, d_t), $ estimates the density $\sigma_k^b \in \mathbb{R}_+$ and the color $c_k^b\in\mathbb{R}^3$ of each point $g_t r_k$, where the superscript $b$ denotes the fact that the quantities refer to the background `material' and $d_t$ is the unit-norm viewing direction. 
The probability that a photon is transmitted while traveling through the ray segment $(r_k, r_{k+1})$ is defined to be $ T^b_k = e^{-\delta_k \sigma^b_k} $ where the quantity $\delta_k = |r_{k+1}-r_k|$ is the length of the segment. This definition is consistent with the fact that the probability of transmission across several segments is the product of the individual transmission probabilities. We can thus write the color of pixel $u$ as: \begin{equation} x_{ut} = f_u(g_t) = \sum_{k=0}^M v_k\, (1 - T^b_k) \, c^b_k, ~~~~ v_k = \prod_{q=0}^{k-1} T^b_q. \end{equation} The network is trained by minimizing the reconstruction error $\|x_t - f(g_t)\|$ over all frames, thus fitting a single video at a time. \subsection{Dynamic components}\label{sec:foreground_modelling} The method discussed above assumes that the scene is rigid. In our case, reconstructing the scene is more complex due to the variable foreground $F_t$. In order to capture this dependency, on top of the background MLP ($\operatorname{MLP}^b$), we introduce a foreground-specific MLP, $ (\sigma^f_k,c^f_k,\beta^f_k) = \operatorname{MLP}^f(g_t r_k, z^f_t). $ It produces a `foreground' occupancy $\sigma^f$ and color $c^f$. Additionally, it predicts an uncertainty score $\beta^f_k$ whose role is clarified in \Cref{sec:uncertainty}. We also introduce a dependency on a frame-specific code $z^f_t\in\mathbb{R}^D$, capturing the properties of the foreground that change over time. The color $x_{ut}$ of a pixel $u$ is obtained by composition of multiple materials $\mathcal{S}$ (\eg the background and the foreground, so $\mathcal{S}=\{b,f\}$): \begin{multline} x_{ut} = f_u(g_t,z_t) = \sum_{k=0}^M v_k \left( \sum_{p\in \mathcal{S}} w^p(T_k) c^p_k \right) , \\ \text{where}\qquad v_k = \prod_{q=0}^{k-1} \prod_{p \in \mathcal{S}} T_q^p \label{e:comp} \end{multline} The factor $v_k$ requires a photon to be transmitted from the camera to point $r_k$ through the different materials (hence the transmission probabilities are multiplied).
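For concreteness, the single-material volumetric compositing above can be sketched in a few lines of plain Python (an illustrative sketch with made-up densities and a per-sample loop, not the paper's batched PyTorch implementation):

```python
import math

def render_pixel(sigmas, colors, deltas):
    """Composite colors along a ray: x = sum_k v_k (1 - T_k) c_k, with
    T_k = exp(-delta_k * sigma_k) and v_k = prod_{q<k} T_q."""
    v = 1.0                       # transmittance accumulated up to sample k
    pixel = [0.0, 0.0, 0.0]
    for sigma, color, delta in zip(sigmas, colors, deltas):
        T = math.exp(-delta * sigma)   # transmission through this segment
        for i in range(3):
            pixel[i] += v * (1.0 - T) * color[i]
        v *= T
    return pixel

# A nearly opaque red sample followed by a blue one: the red sample absorbs
# most photons first, so the rendered pixel is dominated by red.
pixel = render_pixel([5.0, 5.0], [[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]], [1.0, 1.0])
```

The running product `v` plays the role of $v_k$: once a dense sample has been passed, later samples contribute little, which is exactly the occlusion behavior the model relies on.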
The weights $w^p(T_k)$ mix the colors of the materials proportionally to their density. Following NeRF-W~\cite{martinbrualla2020nerfw}, we can simply set \begin{equation}\label{e:mix1} w^p(T_k) = 1 - T_k^p \in [0,1]. \end{equation} \paragraph{Smooth dynamics.} The model (MLP) weights and the frame-specific parameters $z_t$ can be optimized by minimizing the loss across all input frames $$ \min_{f,z_1,\dots,z_T} \frac{1}{T|\Omega|}\sum_{t=1}^T\sum_{u\in\Omega} \| x_{ut} - f_u(g_t,z_t) \|^2 $$ in an auto-decoder fashion. However, the foreground, while dynamic, does not change arbitrarily from frame to frame. In particular, most foreground objects are in most frames rigidly attached to the background. Because of this, the dependency on independent frame-specific codes $z_t$ makes little sense; we replace it with a low-rank expansion of the trajectory of states, setting $z_t = B(t) \Gamma $ where $B(t) \in \mathbb{R}^{P}$ is a simple handcrafted (fixed) basis and $\Gamma \in \mathbb{R}^{P\times D}$ are motion coefficients such that $P \ll T$. We take in particular $B(t) = [1, t, \sin 2\pi t, \cos 2\pi t, \sin 4 \pi t, \cos 4\pi t,\dots ]$ to be a deterministic harmonic coding of time (meaning that $z_t$ varies slowly over time). The method described above with a static part (as defined in \Cref{sec:basic_nerf}) and a dynamic part describing the objects ($\mathcal{S} = \{b,f\}$) is the basic version of our proposed approach. We refer to it as \texttt{NeuralDiff\xspace} in what follows. \paragraph{Improved geometry: capturing the actor.} In egocentric videos, we further distinguish the foreground objects manipulated by the actor/observer, which moves sporadically, from the actor's body, which moves continually. To model the latter, we consider a third MLP tasked with capturing parts of the actor's body that appear in the frames.
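The harmonic time coding $z_t = B(t)\Gamma$ described above can be sketched as follows (a pure-Python sketch; the toy $\Gamma$ and dimensions are illustrative, not the paper's $D=17$ setting, and $t$ is assumed normalized to $[0,1]$):

```python
import math

def harmonic_basis(t, n_harmonics):
    """B(t) = [1, t, sin(2*pi*t), cos(2*pi*t), sin(4*pi*t), cos(4*pi*t), ...]."""
    basis = [1.0, t]
    for h in range(1, n_harmonics + 1):
        basis.append(math.sin(2.0 * math.pi * h * t))
        basis.append(math.cos(2.0 * math.pi * h * t))
    return basis

def frame_code(t, Gamma):
    """z_t = B(t) @ Gamma, with Gamma a P x D coefficient matrix and P << T,
    so the per-frame code varies slowly and smoothly over time."""
    B = harmonic_basis(t, (len(Gamma) - 2) // 2)
    return [sum(b * g for b, g in zip(B, col)) for col in zip(*Gamma)]

# Toy example: P = 4 basis functions, D = 2 code dimensions.
Gamma = [[1.0, 0.0], [0.0, 1.0], [0.0, 0.0], [0.0, 0.0]]
z = frame_code(0.5, Gamma)  # B(0.5) = [1, 0.5, sin(pi), cos(pi)]
```

Only the $P \times D$ coefficients $\Gamma$ are learned; the basis is fixed, which is what enforces the low-rank, slowly varying trajectory of codes.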
Formally, the actor MLP is similar to the foreground MLP\@: $ (\sigma^a_k,c^a_k,\beta^a_k) = \operatorname{MLP}^a(r_k, z^a_t), $ with the key difference that the 3D point $r_k$ is expressed relative to the camera (vs.\ $g_t r_k$, which is expressed relative to the world). This is due to the fact that the camera is anchored to the actor's body, which therefore shows a reduced variability in the reference frame of the camera. By contrast, the background is invariant if expressed in the reference frame of the world; the same is true for the foreground objects when they are not manipulated, which is true most of the time. This inductive bias helps factor the different materials. Here $\mathcal{S} = \{b,f,a\}$. We refer to this new flavor as \texttt{NeuralDiff+A\xspace}. \paragraph{Improved color mixing.} \Cref{e:mix1}, used in prior work to mix colors from different model components, cannot be justified probabilistically as it amounts to summing non-exclusive probabilities (nothing in the model prevents two or more materials from having non-zero density at a given point). A principled mixing model is obtained by decomposing the segment $\delta_k$ into $Pn$ sub-segments, alternating between the $P$ different materials ($P=|\mathcal{S}|$, \eg $P=3$ if the background, foreground and actor are considered). In the limit, we can show that the probability that the photon is absorbed in a subsegment of material $p$ is given by: \begin{equation}\label{e:mix2} w^p(T_k) = \frac{\sigma^p_k}{\sum_{q=1}^P \sigma^q_k}\left(1 - \prod_{q=1}^P T^q_k\right). \end{equation} The second factor in parentheses is, evidently, the probability that the photon is absorbed by any of the materials. The first factor, which involves the densities rather than the probabilities, is the probability that a given material $p$ is responsible for the absorption.
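To make \cref{e:mix2} concrete, the following sketch evaluates the weights and checks them numerically against the finite-$n$ subsegment construction that underlies the limit (a pure-Python sketch with illustrative densities; the supplementary material gives the actual derivation):

```python
import math

def mixing_weights(sigmas, delta):
    """Eq. (4): w^p = (sigma^p / sum_q sigma^q) * (1 - prod_q T^q),
    with T^q = exp(-sigma^q * delta)."""
    total = sum(sigmas)
    joint_T = math.exp(-delta * total)   # prod_q T^q = exp(-delta * sum_q sigma^q)
    return [(s / total) * (1.0 - joint_T) for s in sigmas]

def absorb_prob_finite_n(p, sigmas, delta, n):
    """Finite-n subsegment construction (for the first material of the cycle):
    a geometric series over n cycles of P alternating materials."""
    T_p = math.exp(-sigmas[p] * delta)
    T_bar = math.exp(-sum(sigmas) * delta)
    return (1.0 - T_p ** (1.0 / n)) / (1.0 - T_bar ** (1.0 / n)) * (1.0 - T_bar)

# Toy densities for background, foreground and actor over one segment.
sigmas, delta = [0.7, 1.3, 0.5], 0.4
w = mixing_weights(sigmas, delta)
# sum(w) equals the total absorption probability 1 - prod_q T^q, and the
# finite-n construction converges to w[0] as n grows.
```

Unlike the simple mixing of \cref{e:mix1}, these weights form a proper (sub-)probability distribution over the materials, which is what makes the later mask rendering interpretable as a soft material assignment.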
Note that, differently from~\cref{e:mix1}, this definition yields $ \sum_{p=1}^P w^p(T_k) = 1 - \prod_{q=1}^P T^q_k = 1 - v_{k+1} / v_k $ which is consistent with the definition of the transmission probability $v_k$. A proof of~\cref{e:mix2} is available in the supplementary material. The NeuralDiff\xspace model can thus be improved by taking into account this more principled way of producing pixel colors in the reconstruction. We refer to this improved version as \texttt{NeuralDiff+C\xspace}. Note that the two proposed improvements are complementary and we refer to the method enhanced by both as \texttt{NeuralDiff+C+A\xspace}. \subsection{Uncertainty and regularization} \label{sec:uncertainty} \paragraph{Uncertainty.} The MLPs also predict scalars $\beta^p_k \geq 0$ (where, in practice, $\beta^b_k=0$ for the background). These are used to express the uncertainty of the color associated with each 3D point $r_k$ for each material $p$ as pseudo-standard deviations (StDs). Following~\cite{martinbrualla2020nerfw}, the StD of the rendered color $x_{ut}$ is just the sum of the StDs $\beta_{ut} = \sum_{p}\beta^p_{ut}$, where $\beta^p_{ut}$ is obtained via \cref{e:comp} by `rendering' the StDs $\beta^p_k$ of each 3D material point (it suffices to substitute $\beta^p_k$ for $c^p_k$ in~\cref{e:comp}). The StD $\beta_{ut}$ is used in a Gaussian observation loss as a form of self-calibrated aleatoric uncertainty~\cite{novotny17learning,kendall17what}: \begin{equation}\label{e:loss1} \mathcal{L}_\text{prob}(f,z_t|x_t,g_t,u) = \frac{\| x_{ut} - f_u(g_t, z_t) \|^2}{2\beta_{ut}^2} + \log \beta_{ut}^2. \end{equation} \paragraph{Sparsity.} We follow NeRF-W~\cite{martinbrualla2020nerfw} and further penalize the occupancy of the foreground and actor components using an $L^1$ penalty: $$ \mathcal{L}_\text{sparse}(f,z_t|x_t,g_t,u) = \sum_{p=1}^P\sum_{k=0}^M \sigma^p_k. $$ This is the $L^1$ norm of the ray occupancies, which encourages the foreground occupancy to be sparse.
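As an illustration, the two per-pixel terms just defined can be sketched as follows (a pure-Python sketch with illustrative inputs; the real implementation operates on batches of rays in PyTorch):

```python
import math

def prob_loss(pred_rgb, target_rgb, beta):
    """Gaussian observation loss of eq. (5): ||x - f||^2 / (2 beta^2) + log(beta^2).
    Large beta down-weights the reconstruction error but pays a log penalty."""
    err = sum((p - t) ** 2 for p, t in zip(pred_rgb, target_rgb))
    return err / (2.0 * beta ** 2) + math.log(beta ** 2)

def sparsity_loss(sigmas_per_material):
    """L1 penalty on the (non-negative) ray occupancies of the dynamic components."""
    return sum(sum(sigmas) for sigmas in sigmas_per_material)

# One pixel with a small reconstruction error and moderate rendered uncertainty,
# plus toy foreground and actor densities along its ray.
loss = prob_loss([0.50, 0.40, 0.30], [0.52, 0.40, 0.30], beta=0.1)
sparse = sparsity_loss([[0.1, 0.0, 0.3], [0.2, 0.2, 0.0]])
```

The trade-off in `prob_loss` is what lets the model self-calibrate: raising $\beta_{ut}$ explains away hard pixels, but only at the cost of the $\log \beta_{ut}^2$ term.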
\paragraph{Training loss.} Finally, the model is trained using the loss $ \mathcal{L} = \mathcal{L}_\text{prob} + \lambda \mathcal{L}_\text{sparse} $ where $\lambda > 0$ is a weight set to 0.01. \subsection{From NeuralDiff\xspace to a scene segmentation} Our approach is trained for a reconstruction task, but our primary goal is wide-baseline background subtraction, so we need to extract masks that assign a given pixel to the background, foreground, or actor layers. We do so by constructing three indicator channels and we use \cref{e:comp} to `render' them as a mask $m_{ut} \in \mathbb{R}^3$ --- in other words, we simply associate pseudo-colors $(1,0,0)$, $(0,1,0)$ and $(0,0,1)$ to all 3D points of background, foreground and actor, respectively, and use~\cref{e:comp} to obtain $m_{ut}$. \section{Related work}\label{s:related} \begin{figure*}[t!] \centering \includegraphics[width=0.85\linewidth]{img/method.pdf} \vspace{-0.2cm} \caption{\textbf{Overview of NeuralDiff\xspace}. Given only the camera viewpoint $g_t$ and a frame specific code $z_t^f$ (learned latent variable), our three stream architecture learns to predict the color value of pixel $x_{ut}$ by combining information coming from the static model of the background, and from two dynamic components, one for the foreground objects, and one for the actor. The parameters of this model are learned using a probabilistic loss $\mathcal{L}$.} \label{fig:architecture} \end{figure*} Although unique, our research problem relates to several existing research directions that we mention below. \paragraph{Background subtraction.} Background subtraction techniques have been used for detecting moving objects in video sequences (see survey~\cite{bouwmans2014traditional}). They typically assume that only the foreground moves, and many methods rely on a background initialization step which assumes that the first few frames give a good estimate of the background's appearance. 
The background model can be updated during the sequence, but these methods fail when the background varies as much as in the egocentric videos that we consider~\cite{kalsotra2019comprehensive}. \paragraph{Motion segmentation.} Closely related to background subtraction, motion segmentation is the more general task of decomposing a video into individually moving objects \cite{08motionseg, 21motionseg, OB14b}. These techniques generally rely on optical flow, which is subject to ambiguities \cite{cvpr20motionseg}, and reason only locally, without a 3D representation. Occlusions are one of the main challenges \cite{cvpr21motionseg}. Many methods fail when dynamic objects temporarily remain static, even for a few frames \cite{08motionseg, 21motionseg}. These issues prevent applying standard motion segmentation methods to egocentric videos (where objects rarely move and the actor heavily occludes the scene). \paragraph{Discovering and segmenting objects in videos.} This long-standing problem is related to background subtraction and motion segmentation \cite{bideau2016s, papazoglou2013fast, tokmakov2017learning, jain2017fusionseg, xie2019object}. For example, one can use a probabilistic model that acts upon optical flow to segment moving objects from the background~\cite{bideau2016s}. In~\cite{sundaram2010dense, brox2010object}, pixel trajectories and spectral clustering are combined to produce motion segments. The method of~\cite{matzen14scene} uses image collections to reconstruct urban scenes and to discover their dynamic elements such as billboards or street art, by clustering 3D points in space and time. Recent work revisits classical motion segmentation techniques from a data-driven perspective~\cite{Yang21a, wang2019zero, xie2019object}, \eg using physical motion cues to learn 3D representations~\cite{tenenbaum00a-global} or learning a factored scene representation with neural rendering~\cite{yuan2021star}.
Our work departs from these approaches and learns a holistic representation capable of handling occlusions and sporadically moving objects, without the need for multi-view videos as in~\cite{yuan2021star}. We approach the problem on complex real-world data as opposed to clean synthetic data as in \cite{tenenbaum00a-global}. \paragraph{Neural rendering.} Neural rendering is a way of synthesizing novel views using a neural architecture and classic volume rendering techniques. It was introduced as Neural Radiance Fields (NeRF) in \cite{mildenhall20nerf:} for static scenes. By default, it poorly handles dynamic scenes and occlusions. It is extended in \cite{martinbrualla2020nerfw} as NeRF-W to the case of unconstrained photo collections, to deal with transient objects occluding the static scene. Another research direction is studied in \cite{xie2021fignerf, stelzner2021decomposing} to model the 3D geometry of one or several static objects. Recent work has also focused on modeling dynamic scenes, mostly with monocular videos as input \cite{park2020nerfies,li21neural, pumarola2021d, chen2021animatable, gao2021dynamic, tretschk2021nonrigid}. Most approaches~\cite{park2020nerfies,pumarola2021d,chen2021animatable,peng2021animatable} combine a canonical model of the object with a deformation network, or warp space~\cite{tretschk2021nonrigid}, still starting from a canonical volume. Closer to our work, \cite{li21neural} and \cite{gao2021dynamic} combine a static NeRF model and a dynamic one. Yet, none of those methods explicitly tackles the segmentation of 3D objects in such long and challenging video sequences. \section*{Supplemental Material} \section{Proof of equation (4)} Section~\ref{sec:method} describes the problem of mixing colors from different model components, and introduces a more principled color mixing model to resolve this issue.
This model is obtained by decomposing the segment $\delta_k$ into $Pn$ sub-segments, alternating between the $P$ different materials ($P=|\mathcal{S}|$, \eg $P = 3$ if the background, foreground and actor are considered). In the limit, the probability that the photon is absorbed in a subsegment of material $p$ is given by: \begin{equation}\label{e:mixsuppl} w^p(T_k) = \frac{\sigma^p_k}{\sum_{q=1}^P \sigma^q_k}\left(1 - \prod_{q=1}^P T^q_k\right). \end{equation} and we propose a proof for this claim below. \begin{proof}[Proof of~\Cref{e:mixsuppl}] Decompose segment $\delta_k$ into $Pn$ subsegments, alternating between materials $p \in \{1,\dots,P\}$ in a cyclic fashion. The probability that material $p=1$ is responsible for the absorption is given by: $$ \sum_{i=0}^{n-1} (\bar T_k^{\frac{1}{n}})^i (1-(T_k^p)^{\frac{1}{n}}) = \frac {1-(T_k^p)^{\frac{1}{n}}} {1-\bar T_k^{\frac{1}{n}}} (1-\bar T_k), $$ where $ \bar T_k = \prod_{q=1}^P T_k^q $ and $ T_k^p = e^{-\sigma^p_k\delta_k}. $ In the limit for $n\rightarrow \infty$, this expression reduces to $ ({\ln T_k^p}/{\ln \bar T_k}) (1-\bar T_k) $ which is the same as~\cref{e:mix2}. \end{proof} \section{Implementation details} \Cref{sec:exp_protocol} contains some implementation details about the architecture. We provide further details below. \paragraph{Architecture.} As outlined in \Cref{sec:method}, we make use of a three-stream architecture to separate the background, foreground and actor. Similar to \cite{martinbrualla2020nerfw}, we implemented $\operatorname{MLP}^b$ and $\operatorname{MLP}^f$ such that they share the weights of their initial layers. Let us define the set of shared layers as $\operatorname{MLP}^s$. Given a ray $r_k$, the shared MLP encodes the ray as $\rho_k$ and produces the background density with $(\rho_k, \sigma_k^b) = \operatorname{MLP}^s(g_t r_k)$.
As with the static model in \cite{martinbrualla2020nerfw}, $\operatorname{MLP}^b$ further processes $\rho_k$ and outputs the corresponding background color $c_k^b = \operatorname{MLP}^b(\rho_k, d_t, y_k^f)$ where $d_t = q_t / \|q_t\|_2$, with $q_t = g_t K^{-1} u$, is the unit-normalized viewing direction and $y_k^f$ is the frame-specific appearance code (equivalent to the latent appearance embedding in NeRF-W\footnote{While the EPIC-KITCHENS dataset has less variability in terms of photometric variation than the unconstrained photo collections used in NeRF-W~\cite{martinbrualla2020nerfw}, we observed that encoding appearance still results in better reconstructions.}). The foreground MLP takes the encoded ray and the frame-specific code $z_t^f$ as input and produces the density, color and uncertainty score as in $(\sigma^f_k,c^f_k,\beta^f_k) = \operatorname{MLP}^f(\rho_k, z^f_t)$. The actor MLP does not rely on $\operatorname{MLP}^s$, meaning it does not share weights with $\operatorname{MLP}^b$ and $\operatorname{MLP}^f$. Similar to $\operatorname{MLP}^f$, it also uses the frame-specific code as input. It takes as input a ray expressed relative to the camera and the frame code, and produces, analogously to $\operatorname{MLP}^f$, the density, color and uncertainty score with $(\sigma^a_k,c^a_k,\beta^a_k) = \operatorname{MLP}^a(r_k, z^a_t)$ with $z^a_t = z^f_t$. As in NeRF-W, we add a minimum importance $\beta_\text{min}$ (a hyperparameter) to the sum of the pseudo-standard deviations, resulting in $\beta_{ut} = \sum_{p}\beta^p_{ut} + \beta_\text{min}$, where $p$ is the material, and $u$ a pixel from frame $t$. The layers of the shared MLP, $\operatorname{MLP}^s$, consist of 256 units; the other MLPs consist of 128 units. We respectively use 8, 1, 4, and 4 layers for $\operatorname{MLP}^s$, $\operatorname{MLP}^b$, $\operatorname{MLP}^f$, and $\operatorname{MLP}^a$.
\paragraph{Sampling points efficiently.} Analogously to NeRF~\cite{mildenhall20nerf:} and NeRF-W~\cite{martinbrualla2020nerfw}, we improve the sampling described in \Cref{sec:basic_nerf} by simultaneously optimizing two volumetric radiance fields, a coarse one, $f^\text{coarse}$, and a fine one, $f^\text{fine}$. Using both models enables us to sample less frequently in free space and in occluded regions that do not contribute to the rendered image, by ``filtering'' these regions out with the coarse network. We achieve this by using the learned density of the coarse model to bias the sampling of the points along a ray for the fine model. Similarly to \cite{martinbrualla2020nerfw}, we apply the proposed architectural extensions (such as the actor model) only to the fine model. Therefore, in practice the training loss in \Cref{sec:uncertainty} (which describes the training of the fine radiance field $f^\text{fine}$) is extended with \begin{equation*} \mathcal{L}^\text{coarse}(f^\text{coarse}|x_t,g_t,u) = \sum_{u}{\| x_{ut} - f_u^\text{coarse}(g_t) \|^2}, \end{equation*} where pixel $u \in \Omega = \{0,\dots,H-1\}\times \{0,\dots,W-1\}$. Our final loss is then $\mathcal{L} = \mathcal{L}_\text{prob} + \lambda \mathcal{L}_\text{sparse} + \mathcal{L}^\text{coarse}$. \paragraph{Similarities and differences with NeRF and NeRF-W.} The rendering mechanism is virtually the same as the one described in NeRF \cite{mildenhall20nerf:}, with the exception that we use a batch size of 1048 rays, and sample 64 points along each ray in the coarse volume and 64 additional points in the fine volume. Similarly to \cite{martinbrualla2020nerfw}, we set $\beta_\text{min} = 0.03$, and apply positional encoding on the inputs.
In comparison, we use $256$ units for the shared MLP (referred to as $\operatorname{MLP}_{{\theta}_1}$ in \cite[p.4]{martinbrualla2020nerfw}) and do not omit the color and density from the foreground model (see \cite[p.5]{martinbrualla2020nerfw}), as we use them for the final rendering. \paragraph{Training.} All models are trained separately for each scene for 10 epochs on 1 GPU, taking approximately 24 hours with an NVIDIA Tesla P40. We downscale the images extracted from the videos of the EPIC Kitchens dataset to a resolution of $128 \times 228$. The different hyper-parameters are selected on the validation set via grid search to improve the photometric reconstruction. \begin{table}[t!] \centering \begin{tabular}{ccccccc} \toprule ID & KID & Train & Val. & Test & $\text{Ann.}$ & Duration \\ \midrule 01 & P01\_01 & 752 & 54 & 54 & 54 & 27 min \\ 02 & P03\_04 & 794 & 57 & 57 & 56 & 28 min \\ 03 & P04\_01 & 797 & 57 & 57 & 57 & 19 min \\ 04 & P05\_01 & 808 & 58 & 58 & 58 & 06 min \\ 05 & P06\_03 & 867 & 62 & 62 & 61 & 11 min \\ 06 & P08\_01 & 656 & 47 & 47 & 47 & 10 min \\ 07 & P09\_02 & 757 & 54 & 55 & 54 & 06 min \\ 08 & P13\_03 & 689 & 49 & 50 & 50 & 06 min \\ 09 & P16\_01 & 838 & 60 & 60 & 60 & 20 min \\ 10 & P21\_01 & 867 & 62 & 62 & 61 & 11 min \\ \midrule - & All & 7825 & 560 & 562 & 558 & 144 min \\ \bottomrule \end{tabular} \caption{\textbf{Summary of EPIC-Diff\xspace.} Number of frames per scene for training, evaluation, and testing. We also report the number of annotations for the test frames. KID refers to the video ID from the EPIC Kitchens dataset. Note that some frames do not show any detachable objects or the actor, hence resulting in some cases in fewer annotations than test set frames.}\label{tab:summary_scene_epicdiff} \end{table} \paragraph{Testing.} The weights of a model used for representing one complete scene take about 17MB of disk space. We render the views of one entire scene in about one hour with an NVIDIA GeForce RTX 2080.
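The positional encoding applied to the inputs is the one introduced in NeRF~\cite{mildenhall20nerf:}: each coordinate is mapped to sinusoids at geometrically growing frequencies. A minimal numpy sketch (the number of frequencies is a hyperparameter; NeRF uses 10 for positions and 4 for viewing directions):

```python
import numpy as np

# NeRF positional encoding: x -> [sin(2^0 pi x), cos(2^0 pi x), ...,
#                                 sin(2^{L-1} pi x), cos(2^{L-1} pi x)]
def positional_encoding(x, num_freqs):
    x = np.asarray(x, dtype=np.float64)
    freqs = 2.0 ** np.arange(num_freqs) * np.pi           # 2^k * pi
    angles = x[..., None] * freqs                         # (..., D, L)
    enc = np.stack([np.sin(angles), np.cos(angles)], -1)  # (..., D, L, 2)
    return enc.reshape(*x.shape[:-1], -1)                 # (..., D*L*2)

p = np.array([0.1, -0.4, 0.7])           # a 3D sample point
print(positional_encoding(p, 10).shape)  # (60,)
```

A 3D point with 10 frequencies thus becomes a 60-dimensional feature before entering the MLPs.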
\section{Details about the EPIC-Diff\xspace benchmark} \begin{figure}[t!] \centering \includegraphics[width=0.48\linewidth]{img/colmap_im.jpg} \includegraphics[width=0.48\linewidth]{img/colmap_mask.png} \caption{\textbf{Image and mask as input for COLMAP.} The mask approximates the location of the hands over all frames. COLMAP will use the features found in the white area and will ignore the masked ones (black).} \label{fig:colmap_mask} \end{figure} \begin{figure}[t!] \centering \includegraphics[width=1.\linewidth]{img/pointclouds/p03_04.png} \includegraphics[width=1.\linewidth]{img/pointclouds/p05_01.png} \includegraphics[width=1.\linewidth]{img/pointclouds/p06_03.png} \includegraphics[width=1.\linewidth]{img/pointclouds/p13_03.png} \includegraphics[width=1.\linewidth]{img/pointclouds/p21_01.png} \caption{\textbf{Sparse reconstructions of 5 scenes.} We visualize point clouds for 5 scenes, with example images showing different views of each scene with their corresponding extracted features. The red dots in the point clouds represent estimated extrinsic camera parameters. Note that the bottom middle part of each image does not contain features as explained in \Cref{fig:colmap_mask}.} \label{fig:pointclouds} \end{figure} This section extends \Cref{s:dataset} from the main paper. We give additional details about the way we created the EPIC-Diff\xspace benchmark out of the EPIC-KITCHENS dataset~\cite{Damen2020RESCALING}. As a first pre-filtering step, we ignored videos where the person is mostly washing dishes, and selected, through visual inspection, the scenes that show a high variety of viewpoints and manipulated objects. We then extracted the frames from these videos and sampled every $i$-th frame (linearly), where $i$ is chosen such that we have 1000 frames per scene.
In the next step we used COLMAP's feature extractor with SIFT to calculate keypoints in the most reliable frame regions (\ie, we excluded regions that are likely to correspond to the actor and not the scene, using a fixed mask, as shown in \Cref{fig:colmap_mask}). We then match the features using vocabulary-tree feature matching and create a sparse 3D reconstruction. This results in a subset of the 1000 frames that get registered. We further filter the scenes by constraining them to have a minimum of 600 registered frames (out of the 1000 initial ones). We split the remaining frames of each scene into a train, a validation and a test set, where we select every $16$-th frame for validation and every other $16$-th frame for testing. The remaining frames are then used for training. For the foreground segmentation task, we annotate all images from the test set with the VGG Image Annotator \cite{dutta2016via, dutta2019vgg} for 10 scenes from the EPIC-KITCHENS dataset \cite{Damen2020RESCALING}, filtered with the procedure described above. For this task, we define the foreground as all moving objects \textit{and} the actor. A summary of the dataset statistics can be found in \Cref{tab:summary_scene_epicdiff}. This table shows the number of frames per train, validation and test set, with the latter corresponding to the annotated frames. For EPIC-Diff\xspace, we extracted a total number of about 9000 frames from about 140 minutes of video material for the 10 scenes, and annotated 558 of these frames. Sparse reconstructions for 5 out of the 10 scenes can be found in \Cref{fig:pointclouds}. \section{Segmentation precision-recall curves} We evaluate the capacity of the methods to discover and segment 3D objects with a precision-recall curve in \Cref{fig:pre_rec_curve}.
We calculate each curve by taking the prediction scores and ground truth masks for all the pixels of all test frames from all scenes, and then calculating the precision and recall with varying thresholds for the prediction scores. This leads to observations similar to the ones we made for \Cref{tab:epic} in \Cref{sec:exp_results}. More precisely, we observe that, for any recall, NeRF-W exhibits a lower precision than any flavor of our approach. We can also see that NeuralDiff\xspace has a lower precision than NeuralDiff+A\xspace, NeuralDiff+C\xspace, and NeuralDiff+C+A\xspace, indicating that NeuralDiff\xspace benefits from the actor model and color normalization (individually and combined). \begin{figure} \begin{center} \includegraphics[width=0.95\linewidth]{img/pr_curve.png} \end{center} \caption{\textbf{Precision-Recall Curve calculated over all scenes.} We combine the predicted and target masks from all scenes and calculate the average precision and recall over all the pixels.} \label{fig:pre_rec_curve} \end{figure}
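The per-threshold computation described above can be sketched in a few lines (a simplified numpy version; libraries such as scikit-learn provide an equivalent `precision_recall_curve`):

```python
import numpy as np

# Pool prediction scores and ground-truth masks over all test frames of all
# scenes, then sweep a threshold over the scores.
def pr_curve(scores, labels, thresholds):
    scores, labels = np.asarray(scores), np.asarray(labels).astype(bool)
    precisions, recalls = [], []
    for t in thresholds:
        pred = scores >= t
        tp = np.logical_and(pred, labels).sum()
        precisions.append(tp / max(pred.sum(), 1))  # guard against 0 predictions
        recalls.append(tp / max(labels.sum(), 1))
    return np.array(precisions), np.array(recalls)

# Toy example: a perfect ranking yields precision 1 at full recall.
scores = np.array([0.9, 0.8, 0.2, 0.1])
labels = np.array([1, 1, 0, 0])
prec, rec = pr_curve(scores, labels, thresholds=[0.5])
```

In practice the scores are the per-pixel foreground probabilities and the labels the annotated masks, flattened and concatenated over all test frames.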
\subsection{$h\to ff$} We parametrize NLO corrections to the $h\to ff$ decay rate as \begin{align} \Gamma(h\to ff)=\Gamma_0(h\to ff)[1+\Delta_{\rm weak}^f+\Delta_{\rm QED}^f+\Delta_{\rm QCD}^f], \end{align} where the weak and QED corrections are decomposed in order to treat the infrared divergence in the QED correction separately. The decay rate at the LO is given by \begin{align} \Gamma_0(h\to ff) =\frac{\beta_f}{16\pi m_h}|{\cal M}^{\rm tree}_{hff}|^2 =\frac{N_c m_h \beta_f^3}{8\pi}|\Gamma^{S,{\rm tree}}_{hff}|^2 =\frac{N_c G_F m_f^2 m_h \beta_f^3}{4\sqrt{2}\pi}\kappa_f^2 \label{G0_hff} \end{align} with $N_c=3\ (1)$ for quarks (leptons) and $\beta_f=(1-4m_f^2/m_h^2)^{1/2}$. The weak correction is expressed as~\cite{Kniehl:1991ze,Dabelstein:1991ky} \begin{align} \Delta_{\rm weak}^f= -\Delta r +\frac{1}{|{\cal M}^{\rm tree}_{hff}|^2} 2{\rm Re}({\cal M}^{\rm tree}_{hff}{\cal M}^{\rm loop*}_{hff}), \label{weak_hff} \end{align} where $\Delta r$ is the weak corrections to the muon decay (see Appendix B in Ref.~\cite{Kanemura:2015fra} for the explicit formula). The one-loop contribution ${\cal M}^{\rm loop}_{hff}$ to the amplitudes is written in terms of the one-loop part of the renormalized form factors defined above as \begin{align} {\cal M}^{\rm loop}_{hff}&=m_h\beta_f\Big[ \Gamma_{hff}^{S,{\rm loop}} +m_f(\Gamma^{V_1,{\rm loop}}_{hff}-\Gamma^{V_2,{\rm loop}}_{hff}) +(m_h^2-m_f^2)\Gamma_{hff}^{T,{\rm loop}} \Big], \end{align} where we fix the momenta of the form factors as $p_1^2 = p_2^2 = m_f^2$ and $q^2 = m_h^2$. We note that with these momenta $\Gamma^{V_1,{\rm loop}}_{hff}=-\Gamma^{V_2,{\rm loop}}_{hff}$ and the form factors proportional to $\gamma_5$ do not contribute to the decay rate. 
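As a numerical illustration of the LO width in Eq.~\eqref{G0_hff}, the expression can be evaluated directly; the sketch below takes $\kappa_f = 1$ (the SM limit) and an assumed $\overline{\rm MS}$ bottom mass of about $2.8$~GeV at $\mu=m_h$ (the precise running mass is an input assumption here):

```python
import math

# LO width Gamma_0(h -> f fbar), Eq. (G0_hff), with kappa_f = 1 (SM limit).
G_F = 1.1663787e-5   # Fermi constant [GeV^-2]
m_h = 125.0          # Higgs mass [GeV]

def gamma0_hff(m_f, N_c=3, kappa_f=1.0):
    beta = math.sqrt(1.0 - 4.0 * m_f**2 / m_h**2)
    return N_c * G_F * m_f**2 * m_h * beta**3 / (4.0 * math.sqrt(2.0) * math.pi) * kappa_f**2

# Assumed MS-bar bottom mass at mu = m_h (~2.8 GeV)
width_bb = gamma0_hff(2.8)   # ~1.9e-3 GeV, i.e. about 1.9 MeV
```

The resulting LO width of roughly 2 MeV is then dressed with the weak, QED and QCD correction factors defined above.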
The QED correction comes from the photon-exchange loop and the corresponding counterterm as well as the real photon emission, and is common to the SM~\cite{Bardin:1990zj,Kniehl:1991ze,Dabelstein:1991ky}, which is given in the on-shell renormalization scheme by \begin{align} \Delta_{\rm QED}^f= \frac{\alpha_{\rm EM}}{\pi}Q_f^2\Big[\frac{9}{4}+\frac{3}{2}\log\Big(\frac{m_f^2}{m_h^2}\Big)\Big] \end{align} in the limit of $m_f^2\ll m_h^2$,% \footnote{We keep $\beta_f$ in the numerical computation; see Ref.~\cite{Dabelstein:1991ky} for the explicit formula.} where $Q_f$ is the electric charge of the fermions. For the Higgs boson decay into a quark pair, the QCD correction is dominant over the EW correction and known up to order $\alpha_s^4$ in the SM; see, e.g. a recent paper~\cite{Mihaila:2015lwa}, where the mixed QCD-EW (${\cal O}(\alpha_s\alpha_{\rm EM})$) corrections are also presented. Similar to the QED correction, the QCD correction is common between the SM and the extended Higgs models. In our calculation, we include the one-loop QCD correction% \footnote{As long as we focus on the deviations from the SM predictions, one-loop corrections are sufficient.} in the $\overline{\rm MS}$ renormalization scheme \begin{align} \Delta_{\rm QCD}^f= \frac{\bar\alpha_s(\mu)}{\pi}C_F\Big[\frac{17}{4}+\frac{3}{2}\log\Big(\frac{\mu^2}{m_h^2}\Big)\Big], \end{align} where $C_F=4/3$ and we choose $\mu^2=m_h^2$. The pole mass for the quarks in Eq.~\eqref{G0_hff} should be replaced by the running mass $\bar m_f(\mu)$, and $\bar m_f$ and $\bar\alpha_s$ are defined at the scale $m_h$. We also adopt the $\overline{\rm MS}$ scheme for the QED correction to the decay rates into a quark pair, which is obtained from $\Delta_{\rm QCD}^f$ by the replacement $\bar\alpha_s\to Q_f^2\alpha_{\rm EM}$ and $C_F\to1$. \subsection{$h\to VV^*$} \label{sec:hvv} The 125~GeV Higgs boson can also decay into a pair of weak bosons with one on-shell $V$ and the other off-shell $V^*$. 
In this work we evaluate the NLO EW and QCD corrections to the three-body decay $h\to Vff$ to treat the off-shell gauge boson properly. For $V=Z$, similar to $h\to ff$, we parametrize the one-loop corrections to the $h\to Zff$ decay rate as \begin{align} \Gamma(h\to Zff)=\Gamma_0(h\to Zff)[1+\Delta_{\rm weak}^Z+\Delta_{\rm QED}^Z+\Delta_{\rm QCD}^Z]. \end{align} At the LO the $h\to Zff$ decay only happens via the off-shell $Z$ boson, $h\to ZZ^*\to Zff$, and the decay rate is given by~\cite{Pocsik:1980ta,Rizzo:1980gz,Keung:1984hn} \begin{align} \Gamma_0(h\to Zff)= \frac{1}{2m_h}\int d\Phi_3\,|{\cal M}^{\rm tree}_{hZff}|^2 =\frac{G_F^2m_Z^4m_h}{24\pi^3}(v_f^2+a_f^2)F\Big(\frac{m_Z^2}{m_h^2}\Big)\kappa_V^2, \end{align} where $v_f=I_f/2-Q_f\sin\theta_{W}^2$ and $a_f=I_f/2$ are the vector and axial-vector couplings for the weak neutral-current interactions with the isospin of the fermions $I_f$ and the weak mixing angle $\theta_W$. The fermion mass is ignored in this expression. The function $F(x)$ is given by \begin{align} F(x)=\frac{3(1-8x+20x^2)}{(4x-1)^{1/2}}\arccos\Big(\frac{3x-1}{2x^{3/2}}\Big) -\frac{1-x}{2x}(2-13x+47x^2)-\frac{3}{2}(1-6x+4x^2)\log x. \end{align} \begin{figure} \center \includegraphics[width=0.9\textwidth]{diagram}\\[-2mm] (a)\hspace*{26.25mm}(b)\hspace*{26.25mm}(c)\hspace*{26.25mm}(d)\hspace*{26.25mm}(e) \caption{Schematic diagrams of each loop contribution to $h\to Vff$.} \label{fig:diagram} \end{figure} The weak correction is given by~\cite{Kniehl:1991hk,Denner:1992bc} \begin{align} \Delta_{\rm weak}^Z= -2\Delta r-\Delta Z_{\rm wf} +\int d\Phi_3\frac{1}{|{\cal M}^{\rm tree}_{hZff}|^2} 2{\rm Re}({\cal M}^{\rm tree}_{hZff}{\cal M}^{\rm loop*}_{hZff}), \end{align} where $\Delta r$ is the same as in Eq.~\eqref{weak_hff} and $\Delta Z_{\rm wf}$ is due to the $Z$-boson wavefunction renormalization. 
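The phase-space function $F(x)$ above can be implemented and evaluated at the physical point $x = m_Z^2/m_h^2$ (a sketch; the expression is valid for $1/4 < x < 1$, i.e., an off-shell $Z$):

```python
import math

# Phase-space function F(x) for h -> Z f fbar, valid for 1/4 < x < 1.
def F(x):
    return (3.0 * (1 - 8*x + 20*x**2) / math.sqrt(4*x - 1)
            * math.acos((3*x - 1) / (2 * x**1.5))
            - (1 - x) / (2*x) * (2 - 13*x + 47*x**2)
            - 1.5 * (1 - 6*x + 4*x**2) * math.log(x))

x = (91.19 / 125.0) ** 2   # m_Z^2 / m_h^2, about 0.53
f_val = F(x)               # small positive value
```

The strong cancellation among the three terms makes $F(x)$ numerically small at the physical point, reflecting the phase-space suppression of the off-shell decay.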
The one-loop contribution ${\cal M}^{\rm loop}_{hZff}$ to the amplitudes is the sum of the loop contributions whose diagrams are schematically depicted in order in Fig.~\ref{fig:diagram}\,(a)-(e): \begin{align} {\cal M}^{\rm loop}_{hZff}={\cal M}^{\rm loop}_{a}+{\cal M}^{\rm loop}_{b} +{\cal M}^{\rm loop}_{c}+{\cal M}^{\rm loop}_{d}+{\cal M}^{\rm loop}_{e}. \end{align} The amplitude ${\cal M}^{\rm loop}_{a}$ represented by Fig.~\ref{fig:diagram}\,(a) contains the one-loop part of the renormalized vertices $\Gamma^{i,{\rm loop}}_{hVV}$, while ${\cal M}^{\rm loop}_{b}$, represented by Fig.~\ref{fig:diagram}\,(b), contains the renormalized one-loop gauge-boson 2-point functions. Additional scalars can contribute in these loops. For $m_f=0$, the extended Higgs sectors do not affect the $Zff$ vertex in the diagram (c) in Fig.~\ref{fig:diagram}, but modify the $hff$ vertex in the diagram (d) and the box diagram (e) only by the mixing effects as the $\kappa_V$ factor. The renormalized vertices $\Gamma^{i,{\rm loop}}_{hVV}$ and $\Gamma^{i,{\rm loop}}_{hff}$ as well as the renormalized 2-point functions can be evaluated by using {\sc H-COUPv1} \!\!, while we newly implemented the renormalized $Zff$ vertices in the SM and the $hZff$ box contributions in the {\sc H-COUP} framework. Numerical computations were done in two ways independently, with the trace technique and with the helicity amplitude calculation, and confirmed to reproduce the previous works in the SM~\cite{Kniehl:1991hk,Denner:1992bc} and to agree with each other in extended Higgs models, which will be reported elsewhere with all the explicit formulae~\cite{Kanemura:2018xxx}. The QED and QCD corrections come from photon and gluon exchanges between the fermions in the final state as well as real emissions. They are taken into account by~\cite{Kniehl:1993ay} \begin{align} \Delta_{\rm QED}^Z+\Delta_{\rm QCD}^Z=\frac{3}{4\pi}(Q_f^2\alpha_{\rm EM} + C_F\bar\alpha_s).
\end{align} The decay rate $\Gamma(h\to Wff')$ can be calculated in analogy to the case of $V=Z$ discussed above. The differences are that the $W$ bosons can emit a photon and that the weak and QED corrections can no longer be separated. The study of this decay mode will be presented in Ref.~\cite{Kanemura:2018xxx}. We note that, with this way of calculating the Higgs decays into weak-boson pairs, we cannot take into account the interference for specific final states in the Higgs decays into four fermions such as $h\to e^+e^-e^+e^-$, $e^+\nu_ee^-\bar\nu_e$, etc., whose contributions to the total width in the SM are very small, about 0.2\%~\cite{Bredenstein:2006rh,deFlorian:2016spz}. \subsection{$h\to gg/\gamma\gamma/Z\gamma$} {\sc H-COUPv1} can provide the decay rates of the loop-induced processes $h\to gg/\gamma\gamma/Z\gamma$ in extended Higgs models at LO. Their explicit analytic formulae in the HSM and the THDMs are given in Refs.~\cite{Kanemura:2015fra} and \cite{Kanemura:2015mxa}, respectively. The QCD corrections are taken into account in the heavy top-quark limit; see, e.g., Ref.~\cite{Djouadi:2005gi}. \section{Introduction} Except for a resonance with a mass of 125~GeV, the LHC experiments have not observed any other states predicted by new physics (NP) beyond the standard model (SM), and have instead pushed the limits on them to higher and higher mass scales by increasing the collision energy and accumulating more data. On the other hand, all the measurements for the discovered particle agree with the predictions of the SM Higgs boson within the experimental uncertainty so far~\cite{Khachatryan:2016vau}.
This situation strongly motivates a thorough study of the Higgs sector at the LHC Run-II as well as in future high-precision experimental programs such as the high-luminosity LHC (HL-LHC)~\cite{ATLAS:2013hta,CMS:2013xfa} and future lepton colliders such as the international linear collider (ILC)~\cite{Baer:2013cma,Asai:2017pwp,Fujii:2017vwa}, the lepton collision option of the future circular collider (FCC-ee)~\cite{Gomez-Ceballos:2013zzn}, the circular electron positron collider (CEPC)~\cite{CEPC-SPPCStudyGroup:2015csa} and the compact linear collider (CLIC)~\cite{CLIC:2016zwp}. Although the SM takes the minimal setup for the Higgs sector, i.e., a scalar isospin doublet field, there is no compelling reason for the Higgs sector to be minimal, and indeed many NP models predict additional scalar multiplets. For instance, the $B-L$ extended SM~\cite{Khalil:2006yi} contains an additional singlet scalar field, while the minimal supersymmetric SM has another doublet scalar field. Furthermore, some of the scenarios for radiative seesaw models predict extended Higgs sectors~\cite{Zee:1980ai,Zee:1985rj,Zee:1985id,Babu:1988ki,Krauss:2002px,Ma:2006km,Aoki:2008av}, and many of the scenarios of electroweak baryogenesis also require non-minimal structures for the Higgs sector~\cite{Turok:1990in,Bochkarev:1990gb,Nelson:1991ab}. Therefore, by experimentally probing the structure of the Higgs sector, we may be able to discriminate among models of NP. There are two important consequences in models with a non-minimal Higgs sector. One is the existence of additional scalar states, and the other is deviations of the interactions for the SM-like Higgs boson from the SM prediction due to the mixing with other neutral scalars as well as loop effects of additional scalars.
The current LHC Higgs program clearly targets them~\cite{deFlorian:2016spz}, and has been searching for extra scalars in direct searches~\cite{CMS:2016qbe,Aaboud:2017cxo,Aaboud:2017rel,Aaboud:2017gsl} and looking for deviations in the Higgs coupling measurements~\cite{Khachatryan:2016vau,Aad:2015pla,CMS:2016qbe}, which already put constraints on the parameter space of extended Higgs models. In this work, we focus on the latter aspect; i.e., deviations from the SM via precise measurements at the LHC as well as in the future experiments, where more accurate theoretical predictions are required not only in the SM but also in NP models. In particular, once a deviation from the SM is detected, accurate NP predictions including loop effects become crucial to distinguish one model from the others. As the simplest extensions of the SM Higgs sector with the $\rho$ parameter equal to unity at the tree level, we consider the Higgs singlet model (HSM) and the two Higgs doublet model (THDM). In particular, we study the model with a real singlet scalar as the HSM, and four types of the THDM with softly broken $Z_2$ symmetry to avoid dangerous flavor changing neutral currents. In Ref.~\cite{Kanemura:2017gbi}, a full set of the numerical code ({\sc H-COUPv1} \!\!) to evaluate the renormalized gauge-invariant vertex functions for the SM-like Higgs boson has been released for various extended Higgs models: the HSM, four types of the THDM and the inert doublet model (IDM), based on Refs.~\cite{Kanemura:2004mg,Kanemura:2014dja,Kanemura:2015mxa,Kanemura:2015fra,Kanemura:2016sos,Kanemura:2016lkz,Kanemura:2017wtm}.
However, in order to compare theory predictions for the Higgs boson couplings, including higher-order calculations, with experimental data, we should directly evaluate physical quantities such as production cross sections and decay branching ratios instead of, for example, the renormalized vertex functions or the $\kappa$ parameters (the scale factors) for the Higgs boson couplings, which might not be well defined beyond the leading order (LO). In this letter, in order to identify the Higgs sector using future precision data, we calculate the partial decay widths of the discovered Higgs boson ($h$) into fermion pairs and gauge-boson pairs with one-loop electroweak (EW) and one-loop QCD corrections in the HSM and four types of the THDM. So far, a few next-to-leading order (NLO) EW radiative corrections in the THDMs have been performed for $h\to bb/\tau\tau$~\cite{Arhrib:2003ph,Arhrib:2016snv} and for $h\to ZZ^*\to Z\ell\ell$~\cite{Castilla-Valdez:2015sng}. NLO EW and QCD calculations for $h\to WW/ZZ\to 4\,$fermions in the HSM~\cite{Altenkamp:2018bcs} as well as in the THDMs~\cite{Altenkamp:2017ldc,Altenkamp:2017kxk} were also reported. However, there has been no study that comprehensively computes all the decay rates with higher-order corrections in various extended Higgs models. We employ and extend {\sc H-COUPv1} to systematically compute all the decay rates of the SM-like Higgs boson in each model at the one-loop level. In order to study the deviations from the SM predictions, we evaluate the ratios of the decay rates in extended Higgs models to those in the SM. We then discuss how the NP model can be identified by the pattern of the deviations. Furthermore, we study how we can obtain information on the mass of the additional Higgs bosons by detecting deviations in the decay rates of the Higgs boson.
From the current LHC data, the Higgs couplings to the weak gauge bosons have been measured with a precision of about 10\%, and those to $\tau\tau$ and $\gamma\gamma$ ($bb$) are about 20 (50)\% at 1$\sigma$~\cite{Khachatryan:2016vau}. The accuracy for the measurements of the Higgs couplings is expected to be improved in future experiments. For instance, the uncertainties will be down to 9\%, 11\%, 4\% and 4.2\% for the Higgs couplings to $\tau\tau$, $bb$, $ZZ$ and $\gamma\gamma$ at the HL-LHC with the integrated luminosity of 3 ab$^{-1}$~\cite{ATL-PHYS-PUB-2014-016}. Furthermore, those will be 1.9\%, 1.8\%, 2.4\%, 0.38\% and 11\% for the Higgs couplings to $\tau\tau$, $bb$, $cc$, $ZZ$ and $\gamma\gamma$ at the ILC with the integrated luminosity of 2 ab$^{-1}$ at $\sqrt{s}=250$~GeV~\cite{Fujii:2017vwa}. Therefore, the radiative corrections to the Higgs boson decay rates discussed in this work are indispensable for comparison with these precise measurements in future experiments. This letter is organized as follows. In Sec.~\ref{sec:models} we introduce the extended Higgs models which we consider, namely the HSM and the THDM. We describe the framework of the calculation for the Higgs decay rates in Sec.~\ref{sec:hwidth}, and show numerical results in Sec.~\ref{sec:results}. The conclusion is given in Sec.~\ref{sec:summary}. \section{Decay widths of the SM-like Higgs boson at one loop}\label{sec:hwidth} In this section we describe the framework to calculate the Higgs decay rates at one loop. \subsection{Higgs boson vertices} We begin with definitions of the renormalized Higgs boson vertices $hff$ and $hVV$, which are the main ingredients of the one-loop calculation for the Higgs decay rates and can be evaluated by {\sc H-COUPv1} \cite{Kanemura:2017gbi}.
We apply the improved on-shell renormalization scheme adopted in Refs.~\cite{Fleischer:1980ub,Krause:2016oke,Kanemura:2017wtm}, where gauge dependence appearing in the renormalization of mixing angles among scalar fields is removed by using the pinch technique. We calculate the one-loop amplitudes in the Feynman gauge. The renormalized $hff$ and $hVV$ vertices can be decomposed by the following form factors: \begin{align} \hat{\Gamma}_{h ff}(p_1^2,p_2^2,q^2)&= \hat{\Gamma}_{h ff}^S+\gamma_5 \hat{\Gamma}_{h ff}^P+p_1\hspace{-3.5mm}/\hspace{2mm}\hat{\Gamma}_{h ff}^{V_1} +p_2\hspace{-3.5mm}/\hspace{2mm}\hat{\Gamma}_{h ff}^{V_2}\notag\\ &\quad +p_1\hspace{-3.5mm}/\hspace{2mm}\gamma_5 \hat{\Gamma}_{h ff}^{A_1} +p_2\hspace{-3.5mm}/\hspace{2mm}\gamma_5\hat{\Gamma}_{h ff}^{A_2} +p_1\hspace{-3.5mm}/\hspace{2mm}p_2\hspace{-3.5mm}/\hspace{2mm}\hat{\Gamma}_{h ff}^{T} +p_1\hspace{-3.5mm}/\hspace{2mm}p_2\hspace{-3.5mm}/\hspace{2mm}\gamma_5\hat{\Gamma}_{h ff}^{PT},\\ \hat{\Gamma}_{h VV}^{\mu\nu}(p_1^2,p_2^2,q^2)&=g^{\mu\nu}\hat{\Gamma}_{h VV}^1 +\frac{p_1^\nu p_2^\mu}{m_V^2}\hat{\Gamma}_{h VV}^2 +i\epsilon^{\mu\nu\rho\sigma}\frac{p_{1\rho} p_{2\sigma}}{m_V^2}\hat{\Gamma}_{h VV}^3, \label{form_factor} \end{align} where $p_i^{\mu}$ is the incoming momentum of a fermion or a vector boson and $q^\mu$ is the momentum of the Higgs boson. These renormalized form factors $\hat{\Gamma}^i_{hXX}$ are expressed by \begin{align} \hat{\Gamma}^i_{hXX}(p_1^2,p_2^2,q^2)&=\Gamma^{i,{\rm tree}}_{hXX}+\Gamma^{i,{\rm loop}}_{hXX} =\Gamma^{i,{\rm tree}}_{hXX}+\Gamma^{i,{\rm 1PI}}_{hXX}(p_1^2,p_2^2,q^2)+\delta \Gamma^{i}_{hXX}, \label{eq:renohVV} \end{align} where $\Gamma_{hXX}^{i,{\rm tree}}$, $\Gamma_{hXX}^{i,{\rm 1PI}}$ and $\delta\Gamma^i_{hXX}$ denote the contributions from the tree-level diagram, 1PI diagrams for the vertex and the counterterms, respectively. 
The tree-level contributions are expressed as \begin{align} \Gamma_{hff}^{S,{\rm tree}}=-\frac{m_f}{v}\kappa_f,\quad \Gamma_{hVV}^{1,{\rm tree}}=\frac{2m_V^2}{v}\kappa_V, \end{align} where the scaling factors $\kappa_f$ and $\kappa_V$ in each model are given by \begin{align} &\kappa_f=\kappa_V=c_\alpha \quad \text{in\ the\ HSM}, \label{kappa_hsm}\\ &\kappa_f=s_{\beta-\alpha}+\zeta_f c_{\beta-\alpha}, \quad \kappa_V=s_{\beta-\alpha} \quad\text{in\ the\ THDMs}. \label{kappa_thdm} \end{align} The mixing parameters in each model are defined in Sec.~\ref{sec:models}, where $\zeta_f$ is shown in Table~\ref{yukawa_tab}. We note that the tree-level contributions to all the other form factors are zero. Explicit formulae for $\Gamma_{hXX}^{i,{\rm 1PI}}$ in the HSM and the THDMs are presented in Refs.~\cite{Kanemura:2015fra} and \cite{Kanemura:2015mxa} respectively, and those for $\delta\Gamma^i_{hXX}$ in each model are given in Ref.~\cite{Kanemura:2017wtm}. \section{Extended Higgs models}\label{sec:models} We briefly describe the HSM and the THDM in order. We give the Higgs potential, and define the input parameters for each model. \subsection{Higgs singlet model} \label{HSM} In addition to an isospin doublet Higgs field $\Phi$ with hypercharge $Y=1/2$ as in the SM, the HSM has a real singlet scalar field $S$ with $Y=0$. The most general Higgs potential is written as~\cite{Chen:2014ask,Kanemura:2015fra} \begin{align} V(\Phi,S) =&\, m_\Phi^2|\Phi|^2+\lambda |\Phi|^4 +\mu_{\Phi S}^{}|\Phi|^2 S+ \lambda_{\Phi S} |\Phi|^2 S^2 +t_S^{}S +m^2_SS^2+ \mu_SS^3+ \lambda_SS^4, \label{Eq:HSM_pot} \end{align} where all the parameters are real. 
The scalar fields $\Phi$ and $S$ are expressed in terms of the component fields by \begin{align} \Phi=\left(\begin{array}{c} G^+\\ \frac{1}{\sqrt{2}}(v+\phi+iG^0) \end{array}\right),\quad S=v_S^{} + s, \label{hsm_f} \end{align} where $v$ and $v_S$ are the vacuum expectation values (VEVs), and $G^{\pm,0}$ are the Nambu-Goldstone bosons to be absorbed by the longitudinal components of the weak gauge bosons. The potential is invariant under the shift of the VEV for the singlet field, so that $v_S$ can be fixed to be zero without any loss of generality~\cite{Chen:2014ask}. The mass eigenstates of the Higgs bosons are defined by introducing the mixing angle as \begin{align} \begin{pmatrix} s \\ \phi \end{pmatrix} = R(\alpha) \begin{pmatrix} H \\ h \end{pmatrix}~~\text{with}~~R(\alpha) = \begin{pmatrix} c_\alpha & -s_ \alpha \\ s_\alpha & c_\alpha \end{pmatrix}, \label{mat_r} \end{align} where $s_\theta$ and $c_\theta$ represent $\sin\theta$ and $\cos\theta$, respectively. The squared masses and the mixing angle are expressed as \begin{align} &m_H^2=M_{11}^2c^2_\alpha +M_{22}^2s^2_\alpha +M_{12}^2s_{2\alpha},\quad m_h^2=M_{11}^2s^2_\alpha +M_{22}^2c^2_\alpha -M_{12}^2s_{2\alpha}, \notag \\ &\tan 2\alpha=\frac{2M_{12}^2}{M_{11}^2-M_{22}^2}, \label{tan2a} \end{align} where the mass matrix elements $M^2_{ij}$ are given by \begin{align} M^2_{11}= M^2+ v^2\lambda_{\Phi S} ,\quad M^2_{22}=2v^2\lambda,\quad M^2_{12}=v\mu_{\Phi S}^{}, \label{mij} \end{align} with $M^2\equiv 2m_S^2$. The parameters $m_\Phi^2$ and $t_S^{}$ are eliminated by using the stationary conditions for $\phi$ and $s$. There are seven parameters in the Higgs potential, which can be rewritten as \begin{align} v,~m_h,~m_H,~M^2,~\mu_{S},~\lambda_S,~\alpha, \end{align} among which $v$ and $m_h$ are fixed to be $(\sqrt{2} G_F)^{-1/2}\simeq 246~{\rm GeV}$ with $G_F$ being the Fermi constant and 125~GeV, respectively. The remaining five parameters are free parameters of the model.
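The diagonalization relations above can be cross-checked numerically: build the symmetric mass matrix from Eq.~\eqref{mij}, extract $\alpha$ from Eq.~\eqref{tan2a}, and compare the rotated masses with the matrix eigenvalues (illustrative numbers, not a fit):

```python
import math

# Cross-check of the HSM diagonalization: the rotated masses from Eq. (tan2a)
# must coincide with the eigenvalues of [[M11^2, M12^2], [M12^2, M22^2]].
M11sq, M22sq, M12sq = 300.0**2, 125.0**2, 100.0**2   # illustrative [GeV^2]

alpha = 0.5 * math.atan2(2 * M12sq, M11sq - M22sq)
c, s = math.cos(alpha), math.sin(alpha)
mHsq = M11sq * c**2 + M22sq * s**2 + M12sq * 2 * s * c
mhsq = M11sq * s**2 + M22sq * c**2 - M12sq * 2 * s * c

# Closed-form eigenvalues of the 2x2 symmetric matrix
tr = M11sq + M22sq
det = M11sq * M22sq - M12sq**2
lam_plus = 0.5 * (tr + math.sqrt(tr**2 - 4 * det))
lam_minus = 0.5 * (tr - math.sqrt(tr**2 - 4 * det))
```

The sum $m_H^2 + m_h^2$ equals the trace of the mass matrix, and each rotated mass matches the corresponding eigenvalue, confirming the sign conventions in Eq.~\eqref{tan2a}.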
The size of the dimensionless parameters in the potential can be constrained by imposing bounds from perturbative unitarity and vacuum stability. These constraints are translated into constraints on physical quantities, i.e., masses of Higgs bosons and mixing angles, via the relations given in Eqs.~(\ref{tan2a})--(\ref{mij}). Concerning the former bound, all the independent eigenvalues of the $s$-wave amplitude matrix for the elastic $2\to 2$ body scatterings have been derived in Ref.~\cite{Cynolter:2004cq}. For the latter bound, the necessary conditions to guarantee that the potential is bounded from below at large field values have been given in Ref.~\cite{Pruna:2013bma}. In addition to these bounds, one has to avoid wrong local extrema in which the true vacuum, $\langle\Phi^0\rangle = v/\sqrt{2}$, does not correspond to the deepest minimum. Such a wrong vacuum can appear due to the existence of the scalar trilinear couplings $\mu_{\Phi S}^{}$ and $\mu_{S}^{}$. In Refs.~\cite{Espinosa:2011ax,Chen:2014ask,Lewis:2017dme}, the conditions to avoid the wrong vacuum have been found. It is also important to take into account the constraint from EW oblique parameters such as the $S$ and $T$ parameters~\cite{Peskin:1990zt}. These parameters are calculated in terms of the gauge boson 2-point functions, whose analytic formulae can be found in Ref.~\cite{Lopez-Val:2014jva}. In addition to the above constraints, LHC data, i.e., direct searches for additional Higgs bosons and signal strengths for the discovered Higgs boson, can also set limits on the mass of the extra Higgs boson and its couplings to SM particles. In the HSM, constraints on $m_H^{}$ and $\alpha$ have been studied by using LHC Run-I~\cite{Robens:2015gla} and Run-II data~\cite{Robens:2016xkb, Blasi:2017zel,Gu:2017ckc}. \subsection{Two Higgs doublet model} Instead of a singlet field as in the HSM, the THDM has an additional isospin doublet scalar field with $Y=1/2$.
In order to avoid flavor changing neutral currents at the tree level, we impose a $Z_2$ symmetry~\cite{Glashow:1976nt}, which can be broken softly. Under this symmetry the Higgs potential is given by \begin{align} V(\Phi_1,\Phi_2) &= m_1^2|\Phi_1|^2+m_2^2|\Phi_2|^2-m_3^2(\Phi_1^\dagger \Phi_2^{} +\text{h.c.})\notag\\ &\quad +\frac{1}{2}\lambda_1|\Phi_1|^4+\frac{1}{2}\lambda_2|\Phi_2|^4+\lambda_3|\Phi_1|^2|\Phi_2|^2+\lambda_4|\Phi_1^\dagger\Phi_2^{}|^2 +\frac{1}{2}\lambda_5[(\Phi_1^\dagger\Phi_2^{})^2+\text{h.c.}], \label{pot_thdm2} \end{align} where all the parameters can be taken to be real by assuming $CP$ conservation. The two doublet fields $\Phi_1$ and $\Phi_2$ are parameterized as \begin{align} \Phi_i=\left(\begin{array}{c} w_i^+\\ \frac{1}{\sqrt{2}}(v_i+h_i+iz_i) \end{array}\right)~~\text{with}~~i=1,2, \end{align} where $v_1$ and $v_2$ are the VEVs of the Higgs doublet fields with $v=(v_1^2+v_2^2)^{1/2}$. The mass eigenstates of the Higgs fields are defined as follows: \begin{align} \left(\begin{array}{c} h_1\\ h_2 \end{array}\right)=R(\alpha) \left(\begin{array}{c} H\\ h \end{array}\right), \quad \left(\begin{array}{c} z_1\\ z_2 \end{array}\right) =R(\beta)\left(\begin{array}{c} G^0\\ A \end{array}\right), \quad \left(\begin{array}{c} w_1^\pm\\ w_2^\pm \end{array}\right)&=R(\beta) \left(\begin{array}{c} G^\pm\\ H^\pm \end{array}\right), \label{mixing} \end{align} where $\tan\beta = v_2/v_1$. By solving the two stationary conditions for $h_1$ and $h_2$, we can eliminate the parameters $m_1^2$ and $m_2^2$.
Then, the squared masses of the physical Higgs bosons and the mixing angle $\alpha$ are expressed by \begin{align} &m_{H^\pm}^2 = M^2-\frac{1}{2}v^2(\lambda_4+\lambda_5),\quad m_A^2=M^2-v^2\lambda_5, \notag \\ &m_H^2=M_{11}^2 c^2_{\beta-\alpha} + M_{22}^2 s^2_{\beta-\alpha} -M_{12}^2 s_{2(\beta-\alpha)}, \quad m_h^2=M_{11}^2 s^2_{\beta-\alpha} + M_{22}^2 c^2_{\beta-\alpha} +M_{12}^2 s_{2(\beta-\alpha)}, \notag \\ &\tan 2(\beta-\alpha)= -\frac{2M_{12}^2}{M_{11}^2-M_{22}^2}, \label{333} \end{align} where $M^2 \equiv m_3^2/s_\beta c_\beta$ describes the soft breaking scale of the $Z_2$ symmetry, and $M_{ij}^2$ are the mass matrix elements for the $CP$-even scalar states in the basis of $(h_1,h_2)R(\beta)$: \begin{align} &M_{11}^2=v^2(\lambda_1c^4_\beta+\lambda_2s^4_\beta+2\lambda_{345}s^2_{\beta}c^2_\beta),\quad M_{22}^2=M^2+v^2s^2_{\beta}c^2_{\beta}(\lambda_1+\lambda_2-2\lambda_{345}), \notag\\ &M_{12}^2=\frac{1}{2}v^2 s_{2\beta}( -\lambda_1c^2_\beta+\lambda_2s^2_\beta+\lambda_{345} c_{2\beta} ), \end{align} with $\lambda_{345}\equiv \lambda_3+\lambda_4+\lambda_5$. We choose the following seven free parameters in the THDM: \begin{align} m_H^{},~ m_A^{},~ m_{H^\pm},~ M^2,~ \tan\beta,~ s_{\beta-\alpha}(\geq0),~ {\rm Sign}(c_{\beta-\alpha}). \end{align} For the Yukawa sector, we can define four types of interactions under the softly-broken $Z_2$ symmetry~\cite{Barger:1989fj,Grossman:1994jb,Aoki:2009ha}, as shown in Table~\ref{yukawa_tab}, depending on the $Z_2$ charge assignment for the right-handed fermions. As seen later, the difference in the Yukawa sector plays an important role in the Higgs decay rates in each type of the THDM. 
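As a quick numerical cross-check of the diagonalization formulas in Eq.~(333), one can verify that the rotation by $\beta-\alpha$ reproduces the closed-form eigenvalues of the $CP$-even mass matrix. This is a minimal sketch; the numerical $M_{ij}^2$ values (in GeV$^2$) are purely illustrative placeholders, not fitted model points.

```python
import math

# Consistency check of the CP-even diagonalization formulas: for a symmetric
# mass matrix M_ij^2, the rotation angle with tan 2(beta-alpha) =
# -2 M12^2 / (M11^2 - M22^2) must reproduce the closed-form eigenvalues.
# The M_ij^2 values (GeV^2) below are illustrative placeholders.
M11, M22, M12 = 1.4e4, 1.55e5, -2.4e3

theta = 0.5 * math.atan2(-2.0 * M12, M11 - M22)  # theta = beta - alpha
st, ct, s2t = math.sin(theta), math.cos(theta), math.sin(2.0 * theta)

mH2 = M11 * ct**2 + M22 * st**2 - M12 * s2t      # heavier CP-even state
mh2 = M11 * st**2 + M22 * ct**2 + M12 * s2t      # lighter CP-even state

# Closed-form eigenvalues of [[M11, M12], [M12, M22]]
avg = 0.5 * (M11 + M22)
dev = math.hypot(0.5 * (M11 - M22), M12)

assert abs(max(mH2, mh2) - (avg + dev)) < 1e-6 * avg
assert abs(min(mH2, mh2) - (avg - dev)) < 1e-6 * avg
assert abs(mH2 + mh2 - (M11 + M22)) < 1e-6 * avg  # trace is preserved
```

The same check passes for any symmetric input matrix, since it only tests the rotation identity and not a particular model point.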
\begin{table} \begin{center} \begin{tabular}{l|ccccccc|ccc}\hline &$\Phi_1$&$\Phi_2$&$Q_L$&$L_L$&$u_R$&$d_R$&$e_R$&$\zeta_u$ &$\zeta_d$&$\zeta_e$ \\\hline Type-I &$+$& $-$&$+$&$+$& $-$&$-$&$-$&$\cot\beta$&$\cot\beta$&$\cot\beta$ \\ Type-II&$+$& $-$&$+$&$+$& $-$ &$+$&$+$& $\cot\beta$&$-\tan\beta$&$-\tan\beta$ \\ Type-X (lepton-specific) &$+$& $-$&$+$&$+$& $-$ &$-$&$+$&$\cot\beta$&$\cot\beta$&$-\tan\beta$ \\ Type-Y (flipped) &$+$& $-$&$+$&$+$& $-$ &$+$&$-$& $\cot\beta$&$-\tan\beta$&$\cot\beta$ \\\hline \end{tabular} \caption{The $Z_2$ charge assignment and the $\zeta_f$ factors in Eq.~\eqref{kappa_thdm} for each type of the Yukawa interactions. } \label{yukawa_tab} \end{center} \end{table} As we mentioned in the previous subsection, regions of the parameter space can be constrained by imposing the bounds from perturbative unitarity~\cite{Kanemura:1993hm,Akeroyd:2000wc,Ginzburg:2005dt,Kanemura:2015ska}, vacuum stability~\cite{Deshpande:1977rw,Sher:1988mj,Kanemura:1999xf} and the $S$ and $T$ parameters~\cite{Toussaint:1978zm,Bertolini:1985ia,Peskin:2001rw,Grimus:2008nb,Kanemura:2011sj}. Constraints from the LHC Run-I and Run-II data have been discussed in Refs.~\cite{Bernon:2015qea,Dorsch:2016tab,Han:2017pfo,Arbey:2017gmh,Chang:2015goa,Blasi:2017zel,Gu:2017ckc}. Unlike in the HSM, in the THDMs flavor experiments such as $B$ meson decays also give important constraints, particularly on the mass of the charged Higgs boson and on $\tan\beta$. Constraints from various $B$ physics processes have been studied comprehensively in Refs.~\cite{Mahmoudi:2009zx,Enomoto:2015wbn}. 
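The vacuum stability bound mentioned above can be illustrated concretely. The following sketch implements the standard tree-level bounded-from-below conditions for the $Z_2$-symmetric THDM potential of Eq.~\eqref{pot_thdm2}, in the form going back to Ref.~\cite{Deshpande:1977rw}; the numerical couplings are assumed values used only as a filter example.

```python
import math

# Standard tree-level bounded-from-below (vacuum stability) conditions for
# the Z2-symmetric THDM potential: lam1 > 0, lam2 > 0,
# lam3 + sqrt(lam1 lam2) > 0, lam3 + lam4 - |lam5| + sqrt(lam1 lam2) > 0.
def thdm_vacuum_stable(lam1, lam2, lam3, lam4, lam5):
    if lam1 <= 0.0 or lam2 <= 0.0:
        return False
    root = math.sqrt(lam1 * lam2)
    return (lam3 + root > 0.0) and (lam3 + lam4 - abs(lam5) + root > 0.0)

# Illustrative parameter points (assumed values, not model fits)
assert thdm_vacuum_stable(0.3, 0.25, 0.5, -0.2, 0.1)      # allowed
assert not thdm_vacuum_stable(0.1, 0.1, -0.5, 0.0, 0.0)   # potential unbounded
```

In a parameter scan, such a predicate would simply discard sampled points before any further evaluation.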
\section{Numerical evaluations}\label{sec:results} \subsection{Parameter spaces} In order to discuss deviations from predictions in the SM, we evaluate the ratios of the partial decay rates \begin{align} \Delta R(h\to XX)=\frac{\Gamma(h\to XX)}{\Gamma_{\rm SM}(h\to XX)}-1, \label{delR} \end{align} where $\Gamma_{\rm SM}(h\to XX)$ is the partial decay rate in the SM with the one-loop EW and one-loop QCD corrections. Identifying the discovered Higgs boson with mass 125~GeV as the lightest Higgs boson $h$, we take several sets of masses for the additional Higgs bosons, and we scan the other internal model parameters in each model in the following way. In the HSM, there are five free parameters; i.e., $m_H$, $c_\alpha$, $M^2$, $\mu_S$ and $\lambda_S$. The mass of the second Higgs boson $H$ is taken as \begin{align} &m_H=500, 1000, 2000, 3000\ {\rm and}\ 5000{\rm\ GeV}, \label{mass_hsm} \end{align} while $c_\alpha$ and $M^2$ are scanned as \begin{align} &0.95<c_\alpha<1,\quad 0<M^2<m_H^2. \label{sca_HSM} \end{align} Here, the region of $c_\alpha$ is taken so that the model is not far from the SM-like limit. The other parameters $\mu_S$ and $\lambda_S$ are taken to be zero, because the $\mu_S$ dependence in $\Delta R$'s is numerically negligible while $\lambda_S$ is irrelevant to our current analysis. In the THDMs, there are seven free parameters; i.e., $m_H$, $m_A$, $m_{H^\pm}$, $s_{\beta-\alpha}$, $\tan\beta$, $M^2$ and Sign($c_{\beta-\alpha}$). In order to avoid the constraint from the $T$ parameter, we take $m_{H^{\pm}}=m_A$, by which new contributions to the $T$ parameter are suppressed due to the custodial symmetry~\cite{Haber:1992py,Pomarol:1993mu}. Furthermore, throughout our analysis we assume $m_H=m_{H^{\pm}}=m_A$ for simplicity. The degenerate mass is taken as \begin{align} &m_H=400,700,1000, 1500\ {\rm and}\ 2000{\rm\ GeV}. 
\label{mass_thdm} \end{align} The other parameters are scanned as% \footnote{The lower bound of $\tan\beta$ comes from the constraints of flavor experiments~\cite{Enomoto:2015wbn}. For the upper limit of $\tan\beta$, a larger value can also be taken near the limit of $s_{\beta-\alpha}=1$, where deviations in the decay rates are too small to be detected. We do not consider the case where the Yukawa coupling constant changes its sign for a large value of $\tan\beta$ with $c_{\beta-\alpha}<1$, because such a case has already been mostly excluded by the current LHC data~\cite{Aad:2015pla,CMS:2016qbe}. } \begin{align} &0.95<s_{\beta-\alpha}<1,\quad 1<\tan\beta<3,\quad 0<M^2<m_H^2. \label{sca_THDM} \end{align} Similar to $c_\alpha$ in the HSM, the region of $s_{\beta-\alpha}$ is taken to be close to the SM-like limit. Finally, we consider both the positive and negative cases of $c_{\beta-\alpha}$. Over the above parameter spaces, we take into account the following constraints discussed in Sec.~\ref{sec:models}: perturbative unitarity, vacuum stability, and compatibility with the EW $S$ and $T$ parameters. In addition, a condition to avoid wrong vacua is imposed on the parameter space in the HSM. Those constraints are already implemented in {\sc H-COUPv1} \!\!; see the manual~\cite{Kanemura:2017gbi} for details. We briefly mention the constraints from the LHC Higgs measurements. By using the data of approximately 5~fb$^{-1}$ at $\sqrt{s}=7$~TeV and 20~fb$^{-1}$ at $\sqrt{s}=8$~TeV, the ATLAS and CMS collaborations have put constraints on the mixing angles of the HSM and the THDMs as follows~\cite{Aad:2015pla,CMS:2016qbe}. In the HSM, the observed (expected) limit at 95\% confidence level (CL) is $c_\alpha>0.94\ (0.88)$. In the THDMs, the constraint on $s_{\beta-\alpha}$ depends on the type of the models as well as $\tan\beta$. 
For example, for $\tan\beta=1$ and $c_{\beta-\alpha}>0$, the observed 95\% CL limits are $s_{\beta-\alpha}>0.94$ in the Type-I THDM, $s_{\beta-\alpha}>0.97$ in the Type-II and Type-Y THDMs, and $s_{\beta-\alpha}>0.99$ in the Type-X THDM. We note that those constraints are obtained by using the LO-motivated $\kappa$ framework~\cite{Heinemeyer:2013tqa}, and the interpretation of such constraints at the higher-order level might not be straightforward. \subsection{$h\to\tau\tau$ vs. $h\to bb$} In Fig.~\ref{fig:dR_tb} we show correlations of the ratios of the Higgs decay widths between $h\to\tau\tau$ and $h\to bb$ with one-loop EW and one-loop QCD corrections, defined in Eq.~\eqref{delR}, in the HSM and in four types of the THDM, i.e. Type-I, Type-II, Type-X and Type-Y. The colored regions correspond to the predictions in these models; i.e., yellow, red, blue, green and purple correspond to the HSM, Type-I, Type-II, Type-X and Type-Y THDMs, respectively. The contrast of the colors represents five mass scales of the additional Higgs bosons given in Eqs.~\eqref{mass_hsm} and \eqref{mass_thdm} from light to dark. The evaluation is performed separately for $c_{\beta-\alpha}<0$ (left panel) and $>0$ (right panel) in the THDMs, while the other parameters are scanned in the regions shown in Eqs.~\eqref{sca_HSM} and \eqref{sca_THDM} under the constraints described above. We also show the tree-level predictions with $\tan\beta=1$ and 3 in the THDMs by gray and black lines with dots denoting $s_{\beta-\alpha}=$1, 0.995, 0.99, 0.98, 0.95 from the origin, which corresponds to the SM limit. \begin{figure} \center \includegraphics[width=0.49\textwidth]{fig_tb_cm.png}\ \ \includegraphics[width=0.49\textwidth]{fig_tb_cp.png} \caption{Correlation of the ratios of the Higgs decay widths in the HSM and four types of the THDM to those in the SM, defined in Eq.~\eqref{delR}, between $h\to\tau\tau$ and $h\to bb$. 
The masses of additional Higgs bosons are taken as 500, 1000, 2000, 3000 and 5000~GeV in the HSM and as 400, 700, 1000, 1500 and 2000~GeV in the THDMs. The left and right panels show the cases for $c_{\beta-\alpha}<0$ and $>0$ in the THDMs. The other parameters are scanned in the region shown in Eqs.~\eqref{sca_HSM} and \eqref{sca_THDM} under the constraints described in the text, while $\mu_S$ and $\lambda_S$ in the HSM are taken to be zero. As a reference, the tree-level predictions with $\tan\beta=1$ and 3 in the THDMs are also presented by gray and black lines, respectively. } \label{fig:dR_tb} \end{figure} The patterns of the deviations are mainly determined by the tree-level mixing effects on the couplings, i.e. Eqs.~\eqref{kappa_hsm} and \eqref{kappa_thdm} in the HSM and the THDMs, respectively, which were studied in Ref.~\cite{Kanemura:2014bqa} in detail. In the HSM, as the mixing angle $c_\alpha$ decreases from the SM limit ($c_\alpha=1$), the decay widths decrease monotonically from the SM predictions for both the $\tau\tau$ and $bb$ modes. On the other hand, those in each type of the THDM are quite distinctive due to the peculiar Yukawa structures. The dependence on the additional Higgs boson masses comes from the theoretical constraints such as perturbative unitarity and vacuum stability; i.e., for a given mass of the heavy Higgs bosons the mixing parameters are constrained. As the additional Higgs bosons become heavier with growing $M^2$, the mixing goes to zero and the model becomes closer to the SM-like limit. We note that the darker-colored regions include the lighter-colored regions. In other words, when a deviation is observed, we can set an upper bound on the additional Higgs boson masses. The above tree-level picture can be modified by quantum effects. 
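The tree-level mixing pattern just described can be sketched numerically: in the THDMs the coupling scaling factors are $\kappa_f = s_{\beta-\alpha} + \zeta_f c_{\beta-\alpha}$ (with $\zeta_f$ from Table~\ref{yukawa_tab}, Eq.~\eqref{kappa_thdm}), so that at tree level $\Delta R(h\to ff) = \kappa_f^2 - 1$. The helper below is a hypothetical illustration of that relation, not the loop-corrected computation performed by {\sc H-COUP}.

```python
import math

# Tree-level Delta R(h -> ff) = kappa_f^2 - 1 with
# kappa_f = s_{beta-alpha} + zeta_f c_{beta-alpha}; zeta_f per Yukawa type.
ZETA = {  # (zeta_u, zeta_d, zeta_e), as functions of tan(beta)
    "Type-I":  lambda tb: (1.0 / tb, 1.0 / tb, 1.0 / tb),
    "Type-II": lambda tb: (1.0 / tb, -tb, -tb),
    "Type-X":  lambda tb: (1.0 / tb, 1.0 / tb, -tb),
    "Type-Y":  lambda tb: (1.0 / tb, -tb, 1.0 / tb),
}

def dR_tree(thdm_type, tanb, s_ba, sign_cba=-1):
    c_ba = sign_cba * math.sqrt(1.0 - s_ba**2)
    zu, zd, ze = ZETA[thdm_type](tanb)
    kappa = lambda z: s_ba + z * c_ba
    return {f: kappa(z)**2 - 1.0
            for f, z in (("cc", zu), ("bb", zd), ("tautau", ze))}

r1 = dR_tree("Type-I", 1.0, 0.98)
r2 = dR_tree("Type-X", 1.0, 0.98)
# Type-I (and Type-II) shift bb and tautau identically, while Type-X
# (and Type-Y) split them -- the pattern visible in Fig. fig:dR_tb.
assert abs(r1["bb"] - r1["tautau"]) < 1e-12
assert r2["bb"] != r2["tautau"]
```

This reproduces only the gray/black tree-level reference lines of the figures; the scanned colored regions additionally contain the one-loop effects discussed next.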
As expected, the behaviors of $\Delta R_{XX}(\equiv \Delta R(h\to XX))$ with radiative corrections are almost consistent with the analysis based on the $\kappa$ scheme in Refs.~\cite{Kanemura:2004mg,Kanemura:2014dja,Kanemura:2015mxa,Kanemura:2015fra}, but in the present analysis considerable improvements have been made in the details of the computations, as already discussed. The pure quantum effects from the extra Higgs bosons can be observed, for example, in the Type-X THDM and the Type-Y THDM in the region just below $\Delta R_{bb}=-\Delta R_{\tau\tau}$ for $c_{\beta-\alpha}<0$, where they tend to reduce the decay widths. We note that in the Type-X THDM loop contributions to $\Gamma(h\to\tau\tau)$ are more sensitive to $\tan\beta$ than those to $\Gamma(h\to bb)$~\cite{Kanemura:2014dja}. On the contrary, in the Type-Y THDM loop contributions to $\Gamma(h\to bb)$ are more sensitive than those to $\Gamma(h\to\tau\tau)$. Looking at Fig.~\ref{fig:dR_tb}(left), we see subtle differences in the magnitudes of radiative corrections between the Type-X THDM and the Type-Y THDM, namely in the regions of the colored plots below the line of $\tan\beta=1$ for the tree-level predictions. The radiative corrections to the $h\to\tau\tau$ decay in the Type-X THDM are slightly larger than those to $h\to bb$ in the Type-Y THDM, even though the Yukawa structures for $h\tau\tau$ and $hbb$ are the same. This is due to the top-quark contributions to the $bb$ mode~\cite{Kanemura:2014dja}. From the detailed analyses with fixed mixing angles, which will be presented in detail elsewhere~\cite{Kanemura:2018xxx}, we find that the radiative corrections from the extended Higgs sector can be as large as several per cent. 
Such large corrections are the consequence of the non-decoupling effect~\cite{Kanemura:2004mg,Kanemura:2014dja,Kanemura:2015mxa,Kanemura:2015fra} of the loop corrections, which is proportional to $m_H^2/(16\pi^2v^2)$ when $M^2\lesssim v^2$, where the masses of the additional scalars mainly come from the VEV of the EW symmetry breaking; see the mass formulae in Sec.~\ref{sec:models}. For a given mass of the additional Higgs boson(s), the minimum of $M^2$ determined by the theoretical constraints gives rise to the largest deviations. \begin{figure} \center \includegraphics[width=0.49\textwidth]{fig_tc_cm.png}\ \ \includegraphics[width=0.49\textwidth]{fig_tc_cp.png} \caption{Correlation of the ratios of the Higgs decay widths in the HSM and four types of the THDM to those in the SM, defined in Eq.~\eqref{delR}, between $h\to\tau\tau$ and $h\to cc$. See Fig.~\ref{fig:dR_tb} for the descriptions.} \label{fig:dR_tc} \end{figure} \subsection{$h\to\tau\tau$ vs. $h\to cc$} In Fig.~\ref{fig:dR_tc} we show the correlation in the radiatively corrected decay rates of $h\to\tau\tau$ and $h\to cc$, where the descriptions for the figure are the same as in Fig.~\ref{fig:dR_tb}. At the tree level, the results in the Type-I (Type-II) THDM coincide with those in the Type-Y (Type-X) THDM, because the Yukawa structures for up-type quarks and leptons are common. In contrast to the case in Fig.~\ref{fig:dR_tb}, the predicted region in the Type-II THDM spreads out, as the Yukawa structures for up-type quarks and leptons are different. From the correlations among the three different fermionic decay modes of the Higgs boson shown in Figs.~\ref{fig:dR_tb} and \ref{fig:dR_tc}, we can identify the type of THDM independently of the model parameters when a deviation is observed in experiments. Similar to Fig.~\ref{fig:dR_tb}, the quantum corrections of the additional Higgs bosons can be significant. 
For example, in the Type-II THDM for $c_{\beta-\alpha}<0$ we see the regions just below the line of the tree-level prediction at $\tan\beta=1$. The plots in these regions purely correspond to the quantum corrections. We also find a dependence on the mass of the additional Higgs bosons, which is similar to that in the Type-X THDM in Fig.~\ref{fig:dR_tb}. \subsection{$h\to\tau\tau$ vs. $h\to ZZ^*$} \begin{figure} \center \includegraphics[width=0.49\textwidth]{fig_tz_cm.png}\ \ \includegraphics[width=0.49\textwidth]{fig_tz_cp.png} \caption{Correlation of the ratios of the Higgs decay widths in the HSM and four types of the THDM to those in the SM, defined in Eq.~\eqref{delR}, between $h\to\tau\tau$ and $h\to ZZ^*$. See Fig.~\ref{fig:dR_tb} for the descriptions.} \label{fig:dR_tz} \end{figure} In Fig.~\ref{fig:dR_tz} we show the correlation in the radiative corrections to the decay rates of $h\to\tau\tau$ and $h\to ZZ^*$, where the descriptions for the figure are the same as in Fig.~\ref{fig:dR_tb}. Similar to the discussion on Fig.~\ref{fig:dR_tb}, the patterns of the deviations are mainly governed by the mixing effects on the Higgs couplings at the tree level. The HSM can be distinguished from the THDMs if the deviation $\Delta R_{ZZ^*}$ is larger than a few per cent. On the other hand, the results in the Type-I (Type-II) THDM coincide with those in the Type-Y (Type-X) THDM at the tree level, as the Yukawa structures for leptons are the same. An important remark on the THDMs is the difference in the magnitude of the deviations between $h\to ff$ and $h\to VV^*$~\cite{Kanemura:2015mxa}. A 5\% deviation of the coupling in the gauge sector (i.e. $s_{\beta-\alpha}=0.95$) gives rise to $\Delta R_{ZZ^*} \sim -10\%$ -- $-15\%$, while $\Delta R_{ff}\sim \pm60\%$ or more. We also remark that the dependence on the extra Higgs boson masses is different between the HSM and the THDMs. 
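The magnitudes quoted above can be checked at tree level, where $\kappa_Z = s_{\beta-\alpha}$ and $\kappa_f = s_{\beta-\alpha} + \zeta_f c_{\beta-\alpha}$. The short sketch below takes $\zeta_f = 1$ (i.e. $\tan\beta = 1$ in the Type-I THDM, an illustrative choice) and the positive branch of $c_{\beta-\alpha}$; the loop corrections push $\Delta R_{ZZ^*}$ further, toward the $-10\%$ to $-15\%$ range.

```python
import math

# Tree-level check: s_{beta-alpha} = 0.95 gives Delta R_{ZZ*} ~ -10%,
# while a fermion coupling with zeta_f = 1 already deviates by ~ +60%.
s_ba = 0.95
c_ba = math.sqrt(1.0 - s_ba**2)           # ~ 0.312, positive branch

dR_ZZ = s_ba**2 - 1.0                      # = -0.0975, i.e. ~ -10%
dR_ff = (s_ba + 1.0 * c_ba)**2 - 1.0       # ~ +0.593, i.e. ~ +60%

assert abs(dR_ZZ + 0.0975) < 1e-9
assert abs(dR_ff - 0.59) < 0.02
```

The quadratic enhancement of the fermionic modes relative to the gauge mode is exactly the hierarchy used in the text to separate the THDMs from the HSM.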
The deviations in the HSM can be larger than those in the THDMs for $m_H>1$~TeV~\cite{Kanemura:2015fra,Kanemura:2017wtm}. Compared with the $h\to ff$ decay, the loop correction to the three-body $h\to ZZ^*\to Zff$ decay is much more intricate, as discussed in Sec.~\ref{sec:hvv}. The pure quantum effects from the extra Higgs bosons can be seen as the thickness of the line in the HSM. In the THDMs, the pure loop effects can be seen in the regions below the tree-level $\tan\beta=3$ line in the Type-I THDM and below the $\tan\beta=1$ line in the Type-II THDM. As seen, the loop corrections from the heavy Higgs bosons tend to reduce the decay widths. We find that in the THDMs the corrections to $\Delta R_{ZZ^*}$ are larger for a smaller $\tan\beta$ and for a smaller $s_{\beta-\alpha}$. The magnitude of the radiative corrections can be as large as a few per cent. In this plot, near the origin, i.e. around the SM-like limit, we clearly observe the non-decoupling effect discussed above. In contrast to the tree-level analysis, even with heavier Higgs bosons a larger deviation can be predicted, as seen in the region with $m_H=700$~GeV near the SM-like limit in the THDMs. \begin{figure} \center \includegraphics[width=0.49\textwidth]{fig_tg_cm.png}\ \ \includegraphics[width=0.49\textwidth]{fig_tg_cp.png} \caption{Correlation of the ratios of the Higgs decay widths in the HSM and four types of the THDM to those in the SM, defined in Eq.~\eqref{delR}, between $h\to\tau\tau$ and $h\to\gamma\gamma$. See Fig.~\ref{fig:dR_tb} for the descriptions.} \label{fig:dR_tg} \end{figure} \subsection{$h\to\tau\tau$ vs. $h\to\gamma\gamma$} In Fig.~\ref{fig:dR_tg} we show correlations between $h\to\tau\tau$ and $h\to\gamma\gamma$. Unlike the previous correlations, the shape of the regions predicted in the THDMs is rather different between the cases of different signs of $c_{\beta-\alpha}$. 
The reason why the deviations are larger for $c_{\beta-\alpha}>0$ than for $c_{\beta-\alpha}<0$ is that the contributions of the charged Higgs loop diagram and the top-loop diagram are constructive for $c_{\beta-\alpha}>0$. The different shapes of the predicted regions between $c_{\beta-\alpha}<0$ and $>0$ can help to disentangle the degeneracy between the Type-I THDM and the Type-II THDM. For instance, $\Delta R_{\gamma\gamma}$ is mostly negative, but can be positive by a few per cent at most for $c_{\beta-\alpha}<0$. In such a case, if $\Delta R_{\tau\tau}$ is negative (positive), the model can be identified as the Type-I (Type-II) THDM. \section{Conclusions}\label{sec:summary} In order to examine the Higgs sector using future precision data, we have studied the deviations from the SM predictions for the various partial decay widths of the 125~GeV Higgs boson with one-loop EW and one-loop QCD corrections in extended Higgs models, such as the HSM as a real singlet extension and four types of the THDM with a softly broken $Z_2$ symmetry. By employing and extending {\sc H-COUPv1} \!\!, which evaluates the renormalized vertex functions for the discovered Higgs boson in various extended Higgs models, we calculated the partial decay widths for $h\to\tau\tau$, $bb$, $cc$, $Zff$ and $\gamma\gamma$, and presented the ratios of the decay widths in the extended Higgs models to those in the SM. We described the patterns of the deviations in the various correlations, which are distinctive for each model and determined by the tree-level mixing effects on the Higgs couplings. Even with a full set of radiative corrections, we may be able to discriminate these extended Higgs models as long as any of the deviations from the SM predictions is detected at future precision measurements. 
For example, we can discriminate the Type-X THDM and the Type-Y THDM from the others (the Type-I THDM, the Type-II THDM and the HSM) by the analysis shown in Fig.~\ref{fig:dR_tb}. Then, the Type-II THDM can be separated from the Type-I THDM and the HSM as shown in Fig.~\ref{fig:dR_tc}. We can further distinguish the Type-I THDM from the HSM as in Fig.~\ref{fig:dR_tz}. Finally, by using the data from $\Gamma(h\to\gamma\gamma)$, we can check the consistency as in Fig.~\ref{fig:dR_tg}. In order for such a logic to be realized, we need to measure the decay rate of $h\to cc$ precisely, in addition to measuring the other decay modes as precisely as possible. To this end, the realization of future lepton colliders is highly desirable. Furthermore, although the dependence on the additional Higgs boson mass generally indicates decoupling behavior under the theoretical constraints from perturbative unitarity and vacuum stability, the loop effects from the extended Higgs sector can be large due to the non-decoupling effect. We can extract important information on the mass scale of the extra Higgs bosons indirectly from the magnitude of the deviations of the SM-like Higgs decay widths, even if new particles are not discovered at the LHC.
\renewcommand\section{\@startsection {section}{1}{\z@}% {-5.5ex \@plus -1ex \@minus -.2ex}% {2.3ex \@plus.2ex}% {\normalfont\large\bfseries}} \renewcommand\subsection{\@startsection{subsection}{2}{\z@}% {-3.25ex\@plus -1ex \@minus -.2ex}% {1.5ex \@plus .2ex}% {\normalfont\normalsize\bfseries}} \renewcommand\thesection {\@arabic\c@section} \renewcommand\thesubsection {\thesection.\@arabic\c@subsection} \renewcommand{\@seccntformat}[1]{% \csname the#1\endcsname.\hspace{1.0em}} \makeatother \begin{document} \begin{titlepage} \begin{flushright} BI-TP 2008/02\\ NSF-KITP-08-39\\ \end{flushright} \begin{centering} \vfill \noindent {\Large{\bf Sterile neutrino dark matter as a consequence \\[1mm] of $\nu$MSM-induced lepton asymmetry}} \vspace{0.8cm} Mikko~Laine$^\rmi{a,b}$ and Mikhail~Shaposhnikov$^\rmi{c}$ \vspace{0.8cm} $^\rmi{a}${\em Faculty of Physics, University of Bielefeld, D-33501 Bielefeld, Germany\\} \vspace{0.3cm} $^\rmi{b}${\em Department of Physics, University of Oulu, FI-90014 Oulu, Finland\\} \vspace{0.3cm} $^\rmi{c}${\em Institut de Th\'eorie des Ph\'enom\`enes Physiques, EPFL, CH-1015 Lausanne, Switzerland} \vspace*{0.8cm} \mbox{\bf Abstract} \end{centering} \vspace*{0.3cm} \noindent It has been pointed out in ref.~\cite{asy} that in the $\nu$MSM (Standard Model extended by three right-handed neutrinos with masses smaller than the electroweak scale), there is a corner in the parameter space where CP-violating resonant oscillations among the two heaviest right-handed neutrinos continue to operate below the freeze-out temperature of sphaleron transitions, leading to a lepton asymmetry which is considerably larger than the baryon asymmetry. Consequently, the lightest right-handed (``sterile'') neutrinos, which may serve as dark matter, are generated through an efficient resonant mechanism proposed by Shi and Fuller~\cite{Shi:1998km}. 
We re-compute the dark matter relic density and non-equilibrium momentum distribution function in this situation with quantum field theoretic methods and, confronting the results with existing astrophysical data, derive bounds on the properties of the lightest right-handed neutrinos. Our spectra can be used as an input for structure formation simulations in warm dark matter cosmologies, for a Lyman-$\alpha$ analysis of the dark matter distribution on small scales, and for studying the properties of haloes of dwarf spheroidal galaxies. \vfill \noindent \vspace*{1cm} \noindent June 2008 \vfill \end{titlepage} \section{Introduction} Ever since the experimental discovery of neutrino mass differences, there has been a compelling case for the existence of right-handed neutrinos in nature. It turns out, however, to be difficult to determine the parameters associated with them with any precision. Indeed, given that right-handed neutrinos are gauge singlets, their Lagrangian contains explicit (Majorana) mass terms in addition to the usual Yukawa interactions. The known mass differences only constrain certain combinations of the Yukawa couplings and Majorana masses, so that the absolute scale of the Majorana masses cannot be fixed from the existing data. Recently, it has been pointed out~\cite{Asaka:2005an,Asaka:2005pn} that if the Majorana masses are chosen to be significantly smaller than has been the common choice (this corner of the parameter space was named $\nu$MSM, for ``neutrino Minimal Standard Model''), then it appears possible to find an amazingly complete description of the main cosmological mysteries that cannot be explained within the Standard Model. Suppose that there are three generations of the right-handed neutrinos, like there are of all other fermions in the Standard Model. 
Then the lightest right-handed, or ``sterile'' neutrinos, with masses in the keV range, might serve as (warm) dark matter~\cite{Dodelson:1993je,Shi:1998km}, \cite{Dolgov:2000ew}--\cite{Kishimoto:2008ic}\footnote{% Various cosmological and astrophysical phenomena related to the dark matter sterile neutrinos have been discussed in refs.~\cite{astro}. }; the two heavier right-handed neutrinos, with masses in the GeV range and almost degenerate with each other, could account simultaneously for baryogenesis and the observed active neutrino mass matrix~\cite{Akhmedov:1998qx,Asaka:2005pn}; while a non-minimal coupling of the Higgs field in this theory to the Ricci scalar might explain inflation~\cite{Bezrukov:2007ep}. In fact it can be argued that the $\nu$MSM could be a good effective field theory all the way up to the Planck scale~\cite{Shaposhnikov:2007nj} (for a similar argument in a related theory, see ref.~\cite{Meissner:2006zh}). On the quantitative level, though, it is non-trivial to realize all of these possibilities within the $\nu$MSM. Consider the explanation of dark matter by the lightest sterile neutrinos, for instance. There are strong experimental constraints from two sides: from the non-observation of an X-ray signal generated by the decay of the dark matter neutrinos on one hand (\cite{Boyarsky:2005us}--\cite{Boyarsky:2007ge} and references therein), and from structure formation simulations on the other~\cite{Hansen:2001zv}--\cite{Viel:2007mv}. 
Combining these experimental constraints with the results of theoretical computations of thermal dark matter production due to active-sterile mixing (the so-called Dodelson-Widrow mechanism)~\cite{Dodelson:1993je,Dolgov:2000ew, Abazajian:2001nj,Abazajian:2002yz,Abazajian:2005gj,als2} appears in fact to all but exclude the warm dark matter scenario~\cite{Seljak:2006qw}--\cite{Viel:2007mv}, \cite{Boyarsky:2007ay}\footnote{% Extending the $\nu$MSM by an extra scalar field allows for an additional mechanism for dark matter sterile neutrino creation which evades these limits~\cite{Shaposhnikov:2006xi,Kusenko:2006rh,Petraki:2007gq}; another relaxation follows if the reheating temperature after inflation is low (in the MeV range)~\cite{Gelmini:2004ah,Yaguna:2007wi,Khalil:2008kp}. }. Such a negative conclusion is premature, however. First of all, the structure formation simulations of refs.~\cite{Seljak:2006qw}--\cite{Viel:2007mv} assumed the spectrum of the dark matter sterile neutrinos to be {\em thermal}, with possibly a modest shift of the average momentum towards the infrared, while in reality the deviations from the Fermi-Dirac distribution are substantial~\cite{Abazajian:2005gj,als2}. Second, and even more importantly, it has been pointed out in ref.~\cite{asy} that in the framework of the $\nu$MSM it is possible to generate a large leptonic chemical potential surviving down to temperatures of a few hundred MeV. In this situation the results of the theoretical computation change dramatically~\cite{Shi:1998km,Abazajian:2001nj}, and it becomes easier to explain dark matter with sterile neutrinos. The purpose of the present paper is to elaborate on the latter possibility. More precisely, we re-compute the dark matter relic density in this situation with the quantum field theoretic methods introduced in refs.~\cite{als,als2}; analyse uncertainties related to unknown parameters and poorly known QCD phenomena; and compare with previous computations in the literature. 
Our main finding is that if lepton asymmetries in the range $n_{\nu_e}/s \gtrsim 0.8\times 10^{-5}$ exist, where $n_{\nu_e}$ refers to the asymmetry in electron-like neutrinos and $s$ is the total entropy density, then the $\nu$MSM can indeed account for the observed dark matter abundance. This bound can be consolidated once structure formation simulations have been repeated with the non-equilibrium momentum distribution functions (``spectra'') of the sterile neutrinos that we derive. In any case, asymmetries in the range $n_{\nu_e}/s \gtrsim 0.8\times 10^{-5}$ may be reachable in the so-called {\bf Scenario IIa} of parameter values of ref.~\cite{asy}. Note that the largest possible asymmetry leading to successful Big Bang Nucleosynthesis corresponds to a chemical potential $|\mu_L/T|\lesssim 0.07$ at $T\sim 1$~MeV (95\% CL)~\cite{Kohri:1996ke,Dolgov:2002ab}, meaning $n_{\nu_e}/s \lesssim 2.5 \times 10^{-3}$ in our units (cf.\ appendix~A). The maximal asymmetry which can be produced within the $\nu$MSM is somewhat smaller, $n_{\nu_e}/s \lesssim 0.7 \times 10^{-3}$~\cite{asy}. It is appropriate to stress that even though our considerations are naturally viewed as a sequel to the lepton asymmetry generated {\em \`a la} ref.~\cite{asy}, from a practical point of view the origin of the lepton asymmetry plays no role in the present analysis, nor do the parameters related to the two heaviest right-handed neutrinos. Indeed the only ingredient entering our computation is the absolute value of the lepton asymmetry which, as mentioned, we parametrize through the ratio $n_{\nu_e}/s$. 
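The conversion between the chemical potential and $n_{\nu_e}/s$ quoted above can be sketched as follows. To leading order in $\mu/T$, the asymmetry in a single relativistic neutrino species is $n = \mu T^2/6$, and $s = (2\pi^2/45)\, g_* T^3$; at $T \sim 1$~MeV one has $g_* = 10.75$ (the exact conventions are those of appendix~A).

```python
import math

# Leading-order conversion n_{nu_e}/s = (45 / (12 pi^2 g_*)) * (mu/T),
# evaluated at the BBN bound mu/T = 0.07 with g_* = 10.75 at T ~ 1 MeV.
g_star = 10.75
mu_over_T = 0.07   # 95% CL BBN bound quoted in the text

n_over_s = 45.0 / (12.0 * math.pi**2 * g_star) * mu_over_T

# Reproduces the value ~ 2.5e-3 quoted in the text to within ~1%
assert abs(n_over_s - 2.5e-3) / 2.5e-3 < 0.02
```

This is only the linearized relation; the $\mathcal{O}(\mu^3)$ term is negligible for $\mu/T \ll 1$.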
Recalling that the observed baryon asymmetry is $n_B/s \simeq (0.9 - 1.0) \times 10^{-10}$~\cite{pdg,Komatsu:2008hk}, we hence need to assume a boost of some five orders of magnitude in the leptonic sector. Besides $\nu$MSM, another possible origin for such an asymmetry could be the Affleck-Dine mechanism~\cite{Affleck:1984fy}, if it takes place below the electroweak scale and is based on a condensate producing many more leptons than quarks. Our presentation is organized as follows. In Sec.~\ref{se:general} we generalize the formalism of refs.~\cite{als,als2} to the charge-asymmetric situation. The resulting equation for the sterile neutrino abundance is integrated numerically in Sec.~\ref{se:abundance}, and the equation for the sterile neutrino spectrum in Sec.~\ref{se:spectrum}. We discuss the astrophysical consequences of these results in Sec.~\ref{se:astro}, and conclude in Sec.~\ref{se:concl}. In appendix~A we recall the relations of our characterization of the lepton asymmetry, through $n_{\nu_e}/s$, to a number of other conventions appearing in the literature. A reader only interested in the phenomenological consequences of our analysis could start directly from Sec.~\ref{se:astro}. 
\section{Basic formalism} \la{se:general} Our starting point is the Lagrangian \begin{equation} \mathcal{L} = \fr12 \bar{\tilde N_1} i \msl{\partial} \tilde N_1 - \fr12 M_1 \bar{\tilde N_1} \tilde N_1 - F_{\alpha 1} \bar L_\alpha \tilde \phi\, a_R \tilde N_1 - F_{\alpha 1}^* \bar{\tilde N_1} \tilde \phi^\dagger a_L L_\alpha + \mathcal{L}_\rmi{MSM} \;, \la{LM} \end{equation} where $\tilde N_1$ are Majorana spinors, and the subscript ``1'' refers to the lightest right-handed neutrino; repeated indices are summed over; $M_1$ is the Majorana mass that we have chosen to be real in this basis; $L_\alpha$ are the active lepton doublets; $F_{\alpha 1}$ are elements of a complex Yukawa matrix; $\tilde\phi = i \tau_2 \phi^*$ is the conjugate Higgs doublet; and $a_L \equiv (1-\gamma_5)/2$, $a_R \equiv (1+\gamma_5)/2$ are chiral projectors. To compute the abundance of $N_1$ from first principles, we make a number of basic assumptions, following refs.~\cite{als,als2}. First of all, we restrict to temperatures below a few GeV, implying that the electroweak symmetry is broken: $\langle \tilde \phi \rangle \simeq (v/\sqrt{2},0)$, where $v\simeq 246$~GeV is the Higgs field vacuum expectation value. Second, we assume that the mixing angles $\theta_{\alpha 1}^2 \equiv |M_D|_{\alpha 1}^2/M_1^2$, where $|M_D|_{\alpha 1} \equiv |v F_{\alpha 1}| /\sqrt{2}$, are very small, $\theta_{\alpha 1} \lesssim 10^{-3}$. Then it is sufficient to restrict to the leading non-trivial order in a Taylor series in $\theta_{\alpha 1}^2$. The third assumption concerns the flavour structure of the lepton asymmetry. We will only consider the case when the asymmetries in all active species ($\nu_e$, $e_L$, $e_R$, $\nu_\mu$, $\mu_L$, $\mu_R$, $\nu_\tau$, $\tau_L$, $\tau_R$) are equal. 
Strictly speaking, this is not satisfied in the $\nu$MSM, since the generation of lepton asymmetries takes place when the reactions changing neutrino flavours freeze out~\cite{asy}. We make this assumption in order to keep the discussion as simple as possible, and also because it yields the most conservative constraints, leading to the largest sterile neutrino abundance and consequently weakening the X-ray bounds. We discuss the general formalism applicable in this setting in Sec.~\ref{ss:muL}. At the same time, there is no reservoir replenishing the lepton asymmetry if a part of it is converted to right-handed neutrinos: the CP-violating reactions generating the asymmetry cease to take place at temperatures above a few GeV~\cite{asy}. Together with the third assumption this completely fixes the time evolution of the lepton asymmetry; this will be demonstrated in Sec.~\ref{ss:BR1}. The fourth and final assumption asserts that the density of the right-handed neutrinos produced is below its equilibrium value. This assumption is necessary for the validity of the quantum field theoretic formulation of refs.~\cite{als,als2}; on the other hand, it may be violated in certain parts of the parameter space. In Sec.~\ref{ss:BR2} we outline a phenomenological way to correct for a possible violation. \subsection{Results in terms of a generic lepton asymmetry} \la{ss:muL} Under the assumption of a chemically equilibrated lepton asymmetry among the active species, the formal determination of the sterile neutrino production rate proceeds almost exactly as in ref.~\cite{als}, with the only difference that the density matrix of the Minimal Standard Model (MSM) now takes the form\footnote{% In general, different chemical potentials have to be introduced for different leptonic flavours.} \begin{equation} \hat \rho_\rmi{MSM} = Z^{-1}_\rmi{MSM} \exp[-\beta (\hat H_\rmi{MSM} - \mu_L \hat L_\rmi{MSM})] \;. 
\end{equation} Here $\beta \equiv 1/T $; $\mu_L\neq 0$ unlike in ref.~\cite{als}; and $\hat L_\rmi{MSM}$ is the total lepton number operator within the MSM, \begin{equation} \hat L_\rmi{MSM} \equiv \int \!{\rm d}^3 \vec{x} \, \sum_{\alpha = e,\mu,\tau} \Bigl[ \hb{l}_{\alpha L} \,\gamma_0\, \hat{l}_{\alpha L} + \hb{l}_{\alpha R} \,\gamma_0\, \hat{l}_{\alpha R} + \hb{\nu}_{\alpha L} \,\gamma_0\, \hat{\nu}_{\alpha L} \Bigr] \;, \end{equation} with $ l_e \equiv e, l_\mu \equiv \mu, l_\tau \equiv \tau $; and $\hat \psi_L \equiv a_L\hat \psi$, $\hat \psi_R \equiv a_R\hat \psi$. According to ref.~\cite{als}, the phase space density $f_1(t,\vec{q})$ of right-handed neutrinos in either helicity state $s$, \begin{equation} f_1(t,\vec{q}) \equiv \sum_{s=1,2} \frac{{\rm d} N_1^{(s)}(t,\vec{x},\vec{q})}{{\rm d}^3 \vec{x}\,{\rm d}^3 \vec{q}} \;, \la{n1_norm} \end{equation} obeys the equation \begin{equation} \biggl( \frac{\partial}{\partial t} - H q_i \frac{\partial}{\partial q_i}\biggr) f_1(t,\vec{q}) = R(T,\vec{q}) \;. \la{expansion} \la{kinetic} \end{equation} Here $H$ is the Hubble parameter, $H={\rm d}\ln a(t)/ {\rm d}t$, and $q_i$ are the spatial components of the physical momentum $\vec{q}$, defined in a local Minkowskian frame. Repeating the analysis of ref.~\cite{als} with $\mu_L \neq 0$, the source term reads \begin{equation} R(T,\vec{q}) = \frac{1}{(2\pi)^3 q^0} \sum_{\alpha = 1}^{3} |M_D|^2_{\alpha 1} {\rm Tr\,}\Bigl\{ \bsl{Q} a_L \Bigl[ \nF{}(q^0 - \mu_L) \rho_{\alpha\alpha}(Q) + \nF{}(q^0 + \mu_L) \rho_{\alpha\alpha}(-Q) \Bigr] a_R \Bigr\} \;, \la{master} \end{equation} where $\nF{}(q) \equiv 1/[\exp(q/T)+ 1]$ is the Fermi distribution function; $\rho_{\alpha\alpha}$ is the spectral function related to the propagator of the active neutrino of generation $\alpha$; and $Q$ is the on-shell four-momentum of the right-handed neutrino, i.e.\ $Q^2 = M_1^2$. 
Noting that the solution of Eq.~\nr{kinetic} only depends on $q\equiv |\vec{q}|$, it can be written as~\cite{als2} \begin{equation} f_1(t_0,q)= \int_{T_0}^\infty \! \frac{{\rm d}T}{T^3} \, \frac{M_0(T)}{3 c_s^2(T)} R\biggl( {T}, q \frac{T}{T_0} \left[\frac{h_\rmi{eff}(T)}{h_\rmi{eff}(T_0)}\right]^{\frac{1}{3}} \biggr) \;, \la{distribution} \end{equation} where $ M_0(T) \equiv M_\rmi{Pl} \sqrt{ {45} / {4\pi^3 g_\rmi{eff}(T)} } $; $g_\rmi{eff}(T)$ parametrizes the energy density $e$ as $ e \equiv {\pi^2 T^4 g_\rmi{eff}(T)}/{30} $; and $h_\rmi{eff}(T)$ parametrizes the entropy density $s$ as $ s \equiv {2 \pi^2 T^3} h_\rmi{eff}(T) / {45} $. Moreover, $c_s^2$ is the sound speed squared, given by $ 1/{c_s^{2}(T)} = 3 + {T h'_\rmi{eff}(T)} / {h_\rmi{eff}(T)} $. Note that in Eq.~\nr{distribution}, the chemical potential $\mu_L$ may be taken to depend on $T$ in any way, to be specified later on from physical considerations. To derive an expression for $\rho_{\alpha\alpha}$, we follow the steps in Sec.~3.2 of ref.~\cite{als}, but without assuming anything about the symmetry properties of the active neutrino self-energy for the moment. The Euclidean propagator (cf.\ Eq.~(3.10) of ref.~\cite{als}) then becomes \begin{equation} \Pi^E_{\alpha\alpha}(\tilde Q) = a_L \frac{1}{- i \bsl{\tilde Q} + i \bsl{\tilde\Sigma} ( -\tilde Q)} a_R = a_L\, \frac{i \bsl{\tilde Q} - i \bsl{\tilde\Sigma}(-\tilde Q)} {[\tilde Q - \tilde \Sigma(-\tilde Q)]^2}\, a_R \;. \la{prop} \end{equation} We have left out the flavour indices from the active neutrino self-energy $\tilde\Sigma$ to compactify the notation somewhat, and the tildes are a reminder of Euclidean conventions. 
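As a quick numerical cross-check of the kinematics entering Eq.~\nr{distribution}: the quantity $M_0(T)$ is just the familiar radiation-era Hubble relation in disguise, $H = T^2/M_0(T)$ with $H \simeq 1.66\sqrt{g_\rmi{eff}}\,T^2/M_\rmi{Pl}$. A minimal sketch, in which the value $g_\rmi{eff} = 10.75$ is an illustrative choice rather than a number taken from the text:

```python
import math

M_PL = 1.22e19  # Planck mass in GeV

def M0(g_eff, m_pl=M_PL):
    """M_0(T) = M_Pl * sqrt(45 / (4 pi^3 g_eff)), so that H = T^2 / M_0(T)."""
    return m_pl * math.sqrt(45.0 / (4.0 * math.pi**3 * g_eff))

# The prefactor in H = 1.66 sqrt(g_eff) T^2 / M_Pl follows from the same formula:
prefactor = math.sqrt(4.0 * math.pi**3 / 45.0)
print(round(prefactor, 2))      # ~1.66
print(f"{M0(10.75):.2e} GeV")   # M_0 for g_eff = 10.75
```

The check confirms that $M_0$ is of order $10^{18}$~GeV in the temperature range of interest, so that the integrand in Eq.~\nr{distribution} is strongly enhanced relative to the naive rate.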
Carrying out the Wick rotation, we can transform Eq.~\nr{prop} into a retarded Minkowskian propagator (cf.\ Eq.~(3.11) of ref.~\cite{als}): \begin{equation} \Pi^R_{\alpha\alpha}(q^0,\vec{q}) = \Pi^E_{\alpha\alpha}(-i q^0,\vec{q}) = a_L \frac{-\bsl{Q} + \bsl{\Sigma}(-Q)} {Q^2 - 2 Q\cdot \Sigma(-Q) + \Sigma^2(-Q)} a_R \;. \la{retd} \end{equation} Writing finally $\Sigma(q^0\pm i 0^+,\vec{q}) \equiv \mathop{\mbox{Re}} \Sigma(Q) \pm i \mathop{\mbox{Im}} \Sigma(Q)$ and correspondingly $\Sigma(-[q^0\pm i 0^+],-\vec{q}) = \mathop{\mbox{Re}} \Sigma(-Q) \mp i \mathop{\mbox{Im}} \Sigma(-Q)$ allows us to obtain (cf.\ Eqs.~(3.1), (3.4) of ref.~\cite{als}) \begin{eqnarray} \rho_{\alpha\alpha}(Q) & = & \frac{1}{2i} \Bigl[ \Pi^R_{\alpha\alpha}(q^0 + i 0^+,\vec{q}) - \Pi^R_{\alpha\alpha}(q^0 - i 0^+,\vec{q}) \Bigr] \\ & = & a_L \frac{ - S_I(-Q) [ \bsl{Q} - \mathop{\mbox{Re}} \bsl{\Sigma}(-Q)] - S_R(-Q) \mathop{\mbox{Im}}\bsl{\Sigma}(-Q) } { S_R^2(-Q) + S_I^2(-Q) } a_R \;, \la{rhoQ} \end{eqnarray} where \begin{eqnarray} S_R(-Q) & \equiv & [Q - \mathop{\mbox{Re}} \Sigma(-Q)]^2 - [\mathop{\mbox{Im}}\Sigma(-Q)]^2 \;, \la{SR} \\ S_I(-Q) & \equiv & -2 [Q - \mathop{\mbox{Re}}\Sigma(-Q) ] \cdot \mathop{\mbox{Im}}\Sigma(-Q) \;. \la{SI} \end{eqnarray} These expressions can trivially be written also for $\rho_{\alpha\alpha}(-Q)$, and the results can then be inserted into Eq.~\nr{master}. The outcome constitutes a generalization of Eq.~(3.12) of ref.~\cite{als}. To compactify the resulting equations somewhat, we make the following simplifications. We note, first of all, that in the imaginary part $\mathop{\mbox{Im}}\Sigma$, the chemical potential changes the thermal distribution functions of the on-shell leptons that appear in the intermediate states. Given that we are interested in the case $\mu_L/T\ll 1$, however, these changes are not important, and will be ignored in the following. 
Then we can assume that $\mathop{\mbox{Im}}\Sigma$ does not get modified by $\mu_L$ and, in particular, that $\mathop{\mbox{Im}}\Sigma(-Q) = \mathop{\mbox{Im}}\Sigma(Q)$, as is the case for $\mu_L = 0$. As far as $\mathop{\mbox{Re}}\Sigma$ is concerned, we note that its general structure can be written as \begin{equation} \mathop{\mbox{Re}} \bsl{\Sigma}_{\alpha\alpha}(Q) = \bsl{Q} a_{\alpha\alpha}(Q) + \msl{u} b_{\alpha\alpha}(Q) + \msl{u} c_{\alpha\alpha}(Q) \;, \la{Restruct} \end{equation} where $u=(1,\vec{0})$. We note that the function $a_{\alpha\alpha}(Q)$ can be ignored, since it is small compared with the tree-level term $\bsl{Q}$. On the other hand the latter structures in Eq.~\nr{Restruct} do not appear at tree-level, and need to be kept. We have separated two terms: a function $b_{\alpha\alpha}(Q)$ odd in $Q$, appearing already in the charge symmetric situation, as well as a function $c_{\alpha\alpha}(Q)$, defined to be even in $Q$. The function $c_{\alpha\alpha}(Q)$ must be proportional to the leptonic chemical potential (or leptonic net number densities), and it is this function which plays an essential role in the following. The explicit expression for $c_{\alpha\alpha}(Q)$ can be extracted from ref.~\cite{ReSigmaold}; for $q^0\ll m_W$, we can work to first order in an expansion in $1/m_W^2$, and then the result reads \begin{equation} c_{\alpha\alpha} = 3 \sqrt{2} G_F (1 + 4 \sin^2\! \theta_\rmii{W}\,) n_{\nu_e} \;, \la{c_simple} \end{equation} where $G_F = g_w^2/4\sqrt{2} m_W^2$ is the Fermi constant and, as mentioned, we assumed that all active leptonic densities are equal: $ n_{\nu_e} = n_{e_L} = n_{e_R} = n_{\mu_L} = ...~ $.\footnote{ In general, $ c_{\alpha\alpha} = \sqrt{2} G_F [ (1 + 2 x_\rmii{W}\,) n_{l_{\alpha L}} -(1 - 2 x_\rmii{W}\,) \sum_{\beta\neq\alpha }n_{l_{\beta L}} + 2 x_\rmii{W}\, \sum_{\beta} n_{l_{\beta R}} + 2 n_{\nu_\alpha} + \sum_{\beta\neq \alpha} n_{\nu_\beta} ] $, where $x_\rmii{W} \equiv \sin^2\! \theta_\rmii{W}$. 
} With these simplifications, we can write \begin{eqnarray} \bsl{Q} + \mathop{\mbox{Re}} \bsl{\Sigma} (Q) & \approx & \bsl{Q} + \msl{u} ( b + c ) \;, \\[1mm] \Bigl[ Q + \mathop{\mbox{Re}}\Sigma(Q) \Bigr]^2 & \approx & M_1^2 + 2 q^0 (b + c) + (b + c)^2 \;, \\[2mm] \bsl{Q} - \mathop{\mbox{Re}} \bsl{\Sigma} (-Q) & \approx & \bsl{Q} + \msl{u} ( b - c ) \;, \\[1mm] \Bigl[ Q - \mathop{\mbox{Re}}\Sigma(-Q) \Bigr]^2 & \approx & M_1^2 + 2 q^0 (b - c) + (b - c)^2 \;, \end{eqnarray} where $b\equiv b_{\alpha\alpha}(Q)$, $c\equiv c_{\alpha\alpha}(Q)$. Furthermore, all appearances of $\mathop{\mbox{Im}}\Sigma$ can be written in terms of the objects \begin{eqnarray} I_Q & \equiv & {\rm Tr\,} \Bigl[ \bsl{Q} a_L \mathop{\mbox{Im}} \bsl{\Sigma}(Q) a_R \Bigr] = 2\; Q \cdot \mathop{\mbox{Im}}\Sigma(Q) \;, \la{IQ} \\ I_u & \equiv & {\rm Tr\,} \Bigl[ \msl{u} a_L \mathop{\mbox{Im}} \bsl{\Sigma}(Q) a_R \Bigr] = 2\; u \cdot \mathop{\mbox{Im}}\Sigma(Q) \;. \la{Iu} \end{eqnarray} Note, in particular, that $\mathop{\mbox{Im}} \bsl{\Sigma}$ has a structure analogous to Eq.~\nr{Restruct}, with one term proportional to $\bsl{Q}$ and another to $\msl{u}$, and that consequently even $[\mathop{\mbox{Im}}\Sigma]^2$ can be written in terms of the structures in Eqs.~\nr{IQ}, \nr{Iu}, as \begin{equation} \Bigl[\mathop{\mbox{Im}}\Sigma (Q) \Bigr]^2 = \frac{ -I_Q^2 + 2 q^0 I_Q I_u - M_1^2 I_u^2 } {4 \vec{q}^2} \;. 
\end{equation} Inserting these simplifications into Eqs.~\nr{master}, \nr{rhoQ}--\nr{SI}, we finally obtain \begin{eqnarray} && \hspace*{-1cm} R(T,{q}) \approx \frac{1}{(2\pi)^3 q^0} \sum_{\alpha = e,\mu,\tau} {|M_D|^2_{\alpha 1}} \times \nonumber \\ && \hspace*{-5mm} \times \biggl\{ \nF{}(q^0 + \mu_L) \frac{ 2 S_I(Q) [M_1^2 + q^0 (b+c)] - S_R(Q) I_Q } { S_R^2(Q) + S_I^2(Q) } + (c\to -c, \mu_L\to - \mu_L) \biggr\} \;, \la{master2} \end{eqnarray} where \begin{eqnarray} S_R(Q) & = & M_1^2 + 2 q^0 (b + c) + (b + c)^2 +\frac{ I_Q^2 - 2 q^0 I_Q I_u + M_1^2 I_u^2 } {4 \vec{q}^2} \;, \la{SRQ} \\ S_I(Q) & = & I_Q + (b + c) I_u \;. \la{SIQ} \end{eqnarray} We remark that the production of the dark matter sterile neutrinos, with masses in the keV range, takes place at temperatures below a few GeV (cf.\ Fig.~\ref{fig:Ts} below). In this case, like already at $\mu_L = 0$, it is numerically a very good approximation to set the term $I_u$ to zero, whereby Eqs.~\nr{SRQ}, \nr{SIQ} simplify further. Although the formulae given are valid beyond perturbation theory, a practical application does make use of approximate perturbative expressions for the functions $b, c$ and $I_Q$. It is important to realise that at the point of a resonance, where some of the ``large'' terms ($M_1^2 + 2 q^0 b$ and $\pm 2 q^0 c$) cancel against each other, the magnitude of the remainder is determined by higher order terms ($(b\pm c)^2$ and $I_Q$). A consistent treatment to a certain order in perturbation theory would hence require a correspondingly precise (2-loop) determination of the large terms $2 q^0 b, 2 q^0 c$. At the same time, in a practical application we are not sitting precisely at the point of a resonance, but integrate over its contribution, so that for instance a slight misplacement of the precise temperature at which the resonance takes place plays little role. 
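For orientation, the charge-asymmetric potential of Eq.~\nr{c_simple} is straightforward to evaluate numerically. The sketch below uses the standard values of $G_F$ and $\sin^2\theta_W$; the temperature, the asymmetry $n_{\nu_e}/s = 10^{-5}$, and $h_\rmi{eff} \sim 75$ are purely illustrative choices, not values fixed by the text:

```python
import math

G_F = 1.1664e-5       # Fermi constant in GeV^-2
SIN2_THETA_W = 0.231  # weak mixing angle

def c_alpha(n_nue):
    """Eq. (c_simple): c = 3*sqrt(2)*G_F*(1 + 4 sin^2(theta_W)) * n_nue,
    valid to first order in 1/m_W^2 and for equal active-lepton densities."""
    return 3.0 * math.sqrt(2.0) * G_F * (1.0 + 4.0 * SIN2_THETA_W) * n_nue

# Illustrative numbers: T = 1 GeV, asymmetry n_nue/s = 1e-5, and entropy
# density s = 2*pi^2*h_eff*T^3/45 with h_eff ~ 75 (hypothetical inputs).
T = 1.0
s = 2.0 * math.pi**2 * 75.0 * T**3 / 45.0
c = c_alpha(1e-5 * s)
print(f"c ~ {c:.2e} GeV")
```

Even for such a sizeable asymmetry, $c$ is a tiny energy scale ($\sim 10^{-8}$~GeV here), which is why it only matters through the resonance condition where it cancels against the other "large" terms.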
Consequently, we continue to use 1-loop expressions for the functions $b$ and $c$ throughout\footnote{% For all quantities not specified explicitly in this section, we use the expressions given in ref.~\cite{als}. }. Nevertheless, as we will see, the fact that at the point of the resonance, $\mathop{\mbox{Im}}\Sigma$ plays a role also in the denominator, will imply that a large $\mathop{\mbox{Im}}\Sigma$ can also lead to a decreased abundance, in contrast to the case of non-resonant production, where $\mathop{\mbox{Im}}\Sigma$ essentially only plays a role in the numerator. For a recent discussion of various resonance-related phenomena, see ref.~\cite{db}. To be more quantitative about the role of the resonance, we can work out its contribution to the production rate semi-analytically. To a good approximation, the resonance is at the point where the function \begin{equation} \mathcal{F}(T) \equiv M_1^2 + 2 q^0 (b-c) \end{equation} vanishes (this comes from the latter term in Eq.~\nr{master2}, after the insertion of Eq.~\nr{SRQ}). Around this point, the production rate in Eq.~\nr{master2} can be approximated as \begin{eqnarray} R(T,{q}) & \approx & \frac{\nF{}(q^0-\mu_L)}{(2\pi)^3 q^0} \sum_{\alpha = e,\mu,\tau} {|M_D|^2_{\alpha 1}} [M_1^2 + q^0 (b-c)] \frac{ 2 I_Q } { \mathcal{F}^2(T) + I_Q^2 } \\ & \approx & \frac{\nF{}(q^0-\mu_L)}{(2\pi)^2 2 q^0} \sum_{\alpha = e,\mu,\tau} {|M_D|^2_{\alpha 1}} \, M_1^2 \, \delta(\mathcal{F}(T)) \;, \la{master3} \end{eqnarray} where we made use of the fact that $I_Q$ is very small, in order to identify a representation of the Dirac delta-function. 
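The Lorentzian-to-delta step leading to Eq.~\nr{master3} relies on the identification $\lim_{I_Q\to 0} 2 I_Q/[\mathcal{F}^2 + I_Q^2] = 2\pi\,\delta(\mathcal{F})$. The normalization can be verified numerically with toy widths (the values of the width and cutoff below are not taken from the text):

```python
import math

def lorentzian_integral(width, cutoff=1e3, n=400_001):
    """Trapezoidal integral of 2*w / (F^2 + w^2) over F in [-cutoff, cutoff].
    As w -> 0 this approaches 2*pi, fixing the delta-function normalization
    used in the resonance approximation."""
    h = 2.0 * cutoff / (n - 1)
    total = 0.0
    for i in range(n):
        x = -cutoff + i * h
        weight = 0.5 * h if i in (0, n - 1) else h
        total += weight * 2.0 * width / (x * x + width * width)
    return total

for w in (1.0, 0.1):
    print(w, lorentzian_integral(w))  # both close to 2*pi ~ 6.2832
```

In the physical integral, $\mathcal{F}$ depends on $T$, so the factor $2\pi\,\delta(\mathcal{F}(T))$ localizes the production at $T = T_R$ with weight $1/|\mathcal{F}'(T_R)|$.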
Inspecting the expressions for $b$ and $c$, the function $\mathcal{F}$ is positive at very low temperatures (because $M_1^2$ dominates) and at very large temperatures (because $b$ dominates), but for sufficiently large $n_{\nu_e}/s$ and sufficiently small $q/T$, the term $c$ overtakes the others at intermediate temperatures; there are then two zeros of $\mathcal{F}$, and it turns out to be the lower among these that gives the dominant contribution. We denote the corresponding temperature by $T_R$. Eq.~\nr{distribution} can now be integrated, to yield \begin{equation} f_1(t_0,q)\approx \sum_{\alpha = e,\mu,\tau} {|M_D|^2_{\alpha 1}} \left. \frac{1}{T_R^3} \frac{M_0(T_R)}{3 c_s^2(T_R)} \frac{\nF{}(q^0-\mu_L)}{(2\pi)^2 2 q^0} \frac{M_1^2} {|\mathcal{F}'(T_R)|} \right|_{\mathcal{F}(T_R) = 0} \;. \la{resonance} \end{equation} \subsection{Time evolution of the lepton asymmetry} \la{ss:BR1} The main formula of the previous section, Eq.~\nr{master2}, depends on the leptonic chemical potential, $\mu_L$, and on the lepton asymmetry, $n_{\nu_e}$, the two of which are related through Eq.~\nr{n_vs_mu}. However, the dependence of these quantities on the time (or temperature) has been left open. We now need to insert further physical input in order to fix this dependence. It is important to realize, first of all, that no reservoir exists for the lepton asymmetry: as explained in ref.~\cite{asy}, the lepton asymmetry was generated by CP-violating processes active at temperatures around a few GeV, which subsequently ceased to operate. Second, the mass of the lightest right-handed neutrino, $M_1$, is much below the temperature, $M_1 \ll T$. Therefore lepton-asymmetry-violating processes, whose rate is proportional to $M_1^2$, can to a very good approximation be neglected. In other words, dark matter sterile neutrinos and the active leptons can be characterized by a conserved quantity, which we may call the total lepton number. 
In fact, this physics is effectively already built into Eq.~\nr{master}, which shows that the rate for generating either of the two sterile neutrino states is a sum of two terms, with opposite chemical potentials appearing in them, as is appropriate for ``particles'' and ``anti-particles''. As a result of these two facts, the resonant transitions from active to sterile neutrinos, or more precisely the C-odd part of Eq.~\nr{master}, cause the original asymmetry to get {\em depleted}. If the resonance is very effective, the depletion is fast and thereby rapidly terminates the resonance phenomenon. To be more quantitative, we make the (optimistic) assumption that the flavour and chirality changing processes within the active generations are fast enough to stay in thermal equilibrium. There is then a reservoir of nine spin-1/2 degrees of freedom (three generations, each with left-handed neutrinos and charged leptons of both chiralities) converting to sterile neutrinos. Denoting the two terms in Eq.~\nr{master} by $R_-$ and $R_+$, respectively, Eq.~\nr{expansion} can then be split and subsequently completed into a closed set of three equations (we also adopt an ansatz removing the terms proportional to the Hubble parameter from Eq.~\nr{kinetic}): \begin{eqnarray} \frac{{\rm d}}{{\rm d}t} f_-\Bigl(t,q(t_0) \frac{a(t_0)}{a(t)}\Bigr) & = & R_-\Bigl(T,q(t_0) \frac{a(t_0)}{a(t)}\Bigr) \;, \la{expansion_2a} \\ \frac{{\rm d}}{{\rm d}t} f_+\Bigl(t,q(t_0) \frac{a(t_0)}{a(t)}\Bigr) & = & R_+\Bigl(T,q(t_0) \frac{a(t_0)}{a(t)}\Bigr) \;, \la{expansion_2b} \\ \frac{{\rm d}}{{\rm d}t} \Bigl\{9\, a^3(t) n_{\nu_e}(t) \Bigr\} & = & {a^3(t)} \int \! {\rm d}^3\vec{q} \, \Bigl[ R_+(T,q) - R_-(T,q) \Bigr] \;, \la{expansion_2c} \end{eqnarray} where the dark matter spectrum $f_1$ is now represented by the sum $f_1(T,q) = f_{-}(T,q) + f_{+}(T,q)$. 
The structure of these equations is such that the total lepton charge in a comoving volume, \begin{equation} {L}_\rmi{tot} \equiv a^3(t) \Bigl\{ 9\, n_{\nu_e}(t) + \int \! {\rm d}^3\vec{q} \, \Bigl[ f_-(t,q) - f_+(t,q) \Bigr] \Bigr\} \;, \end{equation} indeed remains conserved, as must be the case for $M_1\to 0$. Note that within the approximation of Eq.~\nr{master3}, the term $R_+$ could be omitted from Eqs.~\nr{expansion_2a}--\nr{expansion_2c}, which would simplify the system somewhat. Another practical simplification is to solve the equations in terms of the temperature rather than the time, as we already did in Eq.~\nr{distribution}. A rough estimate for when the depletion has a substantial impact can be obtained as follows. If {\em all} of the original lepton asymmetry converts to sterile neutrinos, then $n_{1}/s \ge 9 n_{\nu_e}/s$, where $s$ is the total entropy density, and \begin{equation} n_1(t_0) \equiv \int \! {\rm d}^3\vec{q} \, f_1(t_0,\vec{q}) \;. \la{na1} \end{equation} Therefore the depletion is substantial if $n_{\nu_e}/s \mathop{\;\raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\;} n_{1}/9 s$. Evaluating the right-hand side of this inequality for the case that sterile neutrinos account for all of dark matter, Eq.~\nr{Ya1_bound}, we get $ {n_{\nu_e}} / {s} \mathop{\;\raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\;} { 4.0 \times 10^{-4}} \times { \mbox{keV} } / { 9 M_1 } $, i.e. \begin{equation} \frac{ M_1 }{ \mbox{keV} } \mathop{\;\raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\;} 45 \, \biggl( 10^6 \frac{n_{\nu_e}}{s} \biggr)^{-1} \;. \end{equation} In other words, for a small initial asymmetry, the depletion has a significant impact at all masses, while for a large initial asymmetry, the effect of the depletion is subdominant (because there is more to deplete), unless the mass is small. 
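The numerical bound just quoted follows from simple arithmetic; a minimal sketch reproducing the estimate $M_1/\mbox{keV} \mathop{\;\raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\;} 45\,(10^6 n_{\nu_e}/s)^{-1}$:

```python
# Depletion is substantial if n_nue/s <~ n_1/(9 s); saturating the dark
# matter bound Y_1 = n_1/s <~ 4.0e-4 * keV/M_1 (Eq. Ya1_bound) gives
#   n_nue/s <~ (4.0e-4/9) * keV/M_1,
# i.e. M_1/keV <~ (4.0e-4/9) / (n_nue/s).

def depletion_mass_bound(n_nue_over_s):
    """Largest M_1 (in keV) for which depletion of the asymmetry is substantial."""
    return (4.0e-4 / 9.0) / n_nue_over_s

print(depletion_mass_bound(1e-6))  # ~44.4, i.e. the '45' quoted in the text
```

The inverse scaling with the asymmetry makes the statement in the text explicit: doubling $n_{\nu_e}/s$ halves the mass range over which depletion matters.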
\subsection{Back reaction and equilibration} \la{ss:BR2} The derivation of the formulae that our work is based upon, Eqs.~\nr{expansion_2a}--\nr{expansion_2c}, contains the assumption that the particles produced do not thermalize, i.e.,\ that their density remains below the equilibrium value at all times. Let us investigate the validity of this assumption. It is relatively easy to establish that the {\em total} number density of the sterile neutrinos produced does remain significantly below the equilibrium value. Indeed, the density of the sterile neutrinos is constrained from above by Eq.~\nr{Ya1_bound}, and consequently \begin{eqnarray} \frac{n_{1}(T_0)}{n_\rmi{eq}(T_0)} & \mathop{\;\raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\;} & 4.0\times 10^{-4} \frac{s(T_0)}{n_\rmi{eq}(T_0)} \frac{\mbox{keV}}{M_1} \quad \approx \quad 9.64 \times 10^{-4} \, h_\rmi{eff}(T_0) \, \frac{\mbox{keV}}{M_1} \nonumber \\ & \mathop{\;\raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\;} & 0.072 \, \frac{\mbox{keV}}{M_1} \;, \la{na1_abs} \end{eqnarray} where we inserted $n_\rmi{eq}(T_0) = 3 \zeta(3) T_0^3/2\pi^2$ (cf.\ Eq.~\nr{Delta_def}) as well as $h_\rmi{eff}(T_0) \mathop{\;\raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\;} 75$ corresponding to $T_0 \mathop{\;\raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\;} 1$~GeV~\cite{pheneos}. Thereby the lightest sterile neutrinos appear indeed to be out of equilibrium in the whole range $M_1/\mbox{keV} \ge 0.1$ that we are interested in. It must be realized, however, that the inequality \nr{na1_abs} is not sufficient to guarantee the absence of problems. Indeed, the spectrum of the sterile neutrinos produced is strongly tilted towards the infrared~\cite{Shi:1998km,Abazajian:2001nj}. Given that we are considering fermions, it must then be checked that the Pauli exclusion principle is not violated at small momenta. 
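The numerical coefficients appearing in Eq.~\nr{na1_abs} can be reproduced directly from the definitions of $s$ and $n_\rmi{eq}$; the tiny difference from the quoted $9.64\times 10^{-4}$ reflects the rounding of the $4.0\times 10^{-4}$ input:

```python
import math

ZETA3 = 1.2020569  # Riemann zeta(3)

def n1_over_neq(h_eff, m1_kev):
    """Upper bound n_1/n_eq <~ 4.0e-4 * (s/n_eq) * keV/M_1, with
    s = 2 pi^2 h_eff T^3 / 45 and n_eq = 3 zeta(3) T^3 / (2 pi^2)."""
    s_over_neq = (2.0 * math.pi**2 / 45.0) * h_eff / (3.0 * ZETA3 / (2.0 * math.pi**2))
    return 4.0e-4 * s_over_neq / m1_kev

# Coefficient of h_eff * keV/M_1: ~9.6e-4, matching Eq. (na1_abs) up to rounding
print(n1_over_neq(1.0, 1.0))
# With h_eff = 75 and M_1 = 1 keV: ~0.072, as in the text
print(n1_over_neq(75.0, 1.0))
```
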
Although an exact quantum field theoretic treatment would guarantee this automatically, the assumption of a non-thermal result in the derivation of Eq.~\nr{master2} means that this consideration now enters as an additional ingredient. We refer to the dynamics that prevents an excessive growth of the fermionic density at small momenta as ``back reaction''. Motivated by Boltzmann equations, and recalling our normalization (cf.\ Eq.~\nr{n1_norm}), we expect that the way that back reaction works is to modify the source terms for the distribution functions $f_-, f_+$ (cf.\ Eqs.~\nr{master}, \nr{expansion_2a}, \nr{expansion_2b}) by replacing \begin{equation} \frac{\nF{}(q^0 - \mu_L)}{(2\pi)^3} \to \frac{\nF{}(q^0 - \mu_L)}{(2\pi)^3} - f_- \;, \quad \frac{\nF{}(q^0 + \mu_L)}{(2\pi)^3} \to \frac{\nF{}(q^0 + \mu_L)}{(2\pi)^3} - f_+ \;. \la{non-lin} \end{equation} However, since this recipe would be purely phenomenological at this stage, and since the resulting equations are quite difficult to solve numerically\footnote{% If the spectra $f_\mp$ do {\em not} appear in $R_\mp$ on the right-hand side, we can integrate over momenta in Eqs.~\nr{expansion_2a}, \nr{expansion_2b}, to obtain a coupled set of ordinary differential equations for integrated densities; on the contrary, if the spectra {\em do} appear in $R_\mp$ on the right-hand side, modes with different momenta $q$ couple to each other and need to be solved simultaneously, which makes the problem significantly harder. }, we follow a simpler approach in the following. Indeed, we first solve Eqs.~\nr{expansion_2a}--\nr{expansion_2c} without back reaction, yielding the distribution functions which we denote by $ f_-^{(0)} $, $ f_+^{(0)} $. 
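The replacement in Eq.~\nr{non-lin} suggests capping the produced spectra at their equilibrium values. A toy check (with arbitrary dimensionless numbers, not values from the text) of a saturating interpolation with exactly these properties:

```python
def saturate(f0, f_eq):
    """Interpolation f = f_eq*f0/(f_eq + f0): reduces to f0 when f0 << f_eq
    and never exceeds f_eq, as required by Pauli blocking. (Toy check of
    the saturating structure suggested by Eq. (non-lin).)"""
    return f_eq * f0 / (f_eq + f0)

f_eq = 1.0
assert abs(saturate(1e-6, f_eq) - 1e-6) < 1e-10  # dilute limit: f ~ f0
assert saturate(1e6, f_eq) < f_eq                # overproduction: f saturates
print(saturate(1.0, 1.0))  # 0.5: halfway case
```
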
Subsequently, we construct the approximants \begin{equation} f_\mp \simeq \frac{\frac{\nF{}(q^0 \mp \mu_L)}{(2\pi)^3} \cdot f_\mp^{(0)} } {\frac{\nF{}(q^0 \mp \mu_L)}{(2\pi)^3} + f_\mp^{(0)} } \;. \la{iter} \end{equation} This amounts to a rough iterative solution of the structures suggested by Eq.~\nr{non-lin}; guarantees that $f_\mp$ never exceed the equilibrium distributions $\nF{}(q^0\mp \mu_L)/(2\pi)^3$; and, for $f_\mp \ll \nF{}(q^0\mp \mu_L)/(2\pi)^3$, yields the correct result $f_\mp = f_\mp^{(0)}$. In order to estimate the practical importance of the back reaction, we have determined our main observables (cf.\ Secs.~\ref{se:abundance}, \ref{se:spectrum}) for a number of parameter values both from $f_\mp^{(0)}$ and $f_\mp$. We return to the corresponding error estimates in connection with the numerical data. \section{Sterile neutrino abundance} \la{se:abundance} The task now is to evaluate the integral in Eq.~\nr{distribution}, with the lepton asymmetry evolved according to Eq.~\nr{expansion_2c}, producing the distribution function $f_1(t_0,q)$, with $q\equiv |\vec{q}|$; and then to integrate over $\vec{q}$, to get the total number density $n_1$ (the order of the integrations over $T$ and $q$ can of course be interchanged). We choose $t_0$ to be the time corresponding to $T_0 = 1$~MeV, below which active neutrinos start to decouple. In order to present the numerical results, we start by introducing some further notation. First of all, it is conventional to express the mixing angle as $\sin^2 \! 2 \theta_{\alpha 1}$. For the very small Yukawa couplings that we are interested in, it is an excellent approximation to write \begin{equation} \sin^2 \! 2 \theta_{\alpha 1} = 4 \theta^2_{\alpha 1} = 4 \frac{|M_D|_{\alpha 1}^2}{M_1^2} \;. 
\la{thethe} \end{equation} Second, we introduce a total mixing angle as \begin{equation} \sin^2 \! 2 \theta \equiv \sum_{\alpha = e,\mu,\tau} 4 \theta_{\alpha 1}^2 \;, \la{sin_sum} \end{equation} which is the quantity appearing in the X-ray constraints to be discussed below (cf.\ Fig.~\ref{fig:exclusion}). Now, we can write the total right-handed neutrino density as $ n_1(t_0) \equiv \sum_{\alpha = e, \mu, \tau} n_{\alpha 1}(t_0) $, where $n_{\alpha 1}$ is the contribution from active flavour $\alpha$ to the dark matter abundance. This contribution can conveniently be characterized through the yield parameter $ Y_{\alpha 1} \equiv {n_{\alpha 1}}/{s} $. The corresponding relative energy fraction is $ \Omega_{\alpha 1} \equiv {M_1 n_{\alpha 1}} / {\rho_\rmi{cr}} = {M_1 Y_{\alpha 1}}/({\rho_\rmi{cr}/s}) $. Inserting $ \rho_\rmi{cr}/s \approx 3.65 \times 10^{-9} h^2 \, \mbox{GeV} $ from the Particle Data Group~\cite{pdg}, and noting that $\Omega_{\alpha 1} h^2$ can amount to at most the experimentally known dark-matter density, $ \Omega_{\rm dm} h^2 = 0.1143 \pm 0.0034 $ (68\% CL)~\cite{Komatsu:2008hk}, we obtain an upper bound on $Y_{\alpha 1}$: \begin{equation} Y_{\alpha 1} \mathop{\;\raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\;} 4.0 \times 10^{-4} \times \frac{ \mbox{keV} }{ M_1 } \;. \la{Ya1_bound} \end{equation} Since $Y_{\alpha 1}$ depends monotonically (though non-linearly, because of the depletion discussed in Sec.~\ref{ss:BR1}) on $\sin^2 \! 2 \theta_{\alpha 1}$, this equation yields an upper bound on the mixing angle. Following ref.~\cite{als2}, we concentrate particularly on two flavour structures. In the non-resonant case, for a fixed mixing angle, a hierarchy $Y_{e1} > Y_{\mu 1} > Y_{\tau 1}$ can be observed~\cite{als2}, because heavier scatterers suppress the production rate. The largest abundance, and the most stringent upper bound on $\sin^2\! 
2 \theta$, is then obtained with \begin{eqnarray} | M_D |_{e 1} \neq 0\,,~~ | M_D |_{\mu 1} = | M_D |_{\tau 1} = 0 \; \qquad\qquad\mbox{``case 1''} \;, \la{case1} \end{eqnarray} while the smallest relic abundance and the weakest upper bound on $\sin^2\! 2 \theta$ is obtained when \begin{eqnarray} | M_D |_{\tau 1} \neq 0\,,~~ | M_D |_{e 1} = | M_D |_{\mu 1} = 0 \; \qquad\qquad\mbox{``case 2''} \;. \la{case2} \end{eqnarray} As already mentioned, in the resonant case the roles of what leads to the strongest and weakest upper bound may get interchanged, because the structure of Eq.~\nr{master2} is fairly complicated, but it appears that these two cases should still capture the most extreme possibilities. To integrate Eqs.~\nr{distribution}, \nr{expansion_2c} in practice, we set the upper limit of the $T$-integration to $T_\rmi{max} = 4$~GeV where $R(T,q)$ is vanishingly small, and the lower limit to $T_0 = 1$~MeV. We first integrate over ${q}$, and then evolve $n_{\alpha 1}$, $Y_{\alpha 1}$ and $n_{\nu_e}/s$ through coupled ordinary differential equations in $T$. This is repeated with several $\sin^2 \! 2 \theta$ in order to find the value satisfying the constraint in Eq.~\nr{Ya1_bound}. Our numerical implementation follows that in ref.~\cite{als2}. In particular, as mentioned above, $\mathop{\mbox{Im}}\Sigma$ can be evaluated just as in the case without a lepton asymmetry. At the same time, the existence of a narrow resonance does make the integrations over $q$ and $T$ much more demanding than in the charge-symmetric case; most importantly, the resolution in the $q$-direction needs to be significantly increased. We also remark that at small $M_1/$keV and large asymmetries, a direct numerical integration becomes increasingly difficult, but at the same time Eq.~\nr{resonance} becomes more accurate. However, the asymmetry gets rapidly depleted in this regime, so that in fact Eq.~\nr{resonance} is a good approximation only at the early stages of the resonance. 
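The chain of numbers behind Eq.~\nr{Ya1_bound} can be verified directly from the values quoted in the text; the small difference from the quoted $4.0\times 10^{-4}$ reflects rounding:

```python
# Omega_a1 h^2 = M_1 * Y_a1 / (rho_cr/s), with rho_cr/s ~ 3.65e-9 h^2 GeV;
# requiring Omega_a1 h^2 <= Omega_dm h^2 = 0.1143 bounds the yield Y_a1.

RHO_CR_OVER_S = 3.65e-9  # times h^2, in GeV
OMEGA_DM_H2 = 0.1143

def yield_bound(m1_kev):
    """Upper bound on Y_a1 = n_a1/s for sterile neutrino mass M_1 (in keV)."""
    m1_gev = m1_kev * 1e-6
    return OMEGA_DM_H2 * RHO_CR_OVER_S / m1_gev

print(yield_bound(1.0))  # ~4.2e-4, i.e. the ~4.0e-4 * keV/M_1 of Eq. (Ya1_bound)
```

The bound scales as $1/M_1$, so heavier sterile neutrinos are allowed a smaller number density, as used repeatedly above.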
We have found that a workable method is to approximate $R$ as a sum of a C-odd and C-even part; the C-odd part is approximated by Eq.~\nr{resonance}, while the C-even part, which dominates for small asymmetries, is approximated by the full $R$ from ref.~\cite{als2}, with $\mu_L$ equal to zero. We have checked that the results obtained this way extrapolate, within our resolution, to the ``exact'' results which can be reliably determined at large masses, $M_1 \mathop{\;\raise0.3ex\hbox{$>$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\;} 10$~keV. \begin{figure}[t] \centerline{% \epsfysize=7.5cm\epsfbox{Tevol_backreac.eps}% ~~~\epsfysize=7.5cm\epsfbox{Tevol_backreac_alfa3.eps}% } \caption[a]{\small Examples of the $T$-evolution of the lepton asymmetry $n_{\nu_e}/s$ (cf.\ Sec.~\ref{ss:BR1}), for a fixed $M_1 = 3$~keV. Left: $\alpha = e$. Right: $\alpha = \tau$. Note that our results differ even qualitatively from ref.~\cite{Kishimoto:2008ic} where the asymmetry crosses zero at some temperature. } \la{fig:Tevol} \end{figure} In Fig.~\ref{fig:Tevol} we show examples of the evolution of the lepton asymmetry for various initial values. It can be observed that the resonance is quite narrow, and quite effective; in particular, for $n_{\nu_e}/s \mathop{\;\raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\;} 10^{-6}$, most of the initial lepton asymmetry is rapidly converted to sterile neutrinos, so that the resonance becomes ineffective. Sterile neutrinos are then dominantly produced thermally, like in ref.~\cite{als2}, and the mixing angle needs to be large. If the lepton asymmetry reservoir were smaller, for instance with three components rather than nine, then the depletion would be even more rapid. \begin{figure}[t] \centerline{% \epsfysize=7.5cm\epsfbox{T_SF_resonance.eps}% ~~~\epsfysize=7.5cm\epsfbox{T_alfa3_SF_resonance.eps}% } \caption[a]{\small The resonance temperature corresponding to Eq.~\nr{resonance}, for the modes $q/T_0 = 1$ and $q/T_0=3$, with $T_0 = 1$~MeV. 
Left: $\alpha = e$. Right: $\alpha = \tau$. It is seen that, for a given $M_1$, the resonance first affects the smallest values of $q/T_0$, and that the resonance extends to larger $M_1$ with increasing asymmetry (the asymmetry is indicated in units of $10^6 n_{\nu_e}/s$ on top of the curves).} \la{fig:Ts} \end{figure} In Fig.~\ref{fig:Ts} we show the resonance temperatures (where they exist) for two momenta and various asymmetries, as a function of the mass $M_1$. We note that for $M_1$ of a few keV, the production peaks at temperatures very close to the QCD crossover. This introduces severe hadronic uncertainties to the results, as will be discussed below. \begin{figure}[t] \centerline{% \epsfysize=9.0cm\epsfbox{exclusion_th_backreac.eps}% } \caption[a]{\small The parameter values that, according to our theoretical computation, lead to the correct dark matter abundance in the Shi-Fuller scenario~\cite{Shi:1998km}; if additional sources are present, $\sin^2\!2\theta$ must lie {\em below} the curves shown (cf.\ Eq.~\nr{Ya1_bound}). For better visibility, the results have been multiplied by $M_1 / $keV. The grey region between case 1 (lower solid line on the left, upper solid line in the middle and on the right) and case 2 (other solid line) corresponds to different patterns of the active-sterile mixing angles, cf.\ Eqs.~\nr{case1}, \nr{case2}. The dotted and dashed lines correspond to one of these limiting patterns with simultaneously the uncertainties from the equation-of-state and from hadronic scatterings set to their maximal values. The thick dotted line marked with ``Abazajian et al'' shows the result in fig.~1 of ref.~\cite{Abazajian:2006yn} (the case $L = 0.003$). } \la{fig:exclusion_th} \end{figure} In Fig.~\ref{fig:exclusion_th} we show the upper bound on the mixing angle following from Eq.~\nr{Ya1_bound}, for $n_{\nu_e}/s = 16.3\times 10^{-6}$. This value has been chosen in order to allow for a comparison with fig.~1 of ref.~\cite{Abazajian:2006yn}. 
It can be seen that at large masses, $M_1 \mathop{\;\raise0.3ex\hbox{$>$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\;} 3$~keV, the general order of magnitude of our result is remarkably close to the result of ref.~\cite{Abazajian:2006yn}, despite the fact that this work was using a simplified kinetic equation and different approximations. On the other hand, at small masses, where the depletion discussed in Sec.~\ref{ss:BR1} is effective, the results are dramatically different. In Fig.~\ref{fig:exclusion_th} we have also considered the effects of two sources of hadronic uncertainties: from the equation-of-state, which is defined to correspond to a 20\% rescaling of the pseudocritical temperature of the QCD crossover; and from hadronic scatterings, which is defined to correspond to evaluating the hadronic contributions to the vector and axial current spectral functions with {\em non-interacting} quarks, and an effective number $N_{\rm c} = 0$ or $N_{\rm c} = 3$ of colours. For justification and more details on this phenomenological but nevertheless conservative procedure, we refer to \se5 of ref.~\cite{als2}. For plotting the dashed and dotted lines in Fig.~\ref{fig:exclusion_th}, we have simultaneously set both uncertainties to their maximal values. It is seen that the resulting error depends strongly on parameters, but can be as large as 50\%. \begin{figure}[t] \centerline{% \epsfysize=7.5cm\epsfbox{exclusion_c_backreac.eps}% ~~~\epsfysize=7.5cm\epsfbox{exclusion_c_backreac_alfa3.eps}% } \caption[a]{\small The central region of Fig.~\ref{fig:exclusion_th}, $M_1 = 0.3 \ldots 100.0$~{keV}, compared with regions excluded by various X-ray constraints~% \cite{Boyarsky:2006fg,Boyarsky:2006ag,Boyarsky:2007ay,Boyarsky:2007ge}, coming from XMM-Newton observations of the Large Magellanic Cloud (LMC), the Milky Way (MW), and the Andromeda galaxy (M31). 
SPI marks the constraints from 5 years of observations of the Milky Way galactic center by the SPI spectrometer on board the Integral observatory.} \la{fig:exclusion} \end{figure} The theoretical upper bound from Eq.~\nr{Ya1_bound} is compared with experimental constraints (from the non-observation of any X-ray sterile neutrino decay peak in various presumed dark matter concentrations) in Fig.~\ref{fig:exclusion}. A more detailed discussion concerning the implications of Fig.~\ref{fig:exclusion} follows in Sec.~\ref{se:astro}. Finally, we note that the plots in this section were produced without taking into account the back reaction discussed in Sec.~\ref{ss:BR2}, i.e., by using the distributions $f_\mp^{(0)}$. By recomputing $Y_{\alpha 1}$ for a number of masses and asymmetries from the corrected distributions $f_\mp$, we find that the errors are maximal, $\sim 25\%$, for small masses, $M_1 \mathop{\;\raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\;} 3$~keV, and intermediate asymmetries, $n_{\nu_e}/s \sim 10 - 20 \times 10^{-6}$. For larger masses and other asymmetries, the error from the omission of the back reaction is typically below 10\%, which we estimate to be well below our other systematic uncertainties. \section{Sterile neutrino spectrum} \la{se:spectrum} We now move from the integrated total sterile neutrino abundance, Eq.~\nr{na1}, to the momentum distribution function, Eq.~\nr{distribution}. The physics context where this plays a role is that of structure formation, particularly at the smallest scales (Lyman-$\alpha$ data). The corresponding constraints are considered to be subject to more uncertainties than the X-ray bounds, both as far as direct observational issues are concerned, as well as with regard to dark matter simulations, which have not been carried out with actual non-equilibrium spectra so far.
Nevertheless, adopting a simple recipe for estimating the non-equilibrium effects (cf.\ Eq.~\nr{M0_c}), the results of refs.~\cite{Seljak:2006qw,Viel:2006kd} can be re-interpreted as the constraints $M_1 \mathop{\;\raise0.3ex\hbox{$>$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\;} 11.6$ keV and $M_1 \mathop{\;\raise0.3ex\hbox{$>$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\;} 8$ keV, respectively (95\% CL), at vanishing asymmetry~\cite{als2}. Very recently limits stronger by a factor 2--3 have been reported~\cite{Viel:2007mv}. We return to how the constraints change in the case of a non-zero lepton asymmetry in Sec.~\ref{se:astro}. We note, however, that the most conservative bound, the so-called Tremaine-Gunn bound~\cite{Tremaine:1979we,Lin:1983vq}, is much weaker and reads $M_1 \mathop{\;\raise0.3ex\hbox{$>$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\;} 0.3$ keV \cite{Dalcanton:2000hn}, which we have chosen as the lower end of the horizontal axes in Figs.~\ref{fig:exclusion}, \ref{fig:avq}. \begin{figure}[t] \centerline{% \epsfysize=7.5cm\epsfbox{distr_m3_backreac.eps}% ~~~\epsfysize=7.5cm\epsfbox{distr_m3_backreac_alfa3.eps}% } \caption[a]{\small The distribution function $f_{\alpha 1}(t_0,q)$, for $T_0 = 1$~MeV and $M_1 = 3$~keV, normalised to the massless equilibrium value, $f_\rmi{eq}(t_0,q) = 2 \nF{}(q) /(2\pi)^3$. Left: case 1. Right: case 2. These results can be compared with refs.~\cite{Shi:1998km,Abazajian:2001nj}: the general feature of strong enhancement at small momenta is the same, but our distribution functions show more structure. The case $n_{\nu_e}/s = 16\times 10^{-6}$ is particularly complicated (and sensitive to uncertainties), since the resonance happens to lie just on top of the QCD crossover, at $T\sim 150-200$~MeV, cf.\ Figs.~\ref{fig:Tevol}--\ref{fig:exclusion_th}.
} \la{fig:distr} \end{figure} \begin{figure}[t] \centerline{% \epsfysize=7.5cm\epsfbox{avq_backreac.eps}% ~~~\epsfysize=7.5cm\epsfbox{avq_backreac_alfa3.eps}% } \caption[a]{\small The average sterile neutrino momentum, $ \langle q \rangle_{s} $, normalised to the active neutrino equilibrium value, $ \langle q \rangle_{a} \equiv 7 \pi^4 T_0/(180 \zeta(3)) \approx 3.15\,T_0 $. Left: case 1. Right: case 2. } \la{fig:avq} \end{figure} In Fig.~\ref{fig:distr} we show examples of the spectra, for a relatively small mass $M_1 = 3$~keV (like in Fig.~\ref{fig:Tevol}), at which point the significant changes caused by the asymmetry can be clearly identified. The general pattern to be observed in Fig.~\ref{fig:distr} is that for a small asymmetry, the distribution function is boosted only at very small momenta. Quantities like the average momentum $\langle q \rangle_s$ then decrease, as can be seen in Fig.~\ref{fig:avq}. For large asymmetry, the resonance affects all $q$; the total abundance is strongly enhanced with respect to the case without a resonance, but the shape of the distribution function is less distorted than at small asymmetry, so that the average momentum $\langle q \rangle_s$ returns towards the value in the non-resonant case. Therefore, for any given mass, we can observe a minimal value of $\langle q \rangle_s$ in Fig.~\ref{fig:avq}, $\langle q \rangle_s \mathop{\;\raise0.3ex\hbox{$>$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\;} 0.3 \langle q \rangle_a$. This minimal value is remarkably independent of $M_1$, but the value of asymmetry at which it is reached decreases with increasing $M_1$. Let us stress, however, that the values in Fig.~\ref{fig:avq} were produced without taking into account the back reaction discussed in Sec.~\ref{ss:BR2}, i.e., by using the distributions $f_\mp^{(0)}$.
By recomputing $\langle q \rangle_{s} / \langle q \rangle_{a}$ for a number of masses and asymmetries from the approximately corrected distributions $f_\mp$ (shown in Fig.~\ref{fig:distr}), we find that for small masses, $M_1 \mathop{\;\raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\;} 3$~keV, and intermediate asymmetries, $n_{\nu_e}/s \sim 10 - 20 \times 10^{-6}$, the results in Fig.~\ref{fig:avq} may be {\em too small} by up to $\sim 25\%$. Thus, for small masses, the minimal average momentum may be better approximated as $\langle q \rangle_s \mathop{\;\raise0.3ex\hbox{$>$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\;} 0.4 \langle q \rangle_a$. For larger masses and other asymmetries, the error from the omission of the back reaction is typically below 10\%, which we estimate to be below our other systematic uncertainties. \section{Astrophysical constraints} \la{se:astro} The purpose of the present section is to combine the results of the previous two sections, and derive the astrophysical constraints that follow from them. Let us start by briefly recapitulating, once more, the two different types of considerations that we have carried out. First of all, the theoretical computation described in Sec.~\ref{se:abundance} produces, for a given mass $M_1$ and mixing angle $\sin^2\! 2 \theta$, a definite total abundance of sterile neutrinos. Requiring that this abundance account for all of the observed energy density in dark matter, leads to the (lepton asymmetry dependent) mass--angle relation shown in Fig.~\ref{fig:exclusion}. The most direct constraint on the sterile neutrino dark matter scenario comes from comparing these curves with X-ray observations (Fig.~\ref{fig:exclusion}). For $n_{\nu_e}/s = 0.0$ we are in the allowed region only for $M_1 \le 3$ keV. Increasing the asymmetry to $n_{\nu_e}/s \simeq 8 \times 10^{-6}$ suddenly opens up a whole range of allowed mass values, up to $M_1 \simeq 25$ keV.
Increasing the asymmetry further relaxes the upper bound even more but rather slowly; for instance, if the asymmetry is increased to $n_{\nu_e}/s \simeq 25 \times 10^{-6}$, then we read from Fig.~\ref{fig:exclusion} the upper bound $M_1 \mathop{\;\raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\;} 40$ keV, while the maximal allowed asymmetry $n_{\nu_e}/s \simeq 2500 \times 10^{-6}$ yields the upper bound $M_1 \mathop{\;\raise0.3ex\hbox{$<$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\;} 50$ keV. Another important effect comes from the modification of the sterile neutrino spectrum through a lepton asymmetry. As already found in ref.~\cite{Shi:1998km}, the non-equilibrium spectrum of the dark matter sterile neutrinos created in the presence of a lepton asymmetry is very different from the thermal one. Some examples are shown in Fig.~\ref{fig:distr}. Now, an observation of small scale structures in the Lyman-$\alpha$ data puts an upper bound on the free-streaming length and, consequently, on the average velocity of the dark matter particles. This converts to a lower bound on the inverse velocity, which, in the absence of an actual analysis with non-equilibrium spectra, can be roughly estimated as~\cite{Hansen:2001zv} \begin{equation} M_1 \frac{\langle {q} \rangle_a }{ \langle {q} \rangle_s } \mathop{\;\raise0.3ex\hbox{$>$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\;} M_0 \quad \Leftrightarrow \quad M_1 \mathop{\;\raise0.3ex\hbox{$>$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\;} M_0 \frac{\langle {q} \rangle_s }{ \langle {q} \rangle_a } \;, \la{M0_c} \end{equation} where $\langle {q} \rangle_a$ and $\langle {q} \rangle_s$ are the average momenta of active and sterile neutrinos, respectively, at the moment of structure formation, and the value of $M_0$ is $M_0 \simeq 14$ keV (95\% CL) according to ref.~\cite{Seljak:2006qw} (or $M_0 \simeq 10$ keV at 99.9\% CL). 
According to ref.~\cite{Viel:2006kd} the bound is somewhat weaker, while according to ref.~\cite{Viel:2007mv} it could be as strong as $M_0 \simeq 28$ keV (95\% CL). Let us start by considering the most conservative bound $M_0=10$ keV.\footnote{% In this work we only consider lower bounds on the mass of the dark matter particle from structure formation. However, the problems of missing satellites and cuspy profiles in Cold Dark Matter cosmological models, as well as that of the galactic angular momentum, suggest that an upper bound may exist as well~\cite{cusp}. } The dependence of $\langle {q} \rangle_s / \langle {q} \rangle_a$ on $M_1$ and the lepton asymmetry is shown in Fig.~\ref{fig:avq}. Quite interestingly, this ratio can decrease to $0.3$ (or $0.4$ at small $M_1$, cf.\ end of Sec.~\ref{se:spectrum}) for a certain range of asymmetries. However, $\langle {q} \rangle_s / \langle {q} \rangle_a$ does not decrease further with increasing asymmetry, but increases again. Therefore, the lower limit $M_0 = 10$~keV corresponds to $M_1 \mathop{\;\raise0.3ex\hbox{$>$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\;} 4$~keV. Combining now the two constraints (from X-rays, Fig.~\ref{fig:exclusion}, and from structure formation, Eq.~\nr{M0_c}), we observe that a solution satisfying both constraints exists for $n_{\nu_e}/s \mathop{\;\raise0.3ex\hbox{$>$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\;} 8 \times 10^{-6}$. The solution corresponds to masses $M_1 \simeq 4 - 25$ keV and mixing angles $\sin^2\! 2\theta \simeq 8 \times 10^{-10} - 2 \times 10^{-12}$. If the lepton asymmetry is increased, larger masses and smaller mixing angles become possible. Let us then consider the case $M_0 \simeq 28$ keV~\cite{Viel:2007mv}. Using the minimal value $\langle q\rangle_s/\langle q\rangle_a \sim 0.3$, $M_0 \simeq 28$ keV corresponds to $M_1 \mathop{\;\raise0.3ex\hbox{$>$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\;} 28 \times 0.3 \simeq 8.4$ keV according to Eq.~\nr{M0_c}. 
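As a small numerical aside (our illustration, not part of the original analysis), the arithmetic behind Eq.~\nr{M0_c} can be reproduced in a few lines; the coefficient $7\pi^4/(180\zeta(3))\approx 3.15$ is the one quoted in the caption of Fig.~\ref{fig:avq}, and the ratios $0.3$--$0.4$ are the minimal values of $\langle q\rangle_s/\langle q\rangle_a$ read off that figure:

```python
import math

# Average momentum of a massless thermal fermion: <q>_a = 7 pi^4 T_0 / (180 zeta(3)),
# quoted in the caption of Fig. 6 as ~3.15 T_0; zeta(3) is Apery's constant.
zeta3 = 1.2020569031595943
q_avg_coeff = 7 * math.pi ** 4 / (180 * zeta3)   # ~3.151

def m1_lower_bound(M0_keV, q_ratio):
    """Structure-formation bound, Eq. (M0_c): M_1 >~ M_0 * <q>_s / <q>_a (in keV)."""
    return M0_keV * q_ratio

print(round(q_avg_coeff, 2))                 # 3.15
print(m1_lower_bound(10.0, 0.4))             # ~4 keV, as quoted in the text
print(round(m1_lower_bound(28.0, 0.3), 1))   # 8.4 keV
```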
Combining this with the X-ray constraints in Fig.~\ref{fig:exclusion}, we see that this can in fact be satisfied with the same asymmetry as before, $n_{\nu_e}/s \mathop{\;\raise0.3ex\hbox{$>$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\;} 8 \times 10^{-6}$. It should be stressed, however, that the validity of Eq.~\nr{M0_c} for spectra as extreme as those in Fig.~\ref{fig:distr} remains to be cross-checked. Nevertheless, if true, we can establish the absolute lower bound $M_1 \ge 4$~keV for sterile neutrinos capable of accounting for all of the dark matter in the Universe (assuming that the only mechanism for dark matter sterile neutrino production is active-sterile mixing). \section{Conclusions} \la{se:concl} The $\nu$MSM, i.e.\ Minimal Standard Model extended by three right-handed neutrinos with masses smaller than the electroweak scale, has a number of parameters not appearing in the Standard Model: three Majorana masses and a $3\times 3$ complex matrix of Yukawa couplings. In ref.~\cite{asy}, the part of the parameter space associated with the two heaviest right-handed neutrinos was explored in detail, and a phenomenologically interesting corner was identified. Specifically, it was found that if the mass difference of the heavy (dominantly right-handed) neutrino mass eigenstates is much {\em smaller} than the known mass differences of the light (dominantly left-handed) mass eigenstates ({\bf Scenario IIa}), then it is possible to explain the known active neutrino mass differences and mixings, and simultaneously generate and subsequently maintain a significant lepton asymmetry in the Universe, without violating constraints related to Big Bang Nucleosynthesis at temperatures of about 0.1 MeV. The purpose of the present paper has been to constrain the parameters associated with the lightest of the right-handed neutrinos, referred to with the subscript ``1''. 
In this case there are no constraints from the known active neutrino mass differences and mixings; rather, the contribution from the lightest right-handed neutrinos to the see-saw formulae is much below 0.01~eV. (Consequently the $\nu$MSM excludes the case of degenerate active neutrinos with a common mass scale $\gg 0.01$~eV; in particular, the effective mass for neutrinoless double beta decay cannot exceed 0.05~eV~\cite{Bezrukov:2005mx}.) In contrast, the assertion that all of dark matter be made of these neutrinos does allow us to place further constraints on the parameters. More precisely, we were led in Sec.~\ref{se:astro} to the mass range $M_1 \simeq 4 \ldots 50$~keV. The lower bound originates from combining the theoretical analysis of the present paper with observational X-ray (Fig.~\ref{fig:exclusion}) and structure formation (Eq.~\nr{M0_c}, Fig.~\ref{fig:avq}) constraints, whereas the upper bound is dictated by the maximal lepton asymmetry allowed by Big Bang Nucleosynthesis. The absolute values of the Yukawa couplings of the lightest right-handed neutrinos should be in the range $5\times 10^{-15} \ldots 4 \times 10^{-13}$ in this case. Of course, these constraints are relaxed if the sterile neutrinos only account for a fraction of the dark matter (see, e.g., ref.~\cite{Palazzo:2007gz}); if a part of them are produced by some non-equilibrium mechanism not related to active-sterile mixing (see, e.g., refs.~\cite{Shaposhnikov:2006xi,Kusenko:2006rh,Petraki:2007gq}); or if the thermal history of the Universe is non-standard (see, e.g., refs.~\cite{Gelmini:2004ah,Yaguna:2007wi,Khalil:2008kp}). It is important to stress, in addition, that the lower bound $M_1 \mathop{\;\raise0.3ex\hbox{$>$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\;} 4$~keV relies on a naive re-interpretation of structure formation simulations which were carried out assuming a thermal spectrum of dark matter particles, rather than a proper non-equilibrium shape as given in Fig.~\ref{fig:distr}. 
Hopefully this issue can be put on more solid ground soon. Finally, we recall that perhaps the most realistic hope for an experimental detection of dark matter sterile neutrinos in the parameter range that we have discussed, would be through the discovery of a peak in the diffuse X-ray background from regions where dark matter decays. The dominant decay channel is $N_1 \to \nu\gamma$ and the spectrum should thus peak at the energy $M_1/2 \mathop{\;\raise0.3ex\hbox{$>$\kern-0.75em\raise-1.1ex\hbox{$\sim$}}\;} 2$~keV. As far as laboratory searches are concerned, they are quite difficult due to the very small Yukawa couplings of the dark matter sterile neutrinos; however, a possibility has been suggested in ref.~\cite{Bezrukov:2006cy}. \section*{Acknowledgements} We thank Alexey Boyarsky and Oleg Ruchayskiy for providing the X-ray data plotted in Fig.~\ref{fig:exclusion} and for helpful remarks. The work of M.L. was supported in part by the National Science Foundation, under Grant No.\ PHY05-51164, and that of M.S.\ by the Swiss National Science Foundation. We thank Takehiko Asaka for collaboration at initial stages of this work.
\section{Introduction} Let $(\Omega,\mathcal{F},\mathbb{P})$ be a probability space. Hoeffding's inequality says that for bounded independent zero-mean random variables $\xi_1,...,\xi_n$ such that $\mathbb{P}(\xi_i\in[a_i,b_i])=1$, $i=1,...,n$, the following estimate on the probability of the deviation of their sum $S_n=\sum_{i=1}^n\xi_i$ from zero $$ \mathbb{P}(|S_n|\ge\varepsilon)\le 2\exp\Big(-\frac{2\varepsilon^2}{\sum_{i=1}^n(b_i-a_i)^2}\Big) $$ holds. The proof is based on Hoeffding's lemma: If $\xi$ is a random variable with mean zero such that $\mathbb{P}(\xi\in[a,b])=1$ then \begin{equation} \label{est1} \mathbb{E}\exp(\lambda\xi)\le \exp\Big(\frac{1}{8}(b-a)^2\lambda^2\Big); \end{equation} compare \cite{Hoeff}. Let us observe that the above inequality means that the centered bounded random variable $\xi$ is a subgaussian random variable. Let us recall the notion of a subgaussian random variable, which was introduced by Kahane in \cite{Kahane}. A random variable $\xi$ is {\it subgaussian} if there exists a number $c\in[0,\infty)$ such that for every $\lambda\in\mathbb{R}$ the following inequality holds $$ \mathbb{E}\exp(\lambda\xi)\le \exp\Big(\frac{c^2\lambda^2}{2}\Big), $$ that is, the moment generating function of $\xi$ is majorized by the moment generating function of some centered gaussian random variable with variance $c^2$ (see Buldygin and Kozachenko \cite{BK} or \cite[Ch.1]{BulKoz}). In terms of the cumulant generating function this condition takes the form $\ln\mathbb{E}\exp(\lambda \xi)\leq c^2\lambda^2/2$. For a random variable $\xi$ the number $\tau(\xi)$ defined by $$ \tau(\xi)=\inf\Big\{c\ge 0:\;\forall_{\lambda\in\mathbb{R}}\; \ln\mathbb{E}\exp(\lambda\xi)\le\frac{c^2\lambda^2}{2}\Big\} $$ is a norm on the space $Sub(\Omega)=\{\xi\in L(\Omega):\;\tau(\xi)<\infty\}$ of subgaussian random variables, where $L(\Omega)$ denotes the family of all real valued random variables defined on $\Omega$.
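Hoeffding's lemma (\ref{est1}) can be checked numerically for any concrete bounded distribution; the following sketch (ours, with an arbitrarily chosen two-point variable on $\{a,b\}=\{-1,2\}$) verifies the bound on a grid of $\lambda$ values:

```python
import math

# Hypothetical centered two-point variable (not from the paper):
# P(xi = b) = -a/(b-a) and P(xi = a) = b/(b-a), so that E[xi] = 0.
a, b = -1.0, 2.0
p = -a / (b - a)   # P(xi = b); here 1/3

def mgf(lam):
    """Moment generating function E exp(lam * xi)."""
    return p * math.exp(lam * b) + (1 - p) * math.exp(lam * a)

def hoeffding_bound(lam):
    """Right-hand side of Hoeffding's lemma: exp((b-a)^2 lam^2 / 8)."""
    return math.exp((b - a) ** 2 * lam ** 2 / 8)

# The lemma asserts mgf(lam) <= hoeffding_bound(lam) for every real lam;
# equivalently, xi is subgaussian with tau(xi) <= (b-a)/2.
for lam in (k / 10 for k in range(-50, 51)):
    assert mgf(lam) <= hoeffding_bound(lam) + 1e-12
```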
The space $Sub(\Omega)$ is a Banach space with respect to the norm $\tau$; see \cite[Ch.1, Th.1.2]{BulKoz}. Immediately from the definition of $\tau$ we have that $$ \ln\mathbb{E}\exp(\lambda\xi)\le\frac{\tau(\xi)^2\lambda^2}{2}. $$ Using this inequality one can derive an estimate of the tail distribution of $\xi$ in the form $$ \mathbb{P}(|\xi|\ge\varepsilon)\le 2\exp\Big(-\frac{\varepsilon^2}{2\tau(\xi)^2}\Big); $$ see \cite[Ch.1, Lem.1.3]{BulKoz}. If the exact value of the norm $\tau(\xi)$ is unknown but some upper bound on it is available, then the above inequality remains valid with this bound substituted for $\tau(\xi)$. Thus, for a centered random variable $\xi$ essentially bounded by the interval $[a,b]$, inequality (\ref{est1}) gives $\tau(\xi)\le (b-a)/2$, and we obtain Hoeffding's inequality for a single variable. Since for a sum of independent subgaussian random variables the inequality $$ \tau\Big(\sum_{i=1}^n\xi_i\Big)^2\le \sum_{i=1}^n\tau(\xi_i)^2 $$ holds (see \cite[Ch.1, Lem.1.7]{BulKoz}), combining these facts yields the proof of Hoeffding's inequality; compare \cite[Ch.1, Cor.1.3]{BulKoz}. \section{Spaces of subgaussian of rank $p$ random variables} One can generalize the notion of subgaussian random variables to classes of $\varphi$-subgaussian r.v.s (see \cite[Ch.2]{BulKoz}). A continuous even convex function $\varphi(x)$ $(x\in \mathbb{R})$ is called an {\em $N$-function} if the following conditions hold:\\ (a) $\varphi(0)=0$ and $\varphi(x)$ is monotone increasing for $x>0$,\\ (b) $\lim_{x\to 0}\varphi(x)/x=0$ and $\lim_{x\to \infty}\varphi(x)/x=\infty$.\\ It is called a {\it quadratic $N$-function} if, in addition, $\varphi(x)=ax^2$ for all $|x|\le x_0$, with $a>0$ and $x_0>0$. The quadratic condition is needed to ensure nontriviality of the classes of $\varphi$-subgaussian random variables (see \cite[Ch.2, p.67]{BulKoz}). Let $\varphi$ be a quadratic $N$-function.
A random variable $\xi$ is said to be {\it $\varphi$-subgaussian} if there is a constant $c>0$ such that $\ln\mathbb{E}\exp(\lambda \xi)\leq \varphi(c\lambda)$ for every $\lambda\in\mathbb{R}$. The {\it $\varphi$-subgaussian standard (norm) $\tau_{\varphi}(\xi)$} is defined as $$ \tau_{\varphi}(\xi)=\inf\{c\ge 0:\;\forall_{\lambda\in\mathbb{R}}\; \ln\mathbb{E}\exp(\lambda \xi)\le\varphi(c\lambda)\}; $$ the space $Sub_\varphi(\Omega)=\{\xi\in L(\Omega):\;\tau_{\varphi}(\xi)<\infty\}$ with the norm $\tau_{\varphi}$ is a Banach space (see \cite[Ch.2, Th.4.1]{BulKoz}). Now we define a class of such spaces. \begin{defn} Let for $p > 1$ $$ \varphi_p(x)=\left\{ \begin{array}{ccl} \frac{x^2}{2}, & {\rm if} & |x|\le 1,\\ \frac{1}{p}|x|^p-\frac{1}{p}+\frac{1}{2}, & {\rm if} & |x|>1. \end{array} \right. $$ \end{defn} The functions $\varphi_p$ are examples of quadratic $N$-functions. Let us observe that if $1< p'< p$ then $\varphi_{p'}\le \varphi_p$ and, in consequence, $Sub_{\varphi_{p'}}(\Omega)\subset Sub_{\varphi_p}(\Omega)$, since $\tau_{\varphi_{p'}}(\xi)\ge \tau_{\varphi_p}(\xi)$ for any $\xi\in Sub_{\varphi_{p'}}(\Omega)$. Let us emphasize that the spaces $\{Sub_{\varphi_p}(\Omega):\;p>1\}$ form an increasing family with respect to $p$. For completeness of our presentation we show that any centered bounded random variable is subgaussian of any rank $p$. In general it is a $\varphi$-subgaussian random variable for any quadratic $N$-function $\varphi$ (see \cite[Ex.3.1]{Rita}). \begin{pro} Let $\varphi$ be an $N$-function such that $\varphi(x)=x^2/2$ for $|x|\le 1$. If $\xi$ is a bounded random variable with $\mathbb{E}\xi=0$ then $\xi\in Sub_\varphi(\Omega)$. \end{pro} \begin{proof} If $\xi = 0$ with probability one then $$ 0=\psi_\xi(\lambda)\le \varphi(c\lambda) $$ for $\lambda\in\mathbb{R}$, $c\ge 0$ and any $N$-function $\varphi$, where $\psi_\xi(\lambda)=\ln\mathbb{E}\exp(\lambda\xi)$ denotes the cumulant generating function of $\xi$.
Let us recall that if $\xi$ is bounded but nonconstant then $\psi_\xi$ is strictly convex on $\mathbb{R}$, which implies that $\psi_\xi^{\prime\prime}$ is positive on the whole of $\mathbb{R}$. Let $|\xi|\le d$ almost surely; then $$ \psi_\xi(\lambda)\le d|\lambda|=\varphi(\varphi^{(-1)}(d|\lambda|))\le\varphi(d\lambda\varphi^{(-1)}(1)) $$ for $|\lambda|> 1/d$ (see \cite[Ch.2, Lem. 2.3]{BulKoz}), where $\varphi^{(-1)}(x)$, $x\ge 0$, is the inverse function of $\varphi(x)$, $x\ge 0$. For $|\lambda|\le 1$, by the Taylor theorem, we get $$ \psi_\xi(\lambda)=\psi_\xi(0)+\psi_\xi'(0)\lambda+\frac{1}{2}\psi_\xi^{\prime\prime}(\theta)\lambda^2=\frac{1}{2}\psi_\xi^{\prime\prime}(\theta)\lambda^2 $$ for some $\theta$ between $0$ and $\lambda$. Let $c=\max_{\lambda\in[-1,1]}\psi_\xi^{\prime\prime}(\lambda)$; then $ \psi_\xi(\lambda)\le \frac{1}{2}(\sqrt{c}\lambda)^2 $. Let us observe that for $|\lambda|\le 1/\sqrt{c}$ we have $\frac{1}{2}(\sqrt{c}\lambda)^2=\varphi(\sqrt{c}\lambda)$. Without loss of generality we may assume that $d>\sqrt{c}$, that is, $1/d<1/\sqrt{c}$. Taking $b=\max\{d\varphi^{(-1)}(1),\;\sqrt{c}\}$ we get $$ \psi_\xi(\lambda)\le \varphi(b\lambda) $$ for every $\lambda\in\mathbb{R}$, which implies $\xi\in Sub_\varphi(\Omega)$. \end{proof} Let $\varphi(x)$ ($x\in\mathbb{R}$) be a real-valued function. The function $\varphi^\ast(y)$ ($y\in\mathbb{R}$) defined by $\varphi^\ast(y)=\sup_{x\in\mathbb{R}}\{xy-\varphi(x)\}$ is called the {\it Young-Fenchel transform} or the {\it convex conjugate} of $\varphi$ (in general, $\varphi^\ast$ may take the value $\infty$). It is known that if $\varphi$ is a quadratic $N$-function then $\varphi^\ast$ is a quadratic $N$-function too. For instance, since our $\varphi_p$ is a differentiable (even at $\pm 1$) function, one can easily check that $\varphi_p^\ast=\varphi_q$ for $p,q>1$, if $1/p+1/q=1$.
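The identity $\varphi_p^\ast=\varphi_q$ can also be confirmed numerically by approximating the supremum in the Young-Fenchel transform on a grid; the sketch below (ours, with the arbitrary choice $p=3$, $q=3/2$) does this:

```python
import math

def phi(p, x):
    """The quadratic N-function phi_p from Definition 1."""
    x = abs(x)
    return x * x / 2 if x <= 1 else x ** p / p - 1 / p + 1 / 2

def conjugate(p, y, xmax=20.0, n=40000):
    """Grid approximation of the Young-Fenchel transform
    phi_p^*(y) = sup_x (x*y - phi_p(x)); for y >= 0 the sup is attained at x >= 0."""
    return max(x * y - phi(p, x) for x in (i * xmax / n for i in range(n + 1)))

p, q = 3.0, 1.5   # Hoelder conjugates: 1/p + 1/q = 1
for y in [0.5, 1.0, 2.0, 4.0]:
    assert abs(conjugate(p, y) - phi(q, y)) < 1e-3   # phi_p^* = phi_q
    assert abs(conjugate(q, y) - phi(p, y)) < 1e-3   # phi_q^* = phi_p
```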
An exponential estimate for the tail distribution of a random variable $\xi$ belonging to the space $Sub_{\varphi}(\Omega)$ is as follows: \begin{equation} \label{estm} \mathbb{P}(|\xi|\ge\varepsilon)\le 2\exp\Big(-\varphi^\ast\Big(\frac{\varepsilon}{\tau_{\varphi}(\xi)}\Big)\Big); \end{equation} see \cite[Ch.2, Lem.4.3]{BulKoz}. Moreover, $\xi\in Sub_{\varphi}(\Omega)$ if and only if $\mathbb{E}\xi=0$ and there exist constants $C>0$ and $D>0$ such that $$ \mathbb{P}(|\xi|\ge\varepsilon)\le C\exp\Big(-\varphi^\ast\Big(\frac{\varepsilon}{D}\Big)\Big) $$ for every $\varepsilon>0$; compare \cite[Ch.2, Cor.4.1]{BulKoz}. By virtue of the above facts one can exhibit examples of subgaussian of rank $p$ random variables. \begin{exa} Let $q>1$ and let a random variable $\xi$ have the double Weibull distribution with the density function $$ g_\xi(x)=\frac{1}{2}|x|^{q-1}\exp\Big\{-\frac{1}{q}|x|^q\Big\}. $$ Then $\xi\in Sub_{\varphi_p}(\Omega)$, where $1/p+1/q=1$. \end{exa} \begin{exa} Let $\xi\sim\mathcal{N}(0,1)$; then $\eta=|\xi|^{2/q}-\mathbb{E}|\xi|^{2/q}\in Sub_{\varphi_p}(\Omega)$. \end{exa} \section{On the Azuma inequality} Before we prove a general form of Azuma's inequality, we first establish an upper bound on the norm of a centered bounded random variable in $Sub_{\varphi_p}(\Omega)$ for any $p>1$. \begin{lem} \label{lem1} Let $\xi$ be a bounded random variable such that $\mathbb{P}(\xi\in [a,b])=1$ and $\mathbb{E}\xi=0$. Let $c$ denote the number $(b-a)/2$ and $d$ the number $\max\{-a,b\}$. Then for every $p>1$ the norm $\tau_{\varphi_p}(\xi)\le \gamma_r=c^2/(2d)\{r[2(d/c)^2+1/r-1/2]\}^{1/r}$, where $r=\min\{p,2\}$. \end{lem} \begin{proof} If $p\ge2$ then $r=2$ and $\gamma_r=\gamma_2=c$. By Hoeffding's lemma we have that $\tau_{\varphi_2}(\xi)\le c$, and since for every $p\ge 2$ the norm $\tau_{\varphi_p}(\xi)\le \tau_{\varphi_2}(\xi)$, the lemma follows. Assume now that $1<p<2$, so that $r=p$.
Let us note that there exists a very simple estimate of the cumulant generating function of $\xi$: $$ \psi_\xi(\lambda)=\ln\mathbb{E}\exp(\lambda\xi)\le \ln\mathbb{E}\exp(d|\lambda|)=d|\lambda|. $$ Combining this function with Hoeffding's bound we can form a majorant of $\psi_\xi$. Solving the equation $d\lambda=c^2\lambda^2/2$ we obtain $\lambda=2d/c^2$. Let us emphasize that for $1<p<2$ the function $$ f(\lambda):=\min\Big\{\frac{c^2\lambda^2}{2},d|\lambda|\Big\}=\left\{ \begin{array}{ccl} \frac{c^2\lambda^2}{2}, & {\rm if} & |\lambda|\le \frac{2d}{c^2} \\ d|\lambda|, & {\rm if} & |\lambda|>\frac{2d}{c^2} \end{array} \right. $$ is a majorant of $\psi_\xi$. Let us observe now that $f(2d/c^2)=2(d/c)^2\ge 2$, since $d\ge c$. We now find $\gamma_p$ such that $\varphi_p(\gamma_p2d/c^2)=2(d/c)^2$. Notice that $\varphi_p([-1,1])=[0,1/2]$. It follows that in solving the equation $\varphi_p(\gamma_p2d/c^2)=2(d/c)^2$ we should use the form of $\varphi_p(x)$ for $|x|>1$, i.e. $\varphi_p(x)=1/p|x|^p-1/p+1/2$. The solution of the equation $$ \frac{1}{p}\Big|\gamma_p\frac{2d}{c^2}\Big|^p-\frac{1}{p}+\frac{1}{2}=2\Big(\frac{d}{c}\Big)^2 $$ has the form $$ \gamma_p=\frac{c^2}{2d}\Big\{p\Big[2\Big(\frac{d}{c}\Big)^2+\frac{1}{p}-\frac{1}{2}\Big]\Big\}^\frac{1}{p}. $$ Let us emphasize that for $p\in(1,2]$ we have the following inequality $$ f(\lambda)\le \varphi_p(\gamma_p\lambda). $$ Since $f$ is the majorant of $\psi_\xi$, we get that $$ \psi_\xi(\lambda)\le \varphi_p(\gamma_p\lambda) $$ for every $\lambda\in\mathbb{R}$. Thus, by the definition of the norm $\tau_{\varphi_p}$, $$ \tau_{\varphi_p}(\xi)\le \gamma_r=\frac{c^2}{2d}\Big\{r\Big[2\Big(\frac{d}{c}\Big)^2+\frac{1}{r}-\frac{1}{2}\Big]\Big\}^\frac{1}{r} $$ in the case $r=p$, which completes the proof. \end{proof} \begin{rem} \label{uwg} Let us note once again that $\gamma_2=c$.
Since $\gamma_p$ is the solution of the equation $\varphi_p(\gamma_p2d/c^2)=2(d/c)^2$ and $\varphi_p\ge\varphi_{p'}$ for $p\ge p'$, we have $\gamma_{p}\le \gamma_{p'}$. More precisely, $\gamma_p$ is strictly increasing as $p$ decreases to $1$. Thus there exists $\lim_{p\searrow 1}\gamma_p$. Denote it by $\gamma_1$. One can check that $\gamma_1=d+c^2/(4d)$. \end{rem} Now we can formulate our main result. \begin{thm} \label{tw1} Let $\xi_0$ be a subgaussian of rank $p$ random variable such that $\tau_{\varphi_p}(\xi_0)\le d_0$ and let $(\xi_n)_{n\ge0}$ be a martingale with bounded increments, i.e. $|\xi_n-\xi_{n-1}|\le d_n$ almost surely for $n=1,2,...$. Let $c$ denote $\sqrt{\sum_{i=1}^nd_i^2}$ and $d$ the number $\sum_{i=1}^nd_i$. Then $$ \mathbb{P}(|\xi_n|\ge\varepsilon)\le 2\exp\Big(-\varphi_q\Big(\frac{\varepsilon}{\sqrt[r]{\gamma_r^r+d_0^r}}\Big)\Big), $$ where $1/p+1/q=1$, $r:=\min\{p,2\}$ and $\gamma_r=c^2/(2d)\{r[2(d/c)^2+1/r-1/2]\}^{1/r}$. \end{thm} \begin{proof} First we recall an argument which gives a similar estimate on the moment generating function of martingales with bounded increments as in the case of sums of independent random variables. For the martingale $(\xi_n)_{n\ge 0}$ we have $$ \mathbb{E}\exp(\lambda\xi_n)=\mathbb{E}\Big(\exp(\lambda\xi_{n-1})\mathbb{E}\big(\exp(\lambda(\xi_n-\xi_{n-1}))\big|\mathcal{F}_{n-1} \big)\Big), $$ where $\mathcal{F}_{n-1}$ denotes the $\sigma$-field generated by the random variables $\xi_0,\xi_1,...,\xi_{n-1}$. Now we find a bound for $\mathbb{E}(\exp(\lambda(\xi_n-\xi_{n-1}))|\mathcal{F}_{n-1})$. Let $\eta_n:=(\xi_n-\xi_{n-1})/d_n$. Observe that $-1\le\eta_n\le 1$ a.s.
By convexity of the natural exponential function we get $$ \exp(\lambda(\xi_n-\xi_{n-1}))=\exp(d_n\lambda\eta_n)\le \frac{1+\eta_n}{2}\exp(d_n\lambda)+\frac{1-\eta_n}{2}\exp(-d_n\lambda), $$ and, in consequence, $$ \mathbb{E}\big(\exp(d_n\lambda\eta_n)\big|\mathcal{F}_{n-1}\big)\le \frac{1}{2}\exp(d_n\lambda)+\frac{1}{2}\exp(-d_n\lambda) $$ since $\mathbb{E}(\eta_n|\mathcal{F}_{n-1})=0$. By virtue of the inequality $\frac{1}{2}\exp(d_n\lambda)+\frac{1}{2}\exp(-d_n\lambda)\le\exp(\lambda^2d_n^2/2)$ one gets $$ \mathbb{E}\exp(\lambda\xi_n)\le\exp\Big(\frac{\lambda^2d_n^2}{2}\Big)\mathbb{E}\exp(\lambda\xi_{n-1}) $$ and, inductively, $$ \mathbb{E}\exp(\lambda\xi_n)\le\exp\Big(\frac{\lambda^2\sum_{i=1}^nd_i^2}{2}\Big)\mathbb{E}\exp(\lambda\xi_0). $$ Taking the logarithm of both sides we obtain $$ \ln\mathbb{E}\exp(\lambda\xi_n)\le\frac{\lambda^2\sum_{i=1}^nd_i^2}{2}+\ln\mathbb{E}\exp(\lambda\xi_0), $$ that is \begin{equation} \label{row1} \psi_{\xi_n}(\lambda)\le \varphi_2\Big(\lambda\Big(\sum_{i=1}^nd_i^2\Big)^{1/2}\Big)+\psi_{\xi_0}(\lambda). \end{equation} Since $\psi_{\xi_0}(\lambda)\le \varphi_p(d_0\lambda)$ and the random variable $\xi_n-\xi_0$ is bounded ($|\xi_n-\xi_0|\le\sum_{i=1}^nd_i$ a.s.), by Lemma \ref{lem1} we can rewrite the above estimate on $\psi_{\xi_n}$ as follows \begin{equation} \label{psin} \psi_{\xi_n}(\lambda)\le \varphi_p(\gamma_r\lambda)+\varphi_p(d_0\lambda), \end{equation} where $\gamma_r$ is as in Lemma \ref{lem1} ($c=\sqrt{\sum_{i=1}^nd_i^2}$ and $d=\sum_{i=1}^nd_i$). The composition of $\varphi_p$ with the function $\sqrt[r]{\cdot}$ is still convex.
By properties of $N$-functions (see \cite[Ch.2, Lem.2.2]{BulKoz}) for $\lambda>0$ we get \begin{eqnarray*} \varphi_p(\gamma_r\lambda)+\varphi_p(d_0\lambda) &=&\varphi_p\big(\sqrt[r]{\gamma_r^r\lambda^r}\big)+\varphi_p\big(\sqrt[r]{d_0^r\lambda^r}\big) \\ \; &\le& \varphi_p\Big(\lambda\sqrt[r]{\gamma_r^r+d_0^r}\Big), \end{eqnarray*} which, combined with (\ref{psin}), gives $$ \psi_{\xi_n}(\lambda)\le \varphi_p\Big(\lambda\sqrt[r]{\gamma_r^r+d_0^r}\Big). $$ Because $\varphi_p$ is an even function, the above inequality is valid for any $\lambda$. It means that the random variable $\xi_n\in Sub_{\varphi_p}(\Omega)$ and its norm $\tau_{\varphi_p}(\xi_n)\le \sqrt[r]{\gamma_r^r+d_0^r}$. Recall that the convex conjugate $\varphi_p^\ast=\varphi_q$, where $1/p+1/q=1$. By (\ref{estm}) and the above estimate of $\tau_{\varphi_p}(\xi_n)$ we obtain our inequality. \end{proof} \begin{rem} For $\xi_0=0$ a.s. we have $d_0=0$ and we can assume that $\xi_0$ is subgaussian of rank $2$ (classic subgaussian). Recall that if $p=2$ then $q=2$ and $\varphi_q(x)=x^2/2$. In this case $\gamma_r=\gamma_2=c$ and we get the classic form of the Hoeffding-Azuma inequality. \end{rem} Let us observe that $\xi_0=0$ a.s. is subgaussian of any rank $p$. Consider in more detail the case $1<p<2$. If $1<p<2$ then the H\"older conjugate $q>2$. Let us recall that for $\varepsilon\in\mathbb{R}$ we have $\varphi_q(\varepsilon)\ge \varphi_2(\varepsilon)$ and, moreover, $\varphi_q(\varepsilon)> \varphi_2(\varepsilon)$ if $|\varepsilon|>1$. Because $c=\gamma_2<\gamma_p$, where $c=\sqrt{\sum_{i=1}^n d_i^2}$, there exists exactly one $\varepsilon_p>0$ such that $$ \varphi_q\Big(\frac{\varepsilon_p}{\gamma_p}\Big)=\varphi_2\Big(\frac{\varepsilon_p}{c}\Big), $$ i.e. $\varepsilon_p$ is the unique solution of the equation $$ \frac{1}{q}\Big(\frac{\varepsilon_p}{\gamma_p}\Big)^q-\frac{1}{q}+\frac{1}{2}=\frac{\varepsilon_p^2}{2c^2}.
$$ If $|\varepsilon|<\varepsilon_p$ then $\varphi_q(\varepsilon/\gamma_p)<\varphi_c(\varepsilon/c)$ and $\varphi_q(\varepsilon/\gamma_p)>\varphi_c(\varepsilon/c)$ if $|\varepsilon|>\varepsilon_p$. It follows that for $|\varepsilon|>\varepsilon_p$ the estimate $$ \mathbb{P}(|\xi_n|\ge\varepsilon)\le 2\exp\Big(-\varphi_q\Big(\frac{\varepsilon}{\gamma_p}\Big)\Big) $$ is sharper than the Hoeffding-Azuma's inequality. It means that by $\xi_0=0$ Theorem \ref{tw1} is some supplement of the Hoeffding-Azuma inequality. Moreover in this Theorem we considered the case when $\xi_0$ is any subgaussian of rank $p$ random variable. Many another examples of concentration inequalities one can find for instance in \cite{McDi}. Let us emphasize that most of them concern the case of independent summands. The Azuma inequality is dealt with dependent ones. Let us note that in the original Azuma's paper \cite{Azuma} are considered bounded increments satisfying some general conditions that hold for martingales increments. It is most important to us that we can find some bound of the norm of their sums in the spaces of subgaussian of rank $p$ random variables which allow us to get an estimate for the probabilities of tail distributions. Applications of such estimates may by multiple. I would like to drew attention on some application to prove of the strong laws of large numbers for dependent random variables in these spaces (see \cite{Zaj}).
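The crossover between the two exponents can be illustrated numerically. The sketch below assumes the standard explicit form $\varphi_p(x)=x^2/2$ for $|x|\le 1$ and $\varphi_p(x)=|x|^p/p-1/p+1/2$ for $|x|>1$ (consistent with $\varphi_2(x)=x^2/2$ above, but an assumption of this sketch rather than something stated in the text), takes $\xi_0=0$, $d_i=1$ for $i=1,\ldots,10$, and $p=3/2$, and checks that the classic exponent wins for small $\varepsilon$ while the rank-$p$ exponent wins for large $\varepsilon$:

```python
import math

def phi(p: float, x: float) -> float:
    """N-function phi_p: x^2/2 for |x| <= 1, |x|^p/p - 1/p + 1/2 for |x| > 1
    (assumed explicit form for subgaussian random variables of rank p)."""
    x = abs(x)
    return x * x / 2.0 if x <= 1.0 else x ** p / p - 1.0 / p + 0.5

# Martingale increments d_i = 1, i = 1..n, with xi_0 = 0 (so d_0 = 0).
n = 10
c = math.sqrt(n)          # c = sqrt(sum d_i^2)
d = float(n)              # d = sum d_i

p = 1.5                   # rank, 1 < p < 2
q = p / (p - 1.0)         # Hoelder conjugate, here q = 3
r = min(p, 2.0)           # here r = p
# gamma_r from the theorem: (c^2 / 2d) * { r [ 2(d/c)^2 + 1/r - 1/2 ] }^{1/r}
gamma_r = c * c / (2.0 * d) * (r * (2.0 * (d / c) ** 2 + 1.0 / r - 0.5)) ** (1.0 / r)

def rank_p_exponent(eps: float) -> float:
    return phi(q, eps / gamma_r)

def azuma_exponent(eps: float) -> float:
    return eps * eps / (2.0 * c * c)   # classic Hoeffding-Azuma exponent

# Small eps: the classic bound is sharper; large eps: the rank-p bound is sharper.
assert rank_p_exponent(2.0) < azuma_exponent(2.0)
assert rank_p_exponent(20.0) > azuma_exponent(20.0)
```

A larger exponent means a smaller (sharper) tail bound, matching the discussion of $\varepsilon_p$ above.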
\section{Introduction} \label{sec:intro} Pixel-level tasks, such as semantic \cite{chen2014semantic,bertasius2017convolutional, yang2018denseaspp,zhang2019acfnet}, instance \cite{he2017mask}, and panoptic \cite{kirillov2019panoptic} segmentation, are fundamental to many computer vision problems (autonomous driving being a prime example). Recent research \cite{cheng2021per,chen2019graph,fu2019dual,yuan2020object} has shown that while predictions for these tasks are fundamentally local (at a pixel level), global context plays an important role in improving performance in practice. While initial approaches focused on local context (\eg, multi-scale context, where attention is computed over progressively larger patches around each pixel) or global context (\eg, ParseNet~\cite{liu2015parsenet}), later approaches have gravitated towards attentional pooling of information into a set of {\em region} representations (\eg, using double-attention~\cite{double-attention} or as in ACFNet~\cite{zhang2019acfnet}). The individual pixel representations are then enhanced and contextualized by the aggregated regional feature representations. The main differences among these approaches are how the regions are formed, whether or not region representations themselves are allowed to interact (\eg, using graph propagation \cite{chen2019graph}), and how the regional features are aggregated back to enhance pixel representations; some approaches use back-projection while others learn relations between pixels and regions. \begin{figure}[t] \centering \includegraphics[width=0.48\textwidth]{Fig_1_CVPR_2023} \vspace{-0.15in} \caption{{\bf Semantic Global Reasoning.} Illustrated is our proposed SGR approach to semantic segmentation (top schematic).
Given a feature map from a backbone (ResNet), SGR learns to group pixels into soft semantic latent regions (top); features are aggregated from these regions to form semantic concept token representations that are refined using a Transformer and then back-projected to enhance the original ResNet features, before passing them into the segmentation head to produce per-pixel labels. Notably, the resulting latent regions are more semantic than those of prior works, as measured by the proposed class-semantics ($S_C$) and instance-semantics ($S_I$) measures ($\downarrow$ is better), and are more diverse ($\uparrow$ is better), as measured by the proposed class-diversity ($D_C$) and instance-diversity ($D_I$) (see GCNET \cite{chen2019graph} / Maskformer \cite{cheng2021per} in middle/bottom). } \vspace{-4mm} \label{semanticness} \end{figure} Aspirationally, these approaches are motivated by the overarching idea that regions would pull information over individual objects, enhancing pixels with object-centric globally-contextualized representations (\eg, obtained by allowing region representations to interact). In practice, however, this does not happen, and visualizations (\eg, those in GloRE \cite{chen2019graph}) show that regions formed in such a manner fail to capture or respect the semantics of the scene. To address this issue, more recent approaches propose to supervise the regions directly. For example, OCR \cite{yuan2020object} and Maskformer \cite{cheng2021per} use original semantic mask annotations to supervise the region predictions; class-attention plays a similar role by learning class token embeddings \cite{MCTformer}. While these approaches do result in semantic and interpretable regions which improve performance on the final semantic segmentation task without requiring any additional supervision, they are still limited in a few key ways.
Most notably, the number of regions is typically limited to the number of semantic classes \cite{cheng2021per,yuan2020object} (in the corresponding dataset), with those regions modeling the union of all object instances in a particular scene. In other words, they are class- and not object-centric. This could be sub-optimal since instances of the same class may have different appearances or shapes, or be located in disjoint regions of the scene. Aggregating context indiscriminately across them may result in a loss of detail. We posit that a more object-centric aggregation, where regions may be more closely associated with object instances or {\em concepts} that compose those instances, and where those {\em concept regions} are allowed to aggregate information among them (similar to GloRE \cite{chen2019graph}), would provide a more granular and interpretable mechanism for attentional context. \begin{figure*}[t] \centering \includegraphics[width=0.9\textwidth]{Figure-Model} \caption{\textbf{Overview of our framework}. An x-y positional embedding is first added to features from the backbone. A dot product is then performed between the position-aware features and $K$ learned concept embeddings to generate soft masks. These masks aggregate features to form latent concept tokens. A transformer encoder learns to enhance these representations using global reasoning through self-attention. Finally, the contextualized output from the transformer is re-projected and added to the feature space and passed to a segmentation head.} \label{framework} \vspace{-6pt} \end{figure*} The core challenge is that one must do so without requiring any instance supervision, relying solely on semantic supervision, similar to prior work.
To this end we make the following observations: (1) disconnected semantic segments are likely to belong to different instances -- this gives us a lower bound on the number of concept regions per image during training; (2) the union of the concept regions for a given class should correspond to the semantic segmentation of that class in a given image; and (3) concept regions must be spatially disjoint. These constraints allow us to formulate a rich set of objective functions that encourage the discovery of concept regions, propagate and refine the representation of those regions in a globally consistent manner, and ultimately enhance the original pixel representation by aggregation of the refined region features. We show that the resulting method not only achieves competitive performance compared to the state-of-the-art on semantic segmentation but also produces more semantic and diverse regions, as measured by our newly proposed entropy-based metrics. In addition, the more instance-centric nature of the attentional aggregation also ensures that our model learns better underlying feature representations that produce improved performance when transferred to object detection and segmentation tasks. \paragraph{Contributions.} Our contributions are as follows: \vspace{-2mm} \begin{enumerate} \item We propose a novel framework for semantically-enhanced global reasoning that enhances local feature representations. We do so by learning to form semantic latent regions that pool local information into concept token representations. These latent token representations are then globally refined, using a Transformer~\cite{vaswani2017attention}, and ultimately fused back into the original local feature representations. \vspace{-2mm} \item We propose a rich set of losses that encourage latent regions to be semantically interpretable. Specifically, we ensure that unions of regions map to connected components of ground truth segments, and that those regions are disjoint.
\vspace{-2mm} \item We define new metrics that measure class- and instance-level semantics of concepts by considering the entropy over ground truth labels that form each corresponding region. \vspace{-2mm} \item We demonstrate that the resulting model, when combined with DeepLabV3~\cite{chen2017deeplab}, achieves SoTA performance on semantic segmentation on the COCO-Stuff-10K dataset~\cite{zhou2017scene} and achieves competitive performance on two other benchmark datasets (Cityscapes \cite{Cordts2016Cityscapes} and ADE-20K \cite{zhou2017scene}). In addition, the model results in more semantically interpretable and diverse region representations that lead to improved performance when transferred to the downstream tasks of object detection and segmentation (using Mask R-CNN \cite{he2017mask} on MS COCO \cite{lin2014microsoft}). \end{enumerate} \vspace{-4mm} \section{Related Work} \vspace{-3pt} \paragraph{Semantic Segmentation.} Semantic segmentation is a well-studied problem in computer vision where the goal is to classify each pixel of an image. Earlier approaches required specification of seed points and then used graph cut~\cite{shi2000normalized, boykov2004experimental, boykov2006graph} to group semantically similar features; alternatives generated mask proposals~\cite{carreira2011cpmc,uijlings2013selective,arbelaez2012semantic, arbelaez2014multiscale} and classified them~\cite{carreira2012semantic, dai2015convolutional}. With the popularity of deep learning, the focus shifted to converting the segmentation problem into a per-pixel classification problem~\cite{chen2014semantic, long2015fully, chen2017deeplab}. Later approaches~\cite{chen2017deeplab, chen2017rethinking, chen2018encoder} have shown that capturing more global context leads to improved performance in semantic segmentation.
In a shift from per-pixel classification approaches, a recent work, Maskformer~\cite{cheng2021per}, goes back to a mask classification approach~\cite{carreira2012semantic, dai2015convolutional} and uses a transformer~\cite{vaswani2017attention} to generate mask queries and classify them. \vspace{-4mm} \paragraph{Capturing Global Context.} Earlier approaches attempted to capture the global context in semantic segmentation tasks either by increasing receptive fields~\cite{chen2017deeplab, chen2017rethinking, chen2018encoder} using atrous convolution~\cite{holschneider1990real, yu2017dilated, dai2017deformable} or by aggregating features from multiple scales~\cite{zhao2017pyramid, chen2018encoder, yang2018denseaspp}. \vspace{-6mm} \paragraph{Global Reasoning.} Graph-based approaches and self-attention have been extensively used to reason globally between different concepts. CRFs~\cite{krahenbuhl2011efficient, fields2001probabilistic} were commonly used in the past for modeling pairwise interactions between all the pixels within an image, particularly to refine predictions~\cite{chen2014semantic, chandra2017dense, bertasius2017convolutional}. Non-local Nets~\cite{wang2018non} and variants built a fully connected graph between all the pixels or several sampled pixels~\cite{zhang2020dynamic} and used self-attention over the graph nodes. Recently, a few approaches~\cite{chen2019graph, liang2018symbolic, zhang2019latentgnn, wu2021visual} densely project the image features into latent nodes and then reason between them using graph convolutions~\cite{kipf2016semi} or transformers~\cite{wu2021visual, vaswani2017attention}. Although the idea of grouping pixels of similar concepts into regions is similar to ours, visualization of their latent spaces suggests that they fail to capture object semantics at either class- or instance-level granularity. To this end, \cite{cheng2021per,yuan2020object} directly supervise latent representations.
Several techniques~\cite{fu2019dual, huang2019ccnet, yuan2021ocnet} used different forms of self-attention~\cite{vaswani2017attention} to obtain pair-wise relationships between pixels and aggregate the information to capture the global context for semantic segmentation. Vision Transformers~\cite{dosovitskiy2020image} and their variants~\cite{liu2021swin,zheng2021rethinking, strudel2021segmenter, caron2021emerging} have generalized self-attention models~\cite{vaswani2017attention} to many computer vision tasks, including semantic segmentation, with great success, achieving state-of-the-art performance. These models~\cite{dosovitskiy2020image, liu2021swin, jiang2021all,zeng2022not} simply divide images into patches, treat them as words similar to language models, and use self-attention to reason between them. Subsequent approaches applied shifted windows~\cite{liu2021swin} or clustering~\cite{zeng2022not} to group the patches. Recently, DINO~\cite{caron2021emerging} observed that the attention maps of class tokens attend to particular regions. However, these maps were class agnostic. MCT~\cite{xu2022multi} exploited this by using multiple class tokens to obtain class-aware attention maps, which are refined and used as pseudo ground truth for semantic segmentation. \section{Method} \vspace{-6pt} The motivation for our approach is simple: to enhance local feature representations (\eg, computed by a CNN backbone) with globally contextualized scene information. To this end, we introduce a semantically enhanced global reasoning (SGR) component shown in \cref{framework}. The input to our component is a convolutional feature map of resolution $W \times H \times C$, where $W$ and $H$ are spatial dimensions and $C$ is the channel dimension. The output is a globally contextualized and enhanced feature map of the same size. SGR consists of four main steps. First, we divide the image into a set of $K$ soft {\em latent concept regions} of size $W \times H$ each.
This is achieved by computing the similarity of each input feature column to the learned concept embeddings. The representations of the concepts, which ultimately drive the soft concept segmentation, are learned in a weakly-supervised manner, where unions of activated concept regions are supervised by connected segments of ground-truth semantic segmentation labels. Second, we form representations for each concept region by aggregating the input feature columns corresponding to it. This is similar to context vector aggregation in soft attention and results in a set of $K$ {\em latent semantic token} representations. We add positional embeddings corresponding to the centroids of their regions to the token representations \cite{vaswani2017attention, dosovitskiy2020image}. This is important for concept disambiguation (\eg, a low-textured blue token in the lower or upper part of the image can be disambiguated, permitting a distinction between {\tt sky} and {\tt water}). Third, we perform global reasoning across these semantic tokens by refining their representations using a Transformer \cite{vaswani2017attention}. Transformers are effective at capturing global relations among tokens and are more efficient and easier to use than GNNs \cite{chen2019graph}. Fourth, we fuse information from the refined semantic token representations back into the input CNN feature map. This is achieved by back-projecting semantic token representations using the same soft {\em concept regions}. Ultimately, the result of applying SGR is a refined feature map of the original resolution $W \times H \times C$, which is then passed to the segmentation head as shown in \cref{framework}. The key to ensuring the ``semanticness" of our concept regions, and hence tokens, is in the first step, where the $K$ concept representations are learned across the dataset.
Because we want, ideally, concepts to represent object instances or even parts of object instances, the number of concepts we learn must be a multiple of the number of object classes (unlike \cite{chen2019graph,cheng2021per}). Further, since we do not assume instance-level annotations, we use connected components over the ground truth semantic segmentation annotations to provide a {\em lower} bound on the number of concepts that should be active in a given training image. We assume that a subset of $L$ (out of $K$) concepts can be active in each image and guide the learning of concept representations by matching concept regions to the class-specific connected components using a greedy matching strategy (similar to MDETR \cite{kamath2021mdetr}). Since the connected components are the {\em lower} bound, the matching from concepts to connected components is many-to-one. The supervision is such that the union of the concepts is encouraged to correspond to the connected component mask. To ensure that concepts matched to the same connected component are not identical, we force the concept regions corresponding to those tokens to be dissimilar by minimizing a pairwise cosine similarity measure. In what follows we detail each of our design choices as well as the metrics we defined to measure the semantics of the resulting intermediate representations (see Section~\ref{sec:metric}). \begin{figure}[t] \centering \includegraphics[width=1\columnwidth]{Fig_2_Matching} \caption{{\bf Visualization of our greedy matching strategy.} Binary mask losses are used to compute a cost matrix, based on which Hungarian matching is applied to perform 1-to-1 matching between the predicted latent region masks and the ground truth components. In the second step, we greedily select the top $L$ matches from the remaining ones. Once matched, during training, we compute losses between the union of the predicted masks assigned to the same ground truth component and that component's mask.
} \label{matching} \vspace{-6pt} \end{figure} \vspace{-6pt} \subsection{Projection to Latent Concept Regions} \vspace{-1pt} The process of mapping image features into latent regions is illustrated in Figure~\ref{framework}. Given an input feature map from a CNN backbone (\eg, ResNet101), $\mathbf{X} \in \mathbb{R}^{W \times H \times C}$, where $W$, $H$ and $C$ are the width, height and number of channels respectively, we first add positional embeddings $PE_{pos}(X)$ and $PE_{pos}(Y)$. This is done by computing sines and cosines of different frequencies for the $x$- and $y$-index of each feature cell \cite{vaswani2017attention}, where $PE_{pos}(x) \in \mathbb{R}^C$.
The resulting position-aware local feature tensor is: \vspace{-6pt} \begin{equation} \imagefeature' = \left[ \begin{array}{c} \mathbf{X} + PE_{pos}(X) \\ \mathbf{X} + PE_{pos}(Y) \\ \end{array} \right], \end{equation} where $\imagefeature' \in \mathbb{R}^{W \times H \times 2C}$. We obtain the {\em latent concept regions} by computing the dot-product similarity of each feature cell of $\imagefeature'$ with learned concept embeddings $\mathbf{b}_k \in \mathbb{R}^{2C}$; $k \in [1, K]$, where $K$ is the total number of concepts. This results in $K$ soft masks of resolution $W \times H$, collectively forming $\mathbf{P} = [\mathbf{P}_1, \mathbf{P}_2, ..., \mathbf{P}_K] \in \mathbb{R}^{W \times H \times K}$. In practice, the above operation is implemented by a 2D convolution with $K$ kernels of $1 \times 1$: \vspace{-6pt} \begin{equation} \mathbf{P} = \text{\tt sigmoid}(\text{\tt Conv2D}(\imagefeature'; \{ \mathbf{b}_k \}_{k=1}^K )). \end{equation} \vspace{-6pt} \subsection{Projection to Latent Semantic Tokens} \vspace{-1pt} We use another convolution layer with $1 \times 1$ kernels to reduce the dimensionality of $\imagefeature'$ to $\mathbb{R}^{H \times W \times D}$. The latent semantic tokens for the $K$ regions are formed by matrix multiplication between the dimensionally reduced positional features and the obtained soft concept region masks. This is facilitated by flattening both tensors along the first two dimensions and element-wise multiplying across channel dimensions of the $D$-dimensional features: \vspace{-6pt} \begin{equation} \label{eq:1} \mathbf{T} = \mathbf{P}^T \odot \text{\tt Conv2D}(\imagefeature'; \mathbf{W}_d)^T. \end{equation} \vspace{-8mm} \subsection{Global Reasoning Between the Tokens} \vspace{-1pt} After projecting features into latent tokens, we add positional encoding for the location of the tokens themselves into their respective representations.
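The two projection steps above (soft region masks via a $1 \times 1$ convolution against learned concept embeddings, then mask-weighted aggregation into tokens) can be sketched in plain Python for a toy feature map. This is an illustrative sketch only; all sizes, random features, and weights are placeholder assumptions, and the actual model operates on $W \times H \times 2C$ tensors with learned convolutions:

```python
import math
import random

random.seed(0)

W, H, C2, D, K = 4, 4, 8, 6, 5   # toy sizes: C2 plays the role of 2C channels

# Position-aware features X' (W*H cells, C2 channels), K concept embeddings b_k,
# and a 1x1-convolution weight Wd reducing C2 channels to D dimensions.
X = [[random.gauss(0, 1) for _ in range(C2)] for _ in range(W * H)]
B = [[random.gauss(0, 1) for _ in range(C2)] for _ in range(K)]
Wd = [[random.gauss(0, 1) for _ in range(D)] for _ in range(C2)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Soft concept region masks: P[cell][k] = sigmoid(<X'_cell, b_k>), i.e. a 1x1 conv.
P = [[sigmoid(sum(x * b for x, b in zip(cell, bk))) for bk in B] for cell in X]

# Dimensionality-reduced features, then token aggregation T = P^T * X_d.
Xd = [[sum(x * Wd[c][dd] for c, x in enumerate(cell)) for dd in range(D)] for cell in X]
T = [[sum(P[i][k] * Xd[i][dd] for i in range(W * H)) for dd in range(D)]
     for k in range(K)]

assert len(P) == W * H and len(P[0]) == K      # K soft masks over W*H cells
assert len(T) == K and len(T[0]) == D          # K latent semantic tokens
assert all(0.0 < p < 1.0 for row in P for p in row)
```

Each row of `T` is one latent semantic token, aggregated from all cells weighted by its soft mask.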
We characterize the position of the token by a weighted centroid computed from its soft concept region mask. Namely, \vspace{-2pt} $\concepttoken' = \mathbf{T} + PE_{pos}(\mathbf{C})$, where \begin{equation} \mathbf{C}_k = \frac{1}{\sum_{w=1}^{W} \sum_{h=1}^{H} \mathbf{P}_{w,h,k}} \sum\limits_{w=1}^{W} \sum\limits_{h=1}^{H} \left[w \mathbf{P}_{w,h,k}; h \mathbf{P}_{w,h,k} \right]. \end{equation} \vspace{-2pt} The tokens with positional encoding, $\concepttoken'$, are passed on as inputs to the transformer \cite{vaswani2017attention}. The transformer applies self-attention over the tokens, thereby performing global communication between them. Because each token represents a component of a semantic class, it only pulls features from the semantic concept or component it represents. Overall, this self-attention-based reasoning is analogous to identifying the relationships between different semantic concepts at different locations. The output of the transformer is then back-projected onto the pixel space using the same projection matrix that we used to project the original features to the latent space. The back-projected features are added to the input features and passed to the segmentation head for per-pixel classification. \subsection{Latent Concept Supervision} We aim to make each token semantic, such that it only aggregates features from the component of the semantic class (ideally an instance, but possibly a part of an instance of an object) it represents. For example, we would like to have concepts that represent each {\em car} in a scene, but would also be happy with concepts that separately represent each car's {\em wheels} and {\em body}. We observe that disconnected components of the semantic segments for a given object class are likely to belong to different instances and hence should correspond to different concepts as per the construction above.
Hence, during training, we first apply connected component analysis over the binary ground truth segmentation masks to obtain the connected components of each semantic class\footnote{We apply simple morphological operations and ignore components that are smaller than 5\% of the maximum area of the connected components in the given training image.}. We use the resulting connected components to supervise the soft latent concept region masks. At training time, we first match the predicted latent concept masks of each token to these connected components based on a cost matrix. Once matched, we supervise the projection matrices using the ground truth connected component masks to ensure that concept tokens are only formed from a particular class component. Specifically, given $K$ soft latent concept region masks, we assume that only up to $L$ of them are active in any given image\footnote{Note that $K$ concepts are shared for the dataset and only a subset of those are likely to be present in any one image. We let $L$ be the maximum number of concepts that can be active in an image.} and must be matched to $C$ ground truth connected components. We assume $K > L > C$ (for example, in most experiments we let $K=256$ and $L=64$). We first form a $K \times C$ cost matrix and then obtain the matching using a two-stage procedure. First, we use the Hungarian algorithm~\cite{kuhn1955hungarian,cheng2021per,carion2020end} to perform a bipartite matching between ground truth connected components and latent concept region masks. This ensures that each connected component is associated with at least one latent concept region. Second, we match the remaining $L-C$ latent concept regions greedily to connected components by considering the remaining $(K-C) \times C$ portion of the cost matrix. The procedure is illustrated in detail in Figure~\ref{matching}. The key to the above procedure is the formation of the cost matrix.
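Taking the cost matrix as given, the two-stage assignment can be sketched as follows. This is an illustrative sketch, not the actual implementation: for the tiny sizes used here, a brute-force search over permutations stands in for the Hungarian algorithm, and the cost values are placeholders:

```python
import itertools

def two_stage_match(cost, L):
    """cost[i][j]: cost of assigning latent region i (of K) to component j (of C).
    Returns (i, j) pairs: first a minimum-cost one-to-one matching covering every
    component, then greedy matches of the cheapest remaining regions up to L."""
    K, C = len(cost), len(cost[0])
    # Stage 1: optimal one-to-one matching (brute force stands in for Hungarian).
    best = min(itertools.permutations(range(K), C),
               key=lambda rows: sum(cost[i][j] for j, i in enumerate(rows)))
    matches = [(i, j) for j, i in enumerate(best)]
    used = set(best)
    # Stage 2: greedily take the cheapest remaining (region, component) pairs.
    rest = sorted((cost[i][j], i, j) for i in range(K) if i not in used
                  for j in range(C))
    for _, i, j in rest:
        if len(matches) == L:
            break
        if i not in used:
            used.add(i)
            matches.append((i, j))
    return matches

cost = [[4.0, 9.0], [1.0, 7.0], [8.0, 2.0], [3.0, 6.0]]  # K=4 regions, C=2 components
m = two_stage_match(cost, L=3)
assert {j for _, j in m} == {0, 1}          # every component covered (stage 1)
assert len({i for i, _ in m}) == 3          # each region matched at most once
```

Because stage 2 is many-to-one, several regions may end up assigned to the same component, which is exactly the case the union supervision below is designed for.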
The $(i,j)$-th entry of the cost matrix measures the similarity between the $i$-th soft latent concept region mask $\mathbf{P}_i$ and the binary ground truth mask $\mathbf{M}_j$ of a connected component of one of the classes present in the given image. To measure similarity we use a combination of dice loss~\cite{milletari2016diceloss} and focal loss~\cite{lin2017focal}, \begin{equation} \label{matching_cost_matrix} Cost_{i,j} = \mathcal{L}_{focal} (\mathbf{P}_{i},\mathbf{M}_{j}) + \rho \mathcal{L}_{dice} (\mathbf{P}_{i}, \mathbf{M}_{j}). \end{equation} \noindent The hyperparameter $\rho$ controls the relative weight of the two terms in the cost computation. Once the tokens are matched, we can supervise them using the same losses used to match them. However, since we have many-to-one matching between latent concept regions and ground-truth connected components, this results in tokens that are duplicates of one another. To avoid this scenario we add regularization which ensures that latent regions compete for support.
This is achieved using a cosine similarity loss applied to pairs of latent concepts that are matched to the {\em same} connected component: $\mathcal{L}_{cos}(\mathbf{P}_{i \rightarrow j}, \mathbf{P}_{k \rightarrow j})$. The final loss for supervision of latent concepts can be written as follows: \vspace{-6pt} {\small \begin{multline} \label{eqn_adv_x} \mathcal{L}_{concept}(\mathbf{P}, \mathbf{M}) = \sum\limits_{j=1}^{C} \mathcal{L}_{focal} ( \sum\limits_{i \rightarrow j} \mathbf{P}_{i}, \mathbf{M}_{j} ) + \\ \rho \sum\limits_{j=1}^{C} \mathcal{L}_{dice} ( \sum\limits_{i \rightarrow j} \mathbf{P}_{i}, \mathbf{M}_{j} ) + \gamma \sum\limits_{i=1}^{L} \sum\limits_{k=1}^{L} \mathcal{L}_{cos}(\mathbf{P}_{i \rightarrow j}, \mathbf{P}_{k \rightarrow j}). \vspace{-6pt} \end{multline} }% Note that, with a slight abuse of notation, the sums in the focal and dice losses are over all concept regions $i$ that matched to one connected component $j$, and effectively model the union of the concept regions matched to a given component (see Figure~\ref{matching} (right)). This, in effect, means that concepts are only weakly supervised. The $\rho$ and $\gamma$ are balancing parameters for the loss terms. \subsection{Final Loss} The final loss for our model is a combination of the traditional per-pixel supervised classification loss and the latent concept loss defined above.
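To make the weak supervision concrete, the latent concept loss above can be sketched in NumPy as follows. This is a minimal illustration under our own naming, not the exact implementation: the focal and dice terms follow their standard binary-mask forms, and `assign` stands in for the token-to-component matching produced earlier.

```python
import numpy as np

def focal_loss(p, m, alpha=0.25, gamma=2.0, eps=1e-8):
    """Binary focal loss between a soft mask p and a binary mask m."""
    p = np.clip(p, eps, 1.0 - eps)
    loss = -(alpha * m * (1.0 - p) ** gamma * np.log(p)
             + (1.0 - alpha) * (1.0 - m) * p ** gamma * np.log(1.0 - p))
    return float(loss.mean())

def dice_loss(p, m, eps=1e-8):
    """Soft dice loss between a soft mask p and a binary mask m."""
    inter = float((p * m).sum())
    return 1.0 - (2.0 * inter + eps) / (float(p.sum() + m.sum()) + eps)

def concept_loss(P, M, assign, rho=1.0, gamma_div=0.25):
    """Weakly-supervised concept loss (sketch).

    P: (K, H, W) soft token support masks; M: (C, H, W) binary component
    masks; assign: list of (token i, component j) pairs from matching."""
    loss = 0.0
    for j in range(M.shape[0]):
        # union of all token masks matched to component j
        union = sum(P[i] for i, jj in assign if jj == j)
        loss += focal_loss(union, M[j]) + rho * dice_loss(union, M[j])
    # diversity: penalize cosine similarity of tokens sharing a component
    for a, (i, ji) in enumerate(assign):
        for k, jk in assign[a + 1:]:
            if ji == jk:
                u, v = P[i].ravel(), P[k].ravel()
                loss += gamma_div * float(u @ v) / (
                    np.linalg.norm(u) * np.linalg.norm(v) + 1e-8)
    return loss
```

Note how tokens matched to the same component are supervised only through their union, while the cosine term pushes their supports apart.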
In other words, we optimize: \begin{equation} \mathcal{L} = \mathcal{L}_{CE}(\cdot, \cdot) + \beta \mathcal{L}_{concept}(\mathbf{P}, \mathbf{M}), \end{equation} where $\mathcal{L}_{CE}$ is a typical per-pixel cross-entropy loss and we omit parameters to avoid notation clutter. Again, $\beta$ is a balancing parameter between the two terms. \section{Metrics for interpretability} \label{sec:metric} \vspace{-2pt} Our goal is not only to obtain an effective segmentation model but to do so with latent region representations that are semantic. Hence we propose a series of metrics that measure the semantics of representations. Our metrics rely on two core assumptions: (1) a token is semantic if its latent region corresponds to a coherent entity (object class or instance), and (2) latent regions, as a collection, should capture as many object categories and instances as possible (\ie, be diverse). \vspace{0.07in} \noindent {\bf Class-Semantics ($\mathcal{S}_C$).} To measure semantics, we first compute, for each token, a histogram of the ground-truth object classes that lie within its latent support region. The bins of the histogram correspond to the classes present in the image. Each pixel casts a soft vote for its ground-truth object class, weighted by the soft weight the token assigns to that pixel. The histogram is then normalized to sum to 1. In effect, for each image, we compute $K$ discrete probability distributions that measure the empirical probability of an object class belonging to each token. We compute the entropy of each of these distributions and average the resulting $K$ entropies. We take this mean entropy as the measure of the semantics of token representations for the given image.
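For concreteness, the class-semantics computation described above might be sketched as follows (a minimal NumPy sketch; the function name and interface are ours, and in practice the per-token weights come from the learned projection matrices):

```python
import numpy as np

def class_semantics(token_weights, gt_labels, num_classes):
    """Mean per-token entropy of soft class-vote histograms.

    token_weights: (K, H, W) soft support weights, one map per token.
    gt_labels:     (H, W) integer ground-truth class labels.
    Lower values indicate more semantic (uni-modal) tokens."""
    entropies = []
    for w in token_weights:
        hist = np.zeros(num_classes)
        # each pixel soft-votes for its ground-truth class with the token's weight
        np.add.at(hist, gt_labels.ravel(), w.ravel())
        p = hist / (hist.sum() + 1e-12)
        p = p[p > 0]
        entropies.append(float(-(p * np.log(p)).sum()))
    return float(np.mean(entropies))
```

A token supported entirely by one class yields zero entropy, while a token spread evenly over two classes yields $\ln 2$.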
Note that {\em the lower the entropy the more semantic the token representations are}, because a lower entropy indicates a uni-modal distribution, suggesting that most of the token support comes from a single object class. A dataset-level measure can be obtained by averaging the mean entropy over all images. \vspace{0.07in} \noindent {\bf Instance-Semantics ($\mathcal{S}_I$).} We extend this metric to quantify the ability of our tokens to distinguish between individual object instances and not just classes. In this case, we make the bins of the histogram correspond to the individual instances of each of the ``things'' classes (classes that have instance annotations). The rest of the metric is computed in the same way. We note that a desirably low value for instance-semantics inherently leads to a low value for class-semantics, but not vice versa. \vspace{0.07in} \noindent {\bf Diversity.} One could minimize the semantics metrics above by having {\em all} tokens focus on a single object class or instance. Hence it is desirable to also measure the diversity of representations, for which we propose the token diversity metric. To compute it, we first take the mean of the normalized histograms mentioned above, then measure the variance of the histograms for an image, and finally average these variances over the entire dataset. The higher the variance, the more diverse the tokens; we therefore desire a high token diversity. As with semantics, diversity can be defined at the class or instance level. For the rest of the paper, we refer to class-level token diversity as $\mathcal{D}_C$ and instance-level token diversity as $\mathcal{D}_I$. \vspace{-6pt} \section{Experiments} To show the effectiveness of our SGR framework we experiment with three widely used semantic segmentation datasets: Cityscapes~\cite{Cordts2016Cityscapes}, COCO-Stuffs-10K~\cite{caesar2018coco} and ADE-20K~\cite{zhou2017scene}.
We add our SGR unit just before the segmentation head of the DeepLabV3~\cite{chen2017rethinking} semantic segmentation network. We then conduct three sets of experiments: (1) experiments on semantic segmentation, where we show that the resulting method achieves SoTA performance on all three datasets (ranking 1st on one dataset and 2nd on the other two); (2) experiments that compare the semantics and diversity of our resulting representations with recent alternatives, GCNET~\cite{chen2019graph} and Maskformer~\cite{cheng2021per}, where we show the superiority of our representations; and (3) experiments that demonstrate the effectiveness of the features learned by our global reasoning component, by transferring the learned weights from the semantic segmentation network and fine-tuning on downstream tasks such as object detection and instance segmentation on the MS-COCO dataset~\cite{lin2014microsoft}. We also carry out a series of ablation studies to justify our design choices. \subsection{Datasets} \noindent \textbf{Cityscapes}~\cite{Cordts2016Cityscapes} contains high-resolution street-view images captured with a dashcam in different European cities. There are 2975 images for training, 500 for validation, and 1525 for testing, with 19 labeled semantic classes. \noindent \textbf{COCO-Stuffs-10K}~\cite{caesar2018coco} is a subset of the MS-COCO dataset~\cite{lin2014microsoft} commonly used for semantic segmentation. It has pixel-level annotations for 171 semantic classes, of which 80 are ``things'' and 91 are ``stuff'' classes. There are 9K images (of varying resolution) for training and 1K for testing. \noindent \textbf{ADE-20K}~\cite{zhou2017scene} is a subset of the ADE20K-Full dataset containing several indoor and outdoor scenes with 150 semantic classes. It contains around 20K images of varying resolutions for training and 2K for validation.
\noindent \textbf{MS-COCO}~\cite{lin2014microsoft} is a large-scale dataset used for object detection, segmentation, and image captioning. \subsection{Implementation Details} \vspace{-3pt} \paragraph{Semantic Segmentation.} For semantic segmentation, we add our SGR component after the final layer of the ResNet~\cite{he2016deep} backbone, pre-trained on ImageNet, just before the segmentation head of DeepLabV3~\cite{chen2017rethinking}. DeepLabV3 uses a multi-grid approach with dilated convolutions during training. The last two downsampling layers are removed, resulting in an output stride of 8. We use the SGD optimizer with a momentum of 0.9~\cite{sutskever2013importance} and a polynomial learning rate policy where the learning rate decays as $ (1 - \frac{iter}{total\_iter})^{0.9}$. For Cityscapes~\cite{Cordts2016Cityscapes}, we use a batch size of 8 and a crop size of $768 \times 768$ with an initial base learning rate of 0.006. For both COCO-Stuffs~\cite{caesar2018coco} and ADE-20K~\cite{zhou2017scene} we use a batch size of 16, a crop size of $512 \times 512$, and an initial base learning rate of 0.004. For all our experiments, we multiply the initial base learning rate by 10.0 for the parameters of our SGR component and the layers that correspond to the segmentation head. We train Cityscapes, COCO-Stuffs-10K, and ADE-20K for 240, 140, and 120 epochs respectively. We report both single-scale inference and multi-scale inference with a horizontal flip at scales 0.5, 0.75, 1.0, 1.25, 1.5, and 1.75, following existing works~\cite{yuan2020object,cheng2021per,fu2019dual, chen2017rethinking}. \vspace{-4mm} \paragraph{Transfer to Downstream Tasks.} To show the effectiveness of the features learned by our framework, we transfer the model trained for semantic segmentation to object detection and instance segmentation. We first remove the segmentation head from our semantic segmentation network trained on COCO-Stuffs-10K and use it as a backbone for Mask-RCNN~\cite{he2017mask}, which we fine-tune on MS-COCO for object detection and instance segmentation. We train our model on MS-COCO \textbf{train2017}, which has 118K images, and evaluate on \textbf{val2017}, which has 5K images. For a fair comparison with other backbones, we train with the same batch size, learning rate, and number of iterations: a batch size of 8, an initial learning rate of 0.02, and SGD with a momentum of 0.9 and a weight decay of 0.0001. We train for 270K iterations, with the learning rate decreased by a factor of 10 at 210K and 250K iterations. Additional details are given in the Supplemental. \vspace{-3pt} \subsection{Results} \vspace{-2pt} \paragraph{Semantic segmentation.} The performance of our approach on semantic segmentation is shown in Table~\ref{tab:seg_result}. As can be observed, our model obtains competitive performance across all three datasets compared to the state-of-the-art (SoTA) models that use a ResNet backbone. We achieve SoTA for COCO-Stuffs-10K~\cite{caesar2018coco} in both single-scale \textbf{(s.s.)} and multi-scale \textbf{(m.s.)} settings.
On ADE-20K our approach achieves the second-best performance, behind only Maskformer~\cite{cheng2021per}. However, unlike the other approaches that use ResNet-101, Maskformer uses an FPN-based pixel decoder, which produces higher-resolution (output stride 4) features. On the Cityscapes validation set, we also obtain a competitive result (third in the \textbf{s.s.} and second in the \textbf{m.s.} setting). However, Cityscapes has only 19 semantic classes, which are not evenly balanced (in terms of the number of pixels and presence in the dataset), compared to 150 and 171 semantic classes for ADE-20K and COCO-Stuffs respectively. Like Maskformer~\cite{cheng2021per}, we observe that our global reasoning benefits from a greater number of semantic classes, which diversifies the types of tokens over which the transformer can reason. \input{tabels/tab1} \vspace{-6pt} \input{tabels/tab2} \vspace{-6pt} \begin{figure*}[t!] \centering \includegraphics[width=\textwidth]{Qualitative} \caption{\textbf{Qualitative results} showing that our SGR component generates more semantically meaningful and diverse tokens. In all three images, SGR is able to disambiguate between instances (it differentiates the cow on the left in the first image, and different groups/instances of cars in the last row), unlike Maskformer~\cite{cheng2021per}. GCNET~\cite{chen2019graph} tokens, on the other hand, lack strong semantic meaning.} \label{qualitative} \end{figure*} \paragraph{Class and instance-semantics.} We quantify the interpretability of the tokens generated by SGR at the class and instance level using the metrics discussed in Section~\ref{sec:metric}, and compare against the intermediate representations of Maskformer~\cite{cheng2021per} and the tokens of GLoRE~\cite{chen2019graph}. The results are shown in Table~\ref{tab:semanticness}.
$\mathcal{S}_C$ and $\mathcal{S}_I$ denote class-level and instance-level semantics respectively, while $\mathcal{D}_C$ and $\mathcal{D}_I$ indicate token diversity at the class and instance level. As can be observed, our tokens are more semantically meaningful than the other intermediate representations at both the class and instance levels, while at the same time being more diverse. As mentioned in Section~\ref{sec:metric}, a lower semantics value at the class or instance level means that a token aggregates information from only that particular class or instance, and a higher diversity ensures that our tokens collectively aggregate information from diverse classes and instances. For instance-semantics, we report results on MS-COCO because COCO-Stuffs does not have instance annotations, while the models themselves are trained on COCO-Stuffs. \begin{table} \vspace{1mm} \centering\resizebox{\columnwidth}{!}{ \begin{tabular}{@{}|p{3cm}| r |r| r| r| r | r| @{}} \hline Backbone & $AP_{bbox}$ & $AP_{50}$ & $AP_{75}$ & $AP_{S}$ & $AP_M$ & $AP_L$ \\ \hline Res101-C4~\cite{he2016deep, he2017mask} & 40.29 & 59.58 & 43.37 & 22.41 & 44.94 & 55.05\\ Res101-GCNET~\cite{chen2019graph} & 38.85 & 58.82 & 42.30 & 21.62 & 42.76 & 51.88 \\ \textbf{Ours-Res101-SGR} & \textbf{41.91} & \textbf{62.79} & \textbf{45.23} & \textbf{22.95} & \textbf{46.04} & \textbf{56.32}\\ \hline Ours (w/o token sup.) & 33.48 & 51.72 & 36.2 & 18.73 & 36.96 & 45.84\\ \hline \end{tabular} } \caption{{\bf Transfer to Object Detection.} Results showing transfer learning performance on object detection using Mask-RCNN~\cite{he2017mask}.
All the models are initialized with the corresponding pre-trained backbones, trained by us on COCO \textbf{train2017}, and evaluated on \textbf{val2017} for a fair comparison.} \label{tab:object_detection} \vspace{-6pt} \end{table} \begin{table} \vspace{1mm} \centering\resizebox{\columnwidth}{!}{ \begin{tabular}{@{}|p{3cm}| r |r| r| r| r | r| @{}} \hline Backbone & $AP_{mask}$ & $AP_{50}$ & $AP_{75}$ & $AP_{S}$ & $AP_M$ & $AP_L$ \\ \hline Res101-C4 & 34.88 & 56.16 & 37.27 & 15.30 & 38.32 & 53.18\\ Res101-GCNET & 34.35 & 55.08 & 36.43 & 15.07 & 38.33 & 51.37 \\ \textbf{Ours-Res101-SGR} & \textbf{37.06} & \textbf{59.28} & \textbf{39.31} & \textbf{16.30} & \textbf{40.95} & \textbf{56.19}\\ \hline Ours (w/o token sup.) & 29.73 & 48.75 & 31.41 & 13.55 & 33.85 & 46.41\\ \hline \end{tabular} } \caption{{\bf Transfer to Instance Segmentation.} Results showing transfer learning performance on instance segmentation using Mask-RCNN~\cite{he2017mask}. All the models are initialized with the corresponding pre-trained backbones, trained by us on COCO \textbf{train2017}, and evaluated on \textbf{val2017} for a fair comparison.} \label{tab:instance_segment} \vspace{-6pt} \end{table} \begin{table} \vspace{1mm} \centering\resizebox{\columnwidth}{!}{ \begin{tabular}{@{}|l| p{2.5cm} | r| r| r| r | r| @{}} \hline \multicolumn{2}{|c|}{ } & \multicolumn{3}{c|}{COCO-Stuffs-10K} & \multicolumn{2}{c|}{MS-COCO}\\ \hline Backbone & Method & mIOU(m.s.) & $\mathcal{S}_C$\textbf{$\downarrow$} & $\mathcal{D}_C$\textbf{$\uparrow$} & $\mathcal{S}_I$ \textbf{$\downarrow$} & $\mathcal{D}_I$ \textbf{$\uparrow$} \\ \hline Res-101 & token sup. & \textbf{39.7} & \textbf{0.226} & \textbf{0.389} & \textbf{0.315} & \textbf{0.316} \\ Res-50 & token sup. & 37.1 & 0.235 & 0.371 & 0.331 & 0.306 \\ Res-101 & w/o token sup. & 38.6 & 0.556 & 0.001 & 0.640 & 0.001\\ \hline \end{tabular} } \caption{{\bf Ablation Studies on COCO-Stuffs-10K.} mIOU and class-semantics are reported on COCO-Stuffs-10K.
Instance semantics and instance diversity are reported on COCO val2017. All models for the ablation studies are trained on COCO-Stuffs-10K train. $\mathcal{S}_C$ and $\mathcal{S}_I$ represent class and instance semantics respectively, while $\mathcal{D}_C$ and $\mathcal{D}_I$ represent class- and instance-level token diversity. } \label{tab:Ablation} \vspace{-6pt} \end{table} \vspace{0.07in} \noindent {\bf Transfer to downstream tasks.} Tables~\ref{tab:object_detection} and \ref{tab:instance_segment} show the performance of our proposed component when transferred to the downstream tasks of object detection and instance segmentation on the MS-COCO~\cite{lin2014microsoft} dataset using Mask-RCNN~\cite{he2017mask}. We compare against two other backbones: ResNet-101 (Res101-C4) pre-trained on ImageNet, and GLoRE (Res101-GCNET)~\cite{chen2019graph} pre-trained on the COCO-Stuffs-10K semantic segmentation task. As can be observed from Tables~\ref{tab:object_detection} and \ref{tab:instance_segment}, SGR outperforms both on the object detection and instance segmentation tasks. This demonstrates that our more semantically interpretable and diverse token representations allow us to learn richer features that are broadly more useful and transferable. Note that these downstream tasks require the ability to discern multiple instances, and the instance-centric way in which SGR aggregates information allows us to achieve this improved performance. This is further highlighted in the comparison against the model transferred from GLoRE~\cite{chen2019graph}, which also aggregates features into multiple tokens and reasons between them, but whose tokens lack semantic coherence. \vspace{-5mm} \noindent \paragraph{Ablation Studies.} The last row of Tables~\ref{tab:object_detection} and \ref{tab:instance_segment} further highlights the importance of supervising tokens to aggregate semantically meaningful regions.
As shown in those tables, when we train our SGR model on COCO-Stuffs-10K without token supervision, the performance on downstream tasks drops drastically. Table~\ref{tab:Ablation} shows the semantic segmentation performance of our method with a different backbone (ResNet-50) and the corresponding class and instance semantics and diversity. As can be observed, even with a weaker backbone our model retains semantics at both the class and instance level reasonably well and produces diverse tokens. We further report the performance of SGR when the tokens are not supervised to be semantically meaningful: not only does the semantic segmentation performance deteriorate, but the tokens are no longer semantically meaningful (as expected). \noindent {\bf Qualitative Results.} Figure~\ref{qualitative} shows qualitative results for the generated tokens on multiple images. As can be observed, our tokens are more semantically interpretable and diverse compared to those of GCNET~\cite{chen2019graph} and Maskformer~\cite{cheng2021per}. Crucially, compared to Maskformer, which also supervises tokens, SGR can distinguish between instances of objects at different spatial locations; \eg, in the first row, one of the tokens of SGR distinguishes the left cow from the other, while Maskformer~\cite{cheng2021per} fails to do so. Similarly, SGR distinguishes the rightmost horse in the second image, which is disjoint from the rest, and in the last image three different tokens of SGR attend to three different groups of cars where the other methods fail. \vspace{-6pt} \section{Conclusion} \vspace{-6pt} To summarize, we propose a novel component that learns to semantically group image features into latent tokens and reason between them using self-attention. The losses we propose allow our latent representations to distinguish between individual connected components of a semantic class.
We also propose new metrics to demonstrate that our latent tokens are meaningful and semantically interpretable at both class- and instance-levels. Moreover, we achieve competitive performance compared to the state-of-the-art using CNN backbones on semantic segmentation. We have also demonstrated that we learn a rich set of features that can be transferred to downstream object detection and instance segmentation tasks. \paragraph{Acknowledgments} This work was funded, in part, by the Vector Institute for AI, Canada CIFAR AI Chair, NSERC CRC and an NSERC DG and Accelerator Grants. Hardware resources used in preparing this research were provided, in part, by the Province of Ontario, the Government of Canada through CIFAR, and companies sponsoring the Vector Institute\footnote{\href{www.vectorinstitute.ai/\#partners}{www.vectorinstitute.ai/\#partners}}. Additional support was provided by JELF CFI grant and Compute Canada under the RAC award. Finally, we sincerely thank Gaurav Bhatt for his valuable feedback and help with the paper draft. \begin{comment}
\subsection{Margins and page numbering} All printed material, including text, illustrations, and charts, must be kept within a print area $6\frac{7}{8}$ inches (17.46 cm) wide by $8\frac{7}{8}$ inches (22.54 cm) high. Page numbers should be in the footer, centered and $\frac{3}{4}$ inches from the bottom of the page. The review version should have page numbers, yet the final version submitted as camera ready should not show any page numbers. The \LaTeX\ template takes care of this when used properly. \subsection{Type style and fonts} Wherever Times is specified, Times Roman may also be used. If neither is available on your word processor, please use the font closest in appearance to Times to which you have access. MAIN TITLE. Center the title $1\frac{3}{8}$ inches (3.49 cm) from the top edge of the first page. The title should be in Times 14-point, boldface type. Capitalize the first letter of nouns, pronouns, verbs, adjectives, and adverbs; do not capitalize articles, coordinate conjunctions, or prepositions (unless the title begins with such a word). Leave two blank lines after the title. AUTHOR NAME(s) and AFFILIATION(s) are to be centered beneath the title and printed in Times 12-point, non-boldface type. This information is to be followed by two blank lines. The ABSTRACT and MAIN TEXT are to be in a two-column format. MAIN TEXT. Type main text in 10-point Times, single-spaced. Do NOT use double-spacing. All paragraphs should be indented 1 pica (approx.~$\frac{1}{6}$ inch or 0.422 cm). Make sure your text is fully justified---that is, flush left and flush right. Please do not place any additional blank lines between paragraphs. Figure and table captions should be 9-point Roman type as in \cref{fig:onecol,fig:short}. Short captions should be centred. \noindent Callouts should be 9-point Helvetica, non-boldface type. Initially capitalize only the first word of section titles and first-, second-, and third-order headings. FIRST-ORDER HEADINGS. 
(For example, {\large \bf 1. Introduction}) should be Times 12-point boldface, initially capitalized, flush left, with one blank line before, and one blank line after. SECOND-ORDER HEADINGS. (For example, { \bf 1.1. Database elements}) should be Times 11-point boldface, initially capitalized, flush left, with one blank line before, and one after. If you require a third-order heading (we discourage it), use 10-point Times, boldface, initially capitalized, flush left, preceded by one blank line, followed by a period and your text on the same line. \subsection{Footnotes} Please use footnotes\footnote{This is what a footnote looks like. It often distracts the reader from the main flow of the argument.} sparingly. Indeed, try to avoid footnotes altogether and include necessary peripheral observations in the text (within parentheses, if you prefer, as in this sentence). If you wish to use a footnote, place it at the bottom of the column on the page on which it is referenced. Use Times 8-point type, single-spaced. \subsection{Cross-references} For the benefit of author(s) and readers, please use the {\small\begin{verbatim} \cref{...} \end{verbatim}} command for cross-referencing to figures, tables, equations, or sections. This will automatically insert the appropriate label alongside the cross-reference as in this example: \begin{quotation} To see how our method outperforms previous work, please see \cref{fig:onecol} and \cref{tab:example}. It is also possible to refer to multiple targets as once, \eg~to \cref{fig:onecol,fig:short-a}. You may also return to \cref{sec:formatting} or look at \cref{eq:also-important}. \end{quotation} If you do not wish to abbreviate the label, for example at the beginning of the sentence, you can use the {\small\begin{verbatim} \Cref{...} \end{verbatim}} command. Here is an example: \begin{quotation} \Cref{fig:onecol} is also quite important. 
\end{quotation} \subsection{References} List and number all bibliographical references in 9-point Times, single-spaced, at the end of your paper. When referenced in the text, enclose the citation number in square brackets, for example~\cite{Authors14}. Where appropriate, include page numbers and the name(s) of editors of referenced books. When you cite multiple papers at once, please make sure that you cite them in numerical order like this \cite{Alpher02,Alpher03,Alpher05,Authors14b,Authors14}. If you use the template as advised, this will be taken care of automatically. \begin{table} \centering \begin{tabular}{@{}lc@{}} \toprule Method & Frobnability \\ \midrule Theirs & Frumpy \\ Yours & Frobbly \\ Ours & Makes one's heart Frob\\ \bottomrule \end{tabular} \caption{Results. Ours is better.} \label{tab:example} \end{table} \subsection{Illustrations, graphs, and photographs} All graphics should be centered. In \LaTeX, avoid using the \texttt{center} environment for this purpose, as this adds potentially unwanted whitespace. Instead use {\small\begin{verbatim} \centering \end{verbatim}} at the beginning of your figure. Please ensure that any point you wish to make is resolvable in a printed copy of the paper. Resize fonts in figures to match the font in the body text, and choose line widths that render effectively in print. Readers (and reviewers), even of an electronic copy, may choose to print your paper in order to read it. You cannot insist that they do otherwise, and therefore must not assume that they can zoom in to see tiny details on a graphic. When placing figures in \LaTeX, it's almost always best to use \verb+\includegraphics+, and to specify the figure width as a multiple of the line width as in the example below {\small\begin{verbatim} \usepackage{graphicx} ... 
\includegraphics[width=0.8\linewidth] {myfile.pdf} \end{verbatim} } \subsection{Color} Please refer to the author guidelines on the CVPR\ 2023\ web page for a discussion of the use of color in your document. If you use color in your plots, please keep in mind that a significant subset of reviewers and readers may have a color vision deficiency; red-green blindness is the most frequent kind. Hence avoid relying only on color as the discriminative feature in plots (such as red \vs green lines), but add a second discriminative feature to ease disambiguation. \section{Final copy} You must include your signed IEEE copyright release form when you submit your finished paper. We MUST have this form before your paper can be published in the proceedings. Please direct any questions to the production editor in charge of these proceedings at the IEEE Computer Society Press: \url{https://www.computer.org/about/contact}. \end{comment} \section*{Supplementary Material} \setcounter{section}{0} \section{Implementation Details} \subsection{Semantic Segmentation} As mentioned in our main paper, for semantic segmentation, the SGR component is added after the final layer of the ResNet~\cite{he2016deep} backbone, pre-trained on ImageNet, just before the segmentation head of DeepLabV3~\cite{chen2017rethinking}. DeepLabV3 uses a multi-grid approach with dilated convolutions during training. The last two downsampling layers are removed resulting in an output stride of 8. The models are trained using the SGD optimizer with momentum~\cite{sutskever2013importance} of 0.9 with a weight decay of 0.0001. We used a polynomial learning rate policy where the learning rate decreases with the formula $ (1 - \frac{iter}{total\_iter})^{0.9}$ with every iteration. During training, we applied random horizontal flips, random scaling between [0.5-2.0] and random color jitter following~\cite{yuan2020object,cheng2021per} for data augmentation. 
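For concreteness, the polynomial decay above can be sketched as follows; the base rate and total iteration count used here are arbitrary illustrative values, not taken from the paper.

```python
def poly_lr(base_lr, iteration, total_iters, power=0.9):
    """Polynomial learning-rate decay: base_lr * (1 - iter / total_iters) ** power."""
    return base_lr * (1.0 - iteration / float(total_iters)) ** power

# Illustrative values only: base rate 0.006 over a hypothetical 24000 iterations.
print(poly_lr(0.006, 0, 24000))      # 0.006 at the first iteration
print(poly_lr(0.006, 24000, 24000))  # 0.0 at the last iteration
```

The 10.0 multiplier for the SGR component and the segmentation head amounts to calling \texttt{poly\_lr} with a larger base rate for those parameter groups.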
For \textbf{Cityscapes}~\cite{Cordts2016Cityscapes}, following the random data augmentation, the images are cropped from the center with a crop size of $768 \times 768$. For both \textbf{ADE-20K}~\cite{zhou2017scene} and \textbf{COCO-Stuffs-10K}~\cite{caesar2018coco}, a center crop of size $512 \times 512$ is used following the above-mentioned random image transformations during training. The model on \textbf{Cityscapes}~\cite{Cordts2016Cityscapes} is trained with a batch size of 8 and an initial learning rate of 0.006. The models on \textbf{ADE-20K}~\cite{zhou2017scene} and \textbf{COCO-Stuffs-10K}~\cite{caesar2018coco} are trained with a batch size of 16 and an initial learning rate of 0.004. For all three datasets, the initial base learning rate is multiplied by a factor of 10.0 for the parameters of the SGR component and the layers that correspond to the segmentation head. When training across multiple GPUs, we apply synchronized batchnorm~\cite{zhang2018context} to synchronize batch statistics following existing work~\cite{chen2017rethinking, fu2019dual, chen2019graph, yuan2020object, cheng2021per}. We train Cityscapes, COCO-Stuffs-10K, and ADE-20K for 240, 140, and 120 epochs, respectively. \begin{figure}[t] \centering \includegraphics[width=1\columnwidth]{gt_components} \caption{{\bf Ground truth segmentation mask and corresponding connected components.} The connected components are assumed to give a lower bound on the number of instances; \eg, the illustrated image contains 6 people, but the number of connected components corresponding to the person class is only 3.} \label{fig:connected_comp} \vspace{-6pt} \end{figure} \begin{figure*}[!] \centering \includegraphics[width=\textwidth]{histogram} \caption{\textbf{Visualization of instance- and class-level histograms.} (Left) Image with ground truth instances of ``things'' classes. (2nd Column) Two different token projection matrices aggregating information from different concepts.
(3rd Column) Instance-level histograms. (Right) Class-level histograms. Even though the bottom token captures good class-level semantics (mostly aggregating features of cars), its instance-level semantics are poor. } \label{fig:histogram} \end{figure*} \begin{figure*}[!] \centering \includegraphics[width=\textwidth]{Qualitative_city} \caption{\textbf{Qualitative results on Cityscapes}. The leftmost two columns correspond to the image and ground truth semantic segmentation; the center column shows the predictions of our model; the rightmost two columns show predictions from Maskformer~\cite{cheng2021per} and GloRE~\cite{chen2019graph}. Red rectangles on the images of the two rightmost columns indicate locations where our model correctly classifies semantic classes but the corresponding models fail. } \label{qualitative-city} \end{figure*} \begin{figure*}[!] \centering \includegraphics[width=0.9\textwidth]{Qualitative_coco.png} \caption{\textbf{Qualitative results on COCO-Stuffs-10K}. The leftmost two columns correspond to the image and ground truth semantic segmentation; the center column shows the predictions of our model; the rightmost two columns show predictions from Maskformer~\cite{cheng2021per} and GloRE~\cite{chen2019graph}.} \label{qualitative-coco} \end{figure*} \begin{figure*}[!] \centering \includegraphics[width=\textwidth]{Qualitative_object.png} \caption{\textbf{Qualitative results for the downstream tasks of object detection and instance segmentation on MS-COCO}. The leftmost two columns correspond to the image and ground truth object locations with their corresponding segmentation; the center column shows the predictions using our backbone; the rightmost two columns show predictions using the Res101-C4 and Res101-GloRE~\cite{chen2019graph} backbones.
} \label{qualitative-object} \end{figure*} For all three datasets, we report both single-scale inference and multi-scale inference with horizontal flips at scales 0.5, 0.75, 1.0, 1.25, 1.5, and 1.75, following existing works~\cite{yuan2020object,cheng2021per,fu2019dual, chen2017rethinking}. During multi-scale inference, the final output is calculated by taking the mean of the probabilities over each scale and the corresponding flipped inputs. Following~\cite{fu2019dual, yuan2020object, cheng2021per}, for ADE-20K and COCO-Stuffs, we resize the shorter side of the image to the crop size followed by a center crop to ensure that all images are of the same size. \vspace{0.1in} \noindent {\bf Hyper-parameters for training.} For Hungarian matching, in training, we used $\rho = 1.0$ for the dice loss (see Eq. (5) in the main paper). For matching, as mentioned in the paper, the value of $L$ is set to 64. Hence, the top $L=64$ tokens are matched using the greedy matching approach based on the cost matrix (Figure 3 of the main paper). Once matched, we used a weight of 0.25 for the hyperparameter $\beta$ that controls the importance of the binary mask losses with respect to the cross-entropy loss when training the models (see Eq. (7)). \subsection{Transfer to Downstream Tasks} For transfer to the downstream tasks, we removed the segmentation head from our semantic segmentation network trained on COCO-Stuffs-10K and used it as a backbone for Mask-RCNN~\cite{he2017mask}, fine-tuned on the MS-COCO {\tt train2017} subset, which has 118K images, for object detection and instance segmentation. The same approach was adopted for the GloRE~\cite{chen2019graph} based backbone pretrained for segmentation on COCO-Stuffs-10K. For the Res101-C4 backbone, however, we used weights pretrained for classification on ImageNet. We report our results on the {\tt val2017} subset, which has 5K images.
The authors of Mask-RCNN used a batch size of 16 and trained on the {\tt trainval-135K} subset, reporting results on the {\tt minival} dataset, which is the same as {\tt val2017}. Therefore, for a fair comparison, we trained all backbones from scratch on MS-COCO {\tt train2017} using the same batch size, learning rate, and number of iterations. We used a batch size of 8, an initial learning rate of 0.02, and SGD with a momentum of 0.9 and weight decay of 0.0001 to train the models. We trained for 270K iterations with the learning rate decreased by a factor of 0.1 at 210K and 250K iterations. Following Mask-RCNN~\cite{he2017mask}, the RPN anchors span 5 scales and 3 aspect ratios. For all the reported backbones, 512 ROIs are sampled with a positive-to-negative ratio of 1:3. At inference time, we used 1000 region proposals for all three backbones. Following Mask-RCNN~\cite{he2017mask}, box predictions are performed on these region proposals and the top 100 scoring boxes are chosen by non-maximum suppression, on which the mask predictor is run. \vspace{-2mm} \section{Ground Truth connected components} Figure~\ref{fig:connected_comp} shows the result of applying connected component analysis on ground truth semantic segmentation masks. As can be seen in the figure, the class {\tt person} is divided into three different components, although there are 6 people in total. Hence, we observe that connected components generally form a lower bound on the number of instances. Similarly, the ``stuffs'' class {\tt ground} is divided into two different components, and the class {\tt banana} has only one component. For ``stuffs'' classes the notion of instances is not well defined, yet connected components serve as a good proxy for disjoint regions that are often semantically meaningful within the scene.
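A minimal sketch of this counting procedure, assuming 4-connectivity (the helper \texttt{count\_components} and the toy mask are hypothetical illustrations, not the authors' implementation):

```python
import numpy as np

def count_components(mask, class_id):
    """Count 4-connected components of one class in an (H, W) label mask.

    Connected components give a lower bound on the number of instances of
    that class, as with the person class discussed above.
    """
    h, w = mask.shape
    seen = np.zeros((h, w), dtype=bool)
    count = 0
    for i in range(h):
        for j in range(w):
            if mask[i, j] == class_id and not seen[i, j]:
                count += 1
                stack = [(i, j)]      # flood-fill one component
                seen[i, j] = True
                while stack:
                    y, x = stack.pop()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny, nx] == class_id
                                and not seen[ny, nx]):
                            seen[ny, nx] = True
                            stack.append((ny, nx))
    return count

# Toy mask: two separate blobs of class 1.
mask = np.array([[1, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 1]])
print(count_components(mask, 1))  # 2
```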
\vspace{-2mm} \section{Visualization of histograms for tokens} Figure~\ref{fig:histogram} shows the visualization of class-level and instance-level histograms for two different tokens, which we use to compute the class- and instance-level semantics metrics (defined in the main text). The lower the entropy of each of these histograms, the more semantically meaningful the tokens are at the class or instance level of granularity. As can be observed in Figure~\ref{fig:histogram}, the first token has high instance- and class-level semantics since it mostly aggregates information from a single car, in this case, car\_7. The lower token, despite being highly semantic at the class level (having lower entropy at the class level), is poor at capturing instance-level semantics. Hence, a token that is semantic at the instance level is also highly semantic at the class level, but not the other way around. \section{Qualitative Results} \subsection{Semantic Segmentation} \noindent {\bf Qualitative Results on Cityscapes.} Figure~\ref{qualitative-city} shows qualitative results of semantic segmentation of our model on Cityscapes compared to Maskformer~\cite{cheng2021per} and GloRE~\cite{chen2019graph}. The red rectangles indicate locations where our model performs better than the others; \eg, in the first image, Maskformer has misclassified the {\em sky} as another class. GloRE has misclassified the {\em pavement} (as shown by the red rectangle in the rightmost corner) and a {\em road sign} in the middle. \vspace{0.1in} \noindent {\bf Qualitative Results on COCO-Stuffs-10K.} Figure~\ref{qualitative-coco} shows qualitative results of semantic segmentation of our model on COCO-Stuffs-10K compared to Maskformer~\cite{cheng2021per} and GloRE~\cite{chen2019graph}. For the first image, the predictions of our model are clearly more accurate compared to the other two models. In the second image, all three models misclassified the {\em snow} as {\em sky} to some extent.
However, Maskformer has greater misclassification of {\em snow} as {\em sky} compared to the rest. GloRE, on the other hand, overestimated the region of {\em trees} and had a lower mIOU for {\em skis}, but has slightly better classification of {\em snow}. In the third image, Maskformer has completely misclassified the {\em microwave}, and GloRE incorrectly classified the {\em table} in the center and the {\em bottles} on top of the shelves. In the fourth image, we clearly do a better job at correctly predicting the segments compared to the rest. In the last image, the two competing models failed to classify the {\em mountains} correctly. They also misclassified the {\em gravel road}, which our model correctly classified. \subsection{Object Detection and Instance Segmentation} Figure~\ref{qualitative-object} shows qualitative results of object detection and instance segmentation using our pre-trained backbone in Mask-RCNN~\cite{he2017mask} on MS-COCO, compared to the pre-trained Res101-C4 and Res101-GloRE~\cite{chen2019graph} backbones. In the first image, the other two backbones misclassified the leftmost {\em elephant} as a {\em person}, which we correctly identify and segment. Moreover, they missed the rightmost instance of the {\em elephant}, which the model using our backbone was able to detect. The overall segmentation quality of each elephant was also better with our backbone. In the second image, the other backbones erroneously classified the {\em lamp} on the left as a {\em parking meter}. This is likely due to the lack of the global reasoning, which our backbone provides, needed to distinguish between these two objects within the context of the scene. Both of them also missed the {\em backpack} of the person on the right. The model equipped with our backbone consistently identifies objects and segments them better. In the third image, our backbone segments the {\em couches} better than the other backbones.
In the fourth image, the instance segmentation of the {\em person} is better than with the other two backbones. Moreover, the Res101-C4 backbone missed the {\em handbag} altogether, while the Res101-GloRE backbone cannot segment the {\em handbag} properly. In the final image, the Res101-C4 backbone incorrectly labelled the {\em US flag} as a {\em chair}. Besides, its instance segmentation quality is lower than that of our backbone. The Res101-GloRE backbone failed to identify the {\em truck} completely, labelling part of it as a {\em motorcycle}, and segmented it inaccurately. The general quality of its object segmentation is also worse. All these qualitative results demonstrate that our SGR component, due to instance-like supervision through connected components, learns richer features that, when transferred to downstream tasks, improve performance in object detection and instance segmentation. {\small \bibliographystyle{ieee_fullname}
\section{Introduction} Due to advances in imaging devices, large-scale images have become increasingly available, and parallel algorithms for image processing have become a necessity. One suitable method for parallel computation is the domain decomposition method~(DDM), in which we solve a problem by splitting its domain into several smaller subdomains and solving the smaller problem in each subdomain separately. We consider the Rudin--Osher--Fatemi (ROF) model~\cite{ROF:1992} as a model problem, which is a classical and effective model for image denoising: \begin{equation} \label{ROF} \min_{u \in BV(\Omega)} \frac{\alpha}{2}\int_{\Omega} {(u-f)^2 \,dx} + TV(u), \end{equation} where $\Omega$ is the rectangular domain of an image, $f \in L^2 (\Omega)$ is an observed noisy image, $\alpha$ is a positive denoising parameter, and $TV(u)$ is the total variation measure defined by \begin{equation*} \label{TV} TV(u) = \sup \left\{ \int_{\Omega} {u \mathrm{div} \mathbf{q} \,dx} : \mathbf{q} \in (C_0^1 (\Omega))^2 , |\mathbf{q} | \leq 1 \right\}. \end{equation*} Here, $| \mathbf{q}| \leq 1$ means that $|\mathbf{q} (x) | \leq 1$ for a.e.\ $x \in \Omega$. The solution space $BV(\Omega)$ denotes the space of functions in $L^1 (\Omega)$ with finite total variation, which is a Banach space equipped with the norm $\|u \|_{BV(\Omega)} = \| u \|_{L^1 (\Omega)} + |Du|(\Omega)$. It is well known that the ROF model has an anisotropic diffusion property, so that it preserves edges and discontinuities in images~\cite{SC:2003}. While overlapping DDMs for image restoration were considered in~\cite{FLS:2010,XTW:2010}, nonoverlapping DDMs for total variation minimization were proposed in \cite{FS:2009, HL:2013}. However, Lee and Nam~\cite{LN:2017} gave a counterexample showing that an overlapping DDM may fail to converge to the global minimizer. In~\cite{LLWY:2016}, Lee~et al.\ suggested DDMs with the primal-dual stitching technique.
In \cite{CTWY:2015, HL:2015, LN:2017}, DDMs based on the dual total variation minimization were proposed. In particular, Chang~et al.~\cite{CTWY:2015} showed that the overlapping subspace correction methods for the dual ROF model have $O(1/n)$ convergence. There are several major difficulties in designing DDMs for~\cref{ROF}. First, the energy functional in~\cref{ROF} is nonsmooth, which makes the design of solvers difficult. In addition, the energy functional is nonseparable in the sense that it cannot be expressed as the sum of local energy functionals on the subdomains, due to the total variation term. Finally, the solution space $BV(\Omega)$ allows a solution to be discontinuous across the subdomain interfaces, so that it is difficult to design an appropriate interface condition for a solution. One way to overcome such difficulties is to consider the Fenchel--Rockafellar dual problem as in \cite{CTWY:2015, HL:2015, LN:2017}, which is stated as \begin{equation} \label{dual_ROF_old} \min_{\mathbf{p} \in (C_0^1 (\Omega))^2} \frac{1}{2\alpha} \int_{\Omega} ( \mathrm{div} \mathbf{p} + \alpha f )^2 \,dx \hspace{0.5cm} \textrm{subject to } |\mathbf{p}| \leq 1. \end{equation} Even though it is cumbersome to treat the inequality constraint $|\mathbf{p}| \leq 1$, \cref{dual_ROF_old} is more suitable for DDMs, since the energy functional is separable and the solution space $(C_0^1 (\Omega))^2$ has some regularity on the subdomain interfaces. The desired primal solution $u$ is recovered from the dual solution $\mathbf{p}$ of~\cref{dual_ROF_old} by the relation \begin{equation*} u = f + \frac{1}{\alpha} \mathrm{div} \mathbf{p}. \end{equation*} Faster algorithms for solving \cref{dual_ROF_old} were developed in~\cite{BT:2009, Nesterov:2005}. In the existing works~\cite{Chambolle:2004, CTWY:2015, HL:2015, LN:2017} for~\cref{dual_ROF_old}, the problems were discretized in the finite difference framework.
Each pixel in an image was treated as a discrete point on a grid, and the dual variable was considered as a vector-valued function on the grid. The discrete gradient and divergence operators were defined by finite difference approximations of the continuous gradient and divergence operators. In this paper, we propose a finite element discretization for \cref{dual_ROF_old}, which is more suitable for DDMs than the existing ones. Each pixel in an image is treated as a square finite element, and the problem \cref{dual_ROF} is discretized using the conforming lowest order Raviart--Thomas element~\cite{RT:1977}. Based on the proposed discretization, we propose a primal DDM that is similar to the classical Schur complement method for second order elliptic problems. Eliminating the interior degrees of freedom in each subdomain yields a minimization problem equivalent to the full-dimensional problem. The functional of the resulting minimization problem has enough regularity to adopt FISTA~\cite{BT:2009}. Thus, the proposed primal DDM achieves $O(1/n^2)$ convergence and, to the best of our knowledge, this is the best rate among the existing DDMs for the ROF model. In addition, we propose a primal-dual DDM based on an equivalent saddle point problem. The continuity of a solution on the subdomain interfaces is enforced by the method of Lagrange multipliers as in \cite{DCT:2016, FLP:2000, FR:1991}, which yields an equivalent saddle point problem in the original (primal) variable and the Lagrange multipliers (dual). The local problems for the proposed primal-dual DDM can be solved at a linear convergence rate, so that the method becomes very fast. The rest of the paper is organized as follows. In \cref{Sec:dual_ROF}, a conforming discretization of the dual ROF model with a Raviart--Thomas finite element space is introduced. A primal DDM based on an equivalent minimization problem on the subdomain interfaces is presented in \cref{Sec:primal_DD}.
A primal-dual DDM based on an equivalent saddle point problem is considered in \cref{Sec:pd_DD}. We present numerical results for the proposed methods in various settings in \cref{Sec:numerical}. Finally, we conclude the paper with some remarks in \cref{Sec:conclusion}. \section{The Dual ROF Model} \label{Sec:dual_ROF} \subsection{Preliminaries} We review some preliminaries about the dual ROF model. The space $H(\div; \Omega)$ is defined as \begin{equation*} H(\div; \Omega) = \left\{ \mathbf{p} \in (L^2 (\Omega))^2 : \mathrm{div} \mathbf{p} \in L^2 (\Omega) \right\}. \end{equation*} It is a Hilbert space equipped with an inner product \begin{equation*} \left< \mathbf{p}, \mathbf{q} \right>_{H(\div; \Omega)} = \int_{\Omega} \mathbf{p} \cdot \mathbf{q} \,dx + \int_{\Omega} \mathrm{div} \mathbf{p} \mathrm{div} \mathbf{q} \,dx, \end{equation*} and its induced norm is called the $H(\div; \Omega)$ graph norm. A remarkable property of $H(\div; \Omega)$ is that, for a vector function $\mathbf{p} \in H(\div; \Omega)$, the normal component $\mathbf{p} \cdot \mathbf{n}$ on $\partial \Omega$ is well-defined \cite{BBF:2013, GR:2012}. We define $H_0 (\div ; \Omega)$ as the subspace of $H(\div; \Omega)$ with vanishing normal component on $\partial \Omega$. It can be shown that the space $H_0 (\div ; \Omega)$ is the closure of $(C_0^{\infty}(\Omega))^2$ in the $H(\div; \Omega)$ graph norm~\cite{Monk:2003}. Thus, it is natural to consider the following alternative formulation of~\cref{dual_ROF_old} using $H_0 (\div ; \Omega)$ as the solution space: \begin{equation} \label{dual_ROF} \min_{\mathbf{p} \in H_0 (\div ; \Omega)} \left\{ \mathcal{J}(\mathbf{p}) := \frac{1}{2\alpha} \int_{\Omega} (\mathrm{div} \mathbf{p} + \alpha f)^2 \,dx \right\} \hspace{0.5cm} \textrm{subject to } |\mathbf{p}| \leq 1. \end{equation} We notice that this formulation was also considered in \cite{CTWY:2015}. 
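For intuition about \cref{dual_ROF}, the constraint $|\mathbf{p}| \leq 1$ can be handled by a plain projected-gradient iteration in the classical finite difference setting of~\cite{Chambolle:2004}. The sketch below is illustrative only: it is not the finite element discretization proposed in this paper, and the step size and iteration count are arbitrary choices subject to the standard bound $\tau \leq 1/8$ for this stencil.

```python
import numpy as np

def grad(u):
    """Forward-difference gradient with homogeneous Neumann boundary."""
    gx = np.zeros_like(u)
    gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    """Backward-difference divergence, the negative adjoint of grad."""
    d = np.zeros_like(px)
    d[0, :] = px[0, :]
    d[1:-1, :] = px[1:-1, :] - px[:-2, :]
    d[-1, :] = -px[-2, :]
    d[:, 0] += py[:, 0]
    d[:, 1:-1] += py[:, 1:-1] - py[:, :-2]
    d[:, -1] += -py[:, -2]
    return d

def dual_rof(f, alpha, tau=0.1, iters=200):
    """Projected gradient descent on the dual ROF problem.

    Minimizes (1/(2*alpha)) * ||div p + alpha*f||^2 subject to |p| <= 1
    pointwise, then recovers the denoised image u = f + div(p)/alpha.
    The step bound tau <= 1/8 follows from ||div||^2 <= 8 for this stencil.
    """
    px = np.zeros_like(f)
    py = np.zeros_like(f)
    for _ in range(iters):
        gx, gy = grad(div(px, py) + alpha * f)
        px, py = px + tau * gx, py + tau * gy      # gradient descent step
        nrm = np.maximum(1.0, np.sqrt(px**2 + py**2))
        px, py = px / nrm, py / nrm                # project onto |p| <= 1
    return f + div(px, py) / alpha, (px, py)
```

Each sweep takes one gradient step on $\mathcal{J}$ and projects $\mathbf{p}$ pointwise onto the unit ball; the primal image is then recovered via $u = f + \frac{1}{\alpha} \mathrm{div} \mathbf{p}$.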
\subsection{Finite Element Discretizations} A digital image consists of a number of rows and columns of pixels, each holding a value representing the intensity at a specific point. We regard each pixel as a unit square and an image as a piecewise constant function in which each piece is a single pixel. In this sense, each pixel in a digital image is a square finite element whose side length equals~1. Let $\mathcal{T}$ be the collection of all elements in $\Omega$, i.e., pixels. We define the space $X$ for the image by \begin{equation*} X = \left\{ u \in L^2 (\Omega) : u|_{T} \textrm{ is constant } \forall T \in \mathcal{T} \right\}. \end{equation*} Then it is clear that $X \subset BV(\Omega)$, which means that the discretization is conforming. Each degree of freedom of $X$ lies in an element (see \cref{Fig:dofs}(a)), and its corresponding basis function is \begin{equation*} \phi_T (x) = \begin{cases} 1 & \textrm{ if } x \in T , \\ 0 & \textrm{ if } x \not\in T , \end{cases} \hspace{0.5cm} T \in \mathcal{T}. \end{equation*} For $u \in X$ and $T \in \mathcal{T}$, let $(u)_T$ denote the degree of freedom of $u$ associated with the basis function $\phi_T$. With a slight abuse of notation, let $\mathcal{T}$ also indicate the set of indices of the basis functions for $X$; then we can represent $u$ by $$u = \sum_{T \in \mathcal{T}} (u)_T \phi_T.$$ \begin{figure}[] \centering \subfloat[][Degrees of freedom for $X$]{ \includegraphics[height=3.8cm]{./X_dofs.jpg} } \hspace{1.5cm} \subfloat[][Degrees of freedom for $Y$]{ \includegraphics[height=3.8cm]{./Y_dofs.jpg} } \caption{Degrees of freedom for the spaces $X$ and $Y$} \label{Fig:dofs} \end{figure} It is natural to determine the space $Y$ for the dual variable $\mathbf{p}$ such that the divergence of each element of $Y$ is in $X$. A suitable choice meeting this condition is the lowest order Raviart--Thomas element~\cite{RT:1977}.
We define $Y$ by \begin{equation*} Y = \left\{ \mathbf{q} \in H_0 (\div ; \Omega) : \mathbf{q}|_{T} \in \mathcal{RT}_0(T) \hspace{0.2cm}\forall T \in \mathcal{T} \right\}, \end{equation*} where $\mathcal{RT}_0(T)$ is the collection of vector functions $\mathbf{q}$:~$T \rightarrow \mathbb{R}^2$ of the form $$\mathbf{q} (x_1 , x_2) = \begin{bmatrix} a_1 + b_1 x_1 \\ a_2 + b_2 x_2 \end{bmatrix}.$$ In order for a piecewise $\mathcal{RT}_0(T)$-function to be in $H_0 (\div ; \Omega)$, a particular condition on the element interfaces should be satisfied, which is given in the following proposition~\cite{Nedelec:1980}. \begin{proposition} \label{Prop:FEM_interface} A vector function $\mathbf{q}$\emph{:} $\Omega \rightarrow \mathbb{R}^2$ is in $H(\div; \Omega)$ if and only if the restriction of $\mathbf{q}$ to each $T \in \mathcal{T}$ is in $H(\mathrm{div} ; T)$, and for each common edge $e = \bar{T_1} \cap \bar{T_2}$, we have $$ \mathbf{q} \cdot \mathbf{n} |_{T_1} + \mathbf{q} \cdot \mathbf{n} |_{T_2} = 0 \textrm{ on } e, $$ where $\mathbf{n} |_{T_i}$ is the outer normal to $\partial T_i$ on $e$, $i= 1,2$, so that $\mathbf{n}|_{T_1} = - \mathbf{n}|_{T_2}$. \end{proposition} \Cref{Prop:FEM_interface} gives a natural way to choose the degrees of freedom of the space $Y$. Let $\mathbf{q} \in Y$. Then the value of $\mathbf{q} \cdot \mathbf{n}$ is well-defined on each common edge of elements, where the direction of $\mathbf{n}$ is chosen as in \cref{Fig:dofs}(b). Therefore, we choose the degrees of freedom of $Y$ by the values of $\mathbf{q} \cdot \mathbf{n}$ on the element interfaces. To construct the corresponding basis functions, we consider a reference square $T_{\mathrm{ref}} = [0, 1]^2$. The outer normal component of a basis function $\bm{\psi}_{\mathrm{ref}}$ has the value $1$ on one edge, say $x_1 = 1$, and $0$ on the other edges. Such $\bm{\psi}_{\mathrm{ref}}$ is unique and given by $\bm{\psi}_{\mathrm{ref}}(x_1 , x_2 ) = (x_1 , 0)$.
Similarly, the other basis functions on $T_{\mathrm{ref}}$ are given by $(1-x_1, 0)$, $(0, x_2)$, and $(0, 1-x_2)$. Now, let $\mathcal{I}$ be the set of indices of the basis functions for $Y$, and let $\left\{\bm{\psi}_i \right\}_{i \in \mathcal{I}}$ be the basis. Also, for $\mathbf{p} \in Y$ and $i \in \mathcal{I}$, let $(\mathbf{p})_i$ denote the degree of freedom of $\mathbf{p}$ associated with the basis function $\bm{\psi}_i$; then we can write $$ \mathbf{p} = \sum_{i \in \mathcal{I}} {(\mathbf{p})_i \bm{\psi}_i}.$$ Next, we determine the norms and the inner products with which $X$ and $Y$ will be equipped. In $X$, the $L^2 (\Omega)$-inner product agrees with the Euclidean inner product, so it is natural to choose the inner product as $$ \left< u, v \right>_X = \int_{\Omega} {uv \,dx} = \sum_{T \in \mathcal{T}} {(u)_T (v)_T} $$ and the norm as its induced norm $$\| u \|_X^2 = \left< u, u \right>_X.$$ We set the inner product for $Y$ by the usual Euclidean inner product $$ \left< \mathbf{p}, \mathbf{q} \right>_Y = \sum_{i \in \mathcal{I}} {(\mathbf{p})_i (\mathbf{q})_i} $$ and the norm by its induced norm $$\| \mathbf{p} \|_Y^2 = \left< \mathbf{p}, \mathbf{p} \right>_Y.$$ \begin{remark} We equipped $Y$ with the Euclidean inner product rather than the $(L^2 (\Omega))^2$-inner product. The reason is that if we equip $Y$ with the $(L^2 (\Omega))^2$-inner product, then the $(L^2 (\Omega))^2$-mass matrix appears in the resulting algorithms, making computation more cumbersome. In the following, we show that using the Euclidean inner product instead of the $(L^2 (\Omega))^2$-inner product affects neither the quality of image denoising nor the rate of convergence. Assume that the image size is $n = M \times N$. Consider an $n \times n$ symmetric tridiagonal matrix~$\mathrm{trid}_n (\alpha , \beta)$ whose diagonal entries are~$\alpha$ and off-diagonal entries are~$\beta$.
Under an appropriate ordering of the degrees of freedom of~$Y$, one can see that the $(L^2 (\Omega))^2$-mass matrix is a block-diagonal matrix composed of~$N$ $\mathrm{trid}_{M-1} (\frac{2}{3}, \frac{1}{6})$-blocks and $M$ $\mathrm{trid}_{N-1} (\frac{2}{3}, \frac{1}{6})$-blocks. Hence, all eigenvalues are \begin{equation*} \frac{2}{3} + \frac{1}{3} \cos \left(\frac{k\pi}{M}\right), \hspace{0.5cm} k=1, \ldots ,M-1, \end{equation*} and \begin{equation*} \frac{2}{3} + \frac{1}{3} \cos \left(\frac{k\pi}{N}\right), \hspace{0.5cm} k=1, \ldots ,N-1. \end{equation*} See section~C.7 in~\cite{LeVeque:2007} for details. Since all eigenvalues lie in the interval $(\frac{1}{3}, 1)$, the $(L^2 (\Omega))^2$-mass matrix is spectrally equivalent to the identity matrix, which is obtained from the mass matrix by diagonal lumping with proper scaling. Therefore, one can conclude that the overall performance remains the same even if we use the Euclidean inner product. \end{remark} For a pixel $T = T_{ij} \in \mathcal{T}$ on the $i$th row and the $j$th column of the $M \times N$ image, let $\iota_{T, 1} \in \mathcal{I}$ be the index corresponding to the degree of freedom of~$Y$ on the edge shared by~$T_{ij}$ and~$T_{i+1,j}$. Similarly, let $\iota_{T, 2} \in \mathcal{I}$ be the one on the edge shared by~$T_{ij}$ and $T_{i, j+1}$. To treat the inequality constraints in~\cref{dual_ROF}, for $1< p < \infty$, we define the subset $C^p$ of $Y$ by \begin{equation} \label{Cp} C^p = \left\{ \mathbf{p} \in Y : |(\mathbf{p})_{\iota_{T, 1}}|^{q} + |(\mathbf{p})_{\iota_{T, 2}}|^{q} \leq 1 \hspace{0.2cm} \forall T \in \mathcal{T} \right\}, \end{equation} where $q$ is the H\"{o}lder conjugate of $p$ and the convention $(\mathbf{p})_{\iota_{T_{M, j}, 1}} = (\mathbf{p})_{\iota_{T_{i, N}, 2}} = 0$ is adopted. Also, for $p=1$, we define \begin{equation} \label{C1} C^1 = \left\{ \mathbf{p} \in Y : |(\mathbf{p})_i| \leq 1 \hspace{0.2cm} \forall i \in \mathcal{I} \right\}.
\end{equation} Clearly, for $1 \leq p < \infty$, $C^p$ is nonempty and convex. The projection of $\mathbf{p} \in Y$ onto $C^p$ can be easily computed by \begin{equation} \label{proj_Cp} (\mathrm{proj}_{C^p} \mathbf{p})_{\iota_{T, k}} = \frac{(\mathbf{p})_{\iota_{T, k}}}{\max \left\{ 1, \left( |(\mathbf{p})_{\iota_{T, 1}}|^{q} + |(\mathbf{p})_{\iota_{T, 2}}|^{q} \right)^{\frac{1}{q}} \right\}} \hspace{0.5cm} \forall T \in \mathcal{T}, \hspace{0.1cm} k=1,2 \end{equation} for $1<p<\infty$ and \begin{equation} \label{proj_C1} (\mathrm{proj}_{C^1} \mathbf{p})_i = \frac{(\mathbf{p})_i}{\max \left\{ 1, |(\mathbf{p})_i| \right\}} \hspace{1cm} \forall i \in \mathcal{I} \end{equation} for $p = 1$. Finally, we are ready to state a finite element version of problem \cref{dual_ROF}: \begin{equation} \label{d_dual_ROF} \min_{\mathbf{p} \in Y} \mathcal{J}(\mathbf{p}) + \chi_{C^p} (\mathbf{p}), \end{equation} where $\chi_{C^p}$ is the characteristic function of $C^p$ which is defined as \begin{equation*} \chi_{C^p} (\mathbf{p}) = \begin{cases}0 & \textrm{ if } \mathbf{p} \in C^p,\\ \infty & \textrm{ if } \mathbf{p} \not\in C^p. \end{cases} \end{equation*} We provide a relation between \cref{d_dual_ROF} and the conventional finite difference discretization of the ROF model. \begin{figure}[] \centering \subfloat[][$\mathrm{div}$]{ \includegraphics[height=3.5cm]{./div.jpg} } \hspace{1.2cm} \subfloat[][$\mathrm{div}^*$]{ \includegraphics[height=3.5cm]{./div_star.jpg} } \caption{Action of the operators $\mathrm{div}$ and $\mathrm{div}^*$ on an element} \label{Fig:div} \end{figure} \begin{theorem} \label{Thm:equiv} Let $\mathbf{p}^* \in Y$ be a solution of~\cref{d_dual_ROF}.
If we identify $X$ with the Euclidean space of the functions from the $M \times N$ discrete points $\{1, \dots, M\} \times \{1, \dots, N\}$ into~$\mathbb{R}$, then $u^* = f + \frac{1}{\alpha} \mathrm{div} \mathbf{p}^*$ is a solution of the finite difference ROF model \begin{equation*} \min_{u \in X } \frac{\alpha}{2} \| u - f \|_2^2 + \| | D u|_p \|_1, \hspace{0.5cm} (1 \leq p < \infty) \end{equation*} where $D u$ is the forward finite difference operator \begin{eqnarray*} (Du)_{ij}^1 &=& \begin{cases} u_{i+1, j} - u_{ij} & \textrm{ if } i=1, ..., M-1, \\ 0 & \textrm{ if } i = M,\end{cases} \\ (Du)_{ij}^2 &=& \begin{cases} u_{i, j+1} - u_{ij} & \textrm{ if } j=1, ..., N-1, \\ 0 & \textrm{ if } j = N\end{cases} \end{eqnarray*} and $(|Du|_p)_{ij} = \left( |(Du)_{ij}^1|^p + |(Du)_{ij}^2|^p \right)^{\frac{1}{p}}$. \end{theorem} \begin{proof} By the primal-dual equivalence, $u^*$ is a solution of the Fenchel--Rockafellar dual of~\cref{d_dual_ROF} given by \begin{equation*} \min_{u \in X} \left\{ \frac{\alpha}{2} \int_{\Omega} (u-f)^2 \,dx + \sup_{\mathbf{p} \in C^p} \int_{\Omega} u \mathrm{div} \mathbf{p} \,dx \right\}. \end{equation*} Then, we have \begin{align*} \frac{\alpha}{2} \int_{\Omega} (u-f)^2 \,dx + \sup_{\mathbf{p} \in C^p} \int_{\Omega} u \mathrm{div} \mathbf{p} \,dx &= \frac{\alpha}{2} \| u - f \|_2^2 + \sup_{\mathbf{p} \in C^p} \left< u , \mathrm{div} \mathbf{p} \right>_X \\ &= \frac{\alpha}{2} \| u - f \|_2^2 + \sup_{\mathbf{p} \in C^p} \left< \mathrm{div}^* u , \mathbf{p} \right>_Y, \end{align*} where $\mathrm{div}^*$:~$X \rightarrow Y$ is defined as \begin{equation*} \left< \mathrm{div}^* u , \mathbf{p} \right>_Y = \left< u , \mathrm{div} \mathbf{p} \right>_X \hspace{0.5cm} \forall u \in X, \mathbf{p} \in Y. \end{equation*} Observe that the $\mathrm{div}^*$ operator acts like the minus finite difference operator (see \cref{Fig:div}(b)).
Indeed, we can see that \begin{eqnarray*} (\mathrm{div}^* u)_{\iota_{T, 1}} &=& u_{ij} - u_{i+1, j} = - (Du)_{ij}^1\\ (\mathrm{div}^* u)_{\iota_{T, 2}} &=& u_{ij} - u_{i, j+1} = - (Du)_{ij}^2 \end{eqnarray*} for $T = T_{ij} \in \mathcal{T}$ with the convention $u_{Mj} - u_{M+1, j} = u_{iN} - u_{i,N+1} = 0$. Assume $1<p<\infty$ and $\frac{1}{p} + \frac{1}{q} = 1$. Then, by the duality between the spaces $l^p$ and $l^q$ in each pixel, we get \begin{align*} \sup_{\mathbf{p} \in C^p} \left< \mathrm{div}^* u , \mathbf{p} \right>_Y &= \sum_{T = T_{ij} \in \mathcal{T}} \sup_{|(\mathbf{p})_{\iota_{T, 1}}|^{q} + |(\mathbf{p})_{\iota_{T, 2}}|^{q} \leq 1} \left[ (Du)_{ij}^1 (\mathbf{p})_{\iota_{T, 1}} + (Du)_{ij}^2 (\mathbf{p})_{\iota_{T, 2}} \right] \\ &= \sum_{T = T_{ij} \in \mathcal{T}} \left[ |(Du)_{ij}^1|^p + |(Du)_{ij}^2|^p \right]^{\frac{1}{p}} = \| |Du|_p \|_1, \end{align*} which concludes the proof. The case for $p=1$ is straightforward. \end{proof} \Cref{Thm:equiv} means that, by choosing the set $C^p$ appropriately, the finite element model~\cref{d_dual_ROF} can express various versions of discrete total variation, for example, an anisotropic one for $p=1$ and an isotropic one for $p=2$. Hereafter, for the sake of simplicity, we treat the case for $p=1$ only; generalization to the other cases is straightforward. We drop the superscript and write $C = C^1$. Next, note that the divergence operator in the continuous setting is well-defined on $Y$, and its image is contained in $X$. That is, the divergence of a function in $Y$ is piecewise constant. This means that we do not need to define a discrete divergence operator as in previous studies, and some good properties from the continuous setting are inherited by our discretization.
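The identification of $\mathrm{div}^*$ with the minus forward difference operator is easy to check numerically. The following sketch (in Python with NumPy; the array layout for the edge degrees of freedom is our own convention, not part of the model) implements both operators on an $M \times N$ pixel grid and verifies the defining relation $\left< u, \mathrm{div}\, \mathbf{p} \right>_X = \left< \mathrm{div}^* u, \mathbf{p} \right>_Y$.

```python
import numpy as np

def div_star(u):
    """div*: X -> Y, the minus forward difference -Du with the zero boundary convention."""
    d1 = np.zeros_like(u)
    d2 = np.zeros_like(u)
    d1[:-1, :] = u[:-1, :] - u[1:, :]  # (div* u)_{iota_{T,1}} = u_{ij} - u_{i+1,j}
    d2[:, :-1] = u[:, :-1] - u[:, 1:]  # (div* u)_{iota_{T,2}} = u_{ij} - u_{i,j+1}
    return d1, d2

def div(p1, p2):
    """div: Y -> X, the adjoint of div*; maps edge DOFs to pixel values."""
    d = p1.copy()
    d[1:, :] -= p1[:-1, :]
    d += p2
    d[:, 1:] -= p2[:, :-1]
    return d

rng = np.random.default_rng(0)
u = rng.standard_normal((5, 7))
p1 = rng.standard_normal((5, 7)); p1[-1, :] = 0.0  # boundary convention on the last row
p2 = rng.standard_normal((5, 7)); p2[:, -1] = 0.0  # boundary convention on the last column
d1, d2 = div_star(u)
lhs = np.sum(u * div(p1, p2))            # <u, div p>_X
rhs = np.sum(d1 * p1) + np.sum(d2 * p2)  # <div* u, p>_Y
assert np.isclose(lhs, rhs)
```

The final assertion is exactly the duality pairing used in the proof above, evaluated on random data.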
For instance, for a nonoverlapping domain decomposition $\left\{ \Omega_s \right\}_{s=1}^{\mathcal{N}}$ of $\Omega$ and $\mathbf{p} \in Y$, the following \textit{splitting property} of $\mathcal{J}(\mathbf{p})$ holds: \begin{equation} \label{splitting} \frac{1}{2\alpha} \int_{\Omega} (\mathrm{div} \mathbf{p} + \alpha f)^2 \,dx = \sum_{s=1}^{\mathcal{N}} \frac{1}{2\alpha} \int_{\Omega_s} ( \mathrm{div}(\mathbf{p}|_{\Omega_s}) + \alpha f) ^2 \,dx . \end{equation} \Cref{splitting} will be our main tool in designing the DDMs in \cref{Sec:primal_DD,Sec:pd_DD}. \begin{remark} The discrete divergence operator proposed in \cite{Chambolle:2004, LN:2017}, which was designed in the finite difference framework, does not satisfy~\cref{splitting}. \end{remark} \subsection{Solvers for the Finite Element ROF Model} The proposed discrete problem~\cref{d_dual_ROF} can be solved by existing solvers for total variation minimization based on either dual approaches~\cite{BT:2009, Chambolle:2004} or primal-dual approaches~\cite{CP:2011}. We give some results about~\cref{d_dual_ROF} which help to set the parameters for the solvers. \begin{proposition} \label{Prop:div_norm} The operator norm of $\mathrm{div}$\emph{:} $Y \rightarrow X$ has a bound such that $\| \mathrm{div} \|_{Y \rightarrow X}^2 \leq 8$. \end{proposition} \begin{proof} Fix $\mathbf{p} \in Y$. For a pixel $T \in \mathcal{T}$, let $p_{T, 1}$, $p_{T, 2}$, $p_{T, 3}$, and $p_{T, 4}$ be the degrees of freedom of $\mathbf{p}$ on the top, bottom, left, and right edges of $T$, respectively (see \cref{Fig:div}). We set $p_{T, j} = 0$ if the corresponding edge lies on $\partial \Omega$. Then, we have \begin{align*} (\mathrm{div} \mathbf{p})_T^2 &= (-p_{T, 1} + p_{T, 2} - p_{T, 3} + p_{T,4})^2\\ &\leq 4(p_{T, 1}^2 + p_{T, 2}^2 + p_{T, 3}^2 + p_{T,4}^2 ).
\end{align*} Summation over all $T \in \mathcal{T}$ yields \begin{align*} \| \mathrm{div} \mathbf{p} \|_X^2 = \sum_{T \in \mathcal{T}} {(\mathrm{div} \mathbf{p})_T^2} &\leq 4 \sum_{T \in \mathcal{T}} {(p_{T, 1}^2 + p_{T, 2}^2 + p_{T, 3}^2 + p_{T,4}^2 )}\\ &\leq 8 \sum_{i \in \mathcal{I}} {(\mathbf{p})_i^2} = 8 \| \mathbf{p} \|_Y^2. \end{align*} For the second inequality, the fact that every edge is shared by at most two elements is used. Therefore, $\| \mathrm{div} \|_{Y \rightarrow X}^2 \leq 8$. \end{proof} \begin{proposition} \label{Prop:d_dual_ROF_Lipschitz} The gradient of $\mathcal{J} (\mathbf{p})$ is given by $$\nabla \mathcal{J} (\mathbf{p}) = \frac{1}{\alpha} \mathrm{div}^* ( \mathrm{div}\mathbf{p} + \alpha f)$$ and it is Lipschitz continuous with a Lipschitz constant $8/\alpha$. \end{proposition} \begin{proof} Take any $\mathbf{p} \in Y$, and let $\mathbf{q} \in Y$ with $\|\mathbf{q} \|_Y = 1$, and $h > 0$. Then we have \begin{align*} \left| \mathcal{J} (\mathbf{p} + h\mathbf{q}) - \mathcal{J}(\mathbf{p}) - \left< \frac{1}{\alpha} \mathrm{div}^* ( \mathrm{div}\mathbf{p} + \alpha f ), h\mathbf{q} \right>_Y \right| &= \frac{h^2}{2\alpha} \int_{\Omega} (\mathrm{div} \mathbf{q})^2 \,dx \\ &\leq \frac{h^2}{2\alpha} \| \mathrm{div} \|_{Y \rightarrow X}^2 \| \mathbf{q} \|_Y^2 \leq \frac{4h^2}{\alpha} . \end{align*} Therefore, $\nabla \mathcal{J} (\mathbf{p}) = \frac{1}{\alpha} \mathrm{div}^* (\mathrm{div}\mathbf{p} + \alpha f )$. Furthermore, for any $\mathbf{p}$, $\mathbf{q} \in Y$, \begin{align*} \left\| \nabla \mathcal{J} (\mathbf{p}) - \nabla \mathcal{J} (\mathbf{q}) \right\|_Y &= \left\| \frac{1}{\alpha} \mathrm{div}^* \left( \mathrm{div} (\mathbf{p} - \mathbf{q}) \right) \right\|_Y \\ &\leq \frac{1}{\alpha} \| \mathrm{div} \|_{Y \rightarrow X}^2 \| \mathbf{p} - \mathbf{q} \|_Y \leq \frac{8}{\alpha} \| \mathbf{p} - \mathbf{q} \|_Y. \end{align*} In the last line, we used \cref{Prop:div_norm} to bound $\| \mathrm{div} \|_{Y \rightarrow X}$. 
From the above computations, we conclude that $\nabla \mathcal{J}$ is Lipschitz continuous with a Lipschitz constant $8/\alpha$. \end{proof} We notice that the proof of \cref{Prop:div_norm} given here is essentially the same as the proof of Theorem~3.1 of~\cite{Chambolle:2004}. \section{A Primal Domain Decomposition Method} \label{Sec:primal_DD} In this section, we propose a primal DDM for the proposed discretization which resembles the Schur complement method, one of the earliest nonoverlapping DDMs for second order elliptic problems. We note that the method proposed in this section is not a DDM for the ``primal'' total variation minimization problem, but a ``primal'' DDM for the ``dual'' total variation minimization problem. In the Schur complement method for second order elliptic problems, the degrees of freedom in the interior of the subdomains are eliminated so that only the degrees of freedom on the subdomain interfaces remain. The remaining system on the subdomain interfaces is called the Schur complement system, and it is solved by an iterative solver like the conjugate gradient method. Similarly, in the proposed method, the interior degrees of freedom are eliminated and we solve a resulting minimization problem on the subdomain interfaces. Every finite-dimensional Hilbert space~$H$ appearing in \cref{Sec:primal_DD,Sec:pd_DD} is equipped with the Euclidean inner product~$\left< \cdot , \cdot \right>_H$ and the induced norm~$\| \cdot \|_H$. \begin{figure}[] \centering \subfloat[][Primal DD]{ \includegraphics[height=3.8cm]{./primal_dd.jpg} } \hspace{2cm} \subfloat[][Primal-dual DD]{ \includegraphics[height=4cm]{./pd_dd.jpg} } \caption{Primal and primal-dual domain decomposition} \label{Fig:DD} \end{figure} We decompose the image domain $\Omega$ into $\mathcal{N} = N \times N$ disjoint square subdomains $\left\{ \Omega_s \right\}_{s=1}^{\mathcal{N}}$ in a checkerboard fashion (see \cref{Fig:DD}(a)).
From now on, the letters $s$ and $t$ stand for indices of subdomains, that is, $s$ and $t$ run from $1$ to $\mathcal{N}$. We denote the outer normal to $\partial \Omega_s$ by $\mathbf{n}_s$. For two adjacent subdomains $\Omega_s$ and $\Omega_t$ with $s < t$, let $\Gamma_{st} = \partial \Omega_s \cap \partial \Omega_t$ be the subdomain interface between them. The subdomain interface $\Gamma_{st}$ is oriented so that the normal $\mathbf{n}_{st}$ to $\Gamma_{st}$ is given by $\mathbf{n}_{st} = \mathbf{n}_s = -\mathbf{n}_t$. Also, we define the union of the subdomain interfaces $\Gamma$ by $\Gamma = \bigcup_{s<t} \Gamma_{st}$. For the discrete setting, let $\mathcal{T}_s$ be the collection of all elements in $\Omega_s$. We define the local dual function space $Y_s$ by \begin{equation} \label{Y_s} Y_s = \left\{ \mathbf{q}_s \in H_0(\mathrm{div} ; \Omega_s) : \mathbf{q}_s |_{T} \in \mathcal{RT}_0 (T) \hspace{0.2cm} \forall T \in \mathcal{T}_s \right\}. \end{equation} Also, let~$\mathcal{I}_s$ be the set of indices of the basis functions for $Y_s$. In addition, we define $Y_I$ as the direct sum of all local dual function spaces, that is, \begin{equation*} Y_I = \bigoplus_{s=1}^{\mathcal{N}} Y_s. \end{equation*} One can observe that, for $\mathbf{p}_I = \bigoplus_{s=1}^{\mathcal{N}} \mathbf{p}_s$ and $\mathbf{q}_I = \bigoplus_{s=1}^{\mathcal{N}} \mathbf{q}_s$, we have \begin{equation*} \left< \mathbf{p}_I , \mathbf{q}_I \right>_{Y_I} = \sum_{s=1}^{\mathcal{N}} \left< \mathbf{p}_s , \mathbf{q}_s \right>_{Y_s}. \end{equation*} Next, we denote by $\mathcal{I}_{\Gamma}$ the set of indices of the degrees of freedom of $Y$ on $\Gamma$, and define the interface function space $Y_{\Gamma}$ by \begin{equation*} Y_{\Gamma} = \mathrm{span} \left\{ \bm{\psi}_i \right\}_{i \in \mathcal{I}_{\Gamma}}.
\end{equation*} The interface function space~$Y_\Gamma$ is equipped with the inner product defined by \begin{equation*} \left< \mathbf{p}_{\Gamma} , \mathbf{q}_{\Gamma} \right>_{Y_{\Gamma}} = \left< \mathbf{p}_{\Gamma} , \mathbf{q}_{\Gamma} \right>_Y \end{equation*} and its induced norm \begin{equation*} \| \mathbf{p}_{\Gamma} \|_{Y_{\Gamma}}^2 = \left< \mathbf{p}_{\Gamma} , \mathbf{p}_{\Gamma} \right>_{Y_{\Gamma}}. \end{equation*} As we readily see, $Y = Y_I \oplus Y_{\Gamma}$. For $\mathbf{p} \in Y$, there exists a unique decomposition $$\mathbf{p} = \mathbf{p}_I \oplus \mathbf{p}_{\Gamma} = \left( \bigoplus_{s=1}^{\mathcal{N}} \mathbf{p}_s \right) \oplus \mathbf{p}_{\Gamma}$$ with $\mathbf{p}_s \in Y_s$ and $\mathbf{p}_{\Gamma} \in Y_{\Gamma}$. Thanks to the splitting property \cref{splitting}, we have \begin{equation} \begin{split} \label{primal_DD_splitting} \mathcal{J}(\mathbf{p}) &= \frac{1}{2\alpha} \int_{\Omega} (\mathrm{div} \mathbf{p} + \alpha f)^2 \,dx \\ &= \sum_{s=1}^{\mathcal{N}} \frac{1}{2\alpha} \int_{\Omega_s} (\mathrm{div}(\mathbf{p}_s + \mathbf{p}_{\Gamma}|_{\Omega_s}) + \alpha f)^2 \,dx. \end{split} \end{equation} To treat the inequality constraints, as we did in~\cref{C1}, we define the subset $C_s$ of $Y_s$ by \begin{equation} \label{C_s} C_s = \left\{ \mathbf{p}_s \in Y_s : |(\mathbf{p}_s)_i| \leq 1 \hspace{0.2cm} \forall i \in \mathcal{I}_s \right\}, \end{equation} and we set $C_I$ as the direct sum of all $C_s$'s: \begin{equation*} \label{C_I} C_I = \bigoplus_{s=1}^{\mathcal{N}} C_s. \end{equation*} In addition, let $C_{\Gamma}$ be the subset of $Y_{\Gamma}$ satisfying the inequality constraints: \begin{equation*} \label{C_Gamma} C_{\Gamma} = \left\{ \mathbf{p}_{\Gamma} \in Y_{\Gamma} : |(\mathbf{p}_{\Gamma})_i| \leq 1\hspace{0.2cm} \forall i \in \mathcal{I}_{\Gamma} \right\}. 
\end{equation*} Similarly to~\cref{proj_C1}, the projections onto $C_s$ and $C_{\Gamma}$ can be computed by the pointwise Euclidean projection: \begin{subequations} \begin{equation} \label{proj_C_s} (\mathrm{proj}_{C_s} \mathbf{p}_s )_i = \frac{(\mathbf{p}_s)_i}{\max \left\{ 1, |(\mathbf{p}_s)_i| \right\}} \hspace{0.5cm} \forall i \in \mathcal{I}_s, \end{equation} \begin{equation} (\mathrm{proj}_{C_{\Gamma}} \mathbf{p}_{\Gamma} )_i = \frac{(\mathbf{p}_{\Gamma})_i}{\max \left\{ 1, |(\mathbf{p}_{\Gamma})_i| \right\}} \hspace{0.5cm} \forall i \in \mathcal{I}_{\Gamma}. \end{equation} \end{subequations} Now, for $\mathbf{p}_{\Gamma} \in C_{\Gamma}$, we consider the following minimization problem: \begin{equation} \label{primal_DD_harmonic} \min_{\mathbf{p}_I \in Y_I} \left\{ \mathcal{J} (\mathbf{p}_I \oplus \mathbf{p}_{\Gamma}) + \chi_{C_I}(\mathbf{p}_I) \right\}. \end{equation} We note that, with the help of~\cref{primal_DD_splitting}, a solution of \cref{primal_DD_harmonic} can be obtained by solving \begin{equation} \label{primal_DD_harmonic_local} \min_{\mathbf{p}_s \in Y_s} \left\{ \frac{1}{2\alpha}\int_{\Omega_s} (\mathrm{div}(\mathbf{p}_s + \mathbf{p}_{\Gamma}|_{\Omega_s}) + \alpha f)^2 \,dx + \chi_{C_s}(\mathbf{p}_s) \right\} \end{equation} and taking the direct sum of the solutions of~\cref{primal_DD_harmonic_local} over $s=1, ..., \mathcal{N}$. The local problem~\cref{primal_DD_harmonic_local} can be solved independently in each subdomain. That is, no communication among processors is required, so the resulting algorithm is suitable for parallel computation. With a slight abuse of notation, we denote a solution of~\cref{primal_DD_harmonic} by $\mathcal{H}_I \p_{\Gamma} \in C_I$. Although $\mathcal{H}_I \p_{\Gamma}$ is not unique in general, $\mathrm{div} (\mathcal{H}_I \p_{\Gamma})$ is uniquely determined and we will deal with $\mathrm{div} (\mathcal{H}_I \p_{\Gamma})$ only.
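In implementation, the pointwise projections above reduce to clipping each degree of freedom to the interval $[-1, 1]$. A minimal NumPy sketch (the flat array of degrees of freedom is our own data layout) that serves for $\mathrm{proj}_{C_s}$ and $\mathrm{proj}_{C_{\Gamma}}$ alike:

```python
import numpy as np

def proj_C(p):
    """Pointwise projection onto {|(p)_i| <= 1}: each DOF is clipped to [-1, 1]."""
    return p / np.maximum(1.0, np.abs(p))

p = np.array([0.5, -3.0, 2.0, 1.0])
print(proj_C(p))  # entries with absolute value > 1 are scaled back to the unit interval
```

The same routine applies elementwise to arrays of any shape, so one implementation covers the local sets $C_s$ and the interface set $C_{\Gamma}$.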
Finally, we present the minimization problem for the proposed primal DDM: \begin{equation} \label{primal_DD} \min_{\mathbf{p}_{\Gamma} \in Y_{\Gamma}} \mathcal{J}_{\Gamma}(\mathbf{p}_{\Gamma}) + \chi_{C_{\Gamma}} (\mathbf{p}_{\Gamma}), \end{equation} where the functional $\mathcal{J}_{\Gamma}(\mathbf{p}_{\Gamma})$ on $Y_{\Gamma}$ is defined as \begin{equation} \label{J_Gamma} \mathcal{J}_{\Gamma} (\mathbf{p}_{\Gamma}) = \mathcal{J} (\mathcal{H}_I \p_{\Gamma} \oplus \mathbf{p}_{\Gamma}). \end{equation} The functional $\mathcal{J}_{\Gamma} (\mathbf{p}_{\Gamma})$ can be regarded as the result of elimination of interior degrees of freedom $\mathbf{p}_I$ from $\mathcal{J} (\mathbf{p})$. The same technique is widely used in DDMs for second order elliptic problems. The following proposition shows a relation between~\cref{d_dual_ROF} and~\cref{primal_DD}. \begin{proposition} \label{Prop:primal_DD_equiv} If $\mathbf{p}^* \in Y$ is a solution of~\cref{d_dual_ROF}, then $\mathbf{p}_{\Gamma}^* = \mathbf{p}^* |_{Y_{\Gamma}}$ is a solution of~\cref{primal_DD}. Conversely, if~$\mathbf{p}_{\Gamma}^* \in Y_{\Gamma}$ is a solution of~\cref{primal_DD}, then $\mathbf{p}^* = \mathcal{H}_I \p_{\Gamma}^* \oplus \mathbf{p}_{\Gamma}^*$ is a solution of~\cref{d_dual_ROF}. \end{proposition} \begin{proof} Let $\mathbf{p}^* \in Y$ be a solution of~\cref{d_dual_ROF} and $\mathbf{p}_{\Gamma}^* = \mathbf{p}^* |_{Y_{\Gamma}}$. Clearly, $\mathbf{p}_{\Gamma}^* \in C_{\Gamma}$. We show that $\mathcal{J}_{\Gamma} (\mathbf{p}_{\Gamma}) \geq \mathcal{J}_{\Gamma} (\mathbf{p}_{\Gamma}^*)$ for all $\mathbf{p}_{\Gamma} \in C_{\Gamma}$. Take any $\mathbf{p}_{\Gamma} \in C_{\Gamma}$. Then, by the minimization property of $\mathbf{p}^*$ with respect to~\cref{d_dual_ROF}, we have \begin{equation*} \mathcal{J}_{\Gamma} (\mathbf{p}_{\Gamma}) = \mathcal{J} (\mathcal{H}_I \p_{\Gamma} \oplus \mathbf{p}_{\Gamma}) \geq \mathcal{J}(\mathbf{p}^*). 
\end{equation*} Also, by the minimization property of $\mathcal{H}_I$ with respect to~\cref{primal_DD_harmonic}, we have \begin{align*} \mathcal{J}(\mathbf{p}^*) &= \mathcal{J}(\mathbf{p}^* |_{Y_I} \oplus \mathbf{p}_{\Gamma}^*) \\ &\geq \mathcal{J}(\mathcal{H}_I \p_{\Gamma}^* \oplus \mathbf{p}_{\Gamma}^*) = \mathcal{J}_{\Gamma} (\mathbf{p}_{\Gamma}^*). \end{align*} Therefore, $\mathcal{J}_{\Gamma} (\mathbf{p}_{\Gamma}) \geq \mathcal{J}_{\Gamma} (\mathbf{p}_{\Gamma}^*)$, so that $\mathbf{p}_{\Gamma}^*$ is a solution of~\cref{primal_DD}. Conversely, let $\mathbf{p}_{\Gamma}^* \in Y_{\Gamma}$ be a solution of~\cref{primal_DD} and $\mathbf{p}^* = \mathcal{H}_I \p_{\Gamma}^* \oplus \mathbf{p}_{\Gamma}^* \in C$. It suffices to show that $\mathcal{J}(\mathbf{p}) \geq \mathcal{J}(\mathbf{p}^*)$ for all $\mathbf{p} \in C$. Take any $\mathbf{p} \in C$. By the minimization property of $\mathcal{H}_I$ with respect to~\cref{primal_DD_harmonic}, we have \begin{align*} \mathcal{J}(\mathbf{p}) &= \mathcal{J}(\mathbf{p} |_{Y_I} \oplus \mathbf{p}|_{Y_{\Gamma}}) \\ &\geq \mathcal{J}(\mathcal{H}_I \mathbf{p} |_{Y_{\Gamma}} \oplus \mathbf{p}|_{Y_{\Gamma}}) = \mathcal{J}_{\Gamma} (\mathbf{p}|_{Y_{\Gamma}}), \end{align*} while \begin{equation*} \mathcal{J}_{\Gamma} (\mathbf{p}|_{Y_{\Gamma}}) \geq \mathcal{J}_{\Gamma} (\mathbf{p}_{\Gamma}^*) = \mathcal{J}(\mathbf{p}^*) \end{equation*} by the minimization property of $\mathbf{p}_{\Gamma}^*$ with respect to~\cref{primal_DD}. Therefore, $\mathbf{p}^*$ is a solution of~\cref{d_dual_ROF}. \end{proof} By \cref{Prop:primal_DD_equiv}, it is enough to solve~\cref{primal_DD} to obtain a solution of~\cref{d_dual_ROF}. As we noted in~\cref{primal_DD_harmonic_local}, \cref{primal_DD} has an intrinsic domain decomposition structure, so that the parallelization of the algorithm at the subdomain level is straightforward regardless of the choice of solver for the minimization problem. 
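Any first-order method for constrained smooth minimization can serve as such a solver. As a self-contained illustration of an accelerated projected-gradient (FISTA-type) iteration, the following sketch minimizes a toy quadratic $\frac{1}{2}\|A\mathbf{p} + c\|^2$ subject to $|(\mathbf{p})_i| \leq 1$; the matrix $A$, the vector $c$, and the problem size are invented for illustration and do not represent the actual interface problem.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((8, 5))   # toy data, stands in for the smooth part
c = rng.standard_normal(8)

def grad(p):                      # gradient of 0.5 * ||A p + c||^2
    return A.T @ (A @ p + c)

def proj(p):                      # projection onto {|p_i| <= 1}
    return p / np.maximum(1.0, np.abs(p))

L = np.linalg.norm(A, 2) ** 2     # Lipschitz constant of the gradient (spectral norm squared)
p = np.zeros(5)
q = np.zeros(5)
t = 1.0
for _ in range(200):              # FISTA: projected gradient step, then extrapolation
    p_new = proj(q - grad(q) / L)
    t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
    q = p_new + (t - 1.0) / t_new * (p_new - p)
    p, t = p_new, t_new

def energy(p):
    return 0.5 * np.sum((A @ p + c) ** 2)

assert energy(p) <= energy(np.zeros(5))  # objective decreased from the initial point
assert np.all(np.abs(p) <= 1.0)          # constraint satisfied exactly after projection
```

The update pattern (gradient step, projection, momentum factor $t_n$) is exactly the structure of the interface iteration described next; only the objective and the projection set differ.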
In this paper, we adopt FISTA~\cite{BT:2009} as the solver for~\cref{primal_DD}, which is known to have $O(1/n^2)$ convergence. To the best of our knowledge, there have been no DDMs for the ROF model with convergence rate better than~$O(1/n^2)$. In particular, Chang~et al.~\cite{CTWY:2015} showed that the subspace correction methods for the dual ROF model have the theoretical convergence rate~$O(1/n)$ even in the overlapping domain decomposition case. To show the suitability of FISTA for~\cref{primal_DD}, it should be ensured that the functional $\mathcal{J}_{\Gamma} (\mathbf{p}_{\Gamma})$ in \cref{J_Gamma} is differentiable and its gradient is Lipschitz continuous. The following lemmas are ingredients for showing such regularity of $\mathcal{J}_{\Gamma} (\mathbf{p}_{\Gamma})$. First, \cref{Lem:primal_DD_div_norm} shows that the norm bound of the $\mathrm{div}$ operator can be improved from \cref{Prop:div_norm} if its domain is restricted to~$Y_{\Gamma}$. \begin{lemma} \label{Lem:primal_DD_div_norm} Assume that each subdomain consists of at least $2 \times 2$ pixels. Then, the operator norm of $\mathrm{div}$\emph{:} $Y_{\Gamma} \rightarrow X$ has a bound such that $\| \mathrm{div} \|_{Y_{\Gamma} \rightarrow X}^2 \leq 4$. \end{lemma} \begin{proof} Fix $\mathbf{p}_{\Gamma} \in Y_{\Gamma}$ and let $\mathbf{p} = \mathbf{0}_I \oplus \mathbf{p}_{\Gamma} \in Y$, which is an extension of $\mathbf{p}_{\Gamma}$ to $Y$. We clearly have $$ \mathrm{div} \mathbf{p} = \mathrm{div} \mathbf{p}_{\Gamma}.$$ For a pixel $T \in \mathcal{T}$, similarly to \cref{Prop:div_norm}, let $p_{T, 1}$, $p_{T, 2}$, $p_{T, 3}$, and $p_{T, 4}$ be the degrees of freedom of $\mathbf{p}$ on the top, bottom, left, and right edges of $T$, respectively. Since~$\partial T \cap \Gamma$ consists of at most two element edges (exactly two when $T$ is at a subdomain corner), at most two of the $p_{T, i}$'s are nonzero.
Thus, we have \begin{align*} (\mathrm{div} \mathbf{p})_T^2 &= (-p_{T, 1} + p_{T, 2} - p_{T, 3} + p_{T, 4})^2\\ &\leq 2(p_{T, 1}^2 + p_{T, 2}^2 + p_{T, 3}^2 + p_{T, 4}^2 ), \end{align*} where we use the Cauchy--Schwarz inequality. Summation over all $T \in \mathcal{T}$ yields \begin{align*} \| \mathrm{div} \mathbf{p} \|_X^2 = \sum_{T \in \mathcal{T}} {(\mathrm{div} \mathbf{p})_T^2} &\leq 2 \sum_{T \in \mathcal{T}} {(p_{T, 1}^2 + p_{T, 2}^2 + p_{T, 3}^2 + p_{T, 4}^2 )}\\ &\leq 4 \sum_{i \in \mathcal{I}} {(\mathbf{p})_i^2} \\ &= 4 \sum_{i \in \mathcal{I}_{\Gamma}} (\mathbf{p}_{\Gamma})_i^2= 4 \| \mathbf{p}_{\Gamma} \|_{Y_{\Gamma}}^2. \end{align*} Therefore, $\| \mathrm{div} \|_{Y_{\Gamma} \rightarrow X}^2 \leq 4$. \end{proof} Now we provide the main tool for showing the regularity of $\mathcal{J}_{\Gamma} (\mathbf{p}_{\Gamma})$, which is stated in a more general setting. We note that \cref{Lem:smooth} can be regarded as a generalization of the smoothness property of the Moreau envelope~\cite{CP:2016}. \begin{lemma} \label{Lem:smooth} Suppose that $H$, $H_1$, and $H_2$ are finite-dimensional Hilbert spaces. Let $A$:~$H_1 \rightarrow H$, $B$:~$H_2 \rightarrow H$ be linear operators and $c \in H$. Also, let $g$:~$H_2 \rightarrow \bar{\mathbb{R}}$ be a proper, convex, and lower semicontinuous functional. Then a functional $F$:~$H_1 \rightarrow \mathbb{R}$ defined as \begin{equation*} F(x) = \min_{y \in H_2} \left\{ f(x, y) := \frac{1}{2} \| Ax + By + c \|_H^2 + g(y) \right\} \end{equation*} is differentiable and its gradient is given by \begin{equation*} \nabla F(x) = A^* (Ax + B y^*(x) + c), \end{equation*} where $y^* (x) = \argmin_{y \in H_2} f(x, y)$. Furthermore, $\nabla F$ is Lipschitz continuous with modulus $L = \| A \|_{H_1 \rightarrow H}^2$. \end{lemma} \begin{proof} For $x \in H_1$, let \begin{equation*} d(x) = A^* (Ax + By^* (x) + c). \end{equation*} One can easily verify that $d(x)$ is single-valued even though $y^*(x)$ may not be.
Take any $x_1$, $x_2 \in H_1$ and write $y_1 = y^* (x_1)$, $y_2 = y^* (x_2)$. Then, by the minimization property of $y_1$, we get \begin{equation} \begin{split} \label{ub} F(x_1) &= \frac{1}{2} \| Ax_1 + By_1 + c \|_H^2 + g(y_1) \\ &\leq \frac{1}{2} \| Ax_1 + By_2 + c \|_H^2 + g(y_2) \\ &= g(y_2) + \frac{1}{2} \| Ax_2 + By_2 + c \|_H^2 + \left< Ax_2 + By_2 + c , A(x_1 - x_2) \right>_H \\ &\quad+ \frac{1}{2} \| A(x_1 - x_2) \|_H^2 \\ &\leq F(x_2) + \left< d(x_2) , x_1 - x_2 \right>_{H_1} + \frac{L}{2} \| x_1 - x_2 \|_{H_1}^2 . \end{split} \end{equation} On the other hand, the optimality condition of $y_2$ reads as \begin{equation*} \label{y2_opt} g(y) \geq g(y_2) + \left< Ax_2 + By_2 + c , B(y_2 - y)\right>_H \hspace{0.5cm} \forall y \in H_2. \end{equation*} Thus, it follows that \begin{equation} \begin{split} \label{lb_temp} F(x_1) &= \frac{1}{2} \| Ax_1 + By_1 + c \|_H^2 + g(y_1) \\ &= g(y_1) + \frac{1}{2} \| Ax_2 + By_1 + c \|_H^2 + \left< Ax_2 + By_1 + c , A(x_1 - x_2) \right>_H \\ &\quad + \frac{1}{2} \| A(x_1 -x_2) \|_H^2 \\ &\geq g(y_2 ) + \left< Ax_2 + By_2 + c , B(y_2 - y_1) \right>_{H} + \frac{1}{2} \| Ax_2 + By_1 + c \|_H^2 \\ &\quad + \left< Ax_2 + By_1 + c , A(x_1 - x_2) \right>_H + \frac{1}{2} \| A(x_1 -x_2) \|_H^2 . 
\end{split} \end{equation} By the vector identity \begin{equation*} \left< a +b , b \right> + \frac{1}{2} \| a \|_2^2 = \frac{1}{2} \| a +b \|_2^2 + \frac{1}{2} \| b \|_2^2, \end{equation*} equation~\cref{lb_temp} is written as \begin{equation} \begin{split} \label{lb} F(x_1) &\geq g(y_2) + \frac{1}{2} \| Ax_2 + By_2 + c \|_H^2 + \frac{1}{2} \| B(y_1 - y_2) \|_H^2 \\ &\quad + \left< Ax_2 + By_1 + c , A(x_1 - x_2) \right>_H + \frac{1}{2} \| A(x_1 -x_2) \|_H^2 \\ &= F(x_2) + \frac{1}{2} \| B(y_1 - y_2) \|_H^2 + \left< Ax_2 + By_2 + c, A(x_1 - x_2) \right>_H \\ &\quad + \left< B(y_1 - y_2) , A(x_1 - x_2)\right>_H + \frac{1}{2} \| A(x_1 - x_2) \|_H^2 \\ &= F(x_2) + \left< d(x_2) , x_1 -x_2 \right>_{H_1} + \frac{1}{2} \| (Ax_1 + By_1 + c) - (Ax_2 + By_2 + c) \|_H^2 \\ &\geq F(x_2) + \left< d(x_2) , x_1 -x_2 \right>_{H_1} + \frac{1}{2L} \| d(x_1) - d(x_2) \|_{H_1}^2 . \end{split} \end{equation} From \cref{ub,lb}, we conclude that $F$ is differentiable with $\nabla F = d$. Now, it remains to show that $\nabla F$ is Lipschitz continuous. Interchanging $x_1$ and $x_2$ in~\cref{lb} yields \begin{equation} \label{lb2} F(x_2) \geq F(x_1) - \left< d(x_1) , x_1 - x_2 \right>_{H_1} + \frac{1}{2L} \| d(x_1) - d(x_2) \|_{H_1}^2. \end{equation} Summing~\cref{lb,lb2}, we obtain \begin{align*} \frac{1}{L} \| d(x_1) - d(x_2) \|_{H_1}^2 &\leq \left< d(x_1) - d(x_2) , x_1 - x_2 \right>_{H_1} \\ &\leq \| d(x_1) - d(x_2) \|_{H_1} \| x_1 - x_2 \|_{H_1}, \end{align*} which means that $d$ is Lipschitz continuous with modulus~$L$. \end{proof} Now, we obtain the desired regularity result of $\mathcal{J}_{\Gamma} (\mathbf{p}_{\Gamma})$ as a direct consequence of \cref{Lem:smooth}. 
\begin{corollary} \label{Cor:smooth} The gradient of $\mathcal{J}_{\Gamma} (\mathbf{p}_{\Gamma})$ is given by $$\nabla \mathcal{J}_{\Gamma} (\mathbf{p}_{\Gamma}) = \frac{1}{\alpha} \mathrm{div}^* ( \mathrm{div}(\mathcal{H}_I \p_{\Gamma} \oplus \mathbf{p}_{\Gamma}) + \alpha f) |_{Y_{\Gamma}},$$ which is Lipschitz continuous with Lipschitz constant $4/\alpha$. \end{corollary} \begin{proof} In \cref{Lem:smooth}, we set $H = X$, $H_1 = Y_{\Gamma}$, and $H_2 = Y_I$. Taking $A = \mathrm{div}$:~$Y_{\Gamma} \rightarrow X$, $B = \mathrm{div}$:~$Y_I \rightarrow X$, and $g = \chi_{C_I}$ yields the conclusion. In this case, we have $L = 4/\alpha$ due to \cref{Lem:primal_DD_div_norm}. \end{proof} \Cref{Cor:smooth} guarantees that FISTA is appropriate for~\cref{primal_DD}. The proposed primal DDM for the dual ROF model is summarized in \cref{Alg:primal_DD}. \begin{algorithm}[] \caption{Primal DDM} \begin{algorithmic}[] \label{Alg:primal_DD} \STATE Choose $L \geq 4$. Let $\mathbf{q}_{\Gamma}^{(0)} = \mathbf{p}_{\Gamma}^{(0)} = \mathbf{0}_{\Gamma}$ and $t_0 = 1$. \FOR{$n=0,1,2,...$} \STATE $\displaystyle \mathcal{H}_I \q_{\Gamma}^{(n)} \in \argmin_{\mathbf{q}_I \in Y_I} \left\{ \mathcal{J} (\mathbf{q}_I \oplus \mathbf{q}_{\Gamma}^{(n)}) + \chi_{C_I}(\mathbf{q}_I) \right\}$ \STATE $\displaystyle \mathbf{p}_{\Gamma}^{(n+1)} = \mathrm{proj}_{C_{\Gamma}} \left( \mathbf{q}_{\Gamma}^{(n)} - \frac{1}{L} \mathrm{div}^* \left(\mathrm{div} (\mathcal{H}_I \q_{\Gamma}^{(n)} \oplus \mathbf{q}_{\Gamma}^{(n)}) + \alpha f \right) \Big|_{Y_{\Gamma}}\right)$ \STATE $\displaystyle t_{n+1} = \frac{1 + \sqrt{1+4t_n^2}}{2}$ \STATE $\displaystyle \mathbf{q}_{\Gamma}^{(n+1)} = \mathbf{p}_{\Gamma}^{(n+1)} + \frac{t_n - 1}{t_{n+1}}(\mathbf{p}_{\Gamma}^{(n+1)} - \mathbf{p}_{\Gamma}^{(n)})$ \ENDFOR \end{algorithmic} \end{algorithm} As we noted in~\cref{primal_DD_harmonic_local}, $\mathcal{H}_I \q_{\Gamma}^{(n)}$ in \cref{Alg:primal_DD} can be obtained independently in each subdomain. 
Indeed, $\mathcal{H}_I \q_{\Gamma}^{(n)} = \bigoplus_{s=1}^{\mathcal{N}} \mathbf{q}_s^{(n)}$, where $\mathbf{q}_s^{(n)}$ is a solution of \begin{equation} \label{primal_local} \min_{\mathbf{q}_s \in Y_s} \left\{ \frac{1}{2\alpha} \int_{\Omega_s} \left(\mathrm{div}(\mathbf{q}_s + \mathbf{q}_{\Gamma}^{(n)} |_{\Omega_s} ) + \alpha f \right)^2 \,dx + \chi_{C_s}(\mathbf{q}_s)\right\}. \end{equation} Since $\mathbf{q}_{\Gamma}^{(n)} |_{\Omega_s}$ enters \cref{primal_local} only as an essential boundary condition, the existing solvers for the ROF model can be utilized to obtain $\mathbf{q}_s^{(n)}$ with little modification. Convergence analysis for \cref{Alg:primal_DD} is straightforward~\cite{BT:2009}. \begin{theorem} \label{Thm:primal_DD} Let $\{ \mathbf{p}_{\Gamma}^{(n)} \}$ be the sequence generated by \cref{Alg:primal_DD}, and let $\mathbf{p}_{\Gamma}^*$ be a solution of~\cref{primal_DD}. Then for any $n \geq 1$, \begin{equation*} \mathcal{J}_{\Gamma} (\mathbf{p}_{\Gamma}^{(n)}) - \mathcal{J}_{\Gamma}(\mathbf{p}_{\Gamma}^*) \leq \frac{2L \| \mathbf{p}_{\Gamma}^{(0)} - \mathbf{p}_{\Gamma}^* \|_{Y_{\Gamma}}^2}{(n+1)^2}. \end{equation*} \end{theorem} \section{A Primal-Dual Domain Decomposition Method} \label{Sec:pd_DD} In the primal DDM introduced in \cref{Sec:primal_DD}, the continuity of a solution on the subdomain interfaces is imposed directly. Alternatively, motivated by existing DDMs in structural mechanics~\cite{FLP:2000,FR:1991}, the continuity can be enforced by the method of Lagrange multipliers, which results in a saddle point problem of the ``primal'' variable $\mathbf{p}$ and the Lagrange multipliers $\lambda$, also known as the ``dual'' variable. We name the algorithm proposed in this section ``primal-dual DDM'' because it solves the saddle point problem of $\mathbf{p}$ and $\lambda$ by the primal-dual algorithm~\cite{CP:2011}. We begin with the same domain decomposition setting as in \cref{Sec:primal_DD}. 
First, we state a proposition that suggests how to treat the continuity of the solution on the subdomain interfaces. \begin{proposition} \label{Prop:DD_interface} A vector function $\mathbf{q}$\emph{:} $\Omega \rightarrow \mathbb{R}^2$ is in $H_0 (\mathrm{div} ; \Omega)$ if and only if the restriction $\mathbf{q}_s = \mathbf{q} |_{\Omega_s}$ to each subdomain $\Omega_s$ is in $H(\mathrm{div}; \Omega_s)$ satisfying the boundary condition $\mathbf{q}_s \cdot \mathbf{n}_s = 0$ on $\partial \Omega_s \cap \partial \Omega$ and the interface condition $\mathbf{q}_s \cdot \mathbf{n}_{st} - \mathbf{q}_t \cdot \mathbf{n}_{st} = 0$ on $\Gamma_{st}$, $s<t$. \end{proposition} \begin{proof} Applying \cref{Prop:FEM_interface} to a coarse mesh $\left\{ \Omega_s \right\}_{s=1}^{\mathcal{N}}$ of $\Omega$ yields the conclusion. \end{proof} We introduce the local function space $\tilde{Y}_s$, defined by \begin{equation*} \label{tY_s} \tilde{Y}_s = \left\{ \tilde{\mathbf{q}}_s \in H(\mathrm{div} ; \Omega_s) : \tilde{\mathbf{q}}_s \cdot \mathbf{n}_s = 0 \textrm{ on } \partial \Omega_s \setminus \Gamma \textrm{, } \tilde{\mathbf{q}}_s |_{T} \in \mathcal{RT}_0 (T) \hspace{0.2cm} \forall T \in \mathcal{T}_s \right\}. \end{equation*} The difference between $Y_s$ in~\cref{Y_s} and $\tilde{Y}_s$ is that the essential boundary condition $\tilde{\mathbf{q}}_s \cdot \mathbf{n}_s = 0$ is not imposed on $\Gamma \cap \partial \Omega_s$ for $\tilde{Y}_s$. That is, $\tilde{Y}_s$ has degrees of freedom on $\partial \Omega_s \cap \Gamma$ as shown in \cref{Fig:DD}(b), while $Y_s$ does not. Let $\tilde{\mathcal{I}}_s$ be the set of indices of the basis functions for $\tilde{Y}_s$. Similarly to~\cref{C_s}, we define the inequality-constrained subset $\tilde{C}_s$ of $\tilde{Y}_s$ by \begin{equation*} \label{tC_s} \tilde{C}_s = \left\{ \tilde{\mathbf{p}}_s \in \tilde{Y}_s : |(\tilde{\mathbf{p}}_s)_i| \leq 1 \hspace{0.2cm} \forall i \in \tilde{\mathcal{I}}_s \right\}. 
\end{equation*} Clearly, the projection onto $\tilde{C}_s$ is given by \begin{equation*} \label{proj_tC_s} (\mathrm{proj}_{\tilde{C}_s} \tilde{\mathbf{p}}_s )_i = \frac{(\tilde{\mathbf{p}}_s)_i}{\max \left\{ 1, |(\tilde{\mathbf{p}}_s)_i| \right\}} \hspace{0.5cm} \forall i \in \tilde{\mathcal{I}}_s. \end{equation*} Also, we denote by $\tilde{Y}$ the direct sum of the local function spaces, \begin{equation*} \tilde{Y} = \bigoplus_{s=1}^{\mathcal{N}} \tilde{Y}_s \end{equation*} and by $\tilde{C}$ the corresponding constraint set \begin{equation*} \label{tC} \tilde{C} = \bigoplus_{s=1}^{\mathcal{N}} \tilde{C}_s. \end{equation*} For $\tilde{\mathbf{p}} = \bigoplus_{s=1}^{\mathcal{N}} \tilde{\mathbf{p}}_s$, we define the energy functional $\tilde{\mathcal{J}} (\tilde{\mathbf{p}})$ on $\tilde{Y}$ by \begin{equation} \label{tJ} \tilde{\mathcal{J}} (\tilde{\mathbf{p}}) = \sum_{s=1}^{\mathcal{N}} \frac{1}{2\alpha} \int_{\Omega_s} (\mathrm{div} \tilde{\mathbf{p}}_s + \alpha f)^2 \,dx. \end{equation} In addition, we define the operator $B$: $\tilde{Y} \rightarrow \mathbb{R}^{|\mathcal{I}_{\Gamma}|}$, which measures the jump of the normal component of $\tilde{Y}$ on the subdomain interfaces, by \begin{equation} \label{B} B\tilde{\mathbf{p}}|_{\Gamma_{st}} = \tilde{\mathbf{p}}_s \cdot \mathbf{n}_{st} - \tilde{\mathbf{p}}_t \cdot \mathbf{n}_{st}, \hspace{0.5cm} s<t. \end{equation} Since each degree of freedom in the Raviart--Thomas elements represents the value of the normal component on the corresponding edge, the matrix representation of $B$ consists only of $-1$'s, $0$'s, and $1$'s. Thus, an application of $B$ requires only scalar additions and subtractions. By \cref{Prop:DD_interface}, there is an isomorphism between the two spaces~$Y$ and~$\ker B \subset \tilde{Y}$, say~$\Phi:$~$Y \rightarrow \ker B$, defined by \begin{equation} \label{isomorphism} \Phi \mathbf{p} = \bigoplus_{s=1}^{\mathcal{N}} \mathbf{p}|_{\Omega_s}, \hspace{0.5cm} \mathbf{p} \in Y. 
\end{equation} By such an isomorphism, \cref{d_dual_ROF} is equivalent to \begin{equation} \label{pd_constrained} \min_{\tilde{\mathbf{p}} \in \tilde{Y}} \tilde{\mathcal{J}} (\tilde{\mathbf{p}}) + \chi_{\tilde{C}}(\tilde{\mathbf{p}}) \hspace{0.5cm} \textrm{subject to } B\tilde{\mathbf{p}} = 0. \end{equation} By treating the constraint $B\tilde{\mathbf{p}} = 0$ in~\cref{pd_constrained} by the method of Lagrange multipliers, we get the following proposition. \begin{proposition} \label{Prop:pd_DD_equiv} If $\mathbf{p}^* \in Y$ is a solution of \cref{d_dual_ROF}, then $\Phi \mathbf{p}^*$ is a primal solution of the saddle point problem \begin{equation} \label{pd_DD} \min_{\tilde{\mathbf{p}} \in \tilde{Y}} \max_{\lambda \in \mathbb{R}^{|\mathcal{I}_{\Gamma}|}} \left\{ \mathcal{L}(\tilde{\mathbf{p}}, \lambda ) := \tilde{\mathcal{J}} (\tilde{\mathbf{p}}) + \chi_{\tilde{C}}(\tilde{\mathbf{p}}) + \left< B\tilde{\mathbf{p}}, \lambda \right>_{\mathbb{R}^{|\mathcal{I}_{\Gamma}|}} \right\}, \end{equation} where~$\Phi$:~$Y \rightarrow \ker B$ was defined in~\cref{isomorphism}. Conversely, if $\tilde{\mathbf{p}}^* \in \ker B \subset \tilde{Y}$ is a primal solution of \cref{pd_DD}, then $\Phi^{-1} \tilde{\mathbf{p}}^*$ is a solution of \cref{d_dual_ROF}. \end{proposition} Since the functional $\tilde{\mathcal{J}} (\tilde{\mathbf{p}})$ in \cref{tJ} is convex but not uniformly convex, the $O(1/n)$-primal-dual algorithm can be utilized to solve~\cref{pd_DD}~\cite{CP:2011}. To estimate a valid range of parameters for the primal-dual algorithm, \cref{Lem:pd_DD_B_norm} gives a norm bound of the operator $B:\tilde{Y} \rightarrow \mathbb{R}^{|\mathcal{I}_{\Gamma}|}$. \begin{lemma} \label{Lem:pd_DD_B_norm} The operator norm of $B$\emph{:} $\tilde{Y} \rightarrow \mathbb{R}^{|\mathcal{I}_{\Gamma}|}$ defined in~\cref{B} has a bound such that $\| B \|_{\tilde{Y} \rightarrow \mathbb{R}^{|\mathcal{I}_{\Gamma}|}}^2 \leq 2$. 
\end{lemma} \begin{proof} Fix $\tilde{\mathbf{p}} = \bigoplus_{s=1}^{\mathcal{N}} \tilde{\mathbf{p}}_s \in \tilde{Y}$. Let $(B\tilde{\mathbf{p}})_i$ be a degree of freedom of $B\tilde{\mathbf{p}}$ on $\Gamma_{st}$ for some $s<t$, and let $(\tilde{\mathbf{p}}_s)_i$, $(\tilde{\mathbf{p}}_t)_i$ be degrees of freedom of $\tilde{\mathbf{p}}_s$, $\tilde{\mathbf{p}}_t$ adjacent to $(B\tilde{\mathbf{p}})_i$, respectively. Then $$ (B\tilde{\mathbf{p}})_i = (\tilde{\mathbf{p}}_s)_i - (\tilde{\mathbf{p}}_t)_i .$$ By applying the Cauchy--Schwarz inequality, we get $$ (B\tilde{\mathbf{p}})_i^2 \leq 2((\tilde{\mathbf{p}}_s)_i^2 + (\tilde{\mathbf{p}}_t)_i^2).$$ Summation over every $i$ and $s<t$ yields $\| B\tilde{\mathbf{p}} \|_{\mathbb{R}^{|\mathcal{I}_{\Gamma}|}}^2 \leq 2 \| \tilde{\mathbf{p}} \|_{\tilde{Y}}^2$. \end{proof} Thanks to \cref{Lem:pd_DD_B_norm}, the primal-dual algorithm for~\cref{pd_DD} is given in \cref{Alg:pd_DD}. We notice that the primal-dual algorithm was used for DDMs in~\cite{DCT:2016}. \begin{algorithm}[] \caption{Primal-dual DDM} \begin{algorithmic}[] \label{Alg:pd_DD} \STATE Choose $L \geq 2$, $\tau, \sigma > 0$ with $\tau \sigma = \frac{1}{L}$. Let $\tilde{\mathbf{p}}^{(-1)} = \tilde{\mathbf{p}}^{(0)} = \mathbf{0}$ and $\lambda^{(0)} = 0$. \FOR{$n=0,1,2,...$} \STATE $\displaystyle \lambda^{(n+1)} = \lambda^{(n)} + \sigma B (2\tilde{\mathbf{p}}^{(n)} - \tilde{\mathbf{p}}^{(n-1)} )$ \STATE $\displaystyle \tilde{\mathbf{p}}^{(n+1)} \in \argmin_{\tilde{\mathbf{p}} \in \tilde{Y}} \left\{ \tilde{\mathcal{J}} (\tilde{\mathbf{p}}) + \chi_{\tilde{C}}(\tilde{\mathbf{p}}) + \frac{1}{2\tau} \int_{\Omega} (\tilde{\mathbf{p}} - \hat{\mathbf{p}})^2 \,dx \right\}$,\\ \quad where $\displaystyle \hat{\mathbf{p}} = \tilde{\mathbf{p}}^{(n)} - \tau B^* \lambda^{(n+1)}$ \ENDFOR \end{algorithmic} \end{algorithm} We note that the primal problem for $\tilde{\mathbf{p}}^{(n+1)}$ in \cref{Alg:pd_DD} can be solved independently in each subdomain. 
Indeed, $\tilde{\mathbf{p}}^{(n+1)}$ can be obtained as the direct sum of $\tilde{\mathbf{p}}_s^{(n+1)}$'s, where $\tilde{\mathbf{p}}_s^{(n+1)}$ is a solution of \begin{equation} \label{pd_DD_local} \min_{\tilde{\mathbf{p}}_s \in \tilde{Y}_s} \left\{ \frac{1}{2\alpha} \int_{\Omega_s} (\mathrm{div} \tilde{\mathbf{p}}_s + \alpha f)^2 \,dx + \chi_{\tilde{C}_s}(\tilde{\mathbf{p}}_s) + \frac{1}{2\tau} \int_{\Omega_s} (\tilde{\mathbf{p}}_s - \hat{\mathbf{p}}_s)^2 \,dx \right\}, \end{equation} where $\hat{\mathbf{p}}_s = \tilde{\mathbf{p}}_s^{(n)} - \tau B^* \lambda^{(n+1)} |_{\Omega_s}$. Now, we state the convergence analysis for \cref{Alg:pd_DD}. See Theorem~5.1 of~\cite{CP:2016} for details. \begin{theorem} \label{Thm:pd_DD} Let $\left\{ \tilde{\mathbf{p}}^{(n)}, \lambda^{(n)} \right\}$ be the sequence generated by \cref{Alg:pd_DD}. Then, it converges to a saddle point of~\cref{pd_DD} and satisfies $$ \mathcal{L} \left( \frac{1}{n}\sum_{k=1}^{n}\tilde{\mathbf{p}}^{(k)}, \lambda \right) - \mathcal{L} \left( \tilde{\mathbf{p}}, \frac{1}{n}\sum_{k=1}^{n}\lambda^{(k)} \right) \leq \frac{1}{n} \left( \frac{1}{\tau} \| \tilde{\mathbf{p}} - \tilde{\mathbf{p}}^{(0)} \|_{2, \tilde{Y}}^2 + \frac{1}{\sigma} \| \lambda - \lambda^{(0)} \|_{2, \mathbb{R}^{|\mathcal{I}_{\Gamma}|}}^2 \right) $$ for any $\tilde{\mathbf{p}} \in \tilde{Y}$ and $\lambda \in \mathbb{R}^{|\mathcal{I}_{\Gamma}|}$. \end{theorem} Even though the convergence rate in \cref{Thm:pd_DD} is the same as that of the existing methods (see, e.g.,~\cite{CTWY:2015}), the proposed primal-dual DDM has an advantage over the existing ones in the convergence rate of its local problems. 
With the help of a $\frac{1}{\tau}$-uniformly convex term $$\frac{1}{2\tau} \int_{\Omega_s} (\tilde{\mathbf{p}}_s - \hat{\mathbf{p}}_s)^2 \,dx$$ in~\cref{pd_DD_local}, linearly convergent algorithms such as~\cite[Algorithm~3]{CP:2011} and~\cite[Algorithm~5]{CP:2016} can be adopted, while the known optimal convergence rate of the existing methods for the ROF model is only $O(1/n^2)$, which is far slower than linear convergence. The following is the linearly convergent primal-dual algorithm~\cite[Algorithm~3]{CP:2011} applied to~\cref{pd_DD_local}. \begin{algorithm}[] \renewcommand{\thealgorithm}{} \caption{Linearly convergent local solver for \cref{Alg:pd_DD}} \begin{algorithmic}[] \label{Alg:pd_DD_local} \STATE Choose $L \geq 8$, $\gamma \leq \alpha$, and $\delta \leq \frac{1}{\tau}$. \STATE Set $\mu = \frac{2\sqrt{\gamma \delta}}{L}$, $\tau_0 = \frac{\mu}{2\gamma}$, $\sigma_0 = \frac{\mu}{2\delta}$, and $\theta_0 \in \left[ \frac{1}{1+\mu}, 1\right]$. Let $\bar{u}_s^{(0)} = u_s^{(0)} = 0$ and $\tilde{\mathbf{p}}_s^{(0)} = \mathbf{0}$. \FOR{$n=0,1,2,...$} \STATE $\displaystyle \tilde{\mathbf{p}}_s^{(n+1)} = \mathrm{proj}_{\tilde{C}_s} \left( \frac{\tau (\tilde{\mathbf{p}}_s^{(n)} - \sigma_0 \mathrm{div}^* \bar{u}_s^{(n)}) + \sigma_0 \hat{\mathbf{p}}_s}{\tau + \sigma_0}\right)$ \STATE $\displaystyle u_s^{(n+1)} = \frac{(u_s^{(n)} + \tau_0 \mathrm{div} \tilde{\mathbf{p}}_s^{(n+1)}) + \tau_0 \alpha f}{1 + \tau_0 \alpha}$ \STATE $\bar{u}_s^{(n+1)} = u_s^{(n+1)} + \theta_0 (u_s^{(n+1)} - u_s^{(n)})$ \ENDFOR \end{algorithmic} \end{algorithm} \section{Numerical Results} \label{Sec:numerical} In this section, numerical results of the algorithms introduced in the previous sections are presented. All the algorithms were implemented in MATLAB~R2018a, and all the computations were performed on a desktop equipped with an Intel Core i5-8600K CPU (3.60GHz), 16GB memory, and Windows 10 Pro 64-bit. 
Two test images, ``Peppers $512\times512$'' and ``Boat $2048 \times 3072$,'' shown in \cref{Fig:test_images}, were used in the numerical experiments. Each image was corrupted by additive Gaussian noise with mean $0$ and variance $0.05$. As a measure of denoising quality, we compute for each output the peak signal-to-noise ratio (PSNR), defined by \begin{equation*} \mathrm{PSNR} = 10 \log_{10} \left( \frac{\mathrm{MAX}^2 \cdot |\Omega|}{\| u-f_{\mathrm{orig}}\|_{X}^2} \right), \end{equation*} where $\mathrm{MAX}$ is the maximum possible pixel value of the image ($\mathrm{MAX} = 1$ in our experiments), $f_{\mathrm{orig}}$ is the original clean image, and $u$ is the denoised image. We set~$\alpha = 10$ heuristically in~\cref{ROF}. \begin{figure}[] \centering \subfloat[][Peppers $512 \times 512$]{ \includegraphics[height=3.8cm]{./peppers.png} } \hspace{1.2cm} \subfloat[][Boat $2048 \times 3072$]{ \includegraphics[height=3.8cm]{./boat.png} } \caption{Test images for the numerical experiments} \label{Fig:test_images} \end{figure} First, we compare the proposed methods with other existing DDMs for the ROF model. Thanks to~\cref{Thm:equiv}, direct comparisons with existing methods based on the finite difference discretization are possible in terms of the primal energy functional defined as \begin{equation} \label{primal_energy} \mathcal{E} (u) = \frac{\alpha}{2} \| u - f \|_2^2 + \| | Du|_1 \|_1. \end{equation} The following algorithms are used in our numerical experiments: \begin{itemize} \item ALG1: Primal DDM described in \cref{Alg:primal_DD}, $L=4$. \item ALG2: Primal-dual DDM described in \cref{Alg:pd_DD}, $L=2$, $\sigma=0.02$, $\sigma \tau = 1/L$. \item HL--RJ: Relaxed block Jacobi~(parallel) method proposed by Hinterm{\"u}ller and Langer~\cite{HL:2015}, relaxation parameter:~$1/3$ (see Remark~3.3 of~\cite{LN:2017}). 
\item HL--GS: Block Gauss--Seidel~(successive) method proposed by Hinterm{\"u}ller and Langer~\cite{HL:2015}. \item LN--RJ: Relaxed block Jacobi method proposed by Lee and Nam~\cite{LN:2017}, relaxation parameter:~$1/3$. \item LN--GS: Block Gauss--Seidel method proposed by Lee and Nam~\cite{LN:2017}. \end{itemize} The number of subdomains~$\mathcal{N}$ is fixed at~$4\times4$. Local problems are solved by the~$O(1/n^2)$ convergent primal-dual algorithm~\cite[Algorithm~2]{CP:2011} with the parameters $L=8$, $\gamma = 0.125\alpha$, $\tau_0 = 0.01$, and $\sigma_0 \tau_0 = 1/L$ for all of the algorithms above except ALG2. For ALG2, the linearly convergent primal-dual algorithm~\cite[Algorithm~3]{CP:2011} with the parameters~$L=8$, $\gamma = 0.5\alpha$, and $\delta = 1/\tau$ is used. Local problems are solved with the following stop criterion: \begin{equation*} \frac{\| \mathbf{p}_s^{(n+1)} - \mathbf{p}_s^{(n)} \|_2}{\| \mathbf{p}_s^{(n+1)}\|_2} < 10^{-8}. \end{equation*} To evaluate the performance of the DDMs, whose iterates are the dual variables~$\left\{\mathbf{p}^{(n)} \right\}$, in terms of the primal energy~\cref{primal_energy}, we must define the primal iterates~$\left\{u^{(n)} \right\}$ appropriately. For HL--RJ and HL--GS, we define $u^{(n)}$ as \begin{equation*} u^{(n)} = f + \frac{1}{\alpha} \mathrm{div} \mathbf{p}^{(n)}. \end{equation*} Also, for ALG1 and ALG2, $u^{(n)}$ is defined as \begin{equation} \label{primal_DD_u} u^{(n)} = f + \frac{1}{\alpha} \mathrm{div} (\mathcal{H}_I \q_{\Gamma}^{(n)} \oplus \mathbf{q}_{\Gamma}^{(n)}) \end{equation} and \begin{equation} \label{pd_DD_u} u^{(n)} = f + \frac{1}{\alpha} \bigoplus_{s=1}^{\mathcal{N}} \mathrm{div} \tilde{\mathbf{p}}_s^{(n)} , \end{equation} respectively. Meanwhile, we compute the minimum value of the primal energy~$\mathcal{E} (u^*)$ approximately by 10,000 iterations of the~$O(1/n^2)$ convergent primal-dual algorithm applied to the full-dimension problem~\cref{d_dual_ROF}. 
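As a side note, the two quantities reported in this comparison, the PSNR and the relative primal energy gap $(\mathcal{E}(u^{(n)}) - \mathcal{E}(u^*))/\mathcal{E}(u^*)$, reduce to elementary array computations. The following is a minimal sketch in Python/NumPy for illustration only; it is not the MATLAB code used in the experiments, and all names are ours:

```python
import numpy as np

def psnr(u, f_orig, max_val=1.0):
    # PSNR = 10 log10(MAX^2 * |Omega| / ||u - f_orig||^2),
    # with the squared error summed over all |Omega| pixels
    # (so that dividing by |Omega| would give the MSE).
    err = np.sum((u - f_orig) ** 2)
    return 10.0 * np.log10(max_val ** 2 * u.size / err)

def relative_energy_gap(energy_n, energy_star):
    # Relative primal energy gap (E(u^(n)) - E(u^*)) / E(u^*).
    return (energy_n - energy_star) / energy_star
```

For instance, a denoised image differing from the clean image by a constant offset of $0.1$ (with $\mathrm{MAX}=1$) has a PSNR of $20$ dB.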
\begin{figure}[] \centering \subfloat[][Peppers $512 \times 512$]{ \includegraphics[height=4.9cm]{./comp_peppers.png} } \subfloat[][Boat $2048 \times 3072$]{ \includegraphics[height=4.9cm]{./comp_boat.png} } \caption{Decay of the values of~$\frac{\mathcal{E} (u^{(n)}) - \mathcal{E}(u^*) }{\mathcal{E} (u^*)}$ in various DDMs for the ROF model} \label{Fig:comp} \end{figure} \Cref{Fig:comp} shows the decay of the relative primal energy functional~$\frac{\mathcal{E} (u^{(n)}) - \mathcal{E}(u^*) }{\mathcal{E} (u^*)}$ during 1,000 outer iterations for various DDMs. It can be observed that the primal energy of ALG1 decreases as fast as that of the block Gauss--Seidel methods. ALG1 has an advantage over the block Gauss--Seidel methods in terms of parallel computation; all local problems of ALG1 can be solved in parallel, while only local problems of the same color can be solved in parallel for the block Gauss--Seidel methods. In~\cref{Fig:comp}, there are oscillations of the primal energy of ALG1 when the value of~$\frac{\mathcal{E} (u^{(n)}) - \mathcal{E}(u^*) }{\mathcal{E} (u^*)}$ is close to~$10^{-10}$. This is because local problems are solved inexactly by iterative methods. \begin{table}[] \centering \begin{tabular}{| c | c c c c c c |} \hline Test image & ALG1 & ALG2 & HL--RJ & HL--GS & LN--RJ & LN--GS \\ \hline Peppers & 2923 & 271 & 2925 & 2928 & 2925 & 2929 \\ \hline Boat & 8613 & 263 & 8613 & 8613 & 8613 & 8614 \\ \hline \end{tabular} \caption{Maximum numbers of inner iterations in various DDMs for the ROF model} \label{Table:max_inner_iter} \end{table} Even though the primal energy of ALG2 does not decrease faster than that of the existing methods, it has its own advantage in that its local problems can be solved much faster. \Cref{Table:max_inner_iter} shows the maximum numbers of inner iterations during~1,000 outer iterations for various DDMs. ALG1 shows similar behavior on inner iterations compared to the existing DDMs. 
On the other hand, as we explained in~\cref{Sec:pd_DD}, ALG2 can adopt linearly convergent algorithms as local solvers, while the other algorithms cannot. Thus, the maximum number of inner iterations of ALG2 is much smaller than those of the other methods. This phenomenon makes ALG2 practically efficient. For example, in the case of the test image ``Boat $2048\times3072$,'' a single outer iteration of ALG2 is approximately 32 times faster than that of the other methods. Next, we present numerical results for the proposed methods, which emphasize their efficiency as parallel solvers. To evaluate the parallel efficiency, the \textit{virtual wall-clock time} is measured, which assumes that the algorithms run in parallel in each subdomain. That is, it ignores the communication time among processors. We first present the numerical results for \cref{Alg:primal_DD}. We set the parameter $L = 4$. We note that, from the viewpoint of image restoration, the stop criteria for the proposed methods need not be too strict. We use the following stop criterion: \begin{equation} \label{stop_outer} \left| \frac{\mathcal{E}(u^{(n+1)}) - \mathcal{E}(u^{(n)})}{\mathcal{E}(u^{(n+1)})} \right| < 10^{-3} , \end{equation} where $u^{(n)}$ was defined in~\cref{primal_DD_u}. Local problems are solved by the $O(1/n^2)$ convergent primal-dual algorithm with the parameters $L=8$, $\gamma = 0.125\alpha$, $\tau_0 = 0.01$, and $\sigma_0 \tau_0 = 1/L$ and the stop criterion \begin{equation} \label{stop_inner} \frac{\| \mathbf{p}_{s}^{(n+1)} - \mathbf{p}_{s}^{(n)} \|_{Y_{s}}}{\| \mathbf{p}_{s}^{(n+1)} \|_{Y_{s}}} < 10^{-5} . 
\end{equation} \begin{table}[] \centering \begin{tabular}{| c | c | c c c c |} \hline Test image & $\mathcal{N}$ & PSNR & iter & \begin{tabular}{c}max\\inner iter\end{tabular} & \begin{tabular}{c}Virtual\\wall-clock\\time (sec)\end{tabular} \\ \hline \multirow{4}{*}{\shortstack{\phantom{1} \\ \phantom{2} \\ Peppers \\ $512 \times 512$}} & 1 & 24.41 & - & 526 & 4.90 \\ \cline{2-6} & $2 \times 2$ & 24.41 & 2 & 532 & 0.77 \\ & $4 \times 4$ & 24.41 & 2 & 584 & 0.26 \\ & $8 \times 8$ & 24.41 & 5 & 590 & 0.22 \\ & $16 \times 16$ & 24.41 & 7 & 573 & 0.14 \\ \hline \multirow{4}{*}{\shortstack{\phantom{1} \\ \phantom{2} \\ Boat \\ $2048 \times 3072$}} & 1 & 24.75 & - & 995 & 273.48 \\ \cline{2-6} & $2 \times 2$ & 24.75 & 2 & 1145 & 91.72 \\ & $4 \times 4$ & 24.75 & 2 & 1408 & 21.03 \\ & $8 \times 8$ & 24.75 & 2 & 1415 & 3.42 \\ & $16 \times 16$ & 24.75 & 2 & 1492 & 1.31 \\ \hline \end{tabular} \caption{Performance of the primal DDM \cref{Alg:primal_DD}} \label{Table:primal_DD} \end{table} \Cref{Table:primal_DD} shows the performance of \cref{Alg:primal_DD}. For the single subdomain case, the $O(1/n^2)$ convergent primal-dual algorithm is used. The PSNRs of the resulting denoised images do not differ from the single subdomain case. Thus, we can conclude that the results of \cref{Alg:primal_DD} agree with the single subdomain case, as proven in \cref{Prop:primal_DD_equiv}. With sufficiently many subdomains, the virtual wall-clock time is much less than the wall-clock time of the single subdomain case. This demonstrates the value of \cref{Alg:primal_DD} as a parallel algorithm. Next, we consider the primal-dual DDM. For \cref{Alg:pd_DD}, we set the parameters $L=2$, $\sigma = 0.02$, and $\sigma\tau = 1/L$. We use the same stop criterion~\cref{stop_outer} for the outer iterations as in~\cref{Alg:primal_DD} with~$u^{(n)}$ defined in~\cref{pd_DD_u}. 
For the local solver, the parameters $L=8$, $\gamma = 0.5\alpha$, and $\delta = 1/\tau$ are used. The stop criterion~\cref{stop_inner} for local problems is used for~$\tilde{\mathbf{p}}_s^{(n)}$. \begin{table}[] \centering \begin{tabular}{| c | c | c c c c |} \hline Test image & $\mathcal{N}$ & PSNR & iter & \begin{tabular}{c}max\\inner iter\end{tabular} & \begin{tabular}{c}Virtual\\wall-clock\\time (sec)\end{tabular} \\ \hline \multirow{4}{*}{\shortstack{\phantom{1} \\ \phantom{2} \\ Peppers \\ $512 \times 512$}} & 1 & 24.41 & - & 526 & 4.90 \\ \cline{2-6} & $2 \times 2$ & 24.41 & 22 & 144 & 2.09 \\ & $4 \times 4$ & 24.41 & 24 & 147 & 0.66 \\ & $8 \times 8$ & 24.41 & 26 & 150 & 0.28 \\ & $16 \times 16$ & 24.41 & 30 & 154 & 0.19 \\ \hline \multirow{4}{*}{\shortstack{\phantom{1} \\ \phantom{2} \\ Boat \\ $2048 \times 3072$}} & 1 & 24.75 & - & 995 & 273.48 \\ \cline{2-6} & $2 \times 2$ & 24.75 & 12 & 138 & 95.84 \\ & $4 \times 4$ & 24.75 & 18 & 140 & 24.59 \\ & $8 \times 8$ & 24.75 & 20 & 144 & 3.38 \\ & $16 \times 16$ & 24.75 & 24 & 146 & 1.74 \\ \hline \end{tabular} \caption{Performance of the primal-dual DDM \cref{Alg:pd_DD}} \label{Table:pd_DD} \end{table} As \cref{Table:pd_DD} shows, the solution of \cref{Alg:pd_DD} is consistent with the single subdomain case regardless of the number of subdomains. Since the local solver has a linear convergence rate, much faster than that of the standard algorithms for the ROF model, we can observe that the maximum number of inner iterations of \cref{Alg:pd_DD} is smaller than that of \cref{Alg:primal_DD} in all cases. For example, in the experiments with the test image ``Boat $2048 \times 3072$,'' local problems of \cref{Alg:pd_DD} are solved approximately 10 times faster than those of \cref{Alg:primal_DD}. 
Consequently, even though the convergence rate of \cref{Alg:pd_DD} is only~$O(1/n)$, the virtual wall-clock time of \cref{Alg:pd_DD} is as small as that of~\cref{Alg:primal_DD} in the case of sufficiently many subdomains. \begin{figure}[] \centering \subfloat[][Noisy ``Peppers $512\times512$'' \\ \centering(PSNR: 19.11)]{ \includegraphics[width=3.8cm]{./peppers_noised.png} } \subfloat[][Primal DDM, $\mathcal{N} = 16\times16$ \\ \centering(PSNR: 24.41)]{ \includegraphics[width=3.8cm]{./primal_peppers16.png} } \subfloat[][Primal-dual DDM,\\ \centering$\mathcal{N} = 16\times16$ (PSNR: 24.41)]{ \includegraphics[width=3.8cm]{./pd_peppers16.png} } \subfloat[][Noisy ``Boat $2048\times3072$'' \\ \centering(PSNR: 19.10)]{ \includegraphics[width=3.8cm]{./boat_noised.png} } \subfloat[][Primal DDM, $\mathcal{N} = 16\times16$ \\ \centering(PSNR: 24.75)]{ \includegraphics[width=3.8cm]{./primal_boat16.png} } \subfloat[][Primal-dual DDM,\\ \centering$\mathcal{N} = 16\times16$ (PSNR: 24.75)]{ \includegraphics[width=3.8cm]{./pd_boat16.png} } \caption{Results of \cref{Alg:primal_DD,Alg:pd_DD} for test images} \label{Fig:denoised_images} \end{figure} Finally, we display the denoised images produced by the proposed DDMs in \cref{Fig:denoised_images}. We provide only the images for the case $\mathcal{N} = 16 \times 16$ since all the resulting images are visually the same regardless of the number of subdomains. One can observe that there are no artifacts on the subdomain interfaces, even for quite a large number of subdomains. \section{Conclusion} \label{Sec:conclusion} In this paper, we proposed an alternative discretization~\cref{d_dual_ROF} for the dual ROF model using a conforming Raviart--Thomas basis. We mentioned that the proposed discretization naturally satisfies the splitting property~\cref{splitting} of the energy functional. Thanks to the splitting property, we proposed two DDMs for the dual ROF model: the primal one and the primal-dual one. 
We showed that the proposed primal DDM has an $O(1/n^2)$ convergence rate, which is the best among the existing DDMs. Also, we showed that the local problems in the proposed primal-dual DDM can be solved at a linear convergence rate by using the accelerated primal-dual algorithm. Numerical results demonstrate the superiority of the proposed DDMs. We conclude the paper with a remark on the primal-dual DDM. Since we did not use any regularity of the dual ROF energy functional to prove convergence of the primal-dual DDM, we expect that the primal-dual DDM can be generalized to more advanced imaging problems with total variation, for example, total variation minimization with an~$L^1$-fidelity term~\cite{CE:2005}. \bibliographystyle{siamplain}
\section{introduction} Let $A$ be a unital $C^*$-algebra, unitally represented on a Hilbert space $H$. Assume that there is a continuous family $(q_t)_{t\in [0,\infty)}$ of compact projections on $H$ that asymptotically commutes with $A$, meaning that $[q_t,a]\to 0$ as $t\to\infty$ for all $a\in A$. Note that if $p$ is a projection in $A$, then the family $t\mapsto p q_t$ of compact operators gets close to being a projection, and is thus close to a projection that is uniquely defined up to homotopy; in particular, there is a well-defined $K$-theory class $[p q_t]\in K_0(K(H))=\mathbb{Z}$. It is moreover not difficult to see that this idea can be bootstrapped up to define a homomorphism \begin{equation}\label{eq:pair} [q_t]:K_0(A)\to \mathbb{Z}, \quad [p]\mapsto [pq_t]. \end{equation} This suggests using such parametrized families $(q_t)_{t\in [0,\infty)}$ to define elements of $K$-homology. Indeed, something like this has been done when $A=C(X)$ is commutative. In this case, the condition that $[q_t,a]\to 0$ is equivalent to the condition that the `propagation' of $q_t$ (in the sense of Roe, \cite[Ch. 6]{HigRoe:khomology}) tends to zero, up to an arbitrarily good approximation. Motivated by considerations like the above, and by the heat kernel approach to the Atiyah-Singer index theorem, Yu \cite{Yu:localization-algebra} described $K$-homology for simplicial complexes in terms of families with asymptotically vanishing propagation using his localization algebras. Subsequently, Qiao and Roe \cite{Qiao-Roe} gave a new approach to this result of Yu that works for all compact (in fact, all proper) metric spaces. In this paper, we present a new picture of Kasparov's $KK$ groups based on asymptotically commuting families. 
Thanks to the relationship between asymptotically vanishing propagation and asymptotic commutation, our picture can be thought of as an extension of the results of Yu and Qiao-Roe from commutative to general (separable) $C^*$-algebras, and from $K$-homology to $KK$-theory. We think this gives an attractive picture of $KK$-theory. We also suspect that the ease with which the pairing in line \eqref{eq:pair} is defined \textemdash~ note that unlike in the case of Paschke duality, there is no dimension shift, and unlike in the case of $E$-theory, there is no suspension \textemdash~ should be useful for future applications. Having said this, we should note that the picture of the pairing in line \eqref{eq:pair} is overly simplified, as in general to get the whole $KK$ group one needs to consider formal differences of such families of projections $(q_t)$ in an appropriate sense. We now give precise statements of our main results. For a $C^{*}$-algebra $B$, we denote by $C_u(T,B)$ the $C^{*}$-algebra of bounded and uniformly continuous functions from $T=[0,\infty)$ to $B$. Inspired by work of Yu~\cite{Yu:localization-algebra} and Qiao and Roe \cite{Qiao-Roe}, we define the localization algebra $\mathcal{C}_L(\pi)$ associated to a representation $\pi$ of a separable $C^{*}$-algebra $A$ on a separable Hilbert space to be the $C^{*}$-subalgebra of $C_u(T,L(H))$ consisting of all the functions $f$ such that for all $a\in A$, \[ [f,\pi(a)]\in C_0(T,K(H))\,\, \text{and} \, \,\pi(a)f\in C_u(T,K(H)).\] Let us recall that a representation $\pi$ is ample if it is nondegenerate, faithful and $\pi(A)\cap K(H)=\{0\}$. One verifies that the isomorphism class of $\mathcal{C}_L(\pi)$ does not depend on the choice of an ample representation $\pi$. In this case, we write $\mathbb{C}_L(A)$ in place of $\mathcal{C}_L(\pi)$ and view $A$ as a $C^{*}$-subalgebra of $L(H)$. 
Note that if $A$ is unital, then \[\mathbb{C}_L(A)=\{f\in C_u(T,K(H))\colon [f,a]\in C_0(T,K(H)),\,\forall a\in A\}.\] In this paper we establish canonical isomorphisms $K^i(A) \cong K_i(\mathbb{C}_L(A))$, $i=0,1$, between the $K$-homology of $A$ and the $K$-theory of the localization algebra $\mathbb{C}_L(A)$. More generally, we use results of Thomsen \cite{Thomsen} to show that for separable $C^{*}$-algebras $A$, $B$ and any absorbing representation $\pi:A \to L(H_B)$ on the standard infinite dimensional countably generated right Hilbert $B$-module $H_B$, there are canonical isomorphisms of groups \begin{equation}\label{eq:kk iso} \xymatrix{ KK_i(A,B) \ar[r]^-\cong & K_i(\mathcal{C}_L(\pi)),\quad i=0,1,} \end{equation} where the localization $C^{*}$-algebra $\mathcal{C}_L(\pi)$ consists of those functions $f\in C_u(T,L(H_B))$ such that for all $a\in A$, \[ [f,\pi(a)]\in C_0(T,K(H_B))\,\, \text{and} \, \,\pi(a)f\in C_u(T,K(H_B)).\] The isomorphism in line \eqref{eq:kk iso} is constructed by combining Paschke duality with a generalization of the techniques used by Qiao and Roe in the commutative case. \\ The paper is structured as follows. In Section~\ref{sec:abs}, we discuss absorbing representations and give a version of Voiculescu's theorem appropriate to localization algebras. In Section~\ref{sec:1}, we define the various dual algebras and localization algebras that we use, and show that they do not depend on the choice of absorbing representation. In Section~\ref{sec:dual}, we prove the isomorphism in line \eqref{eq:kk iso}. Finally, in Section~\ref{sec:inv}, we construct maps $K_i(\mathcal{C}_L(\pi))\to E_i(A,B)$ and show that they `invert' the isomorphism in line \eqref{eq:kk iso} in the sense that the composition $KK_i(A,B)\to K_i(\mathcal{C}_L(\pi)) \to E_i(A,B)$ is the canonical natural transformation from $KK$-theory to $E$-theory.
\paragraph{\textbf{Acknowledgements:}} Part of this research was conducted during the authors' visits to the University of M\"{u}nster, the University of Hawai\kern.05em`\kern.05em\relax i~at M\=anoa, and the Institut Mittag-Leffler. We are grateful for the hospitality of the host institutes. We would also like to thank the referee for a close reading of the paper, and several useful suggestions. \section{Absorbing representations}\label{sec:abs} Let $A$, $B$ be separable $C^{*}$-algebras. If $E, F$ are countably generated right Hilbert $B$-modules, we denote by $L(E,F)$ the $C^{*}$-algebra of bounded $B$-linear adjointable operators from $E$ to $F$. The corresponding $C^{*}$-algebra of ``compact" operators is denoted by $K(E,F)$, \cite{Kas:cp}. Set $L(E)=L(E,E)$ and $K(E)=K(E,E)$. Recall that $H_B$ is the standard infinite dimensional countably generated right Hilbert $B$-module. We shall use the notion of (unitally) absorbing $*$-representations $\pi:A \to L(H_B)$, see \cite{Thomsen}. \begin{definition}\label{def:absorbing} (i) Suppose that $A$ is a unital separable $C^{*}$-algebra. A unital representation $\pi:A \to L(H_B)$ is called \emph{unitally absorbing} for the pair $(A,B)$ if for any other unital representation $\sigma:A \to L(E)$, there is an isometry $v\in C_b(\mathbb{N}, L(E,H_B))$ such that $v\sigma(a)-\pi(a)v\in C_0(\mathbb{N}, K (E,H_B))$ for all $a\in A$. (ii) Suppose that $A$ is a separable $C^{*}$-algebra. We denote by $\widetilde{A}$ the unitalization of $A$, with the convention that $\widetilde{A}=A$, if $A$ is already unital. A representation $\pi:A \to L(H_B)$ is called \emph{absorbing} for the pair $(A,B)$ if its unitalization $\widetilde{\pi}:\widetilde{A} \to L(H_B)$ is unitally absorbing for the pair $(\widetilde{A},B)$. 
\end{definition} Note that in Definition~\ref{def:absorbing}, if we denote the components of $v$ by $v_n$, we have $v_n\sigma(a)-\pi(a)v_n\in K(E,H_B)$ and $\lim_{n\to \infty} \|v_n\sigma(a)-\pi(a)v_n\|=0$ for all $a\in A$. \begin{theorem}[Voiculescu, \cite{Voi:Weyl-vn}]\label{thm:Voiculescu-classic} Any ample representation of a separable $C^{*}$-algebra on a separable infinite dimensional Hilbert space is absorbing. \end{theorem} \begin{theorem}[Kasparov, \cite{Kas:cp}]\label{thm:Kasparov-abs} Let $A$ be a unital separable $C^{*}$-algebra and let $B$ be a $\sigma$-unital $C^{*}$-algebra. If either $A$ or $B$ is nuclear, then any unital ample representation $\pi:A \to L(H)\subset L(H_B)$ is absorbing for the pair $(A,B)$. \end{theorem} \begin{theorem}[Thomsen, \cite{Thomsen}]\label{thm:Thomsen-abs-exist} For any separable $C^{*}$-algebras $A$ and $B$ there exist absorbing representations $\pi:A \to L(H_B)$. \end{theorem} Given two $*$-representations $\pi_i:A \to L(E_i)$ we write $\pi_1\underset{v}\preccurlyeq \pi_2$ if there is an isometry $v\in C_u(T,L(E_1,E_2))$ such that \begin{equation*}\label{eqn:sVoic} v\pi_1(a)-\pi_2(a)v\in C_0(T,K(E_1,E_2)). \end{equation*} If in addition $v\in C_u(T,L(E_1,E_2))$ is a unitary with the same property, then we write $\pi_1\underset{v}\approx \pi_2$. Let $w^\infty:E_1^\infty \to E_1 \oplus E_1^\infty$ be the unitary defined by $w^\infty (h_0,h_1,h_2,...)=h_0 \oplus (h_1,h_2,...)$. \begin{lemma}[Lemma 2.16, \cite{DadEil:class}]\label{lemma:DE-KK} Let $\pi_i:A \to L(E_i)$ be two representations and let $v\in L(E^{\infty}_1,E_2)$ be an isometry such that $v\pi_1^\infty(a)-\pi_2(a)v\in K(E^\infty_1,E_2)$ for all $a\in A$.
Then $u=(1_{E_1}\oplus v)w^\infty v^*+(1_{E_2}-v v^*)\in L(E_2, E_1\oplus E_2)$ is a unitary operator such that $\pi_1(a)\oplus \pi_2(a)-u\pi_2(a)u^*\in K(E_1\oplus E_2)$ for all $a\in A$ and moreover \[\|\pi_1(a)\oplus \pi_2(a)-u\pi_2(a)u^*\|\leq 6\|v\pi_1^\infty(a)-\pi_2(a)v\| +4\|v\pi_1^\infty(a^*)-\pi_2(a^*)v\|.\] \end{lemma} Using this lemma, one obtains the following strengthened variation of Voiculescu's theorem \cite{Voi:Weyl-vn}. This result appears in \cite{DadEil:AKK} as Theorem 3.11, except that the uniform continuity of the isometry $v$ and the unitary $u$ was not addressed explicitly in the statement. \begin{theorem}\label{thm:Voiculescu} Let $A$, $B$ be separable $C^{*}$-algebras and let $\pi_i:A \to L(E_i)$, $i=1,2$ be two representations where $E_i\cong H_B$. If $\pi_2$ is absorbing, then $\pi_1\underset{v}\preccurlyeq \pi_2$ for some isometry $v\in C_u(T,L(E_1,E_2))$. If both $\pi_1$ and $\pi_2$ are absorbing, then $\pi_1\underset{u}\approx \pi_2$ for some unitary $u\in C_u(T,L(E_1,E_2))$. \end{theorem} \begin{proof} Since $\pi_2$ absorbs $\pi^\infty_2$, there is an isometry $u=(u_n)_n\in C_b(\mathbb{N},L(E_2^\infty,E_2))$ such that $u\pi_2^\infty(a)-\pi_2(a)u\in C_0(\mathbb{N},K(E_2^\infty,E_2))$ for all $a\in A$. Since $\pi_2$ absorbs $\pi_1$, there is a sequence of isometries $w_n \in L(E_1,E_2^\infty)$ with mutually orthogonal ranges such that $w_n\pi_1(a)-\pi_2^\infty(a)w_n\in K(E_1,E_2^\infty)$ and $\lim_{n\to \infty} \|w_n\pi_1(a)-\pi_2^\infty(a)w_n\|=0$ for all $a\in A$. Then $v_n =u_n w_n \in L(E_1,E_2)$ is a sequence of isometries with orthogonal ranges such that the corresponding isometry $v\in C_b(\mathbb{N}, L(E_1,E_2))$ satisfies $v\pi_1(a)-\pi_2(a)v\in C_0(\mathbb{N},K(E_1,E_2))$ for all $a\in A$.
This follows from the identity \[u_n w_n\pi_1(a)-\pi_2(a)u_n w_n=u_n(w_n\pi_1(a)-\pi_2^\infty(a)w_n)+(u_n\pi_2^\infty(a)-\pi_2(a)u_n)w_n.\] Since $v_n^*v_m=0$ for $n\neq m$, one observes that $\mathbf{v}(n+s)=(1-s)^{1/2}v_n+s^{1/2}v_{n+1}$, $0\leq s \leq 1$, extends $v$ to a uniformly continuous isometry $\mathbf{v}\in C_u(T,L(E_1,E_2))$ that satisfies $\pi_1\underset{\mathbf{v}}\preccurlyeq \pi_2$; indeed, the cross terms in $\mathbf{v}(n+s)^*\mathbf{v}(n+s)$ vanish by orthogonality of the ranges, so that $\mathbf{v}(n+s)^*\mathbf{v}(n+s)=(1-s)v_n^*v_n+s\,v_{n+1}^*v_{n+1}=1$. For the second part of the statement, we note that by the first part $\pi^\infty_1\underset{v}\preccurlyeq \pi_2$. Thus, $v\pi_1^\infty(a)-\pi_2(a)v\in C_0(T,K(E^\infty_1,E_2))$ for all $a\in A$, where $v=(v_t)_{t\in T}$ is a uniformly continuous isometry with $v_t\in L(E^{\infty}_1,E_2)$. It follows by Lemma~\ref{lemma:DE-KK} that \[u_t=(1_{E_1}\oplus v_t)w^\infty v_t^*+(1_{E_2}-v_t v_t^*)\] is a uniformly continuous unitary such that $\pi_1\oplus \pi_2 \underset{u}\approx \pi_2$. By symmetry we also have $\pi_1\oplus \pi_2 \underset{u'}\approx \pi_1$ for some uniformly continuous unitary $u'$, and hence $\pi_1 \underset{w}\approx \pi_2$ for a suitable uniformly continuous unitary $w$. \end{proof} \section{Dual algebras}\label{sec:1} Let $A$, $B$ be separable $C^{*}$-algebras and let $\pi:A \to L(H_B)$ be a $*$-representation. \begin{definition} The localization algebra $\mathcal{C}_L(\pi)$ associated to $\pi$ is the $C^{*}$-subalgebra of $C_u(T,L(H_B))$ consisting of all the functions $f$ such that\\ $[f,\pi(a)]\in C_0(T,K(H_B))$ and $\pi(a)f\in C_u(T,K(H_B))$ for all $a\in A$.
\end{definition} While $\mathcal{C}_L(\pi)$ is the central object of the paper, we also need to consider a series of pairs of $C^{*}$-algebras and ideals which will play a supporting role: \[\mathcal{D}(\pi)=\{b\in L(H_B)\colon [b,\pi(a)]\in K(H_B),\,\forall a\in A\},\] \[\mathbb{C}(\pi)=\{b\in L(H_B)\colon \pi(a)b\in K(H_B),\,\forall a\in A\},\] and their parametrized versions, \[\mathcal{D}_T(\pi)=\{f\in C_u(T,L(H_B))\colon [f,\pi(a)]\in C_u(T,K(H_B)),\,\forall a\in A\}\cong C_u(T,\mathcal{D}(\pi)),\] \[\mathcal{C}_T(\pi)=\{f\in C_u(T,L(H_B))\colon \pi(a)f\in C_u(T,K(H_B)),\,\forall a\in A\}\cong C_u(T,\mathbb{C}(\pi)).\] The evaluation map at $0$ leads to the pair \[\mathcal{D}_T^0(\pi)=\{f\in \mathcal{D}_T(\pi)\colon f(0)={0}\},\] \[\mathcal{C}_T^0(\pi)=\{f\in \mathcal{C}_T(\pi)\colon f(0)={0}\}.\] Finally, we view the localization algebra $\mathcal{C}_L(\pi)$ as an ideal of \[\mathcal{D}_L(\pi)=\{f\in C_u(T,L(H_B))\colon [f,\pi(a)]\in C_0(T,K(H_B)),\,\forall a\in A\},\] \[\mathcal{C}_L(\pi)=\{f\in \mathcal{D}_L(\pi)\colon \pi(a)f\in C_u(T,K(H_B)), \, \forall a\in A\}.\] To simplify some of the statements it is useful to introduce the following notation: $A_1(\pi)=\mathcal{D}_T(\pi)$, $A_2(\pi)=\mathcal{C}_T(\pi)$, $A_3(\pi)=\mathcal{D}_T^0(\pi)$, $A_4(\pi)=\mathcal{C}_T^0(\pi)$, $A_5(\pi)=\mathcal{D}_L(\pi)$ and $A_6(\pi)=\mathcal{C}_L(\pi)$. We are going to see that the isomorphism classes of these $C^{*}$-algebras are independent of $\pi$, provided that $\pi$ is an absorbing representation. We follow the presentation from \cite[Section 5.2]{HigRoe:khomology} where analogous properties of $\mathcal{D}(\pi)$ and $\mathbb{C}(\pi)$ are established, except that we need to employ a strengthened version of Voiculescu's theorem, contained in Theorem~\ref{thm:Voiculescu} above. Let $\pi_1,\pi_2:A \to L(H_B)$ be two representations. 
\begin{lemma} If $\pi_1\underset{v}\preccurlyeq \pi_2$, then the equation $\Phi_v(f)=v f v^*$ defines a $*$-homomorphism $\Phi_v:\mathcal{D}_T(\pi_1)\to \mathcal{D}_T(\pi_2)$ with the property that $\Phi_v(A_j(\pi_1))\subset A_j(\pi_2)$ for all $1 \leq j \leq 6$. \end{lemma} \begin{proof} This follows from the identities: \[ [vfv^*,\pi_2(a)]=v[f,\pi_1(a)]v^*+(v\pi_1(a)-\pi_2(a)v)f v^*-v f(v\pi_1(a^*)-\pi_2(a^*)v)^*,\] \[\pi_2(a)v f v^*=v\pi_1(a)f v^*-(v\pi_1(a)-\pi_2(a)v)f v^*.\qedhere\] \end{proof} \begin{corollary}\label{cor:indep} Let $\pi_1,\pi_2:A \to L(H_B)$ be two absorbing representations. Then $A_j(\pi_1)\cong A_j(\pi_2)$ for all $1 \leq j \leq 6$. \end{corollary} \begin{proof} Theorem~\ref{thm:Voiculescu} yields a unitary $v\in C_u(T,L(H_B))$ such that $\pi_1\underset{v}\approx \pi_2$. The corresponding maps $\Phi_v: A_j(\pi_1)\to A_j(\pi_2)$ are isomorphisms. \end{proof} \begin{lemma}\label{lemma:K-theory-natural} Let $\pi_1,\pi_2:A \to L(H_B)$ be two representations of $A$ and suppose that $v_1,v_2$ are two isometries such that $\pi_1\underset{v_i}\preccurlyeq \pi_2$, $i=1,2$. Then $(\Phi_{v_1})_*=(\Phi_{v_2})_*:K_*(A_j(\pi_1))\to K_*(A_j(\pi_2))$ for all $1\leq j \leq 6$. \end{lemma} \begin{proof} The unitary $u=\begin{pmatrix} 1-v_1 v_1^* &v_1 v_2^*\\v_2 v_1^* & 1-v_2 v_2^* \end{pmatrix}\in M_2(\mathcal{D}_L(\pi_2))$ conjugates $\begin{pmatrix}\Phi_{v_1}&0\\0 & 0 \end{pmatrix}$ onto $\begin{pmatrix}0&0\\0 & \Phi_{v_2}\end{pmatrix}.$ It follows that $(\Phi_{v_1})_*=(\Phi_{v_2})_*:K_*(\mathcal{D}_T(\pi_1))\to K_*(\mathcal{D}_T(\pi_2))$. Similarly, one verifies that the equality $(\Phi_{v_1})_*=(\Phi_{v_2})_*:K_*(A_j(\pi_1))\to K_*(A_j(\pi_2))$ holds for all $1\leq j \leq 6$. \end{proof} Denote by $\pi^\infty$ the direct sum $\pi^\infty =\bigoplus_{n=1}^\infty \pi : A \to L(H_B^\infty)=L(\bigoplus_{n=1}^\infty H_B)$.
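For the reader's convenience, the conjugation used in the proof of Lemma~\ref{lemma:K-theory-natural} can be checked directly. The matrix $u$ is a self-adjoint unitary, and since $v_i^*v_i=1$ and $(1-v_iv_i^*)v_i=0$, for every $f$ one computes
\[
u\begin{pmatrix}v_1fv_1^*&0\\0&0\end{pmatrix}=\begin{pmatrix}0&0\\v_2fv_1^*&0\end{pmatrix},
\qquad
\begin{pmatrix}0&0\\v_2fv_1^*&0\end{pmatrix}u=\begin{pmatrix}0&0\\0&v_2fv_2^*\end{pmatrix},
\]
so that $u\big(\Phi_{v_1}(f)\oplus 0\big)u^*=0\oplus \Phi_{v_2}(f)$.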
\begin{corollary}\label{cor:inclusion} If $\pi:A \to L(H_B)$ is an absorbing representation, then the inclusion $\mathcal{D}_T(\pi)\to \mathcal{D}_T(\pi^\infty)$, $f \mapsto(f,0,0,...)$ induces isomorphisms on $K$-theory: $K_*(A_j(\pi))\to K_*(A_j(\pi^\infty))$, for all $1 \leq j \leq 6$. \end{corollary} \begin{proof} We have $\pi\underset{v}\preccurlyeq \pi^\infty$, where $v\in C_u(T,L(H_B,H_B^\infty))$ is the constant isometry defined by $v(t)(h)=(h,0,0,...)$ for any $t \in T$ and $h \in H_B$. The inclusion map from the statement coincides with $\Phi_v$. On the other hand $\pi\underset{u}\approx \pi^\infty$ since $\pi$ is absorbing and hence $\Phi_u$ is an isomorphism. We conclude the proof by noting that $(\Phi_v)_*=(\Phi_u)_*$ by Lemma~\ref{lemma:K-theory-natural}. \end{proof} \section{A duality isomorphism}\label{sec:dual} Let $A$ and $B$ be separable $C^{*}$-algebras. We are going to show that when we fix an absorbing representation $\pi \colon A \to L(H_B)$ (the existence of such an absorbing representation is guaranteed by Theorem~\ref{thm:Thomsen-abs-exist}), the $K$-theory of $\mathcal{C}_L(\pi)$ is canonically isomorphic to the $KK$-theory of the pair $(A,B)$. We start with a technical lemma that will be used several times later. \begin{lemma}\label{lem:qc-unit} For any separable $C^*$-algebra $D\subset C_u(T,L(H_B))$ there is a positive contraction $x\in C_u(T,K(H_B))$ such that: \begin{enumerate}[(a)] \item $[x,d]\in C_0(T,K(H_B))$ for all $d\in D$, and \item $(1-x)d\in C_0(T,K(H_B))$ for all $d\in D\cap C_u(T,K(H_B))$. \end{enumerate} \end{lemma} \begin{proof} Our arguments will in fact show that the statement holds true in the more general situation where $L(H_B)$ is replaced by a $C^{*}$-algebra $L$ and $K(H_B)$ is replaced by a two-sided closed ideal $I$ of $L$. Let $\dot{D}$ denote the $C^*$-subalgebra of $L$ generated by all images $d(t)$ as $d$ ranges over $D$ and $t$ over $T$. This is separable, and contains $\dot{C}=\dot{D}\cap I$ as an ideal. 
Let $(x_n)_n$ be a positive contractive approximate unit for $\dot{C}$ which is quasi-central in $\dot{D}$. Choose countable dense subsets $(d_k)_{k=1}^\infty$ and $(c_k)_{k=1}^\infty$ of $D$ and $D\cap C_u(T,I)$ respectively. As for each $n$, the subsets $\bigcup_{k=1}^n\{d_k(t):t\in [0,n+1]\}$ and $\bigcup_{k=1}^n\{c_k(t):t\in [0,n+1]\}$ of $\dot{D}$ and $\dot{C}$ respectively are compact, we may assume on passing to a subsequence of $(x_n)$ that \begin{enumerate}[(i)] \item $\|[d_k(t),x_n]\|<\frac{1}{n+1}$ for all $1\leq k\leq n$ and all $t\in [0,n+1]$, and \item $\|(1-x_n)c_k(t)\|<\frac{1}{n+1}$ for all $1\leq k \leq n$ and all $t\in [0,n+1]$. \end{enumerate} For $t\in [n,n+1)$, write $s=t-n$ and set $x(t)=(1-s)x_n+s x_{n+1}$; note that the function $x:t\mapsto x(t)$ is uniformly continuous. Then from (i) and (ii) above we have \begin{enumerate}[(i)] \item $\|[d_k(t),x(t)]\|<\frac{1}{n+1}$ for all $1\leq k\leq n$ and all $t\in [n,n+1)$, and \item $\|(1-x(t))c_k(t)\|<\frac{1}{n+1}$ for all $1\leq k \leq n$ and all $t\in [n,n+1)$. \end{enumerate} This implies that $x$ has the right properties. \end{proof} We have {obvious} inclusions $\mathcal{D}_L(\pi)\subset \mathcal{D}_T(\pi)$ and $\mathcal{C}_L(\pi)\subset \mathcal{C}_T(\pi)$ which induce a $*$-homomorphism \[\eta: \mathcal{D}_L(\pi)/\mathcal{C}_L(\pi) \to\mathcal{D}_T(\pi)/\mathcal{C}_T(\pi).\] \begin{proposition}\label{prop:1} For any separable $C^{*}$-algebras $A, B$ and any representation $\pi:A \to L(H_B)$, the map $\eta$ is a $*$-isomorphism. \end{proposition} \begin{proof} It is clear from the definitions that $\mathcal{C}_L(\pi)=\mathcal{D}_L(\pi)\cap \mathcal{C}_T(\pi)$ and hence $\eta$ is {injective}. It remains to prove that $\eta$ is surjective. It suffices to show that for any $f\in \mathcal{D}_T(\pi)$ there is $\tilde{f}\in \mathcal{D}_L(\pi)$ such that $\tilde{f}-f\in \mathcal{C}_T(\pi)$. Let $f\in \mathcal{D}_T(\pi)$ be given. 
Let $D$ be the $C^*$-subalgebra of $C_u(T,L(H_B))$ generated by $\pi(A)$ (embedded as constant functions) and $f$, and let $x$ be as in Lemma~\ref{lem:qc-unit}. With this choice of $x$ (that depends on $f$) we define $\tilde{f}=(1-x)f.$ Note that $\tilde{f}=f - x f\in \mathcal{D}_T(\pi)$ since $f,x\in \mathcal{D}_T(\pi)$, and $\tilde{f}-f=-x f\in C_u(T,K(H_B))$ since $x\in C_u(T, K(H_B))$. In particular it follows that $\tilde{f}-f \in \mathcal{C}_T(\pi)$. It remains to verify that $\tilde{f}\in \mathcal{D}_L(\pi)$. This follows since for any $a\in A$, \[[\tilde{f},\pi(a)]=[(1-x)f,\pi(a)]=[\pi(a),x]f+(1-x)[f,\pi(a)],\] where both terms on the right belong to $C_0(T,K(H_B))$: the first by part (a) of Lemma~\ref{lem:qc-unit}, and the second by part (b) applied to $[f,\pi(a)]\in D\cap C_u(T,K(H_B))$. \end{proof} An adaptation of the arguments from the paper \cite{Qiao-Roe} of Qiao and Roe gives: \begin{proposition}\label{prop:2} Let $A,B$ be separable $C^{*}$-algebras and let $\pi:A \to L(H_B)$ be an absorbing representation. Then \begin{itemize} \item[(a)] $K_*(\mathcal{D}_L(\pi))=0$ and hence the boundary map \newline \noindent$\partial:K_*(\mathcal{D}_L(\pi)/\mathcal{C}_L(\pi))\to K_{*+1}(\mathcal{C}_L(\pi))$ is an isomorphism. \item[(b)] The evaluation map at $t=0$ induces an isomorphism \newline \noindent $e_*:K_*(\mathcal{D}_T(\pi)/\mathcal{C}_T(\pi))\to K_*(\mathcal{D}(\pi)/\mathbb{C}(\pi))$. \end{itemize} \end{proposition} \begin{proof} One verifies that if $f\in \mathcal{D}_L(\pi)$, then the formula \[F(t):=(f(t),f(t+1),...,f(t+n),...)\] defines an element $F\in \mathcal{D}_L(\pi^\infty)$. Indeed, \[[F(t),\pi^\infty(a)]=([f(t),\pi(a)],[f(t+1),\pi(a)],...,[f(t+n),\pi(a)],...),\] where each entry belongs to $C_0(T,K(H_B))$ and the $n$th entry has norm $\|[f(t+n),\pi(a)]\|\leq \sup_{s\geq n}\|[f(s),\pi(a)]\|$, which tends to $0$ as $n\to \infty$ since $[f,\pi(a)]\in C_0(T,K(H_B))$. This shows that $[F,\pi^\infty(a)]\in C_u(T,K(H_B^{\infty}))$; moreover $\sup_n\|[f(t+n),\pi(a)]\|\leq \sup_{s\geq t}\|[f(s),\pi(a)]\|\to 0$ as $t\to\infty$, so in fact $[F,\pi^\infty(a)]\in C_0(T,K(H_B^{\infty}))$. With these remarks, the proof of (a) goes just like the proof of Proposition 3.5 from \cite{Qiao-Roe}.
Indeed, define $*$-homomorphisms $\alpha_i:\mathcal{D}_L(\pi)\to \mathcal{D}_L(\pi^\infty)$, $i=1,2,3,4$ by \[\alpha_1(f)=(f(t),0,0,...), \] \[\alpha_2(f)=(0,f(t+1),f(t+2),...), \] \[\alpha_3(f)=(0,f(t),f(t+1),...), \] \[\alpha_4(f)=(f(t),f(t+1),f(t+2),...) .\] It is clear that $\alpha_1+\alpha_2=\alpha_4$. The isometry $v\in L(H_B^\infty)$ defined by $v(h_0,h_1,h_2,...)=(0,h_0,h_1,h_2,...)$ commutes with $\pi^\infty(A)$ and hence $v\in \mathcal{D}_L(\pi^\infty)$. Moreover $\alpha_3(f)=v\alpha_4(f)v^*$ and hence $(\alpha_3)_*=(\alpha_4)_*$ by \cite[Lemma 4.6.2]{HigRoe:khomology}. Using uniform continuity, one shows that $\alpha_3$ is homotopic to $\alpha_2$, via the homotopy $f(t)\mapsto (0,f(t+s),f(t+s+1),...)$, $0\leq s \leq 1$. We deduce that \[(\alpha_1)_*+(\alpha_2)_*=(\alpha_1+\alpha_2)_*=(\alpha_4)_*=(\alpha_3)_*=(\alpha_2)_*\] and hence $(\alpha_1)_*=0$. This concludes the proof of (a), since $(\alpha_1)_*$ is an isomorphism by Corollary~\ref{cor:inclusion}. (b) One follows the proof of Proposition 3.6 from \cite{Qiao-Roe} to show that both $K_*(\mathcal{D}_T^0(\pi))=0$ and $K_*(\mathcal{C}_T^0(\pi))=0$. The desired conclusion will then follow in view of the split exact sequence: \[ \xymatrix{ 0 \ar[r] &\mathcal{D}_T^0(\pi)/\mathcal{C}_T^0(\pi) \ar[r] & \mathcal{D}_T(\pi) /\mathcal{C}_T(\pi)\ar[r] &\mathcal{D}(\pi)/\mathbb{C}(\pi) \ar[r] & 0. } \] Any $f\in \mathcal{D}_T^0(\pi)$ can be extended by $0$ to an element of $C_u(\mathbb{R},L(H_B))$. With this convention, define four maps $\beta_i:\mathcal{D}_T^0(\pi)\to \mathcal{D}_T^0(\pi^\infty)$, $i=1,2,3,4$ by \[\beta_1(f)=(f(t),0,0,...), \] \[\beta_2(f)=(0,f(t-1),f(t-2),...), \] \[\beta_3(f)=(0,f(t),f(t-1),...), \] \[\beta_4(f)=(f(t),f(t-1),f(t-2),...) .\] This definition requires that one verifies that if $f \in \mathcal{D}_T^0(\pi)$, then \[F'(t):=(f(t),f(t-1),...,f(t-n),...)\] defines an element of $ \mathcal{D}_T^0(\pi^\infty)$.
This is clearly the case, since if $f$ is uniformly continuous, then so is $F'$ and moreover, just as argued in \cite{Qiao-Roe}, for each $t$ in a fixed bounded interval only finitely many components of $F'(t)$ are non-zero, and hence $[F'(t),\pi^\infty(a)]\in K(H_B^\infty)$ if $[f(t),\pi(a)]\in K(H_B)$ for all $t\in T$. Note that $(\beta_3)_*=(\beta_4)_*$ since $\beta_3(f)=v\beta_4(f)v^*$, where $v\in \mathcal{D}_T(\pi^\infty)$ is the same isometry as in part (a). Using uniform continuity, one observes that $\beta_3$ is homotopic to $\beta_2$, via the homotopy $f(t)\mapsto (0,f(t-s),f(t-s-1),...)$, $0\leq s \leq 1$. We deduce that \[(\beta_1)_*+(\beta_2)_*=(\beta_1+\beta_2)_*=(\beta_4)_*=(\beta_3)_*=(\beta_2)_*\] and hence $(\beta_1)_*=0$. This shows that $K_*(\mathcal{D}_T^0(\pi))=0$, since $(\beta_1)_*$ is an isomorphism by Corollary~\ref{cor:inclusion}. The proof for the vanishing of $K_*(\mathcal{C}_T^0(\pi))$ is entirely similar. Indeed, with the same notation as above, one observes that if $f \in \mathcal{C}_T^0(\pi)$ then $F'\in \mathcal{C}_T^0(\pi^\infty)$. Moreover, the four maps $\beta_i:\mathcal{D}_T^0(\pi)\to \mathcal{D}_T^0(\pi^\infty)$ restrict to maps $\beta'_i:\mathcal{C}_T^0(\pi)\to \mathcal{C}_T^0(\pi^\infty)$, with $\beta'_3$ homotopic to $\beta'_2$ and $(\beta'_1)_*$ an isomorphism by Corollary~\ref{cor:inclusion}. \end{proof} \begin{theorem}\label{thm:1} Let $A,B$ be separable $C^{*}$-algebras and let $\pi:A \to L(H_B)$ be an absorbing representation.
There are canonical isomorphisms of groups \[\alpha:KK_i(A,B)\stackrel{\cong}\longrightarrow K_i(\mathcal{C}_L(\pi)),\quad i=0,1.\] \end{theorem} \begin{proof} Consider the diagram \[ \xymatrix{ {KK_{i}(A,B)}\ar[r]^-{P} &K_{i+1}(\mathcal{D}(\pi)/\mathbb{C}(\pi))\ar[r]^{\iota_*} & K_{i+1}(\mathcal{D}_T(\pi)/\mathcal{C}_T(\pi))\ar[d]^{{\eta^{-1}_*}} \\ & K_{i}(\mathcal{C}_L(\pi) )& K_{i+1}(\mathcal{D}_L(\pi)/\mathcal{C}_L(\pi))\ar[l]_-{\partial} } \] where $P$ is the Paschke duality isomorphism, see \cite{Paschke}, \cite[Remarque~2.8]{Skandalis:K-nuclear}, \cite[Theorem~3.2]{Thomsen}, and $\iota$ is the canonical inclusion. The maps $\partial$ and ${\iota_*}=e^{-1}_*$ are isomorphisms by Proposition~\ref{prop:2} and $\eta_*$ is an isomorphism by Proposition~\ref{prop:1}. We may thus define $\alpha$ as the composition $\alpha=\partial\circ \eta^{-1}_*\circ \iota_*\circ P$. \end{proof} As a corollary we obtain the following duality theorem, mentioned in the introduction. Recall from the introduction that $\mathbb{C}_L(A)$ stands for $\mathcal{C}_L(\pi)$, where $\pi$ is ample (and thus absorbing, by Theorem~\ref{thm:Voiculescu-classic}), and $A$ is identified with $\pi(A)$. \begin{theorem}\label{thm:cor} For any separable $C^{*}$-algebra $A$ there are canonical isomorphisms of groups \(K^i(A)\cong K_i(\mathbb{C}_L(A)),\quad i=0,1.\) \qed \end{theorem} \section{An inverse map}\label{sec:inv} Let $\alpha: KK_i(A,B)\stackrel{\cong}\longrightarrow K_i(\mathcal{C}_L(\pi))$ be the isomorphism of Theorem~\ref{thm:1}. Recall that $K(H_B)\cong B \otimes K(H)$.
Consider the $*$-homomorphism \[\mathbf{\Phi}: \mathcal{D}_L(\pi) \otimes_{\max} A\to \frac{C_u(T, L(H_B))}{C_0(T, K(H_B))}\] defined by $\mathbf{\Phi}(f \otimes a)=f \pi(a) $ and its restriction to $\mathcal{C}_L(\pi) \otimes_{\max} A $ \[{\boldsymbol\varphi}: \mathcal{C}_L(\pi) \otimes_{\max} A \to \frac{C_u(T, K(H_B))}{C_0(T, K(H_B))}.\] We want $\boldsymbol\varphi$ to define a class in $E$-theory that we can take products with, but have to be a little careful due to the non-separability of the $C^{*}$-algebra $\mathcal{C}_L(\pi) \otimes_{\max} A$. Just as in the case of the $KK$-groups \cite{Skandalis:K-nuclear}, if $C$ is any $C^{*}$-algebra and $B$ is a non-separable $C^{*}$-algebra one defines $E_{sep}(B,C)=\varprojlim_{\,B_1} E(B_1,C)$, with $B_1 \subset B$ and $B_1$ separable. Moreover if $D$ is separable, then $E(D,B)=\varinjlim_{\,B_1} E(D,B_1)$, with $B_1\subset B$ and $B_1$ separable. With these adjustments, one has a well-defined product \[E(D,B)\times E_{sep}(B,C)\to E(D,C).\] Moreover, it is clear that $[[\boldsymbol\varphi]]$ defines an element of the group $E_{sep}(\mathcal{C}_L(\pi) \otimes_{\max} A ,B)$. Recall the isomorphism $K_i(\mathcal{C}_L(\pi))\cong E_i(\mathbb{C}, \mathcal{C}_L(\pi))$. We use the product \[E_i(\mathbb{C}, \mathcal{C}_L(\pi))\times E_{sep}(\mathcal{C}_L(\pi) \otimes_{\max} A ,B) \to E_i(A, B)\] to define a map $\beta: K_i(\mathcal{C}_L(\pi)) \to E_i(A, B)$ by $\beta (z) = [[\boldsymbol\varphi]]\circ (z \otimes \mathrm{id}_A)$. The map $\beta$ is an inverse of $\alpha$ in the following sense. \begin{theorem}\label{inverse the} The composition $\beta \circ \alpha$ coincides with the natural map \noindent $KK_i(A,B)\to E_i(A,B)$, $i=0,1$. \end{theorem} \begin{proof} We will give the proof for the odd case $i=1$ and leave the even case for the reader. Recall that the $E$-theory group $E_1(A,B)$ of Connes and Higson \cite{Con-Hig:etheory} is isomorphic to $[[SA, K(H_B)]]$ by a desuspension result from \cite{DadLor:unsusp}. 
For two continuous functions $f,g:T \to L(H_B)$ we will write $f(s)\sim g(s)$ (or $f(t)\sim g(t)$) if $f-g\in C_0(T,K(H_B))$. Let $\{\varphi_s: \mathcal{C}_L(\pi) \otimes_{\max} A \to K(H_B)\}_{s\in T}$ be an asymptotic homomorphism representing $\boldsymbol\varphi$. More precisely, take $\varphi$ to be a set-theoretic lifting of $\boldsymbol\varphi$. This means that $\varphi_s(f\otimes a)\sim f(s)\pi(a)$. The composition $\beta \circ \alpha: KK_1(A,B)\to E_1(A,B)$ is computed as follows. Let $y\in KK_1(A,B)$ and let $z=Py\in K_0(\mathcal{D}(\pi)/\mathbb{C}(\pi))$ be its image under the Paschke duality isomorphism $P:KK_1(A,B)\to K_0(\mathcal{D}(\pi)/\mathbb{C}(\pi))$. Let $z$ be represented by a self-adjoint element $e\in \mathcal{D}(\pi)\subset \mathcal{D}_T(\pi)$ whose image in $\mathcal{D}(\pi)/\mathbb{C}(\pi)$ is an idempotent $\dot{e}$. We identify $\mathcal{D}(\pi)$ with the $C^{*}$-subalgebra of constant functions in $\mathcal{D}_T(\pi)$. Choose an element $x\in C_u(T,K(H_B))$ as in Lemma~\ref{lem:qc-unit} with respect to the (separable) $C^{*}$-subalgebra $D$ of $C_u(T,L(H_B))$ generated by $\pi(A)$, $e$, and $K(H_B)$. Then both $[x,\pi(a)]$ and $(1-x)[e,\pi(a)]$ belong to $C_0(T,K(H_B))$ for all $a\in A$, and moreover $(1-x)e\in \mathcal{D}_L(\pi)$ as $$ [(1-x)e,\pi(a)]=[1-x,\pi(a)]e+(1-x)[e,\pi(a)]\in C_0(T,K(H_B)) $$ for all $a\in A$. Let $e_L=(1-x)e$ and let $\dot{e}_L$ be its image in $ \mathcal{D}_L(\pi)/\mathcal{C}_L(\pi)$. Under the isomorphism $ \mathcal{D}_L(\pi)/\mathcal{C}_L(\pi) \cong \mathcal{D}_T(\pi)/\mathcal{C}_T(\pi)$ of Proposition \ref{prop:1} we see that $\dot{e}_L$ is just the image of $e \in \mathcal{D}_T(\pi)$ in the quotient, which is an idempotent since $\dot{e}$ is so. It is then clear that $\eta^{-1}_*\iota_*(z)=[\dot{e}_L]$. Define a $*$-homomorphism $\ell:\mathbb{C}\to \mathcal{D}_L(\pi)/\mathcal{C}_L(\pi)$ by $\ell(1)=\dot{e}_L$ and set $S=C_0(0,1)$.
Then $(\beta\circ \alpha )(y)$ is represented by the composition of the asymptotic homomorphisms from the following diagram. \begin{equation}\label{eqn:am} \xymatrix{ S \otimes \mathbb{C}\otimes A \ar[r]^-{1 \otimes \ell \otimes 1} & S \otimes \mathcal{D}_L(\pi)/\mathcal{C}_L(\pi) \otimes A\ar[r]^-{ \delta_t \otimes 1} &\mathcal{C}_L(\pi)\otimes A \ar[r]^{\varphi_s}& K(H_B), } \end{equation} where here and throughout the rest of the proof the tensor products are maximal ones, and the map labelled $\delta_t$ is defined by taking the product with a canonical element $\delta$ of $E_{1,sep}(\mathcal{D}_L(\pi)/\mathcal{C}_L(\pi),\mathcal{C}_L(\pi))$ associated to the extension \[ 0 \to \mathcal{C}_L(\pi)\to \mathcal{D}_L(\pi) \to \mathcal{D}_L(\pi)/\mathcal{C}_L(\pi) \to 0\] that we now discuss. Fixing a separable $C^{*}$-subalgebra $\dot{M}$ of $\mathcal{D}_L(\pi)/\mathcal{C}_L(\pi)$, the image of $\delta$ in $E_1(\dot{M},\mathcal{C}_L(\pi))$ is defined as follows. Choose a separable $C^{*}$-subalgebra $M$ of $\mathcal{D}_L(\pi)$ that surjects onto $\dot{M}$, and for each $\dot{m}\in \dot{M}$ choose a lift $m\in M$. Let $(v_t)_{t\in T}$ be a positive, contractive, and continuous approximate unit for $M\cap \mathcal{C}_L(\pi)$ which is quasicentral in $M$. Then for $g\in S=C_0(0,1)$, $\delta$ is characterized by stipulating that $\delta_t(g\otimes \dot{m})$ satisfies $$ \delta_t(g\otimes \dot{m})\sim g(v_t)m $$ (the choices of $(v_t)$ and the various lifts do not matter up to homotopy). In our case, to compute the composition that we need, let $M$ be a separable $C^*$-subalgebra of $\mathcal{D}_L(\pi)$ containing $e$ and $x$, and let $(v_t)$ be an approximate unit for $M\cap \mathcal{C}_L(\pi)$ that is quasicentral in $M$. On the level of elements, we can now concretely describe the composition in line \eqref{eqn:am} as follows.
If $g\in S=C_0(0,1)$ and $a\in A$, then under the asymptotic morphism $\{\mu_t: SA \to K(H_B)\}_{t}$ defined by diagram \eqref{eqn:am}, elementary tensors $g\otimes a$ are mapped as follows \begin{equation}\label{eq:nn} g\otimes a \xmapsto{\quad} g\otimes \dot{e}_L \otimes a \xmapsto{\,\,\delta_t \,\,} g(v_t)(1-x)e \otimes a\xmapsto{\,\,\varphi_{s(t)} \,\,} g(v_t(s(t)))\big(1-x (s(t) ) \big)e \pi(a)\end{equation} for any positive map $t\mapsto s(t)$ which increases to $ \infty$ sufficiently fast. Since the map $t\mapsto x(t)$ is an approximate unit of $K(H_B)$, $(1-x)y \in C_0(T,K(H_B))$ for all $y\in K(H_B)$. In particular it follows that $\big(1-x (s(t) ) \big)e[e,\pi(a)]\sim 0$ since $[e,\pi(a)]\in K(H_B)$. Since $e\pi(a)=e\pi(a)e+e[e,\pi(a)]$, it follows from \eqref{eq:nn} that \begin{equation}\label{eqn:am1} \mu_t(g\otimes a)\sim g(v_t(s(t)))\big(1-x (s(t) ) \big)e \pi(a)e. \end{equation} On the other hand, the natural map $KK_1(A,B) \to E_1(A,B)$, maps $y$ to $[[\gamma_t]]$, where $\{\gamma_t: S\otimes A \to K(H_B)\}_t$ is described in \cite{Con-Hig:etheory} as follows. Consider the extension: \[0 \to K(H_B) \to e\pi(A) e+K(H_B) \to A \to 0.\] Let $(u_t)_{t\in T}$ be a contractive, positive, and continuous approximate unit of $K(H_B)$ which is quasicentral in $e\pi(A) e+K(H_B)$. Then \[\gamma_t(g \otimes a)\sim g(u_t)e\pi(a)e.\] Applying Lemma~\ref{lem:qc-unit} (this time with $D$ the $C^{*}$-subalgebra of $C_u(T,L(H_B))$ generated by $e$, $\pi(A)$, $K(H_B)$, and $t\mapsto x(s(t))$), we can choose $(u_t)_{t}$ such that $\lim_{t\to \infty} (1-u_t) x(s(t))=0$. Since the $C^{*}$-algebra $C_0[0,1)$ is generated by the function $f(\theta)= 1-\theta$, it follows that $\lim_{t\to \infty} g(u_t) x(s(t))=0$ for all $g\in C_0[0,1)$, and in particular for all $g\in C_0(0,1)$. Our goal now is to verify that $(\mu_t)_t$ is homotopic to $(\gamma_t)_t$. 
Due to the choice of $(u_t)_{t}$ and the comments above, we have that \begin{equation}\label{eqn:am2} \gamma_t(g\otimes a)\sim g(u_t)e\pi(a)e \sim g(u_t)\big(1-x (s(t) ) \big)e\pi(a)e, \end{equation} for all $a\in A$ and $g\in C_0(0,1)$. Finally, define $w_t^{(r)}=(1-r)\,v_t(s(t))+r\,u_t$, $0\leq r\leq 1$. As \[ \left[ g \big( w_t^{(r)} \big), \big(1-x (s(t) ) \big) e \pi(a) e \right] \to 0 \text{ as } t\to\infty \] for all $r\in [0,1]$ and $a\in A$, there is an asymptotic morphism $H_t: SA \to C[0,1]\otimes K(H_B)$ defined by the condition \[H_t^{(r)}(g\otimes a)\sim g \big(w_t^{(r)}\big) \big(1-x (s(t) ) \big) e \pi(a) e.\] This gives the desired homotopy joining $(\mu_t)_t$ with $(\gamma_t)_t$. \end{proof} As suggested by the referee, we finish this section by sketching another proof which is maybe a little less self-contained, but more conceptual. The proof below is analogous to the approach used by Qiao and Roe to establish \cite[Proposition 4.3]{Qiao-Roe}. The basic idea in their approach is to apply naturality of the connecting map in $E$-theory for the diagram of strictly commutative asymptotic morphisms $$ \xymatrix@C=0.59cm{ 0 \ar[r] & \mathcal{C}_L(\pi)\otimes_{\max} A \ar[r] \ar[d]^-{\varphi_t} & \mathcal{D}_L(\pi)\otimes_{\max}A \ar[r] \ar[d]^-{{\phi_t}} & (\mathcal{D}_L(\pi)/\mathcal{C}_L(\pi))\otimes_{\max} A \ar[r] \ar[d]^-{\bar{\phi}_t} & 0 \\ 0 \ar[r] & K(H_B) \ar[r] & L(H_B) \ar[r] & L(H_B)/K(H_B) \ar[r] & 0 ~,} $$ where $\phi_t$ and $\varphi_t$ represent the asymptotic morphisms induced by the $*$-homomorphisms $\mathbf{\Phi}$ and ${\boldsymbol\varphi}$ from the beginning of this section. The family $\bar{\phi}_t$ is the quotient family induced by $\phi_t$, and consists of $*$-homomorphisms. 
Naturality of the boundary map in $E$-theory in this case amounts to the equality \begin{equation}\label{nat-bound}[[\varphi_t]]\circ [[\delta_t \otimes \text{id}_A]]=[[\gamma_t]]\circ [[\bar{\phi}_t]], \end{equation} where $\delta_t$ is the boundary map for the top sequence of the diagram before tensoring with $A$ and $\gamma_t$ is the boundary map for the bottom sequence. See \cite[Lemme 10]{Con-Hig:etheory} for the definition of the boundary maps associated to extensions (here and elsewhere below one should use limits to deal with the non-separable algebras involved in the way discussed earlier in this section). The naturality property of the boundary map with respect to general asymptotic morphisms that was discussed in \cite[Thm. 5.3]{Guentner} seems to be the closest statement in the literature to the equality \eqref{nat-bound}, but it is nonetheless not sufficiently general to justify the equality. However, one can combine the arguments from the second part of the proof of Theorem~\ref{inverse the} with those from \cite{Guentner} to verify naturality in full generality and in particular to justify \eqref{nat-bound}. Now \eqref{nat-bound} allows us to conceptualize the proof of Theorem~\ref{inverse the}. Let $y\in KK_{i}(A,B)$ and let $z=Py\in K_{i+1}(\mathcal{D}(\pi)/\mathbb{C}(\pi))$ be its image under the Paschke duality isomorphism $P:KK_{i}(A,B)\to K_{i+1}(\mathcal{D}(\pi)/\mathbb{C}(\pi))$. Consider $\eta^{-1}_*\iota_*(z) \in K_{i+1}(\mathcal{D}_L(\pi)/\mathcal{C}_L(\pi))\cong E_{i+1}(\mathbb{C},\mathcal{D}_L(\pi)/\mathcal{C}_L(\pi))$, where the maps $\iota_*$ and $\eta_*$ are isomorphisms as in the proof of Theorem~\ref{thm:1}. We may view $\eta^{-1}_*\iota_*(z)\otimes[[\text{id}_{A}]]$ as an element of $E_{i+1}\left(A,\mathcal{D}_L(\pi)/\mathcal{C}_L(\pi)\otimes_{\max}A\right)$. 
From \eqref{nat-bound} we obtain that \begin{equation}\label{nat-bound-z}[[\varphi_t]]\circ [[\delta_t \otimes \text{id}_A]]\circ (\eta^{-1}_*\iota_*(z)\otimes[[\text{id}_{A}]])=[[\gamma_t]]\circ [[\bar{\phi}_t]]\circ (\eta^{-1}_*\iota_*(z)\otimes[[\text{id}_{A}]]). \end{equation} The left hand side of \eqref{nat-bound-z} represents the element $(\beta\circ \alpha) (y)$ of $E_{i}(A,B)$ by the very definition of $\alpha$ and $\beta$. In order to identify the right hand side of \eqref{nat-bound-z}, it is useful to note that each individual map $\bar{\phi}_t$ is a $*$-homomorphism given by $\kappa \circ (\operatorname{ev}_t \otimes \text{id}_A )$, where \[ \operatorname{ev}_t \colon \mathcal{D}_L(\pi)/\mathcal{C}_L(\pi) \to \mathcal{D}(\pi)/\mathbb{C}(\pi) \] is the evaluation map at $t$ and \[ \kappa \colon \left(\mathcal{D}(\pi)/\mathbb{C}(\pi)\right)\otimes_{\max}A\to L(H_B)/K(H_B) , ~ [b] \otimes a \mapsto [b \cdot \pi(a)] \] is the ``multiplication'' $*$-homomorphism. Thus the asymptotic morphism $\{\bar{\phi}_t\}$ is homotopic to the constant asymptotic morphism given by $\bar{\phi}_0$, which is equal to $\kappa \circ (\operatorname{ev}_0 \otimes \text{id}_A )$. Hence the right hand side of \eqref{nat-bound-z} is equal to \[ [[\gamma_t]]\circ [[\kappa]] \circ ((\operatorname{ev}_0)_* \eta^{-1}_*\iota_*(z)\otimes[[\text{id}_{A}]]) . \] It follows from the commutative diagram of $*$-homomorphisms \[ \xymatrix{ \mathcal{D}(\pi) / \mathbb{C}(\pi) \ar[r]^{\operatorname{id}} \ar[d]^{\iota} & \mathcal{D}(\pi) / \mathbb{C}(\pi) \\ \mathcal{D}_T(\pi) / \mathcal{C}_T(\pi) \ar[ur]^{\operatorname{ev}_{0}} & \mathcal{D}_L(\pi) / \mathcal{C}_L(\pi) \ar[l]^{\eta} \ar[u]^{\operatorname{ev}_{0}} } \] that $(\operatorname{ev}_0)_* \eta^{-1}_*\iota_*(z) = z$. 
This allows us to simplify the right hand side of \eqref{nat-bound-z} further to \[ [[\gamma_t]]\circ [[\kappa]] \circ (z \otimes[[\text{id}_{A}]]) \] where $z$ is viewed as an element in $E_{i+1}(\mathbb{C},\mathcal{D}(\pi)/\mathbb{C}(\pi))$. This can be seen to be equal to the image of $y$ under the natural map $KK_{i}(A,B)\to E_{i}(A,B)$. Indeed, focusing on the odd case, where $y\in KK_1(A,B)$ and $z=Py\in K_0(\mathcal{D}(\pi)/\mathbb{C}(\pi))$, we may choose $e\in \mathcal{D}(\pi)$ as in the first part of the proof of Theorem~\ref{inverse the}, such that $z=[\dot{e}]\in K_0(\mathcal{D}(\pi)/\mathbb{C}(\pi))$. Then the $*$-homomorphism $a \in A \mapsto [ e \cdot \pi(a) ] \in L(H_B)/K(H_B)$, which represents $[[\kappa]] \circ (z \otimes[[\text{id}_{A}]])$, is the Busby invariant of the extension corresponding to $e\in \mathcal{D}(\pi)$. Hence its composition with the asymptotic morphism $\{\gamma_t\} \colon L(H_B)/K(H_B) \to K(H_B)$ represents the image of $y$ under the natural map $KK_1(A,B)\to E_1(A,B)$. \bibliographystyle{abbrv}
\section{Introduction} Predicting the actions of others in complex and strategic settings is an important facet of intelligence that guides our interactions---from walking in crowds to negotiating multi-party deals. Recovering such behavior from merely a few observations is an important and challenging machine learning task. \begin{comment} Despite the development of numerous powerful reinforcement learning algorithms, policies that mimic observed behavior are often preferred to, or even required over, learned policies. For example, systems that are partially automated, but maintain humans in the loop, like monitoring a power plant or operating heavy machinery, must be predictable and are often closely regulated. In these environments where the cost of failure is high, learning is often done with physical simulations. These tasks are notoriously hard to accurately model with a computer, and errors not present in simulation often appear when the system is deployed. \end{comment} While mature computational frameworks for decision-making have been developed to {\bf prescribe} the behavior that an agent {\em should} perform, such frameworks are often ill-suited for {\bf predicting} the behavior that an agent {\em will} perform. Foremost, the standard assumption of decision-making frameworks that a criterion for preferring actions ({\em e.g.}, costs, motivations and goals) is known {\it a priori} often does not hold. Moreover, real behavior is typically not consistently optimal or completely rational; it may be influenced by factors that are difficult to model or subject to various types of error when executed. Meanwhile, the standard tools of statistical machine learning ({\em e.g.}, classification and regression) may be equally poorly matched to modeling purposeful behavior; an agent's goals often succinctly, but implicitly, encode a strategy that would require a tremendous number of observations to learn. 
A natural approach to mitigate the complexity of recovering a full strategy for an agent is to consider identifying a compactly expressed utility function that {\em rationalizes} observed behavior: that is, identify rewards for which the demonstrated behavior is optimal and then leverage these rewards for future prediction. Unfortunately, the problem is fundamentally ill-posed: in general, many reward functions can make behavior seem rational, and in fact, the trivial, everywhere zero reward function makes {\bf all} behavior appear rational~\citep{ng2000algorithms}. Further, after removing such trivial reward functions, there may be {\bf no} reward function for which the demonstrated behavior is optimal as agents may be imperfect or the world they operate in may only be approximately represented. In the single-agent decision-theoretic setting, inverse optimal control methods have been used to bridge this gap between the prescriptive frameworks and predictive applications~\citep{abbeel2004,ratliff2006,ziebart2008,ziebart2010}. Successful applications include learning and prediction tasks in personalized vehicle route planning~\citep{ziebart2008}, predictive cursor control~\citep{ziebart2012}, robotic crowd navigation~\citep{henry2010}, quadruped foot placement and grasp selection~\citep{ratliff2009}. A reward function is learned by these techniques that both explains demonstrated behavior and approximates the optimality criteria of decision-theoretic frameworks. As these methods only capture a single reward function and do not reason about competitive or cooperative motives, inverse optimal control proves inadequate for modeling the strategic interactions of multiple agents. In this article, we consider the game-theoretic concept of regret as a stand-in for the optimality criteria of the single-agent work. As with the inverse optimal control problem, the result is fundamentally ill-posed. 
We address this by requiring that, for any utility function linear in known features, our learned model must have no more regret than that of the observed behavior. We demonstrate that this requirement can be re-cast as a set of equivalent convex constraints that we denote the {\em inverse correlated equilibrium} (ICE) polytope. As we are interested in the effective prediction of behavior, we will use a maximum entropy criterion to select behavior from this polytope. We demonstrate that optimizing this criterion leads to minimax-optimal prediction of behavior subject to approximate rationality. We consider the dual of this problem and note that it generalizes the traditional log-linear maximum entropy family of problems~\citep{della2002inducing}. We provide a simple and computationally efficient gradient-based optimization strategy for this family and show that only a small number of observations are required for accurate prediction and transfer of behavior. We conclude by considering a variety of experimental results, ranging from predicting travel routes in a synthetic routing game to a market-entry econometric data analysis exploring regulatory effects on hotel chains in Texas. Before we formalize imitation learning in matrix games, motivate our assumptions and describe and analyze our approach, we review related work. \section{Related Work} Many research communities are interested in computational modeling of human behavior and, in particular, in modeling rational and strategic behavior with incomplete knowledge of utility. Here we contrast the contributions of three communities by overviewing their interests and approaches. We conclude by describing our contribution in the same light. The econometrics community combines microeconomics and statistics to investigate the empirical properties of markets from sales records, census data and other publicly available statistics. 
McFadden first considered estimating consumer preferences for transportation by assuming them to be rational utility maximizers~\citep{mcfadden74}. Berry, Levinsohn and Pakes estimate both supply and demand-side preferences in settings where the firms must price their goods strategically~\citep{berry95}. Their initial work described a procedure for measuring the desirability of certain automobile criteria, such as fuel economy and features like air conditioning, to determine substitution effects. The Berry, Levinsohn and Pakes approach and its derivatives can be crudely described as model-fitting techniques. First, a parameterized class of utility functions is assumed for both the producers and consumers. Variables that are unobservable to the econometrician, such as internal production costs and certain aspects of the consumer's preferences, are known as {\em shocks} and are modeled as independent random variables. The draws of these random variables are known to the market's participants, but only their distributions are known to the econometrician. Second, an equilibrium pricing model is assumed for the producers. The consumers are typically assumed to be utility maximizers having no strategic interactions within the market. Finally, an estimation technique is optimistically employed to determine a set of parameter values that are consistent with the observed behavior. Ultimately, it is from these parameter values that one derives insight into the unobservable characteristics of the market. Unfortunately, neither efficient sample complexity nor computational complexity bounds are generally available for this family of approaches. A variety of questions have been investigated by econometricians using this line of reasoning. Petrin investigated the competitive advantage of being the first producer in a market by considering the introduction of the minivan to the North American automotive industry~\citep{petrin2002}. 
Nevo provided evidence against price-fixing in the breakfast cereal market by measuring the effects of advertising~\citep{nevo01}. Others have examined the mid-scale hotel market to determine the effects of different regulatory practices~\citep{suzuki2010} and how overcapacity can be used to deter competition~\citep{conlin06}. As a general theme, the econometricians are interested in the intentions that guide behavior. That is, the observed behavior is considered to be the truth and the decision-making framework used by the producers and consumers is known {\em a priori}. The decision theory community is interested in human behavior on a more individual level. They, too, note that out-of-the-box game theory fails to explain how people act in many scenarios. As opposed to viewing this as a flaw in the theories, they focus both on how to alter the games that model our interactions and on devising human-like decision-making algorithms. The former can be achieved through modifications to the players' utility functions, which are known {\em a priori}, to incorporate notions such as risk aversion and spite~\citep{myers60,erev2008}. The latter approaches often tweak learning algorithms by integrating memory limitations or emphasizing recent or surprising observations~\citep{camerer1999,erev2005}. For example, the Iterative Weighting and Sampling algorithm (I-SAW) is more likely to choose the action with the highest estimated utility, but recent observations are weighted more highly and, in the absence of a surprising observation, the algorithm favors repeating previous actions~\citep{erev2010}. Memory limitations, or more generally bounded rationality, have also led to novel equilibrium concepts such as the quantal response equilibrium~\citep{mckelvey1995}. This concept assumes the players' strategies have faults, but that small errors, in terms of forgone utility, are much more common than large errors. 
Contrasting with the econometricians, the decision theory community is mainly interested in the algorithmic process of human decision-making. The players' preferences are known and observed behavior serves only to validate or inform an experimental hypothesis. Finally, the machine learning community is interested in predicting and imitating the behavior of humans and expert systems. Much work in this area focuses on the single-agent setting and in such cases it is known as \emph{inverse optimal control} or \emph{inverse reinforcement learning}~\citep{abbeel2004,ng2000algorithms}. Here, the observed behavior is assumed to be an approximately optimal solution to an unknown decision problem. At a high level, known solutions typically summarize the behavior as parameters to a low dimensional utility function. A number of methods have been introduced to learn these weights, including margin-based methods~\citep{ratliff2006} that can utilize black box optimal control or planning software, as well as maximum entropy-based methods with desirable predictive guarantees~\citep{ziebart2008}. These utility weights are then used to mimic the behavior in similar situations through a decision-making algorithm. Unlike the other two communities, it is the predictive performance of the learned model that is most pivotal and noisy observations are expected and managed by those techniques. This article extends our prior publication---a novel maximum entropy approach for predicting behavior in strategic multi-agent domains~\citep{waugh11,waugh11arXiv}. We focus on the computationally and statistically efficient recovery of good estimates of behavior (the only observable quantity) by leveraging rationality assumptions. The work presented here further develops those ideas in two key ways. First, we consider distributions over games and parameterized families of deviations using the notion of conditional entropy. 
Second, this work enables more fine-grained assumptions regarding the players' possible preferences. Finally, this work presents the analysis of datasets from both the econometric and decision theory communities, comparing and contrasting the methods presented with statistical methods that are blind to the strategic aspects of the domains. Before describing our approach, we will introduce the necessary notation and background. \section{Preliminaries} \subsection*{Notation} \newcommand{\Vsym}{V} \newcommand{\Ksym}{\mathcal{K}} \newcommand{\Rsym}{\mathbb R} \newcommand{\inner}[2]{\langle #1,#2 \rangle} \newcommand{\func}[3]{#1 : #2\rightarrow #3} \newcommand{\norm}[1]{\Vert#1\Vert} \newcommand{\abs}[1]{\vert#1\vert} Let $\Vsym$ be a Hilbert space with an inner product $\func{\inner{\cdot}{\cdot}}{\Vsym\times\Vsym}{\mathbb R}$. For any set $\Ksym\subseteq\Vsym$, let $\mathcal{K}^*=\{x \mid \inner{x}{y} \ge 0, \forall y\in\Ksym \}$ be its dual cone. We let $\norm{v}_2 = \sqrt{\inner{v}{v}}$, and, if $V$ is of finite dimension with orthonormal basis $\{e_1,\ldots,e_K\}$, let $\norm{v}_1 = \sum_{k=1}^K \abs{\alpha_k}$ where $v = \sum_{k=1}^K\alpha_k e_k$. Typically, we will take $\Vsym=\Rsym^{K}$ and use the standard inner product. \subsection*{Game Theory} Matrix games are the canonical tool of game theorists for representing strategic interactions ranging from illustrative toy problems, such as the ``Prisoner's Dilemma'' and the ``Battle of the Sexes'' games, to important negotiations, collaborations, and auctions. Unlike the traditional definition~\citep{osborne1994course}, in this work we model games where only the features of the players' utility are known and not the utilities themselves. 
\newcommand{\Gamesym}{\Gamma} \newcommand{\playeri}{i} \newcommand{\utilityi}[1]{u^{#1}_{\playeri}} \begin{definition} A \defword{vector-valued normal-form game} is a tuple $\Gamma=(\mathcal N,\mathcal A,\utilityi{\Gamma})$ where \begin{itemize} \item $\mathcal N$ is the finite set of the game's $N$ \defword{players}, \item $\mathcal A=\times_{i\in\mathcal N}A_{\playeri}$ is the set of the game's \defword{outcomes} or \defword{joint-actions}, where \item $A_{\playeri}$ is the finite set of \defword{actions} or \defword{strategies} for player $i$, and \item $\func{\utilityi{\Gamma}}{\mathcal A}{\Vsym}$ is the \defword{utility feature function} for player $i$. \end{itemize} We let $A=\max_{i\in\mathcal N}\abs{A_{\playeri}}$. \end{definition} \newcommand{\outcome}{a} \newcommand{\utilityif}[2]{\utilityi{#1}(#2)} \newcommand{\utilityiw}[3]{\utilityi{#1}(#2|#3)} Players aim to maximize their \defword{utility}, a quantity measuring happiness or individual well-being. We assume that the players' utility is a common linear function of the utility features. This will allow us to treat the players anonymously should we so desire. One can expand the utility feature space if separate utility functions are desired. We write the utility for player $i$ at outcome $\outcome$ under utility function $w\in\Vsym$ as \begin{align} \utilityiw{\Gamma}{\outcome}{w} &=\inner{\utilityif{\Gamma}{\outcome}}{w}. \end{align} In contrast to the standard definition of normal-form games, where the utility functions for game outcomes are known, in this work we assume that the \defword{true utility function}, determined by $w^*$, which governs observed behavior, is unknown. This allows us to model real-world scenarios where a cardinal utility is not available or is subject to personal taste. 
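To make the definition concrete, the following is a minimal Python sketch of a two-player vector-valued normal-form game and the induced linear utility $u^{\Gamma}_{i}(a|w)=\langle u^{\Gamma}_{i}(a), w\rangle$. All identifiers, dimensions, and numbers here are our own illustrative choices, not code from this article.

```python
import numpy as np

# Toy vector-valued normal-form game: 2 players, 2 actions each, and a
# 2-dimensional utility feature function (all names and numbers are
# illustrative only).  utility_features[i, a1, a2] is the feature vector
# u_i(a) of player i at the joint action a = (a1, a2).
rng = np.random.default_rng(0)
utility_features = rng.standard_normal((2, 2, 2, 2))

def utility(i, outcome, w):
    """Utility <u_i(a), w> of player i at joint action `outcome` under weights w."""
    return utility_features[(i, *outcome)] @ w

w_true = np.array([1.0, 0.5])  # stand-in for the unobserved true weights w*
print(utility(0, (0, 1), w_true))
```

Note that the utility is linear in $w$; this linearity is what later lets regret constraints over all utility functions be expressed through the feature vectors alone.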
Consider, for instance, a scenario where multiple drivers each choose a route among shared roads. Each outcome, which specifies a travel plan for all drivers, has a variety of easily measurable quantities that may impact the utility of a driver, such as travel time, distance, average speed, number of intersections and so on, but how these quantities map to utility depends on the internal preferences of the drivers. \newcommand{\simplex}[1]{\Delta_{#1}} We model the players using a \defword{joint strategy}, $\sigma^{\Gamesym}\in\simplex{\mathcal A}$, which is a distribution over the game's outcomes. Coordination between players can exist; thus, this distribution need not factor into independent strategies for each player. Conceptually, a trusted signaling mechanism, such as a traffic light, can be thought to sample an outcome from $\sigma^{\Gamesym}$ and communicate to each player $\outcome_{i}$, its portion of the joint-action. Even in situations where players are incapable of communication prior to play, correlated play is attainable through repetition. In particular, there are simple learning dynamics that, when employed by each player independently, converge to a correlated solution~\citep{foster96,hart2000}. \begin{comment} Given a joint strategy, players can evaluate their \defword{expected utility}. We write the expected utility features for player $i$ when players play joint strategy $\sigma^{\Gamesym}$ as \begin{align} \utilityif{\Gamma}{\sigma^{\Gamesym}} &= \expectation{\outcome\distributed\sigma^{\Gamesym}}{\utilityif{\Gamma}{\outcome}}. \end{align} We let $\utilityiw{\Gamma}{\sigma^{\Gamesym}}{w}$ denote expected utility for player $i$ under utility function $w$. 
\end{comment} \newcommand{\pideviationf}[1]{f_{i}(#1)} \newcommand{\devswap}{\Phi^{\mbox{swap}}} If a player can benefit through a unilateral deviation from the proposed joint strategy, the strategy is unstable. As we are considering coordinated strategies, a player may condition its deviations on the recommended action. That is, a \defword{deviation for player $i$} is a function $\func{f_{\playeri}}{A_{\playeri}}{A_{\playeri}}$~\citep{agt4}. To ease the notation, we overload $\func{f_{\playeri}}{\mathcal A}{\mathcal A}$ to be the function that modifies only player $i$'s action according to $f_{\playeri}$. \newcommand{\xsym}{x} \newcommand{\ysym}{y} \newcommand{\switchixyf}[1]{\operatorname{switch}^{\xsym\rightarrow\ysym}_{\playeri}(#1)} \newcommand{\fixediyaf}[1]{\operatorname{fixed}^{\rightarrow\ysym}_{\playeri}(#1)} \newcommand{\devexternal}{\Phi^{\mbox{ext}}} Two well-studied classes of deviations are the switch deviation, \begin{align} \switchixyf{\outcome_{\playeri}} &= \left\{\begin{array}{cl} y & \mbox{if $\outcome_{\playeri} = x$}\\ \outcome_{\playeri} & \mbox{otherwise}, \end{array}\right. \end{align} which substitutes one action for another, and the fixed deviation, \begin{align} \fixediyaf{\outcome_{\playeri}} &= y, \end{align} which does not condition its change on the prescribed action. A \defword{deviation set}, denoted $\Phi$, is a set of deviation functions. We call the set of all switch deviations the internal deviation set, $\Phi^{\mbox{int}}$, and the set of all fixed deviations the external deviation set, $\devexternal$. 
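As a sanity check on these definitions, here is a small self-contained Python sketch (identifiers and numbers are ours, not the article's) of switch and fixed deviations for player 0, together with the instantaneous regret features $r^{\Gamma}_{f_i}(a)=u^{\Gamma}_{i}(f_i(a))-u^{\Gamma}_{i}(a)$ they induce:

```python
import numpy as np

# Illustrative 2-player game from player 0's perspective: u0[a0, a1] is the
# 2-dimensional utility feature vector of player 0 at joint action (a0, a1).
rng = np.random.default_rng(1)
n_actions = 3
u0 = rng.standard_normal((n_actions, n_actions, 2))

def switch(x, y):
    """Deviation for player 0 replacing action x by y, leaving others unchanged."""
    return lambda a0: y if a0 == x else a0

def fixed(y):
    """Deviation for player 0 that always plays y."""
    return lambda a0: y

def regret_features(f, outcome):
    """Instantaneous regret features r_f(a) = u0(f(a)) - u0(a) for player 0."""
    a0, a1 = outcome
    return u0[f(a0), a1] - u0[a0, a1]

# A switch deviation is inert unless its trigger action is recommended:
print(regret_features(switch(0, 2), (1, 1)))  # the zero vector, since 1 != 0
print(regret_features(fixed(2), (1, 1)))      # generally non-zero
```

When the trigger of a switch deviation matches the recommended action, it agrees with the corresponding fixed deviation, which is the intuition behind comparing the internal and external deviation sets.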
The set $\devswap$ is the set of all deterministic deviations. Given that the other players indeed play their recommended actions, there is no strategic advantage to considering randomized deviations. \newcommand{\regretf}[3]{r^{#1}_{#2}(#3)} \newcommand{\regretw}[4]{r^{#1}_{#2}(#3|#4)} The benefit of applying deviation $f_{\playeri}$ when the players jointly play $\outcome$ is known as \defword{instantaneous regret}. We write the instantaneous regret features as \begin{align} \regretf{\Gamma}{f_{\playeri}}{\outcome} &= \utilityif{\Gamma}{\pideviationf{\outcome}} - \utilityif{\Gamma}{\outcome}, \end{align} and the instantaneous regret under utility function $w$ as \begin{align} \regretw{\Gamma}{f_{\playeri}}{\outcome}{w} &= \utilityiw{\Gamma}{\pideviationf{\outcome}}{w} - \utilityiw{\Gamma}{\outcome}{w} = \inner{\regretf{\Gamma}{f_{\playeri}}{\outcome}}{w}. \end{align} \newcommand{\deviation}{f} More generally, we can consider broader classes of deviations than the two we have mentioned. Conceptually, a deviation is a strategy modification and its regret is its benefit to a particular player. As we will ultimately only work with the regret features, we can now suppress the implementation details while bearing in mind that a deviation typically has these prescribed semantics. That is, a \defword{deviation} $\deviation\in\Phi$ has associated instantaneous regret features, $\regretf{\Gamma}{\deviation}{\cdot}$, and instantaneous regret, $\regretw{\Gamma}{\deviation}{\cdot}{w}$. \newcommand{\expectation}[2]{\mathbb{E}_{#1}\left[#2\right]} \newcommand{\distributed}{\sim} As a player is only privileged to its own portion of the coordinated outcome, it must reason about its \defword{expected regret}. 
We write the expected regret features as \begin{align} \regretf{\Gamma}{\deviation}{\sigma^{\Gamesym}} &= \expectation{\outcome\distributed\sigma^{\Gamesym}}{\regretf{\Gamma}{\deviation}{\outcome}}, \end{align} and the expected regret under utility function $w$ as \begin{align} \regretw{\Gamma}{\deviation}{\sigma^{\Gamesym}}{w} &= \expectation{\outcome\distributed\sigma^{\Gamesym}}{\regretw{\Gamma}{\deviation}{\outcome}{w}} = \inner{\regretf{\Gamma}{\deviation}{\sigma^{\Gamesym}}}{w}. \end{align} \newcommand{\Regret}[4]{\operatorname{Regret}^{#1}_{#2}(#3|#4)} \newcommand{\eqmeps}{\varepsilon} A joint strategy is in \defword{equilibrium} or, in a sense, stable if no player can benefit through a unilateral deviation. We can quantify this stability using expected regret with respect to the deviation set $\Phi$, \begin{align} \Regret{\Gamma}{\Phi}{\sigma^{\Gamesym}}{w} = \max_{\deviation\in\Phi}\regretw{\Gamma}{\deviation}{\sigma^{\Gamesym}}{w}, \end{align} and call a joint strategy $\sigma^{\Gamesym}$ an \defword{$\eqmeps$-equilibrium} if \begin{align} \Regret{\Gamma}{\Phi}{\sigma^{\Gamesym}}{w} \le \eqmeps. \end{align} The most general deviation set, $\devswap$, corresponds with the \defword{$\varepsilon$-correlated equilibrium} solution concept~\citep{osborne1994course,agt4}. Thus, regret can be thought of as the natural substitute for utility when assessing the optimality of behavior in multi-agent settings. The set $\devswap$ is typically intractably large. Fortunately, internal regret closely approximates swap regret and is polynomially-sized in both the number of actions and players. \begin{lemma} If joint strategy $\sigma^{\Gamesym}$ has $\eqmeps$ internal regret, then it is an $A\eqmeps$-correlated equilibrium under utility function $w$. That is, $\forall w\in\Vsym$, \begin{align} \Regret{\Gamma}{\Phi^{\mbox{int}}}{\sigma^{\Gamesym}}{w} & \le \Regret{\Gamma}{\devswap}{\sigma^{\Gamesym}}{w} \le A\cdot\Regret{\Gamma}{\Phi^{\mbox{int}}}{\sigma^{\Gamesym}}{w}. 
\end{align} \label{lemma:internalapproxswap} \end{lemma} The proof is provided in the Appendix. \section{Behavior Estimation in a Matrix Game} We are now equipped with the tools necessary to introduce our approach for imitation learning in multi-agent settings. We start by assuming a notion of rationality on the part of the game's players. By leveraging this assumption, we will then derive an estimation procedure with much better statistical properties than methods that are unaware of the game's structure. \subsection{Rationality and the ICE Polytope} \newcommand{\obsidxsym}{t} \newcommand{\strategy}{\sigma^{\Gamesym}} Let $\{\outcome^{\obsidxsym}\}_{t=1}^{T}$ be a sequence of $T$ independent observations of behavior in game $\Gamma$ distributed according to $\strategy$, the players' \defword{true behavior}. We call the empirical distribution of the observations, $\tilde{\sigma}^{\Gamesym}$, the \defword{demonstrated behavior}. \newcommand{\predstrategya}[1]{\hat{\sigma}^{\Gamma}(#1)} We aim to learn a distribution $\hat{\sigma}^{\Gamesym}$, called the \defword{predicted behavior}, an estimate of the true behavior from these demonstrations. Moreover, we would like our learning procedure to extract the motives for the behavior so that we may imitate the players in similarly structured, but unobserved, games. Initially, let us consider just the estimation problem. While deriving our method, we will assume we have access to the players' true behavior. Afterwards, we will analyze the error introduced by approximating it from the demonstrations. Imitation appears hard barring further assumptions. In particular, if the agents are unmotivated or their intentions are not coerced by the observed game, there is little hope of recovering principled behavior in a new game. 
Thus, we require a form of rationality. \begin{proposition} The players in a game are \defword{rational} with respect to deviation set $\Phi$ if they prefer joint strategy $\sigma^{\Gamesym}$ over joint strategy $\acute{\sigma}^{\Gamesym}$ when \begin{align} \Regret{\Gamma}{\Phi}{\sigma^{\Gamesym}}{w^*} & < \Regret{\Gamma}{\Phi}{\acute{\sigma}^{\Gamesym}}{w^*}. \end{align} \end{proposition} Our rationality assumption states that the players are driven to minimize their regret. It is not necessarily the case that they indeed have low or no regret, but simply that they can evaluate their preferences and that they prefer joint strategies with low regret. Through this assumption, we will be able to reason about the players' behavior solely through the game's features; this is what leads to the improved statistical properties of our approach. As the agents' true preferences $w^*$ are unknown, we consider an encompassing assumption that requires that estimated behavior satisfy this property for all possible utility weights. A prediction $\hat{\sigma}^{\Gamesym}$ is \defword{strongly rational} with respect to deviation set $\Phi$ if \begin{align} \forall w\in\Vsym,\;\;\Regret{\Gamma}{\Phi}{\hat{\sigma}^{\Gamesym}}{w}& \le \Regret{\Gamma}{\Phi}{\strategy}{w}. \end{align} This assumption is similar in spirit to the utility matching assumption employed by inverse optimal control techniques in single-agent settings. As in those settings, we have an {\em if and only if} guarantee relating rationality and strong rationality~\citep{abbeel2004,ziebart2008}. \begin{theorem} If a prediction $\hat{\sigma}^{\Gamesym}$ is strongly rational with respect to deviation set $\Phi$ and the players are rational with respect to $\Phi$, then they do not prefer $\strategy$ over $\hat{\sigma}^{\Gamesym}$. \end{theorem} This is immediate as $w^*\in\Vsym$. 
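For intuition, the expected-regret quantities behind these statements can be computed in a few lines of Python. The sketch below (toy numbers and our own identifiers; it is an illustration, not the article's code) evaluates player 0's internal regret $\max_{f} \langle r^{\Gamma}_{f}(\sigma^{\Gamma}), w\rangle$ over switch deviations for one fixed utility vector $w$:

```python
import numpy as np

# Toy 2-player, 2-action game: u0[a0, a1] is player 0's feature vector.
rng = np.random.default_rng(2)
n_actions = 2
u0 = rng.standard_normal((n_actions, n_actions, 2))
sigma = np.full((n_actions, n_actions), 0.25)  # uniform joint strategy

def expected_regret_features(x, y):
    """Expected regret features under sigma of player 0's switch deviation x -> y."""
    r = np.zeros(2)
    for a0 in range(n_actions):
        for a1 in range(n_actions):
            f_a0 = y if a0 == x else a0
            r += sigma[a0, a1] * (u0[f_a0, a1] - u0[a0, a1])
    return r

def internal_regret(w):
    """Max expected regret over all switch deviations (including the identity,
    which has zero regret, so the result is non-negative)."""
    return max(expected_regret_features(x, y) @ w
               for x in range(n_actions) for y in range(n_actions))

w = np.array([1.0, -0.5])
print("internal regret of sigma under w:", internal_regret(w))
```

Strong rationality asks that a prediction keep this quantity below that of the demonstrated behavior simultaneously for every $w$, which is what makes the convex reformulation in the next subsection valuable.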
Phrased another way, a strongly rational prediction is no worse than the true behavior. \begin{corollary} If a prediction $\hat{\sigma}^{\Gamesym}$ is strongly rational with respect to deviation set $\Phi$ and the true behavior is an $\eqmeps$-equilibrium with respect to $\Phi$ under utility function $w^*\in\Vsym$, then $\hat{\sigma}^{\Gamesym}$ is also an $\eqmeps$-equilibrium. \end{corollary} Again, the proof is immediate as $\Regret{\Gamma}{\Phi}{\hat{\sigma}^{\Gamesym}}{w^*} \le \Regret{\Gamma}{\Phi}{\strategy}{w^*} \le \eqmeps$. Conversely, if we are uncertain about the true utility function we {\bf must} assume strong rationality or we risk predicting less desirable behavior. \begin{theorem} If a prediction $\hat{\sigma}^{\Gamesym}$ is not strongly rational with respect to deviation set $\Phi$ and the players are rational, then there exists a $w^*\in\Vsym$ such that $\strategy$ is preferred to $\hat{\sigma}^{\Gamesym}$. \end{theorem} The proof follows from the negation of the definition of strong rationality. By restricting our attention to strongly rational behavior, at worst agents acting according to their unknown true preferences will be indifferent between our predictive distribution and their true behavior. That is, strong rationality is necessary and sufficient under the assumption that the players are rational, given no knowledge of their true utility function. Unfortunately, a direct translation of the strong rationality requirement into constraints on the distribution $\hat{\sigma}^{\Gamesym}$ leads to a non-convex optimization problem, as it involves products of varying utility vectors with the behavior to be estimated. Fortunately, we can provide an equivalent concise convex description of the constraints on $\hat{\sigma}^{\Gamesym}$ that ensures any feasible distribution satisfies strong rationality. 
We denote this set of equivalent constraints as the {\em Inverse Correlated Equilibria} (ICE) polytope. \begin{definition}[Standard ICE Polytope] \begin{align} \regretf{\Gamma}{\deviation}{\hat{\sigma}^{\Gamesym}} & = \expectation{g\distributed\eta_{\deviation}}{\regretf{\Gamma}{g}{\strategy}},&\forall \deviation\in\Phi \\ \eta_{\deviation}&\in\simplex{\Phi_{\deviation}}, &\forall \deviation\in\Phi\\ \hat{\sigma}^{\Gamesym}&\in\simplex{\mathcal A}. \end{align} \end{definition} Here, we have introduced $\Phi_{\deviation}$, the set of deviations that $\deviation$ will be compared against. Our rationality assumption corresponds to when $\Phi_{\deviation}=\Phi$, but there are different choices that have reasonable interpretations as alternative rationality assumptions. For example, if each switch deviation is compared only against switches for the same player---a more restrictive condition---then the quality of the equilibrium is measured by the {\em sum} of all players' regrets, as opposed to only the one with the most regret. The following corollary equates strong rationality and the standard ICE polytope. \begin{corollary} A prediction $\hat{\sigma}^{\Gamesym}$ is strongly rational with respect to deviation set $\Phi$ if and only if for all $\deviation\in\Phi$ there exists $\eta_{\deviation}\in\simplex{\Phi}$ such that $\hat{\sigma}^{\Gamesym}$ and $\eta$ satisfy the standard ICE polytope. \label{c:standardice} \end{corollary} We now show a more general result that implies Corollary~\ref{c:standardice}. We start by generalizing the notion of strong rationality by restricting $w^*$ to be in a known set $\Ksym\subseteq\Vsym$. We say a prediction $\hat{\sigma}^{\Gamesym}$ is $\Ksym$-strongly rational with respect to deviation set $\Phi$ if \begin{align} \forall w\in\Ksym,\;\;\Regret{\Gamma}{\Phi}{\hat{\sigma}^{\Gamesym}}{w}& \le \Regret{\Gamma}{\Phi}{\strategy}{w}. 
\end{align} If $\Ksym$ is convex with non-empty relative interior and $0\in\Ksym$, we derive the $\Ksym$-ICE polytope. \begin{definition}[$\Ksym$-ICE Polytope] \begin{align} \regretf{\Gamma}{\deviation}{\hat{\sigma}^{\Gamesym}} - \expectation{g\distributed\eta_{\deviation}}{\regretf{\Gamma}{g}{\strategy}}& \in -\mathcal{K}^*, &\forall \deviation\in\Phi \\ \eta_{\deviation}&\in\simplex{\Phi_{\deviation}}, &\forall \deviation\in\Phi\\ \hat{\sigma}^{\Gamesym}&\in\simplex{\mathcal A}. \end{align} \end{definition} Note that the above constraints are linear in $\hat{\sigma}^{\Gamesym}$ and $\eta_{\deviation}$, and $\mathcal{K}^*$, the dual cone, is convex. The following theorem shows the equivalence of the $\Ksym$-ICE polytope and $\Ksym$-strong rationality. \begin{theorem} A prediction $\hat{\sigma}^{\Gamesym}$ is $\Ksym$-strongly rational with respect to deviation set $\Phi$ if and only if for all $\deviation\in\Phi$ there exists $\eta_{\deviation}\in\simplex{\Phi}$ such that $\hat{\sigma}^{\Gamesym}$ and $\eta$ satisfy the $\Ksym$-ICE polytope. \label{thm:kice} \end{theorem} The proof is provided in the Appendix. \newcommand{\Rsym^{K}_{+}}{\mathbb R^{K}_{+}} By choosing $\Ksym=\Vsym$, then $\mathcal{K}^*=\{0\}$ and the polytope reduces to the standard ICE polytope. Thus, Corollary~\ref{c:standardice} follows directly from Theorem~\ref{thm:kice}. By choosing $\Ksym$ to be the positive orthant, $\Ksym=\mathcal{K}^*=\Rsym^{K}_{+}$, the polytope reduces to the following inequalities. Here, we explicitly assume the utility to be a positive linear function of the features. \begin{definition}[Positive ICE Polytope] \begin{align} \regretf{\Gamma}{\deviation}{\hat{\sigma}^{\Gamesym}} & \le \expectation{g\distributed\eta_{\deviation}}{\regretf{\Gamma}{g}{\strategy}}, &\forall \deviation\in\Phi \\ \eta_{\deviation}&\in\simplex{\Phi_{\deviation}}, &\forall \deviation\in\Phi\\ \hat{\sigma}^{\Gamesym}&\in\simplex{\mathcal A}. 
\end{align} \end{definition} Predictive behavior within the ICE polytope will retain the quality of the demonstrations provided. The following corollaries formalize this guarantee. \begin{corollary} If the true behavior is an $\eqmeps$-correlated equilibrium under $w^*$ in game $\Gamma$, then a prediction $\hat{\sigma}^{\Gamesym}$ that satisfies the standard ICE polytope where $\Phi = \devswap$ and $\forall \deviation\in\Phi, \Phi_{\deviation}=\Phi$ is also an $\eqmeps$-correlated equilibrium. \end{corollary} This follows immediately from the definition of an approximate correlated equilibrium. \begin{corollary} If the true behavior is an $\eqmeps$-correlated equilibrium under $w^*$ in game $\Gamma$, then a prediction $\hat{\sigma}^{\Gamesym}$ that satisfies the standard ICE polytope where $\Phi = \Phi^{\mbox{int}}$ and $\forall \deviation\in\Phi, \Phi_{\deviation}=\Phi$ is also an $A\eqmeps$-correlated equilibrium. \end{corollary} This follows immediately from Lemma~\ref{lemma:internalapproxswap}. \newcommand{\strategyi}{\sigma^{\Gamma}_{i}} In two-player constant-sum games, we can make stronger statements about our predictive behavior. In particular, when these requirements are satisfied, we may reason about games without coordination. That is, each player chooses their action independently using their \defword{strategy}, $\strategyi$, a distribution over $A_{\playeri}$. A \defword{strategy profile} $\sigma^{\Gamesym}$ consists of a strategy for each player. It defines a joint-strategy with no coordination between the players. A game is constant-sum if there is a fixed amount of utility divided among the players. That is, if there is a constant $C$ such that $\forall\outcome\in\mathcal A$, \begin{align} \sum_{i\in\mathcal N}\utilityiw{\Gamma}{\outcome}{w^*} & = C.
\end{align} In settings where the players act independently, we use external regret to measure a profile's stability, which corresponds to the famous \defword{Nash equilibrium} solution concept~\citep{osborne1994course}. By using the ICE polytope with external regret, we can recover a Nash equilibrium if one is demonstrated in a constant-sum game. \begin{theorem} If the true behavior is an $\eqmeps$-Nash equilibrium in a two-player constant-sum game $\Gamma$, then the marginal strategies formed from a prediction $\hat{\sigma}^{\Gamesym}$ that satisfies the standard ICE polytope where $\Phi = \devexternal$ and $\forall \deviation\in\Phi, \Phi_{\deviation}=\Phi$ form a $2\eqmeps$-Nash equilibrium. \label{thm:nash} \end{theorem} The proof is provided in the Appendix. In general, there can be infinitely many correlated equilibria with vastly different properties. One such property that has received much attention is the social welfare of a joint strategy, which refers to the total utility over all players. Our strong rationality assumption states that the players have no preference regarding which correlated equilibrium is selected, and thus without modification cannot capture such a concept should it be demonstrated. We can easily maintain the social welfare of the demonstrations by additionally preserving the players' utilities alongside the constraints prescribed by the ICE polytope. A joint strategy is utility-preserving under all utility functions if \begin{align} \forall w\in\Vsym,i\in\mathcal N,\;\;\utilityiw{\Gamma}{\hat{\sigma}^{\Gamesym}}{w} & = \utilityiw{\Gamma}{\strategy}{w}. \end{align} As with the correspondence between strong rationality and the ICE polytope, utility preservation can be represented as a set of linear equality constraints. These utility feature matching constraints are exactly the basis of many methods of inverse optimal control~\citep{abbeel2004,ziebart2008}.
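The utility feature matching constraints can be illustrated with a toy computation; all outcome names, feature values, and distributions below are hypothetical. Two different joint strategies with identical expected utility-feature vectors yield identical utility under every linear utility function:

```python
def utility_features(outcome_features, strategy):
    """Expected utility-feature vector E_{a ~ sigma}[F(a)] for one player."""
    K = len(next(iter(outcome_features.values())))
    return [sum(p * outcome_features[a][k] for a, p in strategy.items())
            for k in range(K)]

# Toy game with three joint outcomes and K = 2 utility features.
feats = {"a1": [1.0, 0.0], "a2": [0.0, 1.0], "a3": [0.5, 0.5]}
demo = {"a1": 0.5, "a2": 0.5, "a3": 0.0}  # demonstrated distribution
pred = {"a1": 0.0, "a2": 0.0, "a3": 1.0}  # a different distribution...

# ...with identical feature expectations, hence identical utility w . E[F]
# for every linear utility function w -- utility preservation.
matched = utility_features(feats, demo) == utility_features(feats, pred)
```

Both distributions have feature expectations $(0.5, 0.5)$, so every linear utility is preserved even though the distributions themselves differ.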
\begin{theorem} A joint strategy is utility-preserving under all utility functions if and only if \begin{align} \forall i\in\mathcal N,\;\;\utilityif{\Gamma}{\hat{\sigma}^{\Gamesym}} & = \utilityif{\Gamma}{\strategy}. \end{align} \end{theorem} The proof is due to~\citet{abbeel2004}. A notable choice for $\Phi_{\deviation}$ is to compare each deviation only against itself. As a consequence, this enforces the stronger constraint that the regret under each deviation, and in turn the overall regret, is the same under our prediction and the demonstrations. That is, $\hat{\sigma}^{\Gamesym}$ is \defword{regret-matching} as for all $w\in\Vsym$, \begin{align} \Regret{\Gamma}{\Phi}{\hat{\sigma}^{\Gamesym}}{w} & = \Regret{\Gamma}{\Phi}{\strategy}{w}. \end{align} Thus, regret-matching preserves the equilibrium qualities of the demonstrations. Unlike the correspondence between the ICE polytope and strong rationality, matching the regret features for each deviation is \textbf{not} required for a strategy to match the regrets of the demonstrations. That is, the converse does not hold.\footnote{We may sketch a simple counterexample. Consider a game with one player and three actions, $x$, $y$ and $y'$, where the utility for playing $x$ is zero, and the utility for playing either $y$ or $y'$ is one. If the true behavior always plays $y$, then matching the regret features will force the prediction to also play $y$. Predicting $y'$ also matches the regret, though.} \begin{theorem} A prediction $\hat{\sigma}^{\Gamesym}$ that matches the regret of $\strategy$ for all $w\in\Vsym$ does not necessarily match the regret features of $\strategy$. \end{theorem} We use both utility and regret matching in our final set of experiments: the former for predictive reasons, the latter to allow for the use of smooth minimization techniques.
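The footnote's game can be checked numerically. The per-action indicator features and true weights $w^* = (0, 1, 1)$ below are illustrative assumptions (the footnote does not fix a feature representation), and we evaluate the regrets only at $w^*$ rather than over all of $\Vsym$:

```python
# Footnote's game: one player, actions x, y, y'; utility 0 for x and
# utility 1 for y or y'. Indicator features and w* are assumed here.
ACTIONS = ["x", "y", "yp"]
F = {"x": [1, 0, 0], "y": [0, 1, 0], "yp": [0, 0, 1]}
w_star = [0.0, 1.0, 1.0]

def util(a, w):
    return sum(wk * fk for wk, fk in zip(w, F[a]))

def external_regret(strategy, w):
    """Gain of the best fixed-action deviation over the strategy's utility."""
    exp_u = sum(p * util(a, w) for a, p in strategy.items())
    return max(util(b, w) for b in ACTIONS) - exp_u

always_y = {"y": 1.0}    # the demonstrated behavior
always_yp = {"yp": 1.0}  # an alternative prediction

# Equal (zero) regret under w*: both are exact equilibria, yet their
# feature expectations differ -- matching regret does not imply matching
# the regret features.
same_regret = (external_regret(always_y, w_star)
               == external_regret(always_yp, w_star) == 0.0)
```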
\subsection{The Principle of Maximum Entropy} As we are interested in the problem of statistical prediction of strategic behavior, we must find a mechanism to resolve the ambiguity remaining after accounting for the rationality constraints. The \defword{principle of maximum entropy}, due to~\citet{jaynes1957}, provides a well-justified method for choosing such a distribution. This choice leads not only to statistical guarantees on the resulting predictions, but also to efficient optimization. \newcommand{\entropy}[2]{H^{#1}(#2)} \newcommand{\strategy(\outcome)}{\sigma^{\Gamesym}(\outcome)} The \defword{Shannon entropy} of a joint-strategy $\sigma^{\Gamesym}$ is \begin{align} \entropy{\Gamma}{\sigma^{\Gamesym}} & = \expectation{\outcome\distributed\sigma^{\Gamesym}}{-\log \strategy(\outcome)}, \end{align} and the principle of maximum entropy advocates choosing the distribution with maximum entropy subject to known constraints~\citep{jaynes1957}. That is, \begin{align} \sigma_{\text{MaxEnt}} & = \operatornamewithlimits{argmax}_{\sigma^{\Gamesym}\in\simplex{\mathcal A}} \entropy{\Gamma}{\sigma^{\Gamesym}}, \quad\mbox{subject to:} \\ & \; g(\sigma^{\Gamesym}) = 0 \text{ and } h(\sigma^{\Gamesym}) \leq 0. \end{align} The constraint functions, $g$ and $h$, are typically chosen to capture the important or most salient characteristics of the distribution. When those functions are affine and convex respectively, finding this distribution is a convex optimization problem. The resulting log-linear family of distributions ({\em e.g.}, logistic regression, Markov random fields, conditional random fields) are widely used within statistical machine learning. In the context of multi-agent behavior, the principle of maximum entropy has been employed to obtain correlated equilibria with predictive guarantees in normal-form games when the utilities are known {\em a priori}~\citep{ortiz2007}.
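As a minimal sketch of the principle (not the full ICE program), the following solves a one-constraint maximum entropy problem over three outcomes; the feature values and target are arbitrary. The solution has the exponential-family form $\sigma(a)\propto\exp(\theta f(a))$, and the single dual variable $\theta$ can be found by bisection since the constrained mean is monotone in $\theta$:

```python
import math

# Max-entropy distribution over three outcomes whose feature mean matches
# a target c (an affine constraint g(sigma) = E[f] - c = 0).
f = [0.0, 1.0, 2.0]  # one toy feature value per outcome
c = 1.5              # target expectation E[f] = c

def mean_under(theta):
    z = [math.exp(theta * fa) for fa in f]
    return sum(fa * za for fa, za in zip(f, z)) / sum(z)

# Bisection on the dual variable theta (mean_under is monotone in theta).
lo, hi = -50.0, 50.0
for _ in range(200):
    mid = (lo + hi) / 2.0
    if mean_under(mid) < c:
        lo = mid
    else:
        hi = mid
theta = (lo + hi) / 2.0

weights = [math.exp(theta * fa) for fa in f]
Z = sum(weights)
sigma = [wa / Z for wa in weights]  # maxent distribution meeting E[f] = c
```

Any other distribution satisfying the constraint has strictly lower entropy than `sigma`.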
We will now leverage its power with our rationality assumption to select predictive distributions in games where the utilities are unknown, but the important features that define them are available. \newcommand{\operatornamewithlimits{maximize}}{\operatornamewithlimits{maximize}} For our problem, the constraints are precisely that the distribution is in the ICE polytope, ensuring that whatever we predict has no more regret than the demonstrated behavior. \begin{definition} The primal maximum entropy ICE optimization problem is \begin{align} \operatornamewithlimits{maximize}_{\hat{\sigma}^{\Gamesym},\eta} &\;\; \entropy{\Gamma}{\hat{\sigma}^{\Gamesym}} \quad\mbox{subject to:}\\ \regretf{\Gamma}{\deviation}{\hat{\sigma}^{\Gamesym}} - \expectation{g\distributed\eta_{\deviation}}{\regretf{\Gamma}{g}{\strategy}} & \in -\mathcal{K}^*, &\forall \deviation\in\Phi \\ \eta_{\deviation}&\in\simplex{\Phi_{\deviation}}, &\forall \deviation\in\Phi\\ \hat{\sigma}^{\Gamesym}&\in\simplex{\mathcal A}. \end{align} \end{definition} This program is convex, feasible, and bounded. That is, it has a solution and is efficiently solvable using simple techniques in this form. Importantly, the maximum entropy prediction enjoys the following guarantee: \begin{lemma} \label{lem:maxent} The maximum entropy ICE distribution minimizes over all strongly rational distributions the worst-case log-loss, $\expectation{\outcome\distributed\sigma^{\Gamesym}}{-\log_2 \predstrategya{\outcome}}$, when $\strategy$ is chosen adversarially but subject to strong rationality. \end{lemma} The proof of Lemma~\ref{lem:maxent} follows immediately from the result of~\citet{grunwald2003}. \begin{comment} A notable consequence of our approach is that with the appropriate choice of deviations it encompasses MaxEnt Inverse Optimal Control. 
\begin{corollary} When MaxEnt ICE, where $\Phi=\devexternal$ and $\forall \deviation\in\Phi, \deviation\in\Phi_{\deviation}$, is applied in a single-agent decision process, it is equivalent to MaxEnt Inverse Optimal Control. \end{corollary} \begin{proof} We will show the standard ICE polytope with $\Phi=\devexternal$ and $\forall \deviation\in\Phi, \deviation\in\Phi_{\deviation}$ in a single-agent game reduces exactly to utility matching. \begin{align} \end{align} \end{proof} \end{comment} \subsection{Dual Optimization} In this section, we will derive and describe a procedure for optimizing the dual program for solving the MaxEnt ICE optimization problem. We will see that the dual multipliers can be interpreted as utility vectors and that optimization in the dual has computational advantages. We begin by presenting the dual program. \newcommand{\operatornamewithlimits{minimize}}{\operatornamewithlimits{minimize}} \newcommand{\optdualweights}{\theta^*} \newcommand{\theta_{\deviation}}{\theta_{\deviation}} \newcommand{\theta^*_{\deviation}}{\theta^*_{\deviation}} \newcommand{\Zfunc}[1]{Z^{#1}(\theta)} \newcommand{\logZfunc}[1]{\operatorname{logZ}^{#1}(\theta)} \newcommand{\Ksym^{**}}{\Ksym^{**}} \begin{theorem} The dual maximum entropy ICE optimization problem is the following non-smooth, but convex program: \begin{align} \operatornamewithlimits{minimize}_{\theta_{\deviation}\in\Ksym^{**}} & \;\; \sum_{\deviation\in\Phi}\Regret{\Gamma}{\Phi_{\deviation}}{\strategy}{\theta_{\deviation}} + \log\Zfunc{\Gamma}, \mbox{~where} \\ \Zfunc{\Gamma} & = \sum_{\outcome\in\mathcal A}\exp\left(-\sum_{\deviation\in\Phi}\regretw{\Gamma}{\deviation}{\outcome}{\theta_{\deviation}} \right). \end{align} \label{thm:dual} \end{theorem} We derive the dual in the Appendix.
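To make the dual concrete, the toy computation below evaluates the objective for a single game with $K = 1$ (so each dual weight is a scalar) and the regret-matching choice $\Phi_{\deviation}=\{\deviation\}$, under which the first term reduces to the product of the dual weight and the demonstrated regret feature. All numbers are invented:

```python
import math

# Toy evaluation of the dual objective. regret_w[phi][a] stands in for the
# instantaneous regret of outcome a under deviation phi; demo_regret[phi]
# for the demonstrated regret feature of phi (K = 1, so all scalars).
outcomes = ["a1", "a2", "a3"]
regret_w = {"phi1": {"a1": 0.5, "a2": 0.0, "a3": -0.2},
            "phi2": {"a1": 0.1, "a2": 0.3, "a3": 0.0}}
theta = {"phi1": 1.0, "phi2": 0.5}
demo_regret = {"phi1": 0.2, "phi2": 0.1}

def log_Z(theta):
    """Log partition function of the dual (the second objective term)."""
    return math.log(sum(
        math.exp(-sum(theta[phi] * regret_w[phi][a] for phi in theta))
        for a in outcomes))

dual_obj = sum(theta[phi] * demo_regret[phi] for phi in theta) + log_Z(theta)

def sigma_hat(a):
    """Primal prediction recovered from the dual weights (exp. family)."""
    return math.exp(-sum(theta[phi] * regret_w[phi][a] for phi in theta)
                    - log_Z(theta))
```

Note that `sigma_hat` sums to one by construction, mirroring the recovery of the primal solution from optimal dual weights.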
As the dual's feasible set has non-empty relative interior, strong duality holds by Slater's condition---there is no duality gap. We can also use a dual solution to recover $\hat{\sigma}^{\Gamesym}$. \begin{lemma} Strong duality holds for the maximum entropy ICE optimization problem and given optimal dual weights $\optdualweights$, the maximum entropy ICE joint-strategy $\hat{\sigma}^{\Gamesym}$ is \begin{align} \predstrategya{\outcome} & \propto \exp\left(-\sum_{\deviation\in\Phi}\regretw{\Gamma}{\deviation}{\outcome}{\theta^*_{\deviation}} \right). \label{eqn:dualtoprimal} \end{align} \end{lemma} \newcommand{g}{g} \newcommand{g^f}{g^f} \newcommand{f^*}{f^*} \newcommand{f'}{f'} \begin{algorithm}[tb] \caption{Dual MaxEnt ICE Gradient} \label{alg:dualgradient} \begin{algorithmic} \STATE {\bfseries Input:} Let $\hat{\sigma}^{\Gamesym}$ be the prediction given the current dual weights, $\theta$, as from Equation~\eqref{eqn:dualtoprimal}. \FOR{$\deviation \in \Phi$} \STATE $f^* \leftarrow \operatornamewithlimits{argmax}_{f'\in\Phi_{\deviation}}\regretw{\Gamma}{f'}{\strategy}{\theta_{\deviation}}$ \STATE $g^f \leftarrow \regretf{\Gamma}{f^*}{\strategy} - \regretf{\Gamma}{\deviation}{\hat{\sigma}^{\Gamesym}}$ \ENDFOR \STATE {\bfseries return} $g$ \end{algorithmic} \label{alg:dual} \end{algorithm} The dual formulation of our program has important inherent computational advantages. First, so long as $\Ksym$ is simple, the problem is particularly well-suited for gradient-based optimization, a trait not shared by the primal program. Second, the number of dual variables, $\abs{\Phi}\dim{\Vsym}$, is typically much smaller than the number of primal variables, $\abs{\mathcal A}+\abs{\Phi}^2$. Though the work per iteration is still a function of $\abs{\mathcal A}$ (to compute the partition function), these two advantages together let us scale to larger problems than if we consider optimizing the primal objective.
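The gradient computation of Algorithm~\ref{alg:dual} can be sketched in code for the scalar case $K = 1$; the predicted and demonstrated regret features and the comparison sets below are toy values:

```python
# Sketch of the dual gradient for K = 1, so dual weights and regret
# features are scalars. pred_rf[phi] stands in for the regret feature of
# the current prediction sigma_hat; demo_rf[g] for the demonstrated
# regret feature of deviation g; Phi_cmp[phi] for the comparison set.
def dual_gradient(theta, pred_rf, demo_rf, Phi_cmp):
    grad = {}
    for phi in theta:
        # f* <- argmax over f' in Phi_phi of the theta-weighted demo regret
        f_star = max(Phi_cmp[phi], key=lambda fp: theta[phi] * demo_rf[fp])
        # gradient component: demonstrated regret feature at f* minus the
        # predicted regret feature under phi
        grad[phi] = demo_rf[f_star] - pred_rf[phi]
    return grad

theta = {"p1": 1.0, "p2": 0.5}
demo_rf = {"p1": 0.3, "p2": 0.1}
pred_rf = {"p1": 0.2, "p2": 0.4}
Phi_cmp = {"p1": ["p1", "p2"], "p2": ["p1", "p2"]}
g = dual_gradient(theta, pred_rf, demo_rf, Phi_cmp)  # one subgradient
```

Each call yields one subgradient, which can be fed to any non-smooth gradient method such as projected subgradient descent.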
Computing the expectations necessary to descend the dual gradient can leverage recent advances in structured, compact game representations: in particular, any low-treewidth graphical game or finite-horizon Markov game~\citep{kakade2003correlated} enables these computations to be performed in time that scales only polynomially in the number of decision makers. Algorithm~\ref{alg:dual} describes the dual gradient computation. This can be incorporated with any non-smooth gradient method, such as the projected subgradient method~\citep{shor1985}, to approach the optimum dual weights. \section{Behavior Estimation in Parameterized Matrix Games} \newcommand{\mathcal G}{\mathcal G} To account for stochastic or varying environments, we now consider {\em distributions over} games. For example, rain may affect travel time along some routes and make certain modes of transportation less desirable, or even unavailable. Operationally, nature samples a game prior to play from a distribution known to the players. The players then, as a group, determine a joint strategy conditioned on the particular game, and an outcome is drawn by a coordination device. We let $\mathcal G$ denote our class of games. \newcommand{\Gamesym^{\obsidxsym}}{\Gamma^{t}} \newcommand{\xi}{\xi} \newcommand{\sigma}{\sigma} \newcommand{\tilde{\xi}}{\tilde{\xi}} \newcommand{\tilde{\sigma}}{\tilde{\sigma}} As before, we observe a sequence of $T$ independent observations of play, but now in addition to an outcome we also observe nature's choice at each time $t$. Let $\{(\Gamesym^{\obsidxsym},\outcome^{\obsidxsym})\}_{t=1}^T$ be the aforementioned sequence of observations drawn from $\xi$ and $\sigma$, the \defword{true behavior}. The empirical distribution of the observations, $\tilde{\xi}$ and $\tilde{\sigma}$, together are the \defword{demonstrated behavior}.
\newcommand{\predstrategies}{\hat{\sigma}} Now we aim to learn a \defword{predictive behavior} distribution, $\hat{\sigma}^{\Gamesym}$, for {\em any} $\Gamma\in\mathcal G$, even ones we have not yet observed. Clearly, we must leverage the observations across the entire family to achieve good predictive accuracy. We continue to assume that the players' utility is an unknown linear function, $w^*$, of the games' features and that this function is fixed across $\mathcal G$. Next, we amend our notion of regret and our rationality assumption. \begin{comment} \todo{This is too negative. Needs work. Relation to no-regret in a sequence of games would be a nice footnote. I'm not even sure this paragraph is necessary. We allow deviations to depend on the game, if desired, and leave it at that.} Though we allow the joint strategies to be conditioned on the game being played, we do not explicitly allow the players to condition their deviation on nature's choice. Without such a restriction or additional assumptions about the players, we cannot hope for good generalization from a reasonable number of observations. This design decision has the disadvantage of requiring the actions, and in turn the deviations, in each game to have similar semantic meaning, which limits the particular game distributions over which we learn principled behavior. Given this choice, we can simply employ an additional expectation over the game distribution when reasoning about the regret and regret features. More concretely, we write the expected regret features under deviation $\deviation$ as \end{comment} \subsection{Behavior Estimation through Conditional ICE} Ultimately, we wish to simply employ an additional expectation over the game distribution when reasoning about the regret and regret features. To do this, our notion of a deviation needs to account for the fact that it may be executed in games with different structures. 
Operationally, one way to achieve this is by having a deviation not act when it is applied to such a game, which increases the size of $\Phi$ by a factor of $\abs{\mathcal G}$. If the actions, and in turn the deviations, have similar semantic meanings across our entire family of games, one can simply share the deviations across all games. This allows one to achieve transfer over an infinitely large class of games. Given such a decision, we write the expected regret features under deviation $\deviation$ as \begin{align} \regretf{\xi}{\deviation}{\sigma} &= \expectation{\Gamma\distributed\xi}{\regretf{\Gamma}{\deviation}{\strategy}}, \end{align} and the expected regret under utility function $w$ as \begin{align} \regretw{\xi}{\deviation}{\sigma}{w} &= \expectation{\Gamma\distributed\xi}{\regretw{\Gamma}{\deviation}{\strategy}{w}}. \end{align} Again, we quantify the stability of a set of joint strategies using this new notion of expected regret with respect to the deviation set $\Phi$, \begin{align} \Regret{\xi}{\Phi}{\sigma}{w} &= \max_{\deviation\in\Phi}\regretw{\xi}{\deviation}{\sigma}{w}, \end{align} which, in turn, entails a notion of an $\eqmeps$-equilibrium for a set of joint strategies, a modified rationality assumption, and a slight modification to the $\Ksym$-ICE polytope, \begin{definition}[Conditional $\Ksym$-ICE Polytope] \begin{align} \regretf{\xi}{\deviation}{\predstrategies} - \expectation{g\distributed\eta_{\deviation}}{\regretf{\xi}{g}{\sigma}} & \in -\mathcal{K}^*, &\forall \deviation\in\Phi \\ \eta_{\deviation}&\in\simplex{\Phi_{\deviation}}, &\forall \deviation\in\Phi\\ \hat{\sigma}^{\Gamesym}&\in\simplex{\mathcal A}, &\forall \Gamma\in\mathcal G \end{align} \end{definition} All that remains is to adjust our notion of entropy to take into account a distribution over games. In particular, we choose to maximize the expected entropy of our prediction, which is conditioned on the game sampled by chance.
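The expected regret features above amount to a simple game-weighted average. A toy computation, with all game names, probabilities, and feature values invented and $K = 1$:

```python
# Game-averaged regret features: an expectation over the game
# distribution xi of each game's regret features, with the deviations
# shared across games.
xi = {"G_rain": 0.3, "G_clear": 0.7}          # distribution over games
rf = {"G_rain": {"phi1": 0.4, "phi2": 0.0},   # per-game regret features
      "G_clear": {"phi1": 0.1, "phi2": 0.2}}

def expected_regret_feature(phi):
    return sum(p * rf[G][phi] for G, p in xi.items())

# phi1: 0.3*0.4 + 0.7*0.1 = 0.19;  phi2: 0.3*0.0 + 0.7*0.2 = 0.14
```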
\begin{definition} The conditional Shannon entropy of a set of strategies $\sigma$ when games are distributed according to $\xi$ is \begin{align} \entropy{\xi}{\sigma} & = \expectation{\Gamma\distributed\xi}{\entropy{\Gamma}{\strategy}}. \end{align} \end{definition} The modified dual optimization problem has a familiar form. We now use the new notion of regret and take the expected value of the log partition function. \begin{theorem} The dual conditional maximum entropy ICE optimization problem is \begin{align} \operatornamewithlimits{minimize}_{\theta_{\deviation}\in\Ksym^{**}} & \;\; \sum_{\deviation\in\Phi}\Regret{\xi}{\Phi_{\deviation}}{\sigma}{\theta_{\deviation}} + \expectation{\Gamma\distributed\xi}{\log\Zfunc{\Gamma}}. \end{align} \end{theorem} To recover the predicted behavior for a particular game, we use the same exponential family form as before. As with any machine learning technique, it is advisable to employ some form of complexity control on the resulting predictor to prevent over-fitting. As we now wish to generalize to unobserved games, we too should take the appropriate precautions. In our experiments, we add $L1$ and $L2$ regularization terms to the dual objective for this purpose. Regularization of the dual weights effectively alters the primal constraints by allowing them to hold approximately, leading to higher entropy solutions~\citep{dudik07}. \subsection{Behavior Transfer without Common Deviations} A principal justification of inverse optimal control techniques that attempt to identify behavior in terms of utility functions is the ability to consider what behavior might result if the underlying decision problem were changed while the interpretation of features into utilities remains the same~\citep{ng2000algorithms,ratliff2006}.
This enables prediction of agent behavior in a no-regret or agnostic sense in problems such as a robot encountering novel terrain~\citep{Silver_2010_6638} as well as route recommendation for drivers traveling to unseen destinations~\citep{Ziebart2008b}. Econometricians are interested in similar situations, but for quite different reasons. Typically, they aim to validate a model of market behavior from observations of product sales. In these models, the firms assume a fixed pricing policy given known demand. The econometrician uses this fixed policy along with product features and sales data to estimate or bound both the consumers' utility functions and unknown production parameters, like markup and production cost~\citep{berry95,nevo01}. In this line of work, the observed behavior is considered accurate to start with; it is unclear how suitable these methods are for settings with limited or noisy observations. \newcommand{\Gamepsym}{\acute{\Gamma}} In our prior work, we introduced an approach to behavior transfer applicable between games with different action sets~\citep{waugh11}. It is based on the assumption of \defword{transfer rationality}: for two games $\Gamma$ and $\Gamepsym$ and some constant $\kappa > 0$, \begin{align} \forall w\in\Vsym,\: \Regret{\Gamepsym}{\Phi}{\hat{\sigma}^{\Gamesym}}{w} \le \kappa\Regret{\Gamma}{\Phi}{\strategy}{w}. \end{align} Roughly, we assume that under preferences with low regret in the original game, the behavior in the unobserved game should also have low regret. By enforcing this property, if the agents are performing well with respect to their true preferences, then the transferred behavior will also be of high quality. Assuming transfer rationality is equivalent to using the conditional ICE estimation program with differing game distributions for the predicted and demonstrated regret features. In such a case, the program is not necessarily feasible and the constraints must be relaxed.
For example, a slack variable may be added to the primal, or regularization applied in the dual. We note that this requires the estimation program to be run at test time. \section{Sample Complexity} In practice, we do not have full access to the agents' true behavior---if we did, prediction would be straightforward and we would not require our estimation technique. Instead, we may only approximate the desired expectations by averaging over a finite number of observations, \begin{align} \regretw{\tilde{\xi}}{\deviation}{\tilde{\sigma}}{w} & \approx \frac{1}{T}\sum_{t=1}^T \regretw{\Gamesym^{\obsidxsym}}{\deviation}{\outcome^{\obsidxsym}}{w}. \end{align} In real applications there are costs associated with gathering these observations and, thus, there are inherent limitations on the quality of this approximation. Next, we will analyze the sensitivity of our approach to these types of errors. \newcommand{\maxregret}{\Delta} \newcommand{\maxregretf}[1]{\Delta(#1)} First, although $\abs{\mathcal A}$ is exponential in the number of players, our technique only accesses $\tilde{\sigma}$ through expected regret features of the form $\regretf{\tilde{\xi}}{\deviation}{\tilde{\sigma}}$. That is, we need only approximate these features accurately, not the distribution $\sigma$. For finite-dimensional vector spaces, we can bound how well the regrets match in terms of $\abs{\Phi}$ and the dimension of the space. \begin{theorem} With probability at least $1-\delta$, for any $w$, by observing $T \ge \frac{1}{2\epsilon^2}\log\frac{2\abs{\Phi}\dim{\Vsym}}{\delta}$ outcomes we have for all deviations $\regretw{\tilde{\xi}}{\deviation}{\tilde{\sigma}}{w} \le \regretw{\xi}{\deviation}{\sigma}{w} + \epsilon\maxregret\norm{w}_1$. \label{thm:samplefinite} \end{theorem} Here, $\maxregret$ is the maximum possible regret over all basis directions. The proof is an application of the union bound and Hoeffding's inequality and is provided in the Appendix.
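Plugging illustrative numbers into the bound of Theorem~\ref{thm:samplefinite} gives a feel for the required sample sizes (the parameter choices below are arbitrary):

```python
import math

# Finite-dimensional sample bound: the number of observations T needed
# so that, with probability 1 - delta, every empirical regret is within
# eps * Delta * ||w||_1 of its expectation.
def required_samples(eps, delta, num_deviations, dim_V):
    return math.ceil(1.0 / (2 * eps ** 2)
                     * math.log(2 * num_deviations * dim_V / delta))

# e.g. |Phi| = 10 deviations, dim(V) = 5 features, eps = 0.05, delta = 0.01
T = required_samples(0.05, 0.01, 10, 5)  # -> 1843 observations
```

The logarithmic dependence on $\abs{\Phi}$ and $\dim{\Vsym}$ keeps the requirement modest even for large deviation sets.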
Alternatively, we can bound how well the regrets match independently of the space's dimension by considering each utility function separately. \begin{theorem} With probability at least $1-\delta$, for any $w$, by observing $T \ge \frac{1}{2\epsilon^2}\log\frac{\abs{\Phi}}{\delta}$ outcomes we have for all deviations $\regretw{\tilde{\xi}}{\deviation}{\tilde{\sigma}}{w} \le \regretw{\xi}{\deviation}{\sigma}{w} + \epsilon\maxregretf{w}$. \label{thm:sampleinf} \end{theorem} Here, $\maxregretf{w}$ is the maximum possible regret under $w$. Again, the proof is in the Appendix. Both of the above bounds imply that, so long as the true utility function is not too complex, with high probability we need only logarithmically many samples in terms of $\abs{\Phi}$ and $\dim{\Vsym}$ to closely approximate $\regretf{\xi}{\deviation}{\sigma}$ and avoid a large violation of our rationality condition. \begin{theorem} If for all $\deviation$, $\regretw{\tilde{\xi}}{\deviation}{\tilde{\sigma}}{w} \le \regretw{\xi}{\deviation}{\sigma}{w} + \gamma$, then $\Regret{\tilde{\xi}}{\Phi}{\tilde{\sigma}}{w} \le \Regret{\xi}{\Phi}{\sigma}{w} + \gamma$. \end{theorem} \begin{proof} For all deviations, $\deviation\in\Phi$, $\regretw{\tilde{\xi}}{\deviation}{\tilde{\sigma}}{w} \le \regretw{\xi}{\deviation}{\sigma}{w} + \gamma \le \Regret{\xi}{\Phi}{\sigma}{w} + \gamma$. In particular, this holds for the deviation that maximizes the demonstrated regret. \end{proof} \section{Experimental Results} \subsection{Synthetic Routing Game} To evaluate our approach experimentally, we first consider a simple synthetic routing game. Seven drivers in this game choose how to travel home during rush hour after a long day at the office. The different road segments have varying capacities that make some of them more or less susceptible to congestion.
Upon arrival home, the drivers record the total time and distance they traveled, the fuel that they used, and the amount of time they spent stopped at intersections or in congestion---their utility features. In this game, each of the drivers chooses from four possible routes, yielding over $16,000$ possible outcomes. We obtained an $\eqmeps$-social welfare maximizing correlated equilibrium for those drivers using a subgradient method where the drivers preferred mainly to minimize their travel time, but were also slightly concerned with fuel cost. The demonstrated behavior $\tilde{\sigma}^{\Gamesym}$ was sampled from this true behavior distribution $\sigma^{\Gamesym}$. In Figure~\ref{fig:logloss} we compare the prediction accuracy of MaxEnt ICE, measured using log loss, $\expectation{\outcome\distributed\sigma^{\Gamesym}}{-\log_2 \predstrategya{\outcome}}$, against a number of baselines by varying the number of observations sampled from the $\eqmeps$-equilibrium. The baseline algorithms are: a smoothed multinomial distribution over the joint-actions, a logistic regression classifier parameterized with the outcome utilities, and a maximum entropy inverse optimal control approach~\citep{ziebart2008} trained individually for each player. \begin{figure}[h] \begin{center} \centerline{\includegraphics[width=.68\textwidth]{figs/sc}} \caption{Prediction error (log loss) as a function of number of observations in the synthetic routing game.} \label{fig:logloss} \end{center} \vskip -0.2in \end{figure} In Figure~\ref{fig:logloss}, we see that MaxEnt ICE predicts behavior with higher accuracy than all other algorithms when the number of observations is limited. In particular, it achieves close to its best performance with only $16$ observations. The maximum likelihood estimator eventually overtakes it, as expected since it will ultimately converge to $\sigma^{\Gamesym}$, but only after $10,000$ observations, or close to as many observations as there are outcomes in the game. 
MaxEnt ICE cannot learn the true behavior exactly in this case without additional constraints due to the social welfare criterion that the true behavior optimizes. That is, our rationality assumption does not hold in this case. We note that the logistic regression classifier and the inverse optimal control techniques perform better than the multinomial under low sample sizes, but they fail to outperform MaxEnt ICE due to their inability to appreciate the strategic nature of the game. Next, we evaluate behavior transfer from this routing game to four similar games, the results of which are displayed in Table~\ref{tbl:transfer}. The first game, {\em Add Highway}, adds a new route to the game. That is, we simulate the city building a new highway. The second game, {\em Add Driver}, adds another driver to the game. The third game, {\em Gas Shortage}, keeps the structure of the game the same, but changes the reward function to make gas mileage more important to the drivers. The final game, {\em Congestion}, simulates adding construction to the major roadway, delaying the drivers. Here, we do not share deviations across the training and test games, and we add a slack variable in the primal to ensure feasibility. \begin{table}[t] \begin{center} \begin{small} \begin{sc} \begin{tabular}{lcc} \hline Problem & Logistic Regression & MaxEnt Ice \\ \hline Add Highway & 4.177 & 3.093 \\ Add Driver & 4.060 & 3.477 \\ Gas Shortage & 3.498 & 3.137 \\ Congestion & 3.345 & 2.965 \\ \hline \end{tabular} \end{sc} \end{small} \end{center} \caption{Transfer error (log loss) on unobserved games.} \label{tbl:transfer} \end{table} These transfer experiments even more directly demonstrate the benefits of learning utility weights rather than directly learning the joint-action distribution; direct strategy-learning approaches are incapable of being applied to the general transfer setting. Thus, we can only compare against logistic regression.
We see from Table~\ref{tbl:transfer} that MaxEnt ICE outperforms the logistic regression in all of our tests. For reference, in these new games, the uniform strategy has a loss of approximately $6.8$ in all games, and the true behavior has a loss of approximately $2.7$. These experiments demonstrate that learning underlying utility functions to estimate observed behavior can be much more data-efficient for small sample sizes. Additionally, they show that the regret-based assumptions of MaxEnt ICE are beneficial in strategic settings, even though our rationality assumption does not hold in this case. \subsection{Market Entry Game} We next evaluate our approach against a number of baselines on data gathered for the Market Entry Prediction Competition~\citep{erev2010}. The game has four players, is repeated for fifty trials, and is meant to simulate a firm's decision to enter into a market. On each round, all four players simultaneously decide whether or not to open a business. All players who enter the market receive a stochastic payoff centered at $10 - kE$, where $k$ is a fixed parameter unknown to the players and $E$ is the number of players who entered. Players who do not enter the market receive a stochastic payoff with zero mean. After each round, each player is shown their reward, as well as the reward they would have received by choosing the alternative. Observations of human play were gathered by the CLER lab at Harvard~\citep{erev2010}. Each student involved in the experiment played ten games lasting fifty rounds each. The students were incentivized to play well through a monetary reward proportional to their cumulative utility. The parameter $k$ was randomly selected in such a fashion that the Nash equilibrium had an entry rate of $50\%$ in expectation. In total, $30,000$ observations of play were recorded. The intent of the competition was to have teams submit programs that would play in a similar fashion to the human subjects.
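The payoff structure just described can be simulated directly. The sketch below assumes unit-variance Gaussian noise, which is our assumption for illustration rather than the competition's actual noise model:

```python
import random

def market_entry_payoffs(enter_decisions, k, rng):
    """One round of the market entry game: entrants get a stochastic payoff
    centred at 10 - k*E (E = number of entrants); non-entrants get a
    zero-mean stochastic payoff.  Unit-variance Gaussian noise is an
    assumption made for this sketch."""
    E = sum(enter_decisions)
    return [(10 - k * E) + rng.gauss(0, 1) if entered else rng.gauss(0, 1)
            for entered in enter_decisions]
```

For example, with $k = 2$ and a single entrant, the entrant's payoff is centred at $10 - 2 \cdot 1 = 8$.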
That is, the data was used at test time to validate performance. In contrast, our experiments use actual observations of play at training time to build a predictive model of the human behavior. As we are interested in stationary behavior, we train and test on only the last twenty-five trials of each game. We compared against two baselines. The first baseline, labeled {\em Multinomial} in the figures, is a smoothed multinomial distribution trained to minimize the leave-one-out cross-validation loss. This baseline does not make use of any features of the games. That is, if the players indeed play according to the Nash equilibrium, we would expect this baseline to learn the uniform distribution. The second baseline, labeled {\em Logistic Regression} in the figures, simply uses regularized logistic regression to learn a linear classification boundary over the outcomes of the game using the same features presented to our method. Operationally, this is equivalent to using MaxEnt Inverse Optimal Control in a single-agent setting where the utility is summed across all the players. This baseline has similar representational power to our method, but lacks an understanding of the strategic elements of the game. \begin{figure}[ht] \centering \includegraphics[width=.68\textwidth]{figs/mec_utility_mle} \caption{Test log loss using only the game's expected utility as a feature in the market entry experiment.} \label{fig:utilitymle} \end{figure} In Figure~\ref{fig:utilitymle}, we see a comparison of our method against the baselines when the game's true expected utility is used as the only feature. We see that our method outperforms both baselines across all sample sizes. We also observe that the multinomial distribution performs slightly better than the uniform distribution, which attains a log loss of $4$, though substantially worse than logistic regression and our method, indicating that the human players are not particularly well-modeled by the Nash equilibrium.
Our method substantially outperforms logistic regression, indicating that there is indeed a strategic interaction that is not captured in the utility features alone. \begin{figure}[ht] \centering \includegraphics[width=.68\textwidth]{figs/mec_features_nomle} \caption{Test log loss using a number of history summary features in the market entry experiment.} \label{fig:allfeatures} \end{figure} In Figure~\ref{fig:allfeatures}, we see a comparison of our method against the baselines using a variety of predictive features. In particular, we summarize a round using the observed action frequencies, average reward, and reward variance up to that point in the round. To weigh recent observations more strongly, we also employ exponentially-weighted averages. We observe that the use of these features substantially improves the predictive power of the feature-based methods. Interestingly, we also note that the addition of these summary features narrows the gap between logistic regression and MaxEnt ICE. Under low sample sizes, the logistic model performs the best, but our method overtakes it as more data is made available for training. It appears that in this scenario, much of the strategic behavior demonstrated by the participants can be captured by these history features. \subsection{Mid-scale Hotel Market Entry} \begin{figure}[ht] \centering \includegraphics[width=.6\textwidth]{figs/texas_wrluri} \caption{Regulatory index values for select counties in Texas. Blue means little regulation and lower costs to enter the market. Red means higher costs.} \label{fig:texas} \end{figure} For our final experimental evaluation, we considered the task of predicting the behavior of mid-scale hotel chains, like Holiday Inn and Ramada, in the state of Texas. Given the demographic and regulatory features of a county, we wish to predict whether each chain is likely to open a new hotel or to close an existing one.
The observations of play are derived from quarterly tax records over a fifteen year period from forty counties, amounting to a total of $2,205$ observations. The particular counties selected had records of all of the demographic and regulatory features, had at least four action observations, and none was a chain's flagship county. Figure~\ref{fig:texas} highlights the selected counties and visualizes their regulatory practices. The demographic and regulatory features were aggregated from various sources and generously provided to us by Prof. Junichi Suzuki~(\citeyear{suzuki2010}). The demographic features for each county include quantities such as the size of its population and its area, employment and household income statistics, as well as the presence or absence of an interstate, airport or national park. The regulatory features are indices measuring quantities such as commercial land zoning restrictions, tax rates and building costs. In addition to these noted features, which are fixed across all time periods, there are time-varying features such as the number of hotels and rooms for each chain and the aggregate quarterly income. We model each quarterly decision as a parameterized simultaneous-move game with six players. Each player, a mid-scale hotel chain, has the action set $\{\text{Close},\text{NoAction},\text{Open}\}$, resulting in $729$ total outcomes. For the game's utility, we allocated the county's features to each player in proportion to how many hotels they owned. That is, if a player operated 3 out of 10 hotels, the features associated with utility at that outcome would be the county's feature vector scaled by $0.3$. We included bias features associated with each action to account for fixed costs associated with opening or closing a hotel. In the observation data, there are a small number of instances where a chain opens or closes more than one hotel during a quarter. These events are mapped to $\text{Open}$ and $\text{Close}$, respectively.
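The proportional allocation of county features described above (operating 3 of 10 hotels scales the features by 0.3) amounts to the following minimal sketch (`allocate_features` is a hypothetical helper name, not from the paper):

```python
def allocate_features(county_features, hotels_owned, total_hotels):
    """Scale the county's feature vector by the player's share of hotels;
    e.g. operating 3 of 10 hotels scales every feature by 0.3."""
    share = hotels_owned / total_hotels if total_hotels else 0.0
    return [f * share for f in county_features]
```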
Though the outcome-space is quite large, the outcome distribution is extremely biased and the actions of the chains are highly correlated. In particular, over $80\%$ of the time no action is taken, around $17\%$ of the time a single chain acts, and less than $3\%$ of the time more than one chain acts. As a result, one expects the featureless multinomial estimator to have reasonable performance despite a large number of classes. For experimentation, we evaluated four algorithms: a smoothed multinomial distribution trained to minimize the leave-one-out cross-validation loss, MaxEnt inverse optimal control trained once for all players, multi-class logistic regression over the joint action space, and regret-matching ICE with utility matching constraints. As the resulting optimizations for the latter two algorithms are smooth, we employed the L-BFGS quasi-Newton method with L2-regularization for training~\citep{nocedal80}. As a substitute for L1-regularization, we selected the $23$ best features based on their reduction in training error when using logistic regression. Each county had $63$ features available. Of the top $23$ features selected, $11$ were regulatory indices. For the logistic regression and ICE predictors, we only used utility features on the 13 high-probability outcomes (no firms build, and one firm acting). The remaining outcomes had only bias features associated with them to help prevent overfitting. We experimented with a number of types of bias features, for example, 4 bias features (one for no firms build, one for a single firm builds, one for a single firm closes and one for all remaining outcomes), as well as 729 bias features (one for each outcome). We found that, though on their own the different bias features had varied predictive performance, when combined with utility and regret features they were quite similar given the appropriate regularization.
In the best-performing model, which we present here, we used 729 bias features, resulting in $1,028$ parameters to the logistic regression model. In the ICE predictor, we tied together the weights for each deviation across all the players to reduce the number of model parameters. For example, all players shared the same dual parameters for the $\text{NoAction}\rightarrow\text{Open}$ deviation. Effectively, this alters the rationality assumption such that the \emph{average} regret across all players is the quantity of interest, instead of the maximum regret. Operationally, this is implemented as summing each deviation's gradient in the dual. This treats the players anonymously; thus we implicitly and incorrectly assume that, conditioned on the county's parameters, each firm is identical. Due to the use of parameter tying, the ICE predictor has an additional $156$ model parameters. \begin{figure}[ht] \centering \includegraphics[width=.6\textwidth]{figs/predictions} \caption{The marginalized probability that a chain will build a hotel in Spring 1996 predicted by MaxEnt ICE. Brighter shades of green denote higher probabilities.} \label{fig:texas_predictions} \end{figure} The test losses reported were computed using ten-fold cross validation. To fit the regularization parameters for logistic regression, MaxEnt IOC and MaxEnt ICE, we held out $10\%$ of the training data and performed a parameter sweep. For logistic regression, a separate parameter sweep and regularization was used for the bias and utility features. For MaxEnt ICE, an additional regularization parameter was selected for the regret parameters. A sample of the predictions from MaxEnt ICE is shown in Figure~\ref{fig:texas_predictions}.
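The parameter tying described above, where one shared deviation weight accumulates each player's gradient in the dual, can be sketched as follows (a simplified illustration; `tie_deviation_gradients` is our name, not the paper's):

```python
def tie_deviation_gradients(per_player_grads):
    """Sum each deviation's gradient across players, so one shared weight
    per deviation receives the accumulated gradient (average-regret tying).
    `per_player_grads` is a list (one entry per player) of per-deviation
    gradient vectors."""
    return [sum(g) for g in zip(*per_player_grads)]
```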
\begin{figure} \centering \includegraphics[width=.32\textwidth]{figs/chart729} \includegraphics[width=.32\textwidth]{figs/chart2} \includegraphics[width=.32\textwidth]{figs/chartc} \caption{(Left) Test log loss on the full outcome space relative to the smoothed multinomial, which has log loss $1.58234\pm 0.058088$. (Center) Test log loss on no-build vs. build outcomes only. Loss is relative to the multinomial, with log loss $0.721466\pm 0.016539$. (Right) Test log loss conditioned on build outcomes only. Loss is relative to the multinomial, with log loss $6.5911\pm 0.116231$.} \label{fig:hotel} \end{figure} In the left of Figure~\ref{fig:hotel}, we present the test errors of the three parameterized methods in terms of their offset from that of the featureless multinomial. This quantity has lower variance than the absolute errors, allowing for more accurate comparisons. We see that the addition of the regret features more than doubles the improvement of logistic regression, from $2.6\%$ to $6.3\%$, whereas the inverse optimal control method only sees a $4.3\%$ improvement. In the center of Figure~\ref{fig:hotel}, we show the test log-loss when the methods are only required to predict whether any firm acts. Here, the models are still trained over their complete outcome spaces and their predictions are marginalized. We see that all three methods are equal within noise. That is, the differences in the predictive performances come solely from each method's ability to predict {\em who} acts. We additionally performed this experiment without the use of regulatory features and found that the logistic regression method achieved a relative loss of $-0.027300$. Using a paired comparison between the two methods, we note that this difference of $0.004443$ is significant with error $0.001886$. This echoes Suzuki's conclusion that the regulatory environment in this industry affects firms' decisions to build new hotels~\citep{suzuki2010}, measured here by improvements in predictive performance.
In the right of Figure~\ref{fig:hotel}, we demonstrate the test log loss conditioned on at least one firm acting---the portion of the loss that differentiates the methods. The logistic regression method with only utility features performs the worst with a $1.8\%$ improvement over the multinomial baseline, the individual inverse optimal control method improves by $4.1\%$, and MaxEnt ICE performs the best with a $6.3\%$ improvement. That is, the addition of regret features, and hence accounting for the strategic aspects of the game, has a significant effect on the predictive performance in this setting. We note that replacing the regulatory features in the regret portion of the MaxEnt ICE model actually slightly improves performance to $-0.471763$, though not by a significant margin. This implies that the regulatory features have little or no bearing on predicting exactly which firm will act, which suggests the regulatory practices are unbiased. \section{Conclusion} In this article, we develop a novel approach to behavior prediction in strategic multi-agent domains. We demonstrate that by leveraging a rationality assumption and the principle of maximum entropy, our method can be efficiently implemented while achieving good statistical performance. Empirically, we displayed the effectiveness of our approach on two market entry data sets. We demonstrated both the robustness of our approach to errors in our assumption as well as the importance of considering strategic interactions. Our future work will consider two new directions. First, we will address classes of games where the action sets and players differ. A key benefit of our current approach is that it enables these to differ between training and testing, which we only leverage modestly in the transfer experiments for route prediction.
This will involve investigating, from a statistical point of view, novel notions of a deviation and their corresponding equilibrium concepts. Second, we will consider different models of interactions, such as stochastic games and extensive-form games. These models, though no more expressive than matrix games, can often represent interactions exponentially more succinctly. From a practical standpoint, this avenue of research will allow for the application of our methods to a broader class of problems, including, for instance, exploring the time series dependencies within the Texas Hotel Chain data. \section*{Acknowledgements} This work is supported by the ONR MURI grant N00014-09-1-1052 and by the National Sciences and Engineering Research Council of Canada (NSERC). The authors gratefully acknowledge Prof. Junichi Suzuki for providing the aggregated mid-scale hotel data and Alex Grubb for the application of density estimation code to the data-sets.
\section{Introduction} During the course of each solar activity cycle, hundreds of ARs form and decay with time. These regions are responsible for producing energetic events, e.g., flares and coronal mass ejections (CMEs), and are thus known as the major drivers of space weather. Observations show that a significant number of CMEs originate from flares \citep{Yashiro06}. There is further evidence that big ARs are generally formed during the declining phase of the cycle and produce more flares and CMEs than in the ascending and maximum activity periods. Many authors have investigated the possible relationships between flare properties and the physical parameters of CMEs \citep[][and references therein]{Compagnino17}, and have also tried to determine the flare characteristics that may lead to CME production \citep[][and references therein]{Liu16}. In general, big ARs produce both flares and CMEs. However, there are a few examples where large regions produced flares but no CMEs. The most recent example is NOAA 12192, the biggest AR in the last 25 years, which produced a significant number of flares during its disk passage. However, only one small CME was reported to originate from this region. In total, there were 73 C-class, 35 M-class, and 6 X-class flares during the 14 days of the disk passage (ftp://ftp.swpc.noaa.gov/pub/warehouse/). Among them, the most intense flare was the X3.1 on 2014 October 24. In contrast to AR 12192, the biggest AR of Solar Cycle 23, AR 10486, produced 16 C-class, 16 M-class, and 7 X-class flares (including the X17.2 flare on 2003 October 28) and several associated halo CMEs (https://cdaw.gsfc.nasa.gov/CME\_list/). In Figure~\ref{ar_mag}, we show sample magnetograms of AR 12192 (top left) and AR 10486 (top right) around the same location on the solar disk but almost 11 years apart. The lower panel illustrates the variation of sunspot area in AR 10486 and 12192 as these regions pass through the Earth-facing side of the Sun.
It can be seen that AR 12192 was much bigger in size than AR 10486. Since the appearance of AR 12192, several studies have been carried out to explore its properties \citep{Chen15,Sun15,Jiang16, Liu16,Inoue16}. However, these were confined to studying the magnetic conditions during the X3.1 flare, with a focus on understanding its CME-poor nature. \citet{Sun15} found a weaker non-potential core region in AR 12192 with stronger overlying fields, and smaller flare-related field changes, when compared with the other major flare-CME-productive ARs 11158 and 11429 of Cycle 24. By studying various magnetic indices, they further added that the large amount of magnetic free energy in AR 12192 did not translate into CME productivity. The strong confinement from the overlying magnetic field responsible for the poor CME production was also confirmed by \citet{Chen15}. In a different study, \citet{Jing15} also compared the characteristics of the X3.1 flare of AR 12192 with the X2.2 flare of the CME-productive AR 11158 in terms of the magnetic free energy, relative magnetic helicity, and decay index of these ARs. The major difference between them was found to lie in the time-dependent change of magnetic helicity of the ARs. AR 11158 showed a significant decrease in magnetic helicity starting 4 hours prior to the flare, but no apparent decrease in helicity was observed in AR 12192. By studying the decay index, these authors also confirmed the earlier findings of \citet{Sun15} and \citet{Chen15} that the strong overlying magnetic field was mainly responsible for the low CME productivity in AR 12192. In order to provide more insight into this, \citet{Liu16} suggested that large ARs may have enough current and free energy to power flares, but whether or not a flare is accompanied by a CME is related to the presence of a mature sheared or twisted core field serving as the seed of the CME, or a weak enough constraint of the overlying arcades.
In a recent study, by analyzing SDO/HMI, SDO/AIA and Hinode/SOT observations, \citet{Bamba17} studied the X1.0 flare that occurred in the same region on 2014 October 25, which also did not trigger any CME. They found that reversed magnetic shear at the flaring location was responsible for this 3-ribbon flare, and that the closed magnetic field overlying a flux rope did not reach the critical decay index, which prohibited the CME eruption. Despite many studies describing the properties of AR 12192 above the solar surface, none has been published so far on its subsurface characteristics. There are no well-defined parameters from helioseismic studies that favor the CME-poor or CME-rich nature of any AR. However, in order to explore the subsurface signatures of eruptive events, \citet{Reinard2010} studied the temporal changes in subsurface helicity preceding AR flaring. \citet{tripathy2008_CME} also investigated the changes in acoustic mode parameters before, during, and after the onset of several halo CMEs. They found that CMEs associated with regions of low magnetic flux have line widths smaller than those of quiet regions, implying longer lifetimes for the oscillation modes. However, an opposite trend was found in regions of high magnetic field, which is a common characteristic of ARs \citep{Jain08}. Both of these studies were based on limited samples of ARs; hence a general conclusion could not be reached. In this paper, we study subsurface flows associated with AR 12192 and compare them with those in AR 10486. We also investigate the differences between them in order to understand the CME-poor nature of AR 12192. The paper is organized as follows: the observations of AR 12192 in multiple CRs are described in Section~2. We briefly describe the ring-diagram technique in Section~3, and the subsurface flows in AR 12192 and their comparison with AR 10486 in Section~4.
We discuss possible scenarios supporting the different characteristics of both ARs in Section~5, and finally our results are summarized in Section~6. \section{Observations} \subsection{Early observations of AR 12192 in EUV} Two ARs, NOAA 12172 and 12173, appeared near the east limb in CR 2155. These regions were formed on the backside of the Sun and were closely located. Continuous observations show that both regions survived till they reached the west limb; however, AR 12173 decayed within a few days after crossing the limb, while AR 12172 went through rapid evolution. AR 12172 was later labeled as AR 12192 after its appearance on the east limb in CR 2156. To track the growth and decay of AR 12172 and AR 12173, respectively, in particular on the invisible surface of the Sun, we use ``all Sun'' EUV Carrington maps as described in \citet{Liewer2014}. These maps are produced by combining EUV frontside images from the Atmospheric Imaging Assembly (AIA) on board the {\it Solar Dynamics Observatory (SDO)} with the backside images from the {\it Solar Terrestrial Relations Observatory (STEREO)}. In Figure~\ref{EUV}, we show sample helioseismic and corresponding EUV maps in 195\AA, 171\AA, and 304\AA~for four days. The helioseismic maps are composite images of the far- and near-side solar hemispheres, where the near hemisphere is represented by the line-of-sight magnetic field and the farside by an image showing sound wave travel time variations. The locations of shorter travel times appear as darker regions, indicating the locations of high magnetic field concentration on the farside. The top two rows provide maps when AR 12172 and 12173 were present on the front side, and the bottom two rows those days when they crossed the west limb. As in the travel time maps, the high magnetic field regions can easily be identified in the EUV images in the form of brightenings.
It should be noted that communications with the STEREO Behind spacecraft were interrupted on 2014 Oct 1, which hampered the required telemetry of the data. Nevertheless, a few images from STEREO do exist that indicate the decay of AR 12173 between Sept 27 and Oct 5. A closer inspection of the images on Sept 24 (top row) clearly shows two well-developed ARs in both the photospheric magnetogram (last column) and the EUV observations, while in the 2nd row, showing observations for Sept 27, we notice a dimming in the brightness of one AR. This kind of reduction in EUV brightness generally indicates the decaying phase of the AR, or the depletion of magnetic field strength. We further notice that only one AR was present in the EUV images on Oct 5, indicating that AR 12173 had completely decayed by this time. Although there is a gap in the STEREO observations between Oct 5 and Oct 12, the helioseismic maps strongly support the rapid growth of a strong magnetic field region on the farside, which is discussed below. This region could not be further tracked in EUV due to the interrupted communications with the STEREO Behind spacecraft. \subsection{AR 12192 in multiple Carrington rotations - helioseismic imaging} Direct and continuous observations of the visible surface and the helioseismic far-side imaging suggest that AR 12192 was a long-lived active region. A careful examination of these images clearly indicates that the AR survived for at least four CRs and went through various evolutionary phases. Although this is the biggest region of the current solar cycle to date, it did not produce the largest flare of the cycle or any major CME. Since NOAA assigns new numbers to all ARs whenever they appear on the east limb, irrespective of their reappearance after completing a solar rotation, the far-side images are crucial to ascertain the life of such ARs.
The reliability of far-side images of ARs was tested in a series of papers by \citet{Liewer2012, Liewer2014}, where helioseismic farside maps were compared with backside images of the Sun from NASA's STEREO mission. They found that approximately 90\% of the helioseismic active-region predictions matched the activity/brightness observed in EUV at the same locations. It should be noted that, despite providing reasonable information on the existence of ARs on the non-Earth-facing side of the Sun, the current method of helioseismic imaging is limited in sensing strong signals near the limb. However, the signal becomes stronger when the ARs move towards the far-side central meridian. In Figure~\ref{fs}, we show the images of AR 12192 in four consecutive CRs (CR 2155 - 2158), including images of both the visible and invisible surfaces. Here we show images for every fourth day, with each row corresponding to the AR's appearance in the next CR. The region of interest is marked by the white circle. Daily helioseismic maps of the far side are available at http://jsoc.stanford.edu/data/farside/ using HMI and http://gong2.nso.edu/products/ using GONG observations. The evolution of AR 12192 in multiple CRs is summarized in Table~\ref{table1}. \section{Data and Technique} We utilize 1k $\times$ 1k continuous Doppler images from GONG at a cadence of 60 s. Although space observations with better spatial resolution are available from MDI on board SOHO during the Dynamics run from 1995 -- 2010 (1k $\times$ 1k at a cadence of 60 s) and from HMI on board SDO (4k $\times$ 4k at a cadence of 45 s) since 2010, the GONG observations provide a unique advantage in this work. All these instruments use photospheric lines; the GONG and MDI observations are in the \ion{Ni}{1} 6768\AA~line as opposed to the HMI observations in the \ion{Fe}{1} 6173\AA~line.
Although the inference of subsurface flow should not be influenced by the spectral line used in observations \citep{Jain06a}, the different spatial resolution may introduce some bias in the results, mainly due to the sensitivity to different mode sets for inversion \citep{Jain13}. In addition, every instrument has some type of systematic effects; thus the use of consistent data for both ARs as well as quiet regions will minimize instrument-related bias, if any. Furthermore, the GONG observations allow us to use the same quiet-region values from the deep minimum phase between Cycles 23 and 24 as a reference. It is worth mentioning that there have been changes in the GONG instrumentation over the period of this analysis; however, these were limited to adding new capabilities only, e.g., the modulator upgrade in 2006 to obtain high-cadence magnetograms and the addition of an H-$\alpha$ filter in 2010 at a 20 s cadence for space weather studies; thus the Doppler observations were not affected. Therefore, the GONG network provides a unique, consistent data set of continuous high-cadence and high-spatial-resolution Dopplergrams for the last one and a half solar cycles that can be used to study long-term solar variability. We use the technique of ring diagrams \citep{Hill88} to study the subsurface plasma flow in AR 12192 in multiple CRs. This method has been used in numerous studies to investigate the acoustic mode parameters beneath ARs \citep{Rajaguru01,Tripathy12, Rabello-Soares16} as well as long-term and short-term variations in near-surface flows \citep{Komm15a,Greer15,Bogart15,Howe15, Tripathy15,Jain15}. It is widely understood that a strong concentration of magnetic field alters the behavior of acoustic waves by absorbing their power. As a result, the inferences in these regions often carry large uncertainties.
In a recent study, the credibility of the ring-diagram method was tested by comparing, for three ARs, the near-surface flows calculated from the ring-diagram method with the surface flows calculated from the local correlation tracking method \citep{Jain16}. The authors reported a good correlation between these two quantities. Studies further show that the uncertainties in the inferences tend to increase towards the limb \citep{Jain13a}; hence our analysis is restricted to within $\pm$45${\degr}$ of the disk center. In the ring-diagram method, high-degree waves propagating in localized areas over the solar surface are used to obtain acoustic mode parameters in the region of interest. In this work, we track and remap regions of $15^{\degr} \times 15^{\degr}$ (128 $\times$ 128 pixels in the spatial directions) for 1664 min (referred to as 1 ring day) at the surface rotation rate \citep{snod84}. The tracking rate of each central latitude in these regions is of the form cos($\theta$)[$a_0 + a_2$sin$^2(\theta) + a_4$ sin$^4(\theta)$], where $\theta$ is the latitude and the coefficients $a_0$ = 451.43 nHz, $a_2$ = $ -$ 54.77 nHz, and $a_4$ = $ -$ 81.17 nHz. We then apodize each tracked area with a circular function, and a three-dimensional FFT is applied in both the spatial and temporal directions to obtain a three-dimensional power spectrum that is fitted using a Lorentzian profile model \citep{Haber02}, \begin{eqnarray} P (k_x, k_y, \omega) & = & {A \over (\omega - \omega_0 + k_xU_x+k_yU_y)^2+\Gamma^2} + {b \over k^3} \end{eqnarray} where $P$ is the oscillation power for a wave with a temporal frequency ($\omega$) and the total wave number $k^2=k_x^2+k_y^2$. There are six parameters to be fitted: two Doppler shifts ($k_xU_x$ and $k_yU_y$) for waves propagating in the orthogonal zonal and meridional directions, the background power ($b$), the mode central frequency ($\omega_0$), the mode width ($\Gamma$), and the amplitude ($A$).
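For concreteness, the tracking rate and the Lorentzian profile model of the equation above can be written as the following sketch (the function names are ours; the coefficients and units are as quoted in the text):

```python
import math

def tracking_rate_nhz(latitude_deg):
    """Tracking rate cos(t) * (a0 + a2 sin^2 t + a4 sin^4 t) in nHz,
    with the coefficients quoted in the text (Snodgrass 1984)."""
    a0, a2, a4 = 451.43, -54.77, -81.17
    t = math.radians(latitude_deg)
    s2 = math.sin(t) ** 2
    return math.cos(t) * (a0 + a2 * s2 + a4 * s2 ** 2)

def ring_power(kx, ky, omega, A, omega0, Ux, Uy, Gamma, b):
    """Lorentzian profile model for the 3-D power spectrum: a Lorentzian
    peak Doppler-shifted by k.U, plus a k^-3 background term."""
    k = math.hypot(kx, ky)
    return A / ((omega - omega0 + kx * Ux + ky * Uy) ** 2 + Gamma ** 2) + b / k ** 3
```

At the equator the tracking rate reduces to $a_0$ = 451.43 nHz, and it decreases toward higher latitudes, as expected for differential rotation.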
Finally, the frequencies along with the fitted velocities ($U_x$ and $U_y$) and formal errors are used as input to a regularized least squares (RLS) method to estimate the depth dependence of various components of the horizontal velocity (zonal: $V_x$ and meridional: $V_y$). While the zonal component provides an estimate of the flow in the east-west direction, the meridional component is for the north-south direction. The depth dependences of the mode kernels used in the inversion are shown in Figure~\ref{kernels}. It can be seen that the kernels peak sharply near the surface due to the large number of modes available for the kernel construction; however, as the centroids of the kernels move deeper, the kernels become broader and the inversion's depth resolution becomes poorer. In order to investigate the subsurface variability with the evolution/decay of ARs, we use quiet-region values as a reference and subtract them from the AR values in various time samples. Our analysis is restricted to radial orders $n$ = 0 -- 6 in the frequency range of 1700 $\mu$Hz $\le \nu \le$ 5200 $\mu$Hz. It should be noted that we have used custom patches processed through the latest version of the GONG ring-diagram pipeline (version 3.6.1). Although both active regions studied in this paper were located close to multiples of $7^{\degr}.5$ in both directions (the grid spacing used in the ring-diagram pipeline), the standard products provided different locations for AR 10486 on the solar disk from those for AR 12192. Hence, for a direct comparison between these ARs, we have customized the time series for AR 10486, which means the start and end times for each set are different from the standard products. \subsection{Quantitative estimates of magnetic activity} In our study, we also quantify the magnetic activity in various regions by calculating the Magnetic Activity Index (MAI) in each sample using the line-of-sight magnetic field from the magnetograms. 
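The RLS inversion step can be illustrated with a small numpy sketch. This is a toy forward problem with zeroth-order (identity) regularization, not the pipeline's actual mode kernels or smoothness penalty:

```python
import numpy as np

def rls_invert(K, d, errors, lam):
    """Regularized least squares: minimize ||(K v - d)/sigma||^2 + lam*||v||^2.
    K      : forward (kernel) matrix mapping flow profile v to observed shifts d.
    errors : formal errors sigma on the observations.
    Identity regularization is used here for simplicity; the actual pipeline
    regularizes on the smoothness of the velocity profile."""
    W = np.diag(1.0 / np.asarray(errors, dtype=float))
    A = W @ K
    b = W @ np.asarray(d, dtype=float)
    lhs = A.T @ A + lam * np.eye(K.shape[1])
    return np.linalg.solve(lhs, A.T @ b)

# Toy example: with consistent, well-determined data and weak regularization
# the true two-bin profile is recovered.
K = np.array([[1.0, 0.2], [0.3, 1.0], [0.6, 0.6]])
v_true = np.array([10.0, -5.0])
d = K @ v_true
v_est = rls_invert(K, d, errors=[1.0, 1.0, 1.0], lam=1e-8)
```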
Here we convert the magnetogram data to absolute values, average over the length of a ring day, and apodize them into circular areas to match the size of the Dopplergram patches used in the ring-diagram analysis. We use both SOHO/MDI and GONG magnetograms to calculate MAI. Since the entire period under study is not covered by the MDI magnetograms alone, we use the 96 minute MDI magnetograms for 2003 and GONG magnetograms sampled every 32 minutes for 2008 and 2014. Lower threshold values of 50 and 8.8 G are used for MDI \citep{basu04} and GONG \citep{Tripathy15}, respectively, which are approximately 2.5 times the estimated noise in the individual magnetograms. In addition, we also correct the space-based magnetic field data for cosmic-ray strikes. To generate a uniform set of MAIs, the MDI MAI values are scaled to GONG values using a conversion factor of 0.31, which was obtained by relating the line-of-sight magnetic field measurements between these two instruments using a histogram-equalizing technique \citep{Riley14}. \section{Results} \subsection{Systematics and the selection of quiet regions} In a study of the subsurface characteristics of any active region in multiple time samples, the first step is to eliminate the influence of systematic effects, if any. The most common error comes from foreshortening, which may arise due to the change in heliographic locations of the tracked regions during the disk passage. To overcome this effect, we calculate reference values of quiet regions at the same heliographic locations as the AR and subtract them from the AR values, as described in \citet{Jain12,Jain15}. As a result, we obtain residuals that provide estimates of flows due to the change in magnetism. Another issue is the selection of quiet regions, which may also influence the inferences. 
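The MAI recipe described above (absolute values, instrument thresholding, ring-day averaging, circular apodization) can be sketched as follows. This is an illustrative reconstruction, not the pipeline code; the function name and the simple circular mask are our assumptions:

```python
import numpy as np

def mai(magnetograms, threshold_gauss):
    """Rough MAI estimate: take absolute field values, zero out pixels below
    the instrument threshold (50 G for MDI, 8.8 G for GONG), average the
    frames over a ring day, and restrict to a circular area matching the
    Dopplergram tile."""
    stack = np.abs(np.asarray(magnetograms, dtype=float))
    stack[stack < threshold_gauss] = 0.0
    mean_map = stack.mean(axis=0)              # average over the ring day
    ny, nx = mean_map.shape
    y, x = np.ogrid[:ny, :nx]
    r2 = (x - (nx - 1) / 2) ** 2 + (y - (ny - 1) / 2) ** 2
    mask = r2 <= (min(nx, ny) / 2) ** 2        # circular apodization area
    return mean_map[mask].mean()

MDI_TO_GONG = 0.31  # scaling factor for MDI MAIs quoted in the text
```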
\citet{Tripathy09} have examined the role of the choice of quiet regions in helioseismic studies and demonstrated that the results derived for pairs of quiet regions and ARs were biased by the selection of quiet regions. They further suggested that an ensemble average of quiet regions should be used in such studies. In previous studies, we have chosen quiet regions from the neighboring Carrington rotations \citep{Jain12,Jain15}. However, for an unbiased comparison between ARs 12192 (CR 2155) and 10486 (CR 2009), we use the same quiet regions from the deep minimum phase between cycles 23 and 24 for both ARs. Note that this was a period of extremely low solar activity, hence the values obtained with reference to this period can be termed ``true'' AR values. Although both ARs appeared in two different solar cycles, this selection of quiet regions is supported by the following observational facts: (i) both ARs were observed during the same phase of the year, when the values of the solar inclination angle ($B_0$) were comparable, and (ii) both ARs were located around the same latitude in the southern hemisphere (Figure~\ref{ar_mag}). Thus the projection effect in both cases can be uniformly removed by using the same quiet-region values. The period of 2008 September through December is used to determine the quiet-region values. We compute error-weighted averages of the MAIs for 13 ring days (the approximate time for any region to cross the disk) at each heliographic location. Since we are interested in studying the temporal variation in the subsurface properties of AR 12192 in four CRs and $B_0$ varies from +7${\degr}$.0 (September) to $-$1${\degr}$.5 (December) during the period of analysis, the quiet regions are selected in such a way that the values of MAI do not significantly change from region to region. Average MAIs for quiet regions at 15 disk positions along latitude 15${\degr}$ for four different ranges of $B_0$ are summarized in Table~\ref{table2}. 
The $B_0$ values chosen here correspond to those for the active regions analyzed in this paper. The regions along the latitude 15${\degr}$ are separated by 7${\degr}$.5 and cover the solar surface from $60{\degr}$E to $60{\degr}$W. It can be seen that the MAIs vary from 0.3 to 0.6 G and the standard deviation is within 18\%, with the largest values near the limb decreasing to about 8\% near the central meridian. We further calculate the components of horizontal velocity corresponding to these quiet regions and compute error-weighted averages over 13 ring days at each location, as for the MAIs. As our selected regions have an overlap of 7${\degr}$.5, we smooth these components with a 3-point running mean; the smoothed values are later used as reference in the active-region analysis. The longitudinal variation of these components across the disk is displayed in Figure~\ref{flow_quiet}, where different panels represent the averages for different $B_0$ ranges at four target depths below the surface, i.e. 3.1, 5.8, 10.2 and 14.2 Mm. It is clearly seen that the magnitude of $V_x$ increases with depth, which is in agreement with the observations where the rotation rate is found to increase with depth in the outer 5\% of the Sun. There are some variations in the magnitude of $V_y$ as well, however these are within 1$\sigma$. Further, both zonal and meridional velocities show an east-west trend, but it is more prominent in the zonal component. Also, the shallower layers are more affected by this systematic, which decreases with depth. To highlight the variation in the east-west trend with time, we plot, in Figure~\ref{flow_quiet1}, the variation of both components with disk location at depths of 3.1 Mm and 14.2 Mm. We notice significant variations in these components with both time and depth. These systematics were also noticed by \citet{Zhao12} in the time-distance analysis, where they found both travel-time magnitudes and variation trends to be different for different observables. 
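The error-weighted averaging over 13 ring days and the 3-point running mean across disk positions can be sketched as follows (helper names are ours; inverse-variance weighting is the standard form of an error-weighted mean):

```python
import numpy as np

def weighted_mean(values, errors):
    """Error-weighted (inverse-variance) average, e.g. over 13 ring days."""
    w = 1.0 / np.asarray(errors, dtype=float) ** 2
    return np.sum(w * np.asarray(values, dtype=float)) / np.sum(w)

def running_mean3(profile):
    """3-point running mean over neighboring disk positions (edges kept)."""
    p = np.asarray(profile, dtype=float)
    out = p.copy()
    out[1:-1] = (p[:-2] + p[1:-1] + p[2:]) / 3.0
    return out
```

Points with large formal errors thus contribute little to the reference values, and the overlap between neighboring tiles is smoothed out before subtraction.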
Though these authors did not provide any physical basis for this effect, they suggested that these systematics must be taken into account for the accurate determination of flows. Later, \citet{Baldner12} explained this effect in terms of the highly asymmetrical nature of the solar granulation, which results in a net radial flow. This east-west trend was also studied by \citet{Komm15a} in flows from the ring-diagram analysis for both GONG and HMI observations, where different data sets exhibited different systematics. They concluded that the trends could be removed adequately by selecting reference values from the same data source. Thus, in our study of the temporal variation of flows at different depths, we use reference values that depend on both the depth and the disk location of the active region. \subsection{AR 12192 and subsurface horizontal flows} The manifestation of strong magnetic fields is not only visible on the solar surface and beyond in the solar atmosphere; it also has a significant influence on the subsurface properties. While most acoustic mode parameters tend to change linearly with the increase or decrease in magnetic field strength \citep[][and references therein]{Tripathy15}, the velocity vectors have a complex relationship. These are modified in the presence of strong fields, however there are several other factors that may dominate the amplitude and the direction of velocity vectors. In general, the topology of ARs plays an important role in defining the velocity components. Also, a moderately large number of modes is needed in the inversion to reliably calculate the depth dependence of flows. The evolution or decay of the magnetic activity in AR 12192 in multiple CRs is illustrated in Figure~\ref{mai2014}. Locations of the reference image (i.e. the image at the center of each time series) are given in Table~\ref{table3}. 
Since the Carrington longitude identifies the unique location of any region on the solar surface, we have made sure that it does not change with time. The heliographic location of this AR is 247${\degr}$.5 longitude and $-$15${\degr}$ latitude. It can be seen that the magnetic field strength in this AR became stronger in the second rotation (CR 2156), then decayed to a moderate level in the third rotation (CR 2157). It decayed further in the fourth rotation and the MAI values became comparable to those in the first rotation (CR 2155). Among the additional factors that may influence the inference of flow vectors, a high duty cycle is crucial for obtaining reliable flows. Large gaps in observations, i.e. a low duty cycle, reduce the number of fitted modes, which has a significant effect on the inverted flows and produces large errors. Since the duty cycle in GONG observations varies from day to day, primarily due to the changing weather conditions at observing sites or to system maintenance/breakdown, the results may be altered. To authenticate our results, we also include duty cycles in Table~\ref{table3} and Figure~\ref{mai2014} for each time series. It can be seen that these are $\ge$ 70\% in all time series, which is above the lower threshold set in earlier studies (R. Komm, private communication). The mean duty cycles for eight ring days in each CR are 88.0\%, 85.4\%, 91.8\% and 88.3\%, with the variation between 1.2$\sigma$ and 2.3$\sigma$ within the CR. Another factor is the heliographic location of the region, which also influences the number of modes. Thus, the fitted modes are predominantly governed by all three factors: a higher magnetic field, a lower duty cycle, and a larger distance from disk center all lead to a lower number of fitted modes. It is evident from the lower panel in Figure~\ref{mai2014} that the number of modes in CR 2156 at each location is relatively low even though the duty cycles at some locations are comparable to other CRs. 
This clearly demonstrates the effect of the significantly strong magnetic field in CR 2156 on the acoustic modes, which reduces their amplitude and, as a result, produces large errors. We estimate that the variation in the number of modes is less than 2$\sigma$ in each CR, which is higher than the variation in duty cycle. Temporal variations of $V_x$ and $V_y$ for the AR in each CR are displayed in Figure~\ref{flow_12192}. These values are obtained by subtracting the quiet-Sun values at the same locations. In the absence of any high magnetic field region, the meridional component is typically poleward (negative values in the southern hemisphere), providing the basis for the magnetic flux transport from low latitudes to the poles, and the zonal component is in the direction of solar rotation. However, as mentioned above, their amplitudes as well as directions are modified due to the change in the characteristics of the AR. We obtain maximum values of both velocity components in CR 2156 in comparison to the other three CRs. It is further noted that the values of both components are comparable (within 1$\sigma$) in shallow layers in all CRs except CR 2156; however, these differ moderately in deeper layers. We also find very large flows in deeper layers in CR 2156. Since the MAI values are significantly large in CR 2156 (Figure~\ref{mai2014}), these could be responsible for the large flows and different behavior in this Carrington rotation. We also notice exceptionally large errors at $\pm$52${\degr}$.5 in CR 2156 that may arise due to the lower number of modes available for inversion, as shown in Figure~\ref{mai2014}. \subsection{Comparison with AR 10486} The active region 10486 was a long-lived AR \citep{Irene07} that produced several X- and M-class flares and associated CMEs in cycle 23. The magnetic classification of this region was similar to AR 12192, however the field strength was not as strong as in AR 12192. 
There was a slow decrease in the magnetic field in AR 10486 in the next CR, while it decreased significantly in AR 12192. In order to compare the horizontal flows from both active regions, the position of the ARs in the tracked cubes is crucial. If one region is off-centered from the other, the inferences of mode parameters and the flows will be biased by the area covered. Further, quiet areas within the tile will modify the results, as the calculated flows are averaged over the entire tile. To reduce the bias from these factors, as mentioned in Section~3, both regions are tracked at the Snodgrass rotation rate, which allows us to cover the same area in tracking, ensuring that the ARs stay at the same location in the tracked cube. Sample images of the evolution of ARs 12192 and 10486 in the tracked cubes for eight consecutive ring days are shown in Figures~\ref{patches_2014} and \ref{patches_2003}, respectively. These are the magnetogram regions near or at the reference image, which is the center of each time series. Since both ARs are large, it is clearly visible that they cover a reasonably large area of the tracked regions with stable positions within the tiles. The start and end of each time series representing AR 12192 and AR 10486 at different disk locations are listed in Tables~\ref{table3} and \ref{table4}, respectively. Temporal variations of MAI, duty cycle and the number of fitted modes in both regions for two consecutive CRs are displayed in Figure~\ref{mai_2ar}. As illustrated, the numbers of modes in ARs 12192 and 10486 were comparable in the first two days, after which differences started to arise that increased as the ARs moved towards the west. These differences may introduce some bias in the flow estimates, and the errors are also large when different mode sets are available for inversion. For convenience, in the following analysis, we will refer to regions in different CRs by their NOAA numbers. 
We, therefore, compare the subsurface properties of AR 12192 with those of AR 10486, and AR 12209 with AR 10508 in the next rotation. Figure~\ref{flow_2003} illustrates the daily variation of zonal and meridional velocities. Our analysis reveals a number of similarities as well as differences between ARs 12192 and 10486. We find that both ARs maintained significantly large flows during their disk passages, but in distinctly different directions. Locations of the X3.1 and X17.2 flares in ARs 12192 and 10486, respectively, are also marked in Figure~\ref{flow_2003} and tabulated in Tables~\ref{table3} and \ref{table4}. The resultant amplitude of both components is found to increase with depth. While there is a small change in $V_x$ in AR 12192 several days before the X3.1 flare, the gradient is sharp in the case of AR 10486. In fact, the direction was reversed in AR 10486 a day before the flare around 6 Mm, producing a twist in the horizontal velocity, whose amplitude continued to increase with time. Interestingly, there was an increase in $V_x$ in AR 12192 at the time of the X3.1 flare, which then decreased significantly with a small twist near the surface despite producing another X2.0 flare. Furthermore, the calculated meridional components showed a different trend; although there was a small temporal variation in AR 10486, the maximum values in AR 12192 in deeper layers were achieved a day before the X3.1 flare, with a decreasing trend thereafter. Figures~\ref{box_R2a} and~\ref{box_R2b} illustrate the depth variation of the estimated total horizontal velocity at eight disk locations, where each panel represents an individual ring day. It is evident that there are strong twists in the flows for AR 10486 and that the pattern persists for the first five days. Previous studies show that this kind of twist is generally associated with strong flares. 
As indicated in Table~\ref{table4}, there were a series of X- and M-class flares from this AR before and after the major X17.2 flare on Day 4, which we find to be responsible for the observed twists over several days. Note that there was an X8.3 flare on Day 8 that started near the end of the time series used in the analysis but continued beyond the end time. We find a slight twist near the surface in this case too, but a complete flow pattern associated with this flare was not captured. Nevertheless, the twist appears in AR 12192 on Day 7 only after the X3.1 flare, despite the fact that there were other major flares before and after this event (see Table~\ref{table3}). Our analysis suggests that the twist disappeared thereafter, when the region was centered at longitude 52${\degr}$.5W. It is clear from Figure~\ref{mai_2ar} that the number of fitted modes depends on the location of the region on the solar disk, decreasing as the AR moves away from the disk center and making the inferences noisier with higher uncertainties. In addition, any magnetic field within the region and the duty cycle influence this number. Further, the outer boundary of reliable inferences depends on the spatial resolution of the input Dopplergrams, and the GONG resolution puts this limit at around 45$^{\degr}$ in either direction \citep{Jain13a}. However, we have plotted flows at 52${\degr}$.5W to understand their behavior after large flares; there we find that the twist has disappeared, and the values are expected to be biased by moderately large uncertainties. In the next CRs, as evident from Figure~\ref{flow_2003}, there was a significant decrease in the flow amplitude in both ARs. \section{The plausible scenario} It is believed that the magnetic fields are generated in a thin layer at the base of the convection zone, known as the tachocline, hence the subsurface properties may provide useful information on their evolution. 
The techniques of local helioseismology are widely used to obtain vital information from the convection zone beneath quiet as well as active regions. In the last couple of decades, with the availability of high-resolution continuous observations at high cadence, it has become possible to map plasma flow in the convection zone and derive quantities that are directly linked with the measurements on the surface \citep{Komm15}. Further, \citet[][and references therein]{Komm12} carried out several studies on a large sample of ARs to understand the overall behavior in their evolutionary and decaying phases. In addition, helioseismic studies describing subsurface characteristics beneath individual ARs are also available \citep{Zhao03, Komm08, Jain12, Zhao14, Jain15}. All these studies converge to a common conclusion that the magnetic regions show considerably higher subsurface velocities compared to the quiet regions and that the velocities increase or decrease with the strength of magnetic elements within the region. In this study, we find significantly larger velocities in ARs as compared to the quiet regions; however, the quantitative measures of these velocity fields depend on various factors that are described below. In this paper we studied two big ARs, NOAA 12192 and 10486, from two different solar cycles. There are many similarities as well as differences in the morphology of these regions. Both were large in size at the time of their appearance on the east limb and maintained a complex magnetic configuration, $\beta\gamma\delta$, throughout their disk passages. Despite being larger in size and also having a stronger magnetic field than AR 10486, AR 12192 had significantly lower sunspot counts. In contrast, AR 10486 had several rotating sunspots with high rotational velocity. These rotating sunspots are identified by their rotation around the umbral center or other sunspots within the same AR. 
The large positive-polarity sunspot in this region was reported to rotate uniformly at a rate of 2$\degr$.67 hr$^{-1}$ for about 46 hours prior to the major flare on 2003 October 28 \citep{Kazachenko10}. Studies show that the role of sunspot rotation in flare energetics is complex, and it is often argued that the rotation of the sunspot produces more energy and magnetic helicity than the non-rotating case, thus triggering large flares. \citet{Kazachenko10} suggested that the sunspot rotation in AR 10486 contributed significantly to the energy and helicity budgets of the whole AR. They further emphasized that the shearing motions alone stored sufficient energy and helicity in AR 10486 to account for the flare energetics and interplanetary coronal mass ejection helicity content within their observational uncertainties. In our previous study, we investigated the subsurface flows in AR 11158, which had several rotating sunspots and was the source region of the first X-class flare with a halo CME in cycle 24 \citep{Jain15}. The characteristics of subsurface flows in AR 11158 are similar to those in AR 10486 reported in this paper, but the amplitude is significantly lower. A series of CMEs erupted from both AR 11158 \citep{Kay17} and AR 10486 \citep{Gopalswamy05}. A closer comparison of flows in AR 10486 (Figure~\ref{flow_2003}) and AR 11158 (Figure 8 of \citet{Jain15}) clearly shows that the meridional flows are more affected by the CME eruption, while rotating sunspots play a critical role in defining the zonal flows. Note that the first CMEs reported from ARs 10486 and 11158 were on 2003 Oct 18 and 2011 Feb 13, respectively. Thus, we obtain similar flow patterns in these two ARs. However, the flows in AR 12192 display different characteristics, which we interpret in terms of its CME-poor nature. 
Further, by exploiting the high spatial resolution of HMI, we also studied flows in three sub-regions within AR 11158 and found that the leading- and trailing-polarity regions move faster than the mixed-polarity region. Since the spatial resolution of GONG Dopplergrams does not provide sufficient modes near the surface to obtain reliable inferences in individual polarity regions, our present study is confined to the investigation of subsurface flows in ARs as a whole. We show, in Figure~\ref{energy_R2}, the depth variation of the flow kinetic energy in AR 12192 and AR 10486 for eight consecutive days when several large flares were triggered. The regions corresponding to major flares are also marked in the figure. Here, the energy density values are calculated using the density profile in the solar interior from model ``S'' of \citet{JCD96_modelS}. It can be seen that the energy density increases exponentially with depth, mainly due to the rapidly increasing density. Note that these are the resultant energy density values, which are elevated from the quiet-region values due to the presence of high magnetic field. We notice a significant variation in energy density with time in the case of AR 12192 (upper panel), while no significant variation is seen in AR 10486 except on Day 1. It increases gradually in AR 12192 for the first 5 days although there were two large flares (M8.7 and X1.6) on Day 4. It appears that the energy released by these flares was not sufficient to disrupt the flow, which eventually decreased significantly with the eruption of the X3.1 flare on Day 6. After this event, at a depth of about 13 Mm, almost 50\% of the energy was dissipated; it further decreased from Day 6 to Day 7 and increased from Day 7 to Day 8 with the eruption of several big flares. In all cases, this change in kinetic energy was relatively small in the upper 4 Mm. 
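The flow kinetic energy density follows the usual $\frac{1}{2}\rho(V_x^2+V_y^2)$ form, with $\rho$ taken from an interior model such as Model S. A minimal sketch (the density values below are made up for illustration, not Model S values):

```python
def kinetic_energy_density(rho, vx, vy):
    """Flow kinetic energy density 0.5 * rho * (Vx^2 + Vy^2).
    With rho in kg/m^3 and velocities in m/s, the result is in J/m^3."""
    return 0.5 * rho * (vx ** 2 + vy ** 2)

# Illustrative (hypothetical) densities: density rises steeply with depth,
# so comparable velocities carry far more energy in deeper layers.
rho_shallow, rho_deep = 4e-4, 3e-3          # kg/m^3, made-up values
e_shallow = kinetic_energy_density(rho_shallow, 100.0, 50.0)
e_deep = kinetic_energy_density(rho_deep, 100.0, 50.0)
```

This also makes clear why the energy density in the figure grows with depth even when the velocity amplitudes do not: the density factor dominates.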
However, below this depth, there were significant changes in flow energy -- from 400\% around 5.8 Mm to 650\% around 7.2 Mm -- and this variation gradually decreased in much deeper layers. In contrast to these results, the energy variation was minimal in AR 10486, which produced several major flares on four consecutive days, including an X1.2 on Day 2, an M6.7 on Day 3, an X17.2 on Day 4 and an X10.0 on Day 5. We believe that all these flares must have lowered the kinetic energy; however, there were some other processes that rapidly supplied energy to AR 10486. Thus, the small variation in flow energy with time in AR 10486 provided favorable conditions for more energetic eruptions, e.g. CMEs in this case. In the next rotation, as shown in Figure~\ref{energy_R3}, the flow energy decreased substantially, almost by a factor of 4, in both regions. Further, the flow energy in AR 10508 (10486) is significantly higher than in AR 12209 (12192) in deeper layers, while it is comparable in both regions above 4 Mm. Our study suggests that the additional factors, e.g., the sunspot rotation combined with the re-organization of magnetic field in AR 10486, were not sufficient to decrease the flow energy even after several large flares and the triggered CMEs. Furthermore, in the absence of sunspot rotation in AR 12192, the re-organization of magnetic field contributed significantly to the substantial release of flow energy after the X3.1 flare. \section{Summary} AR 12192 is the biggest AR observed in solar cycle 24 to date. It appeared on the east limb on 2014 October 18 and grew rapidly into the largest such region since 1990. Composite images of helioseismic farside maps and the direct frontside observations, in conjunction with STEREO and SDO/AIA EUV observations, clearly show that it was a long-lived AR that survived at least four CRs and exhibited several unusual phenomena. 
Over the period of four rotations, it had the most complex magnetic configuration, $\beta\gamma\delta$, during the second rotation, i.e. in CR 2156, with a very strong magnetic field and several large flares without any associated CMEs. We measured the horizontal subsurface flows in AR 12192 in multiple rotations by applying the ring-diagram technique to the GONG+ Dopplergrams. Our analysis suggests that it had unusually large horizontal flow amplitudes before the X3.1 flare on 2014 October 24, which were comparable to those in AR 10486 during the Halloween solar events in 2003. Both regions were located around the same latitude in the southern hemisphere and produced several high M- and X-class flares but had different CME productivity. In order to compare the horizontal flows from both active regions, the position of the ARs in the tracked cubes is crucial. If one region is off-centered from the other, the inferences of mode parameters and the flows are biased by the area covered. Further, quiet areas within the tile also modify the results, as the inferred quantities are averaged over the entire tile. To reduce the bias from these factors, both regions are tracked at the Snodgrass rotation rate, which allows us to cover the same area in tracking, ensuring that the ARs stay at the same location in the tracked cube. In the case of ARs 12192 and 10486, even if one AR is slightly off-centered from the other, both cover significantly large areas of the tiles and we believe that our results are little affected by the positioning of these ARs within the tiles. Our analysis further suggests that the flow directions in these ARs were distinctly different; while the meridional flow in AR 12192 was poleward, supporting flux transport to the poles, it was equatorward in AR 10486. Furthermore, there was a sudden increase in the magnitude of the estimated zonal flow in shallow layers in AR 12192 during the X3.1 flare; however, it reversed direction in AR 10486 with the X17.2 flare. 
These different flow patterns produced strong twists in the horizontal velocity with depth, which persisted in AR 10486 throughout the disk passage, as opposed to AR 12192, which produced a twist only after the eruption of the X3.1 flare that disappeared soon after. Over the period of eight ring days, we find different flow energy patterns in these regions; there was significant variation in flow energy with time in the case of AR 12192, while no significant variation is seen in AR 10486. It increased gradually in AR 12192 for the first four days until the X3.1 flare was produced and then declined sharply. We conclude that the sunspot rotation combined with the re-organization of magnetic field in AR 10486 was not sufficient to decrease the flow energy even after triggering several large flares that might have been responsible for CMEs. Furthermore, in the absence of rotating sunspots in AR 12192, the re-organization of magnetic field contributed significantly to the substantial release of kinetic energy after the X3.1 flare. In future studies, we plan to identify flaring regions with and without rotating sunspots in both CME-rich and CME-poor categories to obtain a comprehensive picture of the role of subsurface flows in energetic eruptions. A study based on a large number of datasets may be crucial for providing subsurface thresholds for extreme space weather events. It would also be interesting to investigate the subsurface characteristics of unique regions that produce CMEs not associated with flares. \acknowledgements We thank the anonymous referee for useful suggestions. This work utilizes GONG data obtained by the NSO Integrated Synoptic Program (NISP), managed by the National Solar Observatory, the Association of Universities for Research in Astronomy (AURA), Inc. under a cooperative agreement with the National Science Foundation. 
The data were acquired by instruments operated by the Big Bear Solar Observatory, High Altitude Observatory, Learmonth Solar Observatory, Udaipur Solar Observatory, Instituto de Astrof\'{\i}sica de Canarias, and Cerro Tololo Interamerican Observatory. The ring-diagram analysis was carried out using the NSO/GONG ring-diagram pipeline. The STEREO/SECCHI data used here are produced by an international consortium of the Naval Research Laboratory (USA), Lockheed Martin Solar and Astrophysics Lab (USA), NASA Goddard Space Flight Center (USA), Rutherford Appleton Laboratory (UK), University of Birmingham (UK), Max-Planck-Institut f\"ur Sonnensystemforschung (Germany), Centre Spatiale de Liege (Belgium), Institut d'Optique Theorique et Applique (France), and Institut d'Astrophysique Spatiale (France). SDO data courtesy of SDO (NASA) and the HMI and AIA consortium. The farside HMI maps were provided by NASA through the Joint Science Operations Center for the SDO project at Stanford University.
\section{Introduction} \label{sec:intro} With the already high and most likely increasing demand, chest radiography is today the most common examination type in radiology departments \cite{CQC2018}. As reported by \cite{beardmore2016radiography}, the average report turnaround time for plain X-ray is about 34 hours, while $74\%$ have a turnaround time of less than $24$ hours. In the case of critical findings such as \textit{pneumothorax} or \textit{pleural effusion}, the integration of automated detection systems into the clinical work-flow could have a substantial impact on the quality of care. Recent developments in pathology classification focused mainly on specific aspects of Deep Learning (e.g. in terms of novel network architectures). Early on, Shin \textit{et al.} \cite{Shin2016} demonstrated that a Convolutional Neural Network (CNN) combined with a recurrent part can be applied for image captioning in chest X-rays. The increased availability of annotated chest X-ray datasets like ChestX-ray14 \cite{Wang2017} helped to accelerate the progress in the field of pathology classification, detection and localization. In this rapidly evolving field, Li \textit{et al.} \cite{Li2018} presented a unified network architecture for pathology classification and localization, for which only limited annotation is needed for the localization. Cai \textit{et al.} \cite{Cai2018IterativeAM} proposed an attention mining method for disease localization which works without localization annotation. In the work of Wang \textit{et al.} \cite{wang2018tienet}, a classification and reporting method -- leveraging the radiologist report in addition to the image -- was presented. In this context, only very simple pre-processing steps have been employed. Motivated by prior work in the computer vision domain, these include predominantly intensity normalization as well as re-scaling of the image to the model input size. 
In contrast, over the last years, several methods have been developed for supporting radiologists in the diagnostic process. Two well-known techniques are bone suppression and lung field detection \cite{von2016novel, von2016decomposing}; the former artificially removes the rib cage, facilitating the detection of small pathologies, while the latter standardizes the viewing appearance. In multiple studies, the usefulness of such image processing methods for different diseases was shown \cite{Li2012}. An obvious question arises: do bone suppression and lung field detection have the same beneficial effect on disease classification with CNNs? Toward this end, we investigate how bone suppression and lung field detection can be exploited as pre-processing steps for a CNN. In a methodologically comparable way to \cite{Gordienko2018}, we apply pre-processing in three different scenarios: first, processing each image with bone suppression; secondly, cropping the images to the detected lung fields; and finally, combining both processing steps. Different to \cite{Gordienko2018}, however, we use lung field detection to crop the images to the important area, whereas Gordienko \textit{et al.} kept the image size equal and simply set regions not belonging to the lung fields to zero. We believe cropping can increase CNN performance as it increases the effective spatial resolution for the CNN. Furthermore, we propose a novel ensemble architecture to leverage the complementary information in the different images, similar to a radiologist's work-flow. Finally, in order to allow for a detailed assessment of the impact for specific pathologies, two expert radiologists annotated the public Indiana dataset (Open-I) with respect to eight findings.
\begin{figure}[!ht] \centering \subfloat["Normal"]{\includegraphics[width=0.24\textwidth]{./fig/25_IM-1024-2001.png}}\hfill \subfloat["Bone suppressed" \label{fig:img-BS}]{\includegraphics[width=0.24\textwidth]{./fig/25_IM-1024-2001-1_boneSup.png}}\hfill \subfloat["Lung field cropped" \label{fig:img-Crop}]{\includegraphics[width=0.24\textwidth]{./fig/25_IM-1024-2001-2_lungCrop.png}}\hfill \subfloat["Combination" \label{fig:img-combinied}]{\includegraphics[width=0.24\textwidth]{./fig/25_IM-1024-2001-3_boneSup-LungCrop.png}} \caption{One example image (a) of the Indiana chest X-ray dataset from Open-I. The dataset consists of 3125 frontal and lateral images from 3125 patients. We annotated all images with up to eight findings. The pre-processed images are shown in (b)-(d): (b) is the bone-suppressed image, (c) is cropped to the lung fields, and (d) illustrates the combination of (b) and (c).} \label{fig:indiana} \end{figure} \section{Methods} \label{sec:methods} Following the method and training setup in \cite{baltruschat2018comparison}, we pre-trained a ResNet-50 architecture with a larger input size of $448\times448$ on ChestX-ray14. Compared to other network architectures and training strategies, this model achieved the highest average AUC value in our previous experiments. Due to the focus on eight specific pathologies, we replaced the last dense layer of the converged model with a new dense layer having eight outputs and a sigmoid activation function. Furthermore, we applied a fine-tuning step in order to adapt the model to the new image domain. \subsection{Bone Suppression} \label{sec:boneSup} In the original Indiana images, we suppress the bones (ribs and clavicles) using a method from \cite{von2016novel, von2016decomposing}. The method preserves the remaining details originally overlaid by the bones (see Fig. \ref{fig:img-BS}).
In the reported reader study, the AUC for the detection and localization of lung nodules increased for experienced human readers when using bone suppression images. Machine learning may potentially also benefit from suppressing some normal anatomy, which is tested here. \subsection{Lung Fields} \label{sec:lungField} Lung fields are segmented using a foveal CNN as described in \cite{brosch2018foveal}. It is trained on semi-automatically annotated lung fields and applied to the Indiana images. After the initial lung field detection, we apply post-processing steps to determine the final crop area. First, we identify all connected regions and compute a bounding box around the two largest regions. Thereafter, we add a small border of $100$ pixels to the top/bottom and to the left/right. Each image is cropped to its individual bounding box as a pre-processing step (see Fig. \ref{fig:img-Crop}). Lung field cropping has two beneficial aspects: first, it reduces the amount of information lost due to down-scaling, and secondly, it is a geometric image normalization. We also consider a combination of both -- bone suppression and lung field cropping (Fig. \ref{fig:img-combinied}). \subsection{Ensemble} \label{sec:ensamble} In many applications, combining different predictors can lead to improved classification results, which is known as \textit{ensemble} forming \cite{Hansen1990, krogh1995neural}. Ensembling can be done in several ways and with any number of predictors. To determine whether the combination of several predictors could improve results, the Pearson correlation coefficient can be used: ensembling predictors with a high correlation coefficient will likely improve results less than ensembling predictors with lower correlation. Methods for ensemble generation include averaging and majority voting as well as machine learning algorithms like Support Vector Machines (SVMs).
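The cropping step described above (two largest connected regions, a $100$-pixel border, then a bounding-box crop) can be sketched as follows; the function name and the binary-mask representation are our own illustration, not part of the pipeline of \cite{brosch2018foveal}.

```python
from collections import deque

def crop_to_lung_fields(mask, border=100):
    """Crop box (top, bottom, left, right) around the two largest
    4-connected regions of a binary lung-field mask (list of 0/1 rows),
    padded by `border` pixels and clipped to the image extent."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    regions = []
    for r in range(h):
        for c in range(w):
            if mask[r][c] and not seen[r][c]:
                queue, comp = deque([(r, c)]), []
                seen[r][c] = True
                while queue:  # BFS flood fill of one connected component
                    y, x = queue.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] \
                                and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                regions.append(comp)
    regions.sort(key=len, reverse=True)
    pts = [p for comp in regions[:2] for p in comp]  # two largest regions
    ys, xs = [p[0] for p in pts], [p[1] for p in pts]
    return (max(min(ys) - border, 0), min(max(ys) + border, h - 1),
            max(min(xs) - border, 0), min(max(xs) + border, w - 1))
```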
Since an ensemble approach will typically outperform an individual model, we do not only compare individual models (each trained with a specific pre-processing) to an ensemble: we also compare an ensemble of models trained on images without pre-processing to an ensemble of models trained on pre-processed images. In order to limit the complexity of the experimental setup, we focus on the averaging approach. \section{Indiana Dataset} \label{sec:data} The Indiana dataset from Open-I contains 3996 studies with DICOM images \cite{Demner-Fushman2015}. In a first step, we created a revised dataset by removing studies with no associated images or labels (i.e. the reference annotation). Next, studies that lacked either a frontal or a lateral acquisition were removed. The final dataset consists of 3125 studies. Two expert radiologists from our department reviewed all cases and diagnosed which findings are present using the frontal as well as the lateral acquisition. As shown in Table \ref{tab:dataset}, we selected eight different findings for annotation: pleural effusion, infiltrate, congestion, atelectasis, pneumothorax, cardiomegaly, mass, and foreign object. Inter-observer variability is common in chest X-rays. Thus, after an individual assessment of the images, all disagreements were discussed and a final consensus annotation was found. Table \ref{tab:dataset} shows the distribution of each finding. All classes except pneumothorax have more than 100 positive cases, whereas the class pneumothorax has only eleven positive cases. In our final evaluation, we report results for pneumothorax but will not discuss them because of the low number of positive cases. For an assessment of the generalization performance, we re-sampled 5 times from the entire Indiana dataset \cite{Molinaro2005}. Each time, we split the data into 70\% training and 30\% testing. We calculated the average loss over all re-samples to estimate the best point for generalization.
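The averaging ensemble and the correlation check used to judge whether ensembling is worthwhile can be written in a few lines; this is a minimal sketch with our own function names, not the paper's implementation.

```python
import math

def pearson(a, b):
    """Pearson correlation coefficient of two per-case score lists."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def average_ensemble(model_scores):
    """Average the per-case scores of several models
    (model_scores: one list of sigmoid outputs per model)."""
    return [sum(case) / len(case) for case in zip(*model_scores)]
```

Low pairwise `pearson` values between member models suggest the averaged ensemble can improve over its members.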
Finally, our results are calculated for each split on the test set and averaged afterwards. \begin{table} \centering \caption{Indiana dataset statistics overview for all eight findings.} \label{tab:dataset} \begin{tabular}{l c c c} \noalign{\smallskip} Finding & True & False & Prevalence [\%] \\ \noalign{\smallskip} \hline \noalign{\smallskip} pleural effusion & 147 & 2978 & 4.7\\ infiltrate & 152 & 2973 & 4.9\\ congestion & 170 & 2955 & 5.4\\ atelectasis & 212 & 2913 & 6.8\\ pneumothorax & 11 & 3114 & 0.4\\ cardiomegaly & 529 & 2596 & 16.9\\ mass & 447 & 2678 & 14.3\\ foreign object & 1121 & 2004 & 35.9\\ \end{tabular} \end{table} \section{Experiments and Results} \label{sec:empirical} \noindent \textbf{Implementation:} Following the experimental setup in \cite{baltruschat2018comparison}, we employed an adapted ResNet-50, which is tailored to the X-ray domain. After replacing the dense layer, the model was fine-tuned using the Indiana dataset. For training, we sample patches of various sizes from the image, with sizes between $80\%$ and $100\%$ of the image area. The patch aspect ratio is distributed evenly between $3:4$ and $4:3$. In addition, each image is randomly horizontally flipped and randomly rotated between $\pm7^{\circ}$. At testing, we resize images to $480 \times 480$ and use an averaged five-crop (i.e. center and all four corners) evaluation. In all experiments, we use ADAM \cite{Kingma2015} as optimizer with default parameters $\beta_1 = 0.9$ and $\beta_2 = 0.999$. The learning rate is set to $lr = 0.005$. During training, we reduce the learning rate by a factor of 2 when the validation loss does not improve. We use a batch size of 15 and binary cross-entropy as loss function. The models are implemented in CNTK and trained on GTX 1080Ti GPUs. We perform six different experiments based on our proposed image pre-processing (Sections \ref{sec:lungField} and \ref{sec:boneSup}). First, we train on normal images (i.e.
no pre-processing is employed), on bone-suppressed images, on lung-cropped images, and on images combining both pre-processing steps. Secondly, we build an ensemble of four normally trained models ("EN-normal") as a baseline ensemble. Finally, we use our models trained on pre-processed images to build the ensemble "EN-pre-processed". \begin{figure} \centering \subfloat["Normal trained"]{\includegraphics[width=0.33\textwidth]{./fig/rank-corr_test-prediction_normal.png}}\hfill \subfloat["Pre-processed trained"]{\includegraphics[width=0.33\textwidth]{./fig/rank-corr_test-prediction.png}}\hfill \caption{Pearson correlation coefficient results for normally trained models (a) and models trained with pre-processed images (b). The correlation between the normal models is higher than between the models trained with pre-processed images. This indicates that an ensemble of the models in (b) should result in a larger improvement.} \label{fig:rankCorr} \vspace{-0.5cm} \end{figure} \noindent \textbf{Results:} \begin{table*}[t] \centering \caption{AUC result overview for all our experiments. In this table, we present results averaged over all 5 splits and the calculated standard deviation (SD) for each finding. Furthermore, the average (AVG) AUC over all findings is shown. We trained our model with four different input images: first, normal images; secondly, "BS" means bone-suppressed images; thirdly, "Lung" means images cropped to the lung fields; fourthly, "BS+Lung" means bone-suppressed and cropped to the lung fields. In addition, we formed an ensemble of models trained on normal images ("EN-normal") and an ensemble of the models trained on pre-processed images ("EN-pre-processed"). Bold text emphasizes the overall highest AUC value.
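The averaged five-crop evaluation amounts to scoring the four corner crops and the centre crop and averaging the outputs. For the stated $480 \times 480$ resize and the $448 \times 448$ network input, the crop geometry is (a sketch, not the paper's code):

```python
def five_crop_boxes(img_size=480, crop=448):
    """(top, left, bottom, right) boxes of the four corner crops and the
    centre crop, for the averaged five-crop evaluation described above."""
    m = img_size - crop          # free margin
    c = m // 2                   # centre offset
    return [(0, 0, crop, crop),                      # top-left
            (0, m, crop, img_size),                  # top-right
            (m, 0, img_size, crop),                  # bottom-left
            (m, m, img_size, img_size),              # bottom-right
            (c, c, c + crop, c + crop)]              # centre
```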
$^\bigstar$We excluded pneumothorax because of the low positive count.} \label{tab:results} \begin{tabular}{l c c c c c c} \noalign{\smallskip} Finding & Normal & BS & Lung & BS+Lung & EN-normal & EN-pre-processed \\ \noalign{\smallskip} \hline \noalign{\smallskip} pleural effusion& $.951 \pm .008$& $.948 \pm .009$& $.955 \pm .007$& $.955 \pm .009$& $\mathbf{.960} \pm .004$& $.957 \pm .007$\\ infiltrate & $.936 \pm .012$& $.938 \pm .012$& $.939 \pm .007$& $.936 \pm .014$& $\mathbf{.944} \pm .010$& $.943 \pm .011$\\ congestion & $.937 \pm .013$& $.932 \pm .015$& $.941 \pm .014$& $.938 \pm .014$& $.941 \pm .012$& $\mathbf{.946} \pm .013$\\ atelectasis & $.905 \pm .020$& $.907 \pm .016$& $.917 \pm .017$& $.913 \pm .020$& $.905 \pm .020$& $\mathbf{.923} \pm .016$\\ cardiomegaly & $.952 \pm .006$& $.950 \pm .006$& $.953 \pm .005$& $.952 \pm .003$& $.955 \pm .004$& $\mathbf{.959} \pm .003$\\ mass & $.764 \pm .016$& $.766 \pm .016$& $.821 \pm .020$& $.840 \pm .011$& $.769 \pm .014$& $\mathbf{.837} \pm .014$\\ foreign object & $.795 \pm .015$& $.815 \pm .013$& $.808 \pm .013$& $.805 \pm .015$& $.811 \pm .018$& $\mathbf{.821} \pm .015$\\ pneumothorax & $.731 \pm .134$& $.789 \pm .104$& $.813 \pm .132$& $.794 \pm .128$& $.736 \pm .163$& $.792 \pm .142$\\ \noalign{\smallskip} \hline \noalign{\smallskip} AVG $^\bigstar$ & $.891 \pm .013$ & $.894 \pm .012$ & $.905 \pm .012$ & $.906 \pm .012$ & $.898 \pm .012$ & $\mathbf{.912} \pm .011$ \end{tabular} \end{table*} To compare our experiments to each other, we calculate the area under the ROC curve (AUC). The shown AUC results are averaged over all re-sampling and presented with standard deviation (SD). For our ensemble experiment, we calculate the Pearson correlation coefficient between each normal trained model and our pre-processed trained models. First, we look at our experiments with the different pre-processed images and the performance based on AUC. 
In all experiments, we note that five out of seven relevant classes have a high AUC of above $0.9$. Two of those five, \textit{pleural effusion} and \textit{cardiomegaly}, even have an AUC above $0.95$. Only the classes \textit{mass} and \textit{foreign object} have an AUC below $0.9$. Comparing the results of the model using bone suppression to the normally trained model, the AUC for \textit{foreign object} increased significantly from $.795 \pm .015$ to $.815 \pm .013$ with respect to the reported standard deviation (SD). The model trained with lung cropping has a higher AUC in all classes and often a reduced SD compared to the baseline, but only for the class \textit{mass} did the AUC increase significantly, from $.764 \pm .016$ to $.821 \pm .020$. We argue that the increased spatial resolution of lung-cropped images helps the model to better detect small masses. This is in line with the observation of our radiologists, who reported an increased number of small masses. Combining both pre-processing steps results in the highest AUC for \textit{mass}, increasing the AUC by $9.95\%$. We observe no significant changes for the other classes. Secondly, we build two ensembles: \textit{EN-normal} and \textit{EN-pre-processed}. EN-normal refers to our ensemble of four models trained on images without pre-processing, whereas EN-pre-processed is an ensemble of one normal, one BS, one lung-cropping, and one combined model. In Figure \ref{fig:rankCorr}, we report the Pearson correlation coefficients for the normal and pre-processed ensembles. As expected, the four normal models are already highly correlated (i.e. values around $0.96$), except for one model which seems to have converged to a different optimum. Comparing the Pearson correlation coefficients of the models trained on pre-processed images with those of the normally trained models, the coefficients are lower, only around $0.85$. This indicates that a pre-processed ensemble can have a higher impact on our results.
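The per-finding AUC values discussed above can be reproduced from raw sigmoid scores with the Mann-Whitney formulation of the AUC; this standalone sketch is equivalent to standard library implementations.

```python
def auc_score(labels, scores):
    """Area under the ROC curve via the Mann-Whitney statistic:
    the fraction of (positive, negative) pairs ranked correctly,
    with ties counting one half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```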
We verify our hypothesis with the AUC results in Table \ref{tab:results}. The pre-processed ensemble increases the AUC for \textit{mass}, \textit{foreign object}, and \textit{atelectasis} significantly with respect to the reported SD, whereas the normal ensemble does not. Overall, the pre-processed ensemble yields the best AUC results in five of seven classes. \section{Conclusion} \label{sec:discuss} In this contribution, we investigated the effect of two advanced pre-processing methods for multi-label disease classification on chest radiographs: bone suppression and lung field detection. In a systematic evaluation, we showed and discussed the superior performance of models trained on pre-processed images. The best performance was achieved by a novel ensemble architecture leveraging all the information from the different pre-processing methods. Significant AUC improvements for specific classes like \textit{foreign object} and \textit{mass} have been achieved, but further work is needed for a clinical application. Our future work will include a detailed investigation of clinical application scenarios and the integration of disease segmentation for multi-label classification. \printbibliography \end{document}
\section{Introduction} \label{sec:introduction} Uniformity is universally recognised across scientific domains and is used in a wide range of applications, since it is connected with several semantic data characteristics. Examples include, but are not limited to, (1) aggregating points in a multidimensional feature space (in which case uniformity suggests the lack of distinctive features), (2) concatenating uncorrelated Gaussian variables in a single vector (which may generate uniform points on a hypersphere through normalisation \cite{ABlum16}), and (3) tuning multiple hyperparameters during algorithm evaluation, in cases where the number of hyperparameters prohibits the brute-force testing of all parameter combinations (so as to avoid over-representation or under-representation of regions of the hyperparameter space). On the other hand, uniformity is considered ``common knowledge'' and is rarely cited. The underlying intuitive definition of uniformity is as follows: \emph{a pointset defined on a space $S$ is uniform iff it is the output of a stochastic process defined on $S$ in which every $p_i \in S$ has equal probability $P_i = c$ of being generated}. The main issue with this definition is that it imposes a large (often infinite) number of probability equalities, which are both theoretically and practically challenging to fully confirm without a priori knowledge of the stochastic process. As a result, most of the time uniformity is confirmed through \textit{reductio ad absurdum} reasoning: a set of reasonable non-uniform distributions is examined and disproven, thus implying uniformity as the only valid option. Perhaps the most typical approach is to aggregate all probability equalities into a small number of subset probability equations, based on the fact that only in a (spatial) uniform distribution does the probability ratio equal the size ratio, i.e.
$P_1/P_2 = |S_1|/|S_2|$, where $P_i, ~i=\{1,2\}$ is the probability of a point being generated in a subset $S_i, ~i=\{1,2\}$ of $S$, with corresponding size $|S_i|$. In particular, if $S$ is segmented into a family of non-overlapping equal-sized sets $S_i, \cup S_i = S$, the absolute frequencies of all $S_i$ are expected to be equal iff the point distribution is uniform. In practice, this approach suffers from three main shortcomings: (a) the optimal number of subsets as well as their boundaries is not trivial to estimate, especially in cases where unmodelled symmetry properties may cause a distribution to be erroneously identified as uniform; (b) segmenting sets becomes increasingly problematic in high-dimensional spaces due to the ``curse of dimensionality'' \cite{JLBentley75}; (c) the output of the assessment is a logical variable (``true'' or ``false''), while no quantitative evaluation is conducted. A more direct approach to examining uniformity is through the use of spherical harmonics \cite{TMMacRobert48}. Spherical harmonics are a complete set of orthogonal functions on the hypersphere that model both uniformity and symmetry. A spatial distribution on a hypersphere that is dominated by the spherical harmonic of degree $0$ (which corresponds to the uniform part of the distribution) may be declared uniform. However, despite their elegant and mathematically rigorous modelling of uniformity, the generalisation of spherical harmonics to higher dimensions greatly expands the number of spherical harmonics even of low degree. As a matter of fact, the number of spherical harmonics of degree $m$ in $N$ dimensions is $\frac{2m+N-2}{m}{N+m-3 \choose m-1}$ \cite{CRFrye14}. Hence, the number of spherical harmonics of degree $m$ is linear in $3$-dimensional space, quadratic in $4$-dimensional space, cubic in $5$-dimensional space, etc. This makes the use of spherical harmonics impractical even for small $N$ values.
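In one dimension, the equal-sized-subset frequency check described above reduces to a Pearson chi-squared statistic over equal-probability bins; the following sketch (the interval $[0,1)$ and the bin count are illustrative choices of ours) shows the idea and also its limitation that the output is a single accept/reject statistic.

```python
import random

def equal_bin_chi2(samples, n_bins):
    """Pearson chi-squared statistic over equal-probability bins of [0, 1):
    large values reject the hypothesis of equal subset frequencies."""
    counts = [0] * n_bins
    for s in samples:
        counts[min(int(s * n_bins), n_bins - 1)] += 1
    expected = len(samples) / n_bins
    return sum((c - expected) ** 2 / expected for c in counts)

rng = random.Random(42)
stat = equal_bin_chi2([rng.random() for _ in range(10000)], 20)
# under uniformity, stat approximately follows chi^2 with 19 d.o.f.
```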
The foundation of the present work is a novel uniformity definition, one that is equivalent to the ``classical'' one but can lead to additional tools for examining uniformity: \emph{a pointset defined on a space $S$ is uniform iff it is the output of a stochastic process in which the limit set of generated points includes an equal number of all $p_i \in S$}. In the above statement, the phrase ``limit set of generated points'' refers to a set that contains infinitely more points than $S$. The novelty of this definition is that it is based on the absolute frequency of the generated points and not on the probability, as the classical uniformity definition is. The two are obviously equivalent, because the limit at infinity of the absolute frequency is the probability. The main gain is that the new definition implies a connection between the uniform distribution and the chords connecting points of $S$. More specifically, since the limit uniform set includes an equal number of all points, the limit distribution of the point distances $||p_i-p_j||, p_i,p_j \in S$ is the distribution of the chord lengths of $S$. If $||.||$ is the metric of $S$, then its chord length distribution can be examined and formalised, thus modelling the distribution that the distances of uniform points follow. Subsequently, the similarity of a pointset's distance distribution to the theoretic chord length distribution can be used to qualitatively and quantitatively assess uniformity. This work presents such an analysis for the special case where $S$ is a hypersphere of dimension $N$ and $||.||$ is the Euclidean distance. This case is very useful from a practical point of view because it corresponds to points normalised to have a fixed Euclidean norm (usually equal to $1$), a data structure that finds extended applications in data science.
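One way to make the comparison of a pointset's distance distribution with a reference distribution quantitative is, e.g., a two-sample Kolmogorov-Smirnov statistic between the observed pairwise distances and those of a reference sample known to be uniform; this is our own illustrative sketch of the principle, not the specific measure developed later in this work.

```python
import math

def pairwise_distances(pts):
    """All M(M-1)/2 inner (Euclidean) distances of a pointset."""
    return [math.dist(p, q) for i, p in enumerate(pts) for q in pts[i + 1:]]

def ks_two_sample(a, b):
    """Two-sample Kolmogorov-Smirnov statistic (maximum gap between the
    two empirical cdfs), advancing tied values jointly."""
    a, b = sorted(a), sorted(b)
    i = j = d = 0
    while i < len(a) and j < len(b):
        x = min(a[i], b[j])
        while i < len(a) and a[i] == x:
            i += 1
        while j < len(b) and b[j] == x:
            j += 1
        d = max(d, abs(i / len(a) - j / len(b)))
    return d
```

A small statistic between `pairwise_distances` of the candidate set and of a reference uniform sample is consistent with uniformity; a large one rejects it.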
Apart from the novel uniformity definition, the main novelties of this work are: \begin{itemize} \item The closed-form expression and the basic properties of the hypersphere chord length distribution, and a corresponding analysis for the hyper-hemisphere chord length distribution \item The introduction of the basic principles of measuring uniformity using the hypersphere chord length distribution, including a preliminary experimental evaluation on both real and synthetic data \item The introduction of the basic principles of detecting uniform hyperspherical subsets in high-dimensional data, including a preliminary experimental evaluation \end{itemize} The rest of this work is structured as follows. The related work on estimating closed-form expressions of chord length distributions is summarised in Section \ref{sec:chord_length}, while the hypersphere and hyper-hemisphere chord length distributions are presented and thoroughly examined in Section \ref{sec:Chord_length_distributions}. The theoretic analysis of how these can be used to assess uniformity and to detect uniform subsets is conducted in Sections \ref{sec:point_uniform} and \ref{sec:higher_dimensions}, respectively, while the related experimental evaluation follows in Section \ref{sec:applications}. Section \ref{sec:conclusions} concludes this article. \section{Chord Length Distributions} \label{sec:chord_length} The study of chord length distributions is part of stochastic geometry, a domain that was historically a sparse set of intuitive mathematical puzzles (such as Buffon's clean tile and needle problems \cite{GBuffon77}, \cite{AMathai99}) and has recently significantly advanced both theoretically and practically \cite{SNChiu07}, the latter including applications in image analysis (e.g. \cite{CLacoste05}), computer vision (e.g. \cite{AJBaddeley93}), etc.
Within stochastic geometry, the chord length is defined as a random variable; more specifically, the random variable that is equal to the distance $||p_i-p_j||$ of two points $p_i$, $p_j$ randomly (i.e. uniformly) selected from a space $S$. The chord length distribution models this random variable and, as already mentioned, it is also the limit distribution of the inner distances of a uniformly selected set on $S$. Despite this association, not much work has been done in the direction of estimating closed-form expressions of chord length distributions. Currently, this challenging problem has found solutions in very specific cases, usually related to $2$-dimensional or $3$-dimensional spaces used in radiation research \cite{FGruy15}. Examples of shapes for which the chord length distribution is known are a regular polygon \cite{UBasel14}, a parallelogram \cite{SRen12}, a cube \cite{JPhilip07}, a hemisphere \cite{JBLangworthy89}, etc. The literature on closed-form expressions of chord length distributions in high-dimensional spaces is even more sparse, including the chord length distribution of points inside a hypersphere \cite{JMHammersley50} (which is different from the distribution on a hypersphere that is presented here) and in two adjacent unit squares \cite{VSAlagar76}, the chord length distribution of $N$-dimensional points whose coordinates follow a Gaussian distribution \cite{BThirey15}, and an analysis specifically of the average chord length in a compact convex subset of an $n$-dimensional Euclidean space \cite{BBurgstaller09}. Characteristic of the limited interest in high-dimensional chord length distributions is the fact that, while J. M.
Hammersley introduced the chord length distribution of points selected within a hypersphere in 1950, the corresponding chord length distribution for points selected on a hypersphere became available only in a preliminary self-printed version of this work more than $6$ decades later \cite{PSidiropoulos14ar}, based on the recently estimated closed-form expression of the surface of a hyperspherical cap as a fraction of the total hypersphere surface \cite{SLi11}. In the present version, the hypersphere chord length distribution estimation is repeated in a more compact presentation, augmented by the corresponding analysis for hyper-hemispheres. Moreover, the introduced distributions are not merely presented as mathematical achievements but are subsequently employed in a novel approach that both quantitatively and qualitatively assesses spatial uniformity. \section{Chord length distributions on the hypersphere} \label{sec:Chord_length_distributions} \subsection{Hypersphere chord length distribution} \label{sec:full_distribution} Let $p_i = \{p_{i1}~p_{i2}~p_{i3}~...~p_{iN}\}, i \in \{1,2,...M\}$ be $M$ points selected uniformly and independently from the surface of an $N$-dimensional hypersphere of radius $R$, i.e., $\forall ~ i \in \{1,2,...M\}, ~p_{i1}^2 + p_{i2}^2 +...p_{iN}^2=R^2$. The pairwise Euclidean distances $d(i,j), i,j \in \{1,2,...M\}, i \neq j$ of $p_i$, $p_j$ generate a set $d_k$ of distances ($k~=~M(M-1)/2$). The hypersphere (or N-sphere) chord length distribution $f_N(d)$ is the distribution of $d_k$ as $k$ (i.e. $M$) tends to infinity. If $N=2$, the N-sphere is a circle. The circle chord length distribution is a special case, for which both the pdf ($f_2(d)$) and the cdf ($F_2(d)$) can be found in the literature (e.g.
\cite{EWeisstein2}): \begin{equation} \label{eq:circle_pdf} f_2(d) = \frac{1}{\pi R}\frac{1}{\sqrt{1-\frac{d^2}{4R^2}}} \end{equation} \begin{equation} \label{eq:circle_cdf} F_2(d) = \frac{cos^{-1}(1-\frac{d^2}{2R^2})}{\pi} \end{equation} The estimation of the closed-form expressions for the pdf and the cdf in the general case (i.e. $f_N(d)$ and $F_N(d)$, $N \geq 2$, respectively) is assisted by the hypersphere homogeneity, i.e. the fact that the hypersphere (and its chord length distribution) is invariant to axis rotation. Therefore, the hypersphere chord length distribution can be estimated assuming that one chord end is fixed at $\{0,0,0...0,R\}$, while the other end determines the chord length. An additional consequence of the rotation invariance is that $f_N(d)$ ($F_N(d)$) is not only the asymptotic pdf (cdf) of $d_k$ but also the asymptotic pdf (cdf) of the distances $d(i,j), j \neq i$ from any fixed point in the point set $p_i$, i.e. when $M$ tends to infinity each row (and column) of the distance matrix $d(i,j)$ follows the $f_N(d)$ distribution. Assuming that one of the end points of the chord is at $p=\{0,0,0...0,R\}$, the chords of length $d$ lie on a $(N-1)$-sphere of radius $a=\sqrt{d^2-\frac{d^4}{4R^2}}$. This is derived by eliminating $p_{iN}$ from the N-sphere equation and the distance-from-$p$ equation ($p_{i1}^2 + p_{i2}^2 +...+(p_{iN}-R)^2=d^2$). The $(N-1)$-sphere is the intersection of the $N$-sphere with the hyperplane $L: p_{N}=R-\frac{d^2}{2R}$. Since $\frac{\partial p_{N}}{\partial d} \leq 0$, for all points $p'$ of the N-sphere with distance $D$ from $p$, $D < d$, $p'_{iN} > R-\frac{d^2}{2R}$, and for all points $p''$ of the N-sphere with distance $D$ from $p$, $D > d$, $p''_{iN} < R-\frac{d^2}{2R}$. Therefore, $L$ cuts the hypersphere into two parts, each defined by the comparison of the chord length with $d$.
A hyperspherical cap is, by definition, a hypersphere part cut off by a hyperplane; hence, the latter parts are hyperspherical caps, i.e. \begin{proposition} The locus of the $N$-sphere points that have distance $D$, $D \leq d$ from a point on it is a hyperspherical cap of radius $a=\sqrt{d^2-\frac{d^4}{4R^2}}$. \label{theorem1} \end{proposition} Proposition \ref{theorem1} implies that the cdf $F_N(d)$ is given as the ratio of a hyperspherical cap surface to the hypersphere surface. Before estimating $F_N(d)$, recall that for each N-sphere point $p'= \{p'_{i1}, ~p'_{i2}, ~p'_{i3},~...~, p'_{iN}\}$ with $d(p,p') \leq d$ there is a point $p''= \{-p'_{i1},~-p'_{i2},~-p'_{i3},~...~, -p'_{iN}\}$ for which $d(p,p'') \geq \sqrt{4R^2-d^2}$, and vice versa. As a result: \begin{equation} \label{eq:symmetry_sphere} F_N(\sqrt{4R^2-d^2}) = 1-F_N(d), d \leq \sqrt{2}R \end{equation} Due to Eq. (\ref{eq:symmetry_sphere}), only $F_N(d)$ for $d \leq \sqrt{2}R$ (i.e. corresponding to hyperspherical caps smaller than or equal to a hyper-hemisphere) is required. This part of the cdf is estimated using the surface $A^{cap}_N(R)$ of a hyperspherical cap that is smaller than a hyper-hemisphere \cite{SLi11}: \begin{equation} \label{eq:li_surface} A^{cap}_N(R) = \frac{1}{2}A_N(R)I_{sin^2\phi}(\frac{N-1}{2},\frac{1}{2}) \end{equation} In Eq. (\ref{eq:li_surface}), $N$ is the hypersphere dimension, $R$ its radius, $A_N(R)$ the hypersphere surface, $\phi$ the colatitude angle \cite{SLi11} and $I$ the regularised incomplete beta function \cite{EWeisstein4} given by \begin{equation} \label{eq:incomplete_beta} I_x(a,b) = \frac{B(x;a,b)}{B(a,b)} = \frac{\int_0^x t^{a-1} (1-t)^{b-1}dt}{\int_0^1 t^{a-1} (1-t)^{b-1}dt} \end{equation} In order to eliminate the colatitude angle from Eq. (\ref{eq:li_surface}), we use the fact that $h = (1-cos\phi)R$, where $h$ is the cap height. Since the maximum distance $d$, the cap radius $a$, and the cap height $h$ form a right triangle (Fig.
\ref{fig:chord_circle}), the height of the cap is $h = d^2/2R$. Therefore, $\phi = cos^{-1} (1-d^2/2R^2)$ and the cdf $F_N(d)$ is as follows: \begin{proposition} The cumulative distribution function of the $N$-sphere chord length, $F_N(d)$, is \begin{equation} \label{eq:hypersphere_cdf} \centering \begin{split} P(D\leq d) = F_N(d) = \frac{1}{2}I_{\frac{d^2}{R^2}-\frac{d^4}{4R^4}}(\frac{N-1}{2},\frac{1}{2}), d<\sqrt{2}R \\ P(D\leq d) = F_N(d) = 1 - \frac{1}{2}I_{\frac{d^2}{R^2}-\frac{d^4}{4R^4}}(\frac{N-1}{2},\frac{1}{2}), d \geq \sqrt{2}R \end{split} \end{equation} \label{prop:hypersphere_cdf} \end{proposition} \begin{figure}[htb] \centering \includegraphics[width=0.5\textwidth]{spherical_cap.png} \caption {A hyperspherical cap and the relation between the maximum distance $d$ from a point $P$, the hyperspherical cap height $h$ and its radius $a$.} \label{fig:chord_circle} \end{figure} The corresponding pdf $f_N(d)$ is: \begin{proposition} The probability density function of the $N$-sphere chord length, $f_N(d)$, is: \begin{equation} \label{eq:hypersphere_pdf} f_N(d) = \frac{d}{R^2B(\frac{N-1}{2},\frac{1}{2})}(\frac{d^2}{R^2}-\frac{d^4}{4R^4})^{\frac{N-3}{2}} \end{equation} \end{proposition} \subsubsection{Basic properties of the hypersphere chord length distribution} \label{sec:spotnvdd} Table \ref{tab:basic properties} summarises the chord length distributions of hyperspheres of dimension $2$ to $6$, while the cumulative distribution functions and probability density functions for $N = 2, 3, 4, 8, 16, 32$ are shown in Figs. \ref{fig:cdf_examples} and \ref{fig:pdf_examples}, respectively.
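The cdf $F_N(d)$ above can be evaluated numerically without special-function libraries: for $b=\frac{1}{2}$, the substitution $1-t=u^2$ turns the incomplete beta integral $B(x;a,\frac{1}{2})$ into $2\int_{\sqrt{1-x}}^{1}(1-u^2)^{a-1}du$, whose integrand is smooth for $a=\frac{N-1}{2}\geq 1$, i.e. $N \geq 3$ (for $N=2$, the closed form of Eq. (\ref{eq:circle_cdf}) applies). A sketch of ours, not an official implementation:

```python
import math

def _simpson(f, lo, hi, n=2000):
    """Composite Simpson rule on [lo, hi] (n must be even)."""
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(lo + k * h)
    return s * h / 3

def hypersphere_chord_cdf(N, d, R=1.0):
    """F_N(d) of the proposition above, valid for N >= 3 and 0 <= d <= 2R."""
    a = (N - 1) / 2.0
    x = d ** 2 / R ** 2 - d ** 4 / (4 * R ** 4)
    g = lambda u: (1.0 - u * u) ** (a - 1.0)  # smooth for a >= 1
    ratio = _simpson(g, math.sqrt(max(1.0 - x, 0.0)), 1.0) / _simpson(g, 0.0, 1.0)
    return 0.5 * ratio if d < math.sqrt(2.0) * R else 1.0 - 0.5 * ratio
```

For $N=3$ this reproduces $F_3(d)=d^2/4R^2$, and for $N=5$ the polynomial cdf of Table \ref{tab:basic properties}.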
\begin{table*}[htb] \footnotesize \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline N & pdf & cdf & Mean & Median & Variance \\ \hline 2 & $\frac{1}{\pi}\frac{1}{\sqrt{1-\frac{d^2}{2R^2}}}$ & $\frac{cos^{-1}(1-\frac{d^2}{2R^2})}{\pi}$ & $\frac{4}{\pi}R$ & $\sqrt{2}$R & $0.379R^2$ \\ \hline 3 & $\frac{d}{2R^2}$ & $\frac{d^2}{4R^2}$ & $\frac{4}{3}R$ & $\sqrt{2}$R & $0.222R^2$ \\ \hline 4 & $\frac{4d^2}{\pi R^3} \sqrt{1-\frac{d^2}{4R^2}}$ & $\frac{cos^{-1}(1-\frac{d^2}{2R^2})}{\pi}-\frac{2}{\pi}(1-\frac{d^2}{2R^2})\sqrt{1-\frac{d^2}{4R^2}}$ & $1.358R$ & $\sqrt{2}$R & $0.156R^2$ \\ \hline 5 & $\frac{3d^3}{4R^4}(1-\frac{d^2}{4R^2})$ & $\frac{3d^4}{16R^4}-\frac{3d^6}{96R^6}$ & $1.371R$ & $\sqrt{2}$R & $0.119R^2$ \\ \hline 6 & $\frac{8d}{3\pi R^2} (\frac{d^2}{R^2}-\frac{d^4}{4R^4})^{3/2}$ & $\frac{2sin^{-1}(\frac{d}{2R})}{\pi} - \frac{\sqrt{4R^2-d^2}(d^7-6R^2d^5+2R^4d^3+12R^6d)}{24\pi R^8} $ & $1.38R$ & $\sqrt{2}$R & $0.0956R^2$ \\ \hline \end{tabular} \end{center} \caption{Basic properties of the N-sphere chord length distribution for $N = 2, 3, 4, 5, 6$.} \label{tab:basic properties} \end{table*} \begin{figure}[htb] \begin{center} \begin{tabular}{cc} \includegraphics[width=0.2\textwidth]{cdf2.png} & \includegraphics[width=0.2\textwidth]{cdf3.png} \\ (a) & (b) \\ \includegraphics[width=0.2\textwidth]{cdf4.png} & \includegraphics[width=0.2\textwidth]{cdf8.png} \\ (c) & (d) \\ \includegraphics[width=0.2\textwidth]{cdf16.png} & \includegraphics[width=0.2\textwidth]{cdf32.png} \\ (e) & (f) \\ \end{tabular} \end{center} \caption {The cumulative distribution functions. 
(a) $F_2(d)$ (b) $F_3(d)$ (c) $F_4(d)$ (d) $F_8(d)$ (e) $F_{16}(d)$ (f) $F_{32}(d)$.} \label{fig:cdf_examples} \end{figure} The moments about the origin, $E(D^k)$, are estimated using the transform $d^2/4R^2 = u$, which leads to the following equation: \begin{equation} \label{eq:moments_equation} E(D^k) = \frac{2^{k+N-2}}{B(\frac{N-1}{2},\frac{1}{2})} B(\frac{k+N-1}{2},\frac{N-1}{2}) R^k \end{equation} Hence, for the mean $\mu$ the following holds: \begin{proposition} \label{proposition2} The mean value $\mu$ of the $N$-sphere chord length distribution is \begin{equation} \label{eq:moments_equation_3} \mu= \frac{\Gamma^2(\frac{N}{2})}{\Gamma(N-\frac{1}{2})\sqrt{\pi}}2^{N-1}R \end{equation} \end{proposition} On the other hand, $E(D^2)$ can be proven to be independent of the hypersphere dimension $N$. Indeed, Eq. (\ref{eq:moments_equation}) for $k=2$ becomes: \begin{equation} \label{eq:moments_equation_2} E(D^2) = 2^NR^2 \frac{B(\frac{N+1}{2},\frac{N-1}{2})}{B(\frac{N-1}{2},\frac{1}{2})} = 2^NR^2 \frac{\Gamma(\frac{N}{2})\Gamma(\frac{N+1}{2})}{\Gamma(\frac{1}{2})\Gamma(N)} \end{equation} where $\Gamma$ is the Gamma function. Using the following Gamma function property \cite{EWeisstein3}: \begin{equation} \label{eq:gamma_ratio} \frac{\Gamma(z)\Gamma(z+\frac{1}{2})}{\Gamma(\frac{1}{2})\Gamma(2z)} = 2^{1-2z} \end{equation} and substituting $z=\frac{N}{2}$ in Eq. (\ref{eq:moments_equation_2}), it follows that $E(D^2)=2R^2$. If a point distribution in space is considered as a ``spatial stochastic signal'', then $E(D^2)$ would correspond to the signal power. The independence of $E(D^2)$ from the hypersphere dimension signifies that the ``power'' of the uniform distribution on a hypersphere is constant over all hyperspheres of equal radius, independently of their dimension.
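Both Eq. (\ref{eq:moments_equation_3}) and the dimension-independence of $E(D^2)$ are easy to check numerically; the sketch below (unit radius assumed, helper names illustrative) compares the closed-form mean with the tabulated values, and estimates $E(D^2)$ by Monte Carlo:

```python
import math, random

def mean_chord(N, R=1.0):
    """Mean N-sphere chord length:
    2^{N-1} R Gamma(N/2)^2 / (Gamma(N - 1/2) sqrt(pi))."""
    return 2 ** (N - 1) * R * math.gamma(N / 2) ** 2 / (
        math.gamma(N - 0.5) * math.sqrt(math.pi))

# The closed form reproduces the tabulated means: 4R/pi (N=2) and 4R/3 (N=3).
assert abs(mean_chord(2) - 4 / math.pi) < 1e-12
assert abs(mean_chord(3) - 4 / 3) < 1e-12

# Monte Carlo: E[D^2] = 2 R^2 in every dimension (here R = 1).
random.seed(1)

def unit(N):
    v = [random.gauss(0, 1) for _ in range(N)]
    s = math.sqrt(sum(c * c for c in v))
    return [c / s for c in v]

for N in (2, 5, 16):
    m2 = sum(math.dist(unit(N), unit(N)) ** 2 for _ in range(20000)) / 20000
    assert abs(m2 - 2.0) < 0.05
```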
The variance $\sigma^2$ is straightforwardly estimated by $E(D)$ and $E(D^2)$: \begin{equation} \label{eq:variance} \begin{split} \sigma^2 = (2-\frac{\Gamma^4(\frac{N}{2})}{\pi \Gamma^2(N-\frac{1}{2})}2^{2N-2})R^2 \end{split} \end{equation} \begin{figure}[htb] \begin{center} \begin{tabular}{cc} \includegraphics[width=0.2\textwidth]{pdf2.png} & \includegraphics[width=0.2\textwidth]{pdf3.png} \\ (a) & (b) \\ \includegraphics[width=0.2\textwidth]{pdf4.png} & \includegraphics[width=0.2\textwidth]{pdf8.png} \\ (c) & (d) \\ \includegraphics[width=0.2\textwidth]{pdf16.png} & \includegraphics[width=0.2\textwidth]{pdf32.png} \\ (e) & (f) \\ \end{tabular} \end{center} \caption {The probability density functions. (a) $f_2(d)$ (b) $f_3(d)$ (c) $f_4(d)$ (d) $f_8(d)$ (e) $f_{16}(d)$ (f) $f_{32}(d)$.} \label{fig:pdf_examples} \end{figure} Apart from $E(D^2)$, the median is also independent of the dimension. By substituting $d=\sqrt{2}R$ into Eq. (\ref{eq:symmetry_sphere}) it follows that $F_N(\sqrt{2}R)=0.5, \forall N \geq 2$. $\sqrt{2}R$ is the distance between a ``pole'' and the ``equator''; this property is intuitively expected, since it follows from the fact that the two hyper-hemispheres contain an equal number of points. Finally, a secondary contribution of the hypersphere distribution is that it allows estimating a generic solution of the Bertrand problem \cite{APapoulis84}; here, $P_R$ denotes the probability that a random chord is no larger than the radius $R$. By substituting $d=R$ in Eq. (\ref{eq:hypersphere_cdf}) we get that: \begin{equation} \label{eq:bertrand} P_R = P(D\leq R) = \frac{1}{2}I_{\frac{3}{4}}(\frac{N-1}{2},\frac{1}{2}) \end{equation} $P_R$ is independent of the radius $R$ and rapidly decreasing with respect to the dimension $N$. $P_R$ for $N=2,3,4,5$ is $1/3$, $1/4$, $0.196$ and $0.156$, respectively.
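The quoted values of $P_R$ can be reproduced directly from Eq. (\ref{eq:bertrand}); the sketch below (illustrative, evaluating the regularised incomplete beta function by numerical integration of its trigonometric form) confirms them:

```python
import math

def P_R(N, steps=50000):
    """P(D <= R) = (1/2) I_{3/4}((N-1)/2, 1/2), evaluated through
    I = int_0^phi sin^{N-2} / int_0^{pi/2} sin^{N-2}, phi = asin(sqrt(3)/2)."""
    phi = math.asin(math.sqrt(0.75))   # x = 3/4  ->  phi = pi/3

    def integral(hi):
        h = hi / steps
        return h * sum(math.sin((k + 0.5) * h) ** (N - 2) for k in range(steps))

    return 0.5 * integral(phi) / integral(math.pi / 2)

# Reproduces the quoted values 1/3, 1/4, 0.196 and 0.156 for N = 2, 3, 4, 5.
assert abs(P_R(2) - 1 / 3) < 1e-5
assert abs(P_R(3) - 1 / 4) < 1e-5
assert abs(P_R(4) - 0.196) < 1e-3
assert abs(P_R(5) - 0.156) < 1e-3
```

For $N=2$ the value $1/3$ coincides with the classic "random endpoints" answer to Bertrand's problem, as expected from the construction.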
\subsection{Hyper-hemisphere chord length distribution} \label{sec:hemihyper_distribution} Apart from the chord length distribution of the whole hypersphere, it would be useful to estimate the corresponding distribution of hypersphere sectors, starting with the hyper-hemisphere. Without loss of generality it can be assumed that the hyper-hemisphere is the part of the hypersphere for which $p_{iN} \geq 0$. The ``pole'' or, formally speaking, the Chebyshev centre \cite{SBoyd04} of the hyper-hemisphere, i.e. the point whose maximum distance from the hyper-hemisphere is minimum, is the point $J(0,0,...,0,R)$. The existence of a unique Chebyshev centre (contrary to the hypersphere, for which every point has equal maximum distance) implies that the points of a hyper-hemisphere are not homogeneous. Therefore, when the number of points $M$ tends to infinity, the rows (and columns) of the distance matrix $d(i,j)$ will not follow the same $f_{NH}(d)$ distribution. However, the hyper-hemisphere is invariant to rotations around the $p_{N}$ axis, i.e. all points on the surface of the hyper-hemisphere with equal $p_{N}=c$ are produced by the rotation of the point $C(0,0,...,0,c',c)$ ($c'^2+c^2=R^2, c' \geq 0$) around the $p_{N}$ axis. Since point distance is invariant to rotation, $f_{NH}(d_p)=f_{NH}(d_{p'})$ if $p_N=p'_N$, where $d_p$ and $d_{p'}$ are the distances from points $p$ and $p'$, respectively, and $f_{NH}(d_p)$, $f_{NH}(d_{p'})$ are the respective chord length distributions. As a result, the probability that a hyper-hemispherical chord $D_H$ is smaller than $d$ ($d \leq \sqrt{2}R$) is: \begin{equation} \label{eq:hemisphere_generic} P(D_H \leq d) = F_{NH}(d) = \int_0^R P(p_N=c)F_{NH}(d_c) dc \end{equation} where $d_c$ is the distance from the point $C(0,0,...,0,c',c)$. A point in the hyper-hemisphere has $p_N \geq c$ iff it belongs to a hyperspherical cap centered in the hyper-hemisphere pole with colatitude angle $\phi = cos^{-1}((C\cdot J)/R^2)=cos^{-1}(c/R)$.
By Eq. (\ref{eq:li_surface}) it follows that: \begin{equation} \label{eq:firstpointhemi} P(p_N \geq c) = I_{sin^2\phi}(\frac{N-1}{2},\frac{1}{2}) = I_{1-c^2/R^2}(\frac{N-1}{2},\frac{1}{2}) \end{equation} and, finally, that: \begin{equation} \label{eq:firstpointhemi2} P(p_N=c) = \frac{2}{RB((N-1)/2,1/2)}(1-\frac{c^2}{R^2})^{\frac{N-3}{2}} \end{equation} On the other hand, a hyper-hemispherical chord with one end in $C$ has length less than or equal to $d$ iff its other end belongs to the corresponding hyperspherical cap of centre $C$ and maximum distance $d$. Therefore, $F_{NH}(d_c)=X_N(d,c) \frac{A^{cap}_N(C,d)}{A^H_N}$, where $X_N(d,c)$ is the percentage of the hyperspherical cap of centre $C$ that lies within the hyper-hemisphere, $A^{cap}_N(C,d)$ is the total surface of the hyperspherical cap and $A^H_N$ is the total surface of the hyper-hemisphere. Since $\frac{A^{cap}_N(C,d)}{A^H_N} = I_{\frac{d^2}{R^2}-\frac{d^4}{4R^4}}(\frac{N-1}{2},\frac{1}{2})$, Eq. (\ref{eq:hemisphere_generic}) becomes: \begin{equation} \label{eq:hemisphere_generic2} F_{NH}(d) = KI_{\frac{d^2}{R^2}-\frac{d^4}{4R^4}}(\frac{N-1}{2},\frac{1}{2}) \int_0^R (1-\frac{c^2}{R^2})^{\frac{N-3}{2}} X_N(d,c) dc \end{equation} where $K=2/RB((N-1)/2,1/2)$. Note that the hyperspherical cap of centre $C$, $A^{cap}(C,d)$, is a rotated version of a same-size hyperspherical cap having as centre the pole $J$, $A^{cap}(J,d)$. The rotation is on the plane that is defined by the centre of the sphere $O$, the pole $J$ and the chord end $C$, i.e. the plane defined by $p_{N-1}$ and $p_{N}$, and the rotation angle is the angle between $OJ$ and $OC$, which in this case is $\phi$. $X_N(d,c)$ is determined by the $p_N$ coordinate of $A^{cap}(C,d)$, which is determined by the $p_{N-1}$ and $p_{N}$ coordinates of $A^{cap}(J,d)$.
Even though this seems to be a $2$-dimensional geometrical problem, it is more complex, because $p_{N-1}$ and $p_{N}$ are correlated with the rest of the coordinates through the hypersphere equation. Still, $X_N(d,c)$ is the percentage of $A^{cap}(J,d)$ points for which $-sin(\phi)p_{i(N-1)}+cos(\phi)p_{i(N)} \geq 0$. A first remark is that if $p_{i(N-1)} \leq 0$ then $-sin(\phi)p_{i(N-1)}+cos(\phi)p_{i(N)} \geq 0$, because $0 \leq \phi \leq \pi/2$ and $p_{i(N)} \geq 0$. The inequality $p_{i(N-1)} \leq 0$ holds for half of the $A^{cap}(J,d)$ points because the $(N-1)$-coordinate of the pole $J$ is $0$ and $OJ$ is an axis of symmetry of $A^{cap}(J,d)$. Therefore, $X_N(d,c) \geq 1/2$. Moreover, the integral of $P(p_N=c)$ is $1$ because $P(p_N=c)$ is a pdf. Substituting into Eq. (\ref{eq:hemisphere_generic2}) we confirm the following intuitive proposition. \begin{proposition} \label{prop:hemicdflower} The hyper-hemisphere cdf is larger than the hypersphere cdf $\forall d \leq \sqrt{2}R$, i.e. $F_{NH}(d) \geq F_N(d) = \frac{1}{2} I_{\frac{d^2}{R^2}-\frac{d^4}{4R^4}}(\frac{N-1}{2},\frac{1}{2}), ~ \forall d \leq \sqrt{2}R$ \end{proposition} As a matter of fact, $X_N(d,c)$ equals $1$ if the rotation angle is sufficiently small. To estimate the range of $c$ for which $X_N(d,c)=1$, recall that the part of $A^{cap}(C,d)$ that lies within the hyper-hemisphere is the cut of the hypersphere with two hyperplanes, $L: p_{N}=0$ and $L': \frac{c'}{R} p_{N-1}+\frac{c}{R} p_{N} = R(1-\frac{d^2}{2R^2})$. The cap $A^{cap}(C,d)$ lies entirely within the hyper-hemisphere (i.e. $X_N(d,c)=1$) iff the hyperplane intersection happens outside the hypersphere. This implies that: \begin{equation} \label{eq:hemispherexn} X_N(d,c)=1 \iff d\sqrt{1-\frac{d^2}{4R^2}} \leq c \leq R \end{equation} The integral $ \int_{d\sqrt{1-\frac{d^2}{4R^2}}}^R (1-\frac{c^2}{R^2})^{\frac{N-3}{2}} dc$, by substituting $c^2/R^2=t$, becomes $\frac{R}{2}\int_{\frac{d^2}{R^2}-\frac{d^4}{4R^4}}^1 (1-t)^{(N-3)/2}t^{-1/2}dt$.
Therefore, \begin{equation} \label{eq:hemisphere_generic3} F_{NH}(d) = I_{\frac{d^2}{R^2}-\frac{d^4}{4R^4}}(\frac{N-1}{2},\frac{1}{2}) (1-I_{\frac{d^2}{R^2}-\frac{d^4}{4R^4}}(\frac{1}{2},\frac{N-1}{2})) + I_1 \end{equation} where $I_1$ is Eq. (\ref{eq:hemisphere_generic2}) with the upper integral limit changed according to Eq. (\ref{eq:hemispherexn}) to $d\sqrt{1-\frac{d^2}{4R^2}}$. The determination of $X_N(d,c)$ in the case that $X_N(d,c)<1$ (Fig. \ref{fig:semicircle_fig}) is a rather challenging problem, which however can be linked to a single surface ratio: \begin{equation} \label{eq:hemispherexn2} X_N(d,c)=\frac{1}{2} + \frac{A_\Omega}{A^{cap}(J,d)} \end{equation} where $A_\Omega$ is the area of the locus for which $-sin(\phi)p_{i(N-1)}+cos(\phi)p_{i(N)} \geq 0$, $p_{i(N-1)} \geq 0$ (Fig. \ref{fig:semicircle_fig}). \begin{figure}[htb] \centering \includegraphics[width=0.52\textwidth]{semicircle_fig.png} \caption {$X_N(d,c)$ as the ratio of the shaded area $\Omega$ in relation to the hyperspherical cap with maximum distance $d$.} \label{fig:semicircle_fig} \end{figure} The area $A_\Omega$ can be estimated using the intersection of two hyperspherical caps, which has been recently examined in detail \cite{YLee14}. Using the taxonomy of \cite{YLee14}, this corresponds to case No. $9$, i.e. with axis angle $\theta_v$ less than $\pi /2$, and the two hyperspherical cap angles $\theta_1 \in [0~ \pi/2)$ and $\theta_2=\pi /2$. According to \cite{YLee14}, the hyperspherical cap part $X'$ that does not intersect with the hyper-hemisphere, $X'=A^{cap}(J,d)/2 - A_\Omega$, is as follows: \begin{equation} \label{eq:hemispherexnnon} X' = \frac{\pi ^{\frac{N-1}{2}}}{\Gamma(\frac{N-1}{2})}R^{N-1}\int_{l_1}^{l_2} sin\phi ^{N-2}I_{1-c^2}((N-1)/2,1/2)d\phi \end{equation} where $l_1=sin^{-1}(c/R)$ and $l_2=cos^{-1}(1-d^2/2R^2)$. Estimating $X'$ through Eq. (\ref{eq:hemispherexnnon}) and substituting into Eq. (
\ref{eq:hemisphere_generic3}) gives the generic formula of the hyper-hemisphere chord length distribution. This is a rather challenging task and leads to complex and lengthy expressions even for small values of $N$. As an example, the hyper-hemisphere chord length cdf for $N=4$ is given: \begin{proposition} \label{propositionhemi1} If $N=4$, the probability $F_{NH}(d)$ that a hyper-hemisphere chord is less than or equal to $d$, $d \leq \sqrt{2}R$, is $F_{NH}(d) = P_1(d) - P_2(d) + P_3(d)$, where: \begin{equation} \label{eq:hemispherexnp1} P_1(d) = I_{\frac{d^2}{R^2}-\frac{d^4}{4R^4}}(\frac{N-1}{2},\frac{1}{2}) (1-\frac{1}{2}I_{\frac{d^2}{R^2}-\frac{d^4}{4R^4}}(\frac{1}{2},\frac{N-1}{2})) \end{equation} \begin{equation} \label{eq:hemispherexnp2} P_2(d) = \frac{2I_{\frac{d^2}{R^2}-\frac{d^4}{4R^4}}(\frac{N-1}{2},\frac{1}{2})[(1-\frac{d^2}{2R^2})^2-(1-\frac{d^2}{2R^2})^N]}{(N-2)\pi B(\frac{N-1}{2},\frac{1}{2})I _{\frac{d^2}{R^2}-\frac{d^4}{4R^4}}(\frac{3}{2},\frac{1}{2})} \end{equation} \begin{equation} \label{eq:hemispherexnp3} P_3(d) = \frac{2I_{\frac{d^2}{R^2}-\frac{d^4}{4R^4}}(\frac{N-1}{2},\frac{1}{2})\int_0^{asin(\sqrt{\frac{d^2}{R^2} - \frac{d^4}{4R^4}})} \theta cos\theta ^{N-2} d\theta}{\pi B(\frac{N-1}{2},\frac{1}{2})I _{\frac{d^2}{R^2}-\frac{d^4}{4R^4}}(\frac{3}{2},\frac{1}{2})} \end{equation} \end{proposition} Analogous closed-form expressions of the hyper-hemisphere chord length distribution can be estimated using Eqs. (\ref{eq:hemispherexnnon}) and (\ref{eq:hemisphere_generic3}) if required. The cdf estimation is completed for $d \geq \sqrt{2}R$ by revisiting the hyperspherical symmetry of Eq. (\ref{eq:symmetry_sphere}) and taking into account that $p'$ and $p''$ belong to different hyper-hemispheres. Therefore, for each pair of points that belong to the same hyper-hemisphere and have a distance $d$ there is a pair of points that belong to different hyper-hemispheres and have a distance $d' = \sqrt{4R^2-d^2}$, and vice versa.
This property defines an equation between the cdf of a hypersphere and the cdf of a hyper-hemisphere, which leads to the following property of the hyper-hemispherical cdf for $d \geq \sqrt{2}R$: \begin{proposition} \label{propositionhemi2} The probability $F_{NH}(d)$ that a hyper-hemisphere chord is less than or equal to $d$, $d \geq \sqrt{2}R$, is $F_{NH}(d) = 2F_N(d)+F_{NH}(\sqrt{4R^2-d^2})-1$, where $F_N$ is the hyperspherical cdf and the hyper-hemispherical cdf on the right-hand side is evaluated at $\sqrt{4R^2-d^2} \leq \sqrt{2}R$. \end{proposition} \subsubsection{Basic properties of the hyper-hemisphere chord length distribution} \label{sec:basice_properties_hyperhemi} Following the above analysis one can estimate both the cdf and the pdf of the hyper-hemisphere chord length distribution for any $N$. However, it is apparent that the compactness of the (whole-)hypersphere case, i.e. the hyperspherical chord length distribution, is lost, thus implying that estimating analytical formulas of the chord length distribution of widely used hyperspherical segments and/or sectors is expected to be a rather challenging task. Another difference between the hypersphere and the hyper-hemisphere distribution is the fact that for $d=\sqrt{2}R$ the cdf is no longer independent of the hypersphere dimension (let alone equal to $0.5$). For example, by substituting $d=\sqrt{2}R$ in the equations of proposition \ref{propositionhemi1} it follows that $P_1(\sqrt{2}R)=1/2$, $P_2(\sqrt{2}R) =0$, i.e. $F_{NH}(\sqrt{2}R) - F_{N}(\sqrt{2}R) = P_3(\sqrt{2}R)$. In this case, the divergence between the hyper-hemisphere and the hypersphere cdf value for $d=\sqrt{2}R$, $P_3(\sqrt{2}R)$, is: \begin{equation} \label{eq:hemisphere_diversqr2} P_3(\sqrt{2}R) = \frac{2 \int_0^{\frac{\pi}{2}} \theta cos\theta ^{N-2} d\theta}{\pi B(\frac{N-1}{2},\frac{1}{2})} \overset{(N=4)}{=} 1/4-1/\pi ^2 \end{equation} Finally, the independence of the second moment, $E(D^2)$, from the dimension $N$ does not hold for the hyper-hemisphere.
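Both the intermediate density of Eq. (\ref{eq:firstpointhemi2}) and Proposition \ref{propositionhemi2} lend themselves to a quick Monte Carlo sanity check. The sketch below (an illustration, not part of the derivation) uses $N=3$ and $R=1$, for which Eq. (\ref{eq:firstpointhemi2}) predicts a constant height density $1/R$ and the Table gives $F_3(d)=d^2/4$:

```python
import math, random, bisect

random.seed(3)

def hemi_point():
    """Uniform point on the hemisphere p_3 >= 0 of the unit sphere in R^3."""
    v = [random.gauss(0, 1) for _ in range(3)]
    s = math.sqrt(sum(c * c for c in v))
    p = [c / s for c in v]
    p[2] = abs(p[2])                      # fold onto the upper hemisphere
    return p

pts = [hemi_point() for _ in range(40000)]

# Eq. (firstpointhemi2) for N = 3: the height of a uniform hemisphere point
# is uniform on [0, 1] (Archimedes' hat-box theorem).
heights = sorted(p[2] for p in pts)
for q in (0.25, 0.5, 0.75):
    assert abs(heights[int(q * len(heights))] - q) < 0.015

# Proposition: F_NH(d) = 2 F_N(d) + F_NH(sqrt(4 - d^2)) - 1 for d >= sqrt(2),
# with F_3(d) = d^2 / 4.
dists = sorted(math.dist(pts[2 * i], pts[2 * i + 1]) for i in range(20000))

def F_NH(d):                              # empirical hemisphere chord cdf
    return bisect.bisect_right(dists, d) / len(dists)

d = 1.6
assert abs(F_NH(d) - (2 * d * d / 4 + F_NH(math.sqrt(4 - d * d)) - 1)) < 0.02
```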
However, the gradual decrease of the variance with the dimension is also apparent in the hyper-hemisphere case, as shown in Table \ref{tab:basic_properties_hemi}, which summarises the basic properties of the hyper-hemisphere chord length distributions for dimensions $3$ to $6$. \begin{table}[htb] \footnotesize \begin{center} \begin{tabular}{|c|c|c|c|} \hline N & Mean & Median & Variance \\ \hline 3 & $1.124R$ & $1.147R$ & $0.217R^2$ \\ \hline 4 & $1.218R$ & $1.249R$ & $0.157R^2$ \\ \hline 5 & $1.268R$ & $1.296R$ & $0.121R^2$ \\ \hline 6 & $1.299R$ & $1.322R$ & $0.0985R^2$ \\ \hline \end{tabular} \end{center} \caption{Basic properties of the hyper-hemisphere chord length distribution for dimension 3 to 6.} \label{tab:basic_properties_hemi} \end{table} \section{Hypersphere chord length distribution as a uniformity measure} \label{sec:point_uniform} As already mentioned, some of the most interesting properties of the hypersphere chord length distribution arise from the fact that this is the limit distribution of the distances of uniformly selected hypersphere points. To summarise, the hypersphere chord length distribution is the limit distribution of $3$ (related but distinct) distributions: \begin{enumerate} \item The distance distribution of $M$ point-pairs $||p_i-p'_i||, i=1,2,...,M$, if the $2M$ relevant points $p_i$ and $p'_i$ are independently selected from a uniform random distribution. \item The intra-distance distribution of a set of $M$ points $p_i, i=1,2,...,M$, if the $M$ relevant points are independently selected from a uniform random distribution, before the $M(M-1)/2$ pairwise distances $||p_i-p_j||, i,j=1,2,...,M, i \neq j$ are estimated. \item The distance distribution of a set of $M-1$ points $p_i, i=1,2,...,M-1$ from a fixed point $p_0$ if both $p_i$ and $p_0$ are selected from a uniform random distribution before the $M-1$ pairwise distances $||p_i-p_0||, i=1,2,...,M-1$ are estimated.
\end{enumerate} The second and the third distribution allow the hypersphere chord length distribution to be used as a uniformity measure, as described in the current section. More specifically, in order to quantify the ``uniformity'' of an input point distribution on an $N$-sphere, the $L_1$ distance is used. If the intra-distance distribution of the input point distribution is $g_N$ then the $L_1(g)$ uniformity measure is defined as follows: \begin{equation} \label{eq:hypesphere_uniform_distance} L_1(g) = \int_0^2 |g_N(x)-f_N(x)| ~dx \end{equation} where $f_N$ is the hypersphere chord length distribution (a unit radius, $R=1$, is assumed). Note that this uniformity measure can be used to quantify the uniformity of all $3$ types of point distance distributions that are described above. The reason for selecting $L_1$ is twofold: firstly, it satisfies the metric conditions, thus defining a metric space; secondly, it was experimentally found that the rate at which $L_1(g)$ converges to $0$ for a uniform distribution is $k^{-1/2}$, where $k$ is the number of point-pairs that are included in the $g_N$ distribution ($k=M$, $k=M(M-1)/2$ and $k=M-1$, respectively, for the $3$ examined types of point distance distributions). This allows the experimental computation of ``confidence intervals'' for $L_1(g)$ even when $M$ takes an impractically large value. Initially, uniform pointsets of size $M'$ ($M' \ll M$) are generated on the $N$-sphere and their $L_1$ values are sorted; the $\alpha\%$-largest $L_1$ value is then acquired and finally extrapolated to pointsets of size $M$. This value is the threshold with which the input distance distribution $L_1(g)$ is compared to determine whether it is uniform or not.
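To make Eq. (\ref{eq:hypesphere_uniform_distance}) concrete, the sketch below (illustrative helper names; a histogram approximates $g_N$, and a unit radius is assumed) computes $L_1(g)$ for a uniform sample and for a deliberately clustered one:

```python
import math, random

def f_N(N, d):
    """N-sphere chord pdf, Eq. (hypersphere_pdf), for R = 1."""
    x = d * d - d ** 4 / 4
    B = math.gamma((N - 1) / 2) * math.gamma(0.5) / math.gamma(N / 2)
    return d / B * x ** ((N - 3) / 2)

def L1_uniformity(points, bins=40):
    """L_1 distance between the empirical intra-distance pdf of `points`
    (approximated by a histogram) and the chord pdf, for a unit radius."""
    N = len(points[0])
    d = [math.dist(p, q) for i, p in enumerate(points) for q in points[i + 1:]]
    w = 2.0 / bins
    hist = [0] * bins
    for x in d:
        hist[min(int(x / w), bins - 1)] += 1
    return sum(abs(hist[b] / (len(d) * w) - f_N(N, (b + 0.5) * w)) * w
               for b in range(bins))

random.seed(4)

def unit(N):
    v = [random.gauss(0, 1) for _ in range(N)]
    s = math.sqrt(sum(c * c for c in v))
    return [c / s for c in v]

uniform_pts = [unit(3) for _ in range(400)]
clustered = [[abs(c) for c in unit(3)] for _ in range(400)]   # one octant only
assert L1_uniformity(uniform_pts) < L1_uniformity(clustered)
```

A clustered sample concentrates its pairwise distances in a narrow range, so its histogram departs strongly from $f_N$, whereas the uniform sample's $L_1$ shrinks as the number of pairs grows.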
Elaborating on this idea, and depending on the computational cost of iteratively estimating pairwise distances, $L_1$ can be used to qualitatively assess whether an $N$-dimensional point sample $S$ (consisting of $M$ points, and having an intra-distance distribution $g$) originates from a uniform $N$-sphere (or $N$-hemisphere) distribution following one of the three approaches below: \begin{itemize} \item If the dimension $N$ and sample size $M$ of $S$ imply a non-prohibitive computational cost, then $Q$ uniformly distributed point sets of size $M$ and dimension $N$ are randomly generated and $L_1$ is estimated for all of them (as well as for $S$). If the $\alpha\%$-largest $L_1$ value of $Q$ is smaller than $L_1(g)$ then $S$ can be declared as non-uniform with confidence $(100-\alpha)\%$. \item If the dimension $N$ and sample size $M$ of $S$ imply a prohibitive computational cost for estimating $L_1$ for $Q$ uniformly distributed point sets ($Q \gg 1$) but not for estimating $L_1(g)$, then the difference with the previous case is that the $\alpha\%$-largest $L_1$ value of $Q$ is estimated using sets of $M'$ $N$-dimensional points ($M' \ll M$) and extrapolating to $M$. \item If the computational cost needs to be further reduced, then a point of $S$ is fixed and the distribution of the distances from this point is estimated. This distribution is still expected to have the hypersphere chord length distribution as its limit distribution, while the associated computational complexity is linear (instead of quadratic). \end{itemize} The above tests are designed to identify non-uniform spatial distributions and thus cannot securely confirm uniformity. In practice, this is rarely expected to be of major importance because the uniformity-measurement objective is usually to assess whether the points span the hypersphere in a way that is compatible with the uniform distribution, not to mathematically confirm that they actually originate from a ``pure'' uniform hypersphere distribution.
For example, if an algorithm has a large number of hyperparameters and its exhaustive evaluation in the hyperparameter space is computationally expensive, a straightforward approach would be to sub-sample the hyperparameter space ``uniformly'', selecting a small number of points (i.e. hyperparameter combinations). In this case, uniformity of the set of hyperparameter-points is not a strict theoretic requirement but a rather loose condition ensuring that the evaluation does not omit a large neighbourhood of the hyperparameter space. The above tests would be sufficient to assess whether the hyperparameter-points were ``uniformly'' selected or not. Apart from the qualitative evaluation, $L_1$ can be used to generate a quantitative uniformity measure, specifically, the size of the maximum uniform subset $M_u, M_u \leq M$ of an $N$-sphere pointset $S$ ($|S|=M$). The presentation starts by noting that a pointset $S$ can be considered as a mixture of a uniform subset $S_u$ ($|S_u|=M_u$) and a non-uniform subset $S_c$ ($|S_c|=M_c = M-M_u$). The intra-distance distribution $g_N(S)$ is a weighted average of $3$ distance distributions: (a) the (intra-)distance distribution $g_u$ of pairs selected from $S_u$, (b) the (inter-)distance distribution $g_{uc}$ of pairs in which one point is selected from $S_u$ and one point is selected from $S_c$ and (c) the (intra-)distance distribution $g_c$ of pairs selected from $S_c$, the respective weights being $W_a = M_u(M_u-1)/(2k)$, $W_b = M_uM_c/k$ and $W_c = M_c(M_c-1)/(2k)$, where $k=M(M-1)/2$ (note that $W_a+W_b+W_c=1$). Since the points of $S_u$ are selected from a uniform distribution, $L_1(g_u)=0$ (assuming $M,M_u,M_c \rightarrow \infty$). Regarding the inter-distance distribution $L_1(g_{uc})$, this is also $0$ because $g_{uc}$ is a sum of $S_c$ distance distributions in which one point of the pair is fixed on the hypersphere while the second is uniformly selected, i.e.
$S_c$ (identical) hypersphere chord length distributions (as implied by the third distribution for which the hypersphere chord length distribution is the limit distribution). Therefore, $L_1(g) = W_cL_1(g_c)$. Since $L_1(g) \equiv L$ can be straightforwardly estimated from the pointset $S$, and $L_1(g_c) \leq 2$ (this follows from $\int_0^2 |g_c(x)-f_N(x)|dx \leq \int_0^2 g_c(x)dx + \int_0^2 f_N(x)dx = 2$), a lower limit for $W_c$ is $L/2$. Since $M_c$ and $M$ are infinite, $W_c = (M_c/M)^2$, i.e. $M_c/M \geq \sqrt{L/2}$. The ratio $M_c/M$ is the percentage of non-uniform points in $S$; therefore, what has been proven is the following: \begin{proposition} If a set of points $S$ on the $N$-sphere has an intra-distance distribution whose $L_1$ distance from the $N$-sphere chord length distribution is $L$, then the maximum percentage of uniform points in $S$ is $1-\sqrt{L/2}$. \label{maximumunfrmsubs} \end{proposition} Proposition \ref{maximumunfrmsubs} constrains the maximum size $M_u$ of a uniform subset of a pointset defined on a hypersphere. In practical applications $S$ is finite, hence there is an uncertainty in the $M_u$ upper limit because $g_u$ and $g_{uc}$ have not yet fully converged to the hypersphere chord length distribution, and $W_c$ is only approximately equal to $(M_c/M)^2$. In this case, the implied maximum size $M_u$ should be checked so as to ensure that the uncertainty does not significantly distort the upper limit. For example, if $M=5,000$, $L=0.5$ and $N=3$ then Proposition \ref{maximumunfrmsubs} gives $M_u=2,500$. By instantiating $1,000,000$ uniform pointsets of dimension $3$ and size $2,500$ it is estimated that the average divergence from the ($3$-)sphere chord length distribution is $0.0042$ and the $1\%$-largest divergence $0.0052$. Since $W_a \approx 1/4$, $L_1(g_u) \leq 0.0015$, i.e. negligible in comparison to $L$.
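The bound of Proposition \ref{maximumunfrmsubs} is straightforward to apply; the snippet below (illustrative naming) reproduces the worked example ($M=5{,}000$, $L=0.5$):

```python
import math

def max_uniform_fraction(L):
    """Upper bound 1 - sqrt(L/2) on the fraction of uniform points of an
    N-sphere pointset whose intra-distance distribution lies at L_1
    distance L from the chord length distribution."""
    return 1.0 - math.sqrt(L / 2.0)

# The worked example: M = 5000 points with L = 0.5 give M_u = 2500.
M, L = 5000, 0.5
assert round(M * max_uniform_fraction(L)) == 2500
# A perfectly matching distribution (L = 0) allows all points to be uniform.
assert max_uniform_fraction(0.0) == 1.0
```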
On the other hand, $L_1(g_{uc})$ is theoretically more difficult to eliminate because it depends on the averaging of $2,500$ uniform sample distributions, each one generated by $2,500$ distance samples, which is not straightforward to theoretically analyse because the distributions are not mutually independent. However, due to the fact that averaging exhibits a powerful noise reduction effect, it was experimentally found that usually $L_1(g_{uc}) \approx L_1(g_u)$. Since both $L_1(g_{uc})$ and $L_1(g_{u})$ are negligible, the $M_u$ estimation can be considered accurate enough. A safer estimation of $L_1(g_{uc})$ is a side-product of a novel algorithm that estimates the actual maximum uniform subset $S_u$ (Algorithm \ref{alg:uniformsubsetestim}). This algorithm is a Monte-Carlo voting scheme which generates uniform sets of size $M_u$ (independently of $S$) and projects them onto $S$. In this algorithm, the value of $M_u$ controls the size of the generated uniform sets. This kind of algorithm, which separates the uniform from the non-uniform subset of a set, can be very useful in several applications. For example, in clustering applications, points that could have been generated from a uniform distribution may be assumed not to belong to any (locally defined) class, hence identifying and discarding the maximum uniform subset will typically disambiguate the inter-class boundaries. \begin{algorithm}\small \begin{algorithmic} \item[Input:] A set $S$ of $M$ $N$-dimensional points defined on the N-sphere, with a distance distribution $g$, number of repetitions $E_m$, current repeat $E=1$, VoteVector(i)=0, $i=1,2,...M$. \item[1: ] Estimate $L_1(g)$ using Eq. (\ref{eq:hypesphere_uniform_distance}). \item[2: ] Estimate $M_u$ using Proposition \ref{maximumunfrmsubs}. \item[3: ] Randomly select a uniform $N$-dimensional set $S_E$ ($|S_E|=M_u$). \item[4: ] For all points in $S_E$ estimate their nearest neighbour in $S$, $i$, and assign VoteVector(i) = VoteVector(i)+1.
\item[5: ] If $E \geq E_m$ return the $M_u$ points with the largest VoteVector corresponding values, else set $E=E+1$ and go to Step 3. \end{algorithmic} \caption{Algorithm for the estimation of the maximum-size uniform subset of a pointset defined on a hypersphere.} \label{alg:uniformsubsetestim} \end{algorithm} Finally, it should be noted that in the case of hyper-hemispherical uniform data only the qualitative analysis conducted in this section stands. As a matter of fact, while the hyper-hemisphere chord length distribution is the limit distribution of a set of $M$ points independently and uniformly selected on a hyper-hemisphere and the convergence rate is still $k^{-1/2}$, Proposition \ref{maximumunfrmsubs} does not stand because, as explained in Section \ref{sec:hemihyper_distribution}, the points in a hyper-hemisphere are not homogeneous. The $L_1$ distance is still expected to indicate whether a sample originates from a uniform hyper-hemispherical distribution; however, no quantitative conclusions can be derived from the specific $L_1$ value using the techniques presented in this section. \section{Detecting uniform sets in higher dimensions} \label{sec:higher_dimensions} In the previous section we discussed how the hypersphere chord length distribution can be used to assess the uniformity of a point set defined on a hypersphere. In this section we discuss whether a uniform subset $S_u$ embedded in a (not necessarily uniform) set $S$ of higher dimension can be identified using the hypersphere chord length distribution. It will be demonstrated that this is practically possible because the distribution of distances from a point that belongs to $S_u$ will be a mixture of two distributions, one of which is the hypersphere chord length distribution.
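Before proceeding, a minimal sketch of Algorithm \ref{alg:uniformsubsetestim} is given below (illustrative and unoptimised: the nearest-neighbour search is brute force, and $M_u$ is assumed to have been obtained from Proposition \ref{maximumunfrmsubs}):

```python
import math, random

def uniform_subset(S, M_u, E_m=50, seed=0):
    """Monte Carlo voting sketch of the algorithm above: repeatedly draw
    uniform sets of size M_u on the N-sphere, vote for their nearest
    neighbours in S, and return the M_u most-voted points of S."""
    rng = random.Random(seed)
    N = len(S[0])
    votes = [0] * len(S)

    def unit():
        v = [rng.gauss(0, 1) for _ in range(N)]
        s = math.sqrt(sum(c * c for c in v))
        return [c / s for c in v]

    for _ in range(E_m):
        for _ in range(M_u):
            q = unit()
            i = min(range(len(S)), key=lambda j: math.dist(q, S[j]))
            votes[i] += 1
    ranked = sorted(range(len(S)), key=lambda j: -votes[j])
    return [S[j] for j in ranked[:M_u]]
```

The nearest-neighbour search is $O(M)$ per query; a k-d tree or ball tree would make the voting scheme practical for large $M$.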
Assume that a uniform subset $S_u$ with dimension $N_u$ and size $M_u$ is embedded into a set $S$ ($S \supset S_u$) of dimension $N$ ($N>N_u$) and size $M$ ($M > M_u$). If $p_i$ is a point in $S_u$, then the distribution $g_{iS}$ of the distances from $p_i$ is: \begin{equation} \label{eq:distance_distribution_embed} g_{iS}(d) = (M_u/M)f_{N_u}(d) + (1-M_u/M)g'_N(d) \end{equation} where $f_{N_u}(d)$ is the $N_u$-sphere chord length distribution and $g'_N(d)$ a generally unknown distance distribution. The $L_1(g)$ distance between $g_{iS}(d)$ and $f_{N_u}(d)$ is \begin{equation} \label{eq:distance_distribution_l1} L_1(g) = (1-M_u/M) \int_0^2 |g'_N(x)-f_{N_u}(x)| dx \end{equation} On the other hand, if $p_{i'} \notin S_u$ the $L_1$ distance is given by the same formula without the scaling factor $(1-M_u/M)$ and with a generally different $g''_N(x)$ function. If $(1-M_u/M)$ is small enough to cancel the difference between $\int_0^2 |g'_N(x)-f_{N_u}(x)| dx$ and $\int_0^2 |g''_N(x)-f_{N_u}(x)| dx$ then the uniform subset can be identified one point at a time using an information retrieval approach: the $M_u$ smallest $L_1$ distances of $g_{iS}$ ($p_i \in S$) from $f_{N_u}$ would correspond to the $M_u$ points originating from the uniform $N_u$-dimensional distribution. The aforementioned condition depends on two parameters: (a) the uniform subset relative size $M_u/M$, and (b) the ``resemblance'' of the distance distribution of the superset $S$ to the $N_u$-sphere chord length distribution. Regarding the first parameter, Eq. (\ref{eq:distance_distribution_l1}) confirms the intuitive assumption that the performance increases with the uniform-subset relative size. On the other hand, the second parameter is less intuitive and more difficult to decipher.
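The retrieval approach described above can be sketched as follows (an illustration under simplifying assumptions: a histogram approximates each $g_{iS}$, the subset dimension is fixed to $N_u=3$ so that $f_3(d)=d/2$ for a unit radius, and the superset mixes in points of a uniform $5$-dimensional sphere):

```python
import math, random

random.seed(5)

def unit(N):
    v = [random.gauss(0, 1) for _ in range(N)]
    s = math.sqrt(sum(c * c for c in v))
    return [c / s for c in v]

# S mixes a uniform 3-sphere (N_u = 3) embedded in R^5 with uniform points
# drawn from the full 5-sphere (N = 5); all radii are 1.
S = [unit(3) + [0.0, 0.0] for _ in range(150)] + [unit(5) for _ in range(150)]

def f3(d):                       # N_u-sphere chord pdf for N_u = 3, R = 1
    return d / 2.0

def L1_from_point(i, bins=20):
    """L_1 distance between the distance distribution g_iS of point i
    (approximated by a histogram) and f_{N_u}."""
    d = [math.dist(S[i], q) for j, q in enumerate(S) if j != i]
    w = 2.0 / bins
    hist = [0] * bins
    for x in d:
        hist[min(int(x / w), bins - 1)] += 1
    return sum(abs(h / (len(d) * w) - f3((b + 0.5) * w)) * w
               for b, h in enumerate(hist))

scores = [L1_from_point(i) for i in range(len(S))]
in_mean = sum(scores[:150]) / 150    # points of the embedded uniform subset
out_mean = sum(scores[150:]) / 150
assert in_mean < out_mean            # subset points resemble f_{N_u} more
```

Ranking the points of $S$ by this score and keeping the $M_u$ smallest implements the information retrieval scheme described in the text.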
In general, if the integral $\int_0^2 |g'_N(x)-f_{N_u}(x)| dx$ fluctuates between a value $\mu-\sigma$ and a value $\mu+\sigma$ (for different $p_i \in S$), then the detection of $S_u$ is facilitated by a large $\mu$ (and a small $\sigma$) value. Therefore, the performance increases when the distance distribution of $S$ is substantially different from the $N_u$-sphere chord length distribution. By examining how the $N$-sphere chord length distribution is modified with the dimension $N$ we gain more insight into this statement. By revisiting Proposition \ref{prop:hypersphere_cdf} it can be proven that $f_N(d)$ becomes progressively narrower and, as $N$ approaches infinity, $f_N(d)$ approaches $\delta(\sqrt{2}R)$, where $\delta(x)$ is the Dirac delta function \cite{APapoulis84}. Using the Beta function as a trigonometric integral \cite{EWeisstein4}: \begin{equation} \label{eq:beta_trigonometric} B(x,y) = 2 \int_0^{\pi/2} (sin\theta )^{2x-1}(cos\theta )^{2y-1} d\theta \end{equation} it follows that: \begin{equation} \label{eq:beta_trigonometric2} F_N(d) = \frac{1}{2}\frac{\int_0^{sin^{-1}\sqrt{d^2/R^2-d^4/4R^4}} (sin\theta )^{N-2} d\theta}{\int_0^{\pi/2} (sin\theta )^{N-2} d\theta}, d \leq \sqrt2R \end{equation} As $N \rightarrow \infty$, $(sin\theta )^{N-2}$ approaches $0$ for $\theta \neq \pi/2$ and $1$ for $\theta=\pi/2$. Therefore, as $N \rightarrow \infty$ the ratio in Eq. (\ref{eq:beta_trigonometric2}) tends to $0$ unless $sin^{-1}\sqrt{d^2/R^2-d^4/4R^4}=\pi/2$, i.e. unless $d=\sqrt2R$; hence $F_N(d) \rightarrow 0$ for $d<\sqrt2R$, while $F_N(\sqrt2R)=1/2$ for every $N$ and, by the symmetry of Eq. (\ref{eq:symmetry_sphere}), $F_N(d) \rightarrow 1$ for $d>\sqrt2R$. Additionally, the following recursive formula holds for incomplete beta functions \cite{EWeisstein4}: \begin{equation} \label{eq:incomplete_recursice} I_x(a+1,b) = I_x(a,b) - \frac{x^a(1-x)^b}{aB(a,b)} \end{equation} In the $N$-sphere chord length distribution case $a=(N-1)/2$, $b=1/2$ and $x=d^2/R^2-d^4/4R^4$, so that increasing $a$ by $1$ corresponds to increasing the dimension $N$ by $2$. Under these constraints (and since $0 \leq x \leq 1$), the second term of the right-hand side of Eq.
(\ref{eq:incomplete_recursice}) is always positive in the interval $d \leq \sqrt{2}R$, i.e., the cdf scores that correspond to a fixed $d$ value, $d \leq \sqrt{2}R$, reduce with $N$. The opposite is true in the interval $d \geq \sqrt{2}R$. Hence, as the dimension increases, the pdf of Eq. (\ref{eq:hypersphere_pdf}) becomes increasingly more concentrated around $\sqrt{2}R$ (Fig. \ref{fig:pdf_examples}). Finally, Eq. \ref{eq:incomplete_recursice} can be considered as the equivalent of a derivative with respect to the dimension (since $a$ is incremented by $1$, each ``derivative'' corresponds to a dimension step of $2$). The ratio between two consecutive ``derivatives'' is as follows: \begin{equation} \label{eq:second_derivative} \frac{F_{N+4}(d)-F_{N+2}(d)}{F_{N+2}(d)-F_{N}(d)} = \frac{N}{N+1}\left(\frac{d^2}{R^2}-\frac{d^4}{4R^4}\right) \leq 1, d \leq \sqrt2R \end{equation} As a result, the $L_1$ distance between two $N$-sphere chord length distributions for adjacent $N$ values is a decreasing function of the dimension $N$. To summarise, the $N$-sphere chord length pdf monotonically converges to the $\delta (\sqrt2R)$ function. Moreover, for all $d, d \neq \sqrt2R$, $f_N(d)$ is a decreasing and concave function of $N$. Based on these properties, it is possible to determine two special cases in which the introduced information retrieval scheme is expected to achieve a high detection rate: \begin{itemize} \item If $N_u \gg 1$ and the distance distribution of $S$ is not a narrow function around $\sqrt2R$. \item If $S$ is a uniformly distributed set of dimension $N$ ($N>N_u$) and either $N-N_u \gg 1$ or $N/N_u \gg 1$. \end{itemize} The discussion conducted in this Section is not valid for hyper-hemispherical uniform subsets. The lack of point homogeneity in hyper-hemispheres prohibits using the distribution of distances from a fixed point as a uniformity descriptor, thus invalidating the introduced information retrieval scheme. However, it can be proven that the hyper-hemisphere chord length distribution also converges to the $\delta (\sqrt2R)$ function.
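The progressive concentration of $f_N(d)$ around $\sqrt2R$ can also be checked empirically; in the small Python sketch below (unit radius; sample sizes and seeds are arbitrary choices of ours), the sampled chord lengths cluster around $\sqrt2 \approx 1.414$ with shrinking spread as $N$ grows:

```python
import math
import random

def uniform_sphere_point(n):
    """Uniform point on the unit n-sphere via Gaussian normalisation."""
    v = [random.gauss(0.0, 1.0) for _ in range(n)]
    s = math.sqrt(sum(x * x for x in v))
    return [x / s for x in v]

def chord_stats(n, m=250, seed=1):
    """Sample mean and standard deviation of the pairwise chord lengths
    between m uniformly drawn points on the unit n-sphere."""
    random.seed(seed)
    pts = [uniform_sphere_point(n) for _ in range(m)]
    d = [math.dist(p, q) for i, p in enumerate(pts) for q in pts[i + 1:]]
    mean = sum(d) / len(d)
    var = sum((x - mean) ** 2 for x in d) / len(d)
    return mean, math.sqrt(var)

means, stds = {}, {}
for n in (3, 10, 50, 200):
    means[n], stds[n] = chord_stats(n)
    print(n, round(means[n], 3), round(stds[n], 3))  # mean -> sqrt(2), std -> 0
```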
This property may possibly be exploited to develop hyper-hemisphere uniform subset detection techniques. Before closing the theoretic part of this work, it is worth briefly discussing a purely theoretic concept that is rarely examined: the hypersphere of infinite dimension. The chord length distribution in this case implies that the probability of two points having distance $\sqrt2R$ is $1$. Taking into account that in ($3$-dimensional) spheres $\sqrt2R$ is the distance between a pole and the equator, if a point of the infinite-dimensional hypersphere is arbitrarily selected as a pole, then ``almost all'' (meaning infinitely more than not) other points lie on the equator. Therefore, we reach the counterintuitive conclusion that, in an infinite-dimensional space, a sphere and its equator represent (almost) identical concepts. \section{Application demonstrations and experimental evaluation} \label{sec:applications} In this section some examples of the potential use of the hypersphere chord length distribution as a uniformity measure are given. The employed algorithms were designed to be as simple as possible, involving no more than the basic concepts discussed in the theoretic part of this article. The reason for this design principle was twofold: firstly, the scope of this work is to validate the hypersphere chord length distribution as a uniformity measure that can find a broad range of applications, not to present an optimised and complex algorithm developed to tackle a specific problem; secondly, by keeping the algorithm development at a basic level it is ensured that the achieved performance is induced by the introduced uniformity measure and not by an elaborate algorithm setup.
\subsection{Monitoring uniform-pointset generation algorithms} \label{subsec:monitoring} The hypersphere chord length distribution can be used to monitor the generation of pointsets in terms of uniformity, aiming either to optimise the pointset span or to debug the algorithm that produced them. The first objective mainly refers to the sampling of discrete spaces, in cases where generating a uniform grid is not an option. For example, a desirable property of the initial population in genetic algorithms may be to uniformly span the solution space. In such a case, the initial population can be selected according to its uniformity, measured by the $L_1$ distance of the (projected to a hypersphere) initial population distance distribution from $f_N(d)$, where $N$ is the parameter space dimension. On the other hand, debugging refers to validating algorithms that are supposed to generate uniform pointsets, especially if simple solutions (such as visual inspection) give ambiguous results. For example, a subtle error in generating uniform pointsets on an $N$-sphere is to initially generate $N$ random values $-1 \leq v_i \leq 1, i=1,2,...N$ and subsequently normalise the vector $(v_1,v_2,...v_N)$. This approach generates points over the whole hypersphere but not with equal probability (Fig. \ref{fig:error_fig}), i.e. not uniformly. \begin{figure}[htb] \centering \includegraphics[width=0.4\textwidth]{error_fig22.png} \caption {An erroneous uniform-point generation technique (in the $2$-dimensional space). By randomly and independently selecting $2$ values in $[-1, 1]$, points inside the square are uniformly generated. Subsequently, they are projected on the circle according to their angle and stored. As a result, all points in line segment $\epsilon_1$ will be projected on $P_1$ while all points in line segment $\epsilon_2$ will be projected on $P_2$.
Since $\epsilon_2$ is longer than $\epsilon_1$, the probability of generating $P_2$ is larger than the probability of generating $P_1$, i.e. the final pointset is not uniform.} \label{fig:error_fig} \end{figure} The non-uniformity of this distribution may be missed due to the fact that it is a mixture of a uniform and a non-uniform distribution, with the uniform component being of large magnitude. For example, in the $2$-dimensional case (Fig. \ref{fig:error_fig}), the set of points randomly initialised within the circle (before being projected on the circle) constitutes the uniform part of the distribution, while the set of points randomly initialised outside the circle (but within the square) constitutes the non-uniform part. Consequently, $78.54\%$ (approximately $\pi/4$, i.e. the probability of a point being initialised within the circle) of the points follow the uniform circle distribution while $21.46\%$ do not. With approximately $4$ out of $5$ points being uniformly distributed, the resulting spatial distribution is difficult to recognise as non-uniform through graphical means (e.g. a plot of the points). Moreover, the final point distribution is horizontally, vertically and diagonally symmetric, hence binning the points in $2$, $4$ or $8$ equal-angle bins would erroneously imply that the distribution is uniform, while if more bins are used then it should be ensured that the divergence from uniformity is statistically significant. The $L_1$ distance from the hypersphere chord length distribution can identify that the generated distribution is not uniform and produce a lower bound on the error magnitude (i.e. the minimum number of non-uniform points). As a case study, it is assumed that sets $S_{10}$ of $10,000$ $2$-dimensional points are produced using the discussed technique.
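The erroneous generator is easy to reproduce; the Python sketch below (our own minimal reimplementation, with smaller pointsets than the $10,000$-point case study and an exact-CDF binning, so the absolute $L_1$ values are not directly comparable to those reported for the case study) contrasts it with a correct generator on the circle:

```python
import math
import random

def l1_to_circle_law(points, bins=50):
    """L1 distance between the binned pairwise-distance distribution of
    `points` on the unit circle and the exact circle chord-length law,
    whose cdf is F_2(d) = (2/pi) * asin(d/2)."""
    dists = [math.dist(p, q) for i, p in enumerate(points) for q in points[i + 1:]]
    width = 2.0 / bins
    hist = [0] * bins
    for d in dists:
        hist[min(bins - 1, int(d / width))] += 1
    total = len(dists)
    l1 = 0.0
    for k in range(bins):
        lo, hi = k * width, min(2.0, (k + 1) * width)
        mass = (2.0 / math.pi) * (math.asin(hi / 2.0) - math.asin(lo / 2.0))
        l1 += abs(hist[k] / total - mass)
    return l1

def buggy_point():
    """Erroneous technique: normalise a point drawn uniformly from the square."""
    while True:
        x, y = random.uniform(-1.0, 1.0), random.uniform(-1.0, 1.0)
        norm = math.hypot(x, y)
        if norm > 1e-12:
            return (x / norm, y / norm)

def correct_point():
    """Correct technique: draw a uniform angle."""
    t = random.uniform(0.0, 2.0 * math.pi)
    return (math.cos(t), math.sin(t))

random.seed(2)
l1_buggy = l1_to_circle_law([buggy_point() for _ in range(2500)])
l1_correct = l1_to_circle_law([correct_point() for _ in range(2500)])
print(l1_buggy, l1_correct)
```

The buggy generator sits measurably farther from the circle chord law than the correct one, as the case-study numbers that follow also indicate.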
The (median after $1,000$ runs) $L_1(g_{10})$ distance from $f_2(d)$ is $0.0253$, a value that implies (based on proposition \ref{maximumunfrmsubs}) that at least $1,125$ of the $10,000$ points were not generated by a uniform distribution. In comparison, the $1\%$-largest $L_1$ divergence from the theoretic chord length distribution for $10,000$ uniformly selected $2$-dimensional points is $0.0016$, i.e. $16$ times less than the estimated value. However, in practice, the $1\%$-largest $L_1$ divergence is not expected to be available in such an application, because most of the time this would mean that the person conducting the uniformity test already has a second, already debugged, technique for generating uniform pointsets. If this is not the case, a different approach is required, one that does not need access to the $1\%$-largest $L_1$ divergence. In this case, instead of comparing the $L_1$ distance with some uniformity threshold, we repeat the estimation taking into account only half of the input dataset (i.e. sets $S_5$ of $5,000$ $2$-dimensional points). The corresponding (median after $1,000$ runs) $L_1(g_5)$ distance from $f_2(d)$ is $0.0261$, i.e. only $3.16\%$ higher than $L_1(g_{10})$. As explained in Section \ref{sec:point_uniform}, the convergence rate of uniform distributions to the corresponding hypersphere chord length distribution is $k^{-1/2}$, where $k$ is the number of point-pairs, i.e. approximately $M^{-1}$ where $M$ is the pointset size. Therefore, the expected $L_1(g_5)/L_1(g_{10})$ ratio is approximately $2$, which is far from the reported $1.0316$, thus signifying a non-uniform spatial distribution. Moreover, the estimated lower bound on the percentage of non-uniform points in the set is $11.25\%$ (as already mentioned, the actual value is $21.46\%$). In general, a simple process to validate algorithms that supposedly generate uniform pointsets on the hypersphere is to start with sets of $M_{initial}$ points (e.g.
$M_{initial} = 1,000$) and then iteratively double the pointset size while confirming that the $L_1$ distance ratio of adjacent pointsets is approximately equal to $2$. The process is terminated either when the $L_1$ distance becomes lower than a uniformity threshold (e.g. $0.001$), in which case the algorithm is validated, or when an $L_1$ distance ratio of adjacent pointsets is near to $1$ (which implies that $L_1$ converges to a non-zero value), in which case the presence of a bug is reported. Such a process could also be applied on the hyper-hemisphere without requiring any modification, except that the additional feature of estimating a non-uniformity lower bound is not available. \subsection{Evaluating data uniformity} \label{subsec:evaluating} The main difference of this setup from the previous one is that a debugged algorithm for generating uniform pointsets on the hypersphere is available and the focus is to assess the uniformity of an input dataset. Such an application would be of great interest in cases where the uniformity (or non-uniformity) of the data is correlated with semantic information about the (partially unknown) process that generated them. Because this definition is too generic, a case study is used to illustrate the analysis framework, as well as its merit. The employed case study is the spatial distribution of craters on the Moon. The population and spatial distribution of Moon craters are of great scientific interest because these quantitative features are related to the age \cite{WKHartmann01} as well as the composition of the Moon surface \cite{MLFeuvre08}. Apart from locally-focused ``crater counting'' \cite{TKneissl15}, analysis of the global features of their distribution has been extensively conducted, including examination of their uniformity. As a matter of fact, there is a consensus among planetary scientists that the crater distribution of the Moon (as well as of the Earth, Mars, etc.) is not uniform.
This non-uniformity has been associated with several physical properties such as the latitudinal dependence of the impact velocity and the impact angle \cite{MLFeuvre08}, the angular distance from the apex \cite{TMorota05} and the orbital and size distribution of asteroids and comets in the inner Solar System \cite{MLFeuvre11}. In \cite{MLFeuvre11} an elaborate quantitative analysis of the Moon crater spatial distribution was conducted, including uniformity assessment. With the use of spherical harmonics \cite{TMMacRobert48} the authors estimated that the crater rate locally varies from $80\%$ to $125\%$ of the global average. This implies that the Moon crater distribution is a mixture of $80\%$ uniform and $20\%$ non-uniform points defined on a $3$-dimensional sphere. On the other hand, the use of spherical harmonics revealed no significant difference in uniformity for different crater sizes, based on the maximum/minimum cratering ratio \cite{MLFeuvre11}. Moreover, the authors reported a symmetry between the North and the South hemisphere. In this work, we re-examine the conclusions of \cite{MLFeuvre11} using the hypersphere chord length distribution. The input data originate from Salamuniccar et al. \cite{GSalamuniccar14}, who introduced LU78287GT, the most complete lunar crater catalogue, which includes a complete list of the $22,402$ Moon craters with diameter larger than $8$ kilometres. The coordinates of these $22,402$ craters (named set $C$ in the rest of this section) were used to assess crater uniformity. Even though craters of smaller dimensions are available (e.g. LU78287GT consists of $78,287$ craters in total \cite{GSalamuniccar14}), these were ignored because their list is not complete and it cannot be safely assumed that the missing craters do not bias the uniformity measure. The crater distance distribution $g_C$ was computed and compared to $f_3(d)$.
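The comparison pipeline amounts to mapping crater coordinates to unit vectors and measuring the $L_1$ distance of their pairwise-distance distribution from $f_3(d)$. A Python sketch follows; since the LU78287GT catalogue is not reproduced here, a synthetic, uniformly generated ``crater'' set is used as a stand-in, so the sizes and values are illustrative only:

```python
import math
import random

def to_unit_vector(lat_deg, lon_deg):
    """Map a (latitude, longitude) pair in degrees to a 3D unit vector."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    return (math.cos(lat) * math.cos(lon),
            math.cos(lat) * math.sin(lon),
            math.sin(lat))

def l1_vs_f3(points, bins=80):
    """L1 distance between the binned pairwise-distance distribution of
    `points` on the unit sphere and the exact 3-sphere chord law,
    whose cdf is F_3(d) = d^2/4 (i.e. f_3(d) = d/2 for R = 1)."""
    dists = [math.dist(p, q) for i, p in enumerate(points) for q in points[i + 1:]]
    width = 2.0 / bins
    hist = [0] * bins
    for d in dists:
        hist[min(bins - 1, int(d / width))] += 1
    total = len(dists)
    return sum(abs(hist[k] / total - (((k + 1) * width) ** 2 - (k * width) ** 2) / 4.0)
               for k in range(bins))

# synthetic stand-in for the crater list: uniform on the sphere means
# uniform longitude and uniform sin(latitude)
random.seed(3)
craters = [(math.degrees(math.asin(random.uniform(-1.0, 1.0))),
            random.uniform(-180.0, 180.0)) for _ in range(800)]
C = [to_unit_vector(lat, lon) for lat, lon in craters]
l1_C = l1_vs_f3(C)
print(l1_C)   # small for a uniformly cratered sphere
```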
The estimated $L_1(g)$ distance was $0.079$, while the $1\%$-largest and the median $L_1$ distance for a uniform $3$-dimensional set of the same size were $5.8\times10^{-4}$ and $4.8\times10^{-4}$, respectively. Therefore, the chord length distribution uniformity measure confirms that the lunar craters are not uniformly distributed. As a matter of fact, proposition \ref{maximumunfrmsubs} implies that the maximum percentage of uniformly distributed craters is $80.13\%$, an estimate almost identical to the (estimated with spherical harmonics) uniformity reported in \cite{MLFeuvre11}. Perhaps more interesting is the fact that, contrary to the techniques employed in \cite{MLFeuvre11} (i.e. spherical harmonics and the maximum/minimum cratering ratio), the sphere chord length distribution makes it possible to detect size-based uniformity differences. More specifically, the $L_1(g_{>20})$ distance of the distance distribution of craters larger than $20km$ is $0.0506$ (the $1\%$-largest $L_1$ distance for a uniform set of this size was $1.7\times10^{-3}$), while the $L_1(g_{<20})$ distance of the distance distribution of craters smaller than $20km$ is $0.1014$ (the $1\%$-largest $L_1$ distance for a uniform set of this size was $9\times10^{-4}$). While for the time being there is no theoretic explanation of the root cause of this divergence, it is possibly connected to the fact that (as suggested in \cite{MLFeuvre11}) a distinct numerical model is optimal for the distribution of craters of size larger/smaller than $20km$. Finally, the crater distance distributions of the North and the South Hemisphere were estimated and compared to the uniform hemisphere chord length distribution. As can be seen in Fig. \ref{fig:moon_fig}, the uniformity-related difference is apparent. The estimated $L_1$ distance is $0.2217$ for the North Hemisphere and $0.047$ for the South Hemisphere (the $1\%$-largest $L_1$ distance for a uniform set of this size was approximately $3\times10^{-3}$ in both cases).
While for the time being it is not easy to quantify the semantics of this divergence (especially since proposition \ref{maximumunfrmsubs} does not stand for hyper-hemispheres), it is rather straightforward to conclude that there is a difference in uniformity between the two hemispheres, which should be further examined in the future. \begin{figure}[htb] \centering \includegraphics[width=0.5\textwidth]{moon_hemispheres.png} \caption {The distance distribution of the craters of the lunar North and South hemispheres, compared to the theoretic uniform distribution. While the South hemisphere craters are rather uniformly distributed, in the North hemisphere there seems to be a large divergence from uniformity.} \label{fig:moon_fig} \end{figure} In summary, this analysis provides evidence that the introduced hypersphere chord length distribution can contribute to uniformity-related data analysis. Its main advantage over established methods such as spherical harmonics or even simple, grid-based binning is that it is not based on symmetry or on numeric equivalence (i.e. bins of equal size expected to have an equal number of points) but on a more generic uniformity feature, i.e. the inner structure of a hyperspherical uniform pointset. As a result, the hypersphere chord length distribution can identify subtle non-uniformity instances that are missed by spherical harmonics and/or simple statistics. Since its implementation presents no difficulties and its quadratic computational complexity is rarely prohibitive, the hypersphere chord length distribution constitutes a valuable addition to the tools used to assess data uniformity. \subsection{Identifying and discarding non-informational data} \label{subsec:cleaning} In continuation of the previous subsection, the maximum size of a uniform subset, which is estimated through the hypersphere chord length distribution, can be used to discriminate between the uniform and non-uniform parts of spatial data.
This may be of great importance in applications where most of the data are not interesting (e.g. in anomaly detection \cite{JKittler14}). In the state-of-the-art approach, the key hypothesis is that there is a descriptor space in which the ``interesting'' data (whatever this means) would constitute a compact and clearly defined (i.e. not overlapping with the ``non-interesting'' dataset) area that can be modelled through some supervised learning technique. Notwithstanding the significant achievements in this kind of application, there is an inherent theoretical problem with its key hypothesis: if the negative training set represents a \emph{semantically null} set, it is expected to be featureless, therefore not possible to accurately model with some set of descriptors. An alternative hypothesis would be that a dataset can be projected to a descriptor space as a uniform pointset iff it is semantically null. Note that in this case the algorithmic focus would shift from making the positive set as descriptive as possible to making the negative set as featureless as possible. If this is correct, then the informational data can be detected by identifying and discarding the uniform background. Such an approach would face two challenges: firstly, to develop this type of descriptors; secondly, to successfully discriminate uniform from non-uniform distribution subsets. In this work, it is demonstrated that the second challenge can be met, even by the simple Algorithm \ref{alg:uniformsubsetestim} that is described in Section \ref{sec:point_uniform}. Algorithm \ref{alg:uniformsubsetestim} employs a Monte Carlo nearest neighbour technique in which a number of uniform sets are constructed and projected (using nearest neighbour) onto the mixture of uniform/non-uniform pointsets. The main idea is that the points belonging to the uniform subset will generally have a larger support region than the points belonging to the non-uniform one, hence a ``randomly'' (i.e.
uniformly) selected point on the hypersphere is more likely to have as nearest neighbour a point of the uniform subset. On the other hand, the $L_1$ distance from the hypersphere chord length distribution implies a maximum size of the uniform set. The Monte Carlo nearest neighbour approach is used as a stochastic estimation of the support region size that is computationally efficient even in high dimensions. The detection accuracy depends on the span, the shape and the (relative) size of the non-uniform subset, as well as on the precision of the threshold implied by the $L_1$ distance. An exhaustive analysis of this approach is not possible due to space limitations. Instead, a rather simple setup is employed, focusing on the size of the non-uniform subset and selecting fixed values for the rest of the parameters. More specifically, a set of $2,000$ $N$-dimensional ($4 \leq N \leq 12$) uniform points represents the non-informational points. Subsequently, an area equal to $5\%$ of the $N$-sphere is augmented with more (informational) points so as to finally reach $X\%$ of the total points. Three different values of $X$ are examined: $X=10\%, 15\%$ and $20\%$. The distance distribution of the augmented set $S$ is estimated and its $L_1$ distance from the $N$-sphere chord length distribution determines (using proposition \ref{maximumunfrmsubs}) the number $M_u$ of uniform points in the augmented set. Finally, $1,000,000$ points are uniformly generated on the $N$-sphere and projected on their nearest neighbour in $S$. The $M_u$ points with the most points projected on them are discarded and the rest constitute the estimated non-uniform subset. The process was iterated $100$ times for each $(N,X)$ pair and the evaluation is conducted using the (average over the $100$ simulations) Precision-Recall measures. The results are presented in Table \ref{tab:prec_recall_identif_uniform}.
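A drastically scaled-down Python sketch of this Monte Carlo nearest-neighbour setup is given below (much smaller sizes than the $2,000$-point, $1,000,000$-probe experiments, and with the number of uniform points taken as known rather than estimated from the $L_1$ bound of proposition \ref{maximumunfrmsubs}):

```python
import math
import random

def uniform_sphere_point(n):
    """Uniform point on the unit n-sphere via Gaussian normalisation."""
    v = [random.gauss(0.0, 1.0) for _ in range(n)]
    s = math.sqrt(sum(x * x for x in v))
    return [x / s for x in v]

random.seed(4)
N = 4
uniform_part = [uniform_sphere_point(N) for _ in range(160)]   # non-informational
# informational points: packed into a small cap around a random centre
centre = uniform_sphere_point(N)
cluster_part = []
for _ in range(40):
    v = [c + random.gauss(0.0, 0.1) for c in centre]
    s = math.sqrt(sum(x * x for x in v))
    cluster_part.append([x / s for x in v])
data = uniform_part + cluster_part   # ground truth: indices >= 160 are informational

# Monte Carlo nearest-neighbour: probe points "vote" for their nearest data point,
# stochastically estimating the size of each point's support region
votes = [0] * len(data)
for _ in range(4000):
    p = uniform_sphere_point(N)
    nearest = min(range(len(data)), key=lambda j: math.dist(p, data[j]))
    votes[nearest] += 1

# discard the points with the most votes (largest support regions); here the
# number of uniform points (160) is assumed known for brevity
order = sorted(range(len(data)), key=lambda j: votes[j])
predicted_nonuniform = set(order[:40])
precision = len([j for j in predicted_nonuniform if j >= 160]) / 40.0
print(precision)   # well above the 40/200 = 0.2 random baseline
```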
\begin{table}[htb] \scriptsize \begin{center} \begin{tabular}{|c||c|c||c|c||c|c|} \hline N & P. ($10\%$) & R. ($10\%$) & P. ($15\%$) & R. ($15\%$) & P. ($20\%$) & R. ($20\%$) \\ \hline 4 & 0.4359 & 0.5122 & 0.7210 & 0.5898 & 0.8795 & 0.7101 \\ \hline 5 & 0.5001 & 0.5759 & 0.7946 & 0.6647 & 0.9336 & 0.7372 \\ \hline 6 & 0.5285 & 0.573 & 0.8189 & 0.614 & 0.9489 & 0.7395 \\ \hline 7 & 0.5087 & 0.5374 & 0.8401 & 0.6651 & 0.9545 & 0.7297 \\ \hline 8 & 0.5033 & 0.5136 & 0.8523 & 0.6366 & 0.9583 & 0.7102 \\ \hline 9 & 0.5122 & 0.5304 & 0.8329 & 0.6077 & 0.946 & 0.6744 \\ \hline 10 & 0.5308 & 0.5492 & 0.8501 & 0.6336 & 0.9536 & 0.6786 \\ \hline 11 & 0.5329 & 0.5490 & 0.8430 & 0.5749 & 0.9639 & 0.6663 \\ \hline 12 & 0.5151 & 0.5084 & 0.8351 & 0.5972 & 0.9506 & 0.6358 \\ \hline \end{tabular} \end{center} \caption{Precision and Recall rates of the informational (i.e. non-uniform) point estimation using the setup described in subsection \ref{subsec:cleaning}.} \label{tab:prec_recall_identif_uniform} \end{table} The Recall rate is determined by the accuracy of the used threshold, i.e. by how tight the non-uniformity lower bound of proposition \ref{maximumunfrmsubs} is. Even though the Recall does not exceed $75\%$ in any case, and it seems to fluctuate (and perhaps decrease) as the dimension increases, a substantial number of non-uniform points is still retrieved (in all but one case, more than $50\%$ of them). Moreover, the Recall increases with the size of the non-uniform subset. This can be explained by the fact that proposition \ref{maximumunfrmsubs} makes use of the inequality $L_1(g_c) \leq 2$. The equality $L_1(g_c)=2$ holds iff $S_c$ (i.e. the non-uniform subset) is a set of identical points (for which the distance distribution is $1$ for zero distance and $0$ for any non-zero distance), i.e. if the support region of the non-uniform points is minimal. This implies that the inequality is tighter if the non-uniform points are dense, i.e.
if the support region of the non-uniform points is smaller. On the other hand, the Precision rate increases both with the dimension and with the size of the non-uniform subset (the size of the support region is in this case the main reason for the increase), reaching as high as $96.39\%$. The high Precision rate indicates that the Monte Carlo approach can successfully model the support region size with low computational cost independently of $N$ (at least in the setup examined in this work). This signifies that such an approach is realistically applicable in several different uniform/non-uniform data identification scenarios. Applications that require a higher Recall rate may benefit from the fact that, in the employed experimental setup, the center of the estimated non-uniform subset was lying within the ``non-uniform region'' in $87\%$, $92\%$ and $98\%$ of the runs for $X$ equal to $10\%, 15\%$ and $20\%$, respectively. Such a property can be used as a basis for the development of more elaborate non-uniform subset estimation algorithms (e.g. through region expansion). It is quite possible that more powerful techniques can be developed using the chord length distribution as a basis. However, neither this task nor the comparison with other state-of-the-art approaches (e.g. mean shift \cite{DComaniciu02}) is within the scope of this work. On the contrary, the objective of this analysis was to establish that the chord length distribution is potentially useful in a uniform/non-uniform subset detection pipeline, and the evidence provided in this subsection confirms this hypothesis. \subsection{Uniform sub-set detection embedded in higher dimensional data} \label{subsec:subset_detection} The first three experimental subsections expanded on the quantitative properties introduced in Section \ref{sec:point_uniform}. The last experimental analysis examines the detection of uniform pointsets in higher dimensions, as discussed in Section \ref{sec:higher_dimensions}.
The objective of this section is twofold: firstly, to confirm and quantify the qualitative conclusions drawn in Section \ref{sec:higher_dimensions}; secondly, similarly to the other evaluation setups, to give evidence that the hypersphere chord length distribution could be useful in such an application. In the employed setup, uniform subset detection employs $4$ parameters: (a) the dimension $N_u$ of the uniform subset $S_u$ ($2 \leq N_u \leq 11$), (b) the dimension $N$ of the superset $S$ ($N_u < N \leq 12$), (c) the type $T$ of the superset $S$ ($T=\{Sp,He\}$, where $Sp$ stands for uniform distribution on a hypersphere and $He$ stands for uniform distribution on a hyper-hemisphere) and (d) the ratio $M_u/M$ ($M_u/M = 0.05i$, $i=1,2,...,19$). Moreover, $M$ was selected to be equal to $5,000$ and $100$ simulations were conducted with each parameter combination. Once again, the experimental process was designed to be as simple as possible. More specifically, the distance matrix of $S$ was initially computed, then the distance distribution of each point $p, p \in S$ was estimated and compared with the $N_u$-sphere chord length distribution to estimate the $L_1$ distance. The points with the $M_u$ lowest $L_1$ values were returned and compared with the points of $S_u$ to estimate the detection rate (as a result, in this setup, the ``detection rate'' is equal both to the Recall and to the Precision rate). The average detection rate over the $100$ simulations is reported. Note that this process is not an evaluation scheme that can be used in practice, because it assumes that $M_u$ and $N_u$ are a priori known, which is not generally correct. This process focuses on evaluating whether the distance distributions $g_{iS}$ from each point $p_i$ have the potential to detect uniform sets in higher dimensions. Optimising the use of the $g_{iS}$ distributions in a practical algorithm is not a task to be undertaken before this potential has become apparent.
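The per-point detection process described above can be sketched as follows (Python, a single small instance with $N_u=3$, $N=8$, $M_u/M=0.5$ and $M=500$, far smaller than the $M=5,000$ used in the experiments; $M_u$ and $N_u$ are again assumed known):

```python
import math
import random

def uniform_sphere_point(n):
    """Uniform point on the unit n-sphere via Gaussian normalisation."""
    v = [random.gauss(0.0, 1.0) for _ in range(n)]
    s = math.sqrt(sum(x * x for x in v))
    return [x / s for x in v]

random.seed(5)
N_u, N, half = 3, 8, 250
# S_u: uniform on the 3-sphere, embedded in R^8 by zero-padding
S_u = [uniform_sphere_point(N_u) + [0.0] * (N - N_u) for _ in range(half)]
# the rest of S: uniform on the 8-sphere
S = S_u + [uniform_sphere_point(N) for _ in range(half)]

def l1_point_vs_f3(i, points, bins=20):
    """L1 distance between the binned distances from points[i] and the
    3-sphere chord law, whose cdf is F_3(d) = d^2/4 (unit radius)."""
    dists = [math.dist(points[i], q) for j, q in enumerate(points) if j != i]
    width = 2.0 / bins
    hist = [0] * bins
    for d in dists:
        hist[min(bins - 1, int(d / width))] += 1
    total = len(dists)
    return sum(abs(hist[k] / total - (((k + 1) * width) ** 2 - (k * width) ** 2) / 4.0)
               for k in range(bins))

scores = [l1_point_vs_f3(i, S) for i in range(len(S))]
order = sorted(range(len(S)), key=lambda i: scores[i])
detected = set(order[:half])                  # the half lowest L1 scores
rate = len([i for i in detected if i < half]) / half
print(rate)   # detection rate; 0.5 would be the random baseline
```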
Because the parameter space is $4$-dimensional and includes $2,090$ parameter combinations, the results are averaged and compared according to $4$ distinct criteria, each one examining a separate performance factor: (a) the detection rate when the superset $S$ is defined on a hypersphere versus the detection rate when the superset $S$ is defined on a hyper-hemisphere, (b) the detection rate as a function of the dimension difference $N-N_u$, (c) the detection rate as a function of $N$ and (d) the detection rate as a function of $M_u/M$. The average detection rate of the $1,045$ parameter combinations for which $T=Sp$ (i.e. the superset is defined on a hypersphere) is $0.8209$, while the corresponding statistic for the $1,045$ parameter combinations for which $T=He$ (i.e. the superset is defined on a hyper-hemisphere) is $0.8744$. Both rates are significantly better than the baseline of $0.5$, which corresponds to the detection rate if the $M_u$ returned points were randomly selected. Therefore, a first conclusion is that the similarity of the pointwise distance distributions $g_{iS}$ to the hypersphere chord length distribution can be used as a local feature that models uniformity. Moreover, there is an apparent difference between the $T=Sp$ and the $T=He$ runs, which is further confirmed by the fact that there is no run for which the $T=Sp$ detection rate is higher than the corresponding $T=He$ detection rate, while for $21.05\%$ of the runs the $T=He$ detection rate is more than $10\%$ better than the corresponding $T=Sp$ detection rate. This difference is explained by the fact that, as discussed in Section \ref{sec:higher_dimensions}, the detection rate is large when the distance distribution of the superset $S$ is substantially different from that of the embedded set $S_u$. In the examined setup, if $N_u \approx N$ the $N_u$-sphere distribution is quite similar to the $N$-sphere distribution but not to the $N$-hemisphere distribution.
Therefore, the detection rate for $T=He$ is substantially higher than the $T=Sp$ one. On the contrary, when $N \gg N_u$ the two detection rates are expected to be quite similar. This analysis is further confirmed by the experimental data. For example, while the average detection rate for $(T=He, N_u=2, N=9)$ is only $0.95\%$ higher than the detection rate for $(T=Sp, N_u=2, N=9)$, the average detection rate for $(T=He, N_u=8, N=9)$ is $13.94\%$ higher than the detection rate for $(T=Sp, N_u=8, N=9)$. For a more thorough evaluation, the detection rate as a function of $N-N_u$ is plotted in Fig. \ref{fig:distance_dimensions}, showing that the hypersphere and the hyper-hemisphere curves converge for large $N-N_u$. Nevertheless, in general it is easier to detect uniform subsets when $N_u \ll N$, regardless of the superset distance distribution. However, even for $N-N_u=1$ the detection rate is much better than the (random) baseline, thus verifying the uniformity detection potential of the hypersphere chord length distribution. \begin{figure}[htb] \centering \includegraphics[width=0.5\textwidth]{distance_dimensions.png} \caption {The detection rate as a function of $N-N_u$ for $S$ defined on a hypersphere and on a hyper-hemisphere. The dashed line represents the baseline (i.e. random selection).} \label{fig:distance_dimensions} \end{figure} On the other hand, since both the hypersphere and the hyper-hemisphere chord length distributions converge to $\delta (\sqrt2R)$, the uniformity detection potential for a fixed $N-N_u$ is expected to decrease with $N$. In order to examine how fast the performance declines, the detection rate as a function of $N$ for $N-N_u=c, c=\{1,2,3\}$ is plotted (Fig. \ref{fig:distance_N}). Fig. \ref{fig:distance_N} shows that, apart from the $N-N_u=1, T=Sp$ curve, all other curves are rather robust in the plotted $N$ range.
Moreover, if a $55\%$ detection rate is selected as a lower bound below which the hypersphere chord length distribution is so weak that it practically performs similarly to the baseline, then by extrapolating the curves of Fig. \ref{fig:distance_N} it is estimated that for the hypersphere the $N$ at which the detection rate drops below this bound is $N_{off} = 13+6(N-N_u-1)$ (i.e. if $N-N_u=1$, $N_{off}=13$; if $N-N_u=2$, $N_{off}=19$; if $N-N_u=3$, $N_{off}=25$, etc.), while for the hyper-hemisphere it is $N_{off} = 42+9(N-N_u-1)$. Even though this extrapolation is inherently of limited accuracy, it still validates that the examined performance decrease is not prohibitive for a wide range of $N$ and $N_u$ values. \begin{figure}[htb] \centering \includegraphics[width=0.5\textwidth]{distance_N.png} \caption {The detection rate as a function of $N$ for different $N-N_u$. The blue lines correspond to hypersphere detection rates and the black ones to hyper-hemisphere detection rates. Lines of different colour but same style show detection rate curves for the same $N-N_u$ but for a different superset type (hypersphere/hyper-hemisphere).} \label{fig:distance_N} \end{figure} Finally, the relative size of the uniform subset $M_u/M$ is also related to the detection performance (Section \ref{sec:higher_dimensions}). Typically, the larger the uniform subset, the higher the performance. However, as shown in Fig. \ref{fig:distance_MuM}, the performance increase is far from linear; instead, the detection performance improves rapidly with $M_u/M$ for small $M_u/M$ values and saturates near $1$ for large $M_u/M$ values. For example, the $T=Sp$ curve exceeds $0.9$ for $M_u/M=0.55$ while the $T=He$ curve does so for $M_u/M=0.45$. Perhaps more importantly, the detection rate is not near the baseline for small uniform subsets. For example, the $M_u/M=0.05$ value of the $T=He$ curve is $0.479$, i.e. almost $10$ times better than the baseline.
Taking into account that this performance was achieved with the simplest of algorithms, it can be deduced that the accurate detection of uniform subsets embedded in higher-dimension data using the hypersphere chord length distribution is possible even for small-sized subsets. \begin{figure}[htb] \centering \includegraphics[width=0.5\textwidth]{distance_MuM.png} \caption {The detection rate as a function of $M_u/M$. The black dashed line shows the baseline (i.e. for random selection).} \label{fig:distance_MuM} \end{figure} \section{Conclusions and Future Work} \label{sec:conclusions} In this work the hypersphere chord length distribution (and the hyper-hemisphere chord length distribution) was analytically introduced and examined, especially in relation to the uniformity of high-dimensional data defined on a hypersphere. Both the theoretic presentation and the experimental evaluation show that the introduced tools can find several applications in assessing the uniformity of data. In the future, three main directions will be explored. Firstly, despite its novelty and its potential, the new uniformity measure suffers from being a single-value ``uniformity descriptor''. Notwithstanding its compactness, it is understood that it could greatly benefit from an extension to a vector defined on an orthogonal basis. Theoretically, there is no reason why it should not be possible to describe a pointset as an infinite sum of uniform distributions on continuously smaller regions (if this were achieved, then the similarity to the hypersphere chord length distribution would be just the first term of the infinite sum). As a matter of fact, the main motivation for estimating the hyper-hemisphere chord length distribution was to examine whether this (or a translated/scaled version of it) is orthogonal to the hypersphere chord length distribution.
Proposition \ref{propositionhemi1} implies a complex and lengthy expression that is difficult to incorporate in a basis function scheme even for small dimensions. Moreover, the estimation process suggests that the chord length distribution of half of the hyper-hemisphere (or even smaller segments of the hypersphere) would be even more complex and impractical to use. Therefore, the extension to an orthogonal basis of gradually more confined uniform distributions does not seem to be achievable by continuously splitting the N-sphere in $2^i, 1 \leq i \leq N$ equal-sized parts. Different possibilities are currently explored that include not only progressively splitting the hypersphere but also updating the distance distribution. Secondly, it would be useful to have a similar measure for histogram-type vector data. Histograms are a type of vector data that is extensively used; along with $L_2$-normalised data (which are defined on a hypersphere), they are the most common data types. Histogram variables are non-negative and have an $L_1$ norm equal to $1$, therefore they are not defined on a hypersphere but on a $(N-1)$-dimensional simplex. The distance distribution for uniformly selected points on a high-dimensional simplex, which is currently being explored, would allow uniformity measures for histograms to be developed. Thirdly, the development of algorithms that build upon the measures defined in this work is an ongoing process that is done on an as-needed basis. \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran}
\section{Introduction} The mechanism of randomly resetting a dynamical variable to a particular set of values is widespread in nature, such as in quantum mechanics, stochastic thermodynamics, chemical reactions and movement ecology~\cite{evans2020stochastic}. Stochastic resetting can be convenient for a number of reasons, and can be rooted in diverse origins, e.g., given by an evolutionary advantage or designed to fulfill specific purposes. Such a mechanism can lead to the optimization of some task with respect to the reset-free scenario, e.g., in search strategies~\cite{kusmierz2014first}, enzymatic catalysis~\cite{rotbart2015michaelis}, rare event sampling~\cite{villen1991restart}, or Internet congestion reduction strategies~\cite{maurer2001restart}, among many others~\cite{evans2020stochastic}. From a fundamental viewpoint, in recent years the focus has been put on unveiling the properties of the relaxation to the stationary state, of the first-passage times and of the survival statistics in systems following diverse dynamics that undergo stochastic resetting~\cite{evans2011diffusion, evans2011diffusionb, mukherjee2018quantum, basu2019symmetric, grange2021aggregation, sandev2022heterogeneous}. Of particular note are the emergence of a nonequilibrium steady state and the existence of an optimal value of the resetting rate that minimizes the mean first-passage time (MFPT) to a target, properties that usually hold true for any embedding dimension~\cite{evans2014diffusion, riascos2020random, bressloff2021drift}, for several underlying dynamics followed by the particle~\cite{kusmierz2014first, evans2018run, masoliver2019telegraphic, gupta2019stochastic} and for most inter-resetting time statistics~\cite{pal2017first, chechkin2018random, nagar2016diffusion, kusmierz2019robust, masoliver2019anomalous}. See~\cite{evans2020stochastic} for a recent review.
Beyond particles diffusing in regular and heterogeneous topologies, resetting can actually be worth exploring in any dynamical system with a stochastically evolving state~\cite{gupta2014fluctuating, durang2014statistical, basu2019symmetric, grange2021aggregation}. An interesting example due to its theoretical and practical relevance is complex network growth undergoing stochastic resetting. Indeed, networks are abstract representations of complex systems, where nodes represent individual units and edges encode the interactions among these units~\cite{newman2003structure}, e.g., neurons and synapses in the brain, people and acquaintance ties in a social network, or web pages and hyperlinks in the world wide web. Apart from characterizing the topological structure of empirical networks~\cite{broder2000graph, colizza2006detecting, peixoto2014hierarchical, noldus2015assortativity, broido2019scale, voitalov2019scale, de2013mathematical, artime2022multilayer, battiston2020networks}, and studying the behavior of dynamical models on top of them~\cite{arenas2008synchronization, pastor2015epidemic, porter2016dynamical, artime2017dynamics, hens2019spatiotemporal, artime2020abrupt}, it is crucial to understand how simple growing mechanisms yield the emergence of topological features present in real interconnected systems. Diverse mechanisms are able to predict or explain a plethora of these features: link rewiring leads to small-worldness~\cite{watts1998collective}, node addition with preferential attachment linking leads to scale-free networks~\cite{barabasi1999emergence, dorogovtsev2000structure, dorogovtsev2001effect, krapivsky2000connectivity}, or heterogeneous node fitness leads to \textit{winner-takes-all} (or Bose-Einstein condensation) phenomena~\cite{bianconi2001bose}. Users in a social network can deactivate their accounts, websites in the WWW can be deleted, and neurons in the brain can be injured and stop functioning.
In fact, many empirical networks might shrink, and even crash. Hence, a realistic ingredient to take into consideration in modeling network evolution is node removal~\cite{dorogovtsev2000scaling, sarshar2004scale, moore2006exact, srinivasan2007response, saavedra2008asymmetric, chung2004coupling, bauke2011topological, kalay2014fragmentation, crane2015cluster, zhang2016random}. It turns out that this effect can be framed as a general stochastic resetting problem that provides a conceptual advancement with respect to more traditional multiparticle stochastic resetting models, namely, the fact that the stochastic events on a particle influence the state of an arbitrary and variable number of other particles. Recently, there have been some contributions that take into account the particle interactions in the reset probability, but in an indirect way, see e.g.,~\cite{pelizzola2021simple, miron2021diffusion, miron2022local, bertin2022stochastic, krapivsky2022competition}. In this article we show that such a direct interaction can be mathematically handled and we derive some consequences from it: the emergence of a macroscopic network structure via a time-dependent phase transition, and the emergence of an inflection point in the mean first-passage time. The remainder of this paper is structured as follows. First, we explain in detail the model of network growth with node removal. We then write a master equation accounting for its probabilistic description, which is exactly solved. Afterward, the time-dependent percolation phase transition is examined and its critical point is derived. Finally, the analysis of the first-passage distribution and the mean first-passage time are considered. We close the article drawing some conclusions. \section{Model definition} We consider a set of $N$ interconnected nodes, each one characterized by its degree $k$, i.e., the number of undirected connections to its neighbors. Two processes compete in the formation of the network. 
On the one hand, a link between two nodes picked uniformly at random is added at rate $\alpha N/2$. In the absence of resetting, the average degree would grow at rate $\alpha$, and if $N \to \infty$ this corresponds to a kinetic formulation of the Erd\H{o}s-R\'{e}nyi model with parameter (ratio between the number of links $M(t)$ at time $t$ and the number of possible links among the $N$ nodes) $p = \alpha t/N$, where $t$ is time~\cite{krapivsky2010kinetic}. On the other hand, with rate $rN$, a node selected uniformly at random is removed, along with all of its edges, and a new node with no connections is added into the network. Notice the coupling induced by the edges in both the growth and removal processes: the increase of a degree unit occurs simultaneously in two nodes, and a deletion of a node with degree $k$ implies a degree unit loss in $k$ other nodes. See Fig.~\ref{fig:fig1} for a sketch of this process. With this model we prioritize the analytical understanding of the coupled multiparticle stochastic resetting process over a faithful description of network evolution. The incorporation of some realistic effects is possible ---e.g., the higher chances of well-connected nodes to acquire new links, the correlations between the degree and the removal probability, a variable system size $N$, or the fact that a resetting event might not make a node lose all of its connections---, but they would make the analytical treatment much more limited. \section{Time-dependent degree distribution} We can describe the temporal behavior of the model by studying the probabilities $p_k(t)$, the so-called degree distribution, that give the fraction of nodes with degree $k$ at time $t$~\cite{moore2006exact}. These probabilities follow the master equations \begin{equation} \label{eq:MastEq} \diff{p_k} = \alpha p_{k-1} - \alpha p_{k} - r p_{k} + r(k+1)p_{k+1} - r k p_{k} + \delta_{k,0} r.
\end{equation} The first two terms are associated with the uniformly random addition of a link, while the rest correspond to the resetting process. Specifically, the third term encodes the direct resetting of a node of degree $k$, while the fourth and the fifth terms correspond to the loss of one link due to the removal of a neighbor. The last term stands for the incoming flux of nodes that are reset to degree $0$. Negative degrees are not allowed, so the boundary condition is $p_{-1}(t) = 0$. As initial condition, we choose $p_k(0) = \delta_{k,0}$, that is, initially there are no links in the network. The exact full-time solution to the master equations~\eqref{eq:MastEq} can be obtained by means of the generating function. Introducing the time-dependent degree generating function $g(z,t) = \sum_{k=0}^{\infty} z^k p_k(t)$, if we multiply Eqs.~\eqref{eq:MastEq} by $z^k$ and sum over all degrees, we obtain \begin{equation} \pdiff{g}{t} = \left[ \alpha z - (\alpha + r) \right] g + r (1 - z) \pdiff{g}{z} + r, \end{equation} with the conditions $g(1,t) = 1$, which derives from the normalization of the degree distribution, and $g(z,0) = 1$, which derives from the initial condition. Employing the method of characteristics, the solution reads \begin{equation} \label{eq:genfunc_sol} g(z,t) = \frac{1}{1-z} \left[ e^{\alpha z / r} \mathcal{G}\left((1-z)e^{-rt}\right) + \frac{r}{\alpha} \right], \end{equation} where $\mathcal{G}(x) = (x - r/\alpha) \exp \left[ \alpha/r \, (x-1) \right] $. To obtain the time-dependent degree distribution, we need to rewrite Eq.~\eqref{eq:genfunc_sol} as a power series of the auxiliary variable $z$.
To do so, we use the well-known expansions $(1-x)^{-1} = \sum_{k=0}^{\infty} x^k$ and $e^{x} = \sum_{k=0}^{\infty} x^k/k!$, along with the relation \begin{equation} \sum_{k=0}^{\infty} z^k \sum_{m=0}^{\infty} \frac{(x z)^m}{m!} = e^x \sum_{k=0}^{\infty} z^k Q(k+1,x), \end{equation} where $Q(a, b) \equiv \Gamma(a,b)/\Gamma(a)$ is the regularized upper incomplete Gamma function, with $\Gamma(a)$ the complete Gamma function and \begin{equation} \Gamma(a,b) = \int_{b}^{\infty} \mathrm{d}x\, x^{a-1} e^{-x} \end{equation} the upper incomplete Gamma function. After some simple algebra, we arrive at our desired solution \begin{equation} \label{eq:fulltime_sol} p_k(t) = \frac{r}{\alpha} \left[ 1 - Q(k+1,\,c(t)) \right] + e^{-c(t) - rt} \frac{c(t)^k}{k!}, \end{equation} where we have introduced the function $c(t) = \frac{\alpha}{r} (1 - e^{-rt})$ to ease the notation. \begin{figure}[!t] \centering \includegraphics[width=0.8\columnwidth]{Fig1.pdf} \caption{In $(a)$, stylized time evolution of the number of links of node $i$, $k_i(t)$, and of the total number of links in the system, $M(t)$. The vertical drops in $M$ correspond to the resetting event of a node losing all of its links. In the first drop, one of $i$'s neighbors is reset, while in the second drop $i$ itself is reset. The last reset event does not affect $i$ because it does not occur in its immediate vicinity. In $(b)$, resetting event on a node and the impact on its neighbors, which lose one degree unit.} \label{fig:fig1} \end{figure} Although the full-time solution~\eqref{eq:fulltime_sol} is a complicated expression, we can extract some useful information under certain limits. Let us consider first the long-time limit, where we have the stationary degree distribution \begin{equation} \label{eq:StatDegDist} p_k^{\text{st}} = \frac{r}{\alpha} \left[ 1 - Q \left(k+1, \frac{\alpha}{r}\right) \right].
\end{equation} By virtue of the relation \begin{equation} 1 - Q \left(k+1, \frac{\alpha}{r}\right) = e^{-\alpha / r} \sum_{m=k+1}^{\infty} \frac{\left( \alpha / r \right)^m}{m!}, \end{equation} which can be proved by integration by parts, Eq.~\eqref{eq:StatDegDist} becomes \begin{equation} p_k^{\text{st}} = \frac{r}{\alpha} e^{-\alpha / r} \sum_{m=k+1}^{\infty} \frac{\left( \alpha / r \right)^m}{m!} \approx \left(\frac{\alpha}{r}\right)^k \frac{e^{-\alpha / r}}{(k+1)!}, \end{equation} where in the last step we keep only the lowest-order term of the sum, which is at the same time the largest contribution. Using Stirling's formula for the factorial, we obtain the asymptotic behavior of the stationary degree distribution \begin{equation} p_k^{\text{st}} \approx e^{-\alpha / r} \left(\frac{e \, \alpha}{r}\right)^k (k+1)^{-3/2 - k}, \end{equation} which decays much faster than an exponential~\cite{moore2006exact}. In Fig.~\ref{fig:fig2}$(a)$ we verify that both the analytical time-dependent degree distribution and its stationary values coincide with the values coming from the simulations of the model. Another interesting limit to explore is the one of small reset rate $r$. On the one hand, if we set $r=0$ and solve the master equations~\eqref{eq:MastEq}, the time-dependent degree distribution is $p_k(t) = (\alpha t)^k e^{-\alpha t} / k!$. That is, in the reset-free scenario there is no stationary state, and the network grows indefinitely. On the other hand, if we take the limit of small resetting rates in the time-dependent solution~\eqref{eq:fulltime_sol}, we get \begin{equation} \label{eq:degdist_expansion} \frac{(\alpha t)^k e^{- \alpha t}}{k!} + r \left[ \frac{t (\alpha t -k - 2)}{2} \frac{(\alpha t)^k e^{-\alpha t}}{ k!} + \frac{1-Q (k+1,t \alpha )}{\alpha }\right]+\mathcal{O}\left(r^2\right). \end{equation} We see that only when $r = 0$ does Eq.~\eqref{eq:degdist_expansion} reduce to the reset-free degree distribution.
As soon as the resetting rate becomes finite, this no longer holds true and the system tends to a well-defined stationary state. Thus, the introduction of a resetting rate, however small it may be, induces the emergence of a nonequilibrium steady-state. \begin{figure}[!t] \centering \includegraphics[width=0.85\columnwidth]{Fig2.pdf} \caption{In $(a)$, temporal evolution of the degree distribution. We show the curves for several degrees, indicated in the caption, together with their stationary value, computed from Eq.~\eqref{eq:StatDegDist}. Markers come from simulations and solid lines come from Eq.~\eqref{eq:fulltime_sol}. In $(b)$, temporal evolution of the mean degree and its variance. Markers come from simulations and lines from the theory. In all simulations we use $N = 1000$, $\alpha = 1$, $r=0.5$ and averages are computed over $200$ independent realizations.} \label{fig:fig2} \end{figure} Additional information about the process of network growth under stochastic resetting can be gained from the temporal evolution of the moments of the degree distribution $\langle k^n \rangle (t) \equiv \sum_k k^n p_k(t)$. Although they could be computed directly from the full-time expression~\eqref{eq:fulltime_sol}, a significantly easier approach is to solve the differential equation governing them. We can solve exactly up to arbitrary order $n$ because the differential equations for the moments are closed. For example, after some simple algebra, for the mean degree we obtain \begin{equation} \label{eq:meandeg_difeq} \diff{\langle k \rangle} = \alpha - 2r \langle k \rangle, \end{equation} with initial condition $\langle k \rangle (0) = 0$. The solution reads \begin{equation} \label{eq:meandeg} \langle k \rangle (t) = \frac{\alpha}{2 r} \left( 1 - e^{-2rt} \right), \end{equation} and it is displayed in Fig.~\ref{fig:fig2}$(b)$, along with the standard deviation, which can be calculated analogously.
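Simulations like those behind Fig.~\ref{fig:fig2} can be sketched as a standard Gillespie algorithm (a minimal illustration, not necessarily the code used for the figures; the parameters, seed, and the use of simple graphs ---a negligible approximation at this low density--- are assumptions):

```python
import random

def simulate(N, alpha, r, t_max, rng):
    # Gillespie simulation: link additions at total rate alpha*N/2,
    # node resets at total rate r*N.
    adj = [set() for _ in range(N)]
    rate_link = alpha * N / 2.0
    rate_reset = r * N
    total = rate_link + rate_reset
    t = rng.expovariate(total)
    while t <= t_max:
        if rng.random() < rate_link / total:
            i, j = rng.sample(range(N), 2)   # uniformly random pair of nodes
            adj[i].add(j)
            adj[j].add(i)
        else:
            i = rng.randrange(N)             # reset: node i loses all its links
            for j in adj[i]:
                adj[j].discard(i)
            adj[i] = set()
        t += rng.expovariate(total)
    return adj

rng = random.Random(1)
alpha, r, N = 1.0, 0.5, 2000
degrees = [len(a) for a in simulate(N, alpha, r, 20.0, rng)]
print(sum(degrees) / N)      # should be close to alpha/(2 r) = 1
print(degrees.count(0) / N)  # close to p_0^st = (r/alpha)(1 - e^{-alpha/r}) ~ 0.43
```

At $t = 20$ the relaxation factor $e^{-2rt}$ is negligible, so the empirical mean degree and the fraction of isolated nodes can be compared directly against the stationary predictions of Eqs.~\eqref{eq:StatDegDist} and \eqref{eq:meandeg}.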
\section{Percolation transition} The time-dependent degree distribution and its moments allow us to study network-wide connectivity properties during the growth process. As an illustration, we herein consider the emergence of the giant component, which is the order parameter of the percolation phase transition and is of crucial importance, because connectivity is usually assumed to be the first proxy for network functionality. Let us denote by $u(t)$ the probability that, at time $t$, a node is not in the giant component via one of its links. If that node has $k$ connections, the probability to belong to the giant component is then $1 - u(t)^k$. Averaging over the degree distribution, we obtain the size of the giant component \begin{equation} \label{eq:SizeGC} S(t) = \sum_{k=0}^{\infty} p_k(t) \left(1 - u(t)^k\right) = 1 - g(u,t), \end{equation} where $g(z,t)$ is the time-dependent degree generating function used to solve Eq.~\eqref{eq:MastEq}. To solve Eq.~\eqref{eq:SizeGC}, we need to know the value of $u(t)$. This can be obtained in a recursive manner, by noting that the condition for a node not to belong to the giant component following one of its links is that the node at the end of the link we are following does not belong to the giant component via any of its other $k-1$ neighbors, i.e., $u(t)^{k-1}$. To account for the network heterogeneity, this latter quantity needs to be averaged in a similar fashion to the computation of the giant component $S(t)$, although not with the degree distribution itself. Indeed, let $q_{k}(t)$ denote the probability that a node at the end of a link has $k$ connections at time $t$. It then follows that $q_{k}(t)$ is proportional to $k p_{k}(t)$. The proportionality factor must ensure the normalization of this distribution, that is, $q_{k}(t) = k p_{k}(t) / \langle k \rangle(t)$.
Hence, we arrive at our self-consistent equation for $u(t)$, \begin{equation} \label{eq:RecRel} u(t) = \frac{1}{\langle k \rangle (t)} \sum_{k=0}^{\infty} k \, p_k(t)\, u(t)^{k-1} = \frac{\partial_u g(u,t)}{\partial_u g(1,t)}. \end{equation} Note that $u(t)=1$ is always a solution to Eq.~\eqref{eq:RecRel} and, if plugged in Eq.~\eqref{eq:SizeGC}, we obtain $S(t)=0$. This absence of the giant component is of course consistent with the physical meaning of $u(t)$, and it could have been guessed \textit{a priori}. When a second, non-trivial solution $u(t) \neq 1$ exists, we find a finite $S(t)$. The appearance of this second solution marks the existence of a percolation-like phase transition, and occurs when the derivatives of both sides of Eq.~\eqref{eq:RecRel}, evaluated at $u(t)=1$, are equal. In our case, the condition for criticality is given by the combination of values $(t,\alpha,r)$ that satisfy \begin{equation} \label{eq:crit_point} 1 = \frac{2\alpha}{3r} \frac{1-2e^{-rt} + e^{rt}}{1 + e^{rt}}. \end{equation} \begin{figure}[!t] \centering \includegraphics[width=0.9\columnwidth]{Fig3.pdf} \caption{In $(a)$, size of the giant component as a function of time and the reset rate. The growth rate is $\alpha = 1$. Separating the percolating and non-percolating phases, there is the critical line $t_c(\alpha, r)$, from Eq.~\eqref{eq:crit_point}. In $(b)$, temporal evolution of the size of the giant component for different values of $\alpha$ and $r$. Solid lines come from the analytical approximations Eqs.~\eqref{eq:SizeGC} and \eqref{eq:RecRel}, markers come from simulations. In all curves, we have considered $N = 20000$ and averages over $10$ realizations.} \label{fig:fig3} \end{figure} We graphically investigate the time-dependent percolation framework developed above in Fig.~\ref{fig:fig3}$(a)$. We show the size of the giant component $S$ as a function of time and of the resetting parameter, for fixed $\alpha = 1$.
We see that as resetting becomes more and more probable, the giant component needs more time to emerge and its stationary value tends to be smaller. There is a value of the resetting rate, $2 \alpha / 3$, above which the giant component never emerges, no matter how long we wait. That is, the minimum average degree (at stationarity) for a network to display a percolating structure is $3/4$. This is somewhat counterintuitive at first sight, because we would expect, on average, at least one link per node in order to find a connected subgraph traversing a large fraction of the network. Indeed, for the Erd\H{o}s-R\'{e}nyi model the giant component is found only if $\langle k \rangle > 1$. This conundrum is solved by looking at higher moments of the degree distribution: if $\alpha$ is large enough, or $r$ is small enough ---they need to satisfy $\alpha / r > 3/2$--- the fluctuations from the mean value become relevant and create nodes with large enough degrees, so the giant component can emerge even if, on average, nodes have less than one neighbor. It is not difficult to verify that the Molloy-Reed criterion~\cite{molloy1995critical} for the existence of the giant component precisely leads to $\langle k \rangle^{\text{st}} > 3/4$. To further evince the non-trivial impact of the growth and resetting rates on the percolation transition, in Fig.~\ref{fig:fig3}$(b)$ we show the time evolution of the size of the giant component for several pairs $(\alpha, r)$. We see that if their ratio is the same, the curves tend to the same stationary value, as expected from the long-time limit of the degree distribution and the mean degree. However, the absolute values of $(\alpha, r)$ do have a strong impact on the critical point and on the time scale to reach the stationary value, as can be appreciated in the ordering of the curves. We see, moreover, that theory and simulations match very well in all the cases.
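The critical line $t_c(\alpha, r)$ of Eq.~\eqref{eq:crit_point} has no closed form in $t$, but a simple bisection recovers it numerically (a sketch; the bracket upper bound and the tolerance are arbitrary choices, and no root exists when $r \geq 2\alpha/3$):

```python
import math

def crit_gap(t, alpha, r):
    # Right-hand side of the criticality condition minus 1; a root marks t_c.
    e_m, e_p = math.exp(-r * t), math.exp(r * t)
    return (2 * alpha / (3 * r)) * (1 - 2 * e_m + e_p) / (1 + e_p) - 1

def critical_time(alpha, r, t_hi=50.0, tol=1e-10):
    # Bisection on [0, t_hi]; meaningful only for r < 2*alpha/3, since the
    # right-hand side saturates at 2*alpha/(3*r) in the long-time limit.
    if r >= 2 * alpha / 3:
        return math.inf  # the giant component never emerges
    t_lo = 0.0
    while t_hi - t_lo > tol:
        t_mid = 0.5 * (t_lo + t_hi)
        if crit_gap(t_mid, alpha, r) < 0:
            t_lo = t_mid
        else:
            t_hi = t_mid
    return 0.5 * (t_lo + t_hi)

print(critical_time(1.0, 0.5))   # finite critical time, on the line of Fig. 3(a)
print(critical_time(1.0, 0.7))   # inf: r exceeds 2*alpha/3
```

For $\alpha = 1$, $r = 1/2$, substituting $y = e^{rt}$ reduces Eq.~\eqref{eq:crit_point} to the quadratic $y^2 + y - 8 = 0$, which gives an independent check of the bisection result.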
\section{First-passage statistics} A final point that we address is the first-passage properties of this coupled multiparticle system. Let $q_k(t)$ be the probability that at time $t$ a randomly selected node has degree $k$ without having arrived at degree $k^* > k$, and let $a_{k^*}(t)$ be the probability that at time $t$ a randomly selected node has arrived at degree $k^*$. Mathematically, this is equivalent to setting an absorbing boundary at $k^*$, so there is no outflow of probability from that state. The master equations for these quantities are \begin{align} \diff{q_k} = & \alpha q_{k-1} - \alpha q_{k} - r q_{k} + r(k+1)q_{k+1} - r k q_{k} + \nonumber \\ & + \delta_{k,0} r (1 - a_{k^*}) \quad \text{for $0 \leq k \leq k^*-2$}. \label{eq:survprob1} \\ \diff{q_k} = & \alpha q_{k-1} - \alpha q_{k} - r q_{k} - r k q_{k} \quad \text{for $k = k^*-1$}. \label{eq:survprob2} \\ \diff{a_k} = & \alpha q_{k-1} \quad \text{for $k = k^*$}. \label{eq:survprob3} \end{align} The initial conditions are $q_k(0) = \delta_{k,0}$. Clearly, the first-passage time probability to the target degree $k^*$ is given by $f_{k^*}(t) = \alpha q_{k^*-1} $, with the mean first-passage time being then $\langle t_{k^*} \rangle = \int \diffint t\, t f_{k^*}$. We show in Fig.~\ref{fig:fig4}$(a)$ that the first-passage distribution obtained from the numerical solution of Eqs.~\eqref{eq:survprob1}--\eqref{eq:survprob3} matches well the results from simulations. It is a non-trivial task to obtain the solution of the above set of equations. In order to proceed and find some analytical results regarding the first-passage statistics, we apply some approximations whose validity will be checked later. In particular, we transform the set of discrete master equations~\eqref{eq:survprob1}--\eqref{eq:survprob3} into a single Fokker-Planck equation, approximating the degree as a continuous variable (see Appendix for the derivation).
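Before passing to the continuum, the discrete system Eqs.~\eqref{eq:survprob1}--\eqref{eq:survprob3} can be integrated directly with a forward-Euler scheme (a minimal sketch, not necessarily the integrator behind Fig.~\ref{fig:fig4}; the step size and the truncation horizon $t_{\max}$ of the mean are illustrative choices assumed large enough):

```python
import numpy as np

def mfpt(alpha, r, k_star, dt=0.01, t_max=200.0):
    # Forward-Euler integration of the absorbing-boundary master equations;
    # q[k] tracks survival in degrees 0..k_star-1, `absorbed` tracks a_{k*}.
    k = np.arange(k_star)
    out_rate = alpha + r + r * k          # total outflux rate of each state
    q = np.zeros(k_star)
    q[0] = 1.0                            # initial condition q_k(0) = delta_{k,0}
    absorbed, t, mean = 0.0, 0.0, 0.0
    for _ in range(int(t_max / dt)):
        f = alpha * q[-1]                 # first-passage density f_{k*}(t)
        mean += t * f * dt                # accumulate <t_{k*}> = int t f dt
        dq = -out_rate * q
        dq[1:] += alpha * q[:-1]          # a link is gained
        dq[:-1] += r * k[1:] * q[1:]      # a link is lost to a neighbor's reset
        dq[0] += r * (1.0 - absorbed)     # reinjection of reset nodes
        absorbed += f * dt
        q += dq * dt
        t += dt
    return mean

print([round(mfpt(2.0, 0.5, ks), 2) for ks in (4, 6, 8)])
```

With $\alpha = 2$ and $r = 0.5$ the inflection degree is $k_{\text{ip}} = \alpha/r = 4$, so the targets $k^* = 6, 8$ probe the reset-dominated regime where the mean first-passage time grows quickly.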
If we define $q(k,t)$ as the continuous version of $q_k(t)$, that is, $q(k,t)$ is a probability density function that gives the probability of finding a randomly chosen node with degree $k \in \left[k, k+\diffint{k} \right]$ at time $t \in \left[ t, t + \diffint{t} \right]$ that has not yet arrived at $k^*$, we obtain \begin{equation} \label{eq:FP_main} \pdiff{q}{t} = - \pdiff{}{k} \left[ v(k) q(k,t) \right] + \ppdiff{}{k} \left[ D(k) q(k,t) \right] - r q(k,t) + r \delta(k), \end{equation} with the drift coefficient $v(k)=(\alpha - rk )$ and the diffusion coefficient $ D(k) = (rk + \alpha)/2 $. The boundary conditions are $q(k^*,t) = 0$ and $\partial_k q(k,t) |_{k=0} = 0$, and the initial condition is $q(k,0) = \delta(k)$. The survival probability is defined as \begin{equation} S(k^*, t) = \int_0^{k^*} \, \diffint{k} \, q(k,t), \end{equation} and it relates to the mean first-passage time as \begin{equation} \langle T_{k^*} \rangle = - \int_0^{\infty} \diffint t\, t \pdiff{S(k^*,t)}{t} = \Tilde{S}(k^*, s=0), \end{equation} where $\Tilde{S}(k^*, s) = \int_0^{\infty} \diffint{t} e^{-st} S(k^*,t)$ is the Laplace transform of the survival probability. \begin{figure}[!t] \centering \includegraphics[width=0.48\columnwidth]{Fig4a.pdf} \includegraphics[width=0.48\columnwidth]{Fig4b.pdf} \caption{In $(a)$, first-passage time distribution to target degrees $k^* = 4, 5, \ldots, 11$. Solid lines correspond to $f_{k^*}$ computed from the numerical integration of Eqs.~\eqref{eq:survprob1}--\eqref{eq:survprob3}, while markers come from simulations. The growth and reset parameters are $\alpha = 2$ and $r = 0.5$. In $(b)$, mean first-passage time computed from the analytical expression obtained in the Appendix (solid lines) and from simulations (markers). From left to right, the curves correspond to $(\alpha, r) = (2,2.5)$, $(2,0.5)$ and $(2,0.2)$.
In all simulations, we use networks with $N=100$ nodes.} \label{fig:fig4} \end{figure} In the Appendix, we obtain a solution for $\Tilde{S}$ in terms of special functions. However, a direct inspection of Equation~\eqref{eq:FP_main} already provides information that would hardly be obtained from the discrete master equations or grasped by analyzing directly the analytical solution. Given the growth and reset parameters, we see that the drift term vanishes at $k_{\text{ip}} = \alpha / r$. Those nodes with degrees $k < k_{\text{ip}}$ experience a positive drift toward $k_{\text{ip}}$, while, on the contrary, those with degrees $k > k_{\text{ip}}$ suffer a negative drift toward $k_{\text{ip}}$. Consequently, we anticipate different behaviors depending on whether the target degree $k^*$ is larger or smaller than $k_{\text{ip}}$. In particular, in the regime $k^* < k_{\text{ip}}$, the creation of links dominates over the resetting of nodes, and the arrival at the target degree will be relatively fast. In the regime $k^* > k_{\text{ip}}$, however, the resetting dominates and the system does not create connections quickly enough for a node to easily reach the target degree. Thus, we expect a considerable increase in the mean first-passage time. In fact, because the drift is degree-dependent, we expect that the larger the difference $k^* - k_{\text{ip}} > 0$, the more difficult it will be to arrive at the target degree and the longer it will take. Likewise, the larger the difference $k_{\text{ip}} - k^* > 0$, the faster the target will be attained, hence observing a decrease in the MFPT. Note, however, that we should not expect a non-trivial combination of $(\alpha, r)$ that minimizes the mean first-passage time, because our system is now bounded and the non-resetting transitions are unidirectional. Hence, it is natural to suppose that $\langle T_{k^*} \rangle$ shows a concave-to-convex inflection point at $k^* = k_{\text{ip}}$ if $\alpha > r$.
When $\alpha < r$ the inflection point disappears and the mean first-passage time becomes purely convex over all target degrees $k^*$. In Fig.~\ref{fig:fig4}$(b)$, we verify that the analytical approximation for the MFPT reproduces well the results from simulations. Indeed, we find an overall good agreement, especially in the concave region. When we enter the convex regime, where resetting dominates, we observe that some slight differences between the prediction and simulations arise, which can be associated with the failure of the mean-field approach in capturing the full complexity of the growth and reset network process. \section{Conclusions} In summary, we have approached a central problem in network science by reformulating it as a stochastic resetting process. In doing so, we have provided insights that are relevant to both areas of research. On the one hand, network growth under node removal advances our knowledge of stochastic resetting, providing an analytically amenable model characterized by many-body variable interactions. In it, the particles' state evolution is non-trivially coupled in the degree space: growth events can be seen as a two-particle coupling, with a simultaneous increase of one degree unit; reset events can be seen as a $(k+1)$-particle coupling, where one node in state $k$ resets and $k$ other nodes lose one degree unit. On the other hand, we have studied several out-of-equilibrium properties of network growth with node removal, as is customary in stochastic resetting problems. In particular, we have obtained an exact expression for the time-dependent degree distribution, which has allowed us to elucidate network-wide properties such as the existence of a connected component occupying a macroscopic portion of the network.
In addition, we have studied first-passage statistics, finding that our system does not display a minimum in the mean first-passage time from the origin to a target state $ k^* > 0 $, typical of many processes with resetting~\cite{evans2020stochastic}. Instead, we find a monotonically increasing function that, however, can present an inflection point depending on the growth and resetting rates. There are several directions for further research. One deals with the inclusion of more realistic dynamical rules both in the network formation and in the resetting process. This would partially change the mathematical form of the master equations and the challenge would be related to solving more complicated equations that could lead to new physical insights. Along these lines, it would be interesting to test the performance of different growth and reset mechanisms in reproducing the evolution of empirical networks. Another research avenue deals with a more complete characterization of the very same model presented in this article. For example, when computing first-passage statistics we could be interested in knowing the first-passage time of the $i$th fastest node arriving at a target. The cluster dynamics would also be very interesting to explore, as network growth with node removal can be seen as a highly non-trivial aggregation-fragmentation process for which writing a Smoluchowski-like equation and grasping analytical insights remain a considerable technical challenge. Finally, we acknowledge that correlated transitions like the ones presented here are still an under-researched characteristic in stochastic resetting models. Therefore, devising new mathematically tractable models that incorporate this feature and that lend themselves to be eventually compared to empirical data is a necessary step toward a more complete, solid and useful theory of stochastic resetting. \section*{Acknowledgments} The author thanks M. De Domenico and D.
Kiziridis for their comments on the manuscript. \setcounter{equation}{0} \renewcommand{\theequation}{A.\arabic{equation}} \section*{Appendix: Continuous approach} To gain insight into the first-passage problem, we proceed by applying some approximations, namely, treating the discrete degree $k$ as a continuous variable. We will see that even for small target degrees $k^*$, hence a small effective interval, the approximation works well. Our starting point is the set of master equations~\eqref{eq:survprob1}--\eqref{eq:survprob3}. Let $q(k,t)$ be the continuous version of $q_k(t)$, that is, $q(k,t)$ is a probability density function that gives us the probability of finding a node with degree in $\left[k, k+\diffint{k} \right]$ at time in $\left[ t, t + \diffint{t} \right]$. We notice that we can write \begin{equation} \pdiff{q}{t} = \Omega^+(k - \delta k) q(k - \delta k, t) + \Omega^-(k + \delta k) q(k + \delta k, t) - \left[\Omega^+(k) + \Omega^-(k) + \rho(k)\right] q(k, t), \end{equation} where $\Omega^+(k) = \alpha$ is the growth rate, while $\Omega^-(k) = rk$ and $\rho(k) = r$ correspond to the resetting rates, the former due to a neighbor deletion and the latter due to the deletion of the node itself. For convenience, we leave $\delta k$ undetermined during the calculations and set $\delta k = 1$ at the end. Expanding both the rates and the pdf up to order $(\delta k)^2$, we arrive at \begin{equation} \pdiff{q}{t} = - \pdiff{}{k} \left[ v(k) q(k,t) \right] + \ppdiff{}{k} \left[ D(k) q(k,t) \right] - \rho(k) q(k,t), \end{equation} where the drift and diffusion terms are, respectively, $v(k) = \delta k (\Omega^+(k) - \Omega^-(k))$ and $D(k) = \delta k^2 (\Omega^+(k) + \Omega^-(k))/2$. We have not yet incorporated the information given in the master equations for the boundary degrees $k=0$ and $k=k^*$.
The former can be introduced by hand as a delta-like source of probability at the resetting point, $k_{\text{reset}}$, while the latter takes the form of an absorbing boundary condition. Putting all the pieces together, we are left with \begin{equation} \pdiff{q}{t} = - \delta k \pdiff{}{k} \left[ (\alpha - rk) q(k,t) \right] + \frac{\delta k^2}{2} \ppdiff{}{k} \left[ (rk + \alpha) q(k,t) \right] - r q(k,t) + r \delta(k-k_{\text{reset}}), \end{equation} with $q(k^*,t) = 0$, $\partial_k q(k,t) |_{k=0} = 0$ and $q(k,0) = \delta(k-k_0)$. In this continuous-degree approach, the survival probability reads $S(k^*, k_0,t) = \int_0^{k^*} \, \diffint{k} \, q(k,t)$, where $k_0$ is the initial degree. Treating $k_0$ as a variable, the survival probability satisfies the backward equation \begin{equation} \pdiff{S}{t} = (\alpha - r k_0) \pdiff{S}{k_0} + \frac{\delta k^2}{2} (r k_0 + \alpha) \ppdiff{S}{k_0} - r S(k^*,k_0,t) + r S(k^*,k_{\text{reset}},t), \end{equation} with $S(k^*,k^*, t) = 0$ and $\partial_{k_0} S(k^*,k_0, t)|_{k_0=0} = 0$. Introducing the Laplace transform $\Tilde{S}(k^*,k_0, s) = \int_0^{\infty} \diffint{t} e^{-st} S(k^*,k_0,t)$, we obtain \begin{equation} (\alpha - r k_0) \pdiff{\Tilde{S}}{k_0} + \frac{r k_0 + \alpha}{2} \ppdiff{\Tilde{S}}{k_0} - (r + s) \Tilde{S}(k^*,k_0,s) = - 1 - r \Tilde{S}(k^*,k_{\text{reset}},s). \end{equation} The general solution to this equation is \begin{equation} \Tilde{S}(k^*,k_0,s) = \frac{r \Tilde{S}(k^*, k_{\text{reset}},s) + 1}{s+r} + c_1 \mathcal{U}(1,k_0) + c_2 \mathcal{L}(0,k_0), \end{equation} where to ease the notation we have introduced the functions $\mathcal{U}(x,y) = U(x + r/\alpha, x - 1 + 4 \alpha/ r, 2y + 2\alpha/r)$, with $U(\cdot, \cdot, \cdot)$ the confluent hypergeometric function of the second kind, and $\mathcal{L}(x,y) = L_{-(x+1) - s/r}^{x-1+ 4\alpha/r} (2y + \alpha/r)$, with $L_{a}^{b} (\cdot)$ the generalized Laguerre polynomial.
The derivative is \begin{equation} \pdiff{\Tilde{S}}{k_0} = -\frac{2}{r} (r+s) \, c_1 \, \mathcal{U}(2,k_0) -2 \, c_2 \, \mathcal{L}(1,k_0). \end{equation} Applying the boundary conditions, after some straightforward algebra we obtain \begin{align} c_1 & = \frac{r \Tilde{S}(k^*, k_{\text{reset}},s) + 1}{s+r} \frac{B_2}{A_2 B_1 - A_1 B_2} \\ c_2 & = \frac{r \Tilde{S}(k^*, k_{\text{reset}},s) + 1}{s+r} \frac{A_2}{A_1 B_2 - A_2 B_1}, \end{align} with \begin{align} A_1 & = \mathcal{U}(1, k^*), \\ B_1 & = \mathcal{L}(0, k^*), \\ A_2 & = -\frac{2}{r} (r+s) \mathcal{U}(2, 0) , \\ B_2 & = -2 \mathcal{L}(1, 0). \end{align} Note that all these quantities are constant with respect to the ``spatial'' variable $k_0$ but of course do depend explicitly on the growth and reset rates, $\alpha$ and $r$, as well as the target degree $k^*$ and the Laplace variable $s$. Now setting $k_0 = k_{\text{reset}} = 0$, we obtain the Laplace transform of the survival probability of a randomly chosen node that starts with no connections and has not yet reached degree $k^*$. Taking the limit $s \to 0$, we obtain an analytical expression for the mean first-passage time of a randomly chosen node of an initially empty network to reach degree $k^*$. Note that if we relax the condition $k_{\text{reset}} = 0$, the value and relative ordering of $k_{\text{reset}}$, $k^*$, and $k_{\text{ip}}$ (the degree value at which the drift vanishes, see main text) will impact the behavior of the first-passage distribution and its moments. Despite being an interesting theoretical exercise, these cases do not bring new phenomenological insights with respect to the case $k_{\text{reset}} = 0$. For this reason, a careful report of all the combinations falls outside the scope of the present article, and we stick to the case of nodes losing all their links in the resetting events. \section*{References} \bibliographystyle{iopart-num}
\section{Introduction.} \subsection{General discussion.} A vehicle moving through the stratosphere (altitudes of 40--50 km) at hypersonic velocities (Mach 8--15) is covered by a plasma sheath. Typically, the plasma density $n$ can be as high as $10^{18}m^{-3}$ with corresponding plasma frequency \begin{equation} \label{plasma_freq} 2\pi f_L = \omega_L = \left(\frac{e^2 n}{M \varepsilon_0}\right)^{1/2} \end{equation} of about $9 GHz$. In (\ref{plasma_freq}), $e$ is the electron charge $-1.6\times 10^{-19} C$, $\varepsilon_0 = 8.85\times 10^{-12} CV^{-1}m^{-1}$ and $M$ is the electron mass $9\times 10^{-31} kg$. Therefore the plasma is opaque to frequencies lower than $9 GHz$. Direct communication through such a plasma to and from the vehicle is impossible because the frequencies $f$ suitable for long distance propagation through the atmosphere are usually much lower. For example, the standard frequencies used for navigational satellite systems, including the global positioning system (GPS), are less than $2 GHz$. For the GPS, $f = 1.57542 GHz$. The challenge is to devise means to maintain continuous contact with the hypersonic vehicle. When such vehicles were principally spacecraft, a blackout period of up to two minutes was acceptable albeit undesirable. But when the vehicles are of military origin, it is clear that continuous contact is essential for both targeting and rapid abort reasons. It is a challenge which has drawn many responses. They fall into several categories. The first ignores the presence of the plasma by using signals with frequencies well above the plasma frequency. The difficulty with this method is that such signals are heavily attenuated in and scattered by the atmosphere. A second means, which also ignores the plasma, is to use low frequency signals in the $100 MHz$ range where wavelengths are large compared to the plasma sheath thickness (typically of the order of a meter).
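The quoted cutoff follows directly from (\ref{plasma_freq}); a short numerical check (an illustrative sketch of ours, not part of the original text):

```python
import math

# Electron plasma frequency f_L = (1/2pi) * sqrt(e^2 n / (M eps0))
# for the sheath density quoted above, n = 1e18 m^-3.
e = 1.6e-19        # electron charge magnitude, C
M = 9.0e-31        # electron mass, kg (value used in the text)
eps0 = 8.85e-12    # vacuum permittivity, C V^-1 m^-1

def plasma_frequency_hz(n):
    return math.sqrt(e**2 * n / (M * eps0)) / (2.0 * math.pi)

print(plasma_frequency_hz(1e18) / 1e9)  # about 9 GHz, as stated
```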
But such solutions have high cost and low bit rates and are not well supported by existing infrastructure. A third category of solutions modifies the plasma itself. One approach is to remove, by vehicle reshaping, for example, the plasma from certain points on the vehicle at which one might place an antenna. Another is to destroy it by electrophilic injection or by injecting water drops. A third approach is to use powerful magnets to reshape the plasma. Such solutions involve a heavy cost in that design features necessary for their implementation must be built into the vehicle a priori. Nevertheless some are feasible and worthy of consideration. For example, it is possible to build an antenna into a sharp leading edge which would protrude beyond the plasma and survive long enough (it would eventually be destroyed by ablation) to cover the flight time. The fourth category of solutions, and the one to which we are attracted, uses the properties of the plasma itself to effect transmission in the same way a judo expert uses the strength and motion of an opponent to defeat him. One idea is to create new modes of oscillation and propagation by the introduction of magnetic fields. Indeed, for strong enough fields, the Larmor frequency $f_{Larmor}$ is sufficiently large that the window $(f_{Larmor}, \max(f_L))$ for which the plasma is opaque is small and transmission can be achieved for frequencies below $f_{Larmor}$. But the introduction of magnetic fields involves large additional weight and new design features. The second idea is much simpler. Its aim is to take advantage of the nonlinear properties of the plasma to render it effectively transparent to the signal. Communications both to and from the vehicle are feasible using basically the same ideas. We shall first describe the ``to the vehicle'' case.
Consider Figure \ref{plasma_response_fig} in which we show schematically the response of the plasma to an incoming signal with low frequency $\omega$ from a direction which makes an angle $\phi$ with the normal to the vehicle. There are two principal features to the response. First, there is a reflection from the layer at a point $z=z_r$ where the plasma frequency at the point $\omega_L (z_r)$ is $\omega\cos\phi$. However, the influence of the signal is felt beyond that point, namely at the resonant layer $z=0$ where $\omega_L (0) = \omega$. Langmuir oscillations are excited there which produce large transverse and longitudinal components of the electric field. The resonant layer acts as an antenna. The task is to find a way to connect the antenna at the resonant layer at $z=0$ to a receiver on board the vehicle at $z=R$. There are several possibilities which we have outlined before \cite{Nazarenko1994, NNR1994, NNR1995}. The most practical one, however, is also the simplest, and was first suggested, without a detailed numerical simulation, in \cite{Nazarenko1994}. We use an onboard source, which we call the pump, to generate electromagnetic signals of sufficiently high frequency $\omega_p$ ($\omega_p > \max\limits_{z}^{}\omega_L (z) + \omega$) that they can propagate through the plasma. There are several candidates for such a source. For example, available on the open market is a klystron amplifier which can generate $3 kW$ of power at frequencies of $12-14 GHz$. These high frequency waves have only to travel distances of a meter or less. They interact nonlinearly with and scatter off the signal wave. Not surprisingly, the largest contribution to the scattered wave comes from the nonlinear interaction of the pump wave with the plasma density distortion induced by the incoming signal wave at the resonant layer. We call the scattered wave a Stokes wave because the scattering process is a three wave interaction analogous to Raman scattering.
The Stokes wave with frequency $\omega_S = \omega_p - \omega$ carries the information encoded on the signal wave back to the vehicle. We will show that, whereas much of the scattered Stokes wave propagates away from the vehicle, a significant fraction is returned to the vehicle. What is remarkable is this. The ratio of the power flux of the Stokes wave received at the vehicle to the power flux contained in the signal wave at the plasma edge can be between 0.7 and 2 percent. This means that reception of GPS signals may be possible because one simply needs an onboard receiver approximately 100 times more sensitive than commercially available hand-held receivers, or a sufficiently larger antenna. We shall discuss in the conclusion the sensitivity required for a variety of sources. Communications from the vehicle requires two power sources on the vehicle. One, which we term the Stokes wave generator, will also carry the signal. The other is the pump wave. Both have carrier frequencies above the maximum of the plasma frequency. Their nonlinear interaction in the plasma produces an oscillation of frequency $\omega = \omega_p -\omega_S$. Consider Figure \ref{from_concept}. For $z_r < z < R$ where $z_r$ is determined by $\omega_L (z_r) = \omega\cos\phi$ and $\phi$ is calculated from the differences in propagation directions of the pump and Stokes waves, the oscillation does not propagate and its strength decays away from the vehicle. Nevertheless this oscillation is sufficiently strong to act as a power source for a propagating wave in the region $z < z_r$ where $\omega\cos\phi > \omega_L (z)$. In the conclusion we analyze what power is required in order for the signal to be detected by distant receivers. It appears that communication can be put into practice even with generators that are readily available on the market. \subsection{Plan of the paper.} The plan of the paper is as follows.
We begin in {\it Section~2} with a detailed analysis of the two dimensional propagation and interaction of a signal wave of frequency $\omega$, a pump wave of frequency $\omega_p$ and a Stokes wave of frequency $\omega_S$ through a plasma with a given density profile $n_0 (z)$ where $z$ is the direction normal to the vehicle. The key equation is a modification of the well-known Ginzburg equation \cite{Ginzburg_bib} \begin{eqnarray} \label{Ginzburg_equation} \frac{\partial}{\partial z}\left(\left(\frac{\varepsilon_0}{\varepsilon(z,\Omega)}\right)\frac{\partial\vec H}{\partial z}\right) + \frac{\varepsilon_0}{\varepsilon(z,\Omega)}\frac{\partial^2\vec H}{\partial y^2} + \frac{\Omega^2}{c^2}\vec H =\\ =-\left[\nabla\times\left(\frac{\varepsilon_0}{\varepsilon(z,\Omega)}\vec j_{NL}\right)\right],\nonumber \end{eqnarray} for the magnetic field amplitude $(H(y,z), 0, 0)\mathrm e^{-\mathrm i\Omega t}$ of an oscillation of frequency $\Omega$. In (\ref{Ginzburg_equation}), the effective electric susceptibility is \begin{equation} \label{epsilon_z_Omega} \varepsilon (z, \Omega) = \varepsilon_0 \left( 1 - \frac{\omega_L^2 (z)}{\Omega^2}\left(\frac{1}{1+\mathrm i\nu/\Omega}\right)\right), \end{equation} ($\omega_L (z)$ is the local plasma frequency and $\nu$ the collision frequency). The susceptibility is due to the linear response of the plasma to the electric fields of whichever waves are involved. The nonlinear current $\vec j_{NL}$ will be determined both by the product of the plasma density distortion with the linear current and by the nonlinear response of the electron velocity field due principally to dynamic pressure forces. We observe that, for $\Omega \gg \max\limits_{z}^{}\omega_L (z)$, the electric susceptibility is approximately $\varepsilon_0$ and the left hand side of the nonlinear Ginzburg equation (\ref{Ginzburg_equation}) is the usual wave operator. How do we use (\ref{Ginzburg_equation})? For the case of communication to the vehicle, we use it in two ways.
First with $\vec j_{NL}=0$, we determine for $\Omega = \omega$ and $H(y,z) = H(z)\mathrm e^{\mathrm i(\omega/c)y\sin\phi}$, the field $H(z)$ from which the distortion to the plasma produced by the incoming wave is calculated. In this instance, $H(z)$ satisfies \begin{eqnarray} \frac{d^2 H}{d z^2} - \frac{1}{\varepsilon(z,\omega)}\frac{d \varepsilon(z,\omega)}{d z}\frac{d H}{d z} +\label{Ginzburg_to_equation}\\ \frac{\omega^2}{c^2}\left(\frac{\varepsilon(z,\omega)}{\varepsilon_0} - \sin^2 \phi\right)H = 0.\nonumber \end{eqnarray} A glance at the third term shows that propagation is impossible for $\varepsilon/\varepsilon_0 < \sin^2 \phi$ or, from (\ref{epsilon_z_Omega}), for $\omega\cos\phi < \omega_L (z)$. The importance of the resonance layer where $\varepsilon (z,\omega) \simeq 0$ is seen from the denominator in the second term. Having solved for $H(z)$ from (\ref{Ginzburg_to_equation}) we can then calculate the plasma distortion field $\delta n(z)$. Its interaction with the pumping wave then produces a nonlinear current $\vec j_{NL}$ which gives rise to the Stokes wave. The Stokes wave $H_S (y,z)$ and its propagation are calculated by solving (\ref{Ginzburg_equation}) with this $\vec j_{NL}$ and appropriate boundary conditions at the plasma edge and at the vehicle. Our goal is to determine $H_S (y, z = R)$. We give the results of both the numerical simulation and an analytic estimation. The latter takes advantage of the fact that, for the Stokes wave, $\omega_S \gg \max\limits_{z}^{}\omega_L (z)$ and that the principal plasma distortion occurs at the resonance layer. For communicating from the vehicle, we solve (\ref{Ginzburg_to_equation}) with the right hand side given by $-\nabla\times\frac{\varepsilon_0}{\varepsilon}\vec j_{NL}$ with $\vec j_{NL}$ calculated from the nonlinear interaction of the pump and Stokes waves.
Here the goal is to calculate the flux of power of the signal wave with frequency $\omega = \omega_p - \omega_S$ as it leaves the plasma edge in the direction of some distant receiver. In {\it Section 3}, we describe the numerical procedure and give detailed results of our calculations. Finally, in the {\it Conclusion}, we use our results to calculate the powers of both the incoming and outgoing signals at their respective receivers. We discuss in addition several important considerations: \begin{itemize} \item The advantages, particularly in terms of available power, of using pulsed signals. \item The possibility of using GPS sources for incoming signals. \item The challenges involved in making these ideas practicable. \end{itemize} \section{Analytics.} \subsection{Basic theory.} We shall study a very idealized situation in which the plasma sheath is a flat slab. The plasma density is a linear function of the horizontal coordinate $z$ \begin{equation} n_0 (z) = n_0 \frac{z+L}{R+L}. \end{equation} In this geometry the vehicle is a vertical wall placed at $z=R$. The plasma density near the vehicle is $n_0$. The plasma contacts the vacuum at $z=-L$, where $n=0$. We shall study two situations: communication to the vehicle and communication from the vehicle. In both cases, three almost monochromatic electromagnetic waves exist in the plasma. Two of them have high frequencies $\omega_p$ (pumping wave), $\omega_S$ (Stokes wave). The third one has low frequency $\omega$, satisfying the condition \begin{equation} \omega = \omega_p - \omega_S. \end{equation} In the ``to the vehicle'' case $\omega$ is the circular frequency of the incoming signal. In the ``from the vehicle'' case, $\omega$ is the circular frequency of the outgoing signal. In both these cases, the low-frequency signal plays a key role. Because the local plasma frequency at $z = 0$ is $\omega$, \begin{equation} \omega^2 = \frac{e^2 n_0}{M \varepsilon_0}\frac{L}{R+L}.
\end{equation} Let us also denote the Langmuir frequency at the vehicle as $$ \omega_L^2 = \frac{e^2 n_0}{M \varepsilon_0}. $$ Thus $$ \frac{L}{R+L} = \frac{\omega^2}{\omega^2_L} = \frac{f^2}{f^2_L}. $$ In a realistic situation $f_L\simeq 9 GHz$ (corresponding to $n_0 = 10^{18} m^{-3}$), $f \simeq 2 GHz$, $R+L = 1 m$, and $L \simeq 0.05 m$. The wavelength of the incoming signal in the vacuum is $\lambda = c/f = 0.15 m$, so that $\lambda > L$. We point out that in the case of low-frequency wave reflection from the ionosphere, the situation is the opposite: $\lambda \ll L$. We shall assume that the ions' positions are fixed and the plasma is cold ($T_e \simeq 0$). The magnetic field has only one component $H_x$. The electric field has two components $E_y$, $E_z$. Neither the electric nor magnetic fields depend on the $x$-coordinate. Maxwell's equations read $$ \vec E (0, E_y(y,z), E_z (y,z));\;\; \vec H (H(y,z), 0, 0) $$ \begin{eqnarray} \nabla \times \vec E = -\mu_0 \frac{\partial \vec H}{\partial t},\label{rot_E}\\ \nabla \times \vec H = \varepsilon_0 \frac{\partial \vec E}{\partial t} + \vec j,\label{rot_H}\\ \nabla \cdot \vec H = 0,\label{div_H}\\ \nabla \cdot \varepsilon_0 \vec E = e (n - n_0(z)),\; \vec j = en\vec v.\label{div_E} \end{eqnarray} \begin{eqnarray} \frac{\partial \rho}{\partial t} + \nabla \cdot \vec j = 0\nonumber\\ \frac{\partial n}{\partial t} + \nabla \cdot n\vec v = 0\label{Continuity}, \end{eqnarray} \begin{equation} \label{Euler} \frac{\partial \vec v}{\partial t} + \nu \vec v = \frac{e \vec E}{M} + \vec v \times\left(\left[\nabla\times\vec v \right] + \frac{\mu_0 e}{M}\vec H\right) - \frac{1}{2}\nabla v^2, \end{equation} \begin{eqnarray*} c = \frac{1}{\sqrt{\varepsilon_0\mu_0}}\simeq 3\times10^{8}ms^{-1},\\ n_0 \simeq 10^{18}m^{-3},\; \omega_{L}^2(R) = \frac{e^2n_0}{M\varepsilon_0},\\ \frac{\omega_L(R)}{2\pi} = f_L(R) = 9 GHz.
\end{eqnarray*} The power flux in vacuum is $$ S = 2\varepsilon_0 c \left|E\right|^2 = 2c \mu_0\left|H\right|^2 Wm^{-2};\;1 Wm^{-2} \rightarrow 13.7 Vm^{-1}. $$ In equation (\ref{Euler}) $\nu$ is the effective friction of the electron fluid with the neutral gas, sometimes called the ion collision frequency. We take $\nu = 10^8 Hz$. The current $\vec j = \vec j_L + \vec j_{NL}$. $\vec j_L$ is the linear response of the plasma to the electric field, $\vec j_{NL}$ is the current due to nonlinear effects. For a monochromatic wave of frequency~$\Omega$, Maxwell's equations can be rewritten in the following form \begin{eqnarray*} \nabla\times\vec H &=& -\mathrm i\Omega\varepsilon\vec E + \vec j_{NL},\;\varepsilon_0 \nabla\times\vec E = \mathrm i\varepsilon_0 \mu_0 \Omega \vec H,\\ \mathrm i\frac{\Omega}{c^2} \vec H &=& \frac{\mathrm i}{\Omega}\nabla\times\left(\frac{\varepsilon_0}{\varepsilon}\nabla\times\vec H\right) - \frac{\mathrm i}{\Omega}\nabla\times\left(\frac{\varepsilon_0}{\varepsilon}\vec j_{NL}\right), \end{eqnarray*} \begin{equation} \label{ProtoGinzburg} \frac{\Omega^2}{c^2}\vec H = \nabla\times\left(\frac{\varepsilon_0}{\varepsilon}\nabla\times\vec H\right) - \nabla\times\left(\frac{\varepsilon_0}{\varepsilon}\vec j_{NL}\right). \end{equation} In our geometry, (\ref{ProtoGinzburg}) is one scalar equation. We should stress that this is an exact equation. The only challenge is the calculation of $\vec j_{NL}$. Finally, for the magnetic field, one obtains the Ginzburg equation \begin{eqnarray} \frac{\partial^2 H}{\partial z^2} &-& \frac{\varepsilon'}{\varepsilon}\frac{\partial H}{\partial z} + \frac{\varepsilon}{\varepsilon_0}\frac{\Omega^2}{c^2}H+\nonumber\\ + \frac{\partial^2 H}{\partial y^2} &=& - \left(\nabla\times\vec j_{NL}\right)_x - \frac{\varepsilon'}{\varepsilon} (\vec j_{NL})_y,\; \varepsilon' = \frac{\partial \varepsilon}{\partial z}. \label{Ginzburg} \end{eqnarray} For the high frequency pump and Stokes waves $\varepsilon\simeq\varepsilon_0$.
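With the linear density profile above, the layer positions follow in closed form: $\omega_L^2(z) = \omega_L^2\,(z+L)/(R+L)$, so the resonance layer $\omega_L(z) = \omega$ sits at $z=0$ by construction, while the reflection point $\omega_L(z_r) = \omega\cos\phi$ lands at $z_r = -L\sin^2\phi$. A small numerical sketch (our illustration, with the parameter values quoted in the text):

```python
import math

# Layer geometry for the linear sheath profile n0(z) = n0 (z+L)/(R+L):
# the signal's resonance layer (omega_L = omega) is at z = 0, and the
# reflection point (omega_L = omega cos(phi)) is at z_r = -L sin(phi)^2.
f_L = 9.0e9        # plasma frequency at the vehicle, Hz
f = 2.0e9          # signal frequency, Hz
slab = 1.0         # total sheath thickness R + L, m

L = slab * (f / f_L) ** 2          # vacuum edge sits at z = -L
phi = 0.5                          # incidence angle, rad (optimal value)
z_r = -L * math.sin(phi) ** 2      # reflection point

print(L, z_r)  # L is about 0.05 m, and -L < z_r < 0
```

The reflection point thus always lies between the vacuum edge and the resonance layer, consistent with Figure \ref{plasma_response_fig}.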
Some exact solutions of simplified versions of the homogeneous Ginzburg equation for several important cases can be found in Appendix \ref{APP:Analytics_homogenious}. What we are going to do is the following: in subsection \ref{Lin_resp} we shall calculate linear responses of the plasma to an electromagnetic wave, such as the electron velocity, linear current and the electron density profile perturbation; the calculation of the first nonlinear correction to the linear current is done in subsection \ref{Nonlin_curr}; analytic estimations for ``to the vehicle'' and ``from the vehicle'' cases are given in subsections \ref{Analytics_to} and \ref{Analytics_from} respectively. \subsection{\label{Lin_resp}Linear responses.} In order to calculate the nonlinear current we need to consider the linear responses of the plasma to the presence of an electromagnetic wave. For a field with frequency $\Omega$ $$ H \sim \mathrm e^{-\mathrm i\Omega t}, $$ from (\ref{Euler}), the linear term in the velocity is \begin{equation} \label{Linear_v} \vec v_L = \frac{\mathrm i e}{M \Omega}\frac{1}{1+\mathrm i\nu/\Omega}\vec E, \end{equation} and $$ \vec j_L = \frac{\mathrm i e^2 n_0}{M \Omega}\frac{1}{1+\mathrm i\nu/\Omega}\vec E. $$ From (\ref{rot_H}) $$ \nabla\times\vec H = -\mathrm i\Omega\varepsilon_0 \vec E + \frac{\mathrm i e^2 n_0}{M\Omega}\frac{1}{1 + \mathrm i\nu/\Omega}\vec E = -\mathrm i\Omega\varepsilon\vec E. $$ Using Maxwell's equations, one can express all responses in terms of the magnetic field \begin{eqnarray} \vec E = \frac{\mathrm i}{\Omega\varepsilon(\Omega)}\left(0, \frac{\partial}{\partial z}H, -\frac{\partial}{\partial y}H\right),\\ \vec v_L = -\frac{e}{M\Omega^2\varepsilon(\Omega)}\frac{1}{1 + \mathrm i\nu/\Omega} \left(0, \frac{\partial}{\partial z}H, -\frac{\partial}{\partial y}H\right),\label{linear_velocity}\\ \vec j_L = -\left(1 -\frac{\varepsilon_0}{\varepsilon(\Omega)}\right) \left(0, \frac{\partial}{\partial z}H, -\frac{\partial}{\partial y}H\right).
\end{eqnarray} The expression for a distortion $\delta n$ of the electron density in the plasma $n (z) = n_0 (z) + \delta n (y,z,t)$ can be derived from (\ref{Continuity}) and (\ref{linear_velocity}), \begin{equation} \delta n = -\frac{\mathrm i e}{M\Omega^3}\frac{1}{1 + \mathrm i\nu/\Omega}\frac{\partial}{\partial z}\left(\frac{n_0(z)}{\varepsilon}\right) \frac{\partial}{\partial y}H. \end{equation} \subsection{\label{Nonlin_curr}Nonlinear current.} The nonlinear current is due to the first nonlinear correction to the linear response velocities of electrons and the scattering of an electromagnetic wave off the distortion of the charge density profile produced by another wave \begin{equation} \label{Current_NL_general} \vec j_{NL} = e n_0(z) \vec v_{NL} + e\delta n \vec v_L. \end{equation} We introduce the nonlinear velocity $\vec v_{NL}$, which can be found from the following equation $$ \frac{\partial \vec v_{NL}}{\partial t} = \vec v_L \times \left[ \nabla \times \vec v_L \right] + \frac{\mu_0 e}{M}\vec v_L \times \vec H - \frac{1}{2}\nabla v_L^2 = - \frac{1}{2}\nabla v_L^2. $$ Here we used a corollary of the Maxwell equations and (\ref{Linear_v}) whence, to within $O(\nu/\omega)$, $$ \left[ \nabla \times \vec v_L \right] = -\frac{\mu_0 e}{M}\vec H. $$ This means that only the dynamic pressure induced by the fields affects the plasma. Finally, we have everything needed for the calculation of the first term on the right hand side of the Ginzburg equation (\ref{Ginzburg}) \begin{eqnarray} \label{curl_j_NL} \left(\nabla\times\vec j_{NL}\right)_x = \frac{\mathrm i e}{2\omega}\frac{d n_0(z)}{d z}\frac{\partial}{\partial y} v_L^2 + e v_{z L}\frac{\partial}{\partial y}\delta n -\\ - e v_{y L}\frac{\partial}{\partial z}\delta n - \frac{\mu_0 e^2}{M}\delta n H.\nonumber \end{eqnarray} A detailed expression for the right hand side of equation (\ref{Ginzburg}) can be found in Appendix~\ref{APP:Current_NL}. \subsection{\label{Analytics_to}Analytic estimation.
``To the vehicle.''} We would like to estimate the ratio $$ \mu_S = \frac{S_S (z = R)}{S_0} $$ of the power flux of the scattered Stokes wave at the vehicle to the power flux of the incoming signal at the plasma edge, and express it as a function of the pump power flux $S_p$ measured in Watts per square meter. We can make an analytic estimation of the three-wave process efficiency. The main contribution comes from the vicinity of $z = 0$. The reason is that the real part of the dielectric susceptibility (\ref{epsilon_z_Omega}) for the low frequency signal wave has a zero at this point. This means that the nonlinear current on the right hand side of the Ginzburg equation has a very sharp peak near $z=0$. A typical plot of the right hand side is given in Figure \ref{RHS_to}. This issue is discussed in more detail in Appendix \ref{APP:Current_NL_to}. If we consider a high frequency pumping wave we can use the plane wave approximation $$ H_p(y,z,t) = H_p \mathrm e^{\mathrm i(-\omega_p t + k_p y - \kappa_p z)}. $$ The low frequency signal wave can be written $$ H_0(y,t) = H (y, z, t)|_{z=0} = H (z)|_{z=0} \mathrm e^{\mathrm i(-\omega t + k y)}. $$ For the Stokes wave, whose frequency is higher than the plasma frequency, one can use the following approximate Ginzburg equation \begin{equation} \frac{\partial^2 H_S}{\partial z^2} + \kappa^2 H_S = f_S, \end{equation} where $f_S$ is calculated from the $\mathrm {curl}$ of the nonlinear current given in (\ref{curl_j_NL}).
To solve, we use the method of variation of constants. We find \begin{eqnarray*} H_S = C_1 \mathrm e^{\mathrm i\kappa_S z} + C_2 \mathrm e^{-\mathrm i\kappa_S z},\\ C_1' \mathrm e^{\mathrm i\kappa_S z} + C_2' \mathrm e^{-\mathrm i\kappa_S z} = 0,\\ C_1 (z) = \frac{1}{2\mathrm i \kappa_S}\int\limits_{-L}^{z} \mathrm e^{-\mathrm i\kappa_S y} f_S (y) d y,\\ C_2 (z) = -\frac{1}{2\mathrm i \kappa_S}\int\limits_{z}^{R} \mathrm e^{\mathrm i\kappa_S y} f_S (y) d y.\\ \end{eqnarray*} One can say that $C_1$ is the amplitude of the Stokes wave propagating to the vehicle and $C_2$ is the amplitude of the anti-Stokes wave propagating from the vehicle. The main contribution to $C_1 (R)$ arises from the vicinity of $z=0$, where $f_S (z)$ is almost singular \begin{eqnarray*} C_1 (R) = \frac{1}{2\mathrm i \kappa_S}\int\limits_{-L}^{R} f_S (y)\mathrm e^{-\mathrm i\kappa_S y} d y \simeq\\ \simeq \frac{1}{2\mathrm i\kappa_S}\int\limits_{-\infty}^{+\infty} f_S (y)\mathrm e^{-\mathrm i\kappa_S y} d y. \end{eqnarray*} After some simple but tedious calculations (see Appendix \ref{APP:Current_NL_to}) one finds \begin{equation} C_1 (R) \simeq 2\pi\mathrm i\frac{e L}{Mc^2}\frac{1}{\varepsilon_0 c}\cos(2\theta)\sin(\phi)H_p H^*(0),\label{C_1_to_estimation} \end{equation} where $\theta$ is the pumping incident angle. Details of these calculations are given in Appendix \ref{APP:Current_NL_to}. The angular dependence of $H(0)$, which we call $\rho(\phi)$, can be calculated numerically by solving the homogeneous Ginzburg equation. In Fig.~\ref{RhoSinPhi}, we plot the product $\rho\sin\phi$ against $\phi$. At the optimal value $\phi\simeq 0.5$, $\rho(\phi)\sin\phi \simeq 1/4$, \begin{equation} C_1 (R) \simeq \frac{\pi}{2}\mathrm i\frac{e L}{Mc^2}\frac{1}{\varepsilon_0 c}\cos 2\theta H_p H^*(-L).
\end{equation} Using the expression $S_p = |H_p|^2/(\varepsilon_0 c)$, one gets \begin{eqnarray} \label{estimation_to} \mu_S = \left|\frac{C_1}{H}\right|^2 c\varepsilon_0\frac{S_p}{1 W m^{-2}}\simeq\nonumber\\ \simeq\frac{\pi^2}{4}\left(\frac{e L}{M c^2}\right)^2\frac{1} {\varepsilon_0 c}\cos^2(2\theta)\frac{S_p}{1 W m^{-2}}.\label{Estimate_to} \end{eqnarray} For the optimal values of incidence angles ($\theta = 0$, $\phi \simeq 0.5$), the given plasma parameters and $L \simeq 0.05 m$, one gets the following maximum value of the efficiency coefficient \begin{equation} \label{Estimate_to_number} \mu_S \simeq 0.9\times10^{-11} \frac{S_p}{1 W m^{-2}}. \end{equation} This is consistent with what we obtain by direct numerical simulation. \subsection{\label{Analytics_from}Analytic estimation. ``From the vehicle.''} Equation (\ref{Ginzburg_equation}) can be rewritten in the following form \begin{eqnarray} \label{Ginzburg_rewritten_to} \frac{\mathrm d}{\mathrm d z}\frac{1}{\varepsilon}\frac{\mathrm d H}{\mathrm d z} + \left(\frac{1}{\varepsilon_0}\frac{\omega^2}{c^2} - \frac{k_0^2}{\varepsilon_0}\right)H = \frac{\partial}{\partial z}\left(\frac{(\vec j_{NL})_y}{\varepsilon}\right) -\label{Ginzburg_from}\\ -\frac{1}{\varepsilon}\frac{\partial}{\partial y}(\vec j_{NL})_z.\nonumber \end{eqnarray} It is not too surprising that the dominant contribution to the RHS of (\ref{Ginzburg_rewritten_to}) is the first term and arises from the neighborhood of $z=0$. Again, just as in the ``to the vehicle'' case, the resonant layer acts as a transmitting antenna which will beam the message contained on the Stokes wave to a distant receiver at frequency $\omega = \omega_p -\omega_S$. In Fig.~\ref{RHS_from} we verify that indeed the dominant contribution comes from the first term on the RHS of (\ref{Ginzburg_rewritten_to}) and from the neighborhood of $z=0$.
Hence we can obtain a simple equation giving a very good approximation to the particular solution of (\ref{Ginzburg_from}), namely \begin{equation} \frac{\mathrm d H}{\mathrm d z} = (\vec j_{NL})_y. \end{equation} The general solution is the following \begin{equation} H = C_1 \phi_1 (z) + C_2 \phi_2 (z) + \int\limits_{0}^{z} (\vec j_{NL})_y \mathrm d z, \end{equation} where $\phi_1 (z)$ and $\phi_2 (z)$ are solutions of the homogeneous part of equation (\ref{Ginzburg_from}); $\phi_1 (z)$ is bounded as $z \rightarrow R \gg 1$, while $\phi_2 (z)$ is unbounded (exponentially) at the vehicle. Thus $C_2 \simeq 0$. See Appendix~\ref{APP:Analytics_homogenious} for a discussion of solutions to the homogeneous Ginzburg equation. Using the boundary condition on the edge of the plasma ($z=-L$) $$ \frac{\mathrm d H}{\mathrm d z}(-L) = -\mathrm i\kappa_0 H (-L), $$ where $\kappa_0 = \frac{\omega_0}{c}\cos\phi$ is the $z$-component of the wavevector of the outgoing low frequency signal wave, and $\vec j_{NL} (-L) = 0$, one finds \begin{equation} C_1 = \frac{-\mathrm i\kappa_0}{\phi_1'(-L)+\mathrm i\kappa_0 \phi_1(-L)}\int\limits_{0}^{-L} (\vec j_{NL})_y \mathrm d z. \end{equation} Finally, for the magnetic field at $z=-L$ we find \begin{equation} H(-L) \simeq \frac{\phi_1'(-L)}{\phi_1'(-L)+\mathrm i\kappa_0 \phi_1(-L)}\int\limits_{0}^{-L} (\vec j_{NL})_y \mathrm d z. \end{equation} The function $(\vec j_{NL})_y$ oscillates in $z$ with wavenumber $\kappa_p - \kappa_S$. The lower the wavenumber, the larger the contribution to the integral. This gives us a very simple optimal strategy for the choice of the pump and Stokes wave directions: we should radiate both the Stokes and pumping waves in the desired direction of the signal wave propagation. In this case we also have exact compatibility with the boundary conditions at $z=-L$.
If we consider the expression for $(\vec j_{NL})_y$ given in Appendix~\ref{APP:Current_NL}, we can see that in the case $\omega_0 \ll \omega_S,\omega_p$ the first term (\ref{Current_NL_Y}) is the dominant one in the vicinity of the resonant layer. The resonant layer works like a radiating antenna. Using the simplified nonlinear current expression and considering the pumping and Stokes waves as plane waves one finds \begin{eqnarray} H(-L) \simeq -\mathrm i\frac{e\omega_0^2 L \sin\phi}{2M\varepsilon_0 c^3 \omega_S \omega_p}\frac{1}{A}H_p H_S^*\times\\ \times\frac{\phi_1'(-L)}{\phi_1'(-L)+\mathrm i\kappa_0 \phi_1(-L)}\left(1-\frac{\mathrm e^{\mathrm i A\cos\phi} - 1}{A\cos\phi}\right),\nonumber \end{eqnarray} where $A= L\omega_0/c$. Using the solutions of the approximate homogeneous equations (\ref{GinzburgBessel}), we can estimate $\phi_1'(z)/\phi_1(z)|_{z=-L} \simeq 1/L$. Thus for $\kappa_0 L = A\cos\phi \ll 1$, one finds $$ H(-L) \simeq \frac{e\omega_0^2 L\sin\phi}{4M\varepsilon_0 c^3 \omega_S \omega_p}\frac{1}{A}H_p H_S^*. $$ For the power density, we have \begin{equation} S = \frac{1}{32}\left(\frac{e L}{M c^2}\right)^2\frac{1}{\varepsilon_0 c} \left(\frac{\omega_0^2}{\omega_S \omega_p}\right)^2\sin^2\phi S_S S_p. \end{equation} This result is quite clear from a physical point of view: the larger $\phi$ is, the longer is the distance over which the signal wave is generated in the plasma. In our simulations, $A\simeq2.1$, and in this case we cannot use the simplified expression given above.
Instead we find \begin{eqnarray} S = \frac{1}{8}\left(\frac{e L}{M c^2}\right)^2\frac{1}{\varepsilon_0 c} \left(\frac{\omega_0^2}{\omega_S \omega_p}\frac{1}{A}\right)^2\times\nonumber\\ \times\tan^2\phi \left(1 - 2\frac{\sin(A\cos\phi)}{A\cos\phi}+2\frac{1-\cos(A\cos\phi)}{A^2 \cos^2\phi}\right)\times\label{estimation_from}\\ \times\frac{1}{1+C_{der}\cos^2\phi} S_S S_p.\nonumber \end{eqnarray} Here we introduced the coefficient $C_{der}=(\kappa_0 \phi_1/\phi_1')^2$, the value of which we obtain from our numerics. Finally, we find \begin{eqnarray} S_{12 GHz} = 1.2\times10^{-16}\tan^2\phi \left(1 - 2\frac{\sin(A\cos\phi)}{A\cos\phi}+\right.\nonumber\\ \left.+2\frac{1-\cos(A\cos\phi)}{A^2 \cos^2\phi}\right) \frac{1}{1+C_{der}\cos^2\phi} S_S S_p,\\ S_{18 GHz} = 2.0\times10^{-17}\tan^2\phi \left(1 - 2\frac{\sin(A\cos\phi)}{A\cos\phi}+\right.\nonumber\\ \left.+2\frac{1-\cos(A\cos\phi)}{A^2 \cos^2\phi}\right) \frac{1}{1+C_{der}\cos^2\phi} S_S S_p. \end{eqnarray} The subscripts refer to the frequencies of the onboard pump waves. Again, we find the magnitude and angular dependence to be consistent with our numerical results. \section{Numerical procedures and simulations.} The equation we solve numerically in all cases is the Ginzburg equation (\ref{Ginzburg}), including all terms on its right hand side. The boundary conditions are given at $z = L_1 = -L-(L+R)$, in the vacuum beyond the plasma edge, and at $z=R$, the vehicle. To solve this equation we use a ``sweep'' method described in detail in Appendix~\ref{APP:Numerical_method}. The method was invented simultaneously in several places, for work on classified topics, in the middle of the last century. In the Soviet Union it was introduced by the group of L.\,D.~Landau (information from I.\,M.~Khalatnikov); the first publication \cite{Sweep_Landau} appeared only several years later, for obvious reasons, and the method was developed to its modern form in \cite{Sweep_Gelfand}.
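The core of the sweep method is a forward-elimination/back-substitution pass over a tridiagonal system. A minimal sketch in Python, applied to a generic second-order discretization of a Helmholtz-type slab problem of our own choosing (not the actual scheme or coefficients used for Eq.~(\ref{Ginzburg})):

```python
import cmath

def sweep_solve(a, b, c, d):
    """Solve the tridiagonal system a[i]*x[i-1] + b[i]*x[i] + c[i]*x[i+1] = d[i]
    (a[0] and c[-1] are unused) by forward elimination and back substitution."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Tiny check on H'' + k^2 H = 0 on [0, 1] with H(0) = 1, H(1) = exp(i k);
# the exact solution is H(z) = exp(i k z).
n, k = 201, 5.0
h = 1.0 / (n - 1)
a = [1.0] * n
b = [-2.0 + (k * h) ** 2] * n   # interior rows: H[i-1] + (-2 + k^2 h^2) H[i] + H[i+1] = 0
c = [1.0] * n
d = [0.0] * n
a[0] = c[0] = 0.0; b[0] = 1.0; d[0] = 1.0                     # Dirichlet row at z = 0
a[-1] = c[-1] = 0.0; b[-1] = 1.0; d[-1] = cmath.exp(1j * k)   # Dirichlet row at z = 1
H = sweep_solve(a, b, c, d)
```

The same pass handles complex coefficients and right-hand sides, which is all that is needed once the Ginzburg operator and its boundary rows are discretized.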
As the first step in the ``to the vehicle'' case we have to find the profile of the incoming magnetic field in the plasma. We used an incidence angle $\phi = 0.5$. It will be shown later that this angle is close to optimal; in any case, it is adequate for an initial evaluation of the possibility of communication. We consider the incoming signal as a monochromatic plane wave of a given frequency $f_0 = 2 GHz$ and amplitude $H_0$. At this step the nonlinear current is equal to zero. In this case, the boundary conditions are \begin{eqnarray} z &=& -L_1,\;\; \frac{\partial H}{\partial z} + \mathrm i \kappa_0 H = 2\mathrm i \kappa_0 H_0,\\ z &=& R,\;\; \frac{\partial H}{\partial z} = 0. \end{eqnarray} The resulting profile of the magnetic field is shown in Fig.~\ref{H_low_freq_to}. The profile of $E_z (z)$ is shown in Fig.~\ref{EZ_low_freq_to}. At the next stage, we consider the incident low frequency magnetic field profile as a source of distortion of the plasma density profile and take into account currents due to the presence of a pump wave. The pumping wave angle is $\theta = 0$. Our goal is to calculate the scattered field $H_S$ with frequency $\omega_S = \omega_p - \omega$. In this case, the boundary conditions are \begin{eqnarray} z &=& -L_1,\; \frac{\partial H_S}{\partial z} + \mathrm i \kappa_{S} H_S = 0,\\ z &=& R,\; \frac{\partial H_S}{\partial z} = 0. \end{eqnarray} The profiles of the magnetic fields $H_S$ for two different pumping frequencies are shown in Figures~\ref{H_high_freq_to_12GHz} and \ref{H_high_freq_to_18GHz}. We note that the resonant layer $z=0$ acts as if it were a source. In the ``from the vehicle'' case we calculate the magnetic field of the low frequency wave generated by plane pump and Stokes waves. Following the optimal strategy for this case, described in the analytic part of the paper, we take all angles equal to each other, $\phi=\theta=\pi/4$.
In this case, the boundary conditions are \begin{eqnarray} z &=& -L_1,\; \frac{\partial H}{\partial z} + \mathrm i \kappa_{0} H = 0,\\ z &=& R,\; H = 0. \end{eqnarray} Here $H(z)$ is the magnetic field of the signal wave with frequency $\omega = \omega_p - \omega_S$. The boundary condition at $z = R$, $H = 0$, gives by definition the worst of all cases. The low frequency magnetic fields for two different pumping frequencies are shown in Figs.~\ref{H_low_freq_from_12GHz} and \ref{H_low_freq_from_18GHz}. We tested the robustness of the code by allowing for both finite and zero conductivity of the vehicle surface in the ``to the vehicle'' case. In the ``from the vehicle'' case we also redid the simulation with the derivative of the magnetic field at the vehicle equal to zero. In all the cases, the influence of the differing boundary conditions was negligible. In the ``to the vehicle'' case, it is convenient to introduce the function $\mu_S$ as the ratio $$ \mu_S = \frac{S_S (z = R)}{S_0} $$ of the scattered field flux to the incoming signal flux and express it as a function of the pump flux $S_p$ measured in Watts per square meter. We found $$ \omega_p = 2\pi \times 12GHz, \; \max(\mu_S) \simeq 2.2\times10^{-12} \frac{S_p}{1Wm^{-2}}, $$ $$ \omega_p = 2\pi \times 18GHz, \; \max(\mu_S) \simeq 0.63\times10^{-11} \frac{S_p}{1Wm^{-2}}. $$ These results are in good agreement with the analytic estimate (\ref{Estimate_to_number}). Any difference is due to the fact that the pumping frequency is not sufficiently high to neglect the plasma frequency. The reason we used these frequencies, and not much higher ones, is that they are available on standard microwave equipment and devices. In the ``from the vehicle'' case, we calculate the ratio $$ \mu = \frac{S_{out} (z = -L)}{S_S S_p} $$ of the output signal flux to the product of the pump and Stokes fluxes and express it as a function of the optimal angle.
We found $$ \omega_P = 2\pi \times 12GHz, \; \max(\mu) \simeq 1.8\times10^{-16} \frac{1}{1Wm^{-2}}, $$ $$ \omega_P = 2\pi \times 18GHz, \; \max(\mu) \simeq 3.0\times10^{-17} \frac{1}{1Wm^{-2}}. $$ In order to investigate the dependence of the result on the angles $\phi, \theta_p, \theta_S$, we calculated $\mu$ for various choices. The results are shown in Figs.~\ref{Theta_Phi_to_12GHz}-\ref{Theta_Phi_from_18GHz}. As one can see, in the ``to the vehicle'' case we have very good agreement between the analytically estimated angular dependence (\ref{estimation_to}) and the numerical results. Namely, we have a maximum at pumping angles close to $\theta = 0$, and the efficiency coefficient $\mu_S$ goes to zero in the vicinity of $\theta = \pi/4$, in agreement with the $\cos(2\theta)$ dependence. So we can formulate a simple rule: in order to get the best possible performance, send the pump wave in a direction perpendicular to the plasma edge surface. In the ``from the vehicle'' case, the situation is even simpler. As was shown in Section \ref{Analytics_from}, the power conversion is optimal if we radiate both the pump and Stokes waves in the direction of the desired signal wave propagation. The estimated angular dependence (\ref{estimation_from}) can be fitted with good accuracy to the numerical results using only one tuning coefficient, $C_{der}$. This coefficient depends only weakly on the pumping frequency. \section{Conclusion and discussion.} Let us now discuss the practical usage of this approach for receiving at and transmitting from the vehicle. For the ``to the vehicle'' case we consider the problem of receiving even signals as weak as GPS signals. Let us estimate the resulting attenuation coefficient. Given a pump waveguide aperture of $3cm\times 3cm$ and a pump power of $3kW$, this gives $S_p = 3.3\times10^{6}Wm^{-2}$. One can use the pulse regime.
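The quoted pump flux and the analytic efficiency estimate (\ref{Estimate_to_number}) are easy to cross-check numerically; the script below assumes $e$ and $M$ are the electron charge and mass, $L = 0.05\,m$ and $\theta = 0$:

```python
import math

# Physical constants (SI)
e    = 1.602176634e-19   # elementary charge, C
M    = 9.1093837015e-31  # electron mass, kg
c    = 2.99792458e8      # speed of light, m/s
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m

# Analytic efficiency coefficient, Eq. (Estimate_to): mu_S per unit of S_p / (1 W m^-2)
L, theta = 0.05, 0.0
mu_S = (math.pi**2 / 4) * (e * L / (M * c**2))**2 / (eps0 * c) \
       * math.cos(2 * theta)**2
print(f"mu_S ~ {mu_S:.1e} * S_p")   # ~0.9e-11, as quoted in the text

# Continuous pump flux from a 3 cm x 3 cm waveguide aperture at 3 kW
S_p = 3.0e3 / (0.03 * 0.03)
print(f"S_p ~ {S_p:.1e} W/m^2")     # ~3.3e6 W/m^2
```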
In this case, even for pulses $10^{-3}s$ long, every pulse still contains more than $10^6$ periods of the low frequency signal, and we can get a much higher power flux $S_p^{pulse} = 3.3\times 10^{9}Wm^{-2}$. This gives the attenuation coefficients $\mu_S S_{p}^{pulse}$ \begin{eqnarray*} \mu_S S_p^{pulse} \simeq 0.73\times 10^{-2},\;\omega_p &=& 2\pi \times 12GHz,\\ \mu_S S_p^{pulse} \simeq 2.1\times 10^{-2},\;\omega_p &=& 2\pi \times 18GHz. \end{eqnarray*} The usual level of a GPS signal at the Earth surface is about $-127.5 dBm$ (the power in dBm is defined as $10 \log_{10}(P/1\,mW)$). Indoors, one must use high sensitivity GPS receivers. Many general purpose chipsets have been available for several years. Presently, the market offers sensitivities of $-157.5 dBm$ (for example \cite{Fujitsu}). Using the definition of $dBm$ one can see that it is possible to receive a signal with an attenuation of about $10^{-3}$. It is also possible to use a much bigger antenna on the vehicle than in the case of a handheld device. In that case, it is even possible to receive a signal using the continuous rather than pulsed regime for a klystron pump. So even at angles far from optimal, one can receive GPS signals. Further, we used the maximum value of the plasma thickness; if the plasma sheath is thinner, the angular dependence is broader. Some characteristics of klystron amplifiers available on the open market are given in Table~\ref{Table_1} \cite{NEC}. In the ``from the vehicle'' case, because of sensitive land based receivers, all we need is a reasonable signal. Let us estimate the incoming power at a land-based antenna. First of all, for any real antenna we have to take into account the decrease of the signal due to diffraction broadening. If the diameter of the land-based antenna (Figure \ref{antenna}) is $D_0$, the diameter of the signal flux after some long distance $l$ will be \begin{equation} D(l) \simeq \frac{l\lambda}{2 D_0}.
\end{equation} This means that if the power flux at the antenna is $S_A$, the power flux at the edge of the plasma after a distance $l$ will be \begin{equation} S_0 \simeq S_A \left(\frac{2D_0^2}{l\lambda}\right)^2. \end{equation} For example, for an antenna of diameter $5m$, after $100km$ $$ S_0 \simeq 1.1\times10^{-5} S_A. $$ Now one can calculate the sensitivity of the receiver needed. Let us suppose that the signal beam outgoing from the vehicle has diameter $D_0 = 1m$, signal frequency $f = 2 GHz$ and corresponding wavelength $\lambda = 1.5\times 10^{-1}m$, and that the land based antenna has a diameter $D_{LB} = 5m$ and is situated at a distance $l = 100 km$. Using the previous results for diffraction, the pumping klystrons' powers from the table above and the expression $S_{out}=\mu S_pS_S$, one can get for the power on the land based receiver \begin{equation} S_{LB} \simeq S_{out} \left(\frac{2D_0^2}{l\lambda}\right)^2 = 1.8\times 10^{-8} S_{out}. \end{equation} We now list, for the two different pump frequencies, the corresponding powers in Watts at the receiving antenna: \begin{eqnarray*} \omega_P &=& 2\pi \times 12GHz,\\ P_A &\simeq& 1.8\times10^{-8}\times 1.8\times10^{-16}\times9\times10^{6}Wm^{-2}\times 25m^2\\ &\simeq& 0.73\times 10^{-15} W;\\ \omega_P &=& 2\pi \times 18GHz,\\ P_A &\simeq& 1.8\times10^{-8}\times 3.0\times10^{-17}\times4\times10^{12}Wm^{-2}\times 25m^2\\ &\simeq& 0.54\times 10^{-17} W. \end{eqnarray*} The GPS receiver mentioned above has a sensitivity of about $-160dBm \simeq 10^{-19}W$. Even with such a modest size of the antenna and ordinary klystrons one can receive the signal at almost any angle. As a final remark, one can conclude that the proposed method for communication with and from the supersonic vehicle is realistic even using standard devices available on the open market. There are several additional points we would like to consider in future work. First, it might be worthwhile to discuss the effect of the plasma density profile at $z<0$ on the wave interaction.
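The diffraction and received-power arithmetic above can be retraced directly from the numbers quoted in the text (a sketch; only the $12\,GHz$ case is evaluated):

```python
# Diffraction-broadening suppression factors and the quoted received power
# for the 12 GHz pump; all input numbers are taken from the text.
lam, l = 0.15, 1.0e5          # signal wavelength (m) and distance (m)

# Land-based antenna of diameter 5 m illuminating the vehicle
factor_5m = (2 * 5.0**2 / (l * lam))**2
print(f"{factor_5m:.1e}")      # ~1.1e-5

# 1 m beam leaving the vehicle, received on the ground
factor_1m = (2 * 1.0**2 / (l * lam))**2
print(f"{factor_1m:.1e}")      # ~1.8e-8

# Received power, 12 GHz case: suppression * mu * (S_S S_p) * antenna area
P_A = factor_1m * 1.8e-16 * 9.0e6 * 25.0
print(f"{P_A:.1e} W")          # ~0.7e-15 W
```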
One may expect that the transition will be much narrower than $5\,cm$, because of the shock formation. However, the shock forms in the air, not in the plasma, and the air and plasma densities are not connected directly. The plasma density is defined by the level of ionization, which is governed by the density distribution in accordance with the Saha equation. In a real plasma sheath the characteristic temperature is much less than the ionization potential, and a typical level of ionization is low ($10^{-6}$-$10^{-5}$). Under these conditions the plasma density depends on the temperature dramatically. The temperature jumps inside the shock and then grows gradually toward the vehicle. Still, just after the shock it is too low to provide strong ionization, and the most essential increase in ionization takes place far behind the shock. For this reason we can neglect the jump of the plasma density inside the shock wave and treat it as smooth. In this case an approximation by a linear function seems to be reasonable. Let us note that there is no blackout if the plasma sheath is as thick as $5\,cm$ and the plasma density is as low as $10^{18}\,m^{-3}$. In this case, the incident wave can reach the vehicle due to the skin-effect. In any case, our numerical code can be used for an arbitrary density profile. Another question is the following: will the shock and the flow behind it suffer from hydrodynamic instabilities? Actually, hydrodynamic-type instabilities, as well as hydrodynamic turbulence, are slow processes on our time-scale. We can treat the plasma sheath as ``frozen''. The distortion of the density profile, although frozen, can slightly change the results. It could look quite surprising that the collision frequency $\nu$ drops out from the final results after integration over the spatial variable. This is a common mathematical trick, working perfectly as long as ${\nu}/{\omega}\rightarrow 0$. In our case we have ${\nu}/{\omega}\simeq 10^{-1} - 10^{-2}$.
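A quick order-of-magnitude check of the skin-effect remark, under our own simplifying assumptions of a uniform, collisionless slab with $n_e = 10^{18}\,m^{-3}$:

```python
import math

# Plasma frequency and collisionless skin depth for the quoted density.
e, m_e = 1.602176634e-19, 9.1093837015e-31   # electron charge (C) and mass (kg)
eps0, c = 8.8541878128e-12, 2.99792458e8     # vacuum permittivity, speed of light

n_e = 1.0e18                                  # m^-3, as quoted in the text
omega_pe = math.sqrt(n_e * e**2 / (eps0 * m_e))
f_pe = omega_pe / (2 * math.pi)
delta = c / omega_pe                          # collisionless skin depth

print(f"f_pe  ~ {f_pe/1e9:.1f} GHz")          # ~9 GHz: a 2 GHz wave is evanescent
print(f"delta ~ {delta*100:.2f} cm")          # ~0.5 cm, vs. the 5 cm sheath
```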
In fact, the shape of the resonant layer can be distorted by nonlinear effects. They become essential if the signal is powerful enough. At very small $\nu$, the resonant peak would become very narrow. Could then the electron thermal motion start affecting the structure of the resonance? This question is very important. A resonant layer cannot be thinner than the Debye length. In our case $r_d\simeq 10^{-4}\,cm$, while the thickness of the resonant layer is $l\simeq L\cdot({\nu}/{\omega})^2\simeq 10^{-1}-10^{-2}cm$. Thus the influence of the temperature can be neglected. In the case of a much thinner resonant layer this phenomenon should be taken into account and considered separately. In any case, such influence could lead only to radiation of Langmuir waves from the resonant area. But this effect is nothing but additional dissipation, which drops out from the final formulae. When the signals come from an external source, what would be the role of the $s$-polarized component? One can expect that, for some orientations of the vehicle, the $s$-component will be dominant, and the resonance will disappear. This could somewhat reduce the efficiency. In our idealized model (the vehicle is an infinite wall) there is a difference between the $s$ and $p$ polarizations. For a real vehicle such a difference could be important. This is a subject for future study. As one can see, there are still many open questions. The physics of the problem is very rich. Our present work may be best described as ``proof of concept'' research, from which we can make some order-of-magnitude estimates and gain some insights. The results can be made more accurate by including some of the physical effects discussed above. \section{Acknowledgments} This work was supported by AFOSR contract number~FH\,95500410090. A.O. Korotkevich was supported by RFBR grant 06-01-00665-a, the Programme ``Nonlinear dynamics and solitons'' from the RAS Presidium and the ``Leading Scientific Schools of Russia'' grant.
\section{\label{sec:Introduction}Introduction} One of the most surprising experimental results of the last decades has been the discovery of tiny neutrino masses and relatively large neutrino mixings. Although non-vanishing neutrino masses are a clear indication of physics beyond the Standard Model (SM), the mechanism and the scales responsible for neutrino mass generation remain a total mystery. It seems unlikely that the very small neutrino masses are generated by the same Higgs mechanism responsible for the masses of the other SM fermions, since extremely small Yukawa couplings, of order $10^{-12}$ or smaller, must be invoked. A more `natural' way to generate neutrino masses involves the addition of new states that, once integrated out, generate the dimension five Weinberg operator \begin{equation} \mathcal{O}_5=\frac{c}{\Lambda}LLHH. \end{equation} This is embodied by the so-called seesaw mechanisms~\cite{seesawI,seesawII,seesawIII, Mohapatra:1986bd}. The smallness of neutrino masses relative to the weak scale implies either that the scale of new physics $\Lambda$ is very large (making it impossible to experimentally discriminate between the different seesaw mechanisms), or that the Wilson coefficient $c$ is extremely small (for instance, coming from loop effects involving singly or doubly charged scalars~\cite{radiativeseesaw}). A different approach is given by neutrinophilic Two-Higgs-Doublet Models~\cite{Gabriel:2006ns, Davidson:2009ha}. In this framework, a symmetry ($U(1)$ or $Z_2$) compels one of the doublets to couple to all SM fermions but neutrinos, hence being responsible for their masses, while the other Higgs couples to the lepton doublets and right-handed neutrinos. If the second doublet acquires a vacuum expectation value (vev) around the eV scale, this leads to small neutrino masses.
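To make the two options concrete, a back-of-the-envelope sketch with numbers of our own choosing ($m_\nu \sim 0.1$ eV, $\langle H \rangle = 174$ GeV, and the simple convention $m_\nu = y\,v$ for the Dirac case):

```python
# Rough scales behind the two options for small neutrino masses (GeV units).
m_nu = 0.1e-9      # target neutrino mass: 0.1 eV (our benchmark)
v    = 174.0       # Higgs vev <H>

# (i) Weinberg operator with c ~ 1: m_nu = v^2 / Lambda
Lambda = v**2 / m_nu
print(f"Lambda ~ {Lambda:.1e} GeV")   # ~3e14 GeV, far beyond direct reach

# (ii) Plain Dirac Yukawa: m_nu = y * v
y = m_nu / v
print(f"y ~ {y:.1e}")                 # ~6e-13, the tiny coupling quoted above
```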
These models, however, are either ruled out or severely constrained by electroweak precision data and low energy flavor physics~\cite{Machado:2015sha,Bertuzzo:2015ada}. A variation of this idea, in which the symmetry is taken to be a local $U(1)$ and leads to the typical Lagrangian of the inverse seesaw scenario, suffers from an accidental lepton number symmetry that has to be explicitly broken to avoid the presence of a massless Nambu-Goldstone boson in the spectrum~\cite{Bertuzzo:2017sbj}. All aforementioned models have one of the two following features: (i) the model is realized at very high scales, or (ii) the model is based on explicit breaking of lepton number or other symmetries that protect neutrino masses (e.g. in TeV scale type II or inverse seesaw models). Neutrinos, however, are the {\em darkest} among the SM particles, in the sense that they can couple through the renormalizable {\em neutrino portal} $LH$ operator with generic dark sectors~\cite{Schmaltz:2017oov}. This fact has been used in connection with thermal Dark Matter with mass in the sub-GeV region (see for instance Refs.~\cite{Falkowski:2011xh,Batell:2017cmf}). In this letter we propose to use such a portal to explicitly connect a new light dark sector with the generation of neutrino masses. In this way, we are able to lower the scale of neutrino mass generation below the electroweak one by resorting to dynamical gauge symmetry breaking of this new sector. The dark sector is mostly secluded from experimental scrutiny, as it only communicates with the SM by mixing among scalars, among neutrinos and dark fermions, and through kinetic and mass mixing between the gauge bosons. This scheme has several phenomenological consequences at lower energies, and in particular it offers a natural explanation for the long-standing excess of electron-like events reported by the MiniBooNE collaboration~\cite{Aguilar-Arevalo:2018gpe, Bertuzzo:2018itn}.
\section{\label{sec:model} The Model} To avoid any neutrino mass contribution from the Higgs mechanism, we introduce a new dark gauge symmetry $U(1)_{\cal D}$, under which the SM particles are uncharged, but the new sector is charged. To build a Dirac neutrino mass term we need an $SU(2)_L$ singlet right-handed dark neutrino $N$, and a dark scalar doublet $\phi$, both having the same $U(1)_{\cal D}$ charge $+1$. The absence of chiral anomalies requires a second right-handed neutrino, $N'$, with an opposite $U(1)_{\cal D}$ charge, thus restoring lepton number symmetry. We add to the particle content a dark scalar $SU(2)_L$ singlet $S_{2}$, with dark charge $+2$, whose vev spontaneously breaks lepton number, giving rise to a Majorana mass component for the dark neutrinos. As we will see shortly, this setup leads to an inverse seesaw structure in which the lepton number breaking parameter is promoted to a dynamical quantity. Finally, this scalar content enjoys an accidental global symmetry which is spontaneously broken. To avoid a massless Goldstone boson, an extra dark scalar $SU(2)_L$ singlet $S_{1}$, with dark charge $+1$, is included in the spectrum. Its vev breaks all accidental global symmetries. This field will allow for mixing among all the scalar fields, including the SM Higgs. The dark scalar $S_{1}$ will spontaneously break $U(1)_{\cal D}$ by acquiring a vev, while $\phi$ and $S_2$ will only develop an induced vev after the breaking of the electroweak and dark symmetries. By making a well motivated choice for the hierarchy of the vevs, our model allows a dynamical generation of the light neutrino masses and mixings at a very low scale. Our model predicts masses for the dark scalars within the reach of current experiments, as well as a light dark vector boson, $Z_{\cal D}$, that has small kinetic mixing with the photon and mass mixing with the SM $Z$ boson.
The dark particles communicate with the SM ones via mixing: flavor mixing (neutrinos), mass mixing (scalars) and mass mixing and kinetic mixing ($Z_{\cal D}$), giving rise to a simple yet rich phenomenology. \vspace{0.3cm} \subsection{\label{subsec:scalars} The Dark Scalar Sector} Let us start by discussing the scalar sector of the model. This will motivate the region of parameter space on which we will focus throughout the paper. The most general $SU(2)_L\times U(1)_{Y} \times U(1)_{\cal D}$ invariant scalar potential that can be constructed out of the fields and charges outlined above is \begin{widetext} \begin{align}\begin{aligned}\label{eq:potential} V = &-m_H^2(H^\dagger H)+m^2_{\phi}(\phi^\dagger\phi) -m_{1}^2S_{1}^*S_{1} + m_{2}^2 S_{2}^* S_{2} -\left[\frac{\mu}{2} S_{1}(\phi^\dagger H)+\frac{\mu'}{2}S_{1}^2 S_{2}^* + \frac{\alpha}{2}(H^\dagger \phi)S_{1} S_{2}^*+{\rm h.c.} \right] \\ & ~~~~~+ \lambda_{H\phi}' \phi^\dagger H H^\dagger \phi + \sum_\varphi^{\{H,\phi,S_1,S_{2}\}}\lambda_\varphi(\varphi^\dagger \varphi)^2 + \sum_{\varphi<\varphi'}^{\{H,\phi,S_1,S_2\}}\lambda_{\varphi\varphi'}(\varphi^\dagger \varphi)(\varphi^{\prime\dagger} \varphi')\, . \end{aligned}\end{align} \end{widetext} (In the last sum, the notation $\varphi<\varphi'$ avoids double counting.) We denote the vevs of the scalar fields as $(H , \phi , S_1 , S_2)|_{\rm vev}\equiv\left(v,v_\phi,\omega_1,\omega_2\right)/\sqrt{2}$. We stress that we take the bare mass terms of $H$ and $S_1$ to be negative, while we take the corresponding ones for $\phi$ and $S_2$ to be positive. This ensures that, as long as $\mu=\mu'=\alpha \equiv 0$ ({\it i.e.} if there is no mixing among the scalar fields), the latter fields do not develop a vev, while the former do. In turn, this implies that the vevs $v_\phi$ and $\omega_2$ must be induced by $\mu$, $\mu'$, and/or $\alpha$.
We now observe that $\mu$, $\mu'$, and $\alpha$ explicitly break two accidental $U(1)$ global symmetries, making these parameters technically natural~\footnote{ One of the symmetries is lepton number, the other is a symmetry under which only $\phi$ and $L$ are charged, with opposite charges. Since there are only two global symmetries for three parameters, having two of them non-zero necessarily generates the third by renormalization group running. }. For our purposes, this means that $\mu$, $\mu'$ and $\alpha$ can be taken small in a natural way, and justifies our working hypothesis $v_\phi,\omega_2\ll v,\omega_1$. As we will see later, this hierarchy of vevs will provide a low scale realization of the inverse seesaw mechanism with low scale dynamics associated with it. Explicitly, we obtain \begin{align}\label{eq:vevs} &v_\phi\simeq \frac{1}{8\sqrt{2}}\left(\frac{\alpha\mu' \, v \omega_1^3}{M_{S'_{\cal D}}^2M_{H_{\cal D}}^2}+4\frac{\mu \, \omega_1 v}{M_{H_{\cal D}}^2}\right)\, ,~~~~~{\rm and}\\ &\omega_2\simeq \frac{1}{8\sqrt{2}}\left(\frac{\alpha\mu \, v^2\omega_1^2}{M_{S'_{\cal D}}^2 M_{H_{\cal D}}^2}+4\frac{\mu' \, \omega_1^2}{M_{S'_{\cal D}}^2}\right) \, , \end{align} with $M_{H_{\cal D}}^2$ and $M_{S'_{\cal D}}^2$ approximately being the physical masses of the respective scalars (to be defined below). In order to avoid large mixing between $H$ and $\phi$, we will always make the choice $\omega_1 \ll v$. The scalar spectrum contains, in addition to the SM-like scalar $h_{\rm SM}$ with mass $m_{h_{\rm SM}}\simeq $ 125 GeV, three CP-even dark scalars $H_{\cal D}$, $S_{\cal D}$ and $S'_{\cal D}$, with masses $M_{H_{\cal D}}$, $M_{S_{\cal D}}$ and $M_{S'_{\cal D}}$, two CP-odd dark scalars $A_{\cal D}$ and $a_{\cal D}$ with masses $M_{A_{\cal D}}$ and $M_{a_{\cal D}}$, and a charged dark scalar $H^\pm_{\cal D}$ with mass $M_{H^\pm_{\cal D}}$.
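To illustrate the induced-vev hierarchy of Eq.~(\ref{eq:vevs}), a numeric sketch; all parameter values below are hypothetical benchmarks of ours, chosen only to keep $\mu$, $\mu'$ and $\alpha v$ well below the dark scalar masses:

```python
import math

# Hypothetical benchmark (GeV units; alpha is dimensionless)
v, w1 = 246.0, 10.0                  # <H> and <S_1>, with w1 << v
mu, mu_p, alpha = 1e-3, 1e-3, 1e-5   # small, technically natural parameters
M_HD2, M_SpD2 = 100.0**2, 100.0**2   # assumed M_{H_D}^2 and M_{S'_D}^2

# Induced vevs, Eq. (eq:vevs)
v_phi = (alpha * mu_p * v * w1**3 / (M_SpD2 * M_HD2)
         + 4 * mu * w1 * v / M_HD2) / (8 * math.sqrt(2))
w2 = (alpha * mu * v**2 * w1**2 / (M_SpD2 * M_HD2)
      + 4 * mu_p * w1**2 / M_SpD2) / (8 * math.sqrt(2))

# Both come out in the 10-100 keV range, far below v and w1
print(f"v_phi ~ {v_phi*1e6:.0f} keV, w2 ~ {w2*1e6:.1f} keV")
```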
Explicitly, the masses of the CP-even scalars are~\footnote{Radiative corrections will naturally contribute to the masses of these scalars. There are potentially several contributions according to Eq.~(\ref{eq:potential}), the quartic couplings being the most dangerous ones. In order to avoid fine-tuning, we will always demand the masses of the lightest scalars to satisfy $M_{\rm lightest} \gtrsim \sqrt{\lambda} M_{\rm heavy}/8\pi$, where $M_{\rm heavy}$ denotes any of the heavy scalar masses. By the same argument we expect $\mu, \mu'$ and $\alpha v$ to be below $M_{\rm lightest}$. Our computation ignores the threshold at the Planck scale, which must be stabilized by other means (for instance, by supersymmetrizing the theory).} \begin{align}\begin{aligned}\label{eq:scalar_masses} m_{h_{\rm SM}}^2 &\simeq 2 \, \lambda_H v^2\, , \\ M_{S_{\cal D}}^2 &\simeq 2 \, \lambda_{S_{1}} \omega_1^2\, , \\ M_{H_{\cal D}}^2 &\simeq m_{\phi}^2 + \frac{\lambda_{H\phi}+\lambda_{H\phi}'}{2} v^2 \, ,\\ M_{S'_{\cal D}}^2 & \simeq m_{2}^2 + \frac{\lambda_{H S_2}}{2} \, v^2 \, ,\\ \end{aligned}\end{align} while the masses of the CP-odd and charged scalars are given by \begin{align} M_{A_{\cal D}} & \simeq M_{H_{\cal D}}\, , \\ M_{a_{\cal D}} & \simeq M_{S'_{\cal D}}\, , \\ M_{H_{\cal D}^\pm}^2 & \simeq M_{H_{\cal D}}^2 - \frac{\lambda_{H\phi }' v^2}{2}\, . \end{align} As for the composition of the physical states, since the mixing in the scalar sector is typically small, we can generically define \begin{equation} \varphi_{\rm physical} = \varphi - \sum_{\varphi'\neq \varphi}\theta_{\varphi \varphi'} \varphi'\,, \end{equation} where $\varphi_{\rm physical}$ denotes the physical scalar field that has the largest $\varphi$ admixture.
Then, the mixing in the CP-even scalar sector is given by \begin{align}\label{eq:scalar_mixing} \theta_{H \phi } &\simeq \, \left[(\lambda_{H\phi}+\lambda_{H\phi }')\, v_\phi v - \mu\omega_1/2\sqrt{2}\right]/\Delta M^2_{h_{SM}H_{\cal D}}\, , \nonumber \\ \theta_{H {S_1} } &\simeq \, \lambda_{H S_1}\, \omega_1 v/\Delta M^2_{h_{SM}S_{\cal D}}\, , \nonumber \\ \theta_{H {S_2} } &\simeq \lambda_{HS_2}\, \omega_2 v/\Delta M^2_{h_{SM} S'_{\cal D}} \, , \nonumber\\ \theta_{\phi {S_1} } &\simeq \mu v /2\sqrt{2}\Delta M^2_{H_{\cal D}S_{\cal D}} \, ,\\ \theta_{\phi {S_2} } &\simeq \alpha \, \omega_1 v/4\Delta M^2_{H_{\cal D}S'_{\cal D}} \, \nonumber ,\\ \theta_{{S_1} {S_2} } & \simeq \mu'\omega_1/\sqrt{2}\Delta M^2_{S_{\cal D}S'_{\cal D}} \, , \nonumber \end{align} where $\Delta M^2_{\varphi \varphi'}\equiv M^2_\varphi-M^2_{\varphi'}$, while the Nambu-Goldstone bosons associated with the $W^\pm$, $Z$ and $Z_{\cal D}$ bosons are defined as \begin{align} G_W^\pm &\simeq H^\pm - \frac{v_\phi}{v}\phi^\pm \,, \nonumber\\ G_Z & \simeq {\rm Im}(H^0) + \frac{v_\phi}{v} {\rm Im}(\phi^0)\,,\\ G_{Z_{\cal D}}&\simeq{\rm Im}(S_1)+\frac{2\omega_2}{\omega_1}{\rm Im}(S_2)+\frac{v_\phi}{\omega_1}{\rm Im}(\phi^0)-\frac{v_\phi^2}{\omega_1 v}{\rm Im}(H^0).\nonumber \end{align} We see that our hypothesis $v_\phi, \omega_2 \ll \omega_1 \ll v$ prevents any relevant modification of the Higgs-like couplings, and $h_{\rm SM}$ ends up behaving essentially like the SM Higgs boson. Moreover, due to the mixing with the Higgs field, the dark scalars and the longitudinal mode of the $Z_{\cal D}$ will also couple to SM fermions via the SM Yukawa couplings. Nevertheless, such couplings to light fermions are quite small, as they are suppressed by the hierarchy of vevs. If the spectrum contains light degrees of freedom (below the $100$ MeV scale), an interesting phenomenology may be associated with this sector. A dedicated study will be pursued in future work.
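As an illustration of how feeble these mixings typically are, a numeric sketch of $\theta_{\phi S_1}$ from Eq.~(\ref{eq:scalar_mixing}); the parameter values are hypothetical benchmarks of ours:

```python
import math

# Hypothetical benchmark (GeV units)
v  = 246.0                  # electroweak vev
mu = 1e-3                   # trilinear mu, taken small (technically natural)
M_HD, M_SD = 100.0, 10.0    # assumed H_D and S_D masses

# theta_{phi S1} ~ mu * v / (2*sqrt(2) * (M_HD^2 - M_SD^2))
theta = mu * v / (2 * math.sqrt(2) * (M_HD**2 - M_SD**2))
print(f"theta_phi_S1 ~ {theta:.1e}")   # ~1e-5: the dark scalars mix feebly
```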
\subsection{\label{subsec:neutrinos} Neutrino Masses and Mixings} Let us now discuss the generation of neutrino masses and mixings, and how the dynamics of the dark sector outlined so far ensures light neutrinos. The most general Lagrangian in the neutrino sector, compatible with our charge assignment, is \begin{align} \mathcal{L}_\nu=& -y_\nu \, \overline{L} \widetilde{\phi} N + y_N \, S_2 \,\overline{N}N^c + y_{N'}\, S_{2}^* \, \overline{N'}N'^c \nonumber\\ &+ m\, \overline{N'}N^c+{\rm h.c.}\, , \label{eq:Lnu} \end{align} where $y_\nu$, $m$, $y_N$ and $y_{N'}$ are matrices in flavor space. After the two-step spontaneous breaking $SU(2)_L \times U(1)_Y \times U(1)_{\cal D}\xrightarrow[\quad ]{v} U(1)_{\rm em} \times U(1)_{\cal D} \xrightarrow[\quad ]{\omega_1} U(1)_{\rm em}$, the neutrino mass matrix in the $(\nu,\,N,\,N')$ basis is \begin{equation} \mathcal{M}_\nu=\frac{1}{\sqrt{2}}\left( \begin{array}{c c c} 0 & y_\nu \, v_\phi & 0\\ y_\nu^T \, v_\phi & y_N \, \omega_2 & \sqrt{2}\,m\\ 0 & \sqrt{2}\,m^T &y_{N'} \,\omega_2 \end{array}\right)\, . \end{equation} As already stressed, $v_\phi$ generates a Dirac mass term, while $\omega_2$ plays the key role in generating a naturally small term $y_{N^{\prime}}\omega_2$, which can be identified as the tiny mass term of the inverse seesaw $\mu_{\rm ISS}$ (the dimensionful parameter of the inverse seesaw that breaks lepton number by two units), and we obtain a dynamically generated inverse seesaw neutrino mass matrix. The mass matrix $m$ can always be made diagonal and can in principle take any value, but given the smallness of the Dirac and $\mu_{\rm ISS}$-terms, it is clear that we can generate light neutrino masses even with values of $m$ smaller than in the usual inverse seesaw scenario. More precisely, the light neutrino mass matrix is given at leading order by \begin{equation} m_\nu \simeq (y_\nu^T v_\phi) \frac{1}{m^T} (y_{N^{\prime}} \omega_2) \frac{1}{m} (y_\nu v_\phi)\, .
\end{equation} \begin{figure}[t] \includegraphics[width=0.5\textwidth]{Numass.png} \caption{\label{fig:massdiagram} Diagram for the dynamically induced light neutrino masses in our model.} \end{figure} Inspection of Eq.~(\ref{eq:Lnu}) makes clear why we can substantially lower the scale of neutrino mass generation, since in our construction the light neutrino masses are generated effectively as a dimension nine operator (see Fig.~\ref{fig:massdiagram}). Schematically, we start with \begin{equation} \label{eq:dsix} {\cal L}_\nu^{\rm eff} \sim y_\nu^2 \, y_{N^{\prime}}\frac{(\overline{L^c} \phi)(\phi^T L)}{m^2} S_{2}^* \, . \end{equation} Remembering that the vevs of $\phi$ and $S_2$ are induced by the dynamics of the scalar sector, we can rewrite the previous operator in terms of $H$ and $S_1$, the fields whose vevs are present even in the limit $\left\{ \mu, \mu', \alpha\right\}\to 0$. We obtain \begin{equation} \label{eq:dnine} {\cal L}_\nu^{\rm d=9} \sim y_\nu^2 \, y_{N^{\prime}} \frac{\mu^2}{M^4_{H_{\cal D}}} \frac{\mu'}{M^2_{S'_{\cal D}}}\frac{(\overline{L^c} H)(H^T L)}{m^2} (S_{1}^* S_{1})^2 \, , \end{equation} from which it is clear that, ultimately, neutrino masses are generated by a dimension 9 operator (see, e.g., Refs.~\cite{higherdim} for generation of neutrino masses from higher dimensional effective operators). In addition, we have a further suppression due to the fact that $\mu$ and $\mu'$ can be taken small in a technically natural way. The mixing between active and dark neutrinos can be explicitly written as \begin{equation}\label{eq:mix} \nu_\alpha = \sum_{i=1}^{3} U_{\alpha i} \, \nu_i + U_{\alpha \mathcal{D}} \, N_{\cal D}\, , \end{equation} $\alpha=e,\mu,\tau,{\cal D}$, where $\nu_i$ and $\nu_{\alpha}$ are the neutrino mass and flavor eigenstates, respectively (we denote by $\alpha = {\cal D}$ the 6 dark neutrino flavor states, while $U_{\alpha \mathcal{D}}$ is a $9\times 6$ matrix).
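As an order-of-magnitude illustration, one can evaluate the light neutrino mass and the active-dark mixing for the benchmark vevs of Tab.~\ref{tab1}, taking $y_\nu = y_{N'} = 1$ and $m = 150$ MeV as illustrative assumptions (not a fit):

```python
# Order-of-magnitude estimate of m_nu ~ (y_nu v_phi)^2 (y_Np omega_2) / m^2
# and of the active-dark mixing (y_nu v_phi / m)^2.
# y_nu = y_Np = 1 and m = 150 MeV are illustrative assumptions.
y_nu, y_Np = 1.0, 1.0
v_phi, w2, m = 0.176, 0.65, 150.0     # MeV (benchmark vevs of Tab. 1)

m_nu_eV = (y_nu * v_phi)**2 * (y_Np * w2) / m**2 * 1e6  # MeV -> eV
mixing_sq = (y_nu * v_phi / m)**2                       # ~ |U_{alpha D}|^2
```

This yields sub-eV light neutrinos and $|U_{\alpha{\cal D}}|^2 = {\cal O}(10^{-6})$ even with ${\cal O}(1)$ Yukawa couplings, consistent with Tab.~\ref{tab2}.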
Schematically, we have that the mixing between light and heavy neutrinos is $y_\nu v_\phi/m$. Note that the dark neutrino can be made very light, without introducing too large mixing, even for $y_\nu\sim\mathcal{O}(1)$ since $v_\phi\ll v$. \vspace{0.3cm} \subsection{\label{subsec:gauge} $Z_{\cal D}$ and the Gauge Sector} The new vector boson will, in general, communicate with the SM sector via either mass mixing or kinetic mixing. The relevant part of the dark Lagrangian is \begin{widetext} \begin{equation} {\cal L}_{\cal D} \supset \frac{m^2_{Z_{\cal D }}}{2} \, Z_{{\cal D}\mu} Z_{\cal D}^{\mu} + g_{\cal D} Z_{\cal D}^\mu \, J_{\mathcal{D}\mu} + e \epsilon \, Z_{\cal D}^\mu \, J_\mu^{\rm em} + \frac{g}{c_W} \epsilon' \, Z_{\cal D}^\mu \, J_\mu^{\rm Z} \, , \label{eq:kmix} \end{equation} \end{widetext} where $m_{Z_{\cal D}}$ is the mass of $Z_{\cal D}$, $g_{\cal D}$ is the $U(1)_{\cal D}$ gauge coupling, $e$ is the electromagnetic coupling, $g/c_W$ is the $Z$ coupling in the SM, while $\epsilon$ and $\epsilon'$ parametrize the kinetic and mass mixings, respectively. The electromagnetic and $Z$ currents are denoted by $\ J^{\rm em}_\mu$ and $J^{Z}_\mu$, while $J_{\mathcal{D}\mu}$ denotes the dark current. In the limits we are considering, the $Z$ and $W^\pm$ masses are essentially unchanged with respect to the SM values, while the new gauge boson mass reads \begin{equation} m_{Z_{\cal D}}^2\simeq g_{\cal D}^2 \left(\omega_1^2 + v_\phi^2 + 4 \, \omega_2^2\right)\simeq g_{\cal D}^2 \, \omega_1^2\, , \end{equation} with mass mixing between $Z$ and $Z_{\cal D}$ given by \begin{align} \epsilon' \simeq \frac{2 g_{\cal D} }{g/c_W} \frac{v_\phi^2}{v^2}\, . \end{align} Of course, a non-vanishing mass mixing $\epsilon'$ implies that the $Z$ boson inherits a coupling to the dark current \begin{align} {\cal L}_{Z} = \frac{m_Z^2}{2} Z_\mu Z^\mu + \frac{g}{c_W} Z^\mu J_\mu^Z - g_{\cal D} \epsilon' Z^\mu J_{\mathcal{D}\mu}\, . 
\end{align} While the new coupling allows for the possibility of new invisible $Z$ decays, the large hierarchy $v_\phi \ll v$ guarantees that the new contributions to the invisible decay width are well inside the experimentally allowed region. The vev hierarchy also protects the model from dangerous $K$, $B$ and $\Upsilon$ decays with an on-shell $Z_{\cal D}$ in the final state~\cite{Davoudiasl:2012ag, Babu:2017olk}. The kinetic mixing parameter $\epsilon$ is allowed at tree-level by all symmetries of the model. Moreover, it is radiatively generated (see e.g. Ref.~\cite{Holdom:1985ag}) by a loop of the $H^\pm_{\cal D}$ scalar, whose magnitude is \begin{align} \epsilon_{\rm LOOP} \sim \frac{e g_{\cal D}}{480 \pi^2} \frac{m_{Z_{\cal D}}^2}{m_{H^\pm_{\cal D}}^2}. \end{align} In what follows, we will take $\epsilon$ as generated at tree-level, with $\epsilon_{\rm TREE} \gg \epsilon_{\rm LOOP}$ to guarantee the radiative stability of the theory. The kinetic mixing will lead to interactions of the $Z_{\cal D}$ with charged fermions, as well as its decays if kinematically allowed (see e.g. Ref.~\cite{Ilten:2018crw} for constraints). \section{\label{sec:pheno}Phenomenological Consequences} We would like at this point to make some comments about the possible phenomenological consequences of our model. To illustrate the discussion let us consider a benchmark point consistent with our working hypothesis $v_{\phi}, \omega_2 \ll \omega_1 \ll v$. This point is defined by the input values given in Tab.~\ref{tab1}, producing the physical observables in Tab.~\ref{tab2}. We see that for this point the changes in the masses of the SM gauge bosons as well as the mixings of the Higgs with the new scalars are negligible, so we do not foresee any major problems in passing the constraints imposed on the SM observables by the Tevatron, LEP or the LHC data.
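A quick numerical check of the gauge-sector estimates is straightforward; the sketch below uses the benchmark values of Tabs.~\ref{tab1} and \ref{tab2}:

```python
import math

# Benchmark gauge-sector values (Tabs. 1 and 2); GeV units.
g_D, w1, m_HpD = 0.22, 0.136, 100.0

m_ZD = g_D * w1                          # m_ZD ~ g_D omega_1, ~30 MeV
e = math.sqrt(4.0 * math.pi / 137.036)   # electromagnetic coupling
eps_loop = e * g_D / (480.0 * math.pi**2) * m_ZD**2 / m_HpD**2
```

One finds $m_{Z_{\cal D}}\simeq 30$ MeV and $\epsilon_{\rm LOOP}\sim 10^{-12}$, many orders of magnitude below the benchmark $e\epsilon = 2\times 10^{-4}$, so the assumption $\epsilon_{\rm TREE}\gg\epsilon_{\rm LOOP}$ is internally consistent.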
Moreover, our model is endowed with all the features needed to explain the excess of electron-like events observed by the MiniBooNE experiment: a new dark vector boson, $Z_{\cal D}$, that couples to the SM fermions by kinetic mixing and also directly to a dark neutrino, $\nu_{\cal D}$, which mixes with the active ones. As shown in \cite{Bertuzzo:2018itn}, the dark neutrino can be produced via neutrino-nucleus scattering in the MiniBooNE detector and, if $m_{N_{\cal D}}>m_{Z_{\cal D}}$, subsequently decay as $N_{\cal D} \to Z_{\cal D} + \nu_i$. The $Z_{\cal D}$ can then be made to decay primarily to $e^+ e^-$ pairs with a rate that results in an excellent fit to MiniBooNE energy spectra and angular distributions. In general, this model may in principle also give contributions to the muon $g-2$ \footnote{Since additional electrically charged/neutral scalar ($H^\pm_{\cal D}, H_{\cal D}, A_{\cal D}$) fields and a light dark gauge boson ($Z_{\cal D}$) field are present in our model, they will induce a shift in the leptonic magnetic moments and mediate LFV decays via the interactions shown in Eqs.~(\ref{eq:Lnu}) and (\ref{eq:kmix}). The contribution to the muon magnetic moment from neutral dark Higgs fields ($H_{\cal D}, A_{\cal D}$) with flavor-changing couplings is negligible in our framework. The dominant contribution will arise from the singly charged scalar ($H^\pm_{\cal D}$) via the interaction term $y_\nu \, \overline{L} \widetilde{\phi} N$. However, the singly charged scalar correction to the muon $g-2$ is negative and partially cancels the other contributions. The one-loop contribution of the dark gauge boson ($Z_{\cal D}$) to the muon $g-2$, on the other hand, is quite promising, and a dedicated study of it will be pursued.
It is worth mentioning that there will be another small contribution to the muon $g-2$ from the $W$ boson exchange via mixing between active and dark neutrinos.}, to atomic parity violation, polarized electron scattering, neutrinoless double $\beta$ decay, rare meson decays as well as to other low energy observables such as the running of the weak mixing angle $\sin^2\theta_{W}$. There might also be consequences for neutrino experiments. It can, for instance, modify neutrino scattering, such as coherent neutrino-nucleus scattering, or impact the results of neutrino oscillation experiments, as this model may give rise to non-standard neutrino interactions in matter. Furthermore, data from accelerator neutrino experiments, such as MINOS, NO$\nu$A, T2K, and MINER$\nu$A, may be used to probe $Z_{\cal D}$ decays to charged leptons, in particular if the channel $\mu^+\mu^-$ is kinematically allowed. We anticipate new rare Higgs decays, such as $h_{\rm SM} \to Z Z_{\cal D}$, or $H^\pm_{\cal D} \to W^\pm Z_{\cal D}$, which, depending on $m_{Z_{\cal D}}$, may affect LHC physics. Finally, it may be interesting to examine the apparent anomaly seen in $^8$Be decays~\cite{Kozaczuk:2016nma} in the light of this new dark sector. The investigation of these effects is currently under way but beyond the scope of this letter and shall be presented in a future work.
\begin{table}[htb] \begin{tabular}{||c|c|c|c||} \hline\hline \multicolumn{4}{c}{\bf Vacuum Expectation Values}\\ \hline \hline $v$ (GeV) & $\omega_1$ (MeV) & $v_{\phi}$ (MeV) & $\omega_2$ (MeV)\\ \hline $246$ & $136$ & $0.176$ & $0.65$ \\ \hline \hline \multicolumn{4}{c}{\bf Coupling Constants}\\ \hline\hline $\lambda_H$ & $\lambda_{H\phi}=\lambda_{H\phi}'$ & $\lambda_{H S_1}$ & $\lambda_{H S_2}$ \\ \hline $0.129$ & $10^{-3}$ & $10^{-3}$ & $-10^{-3}$ \\ \hline $\lambda_{\phi S_1}$ & $\lambda_{\phi S_2}$ & $\lambda_{S_1}$ & $\lambda_{S_1 S_2}$ \\ \hline $10^{-2}$ & $10^{-2}$ & $2$ & $0.01$ \\ \hline $\mu$ (GeV) & $\mu'$ (GeV) & $\alpha$ & $g_{\cal D}$\\ \hline $0.15$ & $0.01$ &$10^{-3}$ & 0.22 \\ \hline\hline \multicolumn{4}{c}{\bf Bare Masses}\\ \hline\hline \multicolumn{2}{||c|}{$m_{\phi}$ (GeV)} & \multicolumn{2}{c||}{$m_2$ (GeV)} \\ \hline \hline \multicolumn{2}{||c|}{100} & \multicolumn{2}{c||}{5.51} \\ \hline \hline \end{tabular} \caption{\label{tab1} Input values for a benchmark point in our model that can provide an explanation of the low energy MiniBooNE excess~\cite{Aguilar-Arevalo:2018gpe,Bertuzzo:2018itn}. 
See Tab.~\ref{tab2} for the respective physical masses and mixings.} \end{table} \begin{table*}[htb] \begin{tabular}{||c|c|c|c|c|c|c|c|c||} \hline\hline \multicolumn{9}{c}{\bf Masses of the Physical Fields}\\ \hline\hline $m_{h_{\rm SM}}$ (GeV) & $m_{H_{\cal D}}$ (GeV) & $m_{S_{\cal D}}$ (MeV) & $m_{S'_{\cal D}}$ (MeV) & $m_{H^\pm_{\cal D}}$ (GeV) & $m_{A_{\cal D}}$ (GeV) & $ m_{a_{\cal D}}$ (MeV) & $m_{Z_{\cal D}}$ (MeV) & $m_{N_{\cal D}}$ (MeV)\\ \hline $125$ & $100$ & $272$ & $320$ & 100 & 100 & 272 & 30 & 150 \\ \hline \hline \multicolumn{9}{c}{\bf Mixing between the Fields }\\ \hline\hline $\theta_{H\phi} $ & $\theta_{HS_1}$ & $\theta_{HS_2}$ & $\theta_{\phi S_1}$ & $\theta_{\phi S_2}$ & $\theta_{S_1 S_2} $ & $e\epsilon$ & $\epsilon' $ & $|U_{\alpha N}|^2$\\ \hline $1.3\times 10^{-6}$ & $2.1 \times 10^{-6}$ & $10^{-8}$ & $1.2 \times 10^{-3}$ & $8.3 \times 10^{-7}$ & $3.4\times 10^{-2}$ & $2\times 10^{-4}$ & $3.6 \times 10^{-14}$ & $ \mathcal{O}(10^{-6})$ \\ \hline \hline \end{tabular} \caption{\label{tab2} Physical masses and mixings for the benchmark point of our model that can provide an explanation of the low energy MiniBooNE excess~\cite{Aguilar-Arevalo:2018gpe,Bertuzzo:2018itn}. The light-heavy neutrino mixing is schematically denoted by $|U_{\alpha N}|^2$, and $m_{N_{\cal D}}$ denotes the order of magnitude of the diagonal entries of the dark neutrino mass matrix.} \end{table*} \section{\label{sec:Conclusion}Final Conclusions and Remarks} The main purpose of this letter has been to explicitly connect the generation of neutrino masses to the existence of a new light dark sector. Doing so, we are able to lower the scale of neutrino mass generation substantially below the electroweak one by resorting to a dynamical breaking of a new $U(1)_{\cal D}$ dark gauge symmetry under which SM particles are neutral. 
Our secluded sector consists of the minimal dark field content able to ensure anomaly cancellation, as well as the spontaneous breaking of the dark gauge symmetry without the appearance of a Nambu-Goldstone boson. It consists of a new scalar doublet, two scalar singlets and a set of six new fermion singlets, all charged under the dark symmetry. A judicious choice of dark charges allows us to generate neutrino masses by a dynamical inverse seesaw mechanism, but unlike the usual inverse seesaw scenario, the so-called $\mu_{\rm ISS}$-term is here dynamically generated, and can be small in a technically natural way. Interestingly, neutrino masses effectively emerge only at the level of dimension 9 operators, and we can have a new light dark gauge boson in the spectrum. The dark sector is mostly secluded from experimental scrutiny, as it only communicates with the SM by mixing: the SM Higgs mixing with dark scalars, neutrinos mixing with dark fermions, and through kinetic and mass mixing with the dark gauge boson. The low scale phenomenology of the model is simple yet rich. It is possible that our model gives sizable contributions to several experimental observables, such as the value of the muon $g-2$ or the Majorana mass in neutrinoless double $\beta$ decay, or influences atomic parity violation, polarized electron scattering, or rare meson decays, among others. Moreover, the mechanism we propose in this letter could provide a novel explanation of the MiniBooNE low energy excess of electron-like events~\cite{Bertuzzo:2018itn}. As a final remark, let us stress that we presented here only the low scale realization of the model, imposed by the hierarchy of vevs we have selected. Nevertheless, we could have chosen a different one, for instance, $\omega_1\gtrsim v$. In that case we would have a high scale realization of the model, with unique phenomenological consequences at the LHC, for instance displaced-vertex or prompt multi-lepton signatures.
\section*{Acknowledgments} \noindent We thank Kaladi Babu for useful discussions and Oscar \'{E}boli for careful reading of the manuscript. This work was supported by Funda\c{c}\~ao de Amparo \`a Pesquisa do Estado de S\~ao Paulo (FAPESP) under contracts 12/10995-7 and 2015/25884-4, and by Conselho Nacional de Ci\^encia e Tecnologia (CNPq). R.Z.F. is grateful for the hospitality of the Fermilab Theoretical Physics Department during the completion of this work. The work of S.J. is supported in part by the US Department of Energy Grant (DE-SC0016013) and the Fermilab Distinguished Scholars Program. Fermilab is operated by the Fermi Research Alliance, LLC under contract No. DE-AC02-07CH11359 with the United States Department of Energy. This project has also received support from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement N$^\circ$ 690575 (InvisiblesPlus) and N$^\circ$ 674896 (Elusives).
\section{Introduction} Observation of lepton number violation would be one of the most spectacular pieces of evidence for deviations from the standard model. Lepton mixing arises naturally in many of the extensions of the standard model. Here we consider the top-color assisted technicolor (TC2) scenario introduced by Hill \cite{hill}. In TC2 there exists an extra $U(1)$ group which breaks at a higher energy than the electroweak breaking scale. The couplings of this $U(1)$ are generally not generation-universal, so there will be flavor-changing neutral current processes including the possibility of lepton number violation. After the exposition of the theoretical framework in section II, we present the calculations of the rates for $\mu\to e\gamma$, $\mu\to 3e$, $\mu\text{-}e$ conversion in nuclei and the intrinsic dipole moments for leptons in a TC2 scenario for which Chivukula and Terning \cite{chiv} studied constraints from precision $Z$ data. The decay rate for $\mu\to e\gamma$ does not provide so stringent a limit since it is a one-loop process involving a photon vertex. With the current data, $\mu\text{-}e$ conversion in Ti gives limits that are roughly an order of magnitude better than $\mu\to 3e$. This will considerably improve with the proposed MECO experiment \cite{meco}. For the TC2 scenario we have considered, we found that the current experimental limits allow for mixing angles of magnitude ranging between the analogous elements of the CKM matrix $K$ and the elements of $\sqrt{K}$. \section{Theory} In TC2, one has $SU(2)\times U(1)_{1}\times U(1)_{2}$ as the electroweak gauge symmetry. The extra $U(1)$ will generate a new neutral gauge boson with a mass expected to be around $1\;{\rm TeV}$ \cite{1tev}.
We require the breaking to occur in two stages, in the following pattern {\Large \[ {\underbrace{{}_{SU(2)\times}\underbrace{{}_{U(1)_{1} \times U(1)_{2}}}_{U(1)_{Y}}}_{U(1)_{EM}}} \] } \noindent Here $Y_{1}$ and $Y_{2}$ are the generators of $U(1)_{1}$ and $U(1)_{2}$ respectively, and $Y=Y_{1}+Y_{2}$ is the ordinary hypercharge. After the first step of symmetry breaking we want the gauge boson corresponding to $U(1)_{Y}$ to remain massless so that the second stage can proceed as in the standard model. This requires the first stage to be triggered by a neutral, $SU(2)$-singlet condensate. The second stage can be triggered by $SU(2)$-doublet condensates as in the standard model\footnote{ Here we adopt a Higgs-like formalism for the sake of future clarity in notation. In TC2 fermions don't acquire masses via fundamental scalars. If one insists on using fundamental scalars to generate fermion masses one will need more than two doublets to provide generational mixing via neutral currents of the new gauge boson since models with fewer than three Higgs doublets are equipped with a natural GIM mechanism.}. \noindent The covariant derivative for $SU(2)\times U(1)_{1}\times U(1)_{2}$ is $\partial^{\mu}+ig T^{a}W_{a}^{\mu}+ig'_{1}Y_{1}B_{1}^{\mu}+ ig'_{2}Y_{2}B_{2}^{\mu}$. Here $T^{a}$, $(a=1,2,3)$, are the generators of $SU(2)$. The gauge couplings can be parameterized as $g=e/\sin\theta$, $g'_{2}=e/\cos\theta\,\cos\phi$ and $g'_{1}=e/\cos\theta\,\sin\phi$. Here $\theta$ is the weak mixing angle and $\phi$ is a new mixing angle. The value of $\sin^{2}\phi$ should be smaller than $1/2$, since a larger value would merely interchange the labeling of the $U(1)$'s. Furthermore, in TC2 one of the $U(1)$'s is strong; choosing it to be $U(1)_{1}$, one has $\alpha_{2}={g'}_{1}^{2}/4\pi\simeq O(1)$ and this gives roughly $\sin^{2}\phi\approx O(0.1)$.
We now rotate the $B_{1}^{\mu}$, $B_{2}^{\mu}$ fields in terms of $\phi$ as follows \begin{mathletters} \begin{eqnarray}{\label{eq:2}} B^{\mu} &=& \cos\phi\, B_{2}^{\mu} + \sin\phi\, B_{1}^{\mu} \; , \\ Z_{2}^{\mu} &=& \cos\phi\, B_{1}^{\mu} - \sin\phi\, B_{2}^{\mu}\; . \end{eqnarray} \end{mathletters} \noindent This choice of basis guarantees that $B^{\mu}$, coupling to $Y$, remains massless after the first stage of the breaking pattern. The new gauge boson, $Z_{2}$, gets most of its mass at this stage. For the second stage of symmetry breaking, we rotate the $B^{\mu}$ and $W_{3}^{\mu}$ fields in terms of $\theta$, as in the standard model, \begin{mathletters} \begin{eqnarray}{\label{eq:3}} A^{\mu} &=& \cos\theta\, B^{\mu} + \sin\theta W_{3}^{\mu} \; , \\ Z_{1}^{\mu} &=& \cos\theta\, W_{3}^{\mu} - \sin\theta\, B^{\mu}\; . \end{eqnarray} \end{mathletters} \noindent The use of $SU(2)$-doublet condensates at this stage provides mass to $Z_{1}$. The heavier boson, $Z_{2}$, also gets some additional mass and $A$ remains massless. The currents to which these gauge bosons couple are as follows: \begin{mathletters} \begin{eqnarray} &A&\;\;\text{couples to}\;\;Q\equiv T^{3}+Y\;\;\text{with strength}\;\;e\; ,\\ &Z_{1}&\;\;\text{to}\;\;C\equiv T^{3}-Q\sin^{2}\theta\;\; \text{with}\;\;g_{Z}\equiv e/\cos\theta\sin\theta\; , \\ &Z_{2}&\;\;\text{to}\;\;C'\equiv Y_{1}-Y\sin^{2}\phi\;\;\text{with}\;\; g_{Z'}\equiv e/\cos\theta\sin\phi\cos\phi\label{eq:dumbc} \; . \end{eqnarray} \end{mathletters} \noindent Generally, the mass matrix for these gauge bosons is not diagonal, because the $SU(2)$-doublet condensates couple to both $Z_{1}$ and $Z_{2}$. Following the formalism in \cite{chiv}, we write the mass eigenstates as follows \begin{mathletters} {\label{eq:4}} \begin{eqnarray} Z &\simeq& Z_{1} - \frac{\tan\phi\sin\theta}{\eta}(1+\frac{\xi} {\sin^{2}\phi}) Z_{2} \; ,\\ Z' &\simeq& \frac{\tan\phi\sin\theta}{\eta}(1+\frac{\xi} {\sin^{2}\phi}) Z_{1} + Z_{2} \; .
\end{eqnarray} \end{mathletters} \noindent Here we introduced \begin{mathletters} \begin{eqnarray}{\label{eq:5}} \xi &=& \sum_{_{j}}<T^{3}Y_{1}>_{_{j}}/\sum_{_{j}}<T^{3}T^{3}>_{_{j}} \; ,\\ \eta &=& \frac{\sin^{2}\theta}{\sin^{2}\phi\cos^{2}\phi} \sum_{_{j}}<C'C'>_{_{j}}/\sum_{_{j}}<T^{3}T^{3}>_{_{j}} \; . \end{eqnarray} \end{mathletters} \noindent We make use of Higgs language for the sake of notation; $j$ runs over the condensates used to trigger the breaking stages and $<X>_{_{j}}$ means the VEV of $X$ with respect to the $j$'th condensate. Now, in natural TC2 models the technifermion $Y_{2}$ hypercharges can be taken to be isospin symmetric \cite{isolane}. Then, since $Q=T_{3}+Y$ is conserved, the only contribution to $\xi$ comes from the top-quark condensate. This is \begin{equation}{\label{eq:xi}} \xi=2\frac{f_{t}^{2}}{v^{2}}\left(Y_{1L}^{t}-Y_{1R}^{t}\right), \end{equation} \noindent with $v\approx 250\;{\rm GeV}$ and $f_{t}\approx 64\;{\rm {GeV}}$ \cite{chiv}, the top-pion decay constant. Thus, $\xi\approx0.13(Y_{1L}^{t}-Y_{1R}^{t})$. The masses of the eigenstates in (\ref{eq:4}) are \begin{mathletters} \begin{eqnarray}{\label{eq:6}} M^{2}_{Z} & \simeq & M^{2}_{ZSM} \left(1-\frac{\tan^{2}\phi\sin^{2}\theta} {\eta}(1+\frac{\xi}{\sin^{2}\phi})^{2}\right)\; ,\\ M^{2}_{Z'} & \simeq & \eta\;M^{2}_{ZSM} \left(1+\frac{\tan^{2}\phi \sin^{2}\theta}{\eta}(1+\frac{\xi}{\sin^{2}\phi})^{2}\right) \; . \end{eqnarray} \end{mathletters} \noindent Here $M_{ZSM}$ is the standard model prediction for the mass of $Z$. Then the correction to the $\rho$ parameter due to the shift in the $Z$ mass is given by \begin{equation}{\label{eq:rho}} \delta\rho_{Z'} \simeq \frac{\tan^{2}\phi\sin^{2}\theta}{\eta}(1+\frac{\xi}{\sin^{2}\phi})^{2}\; . \end{equation} \noindent If $\xi=-\sin^{2}\phi$, there will be no $Z_{1}-Z_{2}$ mixing, but the generation mixing effects of $Z_{2}$ will remain. The shift in $\rho$ must not be bigger than a percent \cite{PDG}.
So, to this order, we must have \begin{equation}{\label{eq:eta}} \eta\simeq\left(\frac{M_{Z'}}{M_{Z}}\right)^{2} \;. \end{equation} \noindent In TC2, we expect $M_{Z'}\gtrsim 1-2\;{\rm TeV}$, meaning that $\eta\gtrsim 100$. \noindent We now rewrite the currents to which $Z$ ($Z'$) couples, with $g_{Z}$ ($g_{Z'}$) factored out; \begin{mathletters} \begin{eqnarray} J_{Z} &=& C-\zeta\sqrt{\delta\rho}\;C' \; ,\\ J_{Z'} &=& C'+\frac{g_{Z}^{2}}{g_{Z'}^{2}}\zeta\sqrt{\delta\rho}\;C \; . \end{eqnarray} \end{mathletters} \noindent Here, we suppressed chirality subscripts on $C$ and $C'$ for the sake of notation and omitted the $Z'$ subscript from $\delta\rho_{Z'}$. We also used $\zeta\equiv({g_{Z'}M_{Z}})/({g_{Z}M_{Z'}})\approx O(0.1)$ and $\delta\rho$ to have a compact and model independent notation. If we consider $C$ and $C'$ as matrices in the 3-dimensional generation space, $C$ is a multiple of the identity, since the standard model $Z$ has universal couplings. Thus, $C$ will commute with the rotation matrices that bring the leptons to their mass eigenbasis. However, there is no a priori reason for $C'$ to be a multiple of the identity, and in TC2 it isn't; after rotating the fermion fields, a non-universal $C'$ will induce tree-level generation mixing. We denote the rotated mass-eigenstate \footnote{Clearly, our motivation is independent of whether neutrinos have mass or not, because the tree-level mixing is due to a neutral gauge boson.} charged lepton fields as follows (chiral indices are suppressed); \begin{equation} \psi^{l} = \Lambda \psi'^{l} \; . \end{equation} \noindent Here $\psi^{l}$, $(l=e,\mu,\tau)$, are the lepton mass eigenstates and $\Lambda$ is a $3\times3$ unitary matrix.
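The two numerical estimates above, the $0.13$ prefactor of $\xi$ and $\eta\gtrsim 100$, are easy to verify; a minimal sketch (the illustrative $M_{Z'}=1$ TeV is an assumption, chosen at the lower end of the expected range):

```python
# Prefactor of xi = 2 f_t^2 / v^2 (Y1L_t - Y1R_t): the quoted 0.13.
f_t, v = 64.0, 250.0                 # GeV, top-pion decay constant and EW vev
xi_prefactor = 2.0 * f_t**2 / v**2

# eta ~ (M_Z' / M_Z)^2: a 1 TeV Z' (illustrative) already gives eta > 100.
M_Z, M_Zp = 91.19, 1000.0            # GeV
eta = (M_Zp / M_Z)**2
```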
As mentioned above, $C'$ won't commute with the rotation matrices, so we introduce the rotated lepton vertex matrices \begin{equation}{\label{eq:vertices}} L\equiv\bar{\Lambda}C'^{l}\Lambda\; .\label{eq:vertices1} \end{equation} One of the biggest problems of such a theory is the presence of flavor-changing neutral currents involving the first two generations. This is cured in TC2 by having the new gauge boson $Z_{2}$ couple with equal strength to the first two generations, and differently to the third. This implies the following \begin{mathletters} \begin{eqnarray} L^{e\mu}&=&(C'^{\tau}-C'^{e}){\bar{\Lambda}}^{e\tau}\Lambda^{\tau\mu} \; ,\\ L^{l\tau}&=&(C'^{\tau}-C'^{e}){\bar{\Lambda}}^{l\tau}\Lambda^{\tau\tau} \; , \;\;\;\; l=e,\mu \; . \end{eqnarray} \end{mathletters} \noindent Then, if one assumes $\Lambda^{\tau\tau}\gg\Lambda^{l\tau}$, $l=e,\mu$, one has \begin{equation}{\label{eq:iden1}} L^{\mu\tau}\; , L^{e\tau} \gg L^{e\mu} \; . \end{equation} \noindent Thus mixing between the first two generations is suppressed from the outset. In what follows, we will present the results for $\mu\to e\gamma$, the electron's electric dipole moment, $\mu\to 3e$ and $\mu\text{-}e$ conversion in Ti. To get a feel for the numerical implications, we will use a TC2 model for which $Y_{1}=0$ ($C'=-Y\sin^{2}\phi$) for the first two generations and $Y_{1}=Y$ ($C'=Y(1-\sin^{2}\phi)$) for the third one. This results in $\xi\approx -0.07$. Chivukula and Terning \cite{chiv} fit the full precision $Z$ data including atomic parity violations to this model. The results of their fit are summarized in Fig.~\ref{fig1}. As can be seen from this graph, the lower bound for $M_{Z'}$ is around $1-2\;{\rm TeV}$. \section{Lepton Mixing Processes} \subsection{{\bf The Amplitude for \boldmath{$ {\rm l} \to {\rm l'}\gamma$}}} We calculated the amplitude for $l\to l'\gamma$ in a generalized $R_{\xi}$ gauge following the formalism of Ref.~\cite{bjlawe}.
This process occurs at one loop; the photon couples to the internal lepton propagator and the loop closes with a $Z$ or $Z'$ line. The matrix element is ${\cal{M}}=\epsilon^{*}_{\mu}(q){\bar{u}}(p')\Gamma^{\mu}u(p)$ and $\Gamma^{\mu}$ is given by \begin{equation}{\label{eq:ltolpg}} \Gamma^{\mu}=-i\frac{eg_{Z'}^{2}}{16M_{Z'}^{2}\pi^{2}}\left[F_{_{+}}^{ll'} \sigma^{\mu\nu}q_{\nu}-F_{_{-}}^{ll'}\sigma^{\mu\nu}q_{\nu}\gamma^{5}\right]\;. \end{equation} \noindent The form factors read \begin{equation}{\label{eq:ltolpg2}} F_{_{\pm}}=\left[\frac{1}{3}\left[m,L^{2}_{L}\right]_{_{\pm}}-\frac{2}{3} \frac{\sqrt{\delta\rho}}{\zeta}C_{L}\left[m,L_{L}\right]_{_{\pm}}-L_{L}mL_{R} +\frac{\sqrt{\delta\rho}}{\zeta}C_{L}\left[m,L_{R}\right]_{_{\pm}}\right] \;\pm\;\left({}_{L}\leftrightarrow {}_{R}\right)\;. \end{equation} \noindent Here $\left[x,y\right]_{_{\pm}}=xy\pm yx$ and $m$ is the mass matrix of the leptons. From this amplitude one can calculate the decay $l\to l'\gamma$ and the electric dipole moments for $l$. \subsubsection{{\bf The Electron Electric Dipole Moment}} The electric dipole moments are given by the coefficient of $\sigma^{\mu\nu}q_{\nu}\gamma^{5}$ in Eq.~(\ref{eq:ltolpg}), so for the electron we need to evaluate $F^{ee}_{_{-}}$. This gives \begin{equation}{\label{eq:anomel2}} d^{e}=\frac{e g_{Z'}^{2}}{16M_{Z'}^{2}\pi^{2}}{\rm Im}\left(m_{\tau}L_{L}^{e\tau}L_{R}^{\tau e} + m_{\mu}L_{L}^{e\mu}L_{R}^{\mu e}\right)\; . \end{equation} Assuming Eq.~(\ref{eq:iden1}) holds, the RHS of the equation above is dominated by the first term. The experimental value for $d^{e}$ is $(-0.27\pm 0.83) \times 10^{-26} {\rm e\; cm}$ \cite{PDG}. Taking the $Z'$ contribution to lie within $1\sigma$, we get the following constraint: \begin{equation}{\label{eq:anomel3}} M_{Z'}\gtrsim \frac{39.3}{\sin\phi\cos\phi}\left[{\rm Im} \left({\bar{\Lambda}}_{L}^{e\tau}\Lambda_{L}^{\tau\tau} {\bar{\Lambda}}_{R}^{\tau\tau}\Lambda_{R}^{\tau e}\right)\right]^{1/2} \;\;{\rm TeV}.
\end{equation} \noindent If one considers the magnitude of the quantity above ignoring the phases and assuming $\Lambda_{L}\approx\Lambda_{R}\approx K$ one finds $M_{Z'}\gtrsim 0.14/(\sin\phi\cos\phi)\;\;{\rm TeV}$. Had we used $\Lambda_{L}\approx\Lambda_{R}\approx \sqrt{K}$ this would change to $M_{Z'}\gtrsim 0.02/(\sin\phi\cos\phi)\;\;{\rm TeV}$. Recalling that in TC2 one expects $M_{Z'}$ to be around $1-2\;{\rm TeV}$, the former gives $\sin^{2}\phi\gtrsim 0.01$ and the latter $\sin^{2}\phi\gtrsim 4\times 10^{-4}$. These are expected, because very small values of $\sin^{2}\phi$ would make $g_{Z'}$ diverge and this in turn would result in a large mass for $Z'$. On the other hand one can get rid of the electric dipole moments by assuming $\Lambda_{R}^{i\tau}\approx\Lambda_{R}^{\tau i}\approx 0$, $i\neq\tau$, leaving $\Lambda_{L}$ unconstrained (or vice-versa). In the context of TC2, this type of behavior was strongly advocated for quark mixing angles to naturally eliminate the very stringent constraints resulting from $B^{0}_{d}-\bar{B}^{0}_{d}$ mixing \cite{komihill}. \subsubsection{{\bf {\boldmath $\mu\to e\gamma$}}} Using the amplitude in Eq.~(\ref{eq:ltolpg}) we find that the decay rate is \begin{equation}{\label{eq:mutoeg}} \Gamma(\mu\to e\gamma)=\alpha_{e}\left(\frac{g_{Z'}^{4}m_{\mu}^{3}} {2048\pi^{4}M_{Z'}^{4}}\right)\left[ |F_{_{+}}^{e\mu}|^{2} + |F_{_{-}}^{e\mu}|^{2}\right]\; . \end{equation} With ${\rm BR}(\mu\to e\gamma)<4.9\times 10^{-11}$ \cite{PDG} and assuming for simplicity $\Lambda_{R}^{i\tau}\approx\Lambda_{R}^{\tau i}\approx 0$, $i\neq\tau$, we have the following, \begin{equation}{\label{eq:mutoegc}} \zeta^{2}|(C_{L}^{'\tau})^{2}-(C_{L}^{'e})^{2}| \left|1-1.5\frac{\sqrt{\delta\rho}}{\zeta (C_{L}^{'\tau}+C_{L}^{'e})}\right| \lesssim \frac{7.2\times 10^{-4}}{\left|\Lambda_{L}^{\mu\tau} \Lambda_{L}^{\tau e}\right|}\; .
\end{equation} \noindent Had we not used $\Lambda_{R}^{i\tau}\approx\Lambda_{R}^{\tau i} \approx0,\;\; i\neq\tau$ the amplitude would be dominated by $L_{L}mL_{R}$, which would make the RHS of Eq.~(\ref{eq:mutoegc}) smaller by an amount $m_{\mu}/m_{\tau}\simeq 0.06$. Even this will not help fix $Z'$ parameters; as we shall see shortly other lepton number violating modes are better by orders of magnitude. \subsection{\bf{ {\boldmath $\mu\to 3 e$}}} This decay mode is allowed at tree level. One finds for the decay rate \begin{equation}{\label{eq:mto3e1}} \Gamma(\mu\to3e)=\frac{m_{\mu}^{5}}{768\pi^{3}}\left(\frac{g_{Z'}} {2M_{Z'}}\right)^{4}(3X+X')\; . \end{equation} \noindent With $X$ and $X'$ given by, \begin{mathletters} \begin{eqnarray}{\label{eq:mto3e}} X&=&\left[|L_{V}^{e\mu}|^{2}+|L_{A}^{e\mu}|^{2}\right] \left[(B_{V}^{ee})^{2}+(B_{A}^{ee})^{2}\right] \; ,\\ X'&=&2\left[L_{V}^{e\mu}(L_{A}^{e\mu})^{*}+L_{A}^{e\mu} (L_{V}^{e\mu})^{*}\right]{B_{V}^{ee}}{B_{A}^{ee}} \; . \end{eqnarray} \end{mathletters} \noindent Here, we defined $B^{ee} \equiv L^{ee}-\frac{\sqrt{\delta\rho}}{\zeta} C^{ee} = (B^{ee})^{*}$. Using ${\rm BR}(\mu\to 3e)<10^{-12}$ \cite{PDG} and assuming, for simplicity, $\Lambda_{R}^{i\tau}\approx\Lambda_{R}^{\tau i} \approx 0$, $i\neq\tau$, we have, \begin{equation}{\label{eq:mto3ec}} \zeta^{2}|C_{L}^{'e}(C_{L}^{'\tau}-C_{L}^{'e})|\left|1-0.03 \frac{\sqrt{\delta\rho}}{\zeta C_{L}^{'e}}+0.03 \frac{\delta\rho}{\zeta^{2}(C_{L}^{'e})^{2}}\right|^{1/2} \lesssim \frac{2.2\times 10^{-7}}{{\left|\Lambda_{L}^{\mu\tau} \Lambda_{L}^{\tau e}\right|}}\; . \end{equation} \noindent This is by far a better constraint than the one imposed by $\mu\to e\gamma$. The polynomial under the square root in (\ref{eq:mto3ec}) is stable between $1.00-1.06$ for $\sin^{2}\phi$ between $0.04-1$. Thus, the constraint is dominated by the term multiplying the square root. We compare this process with the $\mu\text{-}e$ conversion in Ti in the following subsection. 
\subsection{\bf{ {\boldmath $\mu\text{-}e$} Conversion in Ti }} This process is also a tree level process. Borrowing the result from Bernab\'{e}u et al. \cite{barnebeu}, one has the following rate (normalized to the muon capture rate $\Gamma_{c}$): \begin{mathletters} \begin{eqnarray}{\label{eq:mtoe}} {\cal R}(\mu\text{-}e) &=& \frac{\alpha_{e}^{3} m_{\mu}^{5} Z_{eff}^{4} f^{2} g_{Z'}^{4}}{32\pi^{2} Z \Gamma_{c} M_{Z'}^{4}}\left[|L_{L}^{e\mu}|^{2}+|L_{R}^{e\mu}|^{2}\right] X^{2}\;, \\ X &=& (2Z+N) B_{V}^{uu}+(Z+2N) B_{V}^{dd} \;,\\ B^{qq} &\equiv& C'^{qq}-\frac{\sqrt{\delta\rho}}{\zeta} C^{qq} \;\; ,\; q=u,d. \end{eqnarray} \end{mathletters} \noindent The parameters for Ti are $Z=22$, $N=26$, $Z_{eff}\simeq 17.6$, $\Gamma_{c}\simeq 1.7\times 10^{-15}{\rm MeV}$ and $f\simeq 0.54$ \cite{barnebeu}. The current experimental upper bound ${\cal R}(\mu\text{-}e)_{\rm Ti}<4.3\times 10^{-12}$ \cite{PDG} will give (again assuming for simplicity $\Lambda_{R}^{i\tau}\approx\Lambda_{R}^{\tau i}\approx0$, $i\neq\tau$) \begin{equation}{\label{eq:mtoec}} \zeta^{2}|C_{L}^{'e}(C_{L}^{'\tau}-C_{L}^{'e})|\left|1+0.14 \frac{\sqrt{\delta\rho}}{\zeta C_{L}^{'e}}\right|\lesssim \frac{3.6\times 10^{-8}}{{{\left|\Lambda_{L}^{\mu\tau} \Lambda_{L}^{\tau e}\right|}}}\; . \end{equation} The term $\left|1+0.14{\sqrt{\delta\rho}}/{(\zeta C_{L}^{'e})}\right|$ is stable between $0.8$ and $1$ for $\sin^{2}\phi$ between $0.04$ and $1$, so the constraint is dominated by $\zeta^{2}|C_{L}^{'e}(C_{L}^{'\tau}-C_{L}^{'e})|$, as in the case of $\mu\to 3e$. We thus see that the RHS of (\ref{eq:mtoec}) is roughly $6$ times smaller than the RHS of (\ref{eq:mto3ec}), i.e. the constraint from $\mu\text{-}e$ conversion is roughly $6$ times stronger. This observation will remain roughly valid when the assumption $\Lambda_{R}^{i\tau}=\Lambda_{R}^{\tau i}=0$, $i\neq\tau$, is relaxed. So we can disregard the process $\mu\to 3e$ and concentrate only on $\mu\text{-}e$ conversion for the numerical analysis.
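A quick arithmetic check of the numbers above: the coherent nucleon factors entering $X$ of Eq.~(\ref{eq:mtoe}) for titanium, and the relative strength of the two RHS bounds.

```python
# Coherent nucleon factors entering X of Eq. (mtoe) for titanium
Z, N = 22, 26
up_factor = 2 * Z + N     # multiplies B_V^{uu}
down_factor = Z + 2 * N   # multiplies B_V^{dd}
print(up_factor, down_factor)   # prints: 70 74

# Ratio of the RHS of (mto3ec) to the RHS of (mtoec):
ratio = 2.2e-7 / 3.6e-8
print(round(ratio, 1))          # prints: 6.1
```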
Since the rate for $\mu\text{-}e$ conversion depends on $|L_{L}^{e\mu}|^{2}+|L_{R}^{e\mu}|^{2}$, the correct form of the constraint equation is \begin{equation}{\label{eq:mtoecc}} \zeta^{2}|C_{L}^{'e}(C_{L}^{'\tau}-C_{L}^{'e})| \left|1+0.14\frac{\sqrt{\delta\rho}}{\zeta C_{L}^{'e}}\right|\lesssim \frac{3.6\times 10^{-8}}{\delta} \;\; . \end{equation} \noindent Here \begin{equation}{\label{eq:mtoecc2}} \delta \equiv {\sqrt{\left|\Lambda_{L}^{\mu\tau}\Lambda_{L}^{\tau e}\right|^{2}+4 \left|\Lambda_{R}^{\mu\tau}\Lambda_{R}^{\tau e}\right|^{2}}} \;\; . \end{equation} \noindent The factor of $4$ in Eq.~(\ref{eq:mtoecc2}) is the ratio $(Y_{R}/Y_{L})^{2}$ for leptons. \section{Numerical analysis and conclusion} In terms of the relevant quantities the constraint equation (\ref{eq:mtoecc}) reads \begin{equation}\label{eq:constf} \left(\frac{{\rm TeV}}{M_{Z'}}\right)^{2}\frac{1}{s^{2}c^{2}}|C_{L}^{'e} (C_{L}^{'\tau}-C_{L}^{'e})|\left|1+3.2\frac{\sqrt{\delta\rho}} {C_{L}^{'e}}\left(\frac{M_{Z'}}{{\rm TeV}}\right)sc\right|\lesssim \frac{1.9\times 10^{-5}}{\delta}\;, \end{equation} \noindent where $s\equiv\sin\phi$ and $c\equiv\cos\phi$. Taking the lower limits for $M_{Z'}$ from Fig.~\ref{fig1} and feeding them into the constraint equation for $\mu\text{-}e$ conversion, (\ref{eq:constf}), we get the upper limits on $\delta$ presented in Fig.~\ref{fig2}. As can be seen from Fig.~\ref{fig2}, the lowest upper bound for $\delta$ occurs at $\delta\rho=0$, and it increases steeply for smaller values of $\sin^{2}\phi$. This is because $g_{Z'}$ diverges for vanishingly small values of $\sin^{2}\phi$. The numerical upper bound for $\delta$ lies between $10^{-4}$ and $10^{-6}$ for the particular scenario we have considered. These values are compatible with the naive estimates made by taking $\Lambda\approx K$ or $\Lambda\approx \sqrt{K}$ for the mixing matrices.
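For numerical work it is convenient to package Eqs.~(\ref{eq:mtoecc2}) and (\ref{eq:constf}) as functions. The sketch below does this in Python; the inputs standing in for $C_{L}^{'e}$, $C_{L}^{'\tau}$ and the $\Lambda$ products are illustrative placeholders (the actual values depend on the model's hypercharges and mixing matrices).

```python
import math

def delta(lam_L_prod, lam_R_prod):
    """delta of Eq. (mtoecc2); the factor 4 is (Y_R/Y_L)^2 for leptons."""
    return math.sqrt(abs(lam_L_prod)**2 + 4 * abs(lam_R_prod)**2)

def delta_upper_bound(m_zp_tev, sin2phi, c_e, c_tau, sqrt_drho=0.0):
    """Upper bound on delta implied by Eq. (constf); c_e and c_tau stand for
    C_L'^e and C_L'^tau and are placeholder inputs here."""
    s2, c2 = sin2phi, 1.0 - sin2phi
    lhs = abs(c_e * (c_tau - c_e)) / (s2 * c2) / m_zp_tev**2
    corr = abs(1 + 3.2 * sqrt_drho / c_e * m_zp_tev * math.sqrt(s2 * c2))
    return 1.9e-5 / (lhs * corr)

# Purely illustrative inputs, not a fit:
bound = delta_upper_bound(m_zp_tev=2.0, sin2phi=0.04, c_e=1.0, c_tau=2.0)
print(bound)   # a few times 10^{-6}, inside the 10^{-4}-10^{-6} range quoted above
```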
The constraint equation (\ref{eq:constf}) is also sensitive to the $Z'$ hypercharges: for example, in the TC2 model proposed recently by Lane \cite{lanelast} the bound is more stringent by a factor of $10$. Since, within TC2, the reasons for expecting the $Z'$ mass around $1-2$ TeV are somewhat robust, more stringent constraints may rule out lepton mixing altogether. For example, if the MECO experiment reaches the proposed precision of $10^{-16}$ for $\mu\text{-}e$ conversion in Ti without observing any candidate event, the upper bound on $\delta$ will be reduced to $5\times 10^{-3}$ of its present value. This would be hard to accommodate with a reasonable $Z'$ mass and hypercharges and would lead to the exclusion of lepton number violation via $Z'$ for the TC2 model we have considered. The main conclusion to be drawn is that the possibility of lepton number violation in TC2 remains an interesting feature for now. The proposed MECO experiment could tell which of the present models involving lepton number violation may survive. I thank Kenneth Lane for useful discussions and comments on the manuscript. This work was supported in part by the National Science Foundation under grant PHY-9501249, and by the Department of Energy under grant DE-FG02-91ER40676. \begin{figure} \centerline{\epsfig{figure=./sinmass.eps}} \caption{The $95\% $ confidence level lower bound on $M_{Z'}$ resulting from the fit of the TC2 model we are considering to the precision $Z$ data. We reproduced the graph from Chivukula and Terning \protect\cite{chiv}.} {\label{fig1}} \end{figure} \begin{figure} \centerline{\epsfig{figure=./sinmix2.eps}} \caption{The upper bound on $\delta$, defined in Eq.~(\ref{eq:mtoecc2}), resulting from the data in Fig.~\ref{fig1} and the constraint from $\mu\text{-}e$ conversion, Eq.~(\ref{eq:constf}).} {\label{fig2}} \end{figure}
\section*{I. Introduction} In (2+1)-dimensions, and generally in odd-dimensional space-time, a peculiar type of quantum field theory exists: a theory with a {\it Chern-Simons term} in its action.~\cite{SS,DJT} The Chern-Simons term may be added to the action by hand or induced in an effective action by the vacuum polarization of a fermion, just as the anomaly is~\cite{RNSI}. Our understanding of the theory has progressed especially through studies of the topological structure due to the Chern-Simons term.~\cite{TJZW} On the other hand, two important phenomena were discovered in condensed matter systems. One is the quantum Hall effect~\cite{QH} and the other is the high-$T_c$ superconductivity~\cite{HTC}. Both phenomena may be thought of as realizations of macroscopic quantum effects in {\it planar} systems in which electrons are strongly correlated. It is challenging to ask whether and how the Chern-Simons term is related to these phenomena discovered in two-dimensional electron systems. These situations have stimulated the study of (2+1)-dimensional quantum field theories. In (2+1)-dimensional gauge theories, the behaviour in the infrared region seems to be unstable, at least in the perturbative treatment. The Chern-Simons term gives the gauge field a topological mass without breaking gauge symmetry, so that the term may rescue the theory from the infrared catastrophe. This was the original motivation for including the Chern-Simons term in the action.~\cite{JTAPRS} Recalling those studies, we think that one of the most important problems which is still unclear is the role of the Chern-Simons term in the $nonperturbative$ regime. It is our purpose to clarify how the Chern-Simons term affects nonperturbative dynamics. The dynamical mass generation of the fermion is an important phenomenon induced by nonperturbative effects. In even-dimensional space-time, chiral symmetry forbids the mass of the fermion.
The nonperturbative effects can break this symmetry, giving a mass to the fermion. As is well known, we cannot define chiral symmetry for the $two$-component fermion, which belongs to the irreducible spinor representation in (2+1)-dimensions. Instead, parity symmetry ($P_2$) forbids the two-component fermion from having a mass. On the other hand, for the $four$-component fermion, which is composed of $two$ two-component fermions, we can define a kind of ``chiral'' symmetry. The ``chiral'' transformation is defined by a combination of the parity and $Z_2$ flavour transformations ($P_4=P \otimes Z_2$). Many studies on the dynamical mass generation in $QED_3$ have already been done by using the Schwinger-Dyson technique~\cite{PABCWABKW} or the lattice Monte-Carlo method~\cite{DKK}. In almost all previous works, the authors have studied the dynamical mass generation of the four-component fermions in the theory without the Chern-Simons term, which is a spontaneous breaking of $P_4$ but not of $P_2$. These studies have been extended to the case with the Chern-Simons term, but the fermion considered there has been the four-component one.~\cite{KKKM} The theory with $N$ four-component fermions is equivalent to the theory with $2N$ two-component fermions. In this sense, the theory with the four-component fermions can describe only the case with an even number of flavours (counting the two-component fermions). It is interesting to ask what happens in a theory with an odd number of flavours. This is another motivation for the present work. The simplest version of the theory with an odd number of flavours is the case with a single flavour. The first work concerned with the dynamical mass generation of a single $two$-component fermion in $QED_3$ without the Chern-Simons term was done in Ref.~\cite{HMH}. The symmetry studied in that work is $P_2$ but not $P_4$. In this paper, we extend that work to the case with the Chern-Simons term.
Thus we consider the dynamical mass generation of the $single$ $two$-component fermion in $QED_3$ $with$ the Chern-Simons term. The Schwinger-Dyson equations are formulated in the lowest ladder approximation, giving two coupled integral equations. Due to the Chern-Simons term, a drastic change happens in the structure of the Schwinger-Dyson equations. While the wave function renormalization is absent in the Landau gauge in $QED_3$ without the Chern-Simons term, as far as we consider the lowest ladder approximation, the inclusion of the Chern-Simons term makes it impossible to choose a particular gauge in which the wave function is not renormalized. Because there is no {\it a priori} reason that the Landau gauge is special in $QED_3$ with the Chern-Simons term, we should study the coupled integral equations for various values of the gauge parameter. Our strategy in this work is as follows: First we solve the coupled integral equations by an approximate analytical method, which is very crude but may serve as a reference for a more complete numerical calculation. After that, the coupled integral equations are solved numerically. In these analyses, we study the dependence of the dynamical mass on the gauge-fixing parameter, the coupling constant, and the topological mass. This paper is organized as follows: In Sec. II, we explain the model which we use. The Schwinger-Dyson equations in the lowest ladder approximation are derived in Sec. III. In Sec. IV, the equations are analyzed by the approximate analytical method. The numerical analysis of the equations is presented in Sec. V. We conclude with discussions in Sec. VI. In Appendices A and B, some useful formulae are summarized. \section*{II. Maxwell-Chern-Simons $QED_3$} The model which we consider is an extended version of the usual three-dimensional quantum electrodynamics ($QED_3$). It has both the Maxwell and Chern-Simons terms in the action of the U(1) gauge field.
We call this theory the Maxwell-Chern-Simons $QED_3$.~\cite{JTAPRS} The gauge field is coupled to the two-component Dirac fermion. The Lagrangian density of the theory is given by \begin{eqnarray} {\cal L}= - \frac{1}{4} F_{\mu\nu} F^{\mu\nu} + \frac{\mu}{2} \varepsilon^{\mu \nu \rho} A_\mu \partial_\nu A_\rho - \frac{1}{2\alpha}(\partial_\mu A^\mu)^2 + \bar{\psi}(i \not \! \partial - e \not \! \! A)\psi \ \ . \label{lagrangian} \end{eqnarray} The second term on the right-hand side of Eq.(\ref{lagrangian}) is the so-called Chern-Simons term. It is well known that the term gives the gauge field the mass $\mu$ without breaking the gauge symmetry. $\mu$ is called the topological mass because the Chern-Simons term has a topological meaning as a secondary characteristic class~\cite{TJZW,CS}. $\alpha$ is the gauge-fixing parameter and $e$ is the gauge coupling constant. $\psi$ is the two-component fermion field which belongs to the irreducible spinor representation in (2+1)-dimensions. The Dirac matrices are defined by $\gamma^0=\sigma_3, \gamma^1=i\sigma_1, \gamma^2=i\sigma_2$ with ${\rm diag}(g^{\mu\nu})=(1,-1,-1)$, where the $\sigma_i$'s ($i=1, 2, 3$) are the Pauli matrices. The $\gamma^\mu$'s satisfy the relations $\{ \gamma^\mu, \gamma^\nu \}=2g^{\mu \nu}$, $\gamma^\mu \gamma^\nu = -i \epsilon^{\mu \nu \rho} \gamma_\rho + g^{\mu \nu}$ and $tr[\gamma^\mu \gamma^\nu ] = 2g^{\mu \nu}$. In this representation, there does not exist a matrix which anti-commutes with all of the $\gamma^\mu$'s, so that we cannot define a chiral transformation. This is a specific aspect of odd-dimensional space-time. In even dimensions, chiral symmetry requires that a fermion be massless. In odd dimensions, the chiral symmetry itself does not exist. Instead, the mass term of the fermion is forbidden by parity symmetry.
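The Dirac-matrix relations above can be verified numerically. The following sketch checks all three identities, assuming the sign convention $\epsilon^{012}=+1$ (not stated explicitly in the text):

```python
import numpy as np

# gamma^0 = sigma_3, gamma^1 = i sigma_1, gamma^2 = i sigma_2,
# with metric diag(+1,-1,-1); epsilon^{012} = +1 is assumed here.
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
gam = [s3, 1j * s1, 1j * s2]          # gamma^mu, upper index
g = np.diag([1.0, -1.0, -1.0])        # metric g^{mu nu}
eps = np.zeros((3, 3, 3))
for (a, b, c), sgn in {(0, 1, 2): 1, (1, 2, 0): 1, (2, 0, 1): 1,
                       (0, 2, 1): -1, (2, 1, 0): -1, (1, 0, 2): -1}.items():
    eps[a, b, c] = sgn

I2 = np.eye(2)
for mu in range(3):
    for nu in range(3):
        # {gamma^mu, gamma^nu} = 2 g^{mu nu}
        assert np.allclose(gam[mu] @ gam[nu] + gam[nu] @ gam[mu],
                           2 * g[mu, nu] * I2)
        # tr[gamma^mu gamma^nu] = 2 g^{mu nu}
        assert np.isclose(np.trace(gam[mu] @ gam[nu]), 2 * g[mu, nu])
        # gamma^mu gamma^nu = -i eps^{mu nu rho} gamma_rho + g^{mu nu}
        lowered = sum(eps[mu, nu, r] * g[r, r] * gam[r] for r in range(3))
        assert np.allclose(gam[mu] @ gam[nu],
                           -1j * lowered + g[mu, nu] * I2)
print("all gamma-matrix identities hold")
```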
The parity transformation is defined as \begin{eqnarray} x=(t, x, y) &\rightarrow& x'=(t, -x, y) \ \ , \nonumber \\ \psi(x) &\rightarrow& \gamma^1 \psi(x') \ \ , \nonumber \\ A^0 (x) &\rightarrow& A^0 (x') \ \ , \label{parity} \\ A^1 (x) &\rightarrow& - A^1 (x') \ \ , \nonumber \\ A^2 (x) &\rightarrow& A^2 (x') \ \ . \nonumber \end{eqnarray} Under the parity transformation, the mass term of the fermion and the Chern-Simons term change their signs. Thus the mass terms of both the fermion and the gauge field are forbidden by parity symmetry. We study how the breaking of parity by the topological mass affects the mass generation of the fermion. \section*{III. Schwinger-Dyson Equations} As a non-perturbative method to evaluate the dynamical mass of the fermion, we use the Schwinger-Dyson technique. The Schwinger-Dyson equation for the fermion self-energy $\Sigma(p)$ is written as \begin{eqnarray} \Sigma(p)=(-i e)^2 \int \frac{d^3k}{(2\pi)^3} \ \gamma^\mu \ i S'_F (k) \ \Gamma^\nu(k,p-k) \ i D'_{\mu\nu}(p-k) \ \ . \label{SDeqfull} \end{eqnarray} $\Gamma^\nu(k,p-k)$ is the full vertex function and $D'_{\mu\nu}(p-k)$ is the full propagator of the gauge field. $S'_F$ is the full propagator of the fermion field, which is written as \begin{eqnarray} i S'_F(p)=\frac{i}{A(p)\not \hspace{-0.8mm}p - B(p)} =\frac{i}{\not \hspace{-0.8mm}p-i\Sigma(p)} \ \ , \label{fermion} \end{eqnarray} where $A(p)$ and $B(p)$ are functions of $\sqrt{p_\mu p^\mu}$, as required by relativistic invariance, while $\Sigma(p)$ depends on the $p_\mu$'s. $A(p)^{-1}$ is the wave function renormalization and $B(p)/A(p)$ is the mass induced by dynamical effects at the momentum scale $p$. The so-called dynamical mass $m_{phys}$ is defined by $m_{phys}=B(0)/A(0)$ as usual. It is useful to note the relations \begin{eqnarray} tr[\Sigma(p)]=-2i B(p) \ , \ \ tr[\not \hspace{-0.8mm}p \Sigma(p)]=2i(A(p)-1)p^2 \ \ .
\label{relation} \end{eqnarray} To proceed further with our analysis of Eq.(\ref{SDeqfull}), we need to introduce a suitable approximation. In this paper, we limit ourselves to the lowest ladder approximation, where the full propagator of the gauge field and the full vertex are replaced by the free propagator and the tree vertex, respectively, as \begin{eqnarray} i D'_{\mu\nu}(p-k) \approx i D_{\mu\nu}(p-k) \ , \ \ \Gamma^\nu(k,p-k) \approx \gamma^\nu \ \ . \label{ladder} \end{eqnarray} An analysis beyond the ladder approximation will appear elsewhere. The free propagator of the gauge field is derived from the Lagrangian density (\ref{lagrangian}) as \begin{eqnarray} i D^{\mu\nu}(p)=-i\frac{1}{p^2-\mu^2} \left(g^{\mu\nu} - \frac{p^\mu p^\nu}{p^2}\right) + \mu\frac{1}{p^2-\mu^2}\frac{1}{p^2}\varepsilon^{\mu\nu\rho}p_\rho - i\alpha\frac{p^\mu p^\nu}{p^4} \ \ . \label{gauge} \end{eqnarray} Thus the Schwinger-Dyson equation in the lowest ladder approximation becomes \begin{eqnarray} \Sigma(p)=(-i e)^2\int\frac{d^3k}{(2\pi)^3} \gamma^\mu \,i S'_F(k)\gamma^\nu \,i D_{\mu\nu}(p-k) \ \ . \label{SDeqld} \end{eqnarray} We substitute Eqs.(\ref{fermion}) and (\ref{gauge}) into Eq.(\ref{SDeqld}) and use Eq.(\ref{relation}) to produce two coupled equations. After taking the traces, we obtain the coupled integral equations \begin{eqnarray} B(p) &=& - \frac{i e^2}{2} \int\frac{d^3k}{(2\pi)^3} \frac{1}{ A(k)^2 k^2- B(k)^2} \left[-2\mu A(k) \frac{p^2-k^2-(p-k)^2} {\{(p-k)^2-\mu^2\}(p-k)^2} \right. \nonumber \\ &+& \left. 4 B(k) \frac{1}{(p-k)^2-\mu^2} +2\alpha B(k) \frac{1}{(p-k)^2}\right] \ \ , \label{Bd3k} \\ A(p) &=& 1- \frac{i e^2}{2 p^2} \int\frac{d^3k}{(2\pi)^3} \frac{1}{ A(k)^2 k^2- B(k)^2} \left[ \frac{A(k)}{(p-k)^2-\mu^2} \left\{\frac{(p^2-k^2)^2}{(p-k)^2}-(p-k)^2 \right\} \right. \nonumber \\ &-& \frac{2\mu B(k)}{(p-k)^2-\mu^2} \left\{\frac{p^2-k^2}{(p-k)^2}+1 \right\} - \left.
\alpha A(k) \left\{\frac{(p^2-k^2)^2}{(p-k)^4} - \frac{p^2+k^2}{(p-k)^2} \right\} \right] \ \ . \label{Ad3k} \end{eqnarray} Now we change the metric to the Euclidean one by the Wick rotation, $(k^0,\vec{k}) \rightarrow (ik^0,\vec{k})$ and $(p^0,\vec{p}) \rightarrow (ip^0,\vec{p})$. Then $k^2$ and $p^2$ are replaced by $-k^2=-(k^0)^2-(\vec{k})^2$ and $-p^2= -(p^0)^2-(\vec{p})^2$. After that, we transform the integration variables $k^\mu$ to polar coordinates $(k, \theta,\phi)$. The angular integrations over $\theta$ and $\phi$ can be done explicitly. (The useful integral formulae are listed in Appendix A.) Finally we obtain the coupled integral equations which contain only the integration over the radial variable $k$, \begin{eqnarray} B(p) &=& \frac{e^2}{8\pi^2p} \int_{0}^{\infty}dk \frac{k}{A(k)^2 k^2+B(k)^2} \left[ \left\{\alpha B(k) - \frac{1}{\mu}(p^2-k^2) A(k) \right\} \ln\frac{(p+k)^2}{(p-k)^2} \right. \nonumber \\ &+& \left. \left\{\frac{1}{\mu}(p^2-k^2) A(k) +\mu A(k) +2 B(k) \right\} \ln\frac{(p+k)^2+\mu^2}{(p-k)^2+\mu^2} \right] \ \ , \label{B} \\ A(p) &=& 1+\frac{e^2}{8\pi^2p^3}\int_{0}^{\infty}dk \frac{k}{A(k)^2 k^2+ B(k)^2} \left[ -2pk(\alpha+1) A(k) + \left\{\frac{1}{2\mu^2}(p^2-k^2)^2 A(k) \right. \right. \nonumber \\ &+& \left. \frac{1}{\mu}(p^2-k^2) B(k) + \frac{1}{2}\alpha(p^2+k^2) A(k) \right\} \ln\frac{(p+k)^2}{(p-k)^2} + \left\{\frac{1}{2}\mu^2 A(k) \right. \nonumber \\ &-& \left. \left. \frac{1}{2\mu^2}(p^2-k^2)^2 A(k) +\mu B(k) - \frac{1}{\mu}(p^2-k^2) B(k) \right\} \ln\frac{(p+k)^2+\mu^2}{(p-k)^2+\mu^2} \right] \ \ . \label{A} \end{eqnarray} In the following Secs. IV and V, we solve these equations by an approximate analytical method and also numerically by using an iteration method. Notice that the parameters $e^2$ and $\mu$ have the dimensions of mass. We may rewrite Eqs.
(\ref{B}) and (\ref{A}) in dimensionless form by defining $\hat{\mu} \equiv \mu/e^2$, $b(x) \equiv B(e^2 x)/e^2$ and $a(x) \equiv A(e^2 x)$, where $x$ is a dimensionless variable defined by $p=e^2 x$. The equations are just the ones obtained by setting $e^2=1$ and replacing $(A, B, \mu)$ by $(a, b, \hat{\mu})$ in Eqs. (\ref{B}) and (\ref{A}). The theory is thus controlled by only one dimensionless parameter $\hat{\mu}$. After solving the dimensionless equations for $a(x)$ and $b(x)$, we can convert them to $A(p)$ and $B(p)$. \section*{IV. Approximate Analytical Studies} \subsection*{IV-A. $\mu \rightarrow 0$ limit} We can easily check that Eqs. (\ref{B}) and (\ref{A}) reduce to the Schwinger-Dyson equations of $QED_3$ without the Chern-Simons term if we set the topological mass $\mu$ equal to zero. In fact, taking the limit $\mu \rightarrow 0$ in Eqs. (\ref{B}) and (\ref{A}), we obtain \begin{eqnarray} B(p)&=&(\alpha+2) \frac{e^2}{8 \pi^2 p} \int^\infty_0 dk \frac{kB(k)} {A(k)^2 k^2 + B(k)^2} \ln\frac{(p+k)^2}{(p-k)^2} \ \ , \label{BwoCS} \\ A(p)&=&1 - \alpha \frac{e^2}{4 \pi^2 p^3} \int^\infty_0 dk \frac{kA(k)} {A(k)^2 k^2 + B(k)^2} \left[ pk - \frac{p^2+k^2}{4} \ln\frac{(p+k)^2}{(p-k)^2} \right] \ \ , \label{AwoCS} \end{eqnarray} which are the Schwinger-Dyson equations in the lowest ladder approximation derived for $QED_3$ without the Chern-Simons term.\footnote{In ~\cite{HMH}, there were some typographical errors. We give the correct equations in Eqs. (\ref{BwoCS}) and (\ref{AwoCS}).} We can see that there exists a specific gauge in which the wave function renormalization is absent. Thus in the Landau gauge ($\alpha=0$), Eq. (\ref{AwoCS}) gives the simple solution $A(p)=1$, and the problem reduces to solving Eq. (\ref{BwoCS}) with $A(p)=1$. In the case with the Chern-Simons term, as is seen in Eqs. (\ref{B}) and (\ref{A}), there does not exist such a specific gauge in which the wave function is not renormalized.
Since we cannot find a self-evident reason that the Landau gauge is still special in $QED_3$ with the Chern-Simons term, it is fair to study Eqs. (\ref{B}) and (\ref{A}) for various values of the gauge-fixing parameter $\alpha$. \subsection*{IV-B. Small $p$ Expansion and Perturbative Results} At first sight, Eqs. (\ref{B}) and (\ref{A}) seem to be dangerous in the limit $p \rightarrow 0$. But we can check that this limit is well-defined. In the region $p \approx 0$, Eqs.(\ref{B}) and (\ref{A}) are written as \begin{eqnarray} B(p) &=& \frac{e^2}{\pi^2} \int_{0}^{\infty}dk \frac{k}{A(k)^2 k^2 + B(k)^2} \left[ \frac{k \mu}{k^2+\mu^2} A(k) + \left( \frac{\alpha}{2 k} + \frac{k}{k^2+\mu^2} \right) B(k) \right. \nonumber \\ &+& \left. O(p^2) \right] \ \ , \label{B0} \\ A(p) &=& 1+\frac{e^2}{\pi^2} \int_{0}^{\infty}dk \frac{k}{A(k)^2 k^2 + B(k)^2} \left[ \frac{1}{3} \left\{ \frac{\alpha}{k} - \frac{2k \mu^2}{(k^2+ \mu^2)^2} \right\} A(k) \right. \nonumber \\ &-& \left. \frac{\mu}{3k} \frac{k^2-\mu^2}{(k^2+\mu^2)^2} B(k) + O(p^2) \right] \ \ . \label{A0} \end{eqnarray} (See Appendix B.) From Eqs. (\ref{B0}) and (\ref{A0}), we can derive the result obtained at the lowest order of perturbation theory. By setting $A(k)=1$ and $B(k)=0$ and performing the integration over $k$, we have \begin{eqnarray} B(0)=\frac{e^2}{2 \pi} \frac{|\mu|}{\mu} \ , \ \ A(0)=1 - \frac{e^2}{6 \pi} \frac{|\mu|}{\mu^2} + \frac{e^2}{3 \pi^2} \frac{\alpha}{\epsilon} \ \ , \label{Per} \end{eqnarray} where $\epsilon$ is the infrared cutoff of the integration over $k$. It should be noticed that $B(0)$ depends only on the sign of $\mu$. This also may be a specific aspect of (2+1)-dimensions. The dependence of $A(0)$ on $\mu$ shows that only the Landau gauge is free from the infrared divergence. On the other hand, $A(0)$ is singular at $\mu=0$, so that the theory with the Chern-Simons term may not be smoothly connected to the theory without the Chern-Simons term at the lowest order of perturbation.
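The lowest-order value $B(0)=e^{2}/2\pi$ of Eq.~(\ref{Per}) (for $\mu>0$) can be reproduced by evaluating the $k$-integral of Eq.~(\ref{B0}) with $A(k)=1$ and $B(k)=0$. A minimal numerical sketch (plain trapezoidal rule; the grid and the upper cutoff are arbitrary choices, and the neglected tail beyond the cutoff contributes only at the $10^{-5}$ level):

```python
import numpy as np

e2, mu = 1.0, 1.0                  # e^2 and the topological mass (illustrative units)
k = np.linspace(1e-6, 1e4, 2_000_000)

# Integrand of Eq. (B0) with A(k)=1, B(k)=0:
# (e^2/pi^2) * (k/k^2) * (k*mu/(k^2+mu^2)) = (e^2/pi^2) * mu/(k^2+mu^2)
f = (e2 / np.pi**2) * mu / (k**2 + mu**2)
B0 = 0.5 * np.sum((f[1:] + f[:-1]) * np.diff(k))   # trapezoidal rule

print(B0, e2 / (2 * np.pi))   # both approximately 0.159
```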
\subsection*{IV-C. Constant Approximation} Before proceeding to a numerical analysis, it is very useful if we can estimate $A(0)$ and $B(0)$ analytically, even under a fairly crude approximation. The kernels of these integral equations are damped rapidly as the integration variable $k$ increases, so that the contribution from $k \approx 0$ is the dominant one in the integrals. We therefore approximate $A(k)$ and $B(k)$ by $A(0)$ and $B(0)$ in the integrals. We call this ``the $constant$ approximation''. Of course this approximation might be too crude for our purpose, and we only use the result as a reference in the numerical analysis. Under this approximation, we can perform the remaining radial integration and obtain the simple algebraic equations \begin{eqnarray} B(0) &=& \frac{e^2}{2\pi} \frac{1}{A(0)} (\frac{\alpha}{2}+1) \ \ \ , \label{Balge} \\ A(0) &=& 1+\frac{e^2}{6\pi} \frac{\alpha}{B(0)} \ \ \ , \label{Aalge} \end{eqnarray} where we have considered the case $\mu>0$. (See Appendix B for the details.) Solving the coupled algebraic equations (\ref{Balge}) and (\ref{Aalge}), we obtain \begin{eqnarray} B(0) &=& \frac{e^2}{2\pi} + \frac{e^2}{12\pi}\alpha \ \ , \label{Bcnst} \\ A(0) &=& 1+\frac{2\alpha}{\alpha+6} \ \ . \label{Acnst} \end{eqnarray} From Eqs.(\ref{Bcnst}) and (\ref{Acnst}), we can see that the dependence of $B(0)$ and $A(0)$ on the gauge-fixing parameter, the coupling constant and the topological mass has the following peculiar features: \begin{itemize} \item[1)] Dependence on the gauge-fixing parameter \\ $B(0)$ depends linearly on $\alpha$. It is suggestive that $A(0)$ is singular at $\alpha=-6$, where $B(0)$ vanishes. In the Landau gauge ($\alpha=0$), $A(0)=1$ and $B(0)=e^2/2\pi$. $A(0)=1$ is favourable for us because $A(p)=1$ means that the Ward-Takahashi identity is satisfied. \item[2)] Dependence on the coupling constant \\ $A(0)$ does not depend on $e^2$. It means that the deviation of $A(0)$ from 1 is independent of the coupling constant.
This is crucially different from the perturbative result given by Eq.(\ref{Per}), where the deviation is proportional to $e^2$. On the other hand, $B(0)$ is proportional to $e^2$. \item[3)] Dependence on the topological mass \\ We recognize that Eqs.(\ref{Bcnst}) and (\ref{Acnst}) are independent of the topological mass $\mu$. In fact, if we apply the constant approximation to the case without the Chern-Simons term, we obtain the same results as Eqs. (\ref{Bcnst}) and (\ref{Acnst}). It means that the amount of explicit parity breaking in the gauge sector by the topological mass does not affect the dynamical mass in the fermion sector. \end{itemize} Now we proceed to a more precise numerical evaluation in the next section. \section*{V. Numerical Analysis} \subsection*{V-A. Non-trivial Solutions} We solve the two coupled integral equations (\ref{B}) and (\ref{A}) numerically by using a method of iteration. First we substitute trial functions for $A(k)$ and $B(k)$ in the right-hand sides of Eqs. (\ref{B}) and (\ref{A}) and then calculate the integrals numerically. The outputs so obtained, $A(p)$ and $B(p)$, are substituted back into the right-hand sides until the outputs coincide with the inputs. Finally we obtain convergent functions $A(p)$ and $B(p)$, which satisfy the integral equations, provided that solutions of Eqs. (\ref{B}) and (\ref{A}) exist. We obtain non-trivial solutions for various values of the gauge parameter $\alpha$. For $e=1.0$ and $\mu=1.0$, the $A(p)$'s in the cases $\alpha =0, 1, 2, 3$ are shown in Fig. 1. The corresponding $B(p)$'s are presented in Fig. 2. We find a very interesting feature: $A(p)$ is fairly close to 1 in the Landau gauge ($\alpha=0$).
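The same iteration can be illustrated on the constant-approximation equations (\ref{Balge}) and (\ref{Aalge}), whose closed-form solution (\ref{Bcnst}), (\ref{Acnst}) is known. A minimal sketch, with $e^{2}=1$, $\alpha=1$ and $\mu>0$ as illustrative inputs:

```python
import math

e2, alpha = 1.0, 1.0   # illustrative values; mu > 0 assumed as in Sec. IV-C

# Iterate Eqs. (Balge)/(Aalge):
#   B = (e^2/2pi)(alpha/2 + 1)/A,   A = 1 + e^2 alpha/(6 pi B)
A, B = 1.0, 1.0        # trial values
for _ in range(200):
    B_new = e2 / (2 * math.pi) * (alpha / 2 + 1) / A
    A_new = 1 + e2 * alpha / (6 * math.pi * B_new)
    if abs(B_new - B) < 1e-12 and abs(A_new - A) < 1e-12:
        break
    A, B = A_new, B_new

# Closed-form solution, Eqs. (Bcnst)/(Acnst):
B_exact = e2 / (2 * math.pi) + e2 * alpha / (12 * math.pi)
A_exact = 1 + 2 * alpha / (alpha + 6)
print(abs(B - B_exact) < 1e-10, abs(A - A_exact) < 1e-10)  # prints: True True
```

Here the map is a contraction, so the iterates converge geometrically to the closed-form fixed point; the full Eqs.~(\ref{B}) and (\ref{A}) are handled in the same spirit, with the algebraic updates replaced by numerical integrals.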
In the case of $QED_3$ without the Chern-Simons term, $A(p)$ is exactly equal to 1 in the Landau gauge under the lowest ladder approximation \footnote{Outside the lowest ladder approximation, it is known that $A(p)$ differs from one in the infrared.~\cite{AMMKMBR}}.~\cite{HMH} However, in the case of $QED_3$ with the Chern-Simons term, there may be no apparent reason that $A(p)=1$ in the Landau gauge. It is surprising that the numerical calculation of such complicated integral equations yields $A(p) \approx 1$ in the Landau gauge. There might be a simple reason explaining this peculiarity of the Landau gauge. \subsection*{V-B. Dependence on the gauge parameter} The dependence of $A(0)$ and $B(0)$ on the gauge-fixing parameter $\alpha$ is shown in Figs. 3 and 4. In the region $\alpha>0$, the numerical result is consistent with the one obtained in the constant approximation. For $\alpha < 0$, the numerical iteration procedure does not converge, but cycles between two or more functions, none of which are solutions of Eqs. (\ref{B}) and (\ref{A}). This appears to be a manifestation of the well-known ``doubling route to chaos'' frequently exhibited by non-linear iterative algorithms. Unfortunately this prevents us from obtaining numerical solutions for negative values of the gauge parameter. A plot of the gauge invariant condensate $<\bar{\psi} \psi>$ as a function of $\alpha$ is helpful as a way of indicating to what extent gauge symmetry is broken by the bare vertex approximation. The condensate is defined by $<\bar{\psi} \psi>=-i\lim_{x \rightarrow 0} tr S'_F(x)$, where $S'_F(x)$ is the propagator in real space-time coordinates. Using the Fourier transformation and the Wick rotation, we obtain \begin{eqnarray} <\bar{\psi} \psi>= \frac{1}{\pi^2} \int^\infty_0 dk \frac{k^2 B(k)}{A(k)^2 k^2 + B(k)^2} \ \ . \label{condensate} \end{eqnarray} On the other hand, the position of the pole of the Minkowski propagator is also gauge invariant.
To determine the position, we have to perform an analytic continuation from our Euclidean results to the Minkowski ones. But this is a hard task because what we know from the Euclidean analysis is $numerical$. In fact, it is difficult to derive the full analytic properties from the numerical data. Instead, $B(0)/A(0)$ is usually taken as an approximation to the pole mass. We have shown the $\alpha$-dependence of the dynamical fermion mass $m_{phys}=B(0)/A(0)$ and of $<\bar{\psi} \psi>$ in Fig. 5. It shows that the $\alpha$-dependence may be considered fairly weak, compared with the result in the constant approximation. The results obtained in Secs. V-A and V-B suggest that the Landau gauge is still the best gauge. Hereafter we present mainly the results obtained in the Landau gauge. \subsection*{V-C. Dependence on the topological mass and the coupling constant} Now we investigate how the dynamical fermion mass depends on the topological mass and the coupling constant. What we are most interested in is the dependence of the dynamical fermion mass on the topological mass of the gauge field. In the constant approximation, it has been shown that both $A(0)$ and $B(0)$ do not depend on the topological mass. Is this true in the more precise numerical evaluation? In Fig. 6 we show the dependence of $a(0)$ on the dimensionless parameter $\hat{\mu}$ defined in Sec. III. We can see that the deviation of $a(0)$ from 1 is less than 1 \%. We may say that $a(0)$ is almost equal to 1 in the whole region of $\hat{\mu}$. It means that the $\hat{\mu}$- and $e^2$-dependence of $A(0)$ is extremely weak. On the other hand, the $\hat{\mu}$-dependence of $b(0)$ is presented in Fig. 7. It shows that though $B(0)$ is almost constant in the region $\mu > e^2$, it decreases as $\mu$ decreases when $\mu$ is comparable to or smaller than $e^2$. The $e^2$-dependence of $B(0)$ is less explicit in Fig. 7, so we show the explicit $e^2$-dependence of $B(0)$ in Fig. 8.
$B(0)$ increases as the coupling constant becomes larger. The $e^2$-dependence of $B(0)$ is almost linear when $e^2$ is smaller than $\mu$. This is consistent with the constant approximation. But the slope is smaller than the one obtained in the constant approximation, and the line curves downward slightly as the coupling constant becomes larger, which may be a non-perturbative effect. It should be noticed that for very small or large values of the topological mass compared with the coupling constant, a technical difficulty appears in the numerical calculation, as follows: In principle, we do not need to cut off the region of the energy-momentum integrals in Eqs. (\ref{B}) and (\ref{A}). But in the practical prescription of the numerical integration, the infinite range of integration must be replaced by a finite one, so that introducing cut-off parameters is unavoidable. The cut-off dependence of the results must be checked carefully. When the topological mass $\mu$ takes very small or large values compared with $e^2$, it has been found that the integration region must be taken wide enough to get reliable results. This requires considerable computing power. In obtaining our results, we have checked the absence of the cut-off dependence in detail in the parameter region adopted here ($\mu/e^2=10^{-2} \sim 10^4$). Therefore we conclude that the behaviour of $A(0)$ and $B(0)$ mentioned above, especially the decrease of $B(0)$ in the region $\mu/e^2 < 1$, is not due to the cut-off. \section*{VI. Conclusion and Discussion} We have investigated the dynamical mass generation of a $single$ two-component fermion in $QED_3$ $with$ the Chern-Simons term. The coupled Schwinger-Dyson equations for $A(p)$ and $B(p)$, where the full fermion propagator is defined by $S'_F(p)^{-1} = A(p) \not \hspace{-0.8mm} p-B(p)$, have been formulated in the lowest ladder approximation.
When the Chern-Simons term is included, there does not exist a specific gauge where $A(p)$ is automatically equal to 1, as there is in the case without the Chern-Simons term. We examine the dependence of the dynamical mass on the gauge-fixing parameter $\alpha$, the coupling constant $e$ and the topological mass $\mu$ by using both approximate analytical and numerical methods. The coupled integral equations have been solved first analytically in the constant approximation, where $A(p)$ and $B(p)$ have been replaced by $A(0)$ and $B(0)$ respectively. We have found that $B(0)$ is proportional to $\alpha$ and also to $e^2$. On the other hand, $A(0)$ is independent of $e$ and singular at $\alpha=-6$, where $B(0)$ vanishes. We also have found that $A(0)$ and $B(0)$ do not depend on the topological mass. Keeping these facts in mind, we have proceeded to solve the set of integral equations numerically. The dependence of $A(0)$ and $B(0)$ on $\alpha$ and $e$ is almost consistent with the results derived in the constant approximation. $B(0)$ depends on $\alpha$ linearly. The $e^2$-dependence of $B(0)$ is also almost linear, but the slope becomes smaller than in the constant approximation as $e^2$ increases. The $e$-dependence of $A(0)$ is almost absent. To check to what extent gauge symmetry is spoilt by the bare vertex approximation, we have evaluated the gauge invariant condensate $<\bar{\psi} \psi>$ and also $B(0)/A(0)$. The result has shown that their $\alpha$-dependence is fairly weak. Further, we have discovered some novel features in the numerical analysis. First, we have found $A(p) \approx 1$ in the Landau gauge. It is not entirely obvious why the numerical evaluation of such complicated integral equations yields $A(0)$ almost equal to 1 at $\alpha=0$. We may expect that there is some simple reason behind it. 
It should also be noticed that the trivial $A(p)$ in the Landau gauge is obtained only in the lowest ladder approximation~\cite{AMMKMBR}. More sophisticated approximations will be considered in subsequent works. Secondly, we have found strange behaviour of $A(0)$ and $B(0)$ in the region $\alpha <0$: a doubling signal has appeared in the iterations. Such a signal is a common phenomenon in non-linear systems. We wonder whether the signal that appeared here has any physical or mathematical meaning. Thirdly, it seems that the dynamical mass of the fermion is almost constant if $\mu$ is larger than $e^2$, while it decreases when $\mu$ is comparable to or smaller than $e^2$. Why is this? In the case of $\mu \neq 0$, we have found just one solution in our numerical analysis. On the other hand, we know that there exist two solutions in the case of $\mu=0$~\cite{HMH}: one trivial ($B=0$) and the other nontrivial ($B \neq 0$). We have extrapolated our numerical data $B(0)/A(0)$ to $\mu=0$ by using the method of least squares~\cite{MLS} and obtained Fig. 9. The numerical evaluation in $QED_3$ without the Chern-Simons term yields $B(0)=0.104755$ ($\alpha=0$ and $A(0)=1$). The value obtained by the extrapolation is $B(0)/A(0)=0.105255$, so the two results may be consistent with each other. Thus, the solution in the case of $\mu \neq 0$ seems to approach the non-trivial one as $\mu \rightarrow 0$. Of course, we should be careful before accepting this result. As mentioned in Sec. V-C, the numerical evaluation in the region $\mu/e^2 \ll 1$ is technically very difficult, and the behaviour of the solution in the limit $\mu \rightarrow 0$ is still not clear. Some critical behaviour, such as a bifurcation of the solution, might appear. The best way to confirm the result would be an analytical proof, but this is very difficult because the integral equations are highly non-linear. 
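The doubling signal is familiar from simple non-linear iterations. As a purely illustrative toy (the logistic map, not the Schwinger-Dyson iteration itself), the sketch below shows how a fixed-point iteration can lock into a period-2 cycle instead of converging:

```python
def logistic(x, r=3.2):
    # Toy non-linear map; for r = 3.2 the fixed point is unstable and
    # the iterates settle into a period-2 cycle, the same kind of
    # "doubling signal" seen for A(0) and B(0) at alpha < 0.
    return r * x * (1.0 - x)

x = 0.3
for _ in range(1000):            # discard the transient
    x = logistic(x)

cycle = [x]
for _ in range(3):
    cycle.append(logistic(cycle[-1]))
# The iterates alternate: cycle[0] ~ cycle[2], cycle[1] ~ cycle[3],
# while cycle[0] and cycle[1] remain clearly distinct.
print(cycle)
```

Whether the doubling observed in the actual iterations has the same dynamical origin is, as stated above, an open question.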
We are now extending the parameter region to search for further non-trivial structure, especially in the region $\mu/e^2 \ll 1$. The results will be reported separately. \newpage \section*{Appendix A: Angular Integration} We present here the formulae for the angular integration, which are used to derive Eqs.(\ref{B}) and (\ref{A}) in Sec. III. After the Wick rotation, Eqs. (\ref{Bd3k}) and (\ref{Ad3k}) can be rewritten in polar coordinates as \begin{eqnarray} B(p)&=&\frac{e^2}{8\pi^2}\int_{0}^{\infty}dk \frac{k^2}{A(k)^2 k^2+ B(k)^2} \left[ -2\mu(p^2-k^2) A(k) I_0 \right. \nonumber \\ &+& \left. 2 \mu A(k) I_1 + 4 B(k) I_1 + 2\alpha B(k) I_2 \right] \ \ , \nonumber \\ A(p)&=&1+\frac{e^2}{8\pi^2 p^2} \int_{0}^{\infty}dk\frac{k^2}{A(k)^2 k^2 + B(k)^2} \left[ A(k) \left\{(p^2-k^2)^2 I_0 - I_3 \right\} \right. \nonumber \\ &+& \left. 2 \mu B(k) \left\{(p^2-k^2) I_0 + I_1 \right\} - \alpha A(k) \left\{(p^2-k^2)^2 I_4 -(p^2+k^2) I_2 \right\}\right] \ \ . \nonumber \end{eqnarray} The integrals $I_n$ (n=0, 1, 2, 3, 4) are calculated as \begin{eqnarray} I_0&=&\int_{0}^{\pi}\sin\theta d\theta\frac{1}{\{(p-k)^2+\mu^2\} (p-k)^2} =\frac{1}{2\mu^2pk}\ln\frac{(p+k)^2\{(p-k)^2+\mu^2\}} {(p-k)^2\{(p+k)^2+\mu^2\}} \ \ \ , \nonumber \\ I_1&=&\int_{0}^{\pi}\sin\theta d\theta\frac{1}{(p-k)^2+\mu^2} =\frac{1}{2pk}\ln \frac{(p+k)^2+\mu^2}{(p-k)^2+\mu^2} \ \ \ , \nonumber \\ I_2&=&\int_{0}^{\pi}\sin\theta d\theta\frac{1}{(p-k)^2} =\frac{1}{2pk}\ln\frac{(p+k)^2}{(p-k)^2} \ \ \ , \nonumber \\ I_3&=&\int_{0}^{\pi}\sin\theta d\theta\frac{(p-k)^2}{(p-k)^2+\mu^2} =\frac{1}{2pk} \left\{(p+k)^2-(p-k)^2 - \mu^2 \ln\frac{(p+k)^2+\mu^2}{(p-k)^2+\mu^2} \right\} \ \ \ , \nonumber \\ I_4&=&\int_{0}^{\pi}\sin\theta d\theta\frac{1}{(p-k)^4} =-\frac{1}{2pk}\left\{\frac{1}{(p+k)^2}-\frac{1}{(p-k)^2} \right\} \ \ \ . \nonumber \end{eqnarray} \section*{Appendix B: $p \rightarrow 0$ limit and Constant Approximation} The formulae which are used in Sec. IV are summarized here. 
The $p \rightarrow 0$ limit is taken by using the expansion formulae \begin{eqnarray} \ln \frac{(p+k)^2}{(p-k)^2} &=& 4 \frac{p}{k} + \frac{4}{3} (\frac{p}{k})^3 + O(p^5) \ \ \ , \nonumber \\ \ln \frac{(p+k)^2+\mu^2}{(p-k)^2+\mu^2} &=& \frac{4kp}{k^2+\mu^2} + \frac{4}{3} \frac{(k^2-3\mu^2)k}{(k^2+\mu^2)^3} p^3 + O(p^5) \ \ \ . \nonumber \end{eqnarray} In the constant approximation of Eqs. (\ref{B}) and (\ref{A}), we replace the unknown functions $A(k)$ and $B(k)$ by $A(0)$ and $B(0)$. Setting $p=0$, we obtain \begin{eqnarray} B(0) &=& \frac{e^2}{\pi^2} \left[ \left\{ \mu A(0)+(\frac{\alpha}{2}+1) B(0) \right\} J_0 - \mu^2 (\mu A(0)+B(0)) J_1 \right] \ \ \ , \nonumber \\ A(0) &=& 1 + \frac{e^2}{\pi^2} \left[ \frac{\alpha}{3} A(0) J_0 - \frac{\mu}{3} (2\mu A(0)+B(0)) J_1 + \frac{\mu^3}{3} (2\mu A(0)+2B(0)) J_2 \right] \ \ , \nonumber \end{eqnarray} where the $J_i$'s ($i=0, 1, 2$) are given by \begin{eqnarray} J_0&=& \int^\infty_0 \frac{dk}{A(0)^2 k^2+B(0)^2} = \frac{\pi}{2A(0)B(0)} \ \ \ , \nonumber \\ J_1&=& \int^\infty_0 \frac{dk}{A(0)^2 k^2+B(0)^2} \frac{1}{k^2+\mu^2} = \frac{\pi}{2B(0)} \frac{1}{|\mu|} \frac{1}{|\mu| A(0)+B(0)} \ \ \ , \nonumber \\ J_2&=& \int^\infty_0 \frac{dk}{A(0)^2 k^2+B(0)^2} \frac{1}{(k^2+\mu^2)^2} = \frac{\pi}{4B(0)} \frac{1}{|\mu|^3} \frac{2|\mu|A(0)+B(0)}{(|\mu| A(0)+B(0))^2} \ \ \ . \nonumber \end{eqnarray} \section*{Acknowledgment} One of the authors (T. M.) would like to thank Prof. M. Kenmoku for his hospitality at the Department of Physics, Nara Women's University.
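As a quick numerical sanity check of the closed forms quoted in Appendices A and B, the stdlib-only sketch below compares $I_2$ and $J_0$ with direct numerical integration. The values of $p$, $k$, $A(0)$ and $B(0)$ are illustrative placeholders, not fitted results:

```python
import math

def trapezoid(f, a, b, n):
    # Composite trapezoid rule on [a, b] with n subintervals.
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return s * h

# I2 = int_0^pi sin(t) dt / (p-k)^2, with (p-k)^2 = p^2 + k^2 - 2 p k cos(t).
p, k = 1.3, 0.7
i2_numeric = trapezoid(
    lambda t: math.sin(t) / (p * p + k * k - 2.0 * p * k * math.cos(t)),
    0.0, math.pi, 100_000)
i2_closed = math.log((p + k) ** 2 / (p - k) ** 2) / (2.0 * p * k)

# J0 = int_0^inf dk / (A0^2 k^2 + B0^2) = pi / (2 A0 B0); a large but
# finite upper limit stands in for infinity (tail error ~ 1/(A0^2 L)).
A0, B0 = 1.1, 0.4
j0_numeric = trapezoid(lambda q: 1.0 / (A0 * A0 * q * q + B0 * B0),
                       0.0, 1.0e4, 1_000_000)
j0_closed = math.pi / (2.0 * A0 * B0)

print(i2_numeric, i2_closed, j0_numeric, j0_closed)
```

The same cross-check applies to the remaining $I_n$ and $J_i$ formulae.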
\section*{Methods} \bmhead{Determination of velocities} The velocity $\beta_{reac}$ of the $^{23}$Mg nuclei at the reaction time was derived from the momentum of the $\alpha$ particles measured with the VAMOS++ magnetic spectrometer and from the laws of energy and momentum conservation applied to the $^{3}$He($^{24}$Mg,$\alpha$)$^{23}$Mg two-body reaction \begin{equation} \beta_{reac} = \sqrt{1-\gamma_{reac}^{-2}} \nonumber\\ \end{equation} with \begin{equation} \gamma_{reac} = \sqrt{1+\frac{\frac{m_{\alpha}^2\beta_{\alpha}^2}{1-\beta_{\alpha}^2}+\frac{m_{beam}^2\beta_{beam}^2}{1-\beta_{beam}^2}-2\cos(\theta_{\alpha})m_{beam}m_{\alpha}\sqrt{\gamma_{\alpha}^2-1}\sqrt{\gamma_{beam}^2-1}}{m_{recoil}^2}} \label{eq1} \end{equation} where $m_{\alpha}$, $m_{beam}$ and $m_{recoil}$ are the rest-mass energies of the $\alpha$, $^{24}$Mg and $^{23}$Mg nuclei, and $\theta_{\alpha}$ is the angle between the $\alpha$ particle and the beam axis. The parameter $\gamma_{\alpha}$ was measured with VAMOS++ and corrected for the energy losses in the target using the {\tt SRIM} code \cite{bibSRIM}. The parameter $\gamma_{beam}$ was determined prior to the experiment from the measured $\gamma_{\alpha}$ and a $\gamma$-ray transition detected in coincidence. \par The velocity $\beta_{ems}$ of the $^{23}$Mg nuclei at the $\gamma$-ray emission time was derived from the measured $\gamma$ rays. Since the $^{23}$Mg nuclei are moving at the time of the $\gamma$-ray emission, the $\gamma$-ray energy is Doppler shifted, with the measured energies $E_{\gamma}$ shifted from the rest energy $E_{\gamma,0}$ according to \begin{equation} E_{\gamma}=E_{\gamma,0} \frac{\sqrt{1-\beta^2_{ems}}}{1-\beta_{ems}\text{cos}(\theta)} \label{eq2} \end{equation} It follows that \begin{equation} \beta_{ems}=\frac{R^2\text{cos}(\theta)+\sqrt{1+R^2\text{cos}^2(\theta)-R^2}}{R^2\text{cos}^2(\theta)+1} \label{eq4} \end{equation} with $R=E_{\gamma}/E_{\gamma,0}$. 
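The inversion above can be verified with a short round trip: Doppler-shift a rest energy with the first formula and recover the velocity with the second. All numbers below (rest energy, velocity, detection angle) are hypothetical illustrations, not the measured values:

```python
import math

def doppler_shifted(e0, beta, theta):
    # Lab-frame energy of a gamma ray of rest energy e0 emitted by a
    # nucleus moving with velocity beta (in units of c); theta is the
    # angle between the gamma ray and the recoil direction.
    return e0 * math.sqrt(1.0 - beta * beta) / (1.0 - beta * math.cos(theta))

def beta_from_shift(e_meas, e0, theta):
    # Inversion of the Doppler formula for the emission velocity,
    # taking the physical (positive-discriminant) root.
    r = e_meas / e0
    c = math.cos(theta)
    return ((r * r * c + math.sqrt(1.0 + r * r * c * c - r * r))
            / (r * r * c * c + 1.0))

# Round trip with hypothetical values; a backward angle (theta > 90 deg,
# detector upstream) gives a red shift, i.e. R < 1.
e0, beta, theta = 7785.0, 0.06, math.radians(150.0)
e = doppler_shifted(e0, beta, theta)
print(e, beta_from_shift(e, e0, theta))
```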
Here, $R < 1$ since the AGATA detector was located upstream of the target. The angle $\theta$ between the $\gamma$ ray and the $^{23}$Mg recoil nucleus was derived from the measured ($\theta$, $\phi$) angles of the $\gamma$ ray and the $\alpha$ particle using the formulas \begin{align} \cos(\theta)=&\sin(\theta_\gamma)\sin(\theta_{recoil})[\cos(\phi_\gamma)\cos(\phi_{recoil})+\sin(\phi_\gamma)\sin(\phi_{recoil})]\nonumber\\ &+\cos(\theta_\gamma)\cos(\theta_{recoil}) \nonumber \end{align} where \begin{align} \theta_{recoil} =& \text{acos}(\frac{m_{beam}\sqrt{\gamma_{beam}^2-1}-m_{\alpha}\cos(\theta_{\alpha})\sqrt{\gamma_{\alpha}^2-1}}{m_{recoil}\sqrt{\gamma_{recoil}^2-1}}) \nonumber\\ \phi_{recoil} =& \pi + \phi_{\alpha} \label{eq3} \end{align} \bmhead{Fit of velocity-difference profiles} \par Velocity-difference profiles were numerically simulated with a Monte Carlo approach developed in the {\tt EVASIONS} C++/ROOT code. The principle of these numerical simulations will be the subject of a future publication. Simulated velocity-difference profiles were normalized to the measured ones via the profile integrals. The goodness of fit between experimental and simulated profiles was quantified with a Pearson $\chi^2$ test, where the lifetime and the $\gamma$-ray rest energy were taken as free parameters. \bmhead{Branching ratios} The $E_x=7785.0(7)$~keV astrophysical state can decay via proton or $\gamma$-ray emission. Therefore, after applying a selection on $E_x$ in $^{23}$Mg by using the measured $\alpha$ particles, the number of detected protons and $\gamma$ rays allowed us to determine the proton and $\gamma$-ray branching ratios. These values were corrected for the detection efficiencies. On the one hand, the geometrical efficiency of the silicon detector was estimated by numerical simulations. The angular distribution was considered isotropic for the emitted $\ell$=0 protons. 
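The spherical-coordinate expression for $\cos(\theta)$ used above is the dot product of the two unit vectors; a minimal cross-check with arbitrary, purely illustrative angles:

```python
import math

def cos_opening_angle(theta1, phi1, theta2, phi2):
    # Spherical-coordinate form of the opening angle, as used above for
    # cos(theta) between the gamma ray and the recoil nucleus.
    return (math.sin(theta1) * math.sin(theta2)
            * (math.cos(phi1) * math.cos(phi2)
               + math.sin(phi1) * math.sin(phi2))
            + math.cos(theta1) * math.cos(theta2))

def unit(theta, phi):
    # Cartesian unit vector of a direction (theta, phi).
    return (math.sin(theta) * math.cos(phi),
            math.sin(theta) * math.sin(phi),
            math.cos(theta))

# Cross-check against the Cartesian dot product (arbitrary angles).
t1, p1 = math.radians(40.0), math.radians(10.0)
t2, p2 = math.radians(120.0), math.radians(190.0)
u, v = unit(t1, p1), unit(t2, p2)
dot = sum(a * b for a, b in zip(u, v))
print(cos_opening_angle(t1, p1, t2, p2), dot)
```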
On the other hand, the AGATA efficiency was measured at low energies with a radioactive $^{152}$Eu source, and simulated with the AGATA {\tt Geant4} code library \cite{agataG4,FARNEA2010331} to determine the efficiency at high energies after scaling the simulations to the measured efficiencies at low energies. \bmhead{Determination of the $^{22}$Na flux in novae} The amount of $^{22}$Na ejected in a nova outburst was obtained with the simulation codes {\tt MESA}\cite{2011ApJS..192....3P,2013ApJS..208....4P,2015ApJS..220...15P,2018ApJS..234...34P,2019ApJS..243...10P} and {\tt SHIVA}\cite{Jose_1998, jose2016stellar} from its abundance in the ejected layers. {\tt MESA} and {\tt SHIVA} use several criteria for the ejection of a specific layer, based either on achieving escape velocity or a luminosity above the Eddington limit, where the radiative pressure exceeds the gravitational force. \bmhead{Data availability} The experimental data of this work related to particle measurements are available upon reasonable request by contacting the primary author. Due to the large size of the data, they cannot be hosted publicly. The ownership of data generated by the AGATA $\gamma$-ray spectrometer resides with the AGATA collaboration as detailed in the AGATA Data Policy\cite{agataPolicy}. \bmhead{Codes availability} The {\tt EVASIONS} code used in this study is available from the primary author upon reasonable request. Other codes employed here, i.e. {\tt SRIM}\cite{bibSRIM}, {\tt NushellX${@}$MSU}\cite{NUSHELLX}, {\tt MESA}\cite{2011ApJS..192....3P,2013ApJS..208....4P,2015ApJS..220...15P,2018ApJS..234...34P,2019ApJS..243...10P} and {\tt RatesMC} \cite{Longland2010}, are freely available. 
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} The nonlinear optical process of spontaneous parametric down-conversion (SPDC) is a popular method for producing correlated photon pairs, also referred to as biphotons or daughter photons. A major advantage of this process is the ability to quantum mechanically entangle the daughter photons in various degrees-of-freedom including polarization, time, frequency, space, and momentum, for example. The entanglement quality can be very high, making it an attractive source for applications in quantum communication, such as quantum key distribution (QKD) \cite{BB84, Ekert, Serigenko}, as well as experiments in fundamental quantum information science \cite{Zeilenger,bradprl}. In the SPDC process, a pump photon ($p$) enters a nonlinear optical crystal and is annihilated, producing two daughter photons, typically called signal ($s$) and idler ($i$). Energy conservation dictates that $\omega_{p} = \omega_{s}+\omega_{i}$. To be an efficient process, the photons must also abide by momentum conservation, a condition referred to as phase matching. In noncollinear SPDC, the daughter photons are emitted into angles on either side of the pump direction in order to satisfy this phase matching relation. In degenerate SPDC, where $\omega_{s}=\omega_{i}$, this emission occurs in opposite pairs of directions around the pump beam, forming a ring, called the down-conversion ring, in a plane transverse to the pump. In many experiments aimed at creating entangled photon pairs, the polarization degree-of-freedom is used because the measurement process is straightforward and very high purity can be achieved. Because current QKD schemes aim at maximizing photon rates, methods of creating high-brightness polarization-entangled sources have been investigated \cite{Kim,Wong,Kwiat95, Kwiat99, Kwiat05, Kwiat09}. A potential solution involves using Type-I phase matching, where the two daughter photons have the same polarization, orthogonal to that of the pump.
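Energy conservation alone fixes the idler wavelength once the pump and signal wavelengths are chosen. A minimal sketch in vacuum wavelengths (the function name is illustrative):

```python
def idler_wavelength(lam_p, lam_s):
    """Idler vacuum wavelength from omega_p = omega_s + omega_i,
    i.e. 1/lam_p = 1/lam_s + 1/lam_i (all wavelengths in the same units)."""
    return 1.0 / (1.0 / lam_p - 1.0 / lam_s)

# Degenerate case used throughout this paper: a 405 nm pump and an
# 810 nm signal force an 810 nm idler.
print(idler_wavelength(405.0, 810.0))  # 810.0
```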
Recently, Rangarajan \textit{et al.} \cite{Kwiat09} showed that high-brightness polarization entanglement can be achieved in Type-I phase matching when two thin crystals are placed together with their optic axes rotated $90^{\circ}$ with respect to each other \cite{Kwiat05,Kwiat09}. Originally, the process was demonstrated in beta barium borate (BBO, a uniaxial crystal) \cite{Kwiat99, Kwiat05}, but later shown for bismuth triborate (BiBO, a biaxial crystal) \cite{Kwiat09}, which is quite promising for high-brightness applications because of its higher nonlinear coefficient \cite{Hellwig,Ghotbi}. Each crystal produces the same polarization, but there is essentially no which-path information regarding the crystal in which the photons are generated. This process can give rise to polarization entanglement around the entire ring, if the rings overlap everywhere. This enables higher achievable fluxes by allowing for collection of biphotons at multiple pairs of points around the down-conversion ring, known as spatial multiplexing and illustrated in Fig. \ref{multiplex}. \begin{figure} \begin{center} \includegraphics[scale=0.4]{./multiplexing2.pdf} \end{center} \caption{The dark circle illustrates the down-conversion ring. Points A, B, and C represent pairs of daughter photons at different azimuthal angles around the ring. To increase the total count rate, one should collect from multiple points around the ring. This idea is called spatial multiplexing.} \label{multiplex} \end{figure} To achieve entanglement around the whole ring, the collected light must be indistinguishable. This only occurs when the emission patterns are completely circular because any eccentricity of the ring from the first crystal will have its major axis perpendicular to the major axis from the ring produced in the second crystal. This leads to reduced spatial overlap and reduced entanglement.
Although BiBO is a promising crystal for high-brightness SPDC applications, we find that the emission pattern produced from a Type-I interaction with BiBO is elliptical. Eccentricity introduces a possible way of distinguishing photons that were born in the first crystal from those born in the second, thereby potentially reducing the entanglement quality around the entire ring. In the next section, we develop a formalism for predicting theoretically the emission pattern for noncollinear degenerate Type-I SPDC in BiBO and discuss the physical reason for elliptical emission patterns. In Sec. 3, we discuss our experimental setup and present our data. In Sec. 4, we discuss a theoretical model for predicting the eccentricity in a biaxial crystal. We show that this model agrees with our experimental data and that there is a wavelength that minimizes the eccentricity for a given set of emission angles. In Sec. 5, we discuss the repercussions of elliptical rings on the entanglement quality and brightness and further show how the spectrum of the single photons changes as a result of the asymmetry. \section{Phase Matching for Type-I SPDC in BBO and BiBO} The phase matching process for SPDC determines the emission direction of the daughter photons. In general, perfect phase matching occurs when \begin{equation} \vec{k}_{p}=\vec{k}_{s}+\vec{k}_{i}, \end{equation} where \begin{equation} \vec{k}_{j} = \frac{n_{j}(\omega_{j}, \hat{s}_{j})\omega_{j}}{c} \hat{s}_{j}, \end{equation} for $j=(p,s,i)$, where the angle- and frequency-dependent refractive index is given by $n_{j}(\omega_{j}, \hat{s}_{j})$, $\hat{s}_{j}$ is the propagation direction unit vector, and $c$ is the speed of light in vacuum. We follow the geometry, notations, and conventions in Refs. \cite{Beouff,Roberts}. The crystal geometry and photon wavevectors and angles are shown in Fig. \ref{geoandangle}(a). 
A pump photon with wavevector $\vec{k_{p}}$ is incident on a nonlinear crystal and makes angles ($\theta_{p},\phi_{p}$) with the optic axis of the crystal denoted ``OA-U" (``OA-B") for a uniaxial (biaxial) crystal, discussed in more detail below. The signal (idler) photon with wavevector $\vec{k_{s(i)}}$ is emitted at local angles ($\theta_{s(i)},\phi_{s(i)}$) with respect to $\vec{k_{p}}$. For perfect phase matching, $\phi_{s} = \phi_{i} + \pi$. The photons undergo refraction at the air-crystal interface and exit at exterior angles of $\theta_{s(i)}'$. \begin{figure}[!h] \begin{center} \includegraphics[scale=0.32]{./newcrystalandthetavsphiplotcolumn.pdf} \end{center} \caption{(a) Crystal geometry for BBO and BiBO crystals. The pump makes angles ($\theta_{p},\phi_{p}$) with respect to the crystal optic axis. Here, $\phi_{p}=0$. The signal (idler) photons are emitted at local angles ($\theta_{s(i)},\phi_{s(i)}$) with respect to $\vec{k}_{p}$. Refraction at the interface of the crystal results in an exterior angle of $\theta_{s(i)}'$ outside the crystal. (b) The exterior emission angle versus its local azimuthal angle for BiBO (blue, solid line) and BBO (maroon, dashed line). For BBO, we use a pump cut angle of $\theta_{p}=29.392^{\circ}$ while for BiBO we use $\phi_{p}=90^{\circ},\theta_{p}=151.563^{\circ}$. Both curves are calculated using $\lambda_{p}=405$ nm and $\lambda_{s}=\lambda_{i}=810$ nm.} \label{geoandangle} \end{figure} To determine the emission angles ($\theta_{s},\theta_{i},\phi_{s},\phi_{i}$) of the daughter photons, we choose frequencies ($\omega_{p},\omega_{s},\omega_{i}$) as well as the angles the pump wavevector makes with the optic axis ($\theta_{p},\phi_{p}$). 
Using the Sellmeier equations for either BBO or BiBO, we solve for either set of parameters by finding a solution to the simultaneous equations \cite{Beouff} \begin{align} (\Delta k_{x})^{2}+(\Delta k_{y})^{2} + (\Delta k_{z})^{2} = 0, \label{totaldk} \end{align} and \begin{equation} \Delta k_{z} = 0, \label{dkz} \end{equation} where, for example, $\Delta k_{z} = k_{p_{z}}-k_{s_{z}}-k_{i_{z}}$. Evaluating Eqs. \ref{totaldk} and \ref{dkz} involves the refractive index for each photon propagating through the crystal, which depends on the polarization of the photons and the propagation direction. In a uniaxial crystal, which has a single axis of symmetry, an ordinary-polarized photon ($o$-polarized) is polarized perpendicular to the plane that contains its propagation wavevector and the optic axis of the crystal. This photon experiences a refractive index that does not change with the direction of propagation. The extraordinary-polarized photon ($e$-polarized) is polarized in the plane of the optic axis and the propagation vector and experiences an angle-dependent refractive index. Type-I interactions, such as those considered here, include interactions where the pump is an $e$-polarized photon and the daughter photons are $o$-polarized, or vice versa. A biaxial crystal, such as BiBO, has two optic axes and reduced symmetry. Polarized photons are neither $e$-polarized nor $o$-polarized in the sense of a uniaxial crystal. Depending on the type of crystal and wavelength range, polarized photons may experience either an angle-dependent or angle-independent refractive index. The photons are said to be either ``fast'' or ``slow'' instead of ``$e$'' or ``$o$'', where fast (slow) refers to having a smaller (larger) refractive index \cite{Roberts}.
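As a reduced, collinear version of this solve: for degenerate Type-I BBO ($e \to o + o$), perfect phase matching collapses to the single condition $n_{e}(\lambda_{p},\theta_{p}) = n_{o}(\lambda_{s})$, which a one-dimensional root-find handles. The Sellmeier coefficients below are commonly quoted values for BBO and should be treated as illustrative:

```python
import numpy as np
from scipy.optimize import brentq

# Commonly quoted Sellmeier fits for BBO (wavelengths in micrometers);
# treat the coefficients as illustrative rather than authoritative.
def n_o(lam):
    return np.sqrt(2.7359 + 0.01878 / (lam**2 - 0.01822) - 0.01354 * lam**2)

def n_e_principal(lam):
    return np.sqrt(2.3753 + 0.01224 / (lam**2 - 0.01667) - 0.01516 * lam**2)

def n_e(lam, theta):
    """Angle-dependent extraordinary index of a uniaxial crystal."""
    inv_n2 = (np.cos(theta) ** 2 / n_o(lam) ** 2
              + np.sin(theta) ** 2 / n_e_principal(lam) ** 2)
    return 1.0 / np.sqrt(inv_n2)

# Collinear degenerate Type-I phase matching (e -> o + o) reduces the
# simultaneous equations to n_e(lam_p, theta_p) = n_o(lam_s):
theta_p = brentq(lambda th: n_e(0.405, th) - n_o(0.810), 0.01, np.pi / 2 - 0.01)
print(np.degrees(theta_p))  # roughly 29 degrees
```

The result sits close to the BBO cut angle quoted later in the text, which is a useful sanity check on the approach.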
Determining the fast (slow) refractive indices involves finding the length of the minor (major) axes of the optical indicatrix given by Fresnel's equation of wave normals, \begin{equation} \frac{s_{x}^{2}}{n^{-2}(\omega,\hat{s})-n_{x}^{-2}}+\frac{s_{y}^{2}}{n^{-2}(\omega,\hat{s})-n_{y}^{-2}}+\frac{s_{z}^{2}}{n^{-2}(\omega,\hat{s})-n_{z}^{-2}}=0, \label{wavenormals} \end{equation} where $n_{x},n_{y},n_{z}$ are the refractive indices in each principal direction of the crystal at a given vacuum frequency and $n(\omega,\hat{s})$ is the refractive index in a given direction with unit vector $\hat{s}$. For a negative biaxial crystal, such as BiBO, $n_{x}<n_{y}<n_{z}$. Solving Eq. \ref{wavenormals} for $n(\omega,\hat{s})$ using the approach described in Ref. \cite{Beouff}, we obtain two solutions, one for each polarization (fast and slow). In BiBO we find that the pump photons experience the fast refractive index that is angle-independent for wavelengths in the UV and blue part of the spectrum. In contrast, the daughter photons in the red and NIR part of the spectrum experience the slow refractive index that is angle-dependent. In BBO, pump photons in the blue part of the spectrum travel as $e$-polarized photons and experience an angle-dependent refractive index, while the signal and idler photons are $o$-polarized and experience an angle-independent refractive index. This effect is not due to the uniaxial versus biaxial nature of the crystals, but simply due to the angle dependence of the refractive index over certain wavelength ranges for each particular crystal. To our knowledge, the elliptical emission pattern of BiBO has not been noted or observed previously. This may be due to the fact that most previous quantum optics experiments have been conducted with BBO, which produces circular rings as observed and predicted below. This is not the case in BiBO due to the angle-dependent refractive index experienced by the daughter photons. In Fig.
\ref{geoandangle}(b) we plot the external angle $\theta_{s}'$ as a function of its azimuthal angle $\phi_{s}$ for BiBO (blue, solid) and BBO (maroon, dashed). For BiBO, we observe that $\theta_{s}'$ varies with $\phi_{s}$ so that $\theta_{s}'$ has a larger value at $\phi_{s} = 0^{\circ}\hspace{2pt} \textrm{and} \hspace{2pt} 180^{\circ}$ than in the $\phi_{s} = 90^{\circ}\hspace{2pt} \textrm{and} \hspace{2pt}270^{\circ}$ directions. This leads to an elliptical emission pattern for this crystal. For BBO, $\theta_{s}'$ is a constant as a function of $\phi_{s}$, which implies a circular emission pattern. In our analysis, we ignore birefringent walk-off of the beams due to its negligible contribution for the thin crystals considered here, as discussed in detail in the Appendix. \section{Experimental Results} We image the spatial intensity patterns from both BiBO and BBO using the experimental setup shown in Fig. \ref{expsetup}. A 405-nm-wavelength, continuous-wave laser pumps either a BBO or BiBO crystal. Both types of crystals are thin (0.8 mm) in comparison to the Rayleigh length and to the transverse extent of the imaged rings, a requirement for polarization entanglement. As the crystals become thick, which-path information begins to degrade the entanglement because the rings become distinguishable. We test two different BiBO crystals, each with a different set of crystal cut angles ($\theta_{p}, \phi_{p}$) designed to be phase matched for $\lambda_{p}=405$ nm and $\lambda_{s}=\lambda_{i}=810$ nm with an exterior opening angle of $\sim3^{\circ}$. These two crystal cut angles are the angles for which it is straightforward to create an optical beam that propagates through the crystal with negligible walk-off. That is, $\phi_{p}$ is chosen so that the pump, signal, and idler photons essentially propagate as a slow or a fast wave. We can further tilt the crystal around $\theta_{p}$ to tune the opening angle.
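Whether a wave sees the fast or the slow index follows from Fresnel's equation of wave normals (Eq. \ref{wavenormals}): clearing the denominators turns it into a quadratic in $n^{-2}$ whose two roots are the fast and slow solutions. A minimal solver with placeholder principal indices (illustrative numbers, not fitted BiBO data):

```python
import numpy as np

def fast_slow_indices(s, nx, ny, nz):
    """Two refractive indices seen by a plane wave along unit vector s,
    from Fresnel's equation of wave normals written as a quadratic in
    x = n**-2.  The larger root is the fast wave, the smaller the slow."""
    sx2, sy2, sz2 = np.asarray(s, dtype=float) ** 2
    u, v, w = nx**-2.0, ny**-2.0, nz**-2.0
    a = sx2 + sy2 + sz2                        # = 1 for a unit vector
    b = -(sx2 * (v + w) + sy2 * (u + w) + sz2 * (u + v))
    c = sx2 * v * w + sy2 * u * w + sz2 * u * v
    roots = np.sort(np.roots([a, b, c]).real)  # both roots are real here
    return 1.0 / np.sqrt(roots[1]), 1.0 / np.sqrt(roots[0])  # (fast, slow)

# Sanity check: propagation along the x principal axis must recover
# n_y and n_z exactly.
n_fast, n_slow = fast_slow_indices([1.0, 0.0, 0.0], 1.76, 1.80, 1.92)
print(n_fast, n_slow)  # ~1.80, ~1.92
```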
One crystal has cut angles of $\phi_{p} = 90^{\circ},\theta_{p} = 151.7^{\circ}$ while the other has $\phi_{p} = 0^{\circ},\theta_{p} = 51^{\circ}$. The BBO crystal has a pump cut angle of $\theta_{p} = 29.3^{\circ}$. \begin{figure}[!h] \begin{center} \includegraphics[scale=0.35]{./expsetup.pdf} \end{center} \caption{Experimental setup. A 405 nm laser (Omicron, LDM405.120.CWA.L.WS, $<0.02$ nm bandwidth FWHM) pumps either a BiBO or BBO crystal (Newlight Photonics). The down-converted light exits the crystal and propagates 12.4 cm before it passes through a 40 cm focal length lens (Thorlabs LAC726B). This lens functions to direct the rays from the down-conversion ring to an object plane that we image onto the camera. We image the ring $\sim$ 10 cm after this lens by placing a zoom lens (Navitar Zoom 7000E) on an EMCCD (Andor $\textrm{iXon}^{EM}$) $\sim$ 1 m away from the lens. The magnification of this imaging system is 8.6. Each pixel on the camera chip is 24 $\mu$m $\times$ 24 $\mu$m, the sensitive area of the chip is 3 mm $\times$ 3 mm, and we cool the chip down to $-70^{\circ}\textrm{C}$ to reduce dark noise. We put a 10 nm bandpass optical filter (Andover 810FS10-50) before the camera lens to select only nearly degenerate wavelengths.} \label{expsetup} \end{figure} Making precise measurements of the eccentricity requires a well-calibrated imaging system, depicted in Fig. \ref{expsetup}. To achieve this, we perform the following procedure: first, we remove the crystal and the 40 cm focal length lens, leaving only the laser, a set of apertures for alignment, steering mirrors, and the camera-lens system. We then place a flat mirror against the camera lens to ensure the back-reflected light is going straight back through the apertures. We then place a target in the object plane. The target is a flat piece of metal with a 20-mm-diameter ring scored in the surface of the metal, where the diameter tolerance is $\sim 25$ $\mu$m.
This machined ring has a hole in the center so that we can easily align it with the laser beam path and check its back reflections for tilt. We check the eccentricity of the target and make small adjustments to the camera's position and tilt until we minimize eccentricity. The eccentricity is caused primarily by any amount of tilt in the system, which arises from imperfect alignment. Astigmatism and coma also arise from an imperfectly aligned optical system, although these are negligible compared to the tilt for a well-aligned optical system. We then replace the crystal and check back reflections with a mirror on the camera lens. Finally, we add in the lens and ensure its alignment by checking back reflections. We collect images for each crystal and fit the observed emission pattern with an elliptical function in two transverse dimensions with a Gaussian profile in the longitudinal dimension. The free parameters of our model include the height of the Gaussian peak, background counts (offset of the Gaussian from 0), major/minor axes of the ellipse, width of the Gaussian, and location of the center point. We calculate the eccentricity of the ellipse by \begin{equation} \epsilon = \sqrt{1-\left(\frac{a}{b}\right)^{2}}, \end{equation} where $b$ ($a$) is the major (minor) axis of the ellipse. Figure \ref{data} shows the transverse spatial intensity pattern for down-conversion rings from both BBO and BiBO crystals. For BBO, the intensity pattern of the down-conversion ring is essentially circular (Fig. \ref{data}(a)), $\epsilon=0\pm0.013$, where the error is a combined statistical and systematic error of 0.013, which will be discussed below. Hence, our results indicate the pattern for BBO is circular to within our experimental uncertainties. The range of opening angles that are phase matched for BBO is smaller than that for BiBO because the slope of the opening angle versus wavelength is larger in BBO, the slope of the refractive index versus angle being steeper.
This leads to a thinner down-conversion ring for BBO. \begin{figure}[!h] \begin{center} \includegraphics[scale=0.3]{./newdatareducedcolumn.pdf} \end{center} \caption{SPDC emission pattern for BBO and BiBO. The emission pattern from the camera is plotted versus the two transverse dimensions ($x$ and $y$) where $y$ is in the direction perpendicular to the optical table. The patterns from the camera have been scaled up by the magnification and converted from pixels into cm. The red, solid lines are ellipses with the major and minor axes taken from the fit parameters. (a) Down-conversion ring from the BBO crystal is nearly circular with an eccentricity of 0.013. (b) Down-conversion ring from the BiBO crystal cut at phase matching angle of ($\theta_{p}=151.7^{\circ},\phi_{p} = 90^{\circ}$) has a higher eccentricity with the major axis in the $x$-direction. (c) Down-conversion ring from the BiBO crystal cut at phase matching angle of ($\theta_{p}=51^{\circ},\phi_{p} = 0^{\circ}$) has a large eccentricity.} \label{data} \end{figure} For BiBO cut at phase matching angles $\phi_{p} = 90^{\circ}, \theta_{p} = 151.7^{\circ}$ (Fig. \ref{data}(b)), the eccentricity is greater than that for BBO, with $\epsilon = 0.172 \pm 0.019$. For BiBO, with $\phi_{p} = 0^{\circ}, \theta_{p} = 51^{\circ}$ (Fig. \ref{data}(c)), $\epsilon = 0.367 \pm 0.012$ and is easily seen by eye. Table 1 gives the results of our observations and analysis. We calculate the major/minor axes using the experimental values for $\theta_{s}'(\phi_{s}=0)/\theta_{s}'(\phi_{s}=90)$ and Eq. \ref{Eccentricity1}. Using the procedure outlined in Sec. 2, we calculate theoretical values for the eccentricity using an exterior angle for either the major or minor axis given in the first column. The exterior angle is what we measure experimentally and is simply the opening angle in the crystal propagated outside the crystal using Snell's law.
We use this fixed exterior angle to calculate the $\theta_{p}$ and $\phi_{p}$ that give this value at either $\phi_{s} = 0$ or $\phi_{s} = 90^{\circ}$. In this case, instead of knowing $\theta_{p}$ and calculating $\theta_{s}$ and $\theta_{i}$, we know the latter and calculate the former. Measuring the external emission angle is more straightforward than measuring the angle the pump beam makes with the crystal optic axis. Once we determine $\theta_{p}$, we use it to predict the emission angle around the entire ring. Using Snell's law, we propagate these angles outside the crystal and find the eccentricity through the relation \begin{equation} \epsilon = \sqrt{1-\left(\frac{\tan[\theta_{s}'(\phi_{s}=0^{\circ})]}{\tan[\theta_{s}'(\phi_{s}=90^{\circ})]}\right)^{\pm2}}, \label{Eccentricity1} \end{equation} where the $\pm$ accounts for the possibility that the axis at $\phi_{s}=0$ may be either the major or the minor axis. We use Eq. \ref{Eccentricity1} to calculate both experimental and theoretical values for $\epsilon$. For the experimental values, we have measured both $\tan[\theta_{s}'(\phi_{s}=90^{\circ})]$ and $\tan[\theta_{s}'(\phi_{s}=0^{\circ})]$, while for the theoretical values, we use only one of these, either $\tan[\theta_{s}'(\phi_{s}=90^{\circ})]$ or $\tan[\theta_{s}'(\phi_{s}=0^{\circ})]$, and calculate the other by the procedure described above. The error in our experiments includes a statistical error from the fitting process and a systematic error, which is most likely due to small astigmatism in the system. We calculate the systematic error by taking repeated measurements at different times, which requires realignment using the alignment procedure outlined above.
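The fit and eccentricity extraction described above can be sketched with an elliptical radial coordinate and a Gaussian cross-section; parameter names, grid, and starting values are illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

def ring_model(coords, amp, offset, a, b, width, x0, y0):
    """Elliptical ring with a Gaussian cross-section: intensity peaks
    where the scaled radius equals 1."""
    x, y = coords
    r = np.sqrt(((x - x0) / a) ** 2 + ((y - y0) / b) ** 2)
    return offset + amp * np.exp(-((r - 1.0) ** 2) / (2.0 * width**2))

def eccentricity(a, b):
    minor, major = sorted([abs(a), abs(b)])
    return np.sqrt(1.0 - (minor / major) ** 2)

# Recover the eccentricity of a synthetic noiseless ring with
# semi-axes 1.0 and 1.1, so eps = sqrt(1 - (1/1.1)**2) ~ 0.417.
x, y = np.meshgrid(np.linspace(-2, 2, 101), np.linspace(-2, 2, 101))
truth = (100.0, 5.0, 1.0, 1.1, 0.1, 0.0, 0.0)
data = ring_model((x, y), *truth)
p0 = (90.0, 0.0, 0.9, 1.2, 0.15, 0.05, -0.05)
popt, _ = curve_fit(ring_model, (x.ravel(), y.ravel()), data.ravel(), p0=p0)
print(eccentricity(popt[2], popt[3]))  # ~0.417
```

On real camera frames the background, noise, and starting guesses matter more, but the structure of the fit is the same.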
\begin{table} \caption{Experimental Data} \begin{center} \begin{tabular}[h,c]{||p{1.55cm}p{1.95cm}||p{1.15cm}|p{1.55cm}|l||} \hline &&{BBO}&{BiBO}&{BiBO}\\ &&&$\phi_{p}=90^{\circ}$&$\phi_{p}=0^{\circ}$\\ \hline Experiment&$\frac{\theta_{s}'(\phi_{s}=0)}{\theta_{s}'(\phi_{s}=90)}$ ($^{\circ}$) & $\frac{4.12358}{4.12382}$ &$\frac{4.0294}{4.10893}$&$\frac{4.05449}{4.31798}$ \\ &$\epsilon$& 0.013 & 0.172& 0.360\\ \hline Theory&$\epsilon$ &0&0.166& 0.361\\ \hline Error& &&& \\ Statistical&&0.011& 0.001&0.001\\ Systematic &&0.007&0.019 &0.012 \\ \hline \hline \end{tabular} \end{center} \end{table} \section{Theoretical Model} In this section, we outline a theoretical model for predicting the approximate eccentricity of the emission pattern in each crystal. Because our emission angles are small ($\theta_{s},\theta_{i}\ll 1$), a small-angle approximation around the collinear case ($\theta_{s} =\theta_{i} =0$) is a natural method for obtaining an approximate analytic solution. This allows us to gain a better understanding of the parameters that affect the eccentricity. Starting from the phase matching equations in the $y$ and $z$ directions, we find that phase matching occurs when \begin{equation} n_{s} \cos(\theta_{s}) +n_{i} \cos(\theta_{i}) -2n_{p}=0, \label{PMz} \end{equation} \begin{equation} n_{s} \sin(\theta_{s}) +n_{i} \sin(\theta_{i}) =0. \label{PMy} \end{equation} We expand $\sin(\theta_{j})$, $\cos(\theta_{j})$, $n_{s}$, and $n_{i}$ to second order in $\theta_{j}$.
The approximate expressions of the refractive indices are given by \begin{align} n_{s} &\approx \tilde{n}_{s}+\frac{\partial\tilde{n}_{s}}{\partial \theta_{p}}\delta\theta_{p}+\frac{\partial\tilde{n}_{s}}{\partial \theta_{s}}\theta_{s}+\frac{1}{2} \frac{\partial^{2}\tilde{n}_{s}}{\partial \theta_{p}^{2}}\delta\theta_{p}^{2}+ \nonumber\\ &\hspace{90pt}\frac{\partial^{2}\tilde{n}_{s}}{\partial\theta_{p}\partial\theta_{s}}\delta\theta_{p} \theta_{s}+\frac{1}{2}\frac{\partial^{2}\tilde{n}_{s}}{\partial \theta_{s}^{2}}\theta_{s}^{2}, \label{nsapprox} \end{align} \begin{align} n_{i} &\approx \tilde{n}_{s} +\frac{\partial\tilde{n}_{s}}{\partial \theta_{p}}\delta\theta_{p}-\frac{\partial\tilde{n}_{s}}{\partial \theta_{s}}\theta_{i}+\frac{1}{2} \frac{\partial^{2}\tilde{n}_{s}}{\partial \theta_{p}^{2}}\delta\theta_{p}^{2}- \nonumber\\ &\hspace{90pt}\frac{\partial^{2}\tilde{n}_{s}}{\partial\theta_{p}\partial\theta_{s}}\delta\theta_{p} \theta_{i}+\frac{1}{2}\frac{\partial^{2}\tilde{n}_{s}}{\partial \theta_{s}^{2}}\theta_{i}^{2}, \label{niapprox} \end{align} where $\theta_{p}=\theta_{p_{0}}+\delta \theta_{p}$ is the pump phase matching angle, $\theta_{p_{0}}$ is the pump phase matching angle for the collinear degenerate case, and $\delta \theta_{p}$ is a small variation around that angle. The notation $\tilde{n}_{s}$ describes the refractive index for the collinear degenerate case, and $\tilde{n}_{s}$ has replaced $\tilde{n}_{i}$ everywhere because, for the collinear degenerate case, they are equal. The change in refractive index with angle is calculated from the expression for the frequency- and direction-dependent refractive index. These quantities indicate how quickly the refractive index changes with the various angles. Inserting Eqs. \ref{nsapprox} and \ref{niapprox} and the trigonometric functions into Eqs. \ref{PMz} and \ref{PMy}, we solve for angles $\theta_{s}$ and $\theta_{i}$.
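Second-order expansions of this kind can be checked symbolically. As a stand-in, we expand the familiar uniaxial angle-dependent index (the biaxial BiBO index is messier but expands the same way) around normal incidence:

```python
import sympy as sp

theta = sp.symbols('theta')
n_o, n_e = sp.symbols('n_o n_e', positive=True)

# Stand-in angle-dependent index (uniaxial form).
n = 1 / sp.sqrt(sp.cos(theta)**2 / n_o**2 + sp.sin(theta)**2 / n_e**2)

expansion = sp.series(n, theta, 0, 3).removeO()
# Expect n(theta) ~ n_o + (theta**2/2) * (n_o - n_o**3/n_e**2):
expected = n_o + (theta**2 / 2) * (n_o - n_o**3 / n_e**2)
assert sp.simplify(expansion - expected) == 0
```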
We are interested in solving for $\delta\theta_{s}$ and $\delta\theta_{i}$, which are small changes in the emission angles around the values of $\theta_{s}(\phi_{s}=90^{\circ})$ and $\theta_{s}(\phi_{s}=0^{\circ})$, which are the angles corresponding to the directions of the major and minor axes. We use the expression \begin{equation} \theta_{s,(i)}=\theta_{s}(\phi_{s}=90^{\circ})+\delta\theta_{s,(i)}, \end{equation} to solve for $\delta\theta_{s}(\phi_{s}=0^{\circ})$ and $\delta\theta_{s}(\phi_{s}=180^{\circ})$. We calculate the eccentricity inside the crystal to be \begin{equation} \epsilon=\sqrt{1-\left(\frac{2\theta_{s}(\phi_{s}=90^{\circ})}{2\theta_{s}(\phi_{s}=90^{\circ})+\delta\theta_{s}(\phi_{s}=0^{\circ})+\delta\theta_{s}(\phi_{s}=180^{\circ})}\right)^{2}}. \label{eccen1} \end{equation} The eccentricity arises from the photons experiencing the angle-dependent refractive index, so the eccentricity in the emission pattern is only due to the eccentricity inside the crystal. Once the photons propagate into free space, they experience no angle-dependence and simply follow the trajectories of the external angles dictated by Snell's law. Therefore, the eccentricity inside the crystal is the only contribution to the eccentricity measured experimentally. Using Eq. \ref{eccen1}, we determine the predicted value of the eccentricity using the measured values for $\delta\theta_{p}$. These are shown in the final columns of Table 1. For small values of $\delta\theta_{p}$, we also express the eccentricity in terms of the derivatives of refractive indices as \begin{equation} \epsilon \approx \bigg(\frac{\partial^{2}\tilde{n}_{s}}{\partial\theta_{s}^{2}}\bigg)^{\phi_{s}=0}- \bigg(\frac{\partial^{2}\tilde{n}_{s}}{\partial\theta_{s}^{2}}\bigg)^{\phi_{s}=90^{\circ}}- \frac{2}{\tilde{n}_{s}}\bigg[\bigg(\frac{\partial\tilde{n}_{s}}{\partial\theta_{s}}\bigg)^{\phi_{s}=0}\bigg]^{2}, \label{approxe} \end{equation} where the superscripts of $\phi_{s}$ denote the azimuthal angle at which the terms are evaluated.
We plot the three terms in this approximation in Fig. \ref{threederivatives}. \begin{figure} \begin{center} \includegraphics[scale=0.35]{./threederivatives.pdf} \end{center} \caption{Plot of each term in Eq. \ref{approxe} versus wavelength: ``1'' (blue, short-dash) $(\partial^{2}\tilde{n}_{s}/\partial\theta_{s}^{2})^{\phi_{s}=0}$, ``2'' (red, long-dash) $(\partial^{2}\tilde{n}_{s}/\partial\theta_{s}^{2})^{\phi_{s}=90^{\circ}}$, ``3'' (green, solid) $(2/\tilde{n}_{s})[(\partial\tilde{n}_{s}/\partial\theta_{s})^{\phi_{s}=0}]^{2}$. For this plot, we use $\phi_{p} = 90^{\circ},\theta_{p} = 152.077^{\circ}$. } \label{threederivatives} \end{figure} For this plot, the difference between the curves ``1'' and ``2'' is the dominant contribution to the eccentricity. The wavelength at which these curves intersect is where the eccentricity reaches its minimum in Fig. \ref{magicw}. This expression allows us to determine why the eccentricity is larger for certain crystal cuts than others. We examine the relative size of each of the three terms in Eq. \ref{approxe} for the case where $\phi_{p}=0$ and $\phi_{p}=90^{\circ}$. We find that the difference of the first two terms in Eq. \ref{approxe} for $\phi_{p}=0$ is very large in comparison to the difference of the first two terms for the $\phi_{p}=90^{\circ}$ case. This ultimately leads to the larger eccentricity we observe for the $\phi_{p}=0$ crystal cut in BiBO. Using this method, we also calculate the value of the degenerate wavelength ($\lambda_{s}=\lambda_{i}$) that minimizes the eccentricity inside the crystal. Again, minimizing the internal eccentricity will minimize the eccentricity measured in the external emission pattern. Each of the parameters in Eq. \ref{eccen1} is wavelength-dependent because the refractive indices for the daughter photons in BiBO are angle-dependent. We find a minimum wavelength by performing a root-finding routine on Eq. \ref{eccen1} with wavelength as the independent parameter.
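Numerically, the minimum-eccentricity wavelength is just the crossing point of the two dominant curvature terms. With stand-in linear models for those terms (purely illustrative, not BiBO fits), the root-find looks like:

```python
from scipy.optimize import brentq

# Illustrative stand-ins for the two second-derivative terms as a
# function of the degenerate wavelength (micrometers); the real curves
# come from the BiBO Sellmeier equations and the derivative terms in
# the approximation above.
def curv_phi0(lam):
    return 0.30 - 0.25 * (lam - 0.70)

def curv_phi90(lam):
    return 0.28 + 0.05 * (lam - 0.70)

# The dominant contribution to the eccentricity vanishes where the
# curves cross, locating the minimum-eccentricity wavelength.
lam_min = brentq(lambda lam: curv_phi0(lam) - curv_phi90(lam), 0.5, 1.0)
print(lam_min)  # ~0.767 um for these stand-in curves
```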
We also plot the eccentricity as a function of wavelength in Fig. \ref{magicw} for three different values of $\theta_{p}$. This plot shows that there is a wavelength for each pump tilt angle for which the eccentricity can be minimized. According to the plot, the eccentricity becomes large away from the 710--780 nm range. For our degenerate wavelength of 810 nm, this figure agrees with the value of the eccentricity we observe. This plot indicates which wavelengths are best at minimizing eccentricity in a given crystal cut for Type-I degenerate SPDC. \begin{figure} \begin{center} \includegraphics[scale=0.35]{./magicwavefinal.pdf} \end{center} \caption{Eccentricity versus wavelength for three different values of $\theta_{p}$: ``1'' (blue, short-dash) $\theta_{p}=152.071^{\circ}$, ``2'' (green, solid) $\theta_{p}=151.378^{\circ}$, ``3'' (red, long-dash) $\theta_{p}=149.21^{\circ}$. For all three plots, there is a wavelength for which the eccentricity of the down-conversion ring is minimized. We show our experimental data for a pump wavelength of 405 nm and a degenerate down-converted wavelength of 810 nm (purple) with associated error bars.} \label{magicw} \end{figure} \section{Impact on entanglement and count rate} One potential drawback of an elliptical emission pattern is that it may decrease the entanglement quality and/or the entangled photon count rate. Because the rings are elliptical, they no longer perfectly spatially overlap around the entire ring, leading to potential degradation in entanglement purity and limiting the ability to multiplex many channels around the down-conversion ring. Lack of spatial overlap is the most obvious way in which which-path information is revealed. The elliptical shape may also cause a change in the joint spectrum for the photons, which leads to a degradation in entanglement purity. We note that spatial overlap can be corrected for by collecting light in between the ring centroids at the cost of count rate.
However, at this location, the spectra from the two rings are different, leading to decreased entanglement purity. We study each of these effects in different ways, where we assume that the biphotons are coupled into single mode fibers. To estimate whether the entanglement quality is degraded, we determine the spectrum of the down-converted light for each crystal for single mode fiber collection. Using single mode fibers is important because collection of a single spatial mode for either the signal or idler photon means that the twin photon should also be projected onto a single spatial mode. The entanglement purity can then be determined from the overlap integral of the spectral intensity of these two single modes. Calculating the spectra from the midpoint between the rings at two locations and computing the overlap integral provides a measure of distinguishability. A smaller overlap integral represents lower entanglement quality because the difference in spectra between the crystals means it is possible to distinguish which crystal the photons came from. Using the formalism developed in our previous work \cite{Guilbert}, we determine both the joint spectrum for the biphoton wavefunction as well as the singles spectrum for each signal and idler, where we assume single mode fiber collection. Figure \ref{final}(b) shows the joint spectrum at location B in Fig. \ref{final}(a). Here the overlap is minimal for each crystal in the pair when the crystals are cut at $\phi_{p}=90^{\circ}, \theta_{p} = 151.7^{\circ}$ (solid lines) and $\phi_{p}=0, \theta_{p} = 51^{\circ}$ (dashed lines). In these calculations, we choose one of these points, point A (Fig. \ref{final}(a)), to have maximal overlap between the two rings, leaving point B to have minimal overlap. We choose the minimal-overlapped case because the entanglement purity is lowest due to the mismatched overlap.
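The spectral-overlap figure of merit can be estimated numerically. A normalized overlap of two sampled intensity spectra, shown here with Gaussian stand-ins (widths and shift are illustrative; on a uniform grid the frequency spacing cancels out of the ratio):

```python
import numpy as np

def spectral_overlap(s1, s2):
    """Normalized overlap of two intensity spectra sampled on a common
    uniform frequency grid; 1 means the spectra are indistinguishable."""
    return np.dot(s1, s2) / np.sqrt(np.dot(s1, s1) * np.dot(s2, s2))

omega = np.linspace(-5.0, 5.0, 2001)  # detuning grid (arbitrary units)
gauss = lambda w0, sig: np.exp(-((omega - w0) ** 2) / (2.0 * sig**2))

print(spectral_overlap(gauss(0.0, 1.0), gauss(0.0, 1.0)))  # ~1.0
print(spectral_overlap(gauss(0.0, 1.0), gauss(0.3, 1.0)))  # ~0.978
```

A small relative shift between the two crystals' spectra therefore costs only a fraction of a percent of overlap, consistent with the high overlap values reported below.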
For the minimal-overlap case, we choose the collection mode to be in between the two rings so that we collect equal count rates from each and so that the spectra from the two will be most similar. We find that, for both BiBO crystal cuts, the spectra are similar for the two crystals, and the entanglement quality is not substantially degraded. The spectra at point A are identical, while at point B we find the overlap integral is 99.4-99.97$\%$ for the two crystal cuts. The overlap integral for two entangled states is a measure of the entanglement purity of the states involved. \begin{figure}[!h] \begin{center} \includegraphics[scale=0.35]{./twoellipticalringscolumn.pdf} \end{center} \caption{(a) Points A and B represent the points of maximal and minimal overlap, respectively. Red inner ring lines and blue outer ring lines illustrate the center of the emission patterns from each crystal in the crystal pair, where we have ignored the finite width of the actual ring. The eccentricity of these rings has been exaggerated for illustration purposes. The left figure depicts the crystal cut at $\phi_{p}=0, \theta_{p} = 51^{\circ}$ and the right figure depicts the crystal cut at $\phi_{p}=90^{\circ}, \theta_{p} = 151.7^{\circ}$. (b) Joint spectra at location B for each crystal cut. The $y$ axis on the plot is the differential rate per frequency and the $x$ axis is an angular frequency shift around the degenerate frequency. Solid lines (green (2) and gold (1)) represent the joint spectra for each crystal when the crystals are cut at $\phi_{p}=90^{\circ}, \theta_{p} = 151.7^{\circ}$, while the dashed lines (blue (3) and purple (4)) represent the joint spectra from each crystal when the crystals are cut at $\phi_{p}=0, \theta_{p} = 51^{\circ}$. The pump wavelength for this simulation is 405 nm and the degenerate wavelength is 810 nm.
The vertical dashed green lines in the plot indicate a 20 nm bandwidth for reference.} \label{final} \end{figure} Even though the eccentricity of the ring does not affect the entanglement purity, it does affect the count rates. Integrating the spectra in Fig. \ref{final} (b) over $\Delta \omega$ gives the total joint count rate. For the crystal cut at $\phi_{p}=0, \theta_{p} = 51^{\circ}$, the spectrum has a much lower amplitude, and integrating over any band of frequencies gives a lower count rate compared to the crystal cut at $\phi_{p}=90^{\circ}, \theta_{p} = 151.7^{\circ}$. Higher eccentricity causes the rings to be more spatially separated in the former case, resulting in the lower joint count rate. Additionally, the crystal cut with this high eccentricity ($\phi_{p}=0, \theta_{p} = 51^{\circ}$) has a lower effective nonlinearity ($d_{\rm{eff}}$), resulting in an even lower count rate, so this crystal cut is clearly not optimal for high-brightness applications. This may not be the case for other crystals, however. For SPDC applications in BiBO that require both a high count rate and high-purity entanglement, we find that the eccentricity does not decrease the entanglement purity significantly; however, the count rate is lower when collecting at the midpoints between emission patterns with greater eccentricity. Applications aiming for ultra-high entanglement purity ($> 99.99\%$) are limited to the locations around the ring that can be made to overlap completely. \section{Conclusions} In conclusion, we show both experimentally and theoretically that the emission patterns from down-conversion in BiBO have an elliptical shape, while the pattern from BBO has a circular profile. We show that this difference in the shape of the down-conversion rings depends on whether the daughter photons experience an angle-independent or angle-dependent refractive index.
Although, in hindsight, the results we present may seem obvious, this is, to our knowledge, the first time this effect has been characterized and its potential impact discussed. We also present a theoretical method for calculating the eccentricity of the down-conversion ring for biaxial crystals and find that there is an optimal wavelength for which the eccentricity is a minimum and close to zero. We demonstrate that the elliptical nature of the rings does not reduce entanglement purity, but does reduce joint count rates significantly for down-conversion patterns with a larger eccentricity. \section{Acknowledgments} The authors gratefully acknowledge the financial support of the DARPA InPho program and the Office of Naval Research MURI on Fundamental Research on Wavelength-Agile High-Rate Quantum Key Distribution (QKD) in a Marine Environment, award $\#$N00014-13-0627. \section*{Appendix} Birefringent walk-off occurs in a medium when, for a particular polarization, the momentum vector and the Poynting vector separate from each other. Walk-off occurs in both uniaxial and biaxial crystals, although calculating the walk-off angle in the biaxial case is more challenging due to the reduced symmetry of the crystal. We calculate the walk-off angles using the method in Ref. \cite{Roberts}, and find that the contribution to the eccentricity from walk-off is negligible because the crystals considered here are so thin. The crystal thinness means that photons walking off by different amounts in different directions do not cause a large change in eccentricity, because their propagation distance to the crystal face is quite small. In BiBO, the daughter photons experience an angle-dependent refractive index, and will therefore have Poynting-vector walk-off. Using Eqs.~(20)-(22) in Ref. \cite{Roberts} we determine the Poynting unit vector $\hat{N}$ and unit propagation vector $\hat{k}$.
We use these to determine the walk-off angle as a function of azimuthal angle $\phi_{s}$ around the original pump direction. We find that the walk-off angles for the $\phi_{p} = 90^{\circ}, \theta_{p} = 151.56^{\circ}$ crystal cut at $\phi_{s} = 0$ and $\phi_{s}=180^{\circ}$ are $3.19^{\circ}$ and $3.51^{\circ}$, respectively. The walk-off angles for $\phi_{s} = 90^{\circ}$ and $\phi_{s} = 270^{\circ}$ are both $3.36^{\circ}$. We map the Poynting vectors and momentum vectors onto the crystal exit face and compare their eccentricity. We find that the Poynting vectors map a ring with eccentricity 0.1680, while the momentum vectors map a ring with eccentricity 0.1685, a 0.3$\%$ difference. In free space, the Poynting vector and momentum vector must be parallel, so the trajectories of these vectors to the image plane are identical. As shown in the previous paragraph, the walk-off contribution to the eccentricity of the ring as it exits the crystal is negligible. The photon path outside the crystal is much longer than the one inside the crystal, and since the Poynting vector and momentum vector are parallel in free space, the dominant contribution to the eccentricity comes from the paths taken outside the crystal. Essentially, the overall impact of the walk-off from intracrystal angles is very small because the crystal length is much smaller than the crystal-to-lens distance. \bigskip
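The 0.3$\%$ figure above follows directly from the definition of eccentricity in terms of the ring semi-axes. A minimal check, with the semi-minor/semi-major axis ratios chosen here to reproduce the two quoted eccentricities (they are not taken from the ray tracing itself):

```python
import math

def eccentricity(a, b):
    """Eccentricity of an ellipse with semi-major axis a, semi-minor axis b."""
    return math.sqrt(1.0 - (b / a) ** 2)

# Axis ratios chosen to reproduce the quoted eccentricities
e_poynting = eccentricity(1.0, 0.985787)   # ~0.1680 (Poynting-vector ring)
e_momentum = eccentricity(1.0, 0.985702)   # ~0.1685 (momentum-vector ring)

rel_diff = abs(e_momentum - e_poynting) / e_poynting
print(f"e_S = {e_poynting:.4f}, e_k = {e_momentum:.4f}, "
      f"relative difference = {100 * rel_diff:.2f}%")
```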
\section{Introduction} \label{sec:intro} Accreting millisecond pulsars \citep[AMPs,][]{Alpar82,Backer82} are transient low-mass X-ray binaries (LMXBs) that show X-ray pulsations during their outbursts. A total of nine AMPs out of $\sim$100 non-pulsating LMXBs have been found to date. The reason why only this small subgroup of binaries pulsates is still unknown. The first seven AMPs discovered showed persistent X-ray pulsations throughout their outbursts. Recently, \citet{Kaaret06} discovered the AMP HETE~J1900.1--2455, which has remained active for more than 2 years\footnote{At the time of submitting this letter, the source is still active.} but showed pulsations only intermittently during the first $\sim2$ months of activity \citep{Galloway07a}. Pulsations were detected from the transient source Aql X-1 \citep{Casella07} only for $\sim150$~sec out of the $\sim1.3$~Msec the source has (so far) been observed with the Rossi X-ray Timing Explorer (RXTE). \citet{Gavriil06,Gavriil07} recently reported the detection of $\sim442.3$~Hz pulsations in an observation of the 2005 outburst of a transient source in the globular cluster (GC) NGC~6440. The pulsations followed a flux decay observed at the beginning of the observation and were reminiscent of those observed during superbursts; however, as \citet{Gavriil07} suggest, they could also be a detection of a new intermittent accreting millisecond pulsar. \citet{Kaaret03} report a 409.7~Hz burst oscillation in an X-ray transient (SAX~J1748.9--2021) also located in NGC~6440, and since this GC harbors at least 24 X-ray sources \citep{Pooley02}, \citet{Gavriil07} concluded that the burst oscillations and the pulsations were probably coming from different X-ray transients in the same GC. The exact formation mechanisms behind the pulsations of these three sources remain unknown.
The existence of intermittent pulsations with a small duty cycle implies that many other apparently non-pulsating LMXBs might be pulsating, bridging the gap between the small number of AMPs and the large group of non-pulsating LMXBs. We are performing a detailed analysis of all RXTE archival data of neutron-star LMXBs to search for transient pulsations in their X-ray flux \citep[see also][]{Casella07}. In this Letter we present the results of our search on the three X-ray outbursts observed from the globular cluster NGC~6440. \section{The neutron-star transient SAX~J1748.9--2021 in NGC~6440} NGC~6440 is a GC at a distance of $8.5\pm0.4$~kpc \citep{Ortolani94}. Bright X-ray outbursts from an LMXB were reported in 1971, 1998, 2001 and 2005 \citep[]{Markert75,Zand99,Verbunt00,Zand01,Markwardt05}. From X-ray and optical observations, \citet{Zand01} concluded that the 1998 and 2001 outbursts were from the same object, which they designated SAX~J1748.9--2021. \begin{figure*}[t] \center \resizebox{2\columnwidth}{!}{\rotatebox{0}{\includegraphics{./f1.eps}}} \caption{Intensity (2.0--16.0~keV) normalized by Crab vs. time of the three outbursts. Gray symbols show the 16-sec averaged intensity during the pointed PCA observations. The continuous line shows the ASM light curve. Black marks at the \textit{top} indicate the times of type-I X-ray bursts; black marks at the \textit{bottom} indicate the times when we detect significant pulsations. The year of each outburst is shown in each panel. } \label{fig:lc} \end{figure*} \section{Observations, data analysis and results} \label{sec:dataanalysis} We used data from the RXTE Proportional Counter Array \citep[PCA, for instrument information see][]{Jahoda06}. Up to July 2007, there were 27 pointed observations of SAX J1748.9--2021, each covering 1 to 5 consecutive 90-min satellite orbits. Usually, an orbit contains between 1 and 5 ksec of useful data separated by 1--4 ksec data gaps due to Earth occultations and South Atlantic Anomaly passages.
Adopting a source position \citep[$\alpha=17^h 48^m 52^s.163$, $\delta = -20^o 21^{'} 32^{''}.40$; J2000][]{Pooley02} we converted the 2--60~keV photon arrival times to the Solar System barycenter with the FTOOL faxbary, which uses the JPL DE-200 ephemeris along with the spacecraft ephemeris and fine clock corrections to provide an absolute timing accuracy of $\sim$5--8~$\rm\mu s$ \citep{Rots04}. We performed a Fourier timing analysis using the high-time-resolution data collected in the Event (E\_125us\_64M\_0\_1s) and the Good Xenon modes. Power spectra were constructed using data segments of 128, 256 and 512 seconds, with a Nyquist frequency of 4096~Hz. No background or dead-time corrections were made prior to the calculation of the power spectra, but all reported rms amplitudes are background corrected; dead-time corrections are negligible. \begin{table} \caption{Timing parameters for NGC~6440} \scriptsize \begin{tabular}{lc} \hline \hline Parameter & Value \\ \hline Orbital period, P$_{orb}$ (hr) \dotfill & 8.764(6) \\ Projected semi-major axis, $a_x \sin i$ (lightsec.)\dotfill & 0.39(1) \\ Epoch of 0$^o$ mean longitude$^1$, $T_0$ (MJD/TDB) \dotfill & 52190.047(4) \\ Eccentricity, e \dotfill & $<0.001$ \\ Spin frequency $\nu_0$ (Hz) \dotfill & 442.361(1) \\ Pulsar mass function, $f_x$ ($\times 10^{-4} M_{\sun}$)\dotfill & $\simeq4.8$ \\ Minimum companion mass, $M_c$ ($M_{\sun}$)\dotfill & $\gtrsim0.1$ \\ \hline $^1$The mean longitude at 0$^o$ in a circular orbit corresponds \\ to the ascending node of the orbit.\\ \end{tabular} \label{table:data} \end{table} \subsection{Colors, light curves and states} From the Standard~2 data \citep{Jahoda06}, we calculated colors and intensities with a time resolution of 16 seconds and normalized them by Crab \citep[e.g.,][]{Altamirano07}. The PCA observations sample three different outbursts (see Fig.~\ref{fig:lc}). The color-color diagrams show a pattern (not plotted) typical for atoll sources.
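The power-spectrum construction described above can be sketched as follows. The count rate below is a hypothetical placeholder; the segment length and Nyquist frequency match the values quoted in the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 512-s segment of Poisson counts binned at 1/8192 s,
# giving a Nyquist frequency of 4096 Hz as in the analysis above.
dt = 1.0 / 8192          # bin size (s)
tseg = 512.0             # segment length (s)
rate = 200.0             # hypothetical count rate (counts/s)
counts = rng.poisson(rate * dt, size=int(tseg / dt))
n_photons = counts.sum()

# Leahy normalization: P_j = 2 |FFT_j|^2 / N_photons, so that pure
# Poisson noise has a mean power of 2 (chi-squared with 2 d.o.f.).
ft = np.fft.rfft(counts)
power = 2.0 * np.abs(ft[1:]) ** 2 / n_photons   # drop the zero-frequency bin
freqs = np.fft.rfftfreq(counts.size, dt)[1:]

print(f"mean noise power = {power.mean():.3f} (expected ~2)")
```

Since no background or dead-time corrections are applied, pure Poisson noise gives a mean Leahy power of 2, the baseline against which significant pulsation powers are judged.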
The power spectral fits confirm the identification of these states \citep[see also][]{Kaaret03}. We looked for kHz QPOs, but found none. \begin{figure}[!hbtp] \resizebox{1\columnwidth}{!}{\rotatebox{0}{\includegraphics{./f2a.eps}}} \resizebox{1\columnwidth}{!}{\rotatebox{0}{\includegraphics[clip]{./f2b.eps}}} \caption{ \textit{Top:} Dynamical power spectrum of observation 60035-02-03-00 showing intermittent pulsations (contours). In the light curve (line) three X-ray bursts are seen. The pulse frequency drifts due to orbital Doppler modulation. The lowest contour plotted corresponds to Leahy power $13$ and the highest to 55. The contours were generated from power spectra for \textit{non}-overlapping 128~sec intervals of data. \textit{Bottom:} Leahy normalized \citep{Leahy83} power spectrum of 512~sec of data centered $\sim7$~ksec after the start of this observation. Maximum Leahy power is 102, corresponding to a single-trial probability of $\sim7\cdot10^{-23}$ given Poissonian statistics in the photon arrival times \citep{vanderklis95}. Inset: The 2--60 keV light curve folded at the 2.26-ms period. Two cycles are plotted for clarity. The pulse profile is sinusoidal, with a 95\% upper limit of 0.4\% (rms) on the amplitude of the second harmonic.} \label{fig:pds} \end{figure} No thermonuclear bursts were detected in the first outburst, sixteen during the second \citep{Kaaret03,Galloway07b} and four during the third one. We searched for burst oscillations during all bursts in the 15--4000~Hz frequency range but found none. \citet{Kaaret03} reported a $\sim4.4\sigma$ burst oscillation at $\sim409.7$~Hz. We find these authors underestimated the number of trials by a factor of at least 180, as their estimate did not take into account the number of X-ray bursts analyzed and the fact that a sliding window was used to find the maximum power. Moreover, we also found that the distribution of powers is not exponential as these authors assumed. 
Taking into account these effects, we estimate the significance of their detection to be $\lesssim2.5\sigma$. \begin{figure}[!hbtp] \resizebox{1\columnwidth}{!}{\rotatebox{-90}{\includegraphics[clip]{./f3.eps}}} \caption{Pulse frequency as a function of orbital phase. The plot has been obtained by folding all data between the first and last pulse detection in 2001. Pulsations were detected during 6 of the 18 orbital cycles covered. The drawn curve is the best-fit orbital model; measured frequencies and post-fit residuals are shown. The residuals' r.m.s. is $1.2\times10^{-3}$~Hz.} \label{fig:phase} \end{figure} \subsection{Pulsations} We inspected each power spectrum for significant features. We found significant features at frequencies of $\sim442.3$~Hz in 7 observations: 60035-02-02-04/05/06, 60035-02-03-00/02/03 during the second outburst and 91050-03-07-00 during the third outburst \citep[see also][for a detailed analysis of this observation]{Gavriil07}. \citet{Zand01} concluded that the 1998 and 2001 outbursts from the LMXB in NGC~6440 were from the same source (Section 2). Since pulsations are detected in both the 2001 and 2005 outbursts, we can now conclude that these two outbursts are also from the same source. Hence, all outbursts observed from NGC~6440 over the last decade are from SAX J1748.9--2021. The pulsations are detected intermittently, appearing and disappearing on time scales of hundreds of seconds. The appearance of pulsations seems to be related to the occurrence of type-I X-ray bursts, but the relation is not strict. The first two bursts were observed in an observation on October $8^{th}$, 2001; the first pulsations, a day later. During the third outburst we detect four bursts; pulsations were only detected after the third one. We also detected pulsations with no preceding burst. The structure of our data does not allow us to tell if pulsations and/or other bursts occurred during data gaps.
Figure~\ref{fig:pds} (top) illustrates the relation between pulsations and bursts. The amplitude of the pulsations varies strongly, between $\sim2$\% and (often) undetectable (0.3\% rms amplitude upper limit at the 95\% confidence level). Pulsations are seen right after the occurrence of the first and the third burst, but in the middle of the data pulsations are present without the detection of a preceding burst (although a burst could have happened just before the start of this data segment). In Figure~\ref{fig:pds} (bottom) we show a power spectrum and the corresponding 2--60 keV pulse profile (inset). In these data the pulsation is relatively hard; the rms amplitude increases with energy from $\sim1$\% at 3~keV to $\sim3$\% at 13~keV. The 2--10~keV luminosity during the observations in which we detected pulsations was between 3 and $4\times10^{37}$~ergs~s$^{-1}$ \citep[assuming a fixed $N_H=8.2 \times 10^{21}$ cm$^{-2}$;][]{Zand01}. Other observations at similar flux and those at higher (up to $\simeq5\times10^{37}$~ergs~s$^{-1}$ in observation 91050-03-06-00) and lower fluxes do not show pulsations (see Fig.~\ref{fig:lc}). From 16, 32, 64 and 128~sec average colors we found no significant changes in the energy spectra correlated with the pulse-amplitude variations. We studied the pulse frequency drifts using power spectra of 128, 256 and 512~sec data segments and find a clear 8.7-hour sinusoidal modulation, which we interpret as due to Doppler shifts by binary orbital motion with that period. In order to obtain an orbital solution, we performed a $\chi^2$ scan on the orbital parameters using the method described by \citet{Kirsch04} and \citet{Papitto05}. Our best estimates are listed in Table~\ref{table:data}. The combination of data gaps and intermittency of the pulsations yielded aliases, which are taken into account by the reported errors. In Figure~\ref{fig:phase} we plot the pulse frequency as a function of orbital phase.
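The essence of the orbital-Doppler modeling above can be sketched with a simple least-squares fit. The actual analysis used a $\chi^2$ scan over the orbital parameters with alias handling; the sketch below instead fits the circular-orbit frequency model to synthetic measurements with \texttt{scipy}, with the sampling choices and the noise level (set to the quoted residual r.m.s.) purely illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

P_ORB = 8.764 * 3600.0        # orbital period (s), from Table 1

def doppler_freq(t, nu0, a_x, t_asc):
    # Observed pulse frequency for a circular orbit: the line-of-sight
    # velocity amplitude is v/c = 2*pi*a_x/P_orb, with a_x in light-seconds.
    return nu0 * (1.0 - 2.0 * np.pi * a_x / P_ORB
                  * np.cos(2.0 * np.pi * (t - t_asc) / P_ORB))

# Synthetic frequency measurements mimicking the Table 1 values,
# with Gaussian noise at the level of the quoted residuals.
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 5.0 * P_ORB, 40))
nu_true = doppler_freq(t, 442.361, 0.39, 0.3 * P_ORB)
nu_obs = nu_true + rng.normal(0.0, 1.2e-3, t.size)

popt, _ = curve_fit(doppler_freq, t, nu_obs,
                    p0=[442.36, 0.4, 0.25 * P_ORB])
print(f"nu0 = {popt[0]:.3f} Hz, a_x sin i = {popt[1]:.2f} lt-s")
```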
\section{Discussion}\label{sec:discussion} We have discovered intermittent pulsations from the neutron-star LMXB SAX~J1748.9--2021. Pulsations appear and disappear on time scales of hundreds of seconds. Although we find a suggestive relation between the appearance of the pulsations and the occurrence of type-I X-ray bursts (the pulsations appearing after a burst), the relation is not strict. We find bursts with no subsequent pulsations and pulsations with no preceding burst (although a burst could have occurred in the preceding data gaps). From the Doppler shifts on the pulsations we determine that the system is in a near-circular orbit with a period of 8.7 hours and a projected radius of 0.39 lightsec. The stability of the pulsations (after correcting for the binary orbit) strongly suggests that the pulsation frequency reflects the neutron star spin frequency and that SAX~J1748.9--2021 is an accreting millisecond X-ray pulsar. The characteristics of the pulsations are reminiscent of those found in HETE~J1900.1--2455: in both sources the pulsations were only intermittently detected and a possible relation between burst occurrence and pulse amplitude exists. However, there are differences: in HETE~J1900.1--2455 the pulsations were only seen during the first two months of the outburst and their amplitude decreased steadily on timescales of days after the bursts, which might have caused them to reappear \citep{Galloway07a}. In SAX~J1748.9--2021 we find the pulsations in the middle of the 2001 and 2005 outbursts and not at the beginning. Furthermore, the amplitude of the pulsations behaves erratically, switching between detection and non-detection on time scales of hundreds of seconds. Despite these differences, the behavior of the pulsations in both sources is so similar that we consider it likely that the same mechanism causes the intermittency of the pulsations in both.
\begin{figure}[t] \center \resizebox{1\columnwidth}{!}{\rotatebox{-90}{\includegraphics[clip]{./f4.eps}}} \caption{Mass--radius relationship for a Roche lobe-filling companion (continuous line), isochrones of 0.01, 8 and 12 Gyrs with solar metallicity \citep[triangles, crosses and squares, respectively;][]{Girardi00} and the theoretical zero-age main sequence \citep[ZAMS, dashed line;][]{Tout96}. Black circles mark the inclination of the system as estimated from the mass function of this system.} \label{fig:rvsm} \end{figure} A related system might be Aql~X-1, in which a short-lived ($\sim$150~s) and very rare (duty cycle of 0.03\%) episode of strong pulsations at the neutron-star spin frequency has been detected \citep{Casella07}. In this source, no X-ray bursts were seen in the $\sim1400$~s before the pulsations, making it unlikely that they were triggered by a burst. It is unclear if the pulsations in Aql X-1 were accretion-driven or due to unusual nuclear burning episodes; the same applies to SAX~J1748.9--2021. The extreme rarity of the pulsations in Aql X-1 could indicate that the mechanism behind them is different from that responsible for the pulsations in HETE~J1900.1--2455 and SAX~J1748.9--2021. Nevertheless, irrespective of the mechanisms behind the pulsations in these three sources, it is clear that a strict division between pulsating and non-pulsating sources can no longer be made. It is possible that all sources pulsate occasionally, although the recurrence times could be very long. Assuming a constant dipolar magnetic field, following \citet{Psaltis99} (i.e., assuming a geometrically thin disk and neglecting inner disk wind mass loss, radiation drag and GR effects) we estimate the magnetic field to be $B\gtrsim1.3\times10^{8}$~Gauss.
This assumes a 10~km radius $1.4M_{\odot}$ neutron star and $\dot{M}_{max}$, the highest accretion rate at which pulses are detected, of 0.28 of the Eddington critical value, as derived from the luminosity observed at the time using a bolometric flux correction of 1.4 \citep{Migliari06}. In the standard magnetic channeling scenario, the question remains of what causes the large variations in pulse amplitude. Comparisons between HETE~J1900.1--2455, SAX~J1748.9--2021 and the other seven AMPs can provide clues to understand the pulse-strength variations. In SAX~J1748.9--2021 and Aql X-1 the time scales on which the pulse amplitude can fluctuate are as short as $\sim10^2$~s, too short for the properties of the neutron star core to change \citep{Galloway07a}. Thus, these changes must originate in the disk or the outer layers of the neutron star envelope. \citet{Galloway07a} suggested (for HETE~J1900.1--2455) that the accumulation of matter on the surface burying the magnetic field \citep{Cumming01} plays a role. Our results show that this mechanism probably cannot work for SAX~J1748.9--2021, as the pulsations are not seen at the beginning of the outbursts, but instead $\sim3$~weeks and $\sim5$~weeks after the start of the 2001 and 2005 outbursts, respectively, i.e., after a considerable amount of matter has already been accreted. Interestingly, we observe pulsations only around a mass accretion rate of $\simeq2\times10^{-16}\ M_{\odot}$/sec as inferred from the X-ray luminosity, not above or below, indicating that the instantaneous mass accretion rate rather than the total accreted mass is the important quantity. In both HETE~J1900.1--2455 and SAX~J1748.9--2021 the pulsations seem to appear together with bursts, although the exact connection is complex. This suggests that surface processes may affect the magnetic field.
Hydrodynamic flows in the surface layer of the neutron star may screen the magnetic field \citep[see review by][and references within]{Bhattacharya02}; perhaps violent processes like bursts temporarily affect such flows, diminishing screening and enhancing the channeling. Alternatively, variations in a scattering or screening medium may cause the pulse amplitude modulation (see e.g. discussions in \citet{Psaltis99}, \citet{Titarchuk02}, \citealt{Gogus07}, \citealt{Titarchuk07}, \citealt{Casella07} and references within). For our results, the properties of such a medium should change on timescales of hundreds of seconds; note that we did not detect spectral changes associated with pulse strength modulation. With an orbital period of $\sim8.7$ hours, this binary system is clearly not an ultra-compact binary as usually found in globular clusters; in fact, SAX~J1748.9--2021 is the AMP with the longest orbital period after Aql~X-1, which has an orbital period of $\sim19$~hrs \citep{Chevalier91,Welsh00}. The mass-radius relation for a low-mass Roche lobe-filling companion in a binary \citep{Eggleton83} is $R_c = 0.24~M_{NS}^{1/3}~q^{2/3}~(1+q)^{1/3}~P^{2/3}_{hr} / \left( 0.6~q^{2/3} + \ln(1+q^{1/3})\right) $, with $P_{hr}$ the orbital period in hours, $M_{NS}$ the mass of the neutron star, $R_c$ the radius of the companion and $q=M_c/M_{NS}$ the mass ratio (with masses in solar masses, $R_c$ comes out in solar radii). Given the mass function and the orbital period, and assuming a $1.4M_{\odot}$ neutron star, we plot in Figure~\ref{fig:rvsm} the mass-radius relationship for the companion star. Given that the age of the globular cluster NGC 6440 is $10\pm2$ Gyrs \citep{Santos04} and its metallicity is approximately solar \citep{Ortolani94}, in Figure~\ref{fig:rvsm} we also plot the isochrones for stars with ages of 8 and 12 Gyrs and solar metallicity. Stars with $M_c<0.85\rm\,M_{\odot}$ cannot fill the Roche lobe, while stars with $M_c>0.95\rm\,M_{\odot}$ would have a radius exceeding the Roche lobe.
This would imply a donor star mass of $0.90\pm0.05M_{\odot}$. However, for masses of 0.95--1.1$\rm\,M_{\odot}$, stars have evolved off the main sequence, so binary mass transfer can have affected the radius of the donor star, which means we cannot firmly exclude masses of 0.95--1.1$\rm\,M_{\odot}$. Therefore a more conservative mass range for the donor star is 0.85-1.1$\rm\,M_{\odot}$. Intriguingly, this requires the inclination to be about $9^o$, which has a $\lesssim1\%$ a priori probability for an isotropic sample of binary inclinations. Of course, this estimate assumes that SAX~J1748.9--2021 is in a primordial binary. If a different evolutionary path took place (e.g. dynamical interactions), the mass of the companion might be much smaller \citep[see e.g.][]{Zyl04}.
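The Roche-lobe mass-radius relation quoted above is straightforward to evaluate. A minimal numerical sketch, assuming the $1.4M_{\odot}$ neutron star and the 8.764 hr orbital period from Table 1 (masses in solar masses, radii in solar radii):

```python
import math

def roche_radius(m_c, m_ns=1.4, p_hr=8.764):
    # Roche-lobe radius of the companion (solar radii) from the Eggleton (1983)
    # relation quoted in the text; masses in solar masses, period in hours.
    q = m_c / m_ns
    num = (0.24 * m_ns ** (1 / 3) * q ** (2 / 3)
           * (1 + q) ** (1 / 3) * p_hr ** (2 / 3))
    den = 0.6 * q ** (2 / 3) + math.log(1 + q ** (1 / 3))  # log = natural log
    return num / den

for m_c in (0.85, 0.90, 0.95, 1.10):
    print(f"M_c = {m_c:.2f} M_sun  ->  R_Roche = {roche_radius(m_c):.3f} R_sun")
```

Consistent with Fig. 4, a companion near $0.9M_{\odot}$ has a Roche-lobe radius comparable to the main-sequence radius of a star of that mass.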
\section{Introduction} Precise control of quantum systems is necessary for many important experimental implementations in the areas of quantum information and quantum computation. Moreover, the control must be fast compared to decoherence times, which is paramount for quantum computing \cite{revisao,livro,livronc}. In general, theoretical proposals demand that external operations (mainly time-dependent electric and magnetic fields) or internal interactions be turned on and off as quickly as possible. These processes are used to execute the quantum gates \cite{revisao}. Although many of these operations can be achieved very rapidly in practice, small deviations are likely to profoundly affect the experimental results, since many of these processes are required for simulating or executing a quantum protocol. In experiments where high accuracy is necessary, these implementation errors can significantly affect the final results. One way to obtain the optimal conditions for performing the quantum gates is to use numerical calculations that account for the imperfections and delays in the experimental apparatus. Furthermore, it is possible to use numerical simulations to find the optimal shapes of the external excitations needed to control the quantum systems. Currently, in a modern NMR (Nuclear Magnetic Resonance) spectrometer, it is possible to implement magnetic radio-frequency pulses with amplitude and phase modulation, which are used to construct quantum gates and/or to perform quantum simulations. These radio-frequency pulses are, in general, designed to be robust against various types of errors, such as calibration errors, relaxation and variations of the resonance frequency. One of the main algorithms used to obtain these modulated pulses is GRAPE, Gradient Ascent Pulse Engineering \cite{grape}.
Beyond NMR, GRAPE algorithms are being applied in many other experimental techniques and even in different areas \cite{epr,nvcenter,iontrap,circuit,revisao}. In the NMR case, the optimization method consists of dividing the radio-frequency (RF) excitation into many small intervals of time (short pulses) and, for each one, fitting the parameters (amplitude and phase) that best produce the desired unitary operation. Other numerical optimization methods have also been implemented \cite{funcao1,funcao2,krotov,Lyapu,chop1,chop2,goat,grapeexp,compila}, attempting to ease the computational burden, which is heavy due to the high number of parameters that must be taken into account. The number of parameters to be fitted is high because it is typically necessary to use tens of pulses, and sometimes hundreds. Constraints are also necessary, since it is hard for the NMR spectrometer to cope with rapid variations between the RF pulses; thus smooth changes from pulse to pulse are required, which limits the search for the best parameters. These numerical calculations are in general very hard and require long computation times. In some cases, even after many hours of computing, a satisfactory solution is not achieved. This is due mainly to the size of the system, the number of parameters and equipment limitations. Although some progress has been made, there is still a long way to go. In this work, we present an algorithm to optimize external excitations that are used to manipulate the quantum states of relatively large systems. For this, only a reasonably small number of parameters needs to be optimized. In addition, our algorithm was designed to work with some approximations that make it fast and scalable. Therefore, the time needed to perform the optimization is drastically reduced.
We demonstrate the success of the proposed method by applying our algorithm in several real experiments, in which we manipulate the quantum states of NMR systems containing 4, 7 and 12 qubits. Furthermore, we also show the efficiency of the algorithm by finding pulses that control with good precision the quantum state of spins distributed on hypothetical two-dimensional square lattices containing 16, 36 and 100 qubits. Finally, we also discuss the application of our method to a larger system containing 65536 qubits. \section{Optimal control theory} When we use a quantum system to perform a quantum computation or to simulate the dynamics of another physical system, we generally have to implement a unitary operator ($U_{goal}$) that cannot be produced by the natural evolution of the system alone. Hence, we need to add external interactions in order to modify the natural dynamics of the system. If we include these interactions, the total Hamiltonian of the system will be given by: \begin{equation}\label{eq:ht} \begin{split} \ \mathcal{H}_{T}(t) = \mathcal{H}_{0} + \mathcal{H}_{C}(t), \end{split} \end{equation} where $\mathcal{H}_{0}$ represents the natural Hamiltonian of the system and $\mathcal{H}_{C}(t)$ is the control Hamiltonian that describes the interactions used to modify the natural dynamics of the system. The evolution of the system under the action of the Hamiltonian $\mathcal{H}_{T}(t)$ will produce the following unitary: \begin{equation}\label{eq:uht} \begin{split} \ U_{\mathcal{H}_{T}} = \mathcal{T} \left[ \exp\left(-\frac{i}{\hbar} \int \mathcal{H}_{T}(t) dt \right) \right]. \end{split} \end{equation} In Eq.~\eqref{eq:uht}, $\mathcal{T}$ represents the Dyson time-ordering operator and $\hbar$ is the reduced Planck constant.
In order to have $U_{\mathcal{H}_{T}} = U_{goal}$, we must find the values of $\mathcal{H}_{T}(t)$ that minimize the function \begin{equation}\label{eq:fide} \begin{split} \ \mathcal{F} = 1 - \frac{\left | Tr\left( U_{goal}^{\dagger }U_{\mathcal{H}_{T}}\right) \right |}{N}, \end{split} \end{equation} where $N$ is the dimension of the Hilbert space. The global minimum of $\mathcal{F}$ can be very hard to find, since the number of operations needed in the control Hamiltonian, which are necessary to get $U_{\mathcal{H}_{T}} = U_{goal}$, is usually very high. However, using numerical optimizations, we can find local minima where $U_{\mathcal{H}_{T}}\approx U_{goal}$. To perform a numerical optimization, the time must be discretized in $m$ intervals of duration $\delta t$. The value of $\delta t$ must be small enough to allow us to consider that $\mathcal{H}_{T}(t)$ is approximately constant within each of the $m$ intervals. In this case, we can calculate $U_{\mathcal{H}_{T}}$ using the following equation: \begin{equation}\label{eq:uhtpro} \begin{split} \ U_{\mathcal{H}_{T}} = U_{m}U_{m-1}U_{m-2} \cdots U_{2}U_{1}, \end{split} \end{equation} with \begin{equation}\label{eq:uhtprok} \begin{split} \ U_{k} = \exp\left\lbrace-\frac{i}{\hbar} \left[ \mathcal{H}_{0} + \mathcal{H}_{C}\left( k\delta t \right) \right] \delta t \right\rbrace . \end{split} \end{equation} Nowadays, we know many algorithms that can be used to find local minima of $\mathcal{F}$ \cite{livrooti}. The choice of one of these algorithms is usually based on the form of $\mathcal{H}_{T}(t)$. In our case, $\mathcal{H}_{T}(t)$ will be the Hamiltonian of a system of nuclear spins that are controlled using the NMR technique. \section{NMR} In the NMR technique, a sample containing many molecules, whose elements have nuclear spins, is placed in a uniform magnetic field along the $z$-direction, and radio-frequency pulses are used to control the quantum states of these spins. 
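As a minimal numerical sketch of the discretization in Eqs.~\eqref{eq:uhtpro} and \eqref{eq:uhtprok}, together with the objective of Eq.~\eqref{eq:fide}, the piecewise-constant propagator and its infidelity can be computed as follows. The single-qubit drive and all parameter values here are illustrative, not those of a real experiment.

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices (units with hbar = 1)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def propagator(H0, Hc_list, dt):
    """Eqs. (uhtpro)-(uhtprok): product of piecewise-constant steps,
    where Hc_list[k-1] = H_C(k * dt) for k = 1..m."""
    U = np.eye(H0.shape[0], dtype=complex)
    for Hc in Hc_list:
        U = expm(-1j * (H0 + Hc) * dt) @ U   # left-multiply: U_m ... U_1
    return U

def infidelity(U_goal, U):
    """Eq. (fide): F = 1 - |Tr(U_goal^dag U)| / N."""
    return 1.0 - abs(np.trace(U_goal.conj().T @ U)) / U_goal.shape[0]

# Toy check: a resonant, constant x-drive implementing a pi/2 rotation
H0 = 0.0 * sz
dt, m = 1e-3, 1000
amp = (np.pi / 2) / (m * dt)           # total rotation angle = pi/2
Hc_list = [0.5 * amp * sx] * m
U_goal = expm(-1j * (np.pi / 4) * sx)
assert infidelity(U_goal, propagator(H0, Hc_list, dt)) < 1e-6
```

In an actual optimization, only `Hc_list` changes between function evaluations, so the loop above is the quantity that must be made cheap.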
For a homonuclear NMR system, all the spins of interest have the same gyromagnetic ratio \cite{livrole}, since they are all of the same kind, and they are subject to the same magnetic field along the $z$-direction. However, due to the electron clouds of the neighbouring atoms, each individual nuclear spin is subject to a slightly different magnetic field. This is known as chemical shift \cite{livrole}. In addition, there is also the coupling interaction between the spins, which occurs via the exchange mechanism. Generally, the samples we utilize in order to simulate quantum systems or implement algorithms are isotropic liquids. The control of such systems is easier, since several interactions do not significantly influence the dynamics of these systems \cite{livrole}. Thus, we will consider samples that can be described by the following Hamiltonian: \begin{equation}\label{eq:h0} \begin{split} \ \mathcal{H}_{0} = \sum_{k}\frac{\hbar(\omega_{k}-\omega_{R})\sigma_{z_{k}}}{2} + \sum_{k \neq n}\frac{\pi\hbar J_{kn}\sigma_{z_{k}}\sigma_{z_{n}}}{4}, \end{split} \end{equation} where $\omega_{k}$ and $\sigma_{\beta_{k}}$ are, respectively, the angular resonance frequency and the Pauli matrix $\beta$ of the $k$-th nuclear spin, $\omega_{R}$ is the angular frequency of the rotating frame and $J_{kn}$ is the scalar coupling constant of the spins $k$ and $n$. For controlling the quantum states of the nuclear spins, we utilize radio-frequency pulses applied in the $xy$ plane with an angular frequency $\omega_{R}$. The interactions of the spins with a pulse can be described by the following Hamiltonian: \begin{equation}\label{eq:hc} \begin{split} \ \mathcal{H}_{C}(t) = \hbar \Omega(t) \sum_{k}\frac{\cos[ \phi(t) ] \sigma_{x_{k}} + \sin[ \phi(t) ] \sigma_{y_{k}}}{2}, \end{split} \end{equation} where $\Omega(t)$ and $\phi(t)$ represent the amplitude and phase modulations of the pulse. 
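For concreteness, the Hamiltonians of Eqs.~\eqref{eq:h0} and \eqref{eq:hc} can be assembled numerically for a small register. This is a sketch in units of $\hbar = 1$; the two-spin offsets and coupling used in the example are illustrative, not those of a real sample.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def op_on(site_op, k, q):
    """Embed a single-spin operator at position k of a q-spin register."""
    out = np.array([[1.0 + 0j]])
    for j in range(q):
        out = np.kron(out, site_op if j == k else I2)
    return out

def h0(offsets_hz, J_hz):
    """Eq. (h0) with hbar = 1: Zeeman offsets plus scalar couplings.
    offsets_hz[k] = (omega_k - omega_R)/(2*pi); J_hz maps unordered
    pairs (k, n) to J_kn in Hz (each pair counted once, hence pi*J/2)."""
    q = len(offsets_hz)
    H = sum(np.pi * f * op_on(sz, k, q) for k, f in enumerate(offsets_hz))
    for (k, n), J in J_hz.items():
        H = H + 0.5 * np.pi * J * op_on(sz, k, q) @ op_on(sz, n, q)
    return H

def hc(q, amp, phase):
    """Eq. (hc) with hbar = 1, for one instant Omega(t), phi(t) of the pulse."""
    S = sum(np.cos(phase) * op_on(sx, k, q) +
            np.sin(phase) * op_on(sy, k, q) for k in range(q))
    return 0.5 * amp * S

# Hypothetical two-spin example: 100 Hz and -300 Hz offsets, J = 50 Hz
H = h0([100.0, -300.0], {(0, 1): 50.0})
```

Note that $\mathcal{H}_{0}$ is diagonal in the computational basis, a property the fast-propagator approximation used later relies on.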
\section{The Algorithm} According to Fermi's Golden Rule, time-dependent operations are necessary for inducing transitions between different energy levels. Experimentally, these transitions can be induced through the use of an oscillating electromagnetic field. In this way, electromagnetic fields can be used to implement quantum gates. The precise tailoring of these quantum gates is done through the modulation of the phase and amplitude of these fields. When the phase and amplitude do not have high symmetry, the optimal values for implementing a particular quantum gate must be found numerically. The current standard technique for finding these parameters is the GRAPE algorithm \cite{grape}. This algorithm optimizes the amplitude and phase at each time step, requiring hundreds of parameters for the construction of a quantum gate, which is a computationally arduous task. Furthermore, the GRAPE algorithm commonly requires many hours of computation and can, at times, result in gates with poor fidelity \cite{grapeexp}. The algorithm we propose in this work avoids processing large numbers of parameters by using a set of functions to find the shapes of the amplitude and the phase of the pulse. As a result, the number of parameters that need to be determined numerically is drastically reduced. For illustrative purposes, we present a formulation of our algorithm for finding pulses for the NMR technique. However, this algorithm can be extended to other techniques which use electromagnetic pulses to control a quantum system. Since a Fourier series can be used to describe any well-behaved function, we have chosen a limited series of sinusoidal functions whose amplitudes, frequencies and phases have to be fitted. The functions produced by this fit will be the envelopes of the amplitude and phase of the radio-frequency pulse. Thus, the amplitude and the phase of the pulse are modulated using sums composed of $s_{A}$ and $s_{P}$ sines, respectively. 
\begin{equation}\label{eq:omega} \begin{split} \ \Omega(t) = \sum_{k=1}^{s_{A}} a_{k}\sin \left( b_{k}t + c_{k} \right), \end{split} \end{equation} \begin{equation}\label{eq:phi} \begin{split} \ \phi(t) = \sum_{k=1}^{s_{P}} d_{k}\sin \left( f_{k}t + g_{k} \right). \end{split} \end{equation} The variables $a_{k}$, $b_{k}$, $c_{k}$, $d_{k}$, $f_{k}$ and $g_{k}$ must be optimized in order to obtain $U_{\mathcal{H}_{T}}\approx U_{goal}$. Generally, at the end of the optimization, the values of these variables will be of the same order as their initial values; recall that an initial guess has to be given as an input. The values that the function $\Omega(t)$ can assume must belong to the range $[0,A_{max}]$, where the upper bound is established by the experimental equipment. To ensure that the function $\Omega(t)$ does not exceed the lower bound, we have to shift this function so that its minimum is never negative. This can be accomplished with the following substitution: $\Omega(t) \rightarrow \Omega(t) - min\left[ \Omega(t) \right] $. Meanwhile, to limit the maximum value of $\Omega(t)$, we must find the maximum of this function, $\Omega_{max} = max [\Omega(t)]$, and divide it by the limit of the amplitude, $\Omega_{max}/A_{max}$. If the result of this division is greater than $1$, we need to make the substitution $\Omega(t) \rightarrow \Omega(t)A_{max}/\Omega_{max} $. We employ a Nelder-Mead simplex algorithm to solve the optimization problem \cite{livrooti}. This algorithm does not use derivatives of the function $\mathcal{F}$. In our case, we obtained good results using the fminsearch function from the MATLAB software. Considering the case where 63 parameters of Eq.~\eqref{eq:omega} and Eq.~\eqref{eq:phi} ($s_{A}=7$ and $s_{P}=14$) are optimized, this algorithm converged faster than the optimization method used in GRAPE, which requires derivatives of $\mathcal{F}$. 
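The parameterization of Eqs.~\eqref{eq:omega} and \eqref{eq:phi}, together with the amplitude shift and rescaling just described, can be sketched as follows. The time grid, parameter ranges and the bound $A_{max}$ below are illustrative assumptions.

```python
import numpy as np

def sine_series(t, params):
    """Eqs. (omega)/(phi): sum_k a_k * sin(b_k * t + c_k).
    params is an (s, 3) array whose rows are (a_k, b_k, c_k)."""
    a, b, c = params[:, 0], params[:, 1], params[:, 2]
    return np.sum(a[:, None] * np.sin(b[:, None] * t[None, :] + c[:, None]),
                  axis=0)

def constrain_amplitude(omega, a_max):
    """Shift so that min(omega) = 0, then rescale if the peak exceeds a_max."""
    omega = omega - omega.min()
    peak = omega.max()
    if peak > a_max:
        omega = omega * a_max / peak
    return omega

# Hypothetical s_A = 7 amplitude envelope on a 500 us grid
rng = np.random.default_rng(1)
t = np.linspace(0.0, 500e-6, 500)
amp_params = rng.uniform(-1.0, 1.0, size=(7, 3)) * [1e3, 2e4, np.pi]
omega = constrain_amplitude(sine_series(t, amp_params), a_max=2e4)
assert omega.min() >= 0.0 and omega.max() <= 2e4 + 1e-9
```

A derivative-free optimizer (e.g. Nelder-Mead) then varies the flattened `amp_params` (and the analogous phase parameters) to minimize $\mathcal{F}$.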
When compared to GRAPE, another advantage of our method is that, by increasing the pulse duration or reducing the interval $\delta t$, the number of variables that must be optimized does not increase. Consequently, the time to perform the optimization increases only linearly. In our algorithm, we use this advantage to calculate the value of $U_{k}$ quickly. For this, we utilize the approximation presented in \cite{apro}, which requires a small $\delta t$ to calculate the value of $U_{k}$ with good precision. Thus, for a system composed of $q$ qubits, we have \begin{equation}\label{eq:uhtprokaproximado} \begin{split} \ U_{k} \approx e^{-i\phi (k\delta t)\Gamma } W_{1} e^{-i\Omega (k\delta t)\Gamma \delta t} W_{2} e^{i\phi (k\delta t)\Gamma } , \end{split} \end{equation} with $\Gamma = \sum_{l=1}^{q}\sigma_{z_{l}}/2$, $W_{1}=e^{-i \mathcal{H}_{0} \delta t/2} H_{q}$ and $W_{2} =H_{q} e^{-i \mathcal{H}_{0} \delta t/2}$. The matrix $H_{q}$ is the tensor product of $q$ Hadamard gates. Note that the values of $W_{1}$ and $W_{2}$ only need to be calculated once. Since the other matrices of Eq.~\eqref{eq:uhtprokaproximado} are diagonal in the computational basis, in order to find the value of $U_{\mathcal{H}_{T}}$, we need to calculate only exponentials of numbers and matrix products. Although gradient-free optimization methods require more function evaluations, the union of our method with the approximation presented by Bhole and Jones \cite{apro} makes the average execution time of our algorithm less than the ones obtained using GRAPE (with this approximation) for the systems presented in the results section. \subsection{Considerations for the Pulse Amplitude} Another common problem we have to consider concerns the initial and final values of the pulse amplitude. The RF generator and amplifiers have a limited time response, which constrains the initial and final values of the pulse amplitude. 
This constraint can be satisfied if a smooth function $\Omega(t)$ is used that meets the following conditions: \begin{itemize} \item The initial and final values of the amplitude must be null; \item The rate of change of the amplitude, both at the beginning and at the end, has to match the restrictions imposed by the equipment used. \end{itemize} In our algorithm, we were able to satisfy these conditions by multiplying the amplitude of a pulse with duration $\tau_{f}$ by the function \begin{equation}\label{eq:tanh} \begin{split} \ \Lambda \left( t \right) = -\tanh \left[ \frac{ \zeta_{1} t}{ \tau_{f} } \right] \tanh \left[ \frac{ \zeta_{2} \left( t- \tau_{f} \right) }{ \tau_{f} } \right]. \end{split} \end{equation} The rate of change of the amplitude may be reduced by reducing the values of $\zeta_{1}$ and $\zeta_{2}$. The values of these constants are determined experimentally. It is worth mentioning that, in general, when we reduce the values of $\zeta_{1}$ and $\zeta_{2}$ the optimization becomes more challenging. Due to this fact, we should look for the highest values of $\zeta_{1}$ and $\zeta_{2}$ that produce pulses that are well implemented. With our equipment, we use $\zeta_{1} = \zeta_{2} = 2$ and obtain good experimental results. \subsection{Radio-Frequency Considerations} In our algorithm, we can also include a condition to obtain pulses that are robust to amplitude calibration errors. If we include this condition, we have to optimize two more unitary operations, which are given by \begin{equation}\label{eq:upm} \begin{split} \ U_{-} = \mathcal{T} \left[ \exp\left(-\frac{i}{\hbar} \int \left[ \mathcal{H}_{0} + (1-\varepsilon )\mathcal{H}_{C}(t) \right] dt \right) \right],\\ \\ U_{+} = \mathcal{T} \left[ \exp\left(-\frac{i}{\hbar} \int \left[ \mathcal{H}_{0} + (1+\varepsilon )\mathcal{H}_{C}(t) \right] dt \right) \right]. \end{split} \end{equation} The constant $\varepsilon$ represents the value of the error in the calibration of the pulse amplitude. 
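As a toy sketch of Eq.~\eqref{eq:upm}, the miscalibrated propagators can be obtained by rescaling the control Hamiltonian while leaving the drift untouched. The single-qubit $\pi/2$ target and all parameter values here are illustrative.

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)

def scaled_propagator(H0, Hc_list, dt, scale):
    """Eq. (upm): same pulse, control Hamiltonian scaled by (1 -/+ eps)."""
    U = np.eye(H0.shape[0], dtype=complex)
    for Hc in Hc_list:
        U = expm(-1j * (H0 + scale * Hc) * dt) @ U
    return U

eps = 0.05                                   # 5% amplitude miscalibration
H0 = np.zeros((2, 2), dtype=complex)         # toy: no drift Hamiltonian
dt, m = 1e-3, 1000
Hc_list = [0.5 * (np.pi / 2) / (m * dt) * sx] * m   # nominal pi/2 pulse
U_minus = scaled_propagator(H0, Hc_list, dt, 1 - eps)
U_nom = scaled_propagator(H0, Hc_list, dt, 1.0)
U_plus = scaled_propagator(H0, Hc_list, dt, 1 + eps)
```

For this constant pulse, the trace overlap of $U_{-}$ with the target degrades only as $\cos(\varepsilon\pi/4)$, which is the kind of flat response the robustness condition selects for.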
In our tests we used $\varepsilon = 0.05$, which is equivalent to an error of $5 \%$. The new function that we have to optimize will be given by the following weighted average: \begin{equation}\label{eq:fiderf} \begin{split} \ \mathcal{F}_{RF} = \frac{ \alpha_{1} \mathcal{F}_{-} + \alpha_{2} \mathcal{F} + \alpha_{3}\mathcal{F}_{+} }{ \alpha_{1} + \alpha_{2} + \alpha_{3} }, \end{split} \end{equation} where $\alpha_{1}$, $\alpha_{2}$ and $\alpha_{3}$ are the weights of each element of this average. The values of these weights are defined experimentally. Meanwhile, the values of $\mathcal{F}_{-}$ and $\mathcal{F}_{+}$ can be calculated, respectively, by replacing $U_{\mathcal{H}_{T}}$ with $U_{-}$ and $U_{+}$ in Eq.~\eqref{eq:fide}. In principle, because of the calculation of the new operators $U_{-}$ and $U_{+}$, the number of operations we need to perform to obtain the value of $\mathcal{F}_{RF}$ is three times the number needed to obtain $\mathcal{F}$. However, using Eq.~\eqref{eq:uhtprokaproximado}, the value of $\mathcal{F}_{RF}$ can be calculated efficiently, and the number of operations increases by only approximately $50\%$. In order to achieve this improvement, during the calculation of $U_{-}$, $U_{\mathcal{H}_{T}}$ and $U_{+}$ we have to take into account that in Eq.~\eqref{eq:uhtprokaproximado} only the term that depends on $\Omega(k\delta t)$ changes. In modern NMR equipment, the errors in the amplitude calibration of the pulse are higher than the errors in the phase calibration \cite{manual}. Thus, we do not include conditions for the pulses to be robust to phase calibration errors. \subsection{Resonance Frequency} As was done for the error in the pulse amplitude, we can consider errors in the resonance frequency. In this case, to obtain the operators $U_{-}$ and $U_{+}$ we must multiply $\omega_{k}$ by $(1-\varepsilon )$ and $(1+\varepsilon )$, respectively, instead of multiplying $\mathcal{H}_{C}(t)$. 
Therefore, if we use Eq.~\eqref{eq:uhtprokaproximado}, only the matrices $W_{1}$ and $W_{2}$ are modified in the calculation of $U_{-}$ and $U_{+}$. In our experiments we did not include this condition, because the pulses obtained were already robust to this type of error. As an example, in a system with 4 qubits we observed that even altering the resonance frequency by up to $30$ Hz, the value of $\mathcal{F}$ remains below 0.001. \begin{figure}[b]% \includegraphics[width=8.5cm]{molecula4q} \centering \caption{ Sample information for $^{13}\textrm{C}$-labelled transcrotonic acid molecule - The off-diagonal terms in the table are the $J$ coupling constants of the $^{13}\textrm{C}$ nuclear spins of the $^{13}\textrm{C}$-labelled transcrotonic acid molecule. Meanwhile, on the diagonal we have the values of the chemical shifts of each nuclear spin. The values in the table are in Hz.}% \label{fig:molecula4q}% \end{figure} \section{Experimental Results} For demonstration purposes, we have performed some calculations using the algorithm and used these results in real NMR experiments. Here, we present examples where the amplitude and the phase of the pulses were optimized in order to implement some quantum gates. The pulses are optimized from a random initial guess. Although this algorithm can be used to optimize gates of two qubits or more, we recommend decomposing such gates into free evolutions under the Hamiltonian $\mathcal{H}_{0}$ and one-qubit gates. By decomposing these gates, we can reduce the amount of errors, since pulse calibration errors do not occur during free evolutions. Furthermore, the time for optimizing the pulse sequence which implements the quantum gates will also be shorter. Following this approach, the errors due to the free evolutions can be diminished using refocusing sequences \cite{refoco}, the method presented by Ryan et al. \cite{compila}, or an optimization method like the one presented in \cite{oti}. 
In our tests, we optimized the pulses to implement sequences used to prepare the pseudo-pure state (PPS) \cite{livro} for systems with 4 and 7 qubits, and some pulses to control a system of 12 qubits. After the numerical optimization, we performed the experiments using a Bruker Avance III $700$ MHz NMR spectrometer. All the experiments were performed at room temperature. \begin{figure*}[t!]% \includegraphics[width=17.7cm]{seq2} \centering \caption{ Quantum circuit to prepare the pseudo-pure state $\left | 1111 \right \rangle$ from a thermal state - In addition to rotations, we use magnetic field gradients along the $\widehat{z}$ direction and free evolution. The gate $\exp \left ( -i\sigma_{z}\otimes \sigma _{z} \pi /4 \right )$ can be implemented using two free evolutions and $\pi$ rotations, in the target qubits of this gate, after each evolution. The time of these evolutions is equal to $1/4J_{kn}$, where $J_{kn}$ is the scalar coupling constant of the target qubits.}% \label{fig:pps4}% \end{figure*} \begin{figure*}[t!]% \centering \subfloat[Amplitude]{{\includegraphics[width=8.5cm]{pulsoamp} }}% \qquad \subfloat[Phase]{{\includegraphics[width=8.5cm]{pulsofase} }}% \qquad \subfloat[Rf inhomogeneity]{{\includegraphics[width=8.5cm]{fidelidaderf} }}% \qquad \subfloat[Pseudo-pure state fidelity]{{\includegraphics[width=8.5cm]{fidelidadeall2} }}% \caption{(a-b) Modulation of the amplitude and phase of some pulses that were used to implement the rotations present in the quantum circuit shown in figure \ref{fig:pps4}. (c) Rotation fidelity when we multiply the function that describes the pulse amplitude by $\textrm{A}_{\textrm{RF}}$. (d) Fidelity obtained in the simulation and experimental implementation of the quantum circuit used to prepare the pseudo-pure state $\left | 1111 \right \rangle$. To calculate the fidelity, we determined the experimental state of each qubit using the quantum state tomography method. 
The last point in the graph was obtained with the calculation of the tensor product of the four experimentally determined states.}% \label{fig:pulsos4}% \end{figure*} \subsection{4 Qubits System} \begin{figure*}[t!]% \includegraphics[width=17.7cm]{pps7q} \centering \caption{ Sequence to prepare the labelled pseudo-pure state $\left | 000000 \right \rangle \left \langle 000000 \right | \sigma_{z}/2$ - In addition to rotations, we use magnetic field gradients along the $\widehat{z}$ direction and free evolution to prepare the labelled pseudo-pure state, $\left | 000000 \right \rangle \left \langle 000000 \right | \sigma_{z}/2$, starting from a thermal state. The time at which the pulse has to be applied is written below it. In the figure, $J_{kn}$ is the scalar coupling constant of the nuclear spins $k$ and $n$.}% \label{fig:pps7q}% \end{figure*} For these experiments, we used a sample of $^{13}\textrm{C}$-labelled transcrotonic acid (figure \ref{fig:molecula4q}) dissolved in acetone in order to implement the quantum circuit shown in figure \ref{fig:pps4}. In theory, with this circuit we can prepare the pseudo-pure state $\left | 1111 \right \rangle$ starting from a thermal state. In this molecule, the four $^{13}\textrm{C}$ nuclear spins, under the action of a constant magnetic field, will physically represent a four-qubit system. The pulses were optimized considering that the $\textrm{H}$ nuclear spins are decoupled, which can be achieved in the experiments. The values of the resonance frequencies and the scalar coupling constants of the $^{13}\textrm{C}$ nuclear spins that were used in our algorithm are shown in figure \ref{fig:molecula4q}. We optimized the phase and amplitude of the pulse to implement the rotations shown in the circuit of figure \ref{fig:pps4}. The free evolutions are not optimized with our algorithm, since the $\pi$ rotations and the magnetic field gradients correct most of the errors that occur during these evolutions. 
Each pulse lasts $500$ $\mu$s, and 63 parameters were optimized ($s_{A} = 7$ and $s_{P} = 14$) in order to obtain $\mathcal{F}_{RF} < 0.0004$, with $\alpha_{1} = \alpha_{3} = 0.3$, $\alpha_{2} = 0.4$ and $\varepsilon = 0.05$. In our simulations we noticed that the number of parameters to be optimized can be reduced to 24 ($s_{A} = 4$ and $s_{P} = 4$), but in doing so we generally have to increase the duration of the pulse. For the 4 qubits system, we preferred to increase the number of parameters in order to reduce the duration of the pulse. When possible, we optimized the pulses to implement the largest number of simultaneous rotations. This reduces the total time of the experiment and, in general, improves the fidelity \cite{livronc} of the results. In figure \ref{fig:pulsos4}(a-b), we graphically represent the amplitude and phase modulations of some of the pulses used to prepare the pseudo-pure state. We can see in figure \ref{fig:pulsos4}(c) that even when we have errors in the calibration of the pulse amplitude, the rotation will still be implemented with good fidelity. This is due to the condition we have added in our algorithm to find pulses that are robust to such errors. After we had implemented the quantum circuit to prepare the pseudo-pure state, we determined the state of each qubit using the quantum state tomography method \cite{tomografia}. In figure \ref{fig:pulsos4}(d), we present the fidelity between the states measured experimentally and the theoretical states. We also present in this same figure the fidelity between the theoretical states and the states obtained by simulating this circuit with the optimized pulses. When we performed the tensor product of the 4 qubits state, which were determined experimentally, and compared it with the theoretical state, we found a fidelity of $0.9993$, which is exceptionally good. 
\begin{figure*}[t!]% \includegraphics[width=17.7cm]{molecula12} \centering \caption{ Sample information for per-$^{13}\textrm{C}$-labelled dichlorocyclobutanone molecule - The off-diagonal terms of the table are the values of the $J$ coupling constants of the $^{13}\textrm{C}$ and H nuclear spins of the per-$^{13}\textrm{C}$-labelled dichlorocyclobutanone molecule. Meanwhile, on the diagonal are written the values of the chemical shifts of each nuclear spin. The values in the table are in Hz.}% \label{fig:molecula12}% \end{figure*} \begin{figure*}[t!]% \centering \subfloat[Amplitude]{{\includegraphics[width=8.5cm]{pulso907q} }}% \qquad \subfloat[Phase]{{\includegraphics[width=8.5cm]{fase7q} }}% \qquad \subfloat[Thermal and labelled PPS spectrum of $C_{7}$]{{\includegraphics[width=17.7cm]{resultado7q} }}% \caption{(a-b) Modulation of the amplitude and phase of some pulses that were used to implement the rotations present in the quantum circuit shown in figure \ref{fig:pps7q}. (c) The blue line is the experimental spectrum obtained after the implementation of the sequence to prepare the labelled PPS followed by a $\pi/2$ rotation in the seventh qubit state. The gray dashed line is the experimental spectrum of the thermal state. To obtain the gray spectrum, we implemented a rotation of $\pi/2$ in the state of all nuclear spins and measured the system magnetization.}% \label{fig:pulsos7}% \end{figure*} \subsection{7 Qubits System} We also ran tests with a 7 qubit system. In this case, we used our algorithm to find the pulses that implement the rotations shown in the quantum circuit illustrated in figure \ref{fig:pps7q}. This quantum circuit is used to prepare the labelled pseudo-pure state $\left | 000000 \right \rangle \left \langle 000000 \right | \sigma_{z}/2$ \cite{lpps}, starting from a thermal state. 
The pulse optimization was performed considering that the nuclear spins of the hydrogen atoms of the per-$^{13}\textrm{C}$-labelled dichlorocyclobutanone molecule (figure \ref{fig:molecula12}) are decoupled. Thus, the nuclear spins of the $^{13}\textrm{C}$ of this molecule will physically represent a 7 qubit system. Although it is possible to divide this 7 qubit system into subgroups to accelerate the pulse optimization \cite{compila}, here we did not use this strategy, since our objective was to verify whether our algorithm can provide good results as the size of the system increases. In our simulations we used the chemical shifts and the scalar couplings shown in figure \ref{fig:molecula12}. The amplitude and phase modulations of some of the pulses obtained with our algorithm are shown in figure \ref{fig:pulsos7}(a-b). Each pulse lasts $600$ $\mu$s, and 39 parameters were optimized ($s_{A} = 5$ and $s_{P} = 8$) in order to obtain $\mathcal{F} < 0.004$. When we simulated the quantum circuit shown in figure \ref{fig:pps7q}, using the pulses optimized with our algorithm, and compared the result with the theoretical state, $\left | 000000 \right \rangle \left \langle 000000 \right | \sigma_{z}/2$, we found a fidelity greater than 0.99. \begin{figure*}[t!]% \centering \subfloat[Amplitude]{{\includegraphics[width=8.5cm]{pulso9012q} }}% \qquad \subfloat[Phase]{{\includegraphics[width=8.5cm]{fase12q} }}% \qquad \subfloat[Carbon spectrum]{{\includegraphics[width=17.7cm]{resultado12qc} }}% \qquad \subfloat[Hydrogen spectrum]{{\includegraphics[width=17.7cm]{resultado12qh} }}% \caption{(a-b) Modulation of the amplitude and phase of the pulses that were implemented in the 12 qubits system. 
In the figure, the letters C and H indicate that the pulses are applied simultaneously with frequencies close to the carbon and hydrogen resonance frequencies, respectively. (c-d) The gray line is the thermal spectrum obtained by applying a fast square pulse which implements a rotation of $\pi/2$ in the state of all qubits. The other spectra were obtained by implementing the optimized pulses of (a-b) and measuring the magnetization of the system in the rotating frame.}% \label{fig:pulso12}% \end{figure*} After the pulse optimization, we implemented the quantum circuit to prepare the labelled PPS state $\left | 000000 \right \rangle \left \langle 000000 \right | \sigma_{z}/2$ and determined the state of the nuclear spin that represents the seventh qubit. Theoretically, if the system is in this labelled PPS, when we implement a rotation of $\pi/2$ in the state of the seventh qubit and measure its magnetization, we must obtain a spectrum with only one peak. In figure \ref{fig:pulsos7}(c), we present the experimental spectrum obtained after implementing the sequence to prepare the labelled PPS (followed by a $\pi/2$ rotation in the state of the seventh qubit), along with the thermal state spectrum. As the signal coming from the sample is weak when compared to the noise, we performed 70 measurements to obtain the spectra. Even with this difficulty, we can see that it was possible to obtain a good experimental result using our algorithm to optimize the pulses. In this test we did not include the condition for the pulses to be robust to errors in their amplitude. \subsection{12 Qubits System} Finally, we performed some experiments with a 12 qubit system. In this case, the nuclear spins of the five hydrogen atoms of the per-$^{13}\textrm{C}$-labelled dichlorocyclobutanone molecule (figure \ref{fig:molecula12}) were used to physically represent 5 qubits. The other 7 qubits were represented by the carbon nuclear spins. 
In order to optimize the pulses, we divided this system into subsystems. If we do not follow this strategy, the optimization will be slow, since it would be performed in a Hilbert space of dimension $2^{12}$. In our case, we obtained good results considering the following subsystems: $\left \{C_{1},C_{2},C_{3},H_{4} \right \}$, $\left \{C_{2},C_{7} \right \}$, $\left \{C_{3},H_{2},H_{3} \right \}$, $\left \{C_{4},C_{5},C_{7},H_{1} \right \}$ and $\left \{C_{5},C_{6},C_{7},H_{5} \right \}$. The value of the function to be optimized, $\mathcal{F}_{sub}$, for a system composed of $n$ subsystems will be given by the following mean: \begin{equation}\label{eq:fidesub} \begin{split} \ \mathcal{F}_{sub} = \dfrac{1}{n}\sum_{k}^{n} 1 - \frac{\left | Tr\left( U_{goal_{k}}^{\dagger } U_{k} \right) \right |}{N_{k}}, \end{split} \end{equation} where $U_{k}$, $U_{goal_{k}}$ and $N_{k}$ represent, respectively, the optimized unitary, the goal unitary and the dimension of the $k$-th subsystem. The signal coming from our sample is very weak when compared to the noise. Because of this, evaluating the results of some pulse sequences in this system can be very complicated. To work around this problem, we used only $\pi/2$ rotations in our experimental tests. Thus, in the ideal case, the spectra of the nuclear spins that are targets of these rotations must contain peaks with maximum amplitude, and the spectra of the other spins will not show peaks. In addition, to analyse the experimental results, we can use the spectrum obtained by applying a fast square pulse that implements a rotation of $\pi/2$ in the state of all nuclear spins. This fast pulse has a duration of $10$ $\mu$s, and its amplitude is calibrated to obtain a spectrum with maximum amplitude. In figure \ref{fig:pulso12}(a-b), we present the shape of the amplitude and phase of two pulses that were optimized. The first pulse implements a rotation of $\pi/2$ in the state of all nuclear spins. 
The second pulse implements a rotation of $\pi/2$ on the nuclear spins of the atoms $C_{1},H_{1},H_{5}$, which represent qubits 1, 8 and 12, respectively. Each pulse lasts $1$ ms, and 78 parameters were optimized ($s_{A} = 10$ and $s_{P} = 16$) in order to obtain $\mathcal{F}_{sub} < 0.007$. One of the main difficulties in optimizing the pulses is the fact that the resonance frequencies of the nuclear spins of the hydrogen atoms differ by only a few hundred Hz. Due to this fact, to individually control these spins, we had to increase the pulse duration and the number of parameters to be optimized. After the optimization, we verified that the fidelity between the operator implemented by the optimized pulses ($U_{\mathcal{H}_{T}}$) and the ideal operator ($U_{goal}$) is greater than 0.97, by comparing the two matrices, the ideal and the calculated one. To perform this calculation, we used Eqs.~(\ref{eq:uhtpro}) and (\ref{eq:uhtprok}), considering the complete system of 12 qubits, to obtain $U_{\mathcal{H}_{T}}$. Due to the efficiency of our algorithm and the fact that we used subgroups, when we simulate on the same computer a pulse with a discretization of $1$ $\mu$s, we note that the time required in the optimization to reach $\mathcal{F}_{sub} < 0.007$ may be 25 times less than the time required to calculate $U_{\mathcal{H}_{T}}$ of this pulse considering the complete system of 12 qubits. In figure \ref{fig:pulso12}(c-d), we present the spectra obtained experimentally after implementing the optimized pulses, along with the spectrum obtained by applying a fast square pulse. We can see in this figure that we have achieved good experimental results, even without including in the optimization conditions for the pulses to be robust to some types of errors. In our tests, we verified that if we double the number of parameters to be optimized, it is possible to achieve a fidelity superior to 0.995 in the simulation. 
However, the experimental results will not be very different from those shown in figure \ref{fig:pulso12}(c-d). In this system, even delays of a few hundred nanoseconds can affect the final result. Therefore, if we want better experimental results in this system, we must include this and other sources of errors in our optimization. For the carbon nuclear spins, figure \ref{fig:pulso12}(c), we had to perform 128 measurements to increase the signal-to-noise ratio. Thus, during these measurements (approximately 4 hours), the variations of temperature and magnetic field are two other sources of errors that cannot be disregarded. \begin{figure*}[t!]% \centering \subfloat[16 qubits]{{\includegraphics[width=4cm]{16qb} }}% \qquad \subfloat[36 qubits]{{\includegraphics[width=6cm]{36qb} }}% \caption{ Square bi-dimensional lattices composed of 16 (a) and 36 (b) qubits. Black bars indicate which pairs of nuclear spins have non-zero scalar coupling. The subgroups used in the optimization are painted with blue, green and yellow colors. }% \label{fig:molecula16}% \end{figure*} \begin{figure*}[t!]% \centering \subfloat[Amplitude]{{\includegraphics[width=8.5cm]{amp100} }}% \qquad \subfloat[Phase]{{\includegraphics[width=8.5cm]{fase100} }}% \caption{(a-b) Modulation of the amplitude and phase of the pulses that were optimized to implement a $\pi / 2$ rotation on all odd spins of the lattice.}% \label{fig:pulso100}% \end{figure*} \subsection{Simulations for 16, 36, 100 and 65536 Qubits} Our algorithm may have its performance impaired if we increase the number of parameters to be optimized, since it is based on a Nelder-Mead simplex algorithm \cite{livrooti,neldalgo,neld}. Therefore, we performed a test to verify whether it is possible to optimize pulses quickly for larger systems considering the same 63 parameters ($s_{A} = 7$ and $s_{P} = 14$) used with the 4 qubits system. The systems considered are two 2D lattices with coupling between nearest neighbours. 
Similar kinds of systems are already an experimental reality \cite{rede2d}, and we believe the algorithm described in this work can be useful for controlling them. The lattices have 16 and 36 nuclear spins, representing systems of 16 and 36 qubits, respectively, figure \ref{fig:molecula16}. These systems are to be controlled using the NMR technique, so the natural Hamiltonian of the system is $\mathcal{H}_{0}$, Eq. (\ref{eq:h0}). The value of the scalar coupling constant ($J_{kn}$) between nearest neighbours is equal to $50$ Hz. The angular oscillation frequency of the $k$th nuclear spin is given by $\omega_{k} = 2\pi[b + s(k-1)]$, with $b = 700$ MHz and $s = 2$ kHz. The angular frequency of the rotating reference frame ($\omega_{R}$) is equal to the mean of the angular oscillation frequencies of the first and last spins of the lattice. In our test, we optimized the phase and amplitude of a $1$ ms pulse to implement a rotation of $\pi/2$ on all the odd qubits of the lattice. As in the 12-qubit case, here we perform the optimization in subsystems. The 16-qubit system was divided into 7 groups of 4 qubits, and the 36-qubit system was divided into 13 groups of 4 qubits and 8 groups of 2 qubits, as shown in figure \ref{fig:molecula16}. In the optimization of the pulse for the 16-qubit system, it was possible to reach $\mathcal{F}_{sub} < 0.01$ in less than an hour. Meanwhile, for the 36-qubit system, after 1 hour of optimization we were able to obtain $\mathcal{F}_{sub} < 0.025$. The shapes of the amplitude and phase of these pulses are shown in figure \ref{fig:pulso100}. To reduce the errors due to the approximation shown in Eq. (\ref{eq:uhtprokaproximado}), we decreased the value of the pulse discretization during the optimization: we started with a discretization $\delta t = 5$ $\mu$s and finished the optimization with $\delta t = 0.625$ $\mu$s.
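The frequency layout just described is straightforward to reproduce; the following sketch (variable names are ours) computes $\omega_{k}$ and the rotating-frame frequency $\omega_{R}$ for the 16-spin lattice.

```python
import math

b = 700e6      # base frequency, Hz (700 MHz)
s = 2e3        # frequency spacing, Hz (2 kHz)
N = 16         # number of spins in the lattice

# omega_k = 2*pi*[b + s*(k-1)], k = 1..N
omega = [2 * math.pi * (b + s * (k - 1)) for k in range(1, N + 1)]

# Rotating frame: mean of the first and last spin frequencies,
# i.e. 2*pi*(b + s*(N-1)/2).
omega_R = 0.5 * (omega[0] + omega[-1])
```

The total spread $\omega_{N}-\omega_{1}=2\pi s(N-1)$ grows linearly with the lattice size, which is the origin of the difficulty discussed further below for very large lattices.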
In principle, the two main difficulties in optimizing pulses to control large quantum systems are the time and the amount of memory needed to perform the simulation. If we divide these systems into subgroups, we can work around both problems. For example, we did a test with a $256 \times 256$ 2D square lattice, which represents a system of $65536$ qubits. The amount of memory used in the optimization process was less than 2 GB, and the time to calculate all the quantities needed to obtain the value of $\mathcal{F}_{sub}$, Eq. (\ref{eq:fidesub}), was approximately 1 minute (considering a pulse of $1$ ms and $\delta t = 10$ $\mu$s). In our simulations, we used a computer with an Intel Core i7-8700 processor and 16 GB of RAM. The subgroups were divided following the same pattern used for the 36-qubit system (figure \ref{fig:molecula16}). While it is possible to work with such large systems using our algorithm, when we increase the size of the lattice without changing the number of optimized parameters, the maximum pulse amplitude, or the pulse duration, we may have difficulty obtaining $\mathcal{F}_{sub} < 0.01$ in the optimization of the pulses. This happens because the distance between the resonance frequencies of the first and last spins of the lattice also increases. It is worth remembering that the error of the approximation presented in Eq. (\ref{eq:uhtprokaproximado}) will also increase when this distance increases. One way to solve this problem is to consider a lattice formed by more than one species of atom, so that we can use multiple rotating frames \cite{livro}. In this case, the resonance frequency of the atoms of one species may differ by several hundred MHz from that of the atoms of another species.
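To see why subgrouping makes the 65536-qubit test tractable at all, it helps to estimate the storage required for the propagators. The numbers below are our own back-of-the-envelope illustration (complex128 entries, 4-qubit groups), not the paper's measured footprint.

```python
# A propagator on n qubits is a 2^n x 2^n complex matrix.
# complex128 = 16 bytes per entry.
def propagator_bytes(n_qubits):
    dim = 2 ** n_qubits
    return dim * dim * 16

# A full 65536-qubit propagator would have dimension 2^65536,
# a number with roughly 20000 decimal digits -- utterly infeasible.
full_dim = 2 ** 65536

# Subgrouped: ~65536/4 = 16384 groups of 4 qubits, each needing
# only a 16x16 propagator per time step.
groups = 65536 // 4
subgroup_total = groups * propagator_bytes(4)   # bytes for all group propagators
```

One 4-qubit propagator occupies only 4 KB, so even all 16384 group propagators together fit in about 64 MB, comfortably consistent with the sub-2 GB footprint reported above.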
It is worth mentioning that when the difference between the pulse frequency and the resonance frequency of a nuclear spin increases, it becomes more difficult for the pulse to change the state of that spin. In our simulation, we consider a spectrometer with 5 channels (5 rotating frames), as used in \cite{sonda}. As before, we used 63 parameters ($s_{A} = 7$ and $s_{P} = 14$) to optimize the amplitude and phase of a $1$ ms pulse to implement a rotation of $\pi /2$ on all the odd qubits of the lattice. In this case, the pulses applied on the 5 channels have the same amplitude and phase shapes, but their oscillation frequencies are different. We performed the simulation assuming the lattice is composed of 100 qubits, represented by nuclear spins of the atoms $^{1}\textrm{H}$, $^{19}\textrm{F}$, $^{13}\textrm{C}$, $^{31}\textrm{P}$ and $^{15}\textrm{N}$. The lattice is configured so that the first 20 qubits are represented by nuclear spins of $^{1}\textrm{H}$, the next 20 by nuclear spins of $^{19}\textrm{F}$, and so on. In each group of 20 spins, the resonance frequency of the $k$th nuclear spin of the group is given by $\omega_{k}^{n} = 2\pi[b_{n} + s(k-1)]$, with $s = 2$ kHz, where $b_{n}$ is the characteristic resonance frequency of the nuclear spin of the $n$th species of atom in the lattice. If we consider NMR equipment with a magnetic field magnitude of $16.44$ T, we have $b_{H} = 700$ MHz, $b_{F} = 658$ MHz, $b_{C} = 176$ MHz, $b_{P} = 283$ MHz and $b_{N} = -71$ MHz. To perform the optimization, we divided the system following the same pattern used for the 36-qubit system (figure \ref{fig:molecula16}). After approximately 3 hours of optimization, it was possible to obtain $\mathcal{F}_{sub} < 0.012$ with $\delta t = 0.625$ $\mu$s, which is an excellent result. The shape of the amplitude and phase of the optimized pulse is illustrated in figure \ref{fig:pulso100}.
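The multi-species frequency layout above can be sketched as follows (a minimal illustration in our own notation; only the $b_{n}$ values and the 2 kHz spacing are taken from the text).

```python
import math

s = 2e3  # 2 kHz spacing within each 20-spin group
# Characteristic resonance frequencies at 16.44 T (Hz)
b = {"H": 700e6, "F": 658e6, "C": 176e6, "P": 283e6, "N": -71e6}
species_order = ["H", "F", "C", "P", "N"]   # 20 spins each -> 100 qubits

# omega_k^n = 2*pi*[b_n + s*(k-1)], k = 1..20 within each species group
omega = []
for sp in species_order:
    for k in range(1, 21):
        omega.append(2 * math.pi * (b[sp] + s * (k - 1)))
```

Within each species the spins are separated by only 2 kHz, while neighbouring species differ by tens to hundreds of MHz, which is exactly what the five independent rotating frames exploit.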
It is worth remembering that this pulse was optimized considering the same restrictions used in the optimization of the pulses for the 7- and 12-qubit systems, so that it can be implemented well experimentally. It is possible to conclude from these results that, even with a small number of parameters, our algorithm can efficiently optimize pulses for the large systems considered here. Since our algorithm is fast and does not require a large amount of memory, we believe that it can contribute significantly to the control of large quantum systems that could be used as quantum computers, not only in NMR but also in other technologies. \section{Conclusions} In summary, we have developed an algorithm for optimizing radio-frequency pulses, commonly used in NMR systems, in order to implement quantum gates with high fidelity. The pulses can be optimized to be robust to calibration errors. Moreover, with our algorithm we can obtain pulses with smooth modulations, since these pulses are described by a set of smooth functions. This is an advantage over some other methods, since most NMR spectrometers do not deal well with fast variations of the pulse parameters. These functions can be chosen experimentally to ensure that the optimized pulses are implemented with good precision. Additionally, in the method we have developed, a small number of parameters is used, and consequently the whole optimization process is performed faster than in other methods. We have demonstrated the success of our algorithm in real NMR experiments, in which systems composed of 4 to 12 qubits were controlled. Finally, we have shown that, even in a system with 100 qubits, the pulses used to implement rotations can be described by a small number of parameters, and our algorithm is efficient enough to optimize a modulated pulse in a short period of time.
Thanks to the effectiveness of our optimization algorithm, we were able to obtain good experimental results without using any other error-correction technique or external calibration devices. Given this effectiveness and efficiency, we believe that our algorithm can be used to control large quantum systems in experimental techniques other than NMR. A future challenge would be to employ this algorithm to control systems containing thousands of qubits. \begin{acknowledgments} We thank Hemant Katiyar and Janet Venne for valuable discussions that helped to develop the ideas presented in this paper. We thank Shayan-Shawn Majidy for valuable comments on the manuscript. We acknowledge financial support from the Ministry of Innovation, Science and Economic Development (Canada), the Government of Ontario, CIFAR, Mike and Ophelia Lazaridis, CNPq and FAPERJ. \end{acknowledgments}
\section{Introduction} \label{intro} The conformal-invariance approach of Belavin et al. \cite{BPZ,CardyD-L} determines the universal bulk properties, including critical indices and correlation functions, of an infinite class of two-dimensional systems at the critical point. Cardy \cite{Cardyscp} extended the approach to semi-infinite two-dimensional critical systems with a uniform boundary condition, such as free or fixed boundary spins. Cardy \cite{Cardytab} and Burkhardt and Xue \cite{TWBX} made a further extension to semi-infinite critical systems with mixed, piecewise-uniform boundary conditions. Of systems with mixed boundary conditions, the Ising model has received the most attention. For the Ising model on the upper half plane $y>0$, with boundary conditions $a$ and $b$ on the negative and positive $x$ axes, the one and two-point averages $\langle \sigma\rangle$, $\langle \sigma_1\sigma_2\rangle$, $\langle \epsilon\rangle$, $\langle \epsilon_1\epsilon_2\rangle$, and $\langle \sigma_1\epsilon_2\rangle$ were derived by Burkhardt, Guim, and Xue in Refs. \cite{TWBG1,TWBX,TWBG2} for $ab= -+$ and $f+$. Here $\sigma$ and $\epsilon$ are the spin and energy operators, and $+$, $-$, and $f$ stand for spin-up, spin-down, and free-spin boundary conditions, respectively. The case of alternating boundary conditions $+-+-+\dots$, which switch between $+$ and $-$ at arbitrary points $\zeta_1$, $\zeta_2$, $\dots$ on the $x$ axis, is considered in \cite{TWBG2}. In the first half of this paper the Ising model with alternating boundary conditions $+f+f+\dots$ and with three different boundaries $-f+$ is analyzed with conformal-invariance methods. Exact results for the one and two-point averages of $\sigma$, $\epsilon$, and the complex stress tensor $T(z)$ are obtained.
The average stress tensor is of interest in connection with the Casimir or fluctuation-induced interaction of particles immersed in a two-dimensional critical fluid, or of a single particle with the linear boundary of the fluid \cite{EETWB,SMED,TWBEE,BEK}. For a two-dimensional critical system defined on the upper half plane with a uniform boundary condition on the $x$ axis, $\langle T(z)\rangle=0$, where $z=x+iy$. In the case of boundary condition $a$ for $x<\zeta_1$ and $b$ for $x>\zeta_1$, \begin{equation} \langle T(z)\rangle_{ab}={t_{ab}\over(z-\zeta_1)^2}\,,\label{Tab} \end{equation} where the amplitude $t_{ab}=t_{ba}$ depends on the bulk universality class \cite{Cardytab,TWBX}. For the Ising model $t_{+-}={1\over 2}$, and $t_{f+}=t_{f-}={1\over 16}$. For $aba$ and $abc$ boundaries, with changes in the boundary condition at points $\zeta_1$ and $\zeta_2$ on the $x$ axis, \begin{eqnarray} &&\langle T(z)\rangle_{aba}=t_{ab}\left({1\over z-\zeta_1}-{1\over z-\zeta_2}\right)^2\,,\label{Taba}\\ &&\langle T(z)\rangle_{abc}={t_{ab}\over (z-\zeta_1)^2}+{t_{bc}\over (z-\zeta_2)^2}+{t_{ac}-t_{ab}-t_{bc}\over(z-\zeta_1)(z-\zeta_2)}\,.\label{Tabc} \end{eqnarray} Since $t_{aa}=0$, Eq.~(\ref{Tabc}) reproduces Eq.~(\ref{Taba}) for $c=a$. Expressions (\ref{Taba}) and (\ref{Tabc}) are dictated by the requirements that $\langle T(z)\rangle$ scale as (length)$^{-2}$, diverge as in Eq.~(\ref{Tab}) for $z\to\zeta_1$ and $z\to\zeta_2$, and reduce to the results for $aa$ and $ac$ boundary conditions, respectively, in the limit $\zeta_2\to\zeta_1$. Equation (\ref{Taba}) also follows from Eq.~(\ref{Tab}) and the transformation property (\ref{Ttransform}) of the stress tensor under the conformal transformation \begin{equation} z'=\zeta_1-\left({z-\zeta\over\Lambda^2}-{1\over\zeta_2-\zeta_1}\right)^{-1}\,,\label{abtoaba} \end{equation} which maps the $ba$ geometry onto $aba$. Here $\Lambda$ is an arbitrary constant with the dimensions of length.
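The reduction of Eq.~(\ref{Tabc}) to Eq.~(\ref{Taba}) for $c=a$ (where $t_{bc}=t_{ab}$ and $t_{ac}=t_{aa}=0$) is also easy to confirm numerically; a minimal sketch, using the Ising amplitude $t_{f+}={1\over 16}$ and arbitrary test points of our own choosing:

```python
# Verify <T(z)>_abc reduces to <T(z)>_aba when c = a.
t_ab = 1.0 / 16.0                    # = t_bc when c = a (Ising t_{f+})
t_ac = 0.0                           # t_aa = 0
z, z1, z2 = 0.7 + 1.3j, -0.4, 1.1    # arbitrary point in upper half plane, switching points

T_aba = t_ab * (1.0 / (z - z1) - 1.0 / (z - z2)) ** 2
T_abc = (t_ab / (z - z1) ** 2 + t_ab / (z - z2) ** 2
         + (t_ac - t_ab - t_ab) / ((z - z1) * (z - z2)))
```

Algebraically this is just $(1/u - 1/v)^2 = 1/u^2 + 1/v^2 - 2/(uv)$, so the two expressions agree identically.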
In cases where the boundary condition changes at more than two points, for example, for $ababa$, $\langle T(z)\rangle$ is no longer uniquely determined by such elementary considerations, but the explicit form follows from the conformal-invariance approach, as shown below. The paper is organized as follows: In Sec.~\ref{IsingConformal} the semi-infinite critical Ising model is studied with conformal-invariance methods for alternating $+f+f+\dots$ boundaries and in the case $-f+$ of three different boundary conditions. The exact one and two-point averages of the spin $\sigma$, energy $\epsilon$ and stress tensor $T$ are derived for these boundary conditions in Subsecs.~\ref{pfpetc} and \ref{mfp}. In Subsec.~\ref{wedge} we analyze the critical Casimir force on an infinite, wedge-shaped inclusion in the upper half plane, oriented perpendicular to the $x$ axis. For an $f$ boundary along the $x$ axis and $+$ and $-$ boundary conditions on the left and right edges of the wedge, the Casimir force reverses direction at a critical value of the apex angle. The expansion of operators, such as $\sigma$ and $\epsilon$, near boundaries in terms of boundary operators has been studied extensively for {\em uniform} boundary conditions \cite{Diehl,CardyLewellen,EEStap,Cardydistantwall}. In Sec.~\ref{secMBOE} a comprehensive analysis of boundary-operator expansions in two-dimensional critical systems with {\em mixed} boundary conditions is presented. Two types of expansions - away from switching points of the boundary condition and at switching points - are considered. The asymptotic form of two-point averages is expressed in terms of one-point averages using the boundary-operator expansions. In another application of the expansions, to strips with mixed boundary conditions, we derive the distant-wall corrections to one-point averages near one edge due to the other, distant edge. 
All of the predictions for Ising systems based on the boundary-operator expansions are confirmed by comparison with exact results. Section~\ref{concludingremarks} contains concluding remarks. \section{Exact Ising results from conformal invariance}\label{IsingConformal} \subsection{Conformal differential equations}\label{confdiffeq} In the conformal classification \cite{BPZ,CardyD-L} the spin $\sigma$ and energy $\epsilon$ of the Ising model are both degenerate at level 2. The bulk $n$-point average $\langle\sigma_1\dots\sigma_\ell\,\epsilon_{\ell+1}\dots\epsilon_{n}\rangle$ satisfies the $n$ partial differential equations \begin{equation} \left[-{3\over 2(1+2\Delta_i)}\,{\partial^2\over\partial z_i^2}+\sum_{\substack{j=1\\ j\neq i}}^n\left({1\over z_{ij}}\,{\partial\over\partial z_j}+{\Delta_j\over z_{ij}^2}\right)\right]G^{(n)}(z_1,z_2,\dots,z_n)=0\,,\label{conformdiffeq} \end{equation} with $\Delta_k={1\over 16}$ for the spin and $\Delta_k={1\over 2}$ for the energy. Here $z_j=x_j+iy_j$ is the position of point $j$ in the complex plane, and $z_{ij}=z_i-z_j$. Burkhardt and Guim \cite{TWBG2} have discussed the solutions of Eq.~(\ref{conformdiffeq}) in the cases $\Delta_1=\Delta_2=\dots =\Delta_n={1\over 16}\;{\rm and}\;{1\over 2}$, corresponding to $\langle\sigma_1\dots\sigma_n\rangle$ and $\langle\epsilon_1\dots\epsilon_n\rangle$, respectively. 
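For $n=2$ and $\Delta_1=\Delta_2={1\over 16}$, the solution of Eq.~(\ref{conformdiffeq}) is simply $z_{12}^{-1/8}$, and the null-vector equation can be checked symbolically. The following sketch assumes SymPy is available; it is a verification of the stated differential equation, not part of the authors' derivation.

```python
import sympy as sp

z1, z2 = sp.symbols("z1 z2")
Delta = sp.Rational(1, 16)
G = (z1 - z2) ** sp.Rational(-1, 8)      # two-point spin function z_12^{-1/8}

# Level-2 equation for i = 1, n = 2:
# [-3/(2(1+2*Delta)) d^2/dz1^2 + (1/z12) d/dz2 + Delta/z12^2] G = 0
lhs = (-sp.Rational(3, 2) / (1 + 2 * Delta) * sp.diff(G, z1, 2)
       + sp.diff(G, z2) / (z1 - z2)
       + Delta * G / (z1 - z2) ** 2)
```

By hand, the three terms contribute $-{3\over 16}$, ${2\over 16}$, and ${1\over 16}$ times $z_{12}^{-17/8}$, which indeed sum to zero.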
In the former case, they showed that for even $n$ there are $2^{n/2-1}$ linearly independent solutions of differential equations (\ref{conformdiffeq}) given by \begin{eqnarray} &&G_\sigma^{(n,\alpha)}(z_1,\dots, z_n)=(z_{12}z_{34}\dots z_{n-1,n})^{-1/8}\nonumber\\ &&\qquad\times\Bigg\{{1\over 2}\, \sum_{\tau_1=\pm 1}\sum_{\tau_3=\pm 1}\dots\sum_{\tau_{n-1}=\pm 1}S_\alpha(\tau_1,\tau_3,\dots,\tau_{n-1})\prod_{\substack{i<j \\ i,j\;{\rm odd}}}\xi_{ij}^{\tau_i\tau_j}\Bigg\}^{1/2},\label{Gnsigma}\\ &&\xi_{ij}=\left({z_{i,j}\,z_{i+1,j+1}\over z_{i,j+1}\,z_{i+1,j}}\right)^{1/4},\label{xidef} \end{eqnarray} where $\alpha= 1,2,\dots,2^{n/2-1}$. The quantities $S_\alpha(\tau_1,\tau_3,\dots,\tau_{n-1})$ are the even operators $1$, $\tau_k\tau_\ell$ with $k<\ell$, $\tau_k\tau_\ell\tau_m\tau_n$ with $k<\ell<m<n$, etc., where $k,\ell,\dots$ take the values $1,3,5,\dots,n-1$. For $n=2$ and 4, \begin{eqnarray} && G_\sigma^{(2,1)}(z_1,z_2)=z_{12}^{-1/8}\,,\label{G2sigma}\\ && G_\sigma^{(4,1)}(z_1,\dots,z_4)=(z_{12}z_{34})^{-1/8}\left(\xi_{13}+\xi_{13}^{-1}\right)^{1/2}\,,\label{G41sigma}\\ && G_\sigma^{(4,2)}(z_1,\dots,z_4)=(z_{12}z_{34})^{-1/8}\left(\xi_{13}-\xi_{13}^{-1}\right)^{1/2}\,,\label{G42sigmasigma}\end{eqnarray} and for $n=6$, \begin{eqnarray} &&G_\sigma^{(6,\alpha)}(z_1,\dots,z_6)\nonumber\\ &&\qquad=(z_{12}z_{34} z_{56})^{-1/8}\left(\xi_{13}\xi_{15}\xi_{35}+C_{\alpha1}\,{\xi_{13}\over\xi_{15}\xi_{35}}+C_{\alpha2}\,{\xi_{15}\over\xi_{13}\xi_{35}}+C_{\alpha3}\,{\xi_{35}\over\xi_{13}\xi_{15}}\right)^{1/2}\,,\label{G6a} \end{eqnarray} with matrix $C$ of coefficients \begin{equation} C=\left(\begin{matrix} 1&1&1\\ 1&-1&-1\\ -1&1&-1\\ -1&-1&1\end{matrix}\right)\,.\label{G6b} \end{equation} For $\Delta={1\over 2}$, corresponding to the energy, there appears to be only one physical solution of differential equation (\ref{conformdiffeq}), given by \begin{eqnarray} && G_\epsilon^{(2)}(z_1,z_2)=z_{12}^{-1}\,,\label{G2eps}\\ &&
G_\epsilon^{(4)}(z_1,\dots,z_4)=(z_{12}z_{34})^{-1}-(z_{13}z_{24})^{-1}+(z_{14}z_{23})^{-1},\label{G4eps}\\ && G_\epsilon^{(n)}(z_1,\dots,z_n)={\rm Pf}^{(n)}{1\over z_{ij}}\,,\label{Gneps} \end{eqnarray} for $n=2$, 4, and general even $n$. Here ${\rm Pf}^{(n)}A_{ij}$ denotes the Pfaffian of the $n\times n$ antisymmetric matrix with elements $A_{ij}$. From these solutions Burkhardt and Guim \cite{TWBG2} constructed all the correlation functions $\langle\sigma_1\sigma_2\dots\sigma_n\rangle$ and $\langle\epsilon_1\epsilon_2\dots\epsilon_n\rangle$, both in the bulk and in the half space with uniform fixed and free-spin boundary conditions. In addition, they derived the one and two-point averages of the spin and energy density in the half space with alternating $+-+-+\dots$ boundary conditions. \subsection{Boundary condition $+f+f+\dots$} \label{pfpetc} \subsubsection{General approach for alternating boundary conditions} We begin by considering the correlations of an arbitrary primary operator $\phi(z,\bar z)$ in a semi-infinite critical system defined on the upper half plane, with $ababa\dots$ boundary conditions, which switch between $a$ and $b$ at an even number $m$ of points $\zeta_1<\zeta_2<\dots<\zeta_m$ on the $x$ axis. For $-\infty<x<\zeta_1$ the boundary condition is $a$, for $\zeta_1<x<\zeta_2$ it is $b$, for $\zeta_2<x<\zeta_3$ it is $a$, etc. Results for an odd number $m-1$ of $\zeta$'s are obtained by taking the limit $\zeta_m\to\infty$.
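Returning briefly to the energy solutions above, the Pfaffian formula (\ref{Gneps}) can be checked against the three-term expression (\ref{G4eps}) for $n=4$, together with the identity $({\rm Pf}\,A)^2=\det A$. A minimal numeric sketch with arbitrary test points of our own choosing:

```python
import numpy as np

# Four arbitrary distinct complex points
z = [0.0 + 1.0j, 1.0 + 2.0j, 3.0 + 0.5j, -2.0 + 1.5j]
d = lambda i, j: 1.0 / (z[i] - z[j])          # A_ij = 1/z_ij (antisymmetric)

# For a 4x4 antisymmetric matrix: Pf A = a12*a34 - a13*a24 + a14*a23,
# which is exactly the three-term formula for G_eps^{(4)}.
pf = d(0, 1) * d(2, 3) - d(0, 2) * d(1, 3) + d(0, 3) * d(1, 2)

A = np.array([[0 if i == j else d(i, j) for j in range(4)] for i in range(4)])
det = np.linalg.det(A)                         # should equal pf**2
```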
Following \cite{Cardyscp,TWBX,TWBG1,TWBG2}, we express the $n$-point correlation function of $\phi$ as \begin{equation} \langle\phi_1\dots\phi_n\rangle_{ababa\,\dots}={N(\zeta_1,\dots,\zeta_m,z_1,\bar z_1,\dots,z_n,\bar z_n)\over D(\zeta_1,\dots,\zeta_m)}\,,\label{NoverD1} \end{equation} where the numerator $N$ satisfies the same differential equations in the $m+2n$ variables $(\zeta_1,\dots,\zeta_m,z_1,\bar z_1,\dots,z_n,\bar z_n)$ as the bulk correlation function $\langle\psi_1\dots\psi_m\,\phi_{m+1}\dots\phi_{m+2n}\rangle_{\rm bulk}$ in the variables $(z_1,z_2,\dots z_{m+2n})$. In these differential equations the scaling index $\Delta_i$ for the operators $\phi_{m+1},\dots,\phi_{m+2n}$ is the usual bulk index $\Delta_\phi$. For the operators $\psi_1,\dots,\psi_m$, $\Delta_i=t_{ab}$, where $t_{ab}$ is the boundary index introduced in Eq.~(\ref{Tab}). The denominator $D$ in Eq.~(\ref{NoverD1}) satisfies the same differential equations in the variables $\zeta_1,\dots,\zeta_m$ as the bulk correlation function $\langle\psi_1\dots\psi_m\rangle_{\rm bulk}$ in the variables $z_1,\dots,z_m$, with $\Delta_i=t_{ab}$. In the limit that all of the $n$ points are translated infinitely far to the left of $\zeta_1$ without changing their relative positions, $\langle\phi_1\dots\phi_n\rangle_{ababa\,\dots}$ reduces to the corresponding correlation function for a uniform boundary condition $a$. All of the correlation functions we consider are known for a uniform boundary condition. Thus, once the numerator $N$ in Eq.~(\ref{NoverD1}) has been determined, $D$ can be obtained from \begin{equation} D(\zeta_1,\dots,\zeta_m)\langle\phi_1\dots\phi_n\rangle_{a}=\lim_{X\to -\infty}N(\zeta_1,\dots,\zeta_m,z_1+X,\bar z_1+X,\dots,z_n+X,\bar z_n+X)\,.\label{Dfromlimit} \end{equation} This procedure for determining $D$ is the simplest in practice, and it ensures that the correlation function (\ref{NoverD1}) for mixed boundary conditions is correctly normalized. 
For the Ising $n$-spin correlation function $\langle\sigma_1\dots\sigma_n\rangle_{+f+f+f+\,\dots}$, there is a simplifying feature. Both the bulk index $\Delta_\sigma$ and the boundary index $t_{+f}$ have the value ${1\over16}$, as mentioned just below Eqs.~(\ref{Tab}) and (\ref{conformdiffeq}). Thus, the numerator $N$ in Eq.~(\ref{NoverD1}) satisfies the same differential equations in the $m+2n$ variables $\zeta_1,\dots,\zeta_m,z_1,\bar z_1,\dots,z_n,\bar z_n$ as the bulk $n$-spin correlation function in the variables $z_1,z_2,\dots z_{m+2n}$. This implies that $N$ is an appropriate linear combination of the $2^{m/2+n-1}$ functions $G_\sigma^{(m+2n,\alpha)}(\zeta_1,\dots,\zeta_m,z_1,\bar z_1,\dots,z_n,\bar z_n)$ defined in Eq.~(\ref{Gnsigma}). Similarly $D$ is an appropriate linear combination of the $2^{m/2-1}$ functions $G_\sigma^{(m,\alpha)}(\zeta_1,\dots,\zeta_m)$. The linear combinations are determined by the requirement that $N/D$ reproduce the expected asymptotic behavior of $\langle\sigma_1\dots\sigma_n\rangle_{+f+f+f+\,\dots}$ as any two of the $n$ points approach each other or as any of the points approaches the boundary line $y=0$ or approaches infinity parallel to the $x$ axis. The operator product expansion of closely spaced spin operators and the one point averages of $\sigma$ and $\epsilon$ in the presence of a homogeneous boundary are discussed in the next subsection. The general form of the operator expansion near a boundary point is considered in Sec.~\ref{secMBOE}. \subsubsection{Operator product expansion} To obtain correlation functions involving the energy $\epsilon$ from $\langle\sigma_1\dots\sigma_n\rangle_{+f+f+\,\dots}$, we make use of the operator-product expansion (OPE) of two closely spaced $\sigma$ operators. 
This and two other useful OPE's (see Eq.~(D6) of Ref.~\cite{EETWB}, Eqs.~(2.39), (2.47), (3.46), and (A1) of Ref.~\cite{EE}, and Eq.~(D.25) of Ref.~\cite{SMED}) are given by \begin{eqnarray} &&\sigma(z_1,\bar z_1)\sigma(z_2,\bar z_2)=\vert z_{12}\vert^{-1/4}\nonumber\\ &&\qquad\times\left\{1-\textstyle{1\over 2}\,\vert z_{12}\vert\epsilon(z,\bar z) +{1\over 4}\,\left[z_{12}^2\,T(z)+\bar z_{12}^2\,\bar T(\bar z)\right] +\mathcal{O}\left(\vert z_{12}\vert^{3}\right)\right\}\,,\label{OPEsigsig}\\ &&\sigma(z_1,\bar z_1)\epsilon(z_2,\bar z_2)=-{1\over 2}\vert z_{12}\vert^{-1}\sigma(z,\bar z)\big[1+\mathcal{O}\left(\vert z_{12}\vert\right)\big]\,,\label{OPEsigeps}\\ &&\epsilon(z_1,\bar z_1)\epsilon(z_2,\bar z_2)=\vert z_{12}\vert^{-2}\left\{1+2\left[z_{12}^2\,T(z)+\bar z_{12}^2\,\bar T(\bar z)\right]+\mathcal{O}\left(\vert z_{12}\vert^{4}\right)\right\}\,,\label{OPEepseps} \end{eqnarray} where $z_{12}=z_1-z_2$ and $z={1\over 2}(z_1+z_2)$. In Eqs.~(\ref{OPEsigsig})-(\ref{OPEepseps}) we follow the convention of normalizing $\sigma$ and $\epsilon$ so that the bulk pair correlation functions are $\langle\sigma_1\sigma_2\rangle_{\rm bulk}=\vert z_1-z_2\vert^{-1/4}$ and $\langle\epsilon_1\epsilon_2\rangle_{\rm bulk}=\vert z_1-z_2\vert^{-1}$. 
With this normalization the correlation functions in the upper half plane with a uniform boundary condition on the $x$ axis are given by \cite{Cardyscp,TWBX} \begin{eqnarray} &&\langle\sigma_1\sigma_2\rangle_{\rm fixed\,or\,free}=(4y_1y_2)^{-1/8}\left[{1\over\sqrt{\rho}}\pm\sqrt{\rho}\,\right]^{1/2}\,,\label{sigsiguniformbc1}\\[2mm] &&\rho=\left[{(x_1-x_2)^2+(y_1-y_2)^2\over (x_1-x_2)^2+(y_1+y_2)^2}\right]^{1/2}\,,\label{sigsiguniformbc2}\\[2mm] &&\langle\epsilon_1\epsilon_2\rangle_{\rm fixed\,or\,free}={1\over 4y_1y_2}+{1\over (x_1-x_2)^2+(y_1-y_2)^2}-{1\over (x_1-x_2)^2+(y_1+y_2)^2}\,.\label{epsepsuniformbc} \end{eqnarray} The upper and lower sign in Eq.~(\ref{sigsiguniformbc1}) holds for fixed and free boundary conditions, respectively, and Eq.~(\ref{epsepsuniformbc}) holds for both boundary conditions. Equations (\ref{sigsiguniformbc1})-(\ref{epsepsuniformbc}), the property $\langle\sigma_1\sigma_2\rangle\to \langle\sigma_1\rangle\langle\sigma_2\rangle$ for $x_2\to\infty$, and its analog for $\langle\epsilon_1\epsilon_2\rangle$ imply the one-point averages \begin{eqnarray} &&\langle\sigma\rangle_{\rm fixed}=\pm \left({2\over y}\right)^{1/8}\,,\label{sigfixed}\\ &&\langle\epsilon\rangle_{\rm fixed\,or\,free}=\mp {1\over 2y}\,.\label{epsfixedfree} \end{eqnarray} In Eq.~(\ref{sigfixed}) the upper and lower signs correspond to spin-up and spin-down boundary conditions, respectively, and in Eq.~(\ref{epsfixedfree}) they correspond to fixed and free boundaries. Equation (\ref{epsfixedfree}), including the $\mp$ sign, also follows directly from $\langle\sigma_1\sigma_2\rangle$ in Eq.~(\ref{sigsiguniformbc1}) and the short-distance expansion of $\sigma_1\sigma_2$ in Eq.~(\ref{OPEsigsig}). 
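Equations (\ref{sigsiguniformbc1}) and (\ref{epsfixedfree}) can be cross-checked numerically via the short-distance expansion (\ref{OPEsigsig}): for two nearby points at equal height $y$ above a fixed boundary, $\langle\sigma_1\sigma_2\rangle \approx |z_{12}|^{-1/4}\big(1-{1\over 2}|z_{12}|\langle\epsilon\rangle\big)$ with $\langle\epsilon\rangle=-1/(2y)$. A minimal sketch (the chosen values of $y$ and the separation are ours):

```python
import math

y = 1.0
d = 1e-3                       # small horizontal separation x2 - x1

# Exact two-point function above a fixed boundary, Eq. (sigsiguniformbc1):
rho = d / math.sqrt(d**2 + (2 * y)**2)
exact = (4 * y * y) ** (-0.125) * (1 / math.sqrt(rho) + math.sqrt(rho)) ** 0.5

# OPE prediction: |z12|^{-1/4} * (1 - (1/2)|z12| <eps>), with <eps> = -1/(2y)
eps_avg = -1.0 / (2 * y)
ope = d ** (-0.25) * (1 - 0.5 * d * eps_avg)
```

The two expressions agree up to corrections of higher order in $d/y$ (the stress-tensor term in the OPE does not contribute here, since $\langle T\rangle=0$ for a uniform boundary).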
\subsubsection{Average spin $\langle\sigma\rangle_{+f+f+\,\dots}$} Here we consider the average spin at point $(x,y)$ of the critical Ising model defined on the upper half plane, with alternating boundary conditions $+f+f+\dots$, which switch at an even number $m$ of points $\zeta_1<\zeta_2<\dots<\zeta_m$ on the $x$ axis. According to the discussion below Eq.~(\ref{NoverD1}), $\langle\sigma\rangle_{+f+f+\,\dots}=N/D$, where $N$ and $D$ are appropriate linear combinations of the functions $G_\sigma^{(m+2,\alpha)}(\zeta_1,\dots,\zeta_m,z,\bar z)$ and $G_\sigma^{(m,\alpha)}(\zeta_1,\dots,\zeta_m)$, respectively. The linear combinations turn out to be particularly simple, consisting only of the function with $\alpha=1$. In this subsection we argue that \begin{eqnarray} &&\langle\sigma\rangle_{+f+f+\,\dots}=\left({i\over 4}\right)^{1/8}\,{G_\sigma^{(m+2,1)}(\zeta_1,\dots,\zeta_m,z,\bar z)\over G_\sigma^{(m,1)}(\zeta_1,\dots,\zeta_m)} \label{NoverD2} \end{eqnarray} has the correct asymptotic behavior for $y\to 0$, whereas other linear combinations do not. We now show this explicitly for $m=4$, with an argument which is easily extended to other even $m$.
Making the replacement $(z_1,\dots,z_6)\to(\zeta_1,\dots,\zeta_4,z,\bar z)$ and combining Eqs.~(\ref{xidef}), (\ref{G41sigma}), (\ref{G6a}), (\ref{G6b}), and (\ref{NoverD2}), we obtain \begin{eqnarray} &&\langle\sigma\rangle_{+f+f+}\nonumber\\&&\quad=\left({1\over 8y}\right)^{1/8}\left[{\xi_{13}\xi_{15}\xi_{35}+\xi_{13}(\xi_{15}\xi_{35})^{-1}+\xi_{15}(\xi_{13}\xi_{35})^{-1}+{\xi_{35}(\xi_{13}\xi_{15})^{-1}}\over \xi_{13}+\xi_{13}^{-1}}\right]^{1/2},\label{NoverD3a}\\ &&\xi_{13}=\left[{(\zeta_1-\zeta_3)(\zeta_2-\zeta_4)\over(\zeta_1-\zeta_4)(\zeta_2-\zeta_3)}\right]^{1/4},\label{NoverD3b}\\ && \xi_{15}=\left[{(\zeta_1-z)(\zeta_2-\bar z)\over(\zeta_1-\bar z)(\zeta_2-z)}\right]^{1/4}=e^{i(\varphi_1-\varphi_2)/2}, \label{NoverD3c}\\ && \xi_{35}=\left[{(\zeta_3-z)(\zeta_4-\bar z)\over(\zeta_3-\bar z)(\zeta_4-z)}\right]^{1/4}=e^{i(\varphi_3-\varphi_4)/2},\label{NoverD3d} \end{eqnarray} consistent with Eqs.~(\ref{G41sigma}), (\ref{G6a}) and (\ref{G6b}) for $\alpha=1$. Here $z-\zeta_j=|z-\zeta_j|e^{i\varphi_j}$, and $\varphi_j$ is the angle which a line from $\zeta_j$ to $z=x+iy$ in the complex plane forms with the $x$ axis. To check that Eqs.~(\ref{NoverD3a})-(\ref{NoverD3d}) satisfy the $+f+f+$ boundary condition, first suppose that $x<\zeta_1$. Then, in the limit $y\to 0$ all four angles $\varphi_1,\dots,\varphi_4$ approach $\pi$, implying $\xi_{15}\to 1$ and $\xi_{35}\to 1$. Thus, the square bracket in Eq.~(\ref{NoverD3a}) approaches 2, consistent with the spin-up boundary condition (\ref{sigfixed}) for $x<\zeta_1$. Now suppose that $\zeta_1<x<\zeta_2$. Then, in the limit $y\to 0$, $\varphi_1$ approaches $0$, while $\varphi_2$, $\varphi_3$ and $\varphi_4$ all approach $\pi$, implying $\xi_{15}\to -i$ and $\xi_{35}\to 1$. Thus, the square bracket in Eq.~(\ref{NoverD3a}) vanishes, consistent with the free-spin boundary condition for $\zeta_1<x<\zeta_2$.
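These limits are easy to verify numerically. The sketch below evaluates the bracketed ratio in Eq.~(\ref{NoverD3a}) using the angle representation of Eqs.~(\ref{NoverD3c}) and (\ref{NoverD3d}); the switching points and the small value of $y$ are our own test choices.

```python
import cmath

zeta = [-2.0, -1.0, 1.0, 2.0]            # switching points zeta_1..zeta_4
y = 1e-7                                  # approach the boundary from above

def bracket(x):
    """Ratio inside the square bracket of Eq. (NoverD3a) at z = x + iy."""
    z = complex(x, y)
    phi = [cmath.phase(z - zj) for zj in zeta]          # angles phi_1..phi_4
    xi13 = ((zeta[0] - zeta[2]) * (zeta[1] - zeta[3])
            / ((zeta[0] - zeta[3]) * (zeta[1] - zeta[2]))) ** 0.25
    xi15 = cmath.exp(1j * (phi[0] - phi[1]) / 2)
    xi35 = cmath.exp(1j * (phi[2] - phi[3]) / 2)
    num = (xi13 * xi15 * xi35 + xi13 / (xi15 * xi35)
           + xi15 / (xi13 * xi35) + xi35 / (xi13 * xi15))
    return num / (xi13 + 1 / xi13)
```

Evaluating at $x<\zeta_1$ or $\zeta_2<x<\zeta_3$ gives a value near 2 (spin-up segments), while $\zeta_1<x<\zeta_2$ gives a value near 0 (free segment), as argued above.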
Considering the two remaining possibilities $\zeta_2<x<\zeta_3$ and $x>\zeta_4$ in the same way, we confirm the full consistency of Eqs.~(\ref{NoverD3a})-(\ref{NoverD3d}) with the $+f+f+$ boundary condition. \subsubsection{Correlation function $\langle\sigma_1\dots\sigma_n\rangle_{+f+f+\,\dots}$} Now we turn to the $n$-spin correlation function of the semi-infinite critical Ising model with the same alternating boundary condition $+f+f+\dots$ as in the preceding subsection. According to the discussion below Eq.~(\ref{NoverD1}), $\langle\sigma_1\dots\sigma_n\rangle_{+f+f+\,\dots}=N/D$, where $N$ and $D$ are appropriate linear combinations of the functions $G_\sigma^{(m+2n,\alpha)}(\zeta_1,\dots,\zeta_m,z_1,\bar z_1,\dots,z_n,\bar z_n)$ and $G_\sigma^{(m,\alpha)}(\zeta_1,\dots,\zeta_m)$, respectively. As in the preceding subsection, we find that the linear combinations only involve the $G_\sigma$ with $\alpha=1$. Choosing the multiplicative constant for consistency with the normalization (\ref{sigfixed}) leads to \begin{eqnarray} &&\langle\sigma_1\dots\sigma_n\rangle_{+f+f+\,\dots}=\left({i\over 4}\right)^{n/8}\,{G_\sigma^{(m+2n,1)}(\zeta_1,\dots,\zeta_m,z_1,\bar z_1,\dots,z_n,\bar z_n)\over G_\sigma^{(m,1)}(\zeta_1,\dots,\zeta_m)}\,.\label{NoverD4} \end{eqnarray} Beginning with Eq.~(\ref{NoverD4}), we have derived the one and two-point averages $\langle\sigma\rangle$, $\langle\epsilon\rangle$, $\langle\sigma_1\sigma_2\rangle$, $\langle\sigma_1\epsilon_2\rangle$, and $\langle\epsilon_1\epsilon_2\rangle$ both for $+f+$ and $+f+f+$ boundary conditions. The results are given in the next two paragraphs.
The correlation functions $\langle\epsilon\rangle$, $\langle\sigma_1\epsilon_2\rangle$, and $\langle\epsilon_1\epsilon_2\rangle$ were obtained from the 2, 3, and 4 spin correlation functions given by Eq.~(\ref{NoverD4}) on letting pairs of spins approach each other and comparing with the operator product expansion (\ref{OPEsigsig}).\\ \paragraph{Results for $+f+$ boundary conditions} For the boundary condition of up spins for $x<\zeta_1$, free spins for $\zeta_1<x<\zeta_2$, and up spins for $x>\zeta_2$, \begin{eqnarray} &&\langle\sigma_1\rangle_{+f+}=\left({2\over y_1}\right)^{1/8}\sqrt{\cos\left(\textstyle{1\over 2}\gamma_{1,1}\right)}\,,\label{sigpfp}\\[3mm] && \langle\epsilon_1\rangle_{+f+}=-{1\over 2y_1}\,\cos\gamma_{1,1}\,.\label{epspfp} \end{eqnarray} \begin{eqnarray} &&\langle\sigma_1\sigma_2\rangle_{+f+}=\left({1\over 4y_1y_2}\right)^{1/8}\left[{1\over\sqrt{\rho}}\cos\left(\textstyle{1\over 2}\gamma_{1,1}-\textstyle{1\over 2}\gamma_{2,1}\right)+\sqrt{\rho}\cos\left(\textstyle{1\over 2}\gamma_{1,1}+\textstyle{1\over 2}\gamma_{2,1}\right)\right]^{1/2}.\label{sigsigpfp}\\[3mm] &&\langle\sigma_1\epsilon_2\rangle_{+f+}=- {1\over 2}\left({2\over y_1}\right)^{1/8}\left({1\over 2y_2}\right)\bigg[{1\over\rho}\,\cos\left(\textstyle{1\over 2}\gamma_{1,1}-\gamma_{2,1}\right) \nonumber\\ &&\qquad\qquad\qquad\qquad\qquad+\rho\cos\left(\textstyle{1\over 2}\gamma_{1,1}+\gamma_{2,1}\right)\bigg] \Big/ \sqrt{\cos\left(\textstyle{1\over 2}\gamma_{1,1}\right)}\,.\label{sigepspfp}\\[3mm] &&\langle\epsilon_1\epsilon_2\rangle_{+f+}=-{1\over 8y_1y_2}\bigg[\left(1-{2\over\rho^2}\right)\cos(\gamma_{1,1}-\gamma_{2,1})\nonumber\\ &&\qquad\qquad\qquad\qquad\qquad +\left(1-2\rho^2\right)\cos(\gamma_{1,1}+\gamma_{2,1})\bigg]\,.\label{epsepspfp} \end{eqnarray} Here \begin{eqnarray} &&\rho=\left[{(x_1-x_2)^2+(y_1-y_2)^2\over(x_1-x_2)^2+(y_1+y_2)^2}\right]^{1/2},\label{rhodef2}\\[2mm] && e^{i\gamma_{k,\ell}}={z_k-\zeta_{\ell+1}\over z_k-\zeta_\ell}\left\vert{z_k-\zeta_\ell\over
z_k-\zeta_{\ell+1}}\right\vert ={(x_k-\zeta_\ell)(x_k-\zeta_{\ell+1})+y_k^2+iy_k(\zeta_{\ell+1}-\zeta_\ell)\over\sqrt{\left[(x_k-\zeta_\ell)(x_k-\zeta_{\ell+1})+y_k^2\right]^2+y_k^2(\zeta_{\ell+1}-\zeta_\ell)^2}}\,, \label{gammakelldef1}\nonumber\\ &&\\ &&\gamma_{k,\ell}={\rm arg}\left({x_k-\zeta_{\ell+1}+iy_k\over x_k-\zeta_\ell+iy_k}\right)\,. \label{gammakelldef2} \end{eqnarray} As a check on predictions (\ref{sigpfp})-(\ref{epsepspfp}), we note that in the limit $\zeta_1\to -\infty$, $\zeta_2\to 0$, they correctly reproduce the findings of Burkhardt and Xue \cite{TWBX} for free and fixed spins on the negative and positive $x$ axes, respectively. (Caution: An $ab$ boundary in our notation corresponds to a $ba$ boundary in the notation of Ref.~\cite{TWBX}.) Conversely, Eqs.~(\ref{sigpfp})-(\ref{epsepspfp}) may be derived from the results of Ref.~\cite{TWBX} using the transformation properties of correlation functions under the conformal mapping (\ref{abtoaba}) of the $f+$ geometry onto the $+f+$ geometry. It is straightforward to express predictions (\ref{sigpfp})-(\ref{epsepspfp}) entirely in terms of Cartesian coordinates. Since $y_k>0$ and $\zeta_{\ell+1}>\zeta_\ell$, the quantity $e^{i\gamma_{k,\ell}}$ in Eq.~(\ref{gammakelldef1}) has a positive imaginary part. 
Thus, $0<\gamma_{k,\ell}<\pi$, so that \begin{eqnarray} &&\cos\left(\textstyle{1\over 2}\gamma_{k,\ell}\right)=\textstyle{1\over\sqrt 2}\left(1+\cos\gamma_{k,\ell}\right)^{1/2}\,,\label{cosgammakellover2}\\ &&\sin\left(\textstyle{1\over 2}\gamma_{k,\ell}\right)=\textstyle{1\over\sqrt 2}\left(1-\cos\gamma_{k,\ell}\right)^{1/2}\,.\label{singammakellover2} \end{eqnarray} Substituting these relations, along with \begin{eqnarray} &&\cos\gamma_{k,\ell}={(x_k-\zeta_\ell)(x_k-\zeta_{\ell+1})+y_k^2\over\sqrt{\left[(x_k-\zeta_\ell)(x_k-\zeta_{\ell+1})+y_k^2\right]^2+y_k^2(\zeta_{\ell+1}-\zeta_\ell)^2}}\,,\label{cosgammakell}\\ &&\sin\gamma_{k,\ell}={y_k(\zeta_{\ell+1}-\zeta_\ell)\over\sqrt{\left[(x_k-\zeta_\ell)(x_k-\zeta_{\ell+1})+y_k^2\right]^2+y_k^2(\zeta_{\ell+1}-\zeta_\ell)^2}}\,,\label{singammakell} \end{eqnarray} and the definition (\ref{rhodef2}) of $\rho$ in Eqs.~(\ref{sigpfp})-(\ref{epsepspfp}) leads to expressions in terms of Cartesian coordinates.\\ \paragraph{Results for $+f+f+$ boundary conditions} For the $+f+f+$ boundary with changes at $\zeta_1,\dots,\zeta_4$, Eq.~(\ref{NoverD4}) and the same general procedure as in the preceding subsection lead to \begin{eqnarray} &&\langle\sigma_1\rangle_{+f+f+}=\left({2\over y_1}\right)^{1/8}\sqrt{{\cos\left(\textstyle{1\over 2}\gamma_{1,1}-\textstyle{1\over 2}\gamma_{1,3}\right)+\chi^2\cos\left(\textstyle{1\over 2}\gamma_{1,1}+\textstyle{1\over 2}\gamma_{1,3}\right)\over 1+\chi^2}}\,.\label{sigpfpfp}\nonumber\\\\[3mm] && \langle\epsilon_1\rangle_{+f+f+}=-{1\over 2y_1}\,{\cos\left(\gamma_{1,1}-\gamma_{1,3}\right)+\chi^2\cos\left(\gamma_{1,1}+\gamma_{1,3}\right)\over 1+\chi^2}\,.\label{epspfpfp} \end{eqnarray} \begin{eqnarray} &&\langle\sigma_1\sigma_2\rangle_{+f+f+}=\left({1\over 4y_1y_2}\right)^{1/8}\bigg[\rho\cos\Big(\textstyle{1\over 2}(\gamma_{1,1}+\gamma_{2,1}-\gamma_{1,3}-\gamma_{2,3})\Big)\nonumber\\ &&\qquad\qquad +\chi^2 \cos\Big(\textstyle{1\over 
2}(\gamma_{1,1}-\gamma_{2,1}+\gamma_{1,3}-\gamma_{2,3})\Big)+\cos\bigg(\textstyle{1\over 2}(\gamma_{1,1}-\gamma_{2,1}-\gamma_{1,3}+\gamma_{2,3})\bigg)\nonumber\\ &&\qquad\qquad +\rho\,\chi^2 \cos\Big(\textstyle{1\over 2}(\gamma_{1,1}+\gamma_{2,1}+\gamma_{1,3}+\gamma_{2,3})\bigg)\bigg]^{1/2}\bigg/\Big[4\sqrt{\rho}\, \left(1+\chi^2\right)\Big]^{1/2}.\label{sigsigpfpfp}\\[3mm] &&\langle\sigma_1\epsilon_2\rangle_{+f+f+}=-\left({2\over y_1}\right)^{1/8}{1\over 2y_2}\nonumber\\ &&\qquad\qquad\times \bigg[\rho^2\cos\Big(\textstyle{1\over 2}\gamma_{1,1}-\textstyle{1\over 2}\gamma_{1,3}+\gamma_{2,1}-\gamma_{2,3}\Big)+\chi^2\cos\Big(\textstyle{1\over 2}\gamma_{1,1}+\textstyle{1\over 2}\gamma_{1,3}-\gamma_{2,1}-\gamma_{2,3}\Big)\nonumber\\ &&\qquad\qquad+\cos\Big(\textstyle{1\over 2}\gamma_{1,1}-\textstyle{1\over2}\gamma_{1,3}-\gamma_{2,1}+\gamma_{2,3}\Big)+\rho^2\chi^2\cos\Big(\textstyle{1\over 2}\gamma_{1,1}+\textstyle{1\over 2}\gamma_{1,3}+\gamma_{2,1}+\gamma_{2,3}\Big)\bigg]\nonumber\\ &&\qquad\qquad\times\bigg\{2\rho^2\left(1+\chi^2\right)\Big[\cos\left(\textstyle{1\over 2}\gamma_{1,1}-\textstyle{1\over 2}\gamma_{1,3}\right)+\chi^2\cos\left(\textstyle{1\over 2}\gamma_{1,1}+\textstyle{1\over 2}\gamma_{1,3}\right)\Big]\bigg\}^{-1/2}. 
\label{sigepspfpfp} \end{eqnarray} \begin{eqnarray} &&\langle\epsilon_1\epsilon_2\rangle_{+f+f+}={1\over 8y_1y_2 \rho^2(1+\chi^2)^2}\nonumber\\ &&\qquad\qquad\times \bigg\{\rho^2\left(-1+2\rho^2+2\rho^2\chi^2\right)\cos\left(\gamma_{1,1}- \gamma_{1,3}+\gamma_{2,1}-\gamma_{2,3}\right)\nonumber\\ &&\qquad\qquad +\chi^2\left(2+2\chi^2-\rho^2\chi^2\right)\cos\left(\gamma_{1,1}+\gamma_{1,3}-\gamma_{2,1}-\gamma_{2,3}\right)\nonumber\\ [2mm] &&\qquad\qquad +\left(-\rho^2+2+2\chi^2\right)\cos\left(\gamma_{1,1}-\gamma_{1,3}-\gamma_{2,1}+\gamma_{2,3}\right)\nonumber\\[2mm] &&\qquad\qquad +\rho^2\chi^2\left(2\rho^2-\chi^2+2\rho^2\chi^2\right)\cos\left(\gamma_{1,1}+\gamma_{1,3}+\gamma_{2,1}+\gamma_{2,3}\right)\nonumber\\[2mm] &&\qquad\qquad -\rho^2\chi^2\Big[\cos\left(-\gamma_{1,1}+\gamma_{1,3}+\gamma_{2,1}+\gamma_{2,3}\right)+\cos\left(\gamma_{1,1}-\gamma_{1,3} +\gamma_{2,1}+\gamma_{2,3}\right)\nonumber\\ &&\qquad\qquad +\cos\left(\gamma_{1,1}+\gamma_{1,3}-\gamma_{2,1}+\gamma_{2,3}\right)+\cos\left(\gamma_{1,1}+\gamma_{1,3} +\gamma_{2,1}-\gamma_{2,3}\right)\Big]\bigg\}.\label{epsepspfpfp} \end{eqnarray} Here $\rho$ and $\gamma_{k,\ell}$ are the same as in Eqs.~(\ref{rhodef2})-(\ref{gammakelldef2}), and \begin{equation} \chi=\left[{(\zeta_1-\zeta_3)(\zeta_2-\zeta_4)\over(\zeta_1-\zeta_4)(\zeta_2-\zeta_3)}\right]^{1/4}\,.\label{chidef} \end{equation} It is simple to check the consistency of Eq.~(\ref{sigpfpfp}) for $\langle\sigma_1\rangle_{+f+f+}$ and our earlier result (\ref{NoverD3a})-(\ref{NoverD3d}), with $\gamma_{1,1}=\varphi_2-\varphi_1$, $\gamma_{1,3}=\varphi_4-\varphi_3$, and $\chi=\xi_{13}$. \subsubsection{Average stress tensors $\langle T(z)\rangle_{+f+f+\,\dots}$ and $\langle T(z)\rangle_{+-+-+\,\dots}$} In the presence of mixed boundary conditions the average stress tensor does not vanish and appears explicitly in the conformal Ward identity and the differential equations for correlation functions \cite{Cardytab,TWBX}. 
Thus, while the numerator in expression (\ref{NoverD2}) for $\langle\sigma\rangle_{+f+f+\,\dots}$ obeys the differential equations with bulk-like form \begin{eqnarray} &&\left[-{4\over 3}\,{\partial^2\over\partial z^2}+{1\over z-\bar z}\,{\partial\over\partial\bar z}+{1/16\over(z-\bar z)^2} +\sum_{j=1}^m\left({1\over z-\zeta_j}\,{\partial\over\partial \zeta_j}+{1/16\over(z-\zeta_j)^2}\right)\right]G_\sigma^{(2+m,1)}(\zeta_1,\dots,\zeta_m,z,\bar z)=0\,,\nonumber\\ &&\label{conformdiffeqN} \end{eqnarray} the corresponding spin average satisfies \cite{level2} \begin{eqnarray} &&\left[-{4\over 3}\,{\partial^2\over\partial z^2}+{1\over z-\bar z}\,{\partial\over\partial\bar z}+{1/16\over(z-\bar z)^2} +\sum_{j=1}^m{1\over z-\zeta_j}\,{\partial\over\partial \zeta_j}+\langle T(z)\rangle_{+f+f+\,\dots}\right]\langle\sigma\rangle_{+f+f+\,\dots}=0\,.\nonumber\\ &&\label{conformdiffeqsig} \end{eqnarray} Combining Eqs.~(\ref{NoverD2}), (\ref{conformdiffeqN}), and (\ref{conformdiffeqsig}) leads to \begin{equation} \langle T(z)\rangle_{+f+f+\,\dots}=\sum_{j=1}^m\left[{1/16\over(z-\zeta_j)^2}+{1\over z-\zeta_j}\,{\partial\over\partial \zeta_j}\,\ln G_\sigma^{(m,1)}(\zeta_1,\dots,\zeta_m)\right]\,.\label{T+f+etc} \end{equation} A similar calculation based on the differential equations for any of the correlation functions $\langle\sigma_1\dots\sigma_\ell\,\epsilon_{\ell+1}\dots\epsilon_n\rangle$ with $+f+f+\dots$ boundary conditions leads to exactly the same stress tensor, since, for each of these correlation functions, the denominator $D$ in the $N/D$ form is also proportional to $G_\sigma^{(m,1)}(\zeta_1,\dots,\zeta_m)$. In the case of $+f+$ boundary conditions, corresponding to $m=2$, combining Eqs.~(\ref{G2sigma}) and (\ref{T+f+etc}) yields the same average stress tensor as in Eq.~(\ref{Taba}), with $t_{ab}=t_{ba}=t_{f+}={1\over 16}$. 
If there are more than two points $\zeta_1$, $\zeta_2$ on the $x$ axis at which the boundary condition changes, the explicit form of the average stress tensor is no longer determined by the elementary considerations that imply Eq.~(\ref{Taba}), but follows from conformal-invariance theory. For $+f+f+$ boundary conditions or $m=4$, Eqs.~(\ref{G41sigma}) and (\ref{T+f+etc}) lead to \begin{eqnarray} \langle T\rangle_{+f+f+} &=&{\textstyle{1\over 16}}\left({1\over z-\zeta_1}-{1\over z-\zeta_2}\right)^2+{\textstyle{1\over 16}}\left({1\over z-\zeta_3}-{1\over z-\zeta_4}\right)^2\nonumber\\ &&+{1\over 8}{\sqrt{\zeta_{31}\zeta_{42}} -\sqrt{\zeta_{41}\zeta_{32}}\over\sqrt{\zeta_{31}\zeta_{42}} +\sqrt{\zeta_{41}\zeta_{32}}} \left({1\over z-\zeta_1}-{1\over z-\zeta_2}\right)\left({1\over z-\zeta_3}-{1\over z-\zeta_4}\right)\,,\nonumber\\ \label{T+f+f+} \end{eqnarray} where $\zeta_{ij}=\zeta_i-\zeta_j$. Now we derive a formula analogous to Eq.~(\ref{T+f+etc}) for the semi-infinite critical Ising model with the alternating boundary condition $+-+-+\,\dots$. The correlation functions of $\sigma$ and $\epsilon$ in this system are analyzed in \cite{TWBG2}. In particular, \begin{equation}\langle\epsilon_1\dots\epsilon_n\rangle_{+-+-+\,\dots}=i^n\,{G_\epsilon^{(m+2n)}(\zeta_1,\dots,\zeta_m,z_1,\bar z_1,\dots,z_n,\bar z_n)\over G_\epsilon^{(m)}(\zeta_1,\dots,\zeta_m)}\,, \end{equation} where the function $G_\epsilon^{(n)}(z_1,\dots,z_n)$ is defined in Eq.~(\ref{Gneps}). 
Recalling that the scaling index for the energy is $\Delta_\epsilon={1\over 2}$ and carrying out a calculation similar to the one leading to Eq.~(\ref{T+f+etc}) leads to \begin{equation} \langle T(z)\rangle_{+-+-+\,\dots}=\sum_{j=1}^m\left[{1/2\over(z-\zeta_j)^2}+{1\over z-\zeta_j}\,{\partial\over\partial \zeta_j}\,\ln G_\epsilon^{(m)}(\zeta_1,\dots,\zeta_m)\right]\,,\label{T+-+etc} \end{equation} where \begin{equation} G_\epsilon^{(n)}(\zeta_1,\dots,\zeta_n)={\rm Pf}^{(n)}{1\over\zeta_{ij}}\,.\label{T+-+etcPf} \end{equation} Equations (\ref{T+-+etc}) and (\ref{T+-+etcPf}) are consistent with Eq.~(D4) in Ref.~\cite{EETWB} but have a simpler form. In the case of $+-+$ boundary conditions, corresponding to $m=2$, combining Eqs.~(\ref{G2eps}) and (\ref{T+-+etc}) yields the same average stress tensor as in Eq.~(\ref{Taba}), with $t_{ab}={1\over 2}$. For $+-+-+$ boundary conditions or $m=4$, Eqs.~(\ref{G4eps}) and (\ref{T+-+etc}) imply \begin{eqnarray} &&\langle T\rangle_{+-+-+} =\textstyle{1/2\over(z-\zeta_1)^2}+{1/2\over(z-\zeta_2)^2}+{1/2\over(z-\zeta_3)^2}+{1/2\over(z-\zeta_4)^2}\nonumber\\[2mm] &&\qquad -\textstyle\Big\{\textstyle\left[{1\over (z-\zeta_1)(z-\zeta_2)}+{1\over (z-\zeta_3)(z-\zeta_4)}\right]{1\over \zeta_{12}\zeta_{34}} -\textstyle\left[{1\over (z-\zeta_1)(z-\zeta_3)}+{1\over (z-\zeta_2)(z-\zeta_4)}\right]{1\over \zeta_{13}\zeta_{24}}\nonumber\\[2mm] &&\qquad +\textstyle\left[{1\over (z-\zeta_1)(z-\zeta_4)}+{1\over (z-\zeta_2)(z-\zeta_3)}\right]{1\over \zeta_{14}\zeta_{23}}\Big\}\Big/ \left({1\over\zeta_{12}\zeta_{34}}-{1\over\zeta_{13}\zeta_{24}}+{1\over\zeta_{14}\zeta_{23}}\right)\,,\label{T+-+-+} \end{eqnarray} in agreement with Eq.~(D3) in Ref.~\cite{EETWB}. 
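As an elementary numerical cross-check of Eq.~(\ref{T+-+-+}) (our own addition, not part of the derivation), the following Python sketch verifies that the $+-+-+$ stress tensor collapses to the $+-+$ form ${1\over 2}\left({1\over z-\zeta_1}-{1\over z-\zeta_2}\right)^2$, i.e.\ Eq.~(\ref{Taba}) with $t_{ab}={1\over 2}$, when $\zeta_4\to\zeta_3$ and the second minus segment disappears; all function names are ours.

```python
# Numerical spot check of the +-+-+ average stress tensor: as zeta4 -> zeta3
# the second "-" segment disappears and the +-+ result should be recovered.

def T_pmpmp(z, zeta1, zeta2, zeta3, zeta4):
    """Average stress tensor for +-+-+ boundary conditions, Eq. (T+-+-+)."""
    def p(a, b):
        return 1.0 / ((z - a) * (z - b))
    d12, d13, d14 = zeta1 - zeta2, zeta1 - zeta3, zeta1 - zeta4
    d23, d24, d34 = zeta2 - zeta3, zeta2 - zeta4, zeta3 - zeta4
    num = ((p(zeta1, zeta2) + p(zeta3, zeta4)) / (d12 * d34)
           - (p(zeta1, zeta3) + p(zeta2, zeta4)) / (d13 * d24)
           + (p(zeta1, zeta4) + p(zeta2, zeta3)) / (d14 * d23))
    den = 1.0 / (d12 * d34) - 1.0 / (d13 * d24) + 1.0 / (d14 * d23)
    quads = sum(0.5 / (z - a) ** 2 for a in (zeta1, zeta2, zeta3, zeta4))
    return quads - num / den

z = 0.3 + 0.7j
zeta1, zeta2, zeta3 = -2.0, -1.0, 1.0
T_limit = T_pmpmp(z, zeta1, zeta2, zeta3, zeta3 + 1e-7)
T_pmp = 0.5 * (1.0 / (z - zeta1) - 1.0 / (z - zeta2)) ** 2  # +-+ stress tensor
assert abs(T_limit - T_pmp) < 1e-5
```

The cancellation of the $(z-\zeta_3)^{-2}$ and $(z-\zeta_4)^{-2}$ poles in this limit is automatic in the formula, which is the point of the check.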
\subsection{Boundary condition $-f+$} \label{mfp} In this section we consider the $n$-spin correlation function $\langle\sigma_1\dots\sigma_n\rangle_{-f+}$ of the semi-infinite Ising model with spin-down boundary conditions on the $x$ axis for $x<\zeta_1$, free spins for $\zeta_1<x<\zeta_2$, and spin-up for $x>\zeta_2$, respectively. For reasons that will become clear, it is convenient to begin, not with $-f+$, but with the $f+-f$ boundary corresponding to free spins for $x<\zeta_a$, spin up for $\zeta_a<x<\zeta_b$, spin down for $\zeta_b<x<\zeta_c$, and free spins for $x>\zeta_c$. Once $\langle\sigma_1\dots\sigma_n\rangle_{f+-f}$ has been determined, it is simple to obtain $\langle\sigma_1\dots\sigma_n\rangle_{-f+}$ with a conformal coordinate transformation involving inversion about an appropriate point on the boundary. Recall that the amplitudes $t_{f+}=t_{f-}={1\over 16}$ and $t_{+-}={1\over 2}$, introduced below Eq.~(\ref{Tab}), equal the scaling indices of $\sigma$ and $\epsilon$, respectively. Accordingly, $\langle\sigma_1\dots\sigma_n\rangle_{f+-f}$ in the variables $(\zeta_a,\zeta_b,\zeta_c,z_1,\bar z_1,\dots,z_n,\bar z_n)$ is determined by the same conformal differential equations as the bulk correlation function $\langle\sigma_a\epsilon_b\sigma_c\sigma_1\dots\sigma_{2n}\rangle$ in the variables $(z_a,z_b,z_c,z_1,z_2,\dots,z_{2n})$. One possible strategy for calculating $\langle\sigma_1\dots\sigma_n\rangle_{f+-f}$ is to attempt to solve these differential equations, using the approach of \cite{TWBG1,TWBX}. Here we follow a different strategy. Setting $\zeta_a=-\zeta$, $\zeta_b=0$, and $\zeta_c=\zeta$, we switch the boundary condition at the origin with the help of two disorder operators \cite{KC,Cardydisorderop,TWBG1}. 
The advantage of this approach is that both the spin operator $\sigma$ and the dual disorder operator $\mu$ have bulk scaling dimension $\Delta={1\over 16}$, and the solutions of the relevant conformal differential equations are the known functions $G^{(n,\alpha)}(z_1,\dots,z_n)$ in Eq.~(\ref{Gnsigma}). Following \cite{KC,Cardydisorderop,TWBG1}, we express the desired correlation function as \begin{equation} \langle\sigma_1\dots\sigma_n\rangle_{f+-f}=\lim_{\substack{Y_1\to 0 \\ Y_2\to\infty}}{\langle\mu(iY_1,-iY_1)\mu(iY_2,-iY_2)\sigma(z_1,\bar z_1)\dots\sigma(z_n,\bar z_n)\rangle_{f+f}\over\langle\mu(iY_1,-iY_1)\mu(iY_2,-iY_2)\rangle_{f+f}}\,,\label{sigdotssigf+-f1} \end{equation} in terms of correlation functions with the $f+f$ boundary condition, with free spins for $x<-\zeta$ and $x>\zeta$ and spin up for $-\zeta<x<\zeta$. In the indicated limit the two disorder operators $\mu$ introduce a ladder of antiferromagnetic bonds along the positive $y$ axis, leading from $f+f$ to $f+-f$ boundary conditions. Writing the correlation functions in both the numerator and denominator of Eq.~(\ref{sigdotssigf+-f1}) in $N/D$ form, as in Eq.~(\ref{NoverD1}), leads to \begin{equation} \langle\sigma_1\dots\sigma_n\rangle_{f+-f}=\lim_{\substack{Y_1\to 0 \\ Y_2\to\infty}}{N_1(iY_1,-iY_1,iY_2,-iY_2,-\zeta,\zeta,z_1,\bar z_1,\dots,z_n,\bar z_n) \over N_2(iY_1,-iY_1,iY_2,-iY_2,-\zeta,\zeta)} \,.\label{sigdotssigf+-f2} \end{equation} Since the spin operator $\sigma$, the dual operator $\mu$, and the $f+$ and $+f$ boundary operators all have scaling index $\Delta_\sigma=\Delta_\mu=t_{f+}=t_{+f}={1\over 16}$, the function $N_1$ in Eq.~(\ref{sigdotssigf+-f2}) satisfies the same differential equations in its $2n+6$ arguments as the bulk correlation function $\langle\sigma_1\dots\sigma_{2n+6}\rangle_{\rm bulk}$ in the variables $z_1,\dots,z_{2n+6}$. 
Thus, $N_1$ is an appropriate linear combination of the $2^{n+2}$ functions $G_\sigma^{(2n+6,\alpha)}(iY_1,-iY_1, iY_2,-iY_2,-\zeta,\zeta,z_1,\bar z_1,\dots,z_n,\bar z_n)$ defined in Eq.~(\ref{Gnsigma}). Similarly, $N_2$ is a linear combination of the 4 functions $G_\sigma^{(6,\alpha)}(iY_1,-iY_1, iY_2,-iY_2,-\zeta,\zeta)$. \subsubsection{One-point function $\langle\sigma\rangle_{-f+}$} In the special case $n=1$, the function $N_1$ in Eq.~(\ref{sigdotssigf+-f2}) is an appropriate linear combination of the 8 functions $G_\sigma^{(8,\alpha)}$ defined by Eqs.~(\ref{Gnsigma}) and (\ref{xidef}), with \begin{eqnarray} &&\xi_{13}=\left({Y_2-Y_1\over Y_2+Y_1}\right)^{1/2},\quad\xi_{15}=\left[{Y_1-i\zeta\over(Y_1^2+\zeta^2)^{1/2}}\right]^{1/2}\,,\quad \xi_{17}=\left[{x^2+(y-Y_1)^2\over x^2+(y+Y_1)^2}\right]^{1/4},\nonumber\\ &&\xi_{35}=\left[{Y_2-i\zeta\over(Y_2^2+\zeta^2)^{1/2}}\right]^{1/2}\,,\quad \xi_{37}=\left[{x^2+(y-Y_2)^2\over x^2+(y+Y_2)^2}\right]^{1/4},\quad\xi_{57}=e^{-i(\theta_2-\theta_1)/2}\,,\nonumber\\ &&e^{i\theta_1}={z+\zeta\over|z+\zeta|}={x+\zeta+iy \over\sqrt{(x+\zeta)^2+y^2}}\,,\quad e^{i\theta_2}={z-\zeta\over|z-\zeta|}={x-\zeta+iy\over\sqrt{(x-\zeta)^2+y^2}}\,.\label{defsmfp} \end{eqnarray} Examining the leading asymptotic contribution of each of the $G_\sigma^{(8,\alpha)}$ for both $Y_1$ and $1/Y_2$ small, we find that only the contribution of $G_\sigma^{(8,7)}$ is consistent with the expected sign change in $\langle\sigma\rangle_{f+-f}$ as $x$ changes sign and the expected asymptotic behavior for the $f+-f$ boundary condition as $y\to 0$. 
Choosing the proportionality constant for consistency with Eq.~(\ref{sigfixed}), with the plus sign for $-\zeta<x<0$ and the minus sign for $0<x<\zeta$, we find \begin{eqnarray} &&\langle\sigma\rangle_{f+-f}=-\left({2\over y}\right)^{1/8}\sqrt{\sin\left(\textstyle{1\over 2}{\theta_2}-\textstyle{1\over 2}{\theta_1}\right)-{y\zeta\over x^2+y^2}\cos\left(\textstyle{1\over 2}\theta_2-\textstyle{1\over 2}\theta_1\right)}\,.\label{sigminusfreeplustrig0} \end{eqnarray} To obtain $\langle\sigma\rangle_{-f+}$ for the desired boundary condition of down spins for $x<\zeta_1$, free spins for $\zeta_1<x<\zeta_2$, and up spins for $x>\zeta_2$, we use the conformal transformation property \begin{equation} \langle\sigma(z',\bar z')\rangle_{-f+}=\left\vert{dz'\over dz}\right\vert^{-1/8}\langle\sigma(z,\bar z)\rangle_{f+-f}\label{conformtrans} \end{equation} together with the mapping \begin{equation} z'={\textstyle{1\over 2}}(\zeta_1+\zeta_2)+{\textstyle{1\over 2}}(\zeta_1-\zeta_2){\zeta\over z}\label{confomap} \end{equation} to change the boundary geometry. This leads to \begin{equation} \langle\sigma\rangle_{-f+}=\left({2\over y}\right)^{1/8}\sqrt{\cos\left(\textstyle{1\over 2}\gamma_{1,1}\right)-{2y\over\zeta_2-\zeta_1}\sin\left(\textstyle{1\over 2}\gamma_{1,1}\right)}\,,\label{sigmfp0} \end{equation} the main result of this subsection, where we have dropped the primes for simplicity. The quantity $\gamma_{1,1}$ in Eq.~(\ref{sigmfp0}) is the same as in Eq.~(\ref{sigpfp}) and is defined by Eqs.~(\ref{gammakelldef1}) and (\ref{gammakelldef2}). On making use of Eqs.~(\ref{cosgammakellover2})-(\ref{singammakell}), Eq.~(\ref{sigmfp0}) can be expressed entirely in terms of the Cartesian coordinates $(x,y)$. The one-point function (\ref{sigmfp0}) is expected to vanish on the half line $x={1\over 2}(\zeta_1+\zeta_2)$, $y>0$, since all points on this half line are equidistant from the up- and down-pointing boundary spins. 
Defining $\Delta x\equiv x-{1\over 2}(\zeta_1+\zeta_2)$, we choose the square root in Eq.~(\ref{sigmfp0}) to be positive for $\Delta x>0$ and negative for $\Delta x<0$, so that $\langle\sigma\rangle_{-f+}$ is an odd function of $\Delta x$. Expanding the argument of the square root in powers of $\Delta x$, one finds \begin{equation} \langle\sigma\rangle_{-f+}=\left({2\over y}\right)^{1/8}\sqrt{{y\over\left({1\over 4}\,\zeta_{21}^2+y^2\right)^{3/2}}\,\Delta x^2+{\rm O}\left(\Delta x^4\right)}\,, \end{equation} where $\zeta_{21}\equiv\zeta_2-\zeta_1$, consistent with a smooth, analytic continuation between the positive and negative branches at $\Delta x=0$. \subsubsection{One- and two-point averages for $-f+$ and $f+-$ boundary conditions} \paragraph{$-f+$ boundary conditions} To calculate the spin-spin correlation function $\langle\sigma_1\sigma_2\rangle_{-f+}$, we again begin with $f+-f$ boundary conditions and with Eq.~(\ref{sigdotssigf+-f2}) for $n=2$. Examining the leading asymptotic contribution of each of the 16 functions $G_\sigma^{(10,\alpha)}(iY_1,-iY_1,iY_2,-iY_2,-\zeta,\zeta,z_1,\bar z_1,z_2,\bar z_2)$ defined by Eq.~(\ref{Gnsigma}), for both $Y_1$ and $1/Y_2$ small, we find that only $G_\sigma^{(10,11)}$ yields an expression for $\langle\sigma_1\sigma_2\rangle_{-f+}$ consistent with the operator product expansion (\ref{OPEsigsig}) for small $\vert z_1-z_2\vert$ and the expected asymptotic behavior in limits such as $x_1\to \pm\infty$, $y_1\to 0$, $y_2\to 0$. Transforming from the $f+-f$ to the $-f+$ geometry, as in the preceding subsection, leads to the result for $\langle\sigma_1\sigma_2\rangle_{-f+}$ in Eq.~(\ref{sigsigmfp}). Comparing the result with the operator product expansion (\ref{OPEsigsig}) leads to the expression for $\langle\epsilon_1\rangle_{-f+}$ in Eq.~(\ref{epsmfp}). 
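The quadratic small-$\Delta x$ behavior quoted above is easy to verify numerically. The Python sketch below (our own check, not part of the original analysis) takes $\zeta_1=-\zeta_2=-1$, so that the quoted coefficient becomes $y/(1+y^2)^{3/2}$, and builds $\gamma_{1,1}$ directly from the definition (\ref{gammakelldef2}):

```python
import math

# Check that the argument of the square root in <sigma>_{-f+} vanishes
# quadratically at Delta x = 0 with coefficient y/(1+y^2)^{3/2}
# (zeta_1 = -1, zeta_2 = +1, so zeta_21 = 2).

def sqrt_arg(x, y, zeta1=-1.0, zeta2=1.0):
    # gamma_{1,1} = arg[(x - zeta_2 + iy)/(x - zeta_1 + iy)], Eq. (gammakelldef2)
    gamma = math.atan2(y, x - zeta2) - math.atan2(y, x - zeta1)
    return math.cos(gamma / 2) - (2 * y / (zeta2 - zeta1)) * math.sin(gamma / 2)

y, dx = 0.6, 1e-4
coeff_num = sqrt_arg(dx, y) / dx ** 2        # numerical quadratic coefficient
coeff_exact = y / (1 + y ** 2) ** 1.5        # coefficient quoted in the text
assert abs(coeff_num / coeff_exact - 1) < 1e-5
assert abs(sqrt_arg(0.0, y)) < 1e-12         # argument vanishes at Delta x = 0
```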
Proceeding in the same way, we have constructed $\langle\sigma_1\dots\sigma_n\rangle_{-f+}$ for $n=3$ and 4, beginning with Eq.~(\ref{sigdotssigf+-f2}) and the families of 32 functions $G_\sigma^{(12,\alpha)}$ and 64 functions $G_\sigma^{(14,\alpha)}$, respectively. Comparing the results with the operator product expansion (\ref{OPEsigsig}) leads to expressions (\ref{sigepsmfp}) for $\langle\sigma_1\epsilon_2\rangle_{-f+}$ and (\ref{epsepsmfp}) for $\langle\epsilon_1\epsilon_2\rangle_{-f+}$. In terms of the variables $\rho$, $\gamma_{k,\ell}$, and $\chi$ defined in Eqs.~(\ref{rhodef2}), (\ref{gammakelldef1}), and (\ref{chidef}), \begin{eqnarray} &&\langle\sigma_1\rangle_{-f+}=\left({2\over y_1}\right)^{1/8}\sqrt{\cos\left(\textstyle{1\over 2}\gamma_{1,1}\right)-{2y_1\over\zeta_2-\zeta_1}\sin\left(\textstyle{1\over 2}\gamma_{1,1}\right)}\,,\label{sigmfp}\\[3mm] && \langle\epsilon_1\rangle_{-f+}=-{1\over 2y_1}\left(\cos\gamma_{1,1}-{4y_1\over\zeta_2-\zeta_1}\sin\gamma_{1,1}\right)\,,\label{epsmfp} \end{eqnarray} \begin{eqnarray} &&\langle\sigma_1\sigma_2\rangle_{-f+}=\left({1\over 4y_1y_2}\right)^{1/8}\Bigg[{1\over\sqrt{\rho}}\cos\left(\textstyle{1\over 2}\gamma_{1,1}-\textstyle{1\over 2}\gamma_{2,1}\right)+\sqrt{\rho}\cos\left(\textstyle{1\over 2}\gamma_{1,1}+\textstyle{1\over 2}\gamma_{2,1}\right)\nonumber\\ &&\qquad-{2\over\sqrt{\rho}}\,{y_1-y_2\over\zeta_2-\zeta_1}\sin\left(\textstyle{1\over 2}\gamma_{1,1}-\textstyle{1\over 2}\gamma_{2,1}\right)-2\sqrt{\rho}\;{y_1+y_2\over\zeta_2-\zeta_1}\sin\left(\textstyle{1\over 2}\gamma_{1,1}+\textstyle{1\over 2}\gamma_{2,1}\right)\Bigg]^{1/2},\label{sigsigmfp} \end{eqnarray} \begin{eqnarray} &&\langle\sigma_1\epsilon_2\rangle_{-f+}=- {1\over 2}\left({2\over y_1}\right)^{1/8}\left({1\over 2y_2}\right)\,\Bigg[{1\over\rho}\cos\left(\textstyle{1\over 2}\gamma_{1,1}-\gamma_{2,1}\right) \nonumber\\[2mm] &&\qquad+\rho\cos\left(\textstyle{1\over 2}\gamma_{1,1}+\gamma_{2,1}\right)- 
{2\over\rho}\,{y_1-2y_2\over\zeta_2-\zeta_1}\,\sin\left(\textstyle{1\over 2}\gamma_{1,1}-\gamma_{2,1}\right)\nonumber\\[2mm] &&\qquad -2\rho\,{y_1+2y_2\over\zeta_2-\zeta_1}\,\sin\left(\textstyle{1\over 2}\gamma_{1,1}+\gamma_{2,1}\right)\Bigg] \Big/ \sqrt{\cos\left(\textstyle{1\over 2}\gamma_{1,1}\right)-{2y_1\over\zeta_2-\zeta_1}\,\sin\left(\textstyle{1\over 2}\gamma_{1,1}\right)}\,,\label{sigepsmfp} \end{eqnarray} \begin{eqnarray} &&\langle\epsilon_1\epsilon_2\rangle_{-f+}=-{1\over 8y_1y_2}\Bigg[\left(1-{2\over\rho^2}+{16y_1y_2\over(\zeta_2-\zeta_1)^2}\right)\cos(\gamma_{1,1}-\gamma_{2,1})\nonumber\\[2mm] &&\qquad +\left(1-2\rho^2-{16y_1y_2\over(\zeta_2-\zeta_1)^2}\right)\cos(\gamma_{1,1}+\gamma_{2,1})-4\left(1-{2\over\rho^2}\right)\,{y_1-y_2\over\zeta_2-\zeta_1}\sin(\gamma_{1,1}-\gamma_{2,1})\nonumber\\[2mm] &&\qquad -4\left(1-2\rho^2\right)\,{y_1+y_2\over\zeta_2-\zeta_1}\sin(\gamma_{1,1}+\gamma_{2,1})\Bigg]\,.\label{epsepsmfp} \end{eqnarray} In Fig.~\ref{fig1} the one-point averages $\langle\sigma\rangle_{-f+}$ and $\langle\epsilon\rangle_{-f+}$ in Eqs. (\ref{sigmfp}) and (\ref{epsmfp}) are plotted as functions of $x$ for $y={1\over 4}$ and $\zeta_1=-\zeta_2=-1$. The quantities $\langle\sigma\rangle_{+f+}$ and $\langle\epsilon\rangle_{+f+}$ in Eqs. (\ref{sigpfp}) and (\ref{epspfp}) are shown for comparison. The curves for $\langle\sigma\rangle_{-f+}$ and $\langle\sigma\rangle_{+f+}$ look qualitatively as expected, reflecting the odd and even dependence on $x$, respectively, and approaching $\langle\sigma\rangle_+$ or $\langle\sigma\rangle_-$ for $|x|\to\infty$. Since the $-f+$ boundary condition is less conducive to ordering than the $+f+$ boundary condition, the curve for $\langle\epsilon\rangle_{-f+}$ in Fig.~\ref{fig1} lies above the curve for $\langle\epsilon\rangle_{+f+}$. For sufficiently small $|x|$, it even rises above the dashed line representing $\langle\epsilon\rangle_f$. 
Setting $\zeta_1=-\zeta_2=-1$ in Eqs.~(\ref{epsfixedfree}) and (\ref{epsmfp}), we find that $\langle\epsilon\rangle_{-f+}$ exceeds $\langle\epsilon\rangle_f$ for $|x|<({1\over 2}+y^2)^{1/2}$ and has a maximum at $x=0$ with height ratio $\langle\epsilon\rangle_{-f+}/\langle\epsilon\rangle_f=(1+3y^2)/(1+y^2)$. Thus, for $y={1\over 4}$, as in Fig.~\ref{fig1}, the corresponding interval and height ratio are $|x|<{3\over 4}$ and ${19\over 17}$, respectively. For $y\gg 1$, the expressions for the interval and height ratio yield $|x|<y$ and 3. These results are easily checked by noting that $\langle\epsilon\rangle_{-f+}\to\langle\epsilon\rangle_{-+}$ for $y\gg \zeta_2-\zeta_1$ and using the explicit form of $\langle\epsilon\rangle_{-+}$ in Eq.~(\ref{sigandepsab}).\\ \paragraph{$f+-$ boundary conditions} The one- and two-point averages for $-f+$ boundary conditions in Eqs.~(\ref{sigmfp})-(\ref{epsepsmfp}) can be transformed into results for $f+-$ boundaries using the conformal mapping \begin{equation} z'=-{(\zeta_2-\zeta_1)(\zeta_2'-\zeta_1')\over z-\zeta_1}+\zeta_2'\,,\quad z=-{(\zeta_2-\zeta_1)(\zeta_2'-\zeta_1')\over z'-\zeta_2'}+\zeta_1\,,\label{mfptofpm} \end{equation} which maps $z=\zeta_1,\,\zeta_2,\,\infty$ onto $z'=\infty,\,\zeta_1',\,\zeta_2'$, respectively. In terms of the primed variables, \begin{eqnarray} &&\gamma_{1,1}={\rm arg}\left({z_1-\zeta_2\over z_1-\zeta_1}\right)={\rm arg}\left({z_1'-\zeta_1'\over\zeta_2'-\zeta_1'}\right)={\rm arg}\left({|z_1'-\zeta_1'|e^{i\vartheta_1'}\over\zeta_2'-\zeta_1'}\right)=\vartheta_1'\,,\label{gamma11prime}\\[2mm] &&{y_1\over \zeta_2-\zeta_1}={(\zeta_2'-\zeta_1')y_1'\over\left\vert z_1'-\zeta_2'\right\vert^2}\,,\label{yprime}\end{eqnarray} where we have used the definition (\ref{gammakelldef2}) of $\gamma_{1,1}$. 
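The interval and height-ratio statements for $\langle\epsilon\rangle_{-f+}$ above admit a quick numerical check. In the Python sketch below (our own check) we take $\langle\epsilon\rangle_f=1/(2y)$, as implied by Eq.~(\ref{epsfixedfree}), and evaluate Eq.~(\ref{epsmfp}) with $\zeta_1=-\zeta_2=-1$:

```python
import math

# Check <eps>_{-f+}/<eps>_f = (1+3y^2)/(1+y^2) at x = 0 and the crossing
# point |x| = (1/2 + y^2)^{1/2}, for zeta_1 = -1, zeta_2 = +1.

def eps_mfp(x, y, zeta1=-1.0, zeta2=1.0):
    # Eq. (epsmfp): <eps_1>_{-f+} = -(1/2y)[cos g - (4y/zeta_21) sin g]
    g = math.atan2(y, x - zeta2) - math.atan2(y, x - zeta1)   # gamma_{1,1}
    return -(math.cos(g) - (4 * y / (zeta2 - zeta1)) * math.sin(g)) / (2 * y)

y = 0.25
eps_f = 1 / (2 * y)                       # free-boundary value, assumed positive
assert abs(eps_mfp(0.0, y) / eps_f - 19 / 17) < 1e-12
x_cross = math.sqrt(0.5 + y ** 2)         # quoted crossing point, = 3/4 here
assert abs(eps_mfp(x_cross, y) - eps_f) < 1e-12
```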
Beginning with Eqs.~(\ref{sigmfp})-(\ref{epsepsmfp}), using Eqs.~(\ref{gamma11prime}) and (\ref{yprime}) and the transformation property analogous to (\ref{conformtrans}), and dropping primes in the final expression, we obtain \begin{eqnarray} &&\langle\sigma_1\rangle_{f+-}=\left({2\over y_1}\right)^{1/8}\sqrt{\cos{\textstyle\vartheta_1\over 2}-\,{2\,\zeta_{21}y_1\over\left\vert z_1-\zeta_2\right\vert^2}\sin{\textstyle\vartheta_1\over 2}}\,,\label{sigfpm}\\[3mm] && \langle\epsilon_1\rangle_{f+-}=-{1\over 2y_1}\left[\cos\vartheta_1-{4\,\zeta_{21}y_1\over\left\vert z_1-\zeta_2\right\vert^2}\sin\vartheta_1\right]\,,\label{epsfpm} \end{eqnarray} where $\vartheta_1={\rm arg}(z_1-\zeta_1)$, and corresponding results for the two-point averages. \subsubsection{Average stress tensor $\langle T(z)\rangle_{-f+}$} The average stress tensor $\langle T(z)\rangle_{-f+}$ is given by Eq.~(\ref{Tabc}) with $t_{ab}=t_{bc}=t_{f+}={1\over 16}$ and $t_{ac}=t_{-+}={1\over 2}$, which leads to \begin{equation} \langle T(z)\rangle_{-f+}={1/16\over (z-\zeta_1)^2}+{1/16\over (z-\zeta_2)^2}+{3/8\over(z-\zeta_1)(z-\zeta_2)}\,.\label{Tmfp} \end{equation} According to the conformal theory the one-point averages of $\sigma$ and $\epsilon$ for $-f+$ boundary conditions satisfy \cite{level2} \begin{eqnarray} &&\bigg( -{4\over 3}\,{\partial^2\over \partial z^2}+ {1\over z-\bar z}\,{\partial\over \partial\bar z}+{1/16\over (z-\bar z)^2}\nonumber\\ &&\qquad\qquad\qquad +{1\over z-\zeta_1}\,{\partial\over \partial\zeta_1}+ {1\over z-\zeta_2}\,{\partial\over \partial\zeta_2}+\langle T(z)\rangle_{-f+}\bigg)\langle\sigma\rangle_{-f+}=0\,,\label{diffeqsigmfp1}\\ &&\bigg( -{3\over 4}\,{\partial^2\over \partial z^2}+ {1\over z-\bar z}\,{\partial\over \partial\bar z}+{1/2\over (z-\bar z)^2}\nonumber\\ &&\qquad\qquad\qquad +{1\over z-\zeta_1}\,{\partial\over \partial\zeta_1}+ {1\over z-\zeta_2}\,{\partial\over \partial\zeta_2}+\langle T(z)\rangle_{-f+}\bigg)\langle\epsilon\rangle_{-f+}=0\,.\label{diffeqepsmfp1} 
\end{eqnarray} As a check on our results (\ref{sigmfp}) and (\ref{epsmfp}) for the one-point functions, we have confirmed that substituting them into Eqs.~(\ref{diffeqsigmfp1}) and (\ref{diffeqepsmfp1}) and solving for $\langle T(z)\rangle_{-f+}$ reproduces the average stress tensor in Eq.~(\ref{Tmfp}). The two-point functions $\langle\sigma_1\sigma_2\rangle_{-f+}$, $\langle\sigma_1\epsilon_2\rangle_{-f+}$, and $\langle\epsilon_1\epsilon_2\rangle_{-f+}$ satisfy differential equations which are obvious generalizations of Eqs.~(\ref{diffeqsigmfp1}) and (\ref{diffeqepsmfp1}). Here also our results (\ref{sigsigmfp}), (\ref{sigepsmfp}), and (\ref{epsepsmfp}) and the differential equations lead to the average stress tensor (\ref{Tmfp}). \subsection{Casimir interaction of a wedge with the boundary} \label{wedge} Consider a wedge-shaped inclusion pointing perpendicularly toward the $x$ axis in a critical Ising system defined on the upper half $z$ plane. The edges of the wedge form angles $\alpha$ and $\pi - \alpha$, where $0< \alpha < \pi /2$, with the $x$ axis and intersect at the tip of the wedge, which is on the $y$ axis a distance $D$ from the origin. This roughly resembles the geometry of an atomic force microscope. To calculate the Casimir force acting on the wedge, we proceed as in Ref.~\cite{EETWB} and use the conformal transformation $z(w)$ with derivative \begin{eqnarray} \label{wedgetohalf} {dz \over dw}= - {D \over E(\alpha)} \; e^{-i\alpha} w^{-(1+\alpha/\pi)}(w-1)^{2\alpha/\pi} \, , \quad E(\alpha)= 2 \int_{0}^{\pi/2} d\psi (2 \sin \psi)^{2\alpha/\pi} \,, \end{eqnarray} to map the empty upper half $w=u+iv$ plane onto the simply-connected region of the $z=x+iy$ plane between the wedge and the $x$ axis. Under this transformation the segments $-\infty < u < 0$, $0<u<1$, and $1<u$ of the $u$ axis map onto the $x$ axis X, the right boundary WR, and the left boundary WL of the wedge, respectively. According to Ref. 
\cite{EETWB} the wedge experiences the force \begin{eqnarray} \label{force} &&(f_x, \, f_y)/(k_B T) = -({\rm Im}, \, {\rm Re}) \left(\tau^{(T)}+\tau^{(S)} \right)\,,\label{force1}\\[2mm] &&\left[ \tau^{(T)},\,\tau^{(S)} \right]={1\over\pi}\int_Cdw \, {1 \over z'(w)} \,\left[ \langle T(w) \rangle \, , \, - {1 \over 24}\, \{z,w\} \right] \, ,\label{tautau} \end{eqnarray} where the integration path $C$ is along the $u$ axis from $w=0$ to $+\infty$ and passes above the singularity at $w=1$. The quantity $\langle T(w) \rangle$ in Eq.~(\ref{tautau}) is the average stress tensor in the empty upper half $w$ plane, and $\{z,w\} \equiv z'''(w)/z'(w) - (3/2) \left[z''(w)/z'(w)\right]^2$ is the Schwarzian derivative, which equals \begin{eqnarray} \label{wedgeS} \{z,w\} = \left( 1+{\alpha \over \pi} \right) \left[ \left( 1- {\alpha \over \pi}\right) {1 \over 2 w^2} - {2\alpha \over \pi} {1 \over w(w-1)^2} \right] \end{eqnarray} for the mapping (\ref{wedgetohalf}). Unlike $\{z,w\}$, $\langle T(w) \rangle$ depends on the boundary conditions in the wedge geometry, since they determine the boundary conditions on the corresponding three segments of the $u$ axis. We now examine the Casimir force in detail for the boundary conditions $f$, $+$, and $-$, on X, WR, and WL, respectively. This is an especially interesting case, since the Casimir force on the wedge reverses direction at a critical value of the apex angle, as we shall see. According to Eq.~(\ref{Tabc}), with $z$ replaced by $w$, \begin{eqnarray} \label{Twfpm} \langle T(w) \rangle_{f+-}^{(\zeta_1 =0, \, \zeta_2 =1)} = {1/16 \over w^2} + {1/2 \over w(w-1)^2} \, . \end{eqnarray} Substituting this expression for $\langle T(w) \rangle$ in Eq.~(\ref{tautau}), and using the relation \begin{eqnarray} \label{integ} \int\limits_{0}^{+\infty} du (u-a+i0)^{-\nu} \, u^{\mu-1} \, = \, a^{\mu-\nu} i^{2(\mu-\nu)} B(\mu,\nu-\mu) \, , \quad a>0 \, , \end{eqnarray} where $B$ is the beta function, corresponding to formula 3.194.3 in Ref. 
\cite{G+R}, we obtain \begin{eqnarray} \label{tau,tau} &&\left[\tau^{(S)} \, , \; \tau^{(T)} \right] = - {1 \over D} \, E(\alpha) \, G(\alpha) \left[ -\left(1+{\alpha \over \pi} \right)^2 \, , \; 3\left( 1-2{\alpha \over \pi} \right) \right] \,,\\ &&G(\alpha)={\Gamma^2 (\alpha/\pi) \over 48 \pi \Gamma(2\alpha/\pi)} \, {1 \over 1+(2\alpha/\pi)}\,. \end{eqnarray} Together with Eq. (\ref{force}), this implies $f_x =0$ and \begin{eqnarray} \label{force2} {f_y \over k_B T} = {1 \over D} \, E(\alpha) \, G(\alpha) \, \left[ 2-8{\alpha \over \pi} - \left({\alpha \over \pi} \right)^2 \, \right] \, . \end{eqnarray} Rewriting the square bracket in Eq.(\ref{force2}) as \begin{eqnarray} \label{alpha0} [ \;]=(\alpha_0 -\alpha)(\alpha_0 +\alpha +8 \pi)/\pi^2 \, , \quad \alpha_0 = (3 \sqrt{2} -4) \pi = 0.243 \pi =43.7^{\,\circ}\, , \end{eqnarray} and noting that $E(\alpha)$ and $G(\alpha)$ are positive, we find that $f_y$ is positive for $0<\alpha<\alpha_0$ and negative for $\alpha_0<\alpha<\pi/2 $, corresponding to repulsion and attraction, respectively, of the wedge by the boundary. In terms of the apex angle $\beta=\pi-2\alpha$, the force is attractive for $0<\beta<\beta_0$ and repulsive for $\beta_0<\beta<\pi$, where $\beta_0=(9-6\sqrt{2})\pi=92.6^{\,\circ}$. This behavior is consistent with the following picture: For small $\beta$, the wedge is almost a needle, and the dominant force is between its tip and the $f$ boundary. Since the junction of the $+$ and $-$ boundaries at the tip and the $f$ boundary both favor disorder, the overall force is attractive. For $\beta$ near $\pi$, on the other hand, the $+$ and $-$ boundaries of the wedge lie along the positive and negative $x$ axes, respectively, both of which have boundary condition $f$. Since the $f$ boundary repels both $+$ and $-$ boundaries, the overall force on the wedge is repulsive. In the limit of a $-+$ needle, $\alpha = \pi /2$, $\tau^{(T)}=0$, $E=4$, $G= 1/96$, and $f_y/(k_B T)=-3/(32 D)$. 
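The numbers in this paragraph are easy to reproduce numerically. The Python sketch below (our own check, not part of the original analysis) evaluates $E(\alpha)$ by midpoint quadrature and confirms the needle-limit values $E=4$, $G=1/96$, and $f_yD/(k_BT)=-3/32$, as well as the vanishing of the square bracket in Eq.~(\ref{force2}) at $\alpha_0=(3\sqrt 2-4)\pi$:

```python
import math

# Verify the needle limit alpha = pi/2 and the sign-change angle alpha_0.

def E(alpha, n=20000):
    # E(alpha) = 2 * int_0^{pi/2} (2 sin psi)^(2 alpha/pi) dpsi, midpoint rule
    h = (math.pi / 2) / n
    return 2 * h * sum((2 * math.sin((k + 0.5) * h)) ** (2 * alpha / math.pi)
                       for k in range(n))

def G(alpha):
    a = alpha / math.pi
    return math.gamma(a) ** 2 / (48 * math.pi * math.gamma(2 * a) * (1 + 2 * a))

def bracket(alpha):
    a = alpha / math.pi
    return 2 - 8 * a - a ** 2            # square bracket in Eq. (force2)

alpha = math.pi / 2
fyD = E(alpha) * G(alpha) * bracket(alpha)   # = f_y D / (k_B T)
assert abs(E(alpha) - 4) < 1e-6
assert abs(G(alpha) - 1 / 96) < 1e-12
assert abs(fyD + 3 / 32) < 1e-6
alpha0 = (3 * math.sqrt(2) - 4) * math.pi
assert abs(bracket(alpha0)) < 1e-12
```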
This $f_y$ is the same as for an $aa$ needle in the upper half $z$ plane with a uniform $a$ boundary along the $x$ axis \cite{EETWB}. In the latter case the empty upper half $w$ plane also has uniform boundary condition $a$, so that $\langle T(w) \rangle$ vanishes. \section{Boundary-operator expansions in systems\\ with mixed boundary conditions}\label{secMBOE} \subsection{Boundary-operator expansion away from switching points} \label{away} Boundary-operator expansions have been studied extensively in semi-infinite critical systems with uniform boundary conditions \cite{Diehl,CardyLewellen,EEStap}. In the expansion of a primary operator $\phi(x,y)$, with a distance $y$ from the boundary much smaller than the other lengths that characterize the system, $\phi(x,y)$ is expressed as a series of $y$-independent boundary operators with increasing scaling dimension, multiplied by appropriate powers of $y$. For the Ising model defined on the upper half plane with uniform boundary condition $h$ on the $x$ axis and for the pairs $(\phi, h)=(\sigma,+),\,(\sigma,-),\,(\epsilon,+),\,(\epsilon,-),\,(\epsilon,f)$, the leading boundary operator is the stress tensor $T(z)$ evaluated on the $x$ axis. To lowest order the expansion reads \begin{eqnarray} \phi(x, y)-\langle \phi \rangle_h \to \mu_h^{(\phi)}\, y^{2-x_{\phi}} T(x) \, , \quad y \to 0 \, , \label{BOE} \end{eqnarray} where $x_\phi=2\Delta_\phi$ is the scaling dimension of $\phi$. The averages $\langle \phi \rangle_h$ in Eq. (\ref{BOE}) for $\phi$ equal to $\sigma$ and $\epsilon$ are given in Eqs.~(\ref{sigfixed}) and (\ref{epsfixedfree}), and the universal amplitudes $\mu_h^{(\phi)}$ are \begin{eqnarray} \label{mu} \mu_{+}^{(\sigma)}=-\mu_{-}^{(\sigma)}=-2^{1/8}, \quad \mu_{+}^{(\epsilon)}=\mu_{-}^{(\epsilon)}=-\mu_f^{(\epsilon)}=4 \, . \end{eqnarray} The exponent $2-x_{\phi}$ of $y$ in the expansion arises from the scaling dimension $x_T=2$ of $T$. 
For $(\phi, h)= (\sigma , f)$, the leading boundary operator in the expansion (\ref{BOE}) cannot be the stress tensor, as follows from a symmetry argument \cite{symarg}, but has scaling dimension ${1\over 2}$, implying the power $y^{1/2 - x_{\sigma}}$. Although $T$ is not a primary operator, the expansion (\ref{BOE}) also holds for $\phi(x,y)=T(z)$, with $\langle T\rangle_h=0$, $x_T=2$, and $\mu_h^{(T)}=1$. Due to the analyticity properties of $T(z)$, its expansion contains the powers $y^0$, $y^1$, $y^2$, etc. In averages $\langle T(z)\phi_1\phi_2\dots\rangle_{ab...}$ of $T(z)$ with primary operators, the terms in the expansion can be derived explicitly from the conformal Ward identity, e.g. Eq.~(\ref{GGWI}). {The boundary-operator expansion (\ref{BOE}) not only applies to the two-dimensional Ising model, but appears to hold quite generally in semi-infinite critical systems, except in the case of a free boundary with $\phi$ equal to the order parameter. This was assumed in Ref.~\cite{Cardydistantwall}, in a study of critical behavior in the parallel-plate geometry. The asymptotic behavior (\ref{BOE}) has been confirmed in spatial dimension $d=4-\epsilon$ for the $n$-vector model with $f$ boundary \cite{EEKD,McAvOs,DDE} and for the Ising model with $h=+$ boundary \cite{EEStap}. For $d>2$, $T(x)$ is replaced by the perpendicular component $T_{yy}$ of the Cartesian stress tensor at the boundary. The expansion is also consistent with a general argument \cite{TWBHWD} that the leading boundary operator for the Ising model in $d$ spatial dimensions with $h=+$ and $\phi=\sigma$ or $\epsilon$ has scaling dimension $d$. Finally, the expansion agrees with the exact results of Ref.~\cite{Cardyscp} for $\langle\epsilon_1\epsilon_2\rangle_f=\langle\epsilon_1\epsilon_2\rangle_+$ and of Ref.~\cite{TWBX} for $\langle\sigma\rangle_{ab}$ and $\langle\epsilon\rangle_{ab}$ in the two-dimensional Ising and $Q$-state Potts models.
In the two-dimensional models \begin{equation} \label{relationmuhphi} \mu_h^{(\phi)}=-(4x_{\phi}/\hat{c})\,y^{x_\phi}\langle\phi\rangle_h \end{equation} for primary operators, as shown in footnote \cite{EEJune7}.} Here $\hat{c}$ is the central charge in the conformal classification \cite{BPZ,CardyD-L}, which equals 1/2 for the Ising model. The boundary-operator expansion (\ref{BOE}), with $\langle\phi\rangle_h$ on the left-hand side evaluated for a uniform boundary $h$, has a local character and also holds for mixed $ab..h..$ boundary conditions if, in the small $y$ limit, $\phi(x,y)$ is positioned closer to an interior point of the segment with boundary condition $h$ than its endpoints. In terms of the position $(x,y)$ of $\phi$ and the endpoints $\zeta_j$, $\zeta_{j+1}$ of the segment, this corresponds to $y\ll |x-\zeta_j|$ and $y\ll |x-\zeta_{j+1}|$. For the boundary condition $ab..h..$, averaging expansion (\ref{BOE}) leads to \begin{equation} \langle\phi(x, y)\rangle_{ab..h..}-\langle \phi \rangle_h \to \mu_h^{(\phi)} \, y^{2-x_{\phi}} \,\langle T(x)\rangle_{ab..h..} \, , \quad y \to 0 \, . \label{avphiabc} \end{equation} We have verified that the exact one-point averages of $\sigma$, $\epsilon$ and $T$ with mixed boundary conditions, given in Ref.~\cite{TWBX} and in Secs.~\ref{pfpetc} and \ref{mfp} all have this asymptotic behavior. Boundary operator expansions also provide information on the asymptotic behavior of correlation functions. Consider, for example, the cumulant of $\phi(x,y)$ and a distant operator $\Phi(X,Y)$. 
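As a brief numerical aside, relation (\ref{relationmuhphi}) with $\hat c = 1/2$ indeed reproduces the amplitudes (\ref{mu}). A minimal sketch, where the sign $\langle\epsilon\rangle_f=+(2y)^{-1}$, opposite to $\langle\epsilon\rangle_+$, is taken as an input consistent with $\mu_f^{(\epsilon)}=-\mu_+^{(\epsilon)}$:

```python
chat = 0.5  # central charge of the Ising model

def mu_amp(x_phi, avg_at_y, y):
    # Eq. (relationmuhphi): mu_h^(phi) = -(4 x_phi / chat) y^x_phi <phi>_h
    return -(4 * x_phi / chat) * y**x_phi * avg_at_y

y = 0.37  # arbitrary distance; the amplitudes must come out y-independent
mu_sigma_plus = mu_amp(1 / 8, (2 / y)**0.125, y)  # <sigma>_+ = (2/y)^(1/8)
mu_eps_plus   = mu_amp(1, -1 / (2 * y), y)        # <eps>_+ = -(2y)^(-1)
mu_eps_free   = mu_amp(1, +1 / (2 * y), y)        # <eps>_f = +(2y)^(-1) (assumed sign)
```

The three outputs reproduce $-2^{1/8}$, $4$, and $-4$ of Eq.~(\ref{mu}), independently of the test value of $y$.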
According to expansions (\ref{BOE}) and (\ref{avphiabc}), \begin{eqnarray} \label{twoaway} && \langle\phi(x,y)\Phi(X,Y)\rangle_{ab..h..}-\langle \phi(x,y) \rangle_{ab..h..} \langle\Phi(X,Y)\rangle_{ab..h..}\nonumber \\ && \qquad \to \mu_h^{(\phi)}\, y^{2-x_{\phi}}\,\left[ \langle T(x)\Phi(X,Y)\rangle_{ab..h..} - \langle T(x) \rangle_{ab..h..} \langle \Phi(X,Y)\rangle_{ab..h..} \right]\label{avphiPhiabc} \end{eqnarray} for $y$ much smaller than $|x-\zeta_j|$, $|x-\zeta_{j+1}|$, and $\left[(x-X)^2 + (y-Y)^2\right]^{1/2}$. The right-hand side of Eq.~(\ref{avphiPhiabc}) can be expressed in terms of $\langle \Phi(X,Y)\rangle_{ab..h..}$ and its derivatives using the conformal Ward identity (\ref{GGWI}). The asymptotic form (\ref{avphiPhiabc}) is consistent with all the exact expressions for the two-point functions $\langle\sigma_1\sigma_2\rangle$, $\langle\epsilon_1\epsilon_2\rangle$, and $\langle\sigma_1\epsilon_2\rangle$ with mixed boundary conditions given in Ref. \cite{TWBX} and in this paper. For $\langle\sigma_1\sigma_2\rangle_{+-}$ this is shown in some detail in Appendix \ref{appendixcheckaway}. \subsection{Boundary-operator expansion at a switching point} Now we turn to operator expansions in the contrasting case in which $\phi(x,y)$ is positioned much closer to one of the switching points, say $\zeta_1$, than to the other switching points $\{ \zeta \} \equiv\zeta_2, \zeta_3,... $ and, when considering multipoint averages, to other operators $\Phi_1(X_1,Y_1)$, $\Phi_2(X_2,Y_2)$, ... In terms of the complex coordinates $z=x+iy$ and $Z=X+iY$, this corresponds to $|z-\zeta_1| \ll |z-\zeta_2|,\,...\, , |z-Z_1|,\,...$ Below, in discussing the order of terms in expansions, we use the notation $l$ and $L$ for small and large lengths, such as $z-\zeta_1$ and $z-\zeta_2$, respectively. 
In leading order the expansion in terms of boundary operators at the switching point $\zeta_1$ has the form \begin{eqnarray} \label{MBOE} \phi(x,y)-\left\langle \phi(x,y) \right\rangle _{ab}^{(\zeta_1)} \to F_{ab}^{(\phi)}(x-\zeta_1 ,y) \, \Upsilon({\zeta_1})\,. \end{eqnarray} Here $\phi$ can be either $\sigma, \epsilon$, or $T$, and $a$ and $b$ are the boundary conditions of the segments that extend from $\zeta_1$ to the left and right, respectively. On the right-hand side of Eq.~(\ref{MBOE}) only the contribution of the boundary operator $\Upsilon(\zeta_1)$ of lowest scaling dimension is shown. Like the factor $\mu_h^{(\phi)} \, y^{2-x_{\phi}}$ in Eq. (\ref{BOE}), $F_{ab}^{(\phi)}$ in (\ref{MBOE}) only depends on local properties. It depends on the boundary conditions $a, b$ of the two segments with switching point $\zeta_1$ but is independent of any other segments and switching points. According to Eq.~(\ref{MBOE}), $\langle \Upsilon (\zeta_1) \rangle_{ab}^{(\zeta_1)} =0$ if the entire boundary consists of one $a$ segment and one $b$ segment. As shown in Appendix~\ref{appendixderivation}, for all pairs of universality classes $ab$ the scaling dimension of $\Upsilon$ equals 1, not just for the Ising model, but for other two-dimensional critical systems as well. Thus, the scaling dimension of $F_{ab}^{(\phi)}$ is $x_{\phi} - 1$. The analyticity properties and scaling dimension $x_T = 2$ of the stress tensor $T(z)$ imply that $F_{ab}^{(T)}$ is proportional to $(z-\zeta_1)^{-1}$, and we normalize $\Upsilon({\zeta_1})$ so that \begin{eqnarray} \label{FabT} F_{ab}^{(T)}(x-\zeta_1,y) = {1 \over z-\zeta_1}\ . \end{eqnarray} In Appendix~\ref{appendixderivation} we show that \begin{equation} F_{ab}^{(\phi)}(x-\zeta_1,y)=(2t_{ab})^{-1}\,|z-\zeta_1|^2\,\partial_{\zeta_1}\langle\phi\rangle_{ab}\label{generalresult2text} \end{equation} for primary operators.
Another derivation of this result, based on the conformal Ward identity, is discussed below Eq.~(\ref{Kexpand2}). We emphasize that expressions (\ref{FabT}) and (\ref{generalresult2text}) are not restricted to the Ising model, but are expected to also hold for other two-dimensional critical systems. According to Eq. (\ref{MBOE}), the change in $\langle\phi(x,y)\rangle$ near the switching point $\zeta_1$ induced by distant switching points $\{\zeta\}=\zeta_2,\,\zeta_3,...$ has the form \begin{eqnarray} \label{MBOEone} \left\langle \phi(x,y) \right\rangle_{ab\{ c \}}^{(\zeta_1,\{ \zeta \})}-\left\langle \phi(x,y) \right\rangle _{ab}^{(\zeta_1)} \to F_{ab}^{(\phi)}(x-\zeta_1 , y) \, \left\langle\Upsilon({\zeta_1})\right\rangle_{ab\{ c \}}^{(\zeta_1,\{ \zeta \})} \, . \end{eqnarray} This complements the change (\ref{avphiabc}) in $\langle\phi(x,y)\rangle$ near {\em interior} points of a boundary segment due to distant switching points. In terms of the small and large lengths $l$ and $L$, the leading contribution $\propto l^{- x_{\phi}}$ of the first term on the left-hand side of Eq.~(\ref{MBOEone}) is cancelled by the second term on the left, and the right-hand side of (\ref{MBOEone}), $\propto (l/L) \times l^{- x_{\phi}}$, represents the next-to-leading contribution. On the right-hand side of Eq. (\ref{MBOEone}) the dependence on the distant switching points $\{ \zeta \}$ and the universality classes $\{ c \}$ of the corresponding segments is entirely contained in the second factor $\left\langle\Upsilon({\zeta_1})\right\rangle _{ab\{ c \}}$, which is independent of $\phi$. The dependence on $\phi$ comes from the first factor $F_{ab}^{(\phi)}$, shown in Eqs.~(\ref{FabT}) and (\ref{generalresult2text}), which, as already mentioned, is independent of the distant switching points and their universality classes. 
Explicit expressions for $\langle\Upsilon({\zeta_1})\rangle_{ab\{ c \}}$ follow readily from Eqs.~(\ref{FabT}) and (\ref{MBOEone}), with $\phi =T$, which imply \begin{equation} \left\langle T(z)\right\rangle_{ab\{ c \}}^{(\zeta_1,\{ \zeta \})}-\left\langle T(z) \right\rangle _{ab}^{(\zeta_1)} \to {1 \over z-\zeta_1} \, \left\langle\Upsilon({\zeta_1})\right\rangle_{ab\{ c \}}^{(\zeta_1,\{ \zeta \})} \, . \label{MBOET} \end{equation} Inserting the stress tensors (\ref{Tab}) and (\ref{Tabc}) for $ab$ and $abc$ boundaries on the left-hand side of (\ref{MBOET}) leads to \begin{equation} \langle \Upsilon ({\zeta_1}) \rangle_{abc}^{(\zeta_1, \zeta_2)} = {t_{ab}+t_{bc}-t_{ac}\over \zeta_2 -\zeta_1} \, ,\label{avupsaba} \end{equation} which, like Eq.~(\ref{Tabc}), holds for $c\neq a$ and $c= a$, with $t_{aa}=0$ in the latter case. Similarly, from the stress tensors (\ref{T+f+etc}) and (\ref{T+-+etc}) for the Ising model with alternating $+f+f+\dots$ and $+-+-+\dots$ boundary conditions, we obtain \begin{equation} \begin{array}{l} \langle\Upsilon({\zeta_1})\rangle^{(\zeta_1,\dots,\zeta_m)}_{+f+f+...}=\partial_{\zeta_1} {\rm ln} \, G_\sigma^{(m,1)}(\zeta_1,\dots,\zeta_m)\,,\\[1mm] \langle\Upsilon({\zeta_1})\rangle^{(\zeta_1,\dots,\zeta_m)}_{+-+-+...}=\partial_{\zeta_1} {\rm ln} \, G_\epsilon^{(m)}(\zeta_1,\dots,\zeta_m)\,. \end{array}\label{avups+f+f+} \end{equation} In Appendix~\ref{relationUpsfreeenergy} we show that the quantity $\langle\Upsilon(\zeta_j)\rangle_{abc...}$ has a direct physical interpretation. It can be expressed as a free-energy derivative and represents a fluctuation-induced or Casimir force on switching point $\zeta_j$. In Appendix~\ref{appendixUpsUps} we show that multipoint averages of the boundary operator $\Upsilon$, such as $\langle\Upsilon(\zeta_1)\Upsilon(\zeta_2)\rangle_{abc...\,}$, are also determined by the operator expansion at a switching point. 
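Equation~(\ref{avupsaba}) is easy to confirm numerically. The sketch below assumes the standard double-pole form of $\langle T\rangle_{abc}$ of Eq.~(\ref{Tabc}) (double poles at $\zeta_1$, $\zeta_2$ plus a cross term), and uses the Ising values $t_{+f}=1/16$, $t_{+-}=1/2$; the value $t_{f-}=t_{+f}$ is an additional symmetry assumption here. With these inputs it also reproduces the $\langle T(w)\rangle_{f+-}$ quoted for the wedge geometry:

```python
def T_ab(z, t_ab, z1):
    # <T(z)>_ab for a single switching point z1: t_ab / (z - z1)^2
    return t_ab / (z - z1)**2

def T_abc(z, t_ab, t_bc, t_ac, z1, z2):
    # assumed standard form of Eq. (Tabc): double poles at z1, z2 plus a cross term
    return (t_ab / (z - z1)**2 + t_bc / (z - z2)**2
            + (t_ac - t_ab - t_bc) / ((z - z1) * (z - z2)))

t_fp, t_pm, t_fm = 1 / 16, 1 / 2, 1 / 16  # t_{f+}, t_{+-}, t_{f-} (last assumed = t_{f+})
z1, z2 = 0.0, 1.0

# residue of <T>_abc - <T>_ab at z = z1, extracted numerically near the pole
z = z1 + 1e-7 * (1 + 0.5j)
residue = (z - z1) * (T_abc(z, t_fp, t_pm, t_fm, z1, z2) - T_ab(z, t_fp, z1))
upsilon = (t_fp + t_pm - t_fm) / (z2 - z1)  # Eq. (avupsaba) for the f+- boundary

# the same parameters reproduce the f+- stress tensor used in the wedge calculation
w = 0.4 + 0.3j
T_wedge = (1 / 16) / w**2 + (1 / 2) / (w * (w - 1)**2)
```

The numerically extracted residue matches $\langle\Upsilon(\zeta_1)\rangle_{f+-}=1/2$ for $\zeta_2-\zeta_1=1$.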
For $(\phi,h) \neq (\sigma,f)$, the asymptotic form of $F_{ab}^{(\phi)}$ near an interior point $z=x$ of the $a$ or $b$ interval, i.e., for $y\to 0$, $x\neq \zeta_1$, follows from Eq.~(\ref{MBOEone}), on using Eq. (\ref{BOE}) to express both terms on the left-hand side in terms of the stress tensor. This leads to \begin{eqnarray}\label{MBOE1} \mu_h^{(\phi)}\, y^{2-x_{\phi}} \, \Big[\big\langle T(x) \big\rangle_{ab\{ c \}}^{(\zeta_1,\{ \zeta \})}-\big\langle T(x) \big\rangle _{ab}^{(\zeta_1)} \Big]\to F_{ab}^{(\phi)}(x-\zeta_1 , y) \, \big\langle\Upsilon({\zeta_1})\big\rangle_{ab\{ c \}}^{(\zeta_1,\{ \zeta \})} \, . \end{eqnarray} Making the substitution (\ref{MBOET}), with $z=x$, in Eq.~(\ref{MBOE1}), we obtain \begin{eqnarray} \label{Fhom} F_{ab}^{(\phi)}(x-\zeta_1, y) \to \mu_h^{(\phi)} \,{y^{2-x_{\phi}}\over x-\zeta_1}\, ; \quad y\to 0\,,\; x \neq \zeta_1\,.\label{Fabasymp} \end{eqnarray} This result holds for $\phi=\sigma$, $\epsilon$ and $T$, with $h=a$ for $x < \zeta_1$ and $h=b$ for $x > \zeta_1$, provided $(\phi,h) \neq (\sigma,f)$. The amplitudes $\mu_h^{(\phi)}$ are given in and just below Eq. (\ref{mu}). The functions $F_{ab}^{(\phi)}$ for $\phi=\sigma$ and $\epsilon$ are determined explicitly for the Ising model in Subsec.~\ref{abboundaries} (see Eqs.~(\ref{dsigandepsabdzeta1}) and (\ref{Fabphi2})) and do indeed have the asymptotic behavior (\ref{Fabasymp}), as does $F_{ab}^{(T)}$ in Eq. (\ref{FabT}), with $x_T=2$ and $\mu_h^{(T)}=1$. The operator expansion (\ref{MBOE}) also yields asymptotic information on averages $\langle \phi\, \Phi_1 \Phi_2 ... \rangle_{ab\{ c \}}^{(\zeta_1,\{ \zeta \})}$ of products of an operator $\phi$ positioned close to the switching point $\zeta_1$ and distant operators $\Phi_1,\Phi_2,..$. We study this in detail for two-point averages, where Eq.
(\ref{MBOE}) leads to \begin{eqnarray} \label{MBOEtwo} &&\langle \phi(x,y) \Phi(X,Y) \rangle _{ab\{ c \}}^{(\zeta_1,\{ \zeta \})}-\langle \phi(x,y) \rangle_{ab}^{(\zeta_1)} \langle \Phi(X,Y) \rangle_{ab\{ c \}}^{(\zeta_1,\{ \zeta \})} \nonumber \\ && \to F_{ab}^{(\phi)}(x-\zeta_1 , y)\,\left\langle \Upsilon(\zeta_1) \Phi(X,Y) \right\rangle_{ab\{ c \}}^{(\zeta_1,\{ \zeta \})} \, . \end{eqnarray} In our further analysis we decompose the average on the right-hand side of Eq. (\ref{MBOEtwo}) according to \begin{eqnarray} \label{UpsD} \langle \Upsilon(\zeta_1) \Phi(X,Y) \rangle_{ab\{c\}}^{(\zeta_1,\{ \zeta \})} = \bigl[ \langle \Upsilon(\zeta_1) \rangle_{ab\{c\}}^{(\zeta_1,\{ \zeta \})} + \partial_{\zeta_1} \bigr] \langle \Phi(X,Y) \rangle_{ab\{c\}}^{(\zeta_1,\{ \zeta \})} \, , \end{eqnarray} where the derivative $\partial_{\zeta_1}$ is at fixed $X,Y,\{ \zeta \}$. This relation is consistent with the exact results for one and two-point averages with mixed boundary conditions in Refs.~\cite{TWBX,TWBG2} and in Secs.~\ref{pfpetc} and \ref{mfp} of this paper. In addition, the scaling dimension 1 of $\Upsilon$ allows for the first derivative of a length, and, due to locality, only $\zeta_1$ qualifies. Finally, the term with derivative $\partial_{\zeta_1}$ and with a prefactor of 1 in Eq.~(\ref{UpsD}) follows from a conformal Ward identity for $\Phi$, as we discuss below Eq. (\ref{GGWI}). For convenience we often omit the superscripts $(\zeta_1)$ and $(\zeta_1,\{\zeta\})$ below. Substituting Eq.~(\ref{UpsD}) into Eq.~(\ref{MBOEtwo}) leads to \begin{eqnarray} \label{explicitnextlead} \langle \phi \Phi\rangle_{ab\{ c \}} -\langle \phi \rangle_{ab} \langle \Phi \rangle_{ab\{ c \}} \to F_{ab}^{(\phi)}\times \left[\langle \Upsilon\rangle_{ab\{ c \}}+\partial_{\zeta_1} \right]\langle \Phi \rangle_{ab\{ c \}}\,.
\end{eqnarray} In analogy with Eq.~(\ref{MBOEone}), the leading contribution, $\propto l^{- x_\phi} L^{-x_\Phi}$, of the first term on the left-hand side of Eq.~(\ref{explicitnextlead}) is cancelled by the second term on the left, and the right-hand side, $\propto (l/L) \times l^{- x_\phi}L^{-x_\Phi}$, represents the next-to-leading contribution. Combining Eqs.~(\ref{MBOEone}) and (\ref{explicitnextlead}), we obtain \begin{eqnarray} \label{kumu} \langle\phi(x,y)\Phi(X,Y)\rangle_{ab\{ c \}}^{\rm cum}& \equiv & \langle \phi \Phi\rangle _{ab\{ c \}} - \big\langle \phi \big\rangle_{ab\{ c \}} \langle \Phi \rangle_{ab\{ c \}}\nonumber\\ &\to& F_{ab}^{(\phi)}(x-\zeta_1,y)\,\partial_{\zeta_1}\langle\Phi(X,Y)\rangle_{ab\{ c \}} \end{eqnarray} for the asymptotic form of the cumulant of $\phi$ and $\Phi$. On substituting Eqs.~(\ref{generalresult2text}) and (\ref{FabT}), Eq.~(\ref{kumu}) takes the form \begin{equation} \langle\phi\Phi\rangle_{ab\{ c \}}^{\rm cum} \to\left\{ \begin{array}{l} (2t_{ab})^{-1}\,|z-\zeta_1|^2\;\partial_{\zeta_1}\langle\phi\rangle_{ab}\;\partial_{\zeta_1}\langle\Phi\rangle_{ab\{ c \}}\,,\\ (z-\zeta_1)^{-1}\,\partial_{\zeta_1}\langle\Phi\rangle_{ab\{ c \}}\,,\end{array}\right.\; \begin{array}{l} \phi=\sigma\;{\rm or}\;\epsilon\,, \\ \phi=T\,,\end{array}\label{kumunew} \end{equation} in terms of derivatives of one-point averages. As a consequence, ratios $\langle\phi\Phi_1 \rangle_{ab\{ c \}}^{\rm cum} / \langle\phi\Phi_2 \rangle_{ab\{ c \}}^{\rm cum}$ of cumulants with different $\Phi$'s but the same $\phi$ are independent of $\phi$, and vice versa. As a check on Eqs.~(\ref{kumu}) and (\ref{kumunew}), we recall the conformal Ward identity \cite{Cardyscp,TWBX} \begin{eqnarray} \label{GGWI} &&\left\langle T(z) \Phi(X,Y)\right\rangle_{ab\{c\}}^{(\zeta_1, \{\zeta\})} -\left\langle T(z) \right\rangle_{ab\{c\}}^{(\zeta_1,\{\zeta\})} \langle \Phi(X,Y)\rangle_{ab\{c\}}^{(\zeta_1,\{\zeta\})} = \left[ {\Delta_\Phi \over (z-Z)^2}\right.
\nonumber\\ &&\quad \left.+ {1 \over z -Z}\,{\partial \over \partial Z} + {\Delta_\Phi\over (z-\bar{Z})^2} + {1 \over z-\bar{Z}} \,{\partial \over \partial \bar{Z}} + \sum _{j}{1 \over z-\zeta_j} {\partial \over \partial \zeta_j} \right] \langle \Phi(X,Y) \rangle_{ab\{c\}}^{(\zeta_1,\{\zeta\})}\,,\quad \end{eqnarray} where $\Phi(X,Y)$ is a primary operator. In the limit in which $z$ is much closer to $\zeta_1$ than to any other of the switching points and to $Z$, all the terms on the right-hand side of Eq.~(\ref{GGWI}) are of order $L^{-2-x_\Phi}$ except the term $(z -\zeta_1)^{-1}\partial_{\zeta_1} \langle \Phi(X,Y) \rangle$, which is of order $(L/l) L^{-2-x_\Phi}$, and thus the leading contribution. Making use of Eq.~(\ref{FabT}), we see that the leading contribution is the same as the asymptotic forms of the cumulant $\langle\phi\Phi\rangle_{ab\{c\}}^{\rm cum}$ in Eqs.~(\ref{kumu}) and (\ref{kumunew}) for $\phi=T$. This validates the prediction of the operator expansion for $\phi=T$ and for $\Phi$ equal to a primary operator, such as $\sigma$ or $\epsilon$ in the Ising model. In the remainder of this section we specialize to $ab$ and $abc$ boundaries. In Subsec.~\ref{abboundaries} the consistency of the asymptotic forms (\ref{kumu}) and (\ref{kumunew}) with Ward identities and with exact results for $\langle\phi\Phi\rangle_{ab}^{\rm cum}$ in the Ising model is checked. Similar consistency checks are carried out for $abc$ boundaries in Subsec.~\ref{abcboundaries}. \subsection{$ab$ boundaries}\label{abboundaries} In this subsection we first confirm, with the help of Ward identities, that the asymptotic form of the two-point cumulant in Eqs.~(\ref{kumu}) and (\ref{kumunew}) holds if either $\phi$ or $\Phi$ or both equal $T$.
Then we derive the functions $F_{ab}^{(\phi)}$, $\partial_{\zeta_1}\langle\phi\rangle_{ab}\,$ and $\partial_{\zeta_1}\langle\Phi\rangle_{ab}\,$ on the right-hand sides of Eqs.~(\ref{kumu}) and (\ref{kumunew}) explicitly for the Ising model and confirm the consistency of the predicted asymptotic behavior with exact results for the two-point averages. \subsubsection{Confirmation of the asymptotic form (\ref{kumu}) for $\phi$ or $\Phi$ or both equal to $T$} Beginning with the Ward identity (\ref{GGWI}), we already showed that Eq.~(\ref{kumu}) holds for $\phi=T$ and $\Phi$ equal to a primary operator. It also holds for $\phi=\Phi=T$, since substituting Eq.~(\ref{FabT}) and the derivative \begin{equation} \partial_{\zeta_1}\langle T(Z)\rangle_{ab}={2t_{ab}\over (Z-\zeta_1)^3} \label{dTabdzeta1} \end{equation} in Eq.~(\ref{kumu}) leads to \begin{equation} \langle T(z)T(Z)\rangle_{ab}^{\rm cum}\to{2t_{ab}\over(z-\zeta_1)(Z-\zeta_1)^3}\,,\label{TTcumasymp} \end{equation} which agrees with the exact result for $\langle T(z)T(Z)\rangle_{ab}^{\rm cum}$ discussed in Appendix~\ref{appendixTT} and shown in Eq.~(\ref{TTcumab}), in the limit that $z$ is much closer to $\zeta_1$ than to $Z$. We now consider the cumulant $\langle\phi T\rangle_{ab}^{\rm cum}$ for $\phi$ equal to a primary operator and show its consistency with Eq.~(\ref{kumu}).
The starting point is the conformal Ward identity \begin{eqnarray} \label{GGWII} &&\left\langle T(Z) \phi(x,y)\right\rangle_{ab}^{(\zeta_1)}-\left\langle T(Z) \right\rangle_{ab}^{(\zeta_1)} \left\langle \phi(x,y)\right\rangle_{ab}^{(\zeta_1)} = \left[ {\Delta_\phi \over (Z-z)^2} \right.\nonumber\\ &&\qquad \left.+ {1 \over Z-z}\,{\partial \over \partial z} + {\Delta_\phi\over (Z-\bar{z})^2} + {1 \over Z-\bar{z}} \,{\partial \over \partial \bar{z}} + {1\over Z-\zeta_1}\,{\partial \over \partial \zeta_1} \right] \left\langle \phi(x,y) \right\rangle_{ab}^{(\zeta_1)}\,,\quad \end{eqnarray} which is the same as Eq.~(\ref{GGWI}), except that $\phi$ and $\Phi$, $z$ and $Z$, and $\bar{z}$ and $\bar{Z}$ have been exchanged, and we specialize to an $ab$ boundary with a single switching point $\zeta_1$. Noting that the left-hand side of Eq.~(\ref{GGWII}) is $\langle\phi T\rangle_{ab}^{\rm cum}$ and expanding the $z$ and $\bar z$ dependence of the square bracket in a Taylor series about $z=\bar z=\zeta_1$ leads to \begin{eqnarray} &&\langle\phi T\rangle_{ab}^{\rm cum} \to\left\{(Z-\zeta_1)^{-1} \left(\partial_{z} + \partial_{\bar{z}}+\partial_{\zeta_1}\right) + (Z-\zeta_1)^{-2} \left(x_\phi + \delta z\,\partial_{z} + \delta\bar{z}\, \partial_{\bar{z}} \right)\right. \nonumber \\ && \quad \left.+ (Z-\zeta_1)^{-3} \left[x_\phi(\delta z+ \delta\bar{z})+ (\delta z)^2\, \partial_{z} + (\delta\bar{z})^2\, \partial_{\bar{z}} \right]+\dots\right\} \langle \phi(x,y) \rangle_{ab}^{(\zeta_1)} \, ,\label{Kexpand1} \end{eqnarray} where $\delta z\equiv z-\zeta_1$ and $x_\phi=2\Delta_\phi$. The terms $\propto (Z-\zeta_1)^{-1}$ and $\propto (Z-\zeta_1)^{-2}$ vanish due to the translational and dilatational invariance \cite{dilatation}, respectively, of $\langle \phi(x,y) \rangle_{ab}^{(\zeta_1)}$.
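These two invariances are easily checked by finite differences for, e.g., $\langle\sigma\rangle_{+-}=-(2/y)^{1/8}\cos\vartheta$ (Eq.~(\ref{sigandepsab})). Since $\partial_z+\partial_{\bar z}=\partial_x$ and $\delta z\,\partial_z+\delta\bar z\,\partial_{\bar z}=(x-\zeta_1)\partial_x+y\,\partial_y$ when acting on a one-point function, the following sketch tests exactly the operator combinations that multiply $(Z-\zeta_1)^{-1}$ and $(Z-\zeta_1)^{-2}$ (test point arbitrary):

```python
from math import atan2, cos

def sigma_pm(x, y, z1):
    # <sigma>_{+-} = -(2/y)^(1/8) cos(theta), theta = arg[(x - z1) + i y]
    return -(2 / y)**0.125 * cos(atan2(y, x - z1))

x, y, z1, h = 0.7, 0.9, 0.2, 1e-6
dx  = (sigma_pm(x + h, y, z1) - sigma_pm(x - h, y, z1)) / (2 * h)
dy  = (sigma_pm(x, y + h, z1) - sigma_pm(x, y - h, z1)) / (2 * h)
dz1 = (sigma_pm(x, y, z1 + h) - sigma_pm(x, y, z1 - h)) / (2 * h)

x_sigma = 1 / 8
translation = dx + dz1  # (d_z + d_zbar + d_zeta1) <sigma>_{+-}; should vanish
dilatation = x_sigma * sigma_pm(x, y, z1) + (x - z1) * dx + y * dy  # should vanish
```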
Using dilatation invariance to replace $x_\phi$ by $-(\delta z\partial_z+\delta\bar z\partial_{\bar z})$ in the term $\propto (Z-\zeta_1)^{-3}$, we obtain \begin{eqnarray} \label{KOT} \langle\phi T\rangle_{ab}^{\rm cum} \to -(Z&-&\zeta_1)^{-3}\, \delta z \delta\bar{z}\,(\partial_{z} + \partial_{\bar{z}})\, \langle \phi(x,y) \rangle_{ab}^{(\zeta_1)} \nonumber \\ &&\quad= (Z-\zeta_1)^{-3}\, \bigl\vert z-\zeta_1\bigr\vert ^2\,\partial_{\zeta_1} \langle \phi(x,y) \rangle_{ab}^{(\zeta_1)}\nonumber\\ &&\quad=(2t_{ab})^{-1}\,|z-\zeta_1|^2\,\partial_{\zeta_1}\langle\phi(x,y)\rangle_{ab}\,\partial_{\zeta_1}\langle T(Z)\rangle_{ab}\,,\label{Kexpand2} \end{eqnarray} to leading, non-vanishing order. Here, in going from the first line to the second, we have used translational invariance to replace $\partial_{z} + \partial_{\bar{z}}$ by $-\partial_{\zeta_1}$ and the definition of $\delta z$. Then the second line was rewritten, using Eq.~(\ref{dTabdzeta1}), to obtain the third line. The third line of Eq.~(\ref{Kexpand2}) is in complete agreement with the asymptotic form (\ref{kumunew}) of $\langle\phi\Phi\rangle_{ab}$ for $\Phi=T$ predicted by the boundary-operator expansion. For consistency with the alternate asymptotic form (\ref{kumu}), $F_{ab}^{(\phi)}$ and $\partial_{\zeta_1}\langle\phi\rangle_{ab}$ must satisfy Eq.~(\ref{generalresult2text}). This provides an alternate derivation of that relation. The results of the paragraph containing Eq.~(\ref{TTcumasymp}) and Eq.~(\ref{Kexpand2}) confirm the prediction (\ref{kumu}) of the boundary-operator expansion at $\zeta_1$ for $\phi$ or $\Phi$ or both equal to $T$. \subsubsection{Explicit expressions for $F_{ab}^{(\phi)}$, $\partial_{\zeta_1}\langle\phi\rangle_{ab}$, and $\partial_{\zeta_1}\langle\Phi\rangle_{ab}$ in the Ising model}\label{explicitab} Our notation $ab$ for the boundary, i.e., $a$ for $x<\zeta_1$ and $b$ for $x>\zeta_1$, corresponds to $ba$ in the notation of Ref.~\cite{TWBX}.
Expressed in our notation, the Ising one-point averages in Eq.~(4.1) of Ref.~\cite{TWBX} read \begin{equation} \begin{array}{l}\langle\sigma\rangle_{+-}=-\langle\sigma\rangle_{-+}=-\langle\sigma\rangle_{_{+}}^{(y)}\,\cos{\vartheta}\,,\\ \langle\epsilon\rangle_{+-}=\langle\epsilon\rangle_{-+}=\langle\epsilon\rangle_{_{+}}^{(y)}\,(1-4\sin^2\vartheta),\\ \langle\sigma\rangle_{+f}=\langle\sigma\rangle_{_{+}}^{(y)}\,\left(\sin{\vartheta\over 2}\right)^{1/2}\,,\\ \langle\sigma\rangle_{f+}=\langle\sigma\rangle_{_{+}}^{(y)}\,\left(\cos{\vartheta\over 2}\right)^{1/2}\,,\\ \langle\epsilon\rangle_{+f}=-\langle\epsilon\rangle_{f+}=-\langle\epsilon\rangle_{_{+}}^{(y)}\,\cos\vartheta\,, \end{array}\label{sigandepsab} \end{equation} where $\langle\sigma\rangle_{+}^{(y)}=(2/y)^{1/8}$ and $\langle\epsilon\rangle_{+}^{(y)}=-(2y)^{-1}$ are the averages for a uniform, spin-up boundary given in Eqs.~(\ref{sigfixed}) and (\ref{epsfixedfree}). Here and below, $(r,\vartheta)$ and $(R,\Theta)$ are polar coordinates defined by \begin{eqnarray} (x-\zeta_1 , y) = r(\cos \vartheta, \sin \vartheta) \, , \quad (X-\zeta_1 , Y) = R(\cos \Theta, \sin \Theta)\,. 
\label{rthetaRTheta} \end{eqnarray} Differentiating Eq.~(\ref{sigandepsab}), using $\partial_{\zeta_1}\vartheta=\partial_{\zeta_1}\arctan\left[y/(x-\zeta_1)\right]=r^{-1}\sin\vartheta$, leads to \begin{equation}\label{dsigandepsabdzeta1} \begin{array}{l} \partial_{\zeta_1}\langle\sigma\rangle_{+-}=-\partial_{\zeta_1}\langle\sigma\rangle_{-+}=\langle\sigma\rangle_{+}^{(y)}\,r^{-1}\sin^2\vartheta\,,\\ \partial_{\zeta_1}\langle\epsilon\rangle_{+-}=\partial_{\zeta_1}\langle\epsilon\rangle_{-+}=-8\langle\epsilon\rangle_{+}^{(y)}\, r^{-1}\sin^2\vartheta\cos\vartheta\,,\\ \partial_{\zeta_1}\langle\sigma\rangle_{+f}= \textstyle{1\over 2}\langle\sigma\rangle_{+}^{(y)}\,r^{-1}\left(\sin{\vartheta\over 2}\right)^{1/2}(\cos{\vartheta\over2})^2\,,\\ \partial_{\zeta_1}\langle\sigma\rangle_{f+}=\textstyle -{1\over 2}\langle\sigma\rangle_{+}^{(y)}\,r^{-1}\left(\cos{\vartheta\over 2}\right)^{1/2}(\sin{\vartheta\over2})^2\,,\\ \partial_{\zeta_1}\langle\epsilon\rangle_{+f}=-\partial_{\zeta_1}\langle\epsilon\rangle_{f+}=\langle\epsilon\rangle_{+}^{(y)}\, r^{-1}\,\sin^2\vartheta\,. \end{array} \end{equation} The functions $F_{ab}^{(\phi)}$ are easily obtained from these results using Eq.~(\ref{generalresult2text}) in the form \begin{equation} F_{ab}^{(\phi)}=(2t_{ab})^{-1} r^2\,\partial_{\zeta_1}\langle\phi\rangle_{ab}\label{Fabphi2} \end{equation} and the values $t_{+-}={1\over 2}$ and $t_{+f}={1\over 16}$, given below Eq.~(\ref{Tab}). Thus, for example, $F_{+f}^{(\epsilon)}=8\langle\epsilon\rangle_{+}^{(y)}\, r\, \sin^2\vartheta\,.$ It is simple to check that the expressions for $F_{ab}^{(\phi)}$ are indeed consistent with the asymptotic form (\ref{Fabasymp}) for $y\to 0$, $x\neq\zeta_1$. The quantities $\partial_{\zeta_1}\langle\Phi\rangle_{ab}$ with $\Phi=\sigma$ or $\epsilon$ are the same as in Eq.~(\ref{dsigandepsabdzeta1}), except that $r$, $\vartheta$, and $y$ are replaced by $R$, $\Theta$, and $Y$.
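The entries of Eq.~(\ref{dsigandepsabdzeta1}) and the example $F_{+f}^{(\epsilon)}$ can be spot-checked by finite differences on the averages (\ref{sigandepsab}); a sketch for the $+f$ row, with an arbitrarily chosen test point:

```python
from math import atan2, sin, cos, hypot

def theta(x, y, z1):
    # polar angle measured from the switching point z1, Eq. (rthetaRTheta)
    return atan2(y, x - z1)

def sigma_pf(x, y, z1):
    # <sigma>_{+f} = (2/y)^(1/8) (sin(theta/2))^(1/2), Eq. (sigandepsab)
    return (2 / y)**0.125 * sin(theta(x, y, z1) / 2)**0.5

def eps_pf(x, y, z1):
    # <eps>_{+f} = -<eps>_+ cos(theta), with <eps>_+ = -(2y)^(-1)
    return (1 / (2 * y)) * cos(theta(x, y, z1))

x, y, z1, h = 0.4, 0.8, -0.1, 1e-6
r, th = hypot(x - z1, y), theta(x, y, z1)

# closed forms of Eq. (dsigandepsabdzeta1)
dsig = 0.5 * (2 / y)**0.125 / r * sin(th / 2)**0.5 * cos(th / 2)**2
deps = -(1 / (2 * y)) / r * sin(th)**2

dsig_fd = (sigma_pf(x, y, z1 + h) - sigma_pf(x, y, z1 - h)) / (2 * h)
deps_fd = (eps_pf(x, y, z1 + h) - eps_pf(x, y, z1 - h)) / (2 * h)

# F_{+f}^{(eps)} from Eq. (Fabphi2) with (2 t_{+f})^(-1) = 8
F_eps = 8 * r**2 * deps
F_eps_closed = 8 * (-(1 / (2 * y))) * r * sin(th)**2  # = 8 <eps>_+ r sin^2(theta)
```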
Using the explicit expressions for $F_{ab}^{(\phi)}$, $\partial_{\zeta_1}\langle\phi\rangle_{ab}$, and $\partial_{\zeta_1}\langle\Phi\rangle_{ab}\,$, we have compared the asymptotic form (\ref{kumu}) or (\ref{kumunew}) of $\langle\phi\Phi\rangle_{ab}$, predicted by the boundary-operator expansion, with the asymptotic behavior of the exact two-point functions $\langle\sigma_1\sigma_2\rangle_{ab}$, $\langle\sigma_1\epsilon_2\rangle_{ab}$, and $\langle\epsilon_1\epsilon_2\rangle_{ab}$ for $ab=+-$ and $+f$ in Eq.~(4.3) of Ref.~\cite{TWBX} and found complete agreement. In Appendix~\ref{appendixcheckat}, the consistency check is illustrated for $\phi=\Phi=\sigma$ and $ab=+-$ in some detail. \subsection{$abc$ boundaries}\label{abcboundaries} For $abc$ boundaries the asymptotic behavior of one and two-point averages near the switching point $\zeta_1$ is specified by Eqs.~(\ref{MBOEone}) and (\ref{kumu}) or (\ref{kumunew}). In this subsection we first confirm, with the help of Ward identities, that the asymptotic form (\ref{kumu}) holds if either $\phi$ or $\Phi$ or both equal $T$. Then we determine the various functions on the right-hand sides of Eqs.~(\ref{MBOEone}) and (\ref{kumu}) explicitly, for the Ising model with $abc$ boundaries. Finally, we confirm the consistency of the predicted asymptotic behavior with exact results for the Ising model. \subsubsection{Confirmation of the asymptotic form (\ref{kumu}) for $\phi$ or $\Phi$ or both equal to $T$} We begin by differentiating the stress tensor for $abc$ boundaries (\ref{Tabc}) with respect to $\zeta_1$. This leads to \begin{equation} \partial_{\zeta_1}\langle T(Z)\rangle_{abc}={2t_{ab}\over (Z-\zeta_1)^3}+{t_{ac}-t_{ab}-t_{bc}\over (Z-\zeta_1)^2(Z-\zeta_2)}\,,\label{dTabcdzeta1} \end{equation} a result we will need below. Like Eq.~(\ref{Tabc}), it holds for $c\neq a$ and $c=a$, with $t_{aa}=0$ in the latter case.
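Equation~(\ref{dTabcdzeta1}) is a one-line differentiation of the double-pole form of $\langle T\rangle_{abc}$, which is assumed below as in Eq.~(\ref{Tabc}); a finite-difference sketch with arbitrary test values of the $t$ coefficients:

```python
def T_abc(Z, t_ab, t_bc, t_ac, z1, z2):
    # assumed standard form of Eq. (Tabc): double poles at z1, z2 plus a cross term
    return (t_ab / (Z - z1)**2 + t_bc / (Z - z2)**2
            + (t_ac - t_ab - t_bc) / ((Z - z1) * (Z - z2)))

def dT_dz1(Z, t_ab, t_bc, t_ac, z1, z2):
    # right-hand side of Eq. (dTabcdzeta1)
    return (2 * t_ab / (Z - z1)**3
            + (t_ac - t_ab - t_bc) / ((Z - z1)**2 * (Z - z2)))

t_ab, t_bc, t_ac = 0.5, 1 / 16, 1 / 16  # arbitrary test values
z1, z2, Z, h = 0.0, 1.0, 0.3 + 0.6j, 1e-6
fd = (T_abc(Z, t_ab, t_bc, t_ac, z1 + h, z2)
      - T_abc(Z, t_ab, t_bc, t_ac, z1 - h, z2)) / (2 * h)
closed = dT_dz1(Z, t_ab, t_bc, t_ac, z1, z2)
```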
The general argument presented below the Ward identity (\ref{GGWI}), that the cumulant expression (\ref{kumu}) holds for $\phi=T$ and $\Phi$ equal to primary operators, includes the case of $abc$ boundaries. Equation~(\ref{kumu}) also holds when both $\phi$ and $\Phi$ equal $T$, since $\langle T(z)T(Z)\rangle_{abc}^{\rm cum}\to F_{ab}^{(T)} \times\partial_{\zeta_1}\langle T\rangle_{abc}$, with the right-hand side given by Eqs.~(\ref{FabT}) and (\ref{dTabcdzeta1}), agrees with the exact result for $\langle T(z)T(Z)\rangle_{abc}^{\rm cum}$ discussed in Appendix~\ref{appendixTT} and shown in Eq.~(\ref{TTcumabc}), in the limit that $z$ is much closer to $\zeta_1$ than to $\zeta_2$ and to $Z$. Next we confirm Eq.~(\ref{kumu}) for $\phi$ equal to a primary operator and $\Phi=T$, modifying Eq.~(\ref{GGWII}) and the steps below it for an $abc$ instead of an $ab$ boundary. The Ward identity is similar to Eq. (\ref{GGWII}), but with an extra term $(Z-\zeta_2)^{-1}\partial_{\zeta_2}$ in the square bracket. In the relations $(\partial_z + \partial_{\bar{z}} +\partial_{\zeta_1} +\partial_{\zeta_2})\langle\phi\rangle_{abc} =0$ and $\left[x_\phi + \delta z\, \partial_z + \delta\bar z\,\partial_{\bar z} + (\zeta_2 -\zeta_1) \partial_{\zeta_2} \right]\langle\phi\rangle_{abc} =0$, corresponding to translational and dilatational invariance \cite{dilatation}, there are also extra terms involving $\partial_{\zeta_2}$. The expansion in Eq.~(\ref{Kexpand2}) is replaced by \begin{eqnarray} &&\langle\phi T\rangle_{abc}^{\rm cum}=\left\{(Z-\zeta_2)^{-1}\partial_{\zeta_2}-(Z-\zeta_1)^{-1}\partial_{\zeta_2}-(Z-\zeta_1)^{-2}(\zeta_2-\zeta_1)\,\partial_{\zeta_2}\right.\nonumber\\ &&\qquad\left.
-(Z-\zeta_1)^{-3}\left[\delta z\,\delta\bar z (\partial_z+\partial_{\bar z})+(\delta z+\delta\bar z)(\zeta_2-\zeta_1)\,\partial_{\zeta_2}\right]+\dots\right\}\langle\phi\rangle_{abc}\,.\label{Kexpand1abc} \end{eqnarray} Substituting $\langle\phi\rangle_{abc}-\langle\phi\rangle_{ab}\to F_{ab}^{(\phi)}\times\langle\Upsilon\rangle_{abc}\,$, which follows from Eq.~(\ref{MBOEone}), and expression (\ref{avupsaba}) for $\langle\Upsilon\rangle_{abc}$, we obtain \begin{eqnarray} \label{K_ABC_OT'} \langle\phi T\rangle_{abc}^{\rm cum}&\to& F_{ab}^{(\phi)} \times \left[ \partial_{\zeta_1}\langle T(Z)\rangle_{ab} + {(\zeta_2 -\zeta_1)^2 \over (Z-\zeta_1)^2 (Z-\zeta_2)} \partial_{\zeta_2} \langle \Upsilon(\zeta_1) \rangle _{abc}^{(\zeta_1 , \zeta_2)} \right] \nonumber \\ &\to& F_{ab}^{(\phi)} \times \partial_{\zeta_1} \langle T(Z) \rangle _{abc}^{(\zeta_1, \zeta_2)}\,, \end{eqnarray} to leading order $(l/L)l^{-x_\phi} L^{-x_T}$. In going from the first line to the second, we have used expressions (\ref{avupsaba}), (\ref{dTabdzeta1}), and (\ref{dTabcdzeta1}) for $\langle \Upsilon\rangle _{abc}$, $\partial_{\zeta_1}\langle T(Z) \rangle _{ab}$, and $\partial_{\zeta_1}\langle T(Z) \rangle _{abc}$, respectively. Equation~(\ref{kumu}) with $\Phi=T$ and Eq.~(\ref{K_ABC_OT'}) are clearly consistent. Together with the results discussed below Eq.~(\ref{dTabcdzeta1}), this confirms the asymptotic form (\ref{kumu}) of the two-point cumulant $\langle\phi\Phi\rangle_{abc}^{\rm cum}$ for either $\phi$ or $\Phi$ or both equal to $T$. \subsubsection{Explicit expressions for $\partial_{\zeta_1}\langle\Phi\rangle_{abc}$ in the Ising model} The explicit form of $\partial_{\zeta_1}\langle T\rangle_{abc}$ is shown in Eq.~(\ref{dTabcdzeta1}). 
Here we consider $\partial_{\zeta_1}\langle\Phi\rangle_{abc}$ for $\Phi=\sigma$ and $\epsilon$ and $abc=+f+$, $+-+$, $-f+$, and $f+-$, and obtain explicit expressions by differentiating the corresponding Ising one-point averages $\langle\sigma\rangle_{abc}$ and $\langle\epsilon\rangle_{abc}$. For $+f+$, we begin with the one-point averages $\langle\sigma\rangle_{+f+}$ and $\langle\epsilon\rangle_{+f+}$ in Eqs.~(\ref{sigpfp}) and (\ref{epspfp}), replace $(x_1,y_1)$ by $(X,Y)$ and $\gamma_{1,1}$ by $\Gamma$, where, according to Eq.~(\ref{gammakelldef2}), \begin{equation} \textstyle\Gamma={\rm arg}\,{Z-\zeta_2\over Z-\zeta_1}\,,\label{defineGamma} \end{equation} and then evaluate the derivative with respect to $\zeta_1$, using \begin{equation} \textstyle\partial_{\zeta_1}\Gamma=-\partial_{\zeta_1}\, \arctan{Y\over X-\zeta_1}=-R^{-1}\sin\Theta\,,\label{gammaderivative} \end{equation} which follows from Eqs.~(\ref{rthetaRTheta}) and (\ref{defineGamma}). For $+-+$ boundaries, the calculation is similar, but begins with the one-point averages $\langle\sigma\rangle_{+-+}=\langle\sigma\rangle_{_+}^{(Y)}\,\cos\Gamma$ and $\langle\epsilon\rangle_{+-+}=\langle\epsilon\rangle_{_+}^{(Y)}\left(1-4\sin^2\Gamma\right)$, given in \cite{TWBG2} or obtained with the conformal transformation (\ref{abtoaba}) from the results for a $+-$ boundary shown in Eq.~(\ref{sigandepsab}). For $-f+$ and $f+-$ boundaries the calculations are also similar, but begin with Eqs.~(\ref{sigmfp}), (\ref{epsmfp}), (\ref{sigfpm}), and (\ref{epsfpm}). 
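The chain-rule ingredient (\ref{gammaderivative}) underlies all the derivatives collected next; it can be verified symbolically. A minimal sketch of ours, assuming the polar-coordinate convention $R=\vert Z-\zeta_1\vert$ and $\sin\Theta=Y/R$ for Eq.~(\ref{rthetaRTheta}):

```python
import sympy as sp

X, Y, z1 = sp.symbols('X Y zeta1', real=True)

# Polar coordinates of Z - zeta1 (assumed convention of Eq. (rthetaRTheta))
R = sp.sqrt((X - z1)**2 + Y**2)
sinTheta = Y / R

# Gamma = arg[(Z - zeta2)/(Z - zeta1)]; only arg(Z - zeta1) = arctan(Y/(X - zeta1))
# depends on zeta1, and it enters with a minus sign.
dGamma = -sp.diff(sp.atan(Y/(X - z1)), z1)

# Eq. (gammaderivative): dGamma/dzeta1 = -sin(Theta)/R
assert sp.simplify(dGamma + sinTheta/R) == 0
```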
In this way we obtain \begin{equation} \begin{array}{l} \partial_{\zeta_1}\langle\sigma\rangle_{+f+}={1\over 4}\,\langle\sigma\rangle_{_+}^{(Y)}\,{\sin{\textstyle{\Gamma\over 2}}\over{\sqrt{\cos{\textstyle{\Gamma\over 2}}}}}\,{\sin\Theta\over R}\,,\\[3mm] \partial_{\zeta_1}\langle\sigma\rangle_{f+f}=-{1\over 4}\,\langle\sigma\rangle_{_+}^{(Y)}\,{\cos{\textstyle{\Gamma\over 2}}\over{\sqrt{\sin{\textstyle{\Gamma\over 2}}}}}\,{\sin\Theta\over R}\,,\\[2mm] \partial_{\zeta_1}\langle\epsilon\rangle_{+f+}=-\partial_{\zeta_1}\langle\epsilon\rangle_{f+f}=\langle\epsilon\rangle_{_+}^{(Y)}\,\sin\Gamma\,{\sin\Theta\over R}\,,\\[2mm] \partial_{\zeta_1}\langle\sigma\rangle_{+-+}=\langle\sigma\rangle_{_ +}^{(Y)}\,\sin\Gamma\,{\sin\Theta\over R}\,,\\[2mm] \partial_{\zeta_1}\langle\epsilon\rangle_{+-+}=8\,\langle\epsilon\rangle_{_+}^{(Y)}\sin\Gamma\cos\Gamma\,{\sin\Theta\over R}\,,\\[2mm] \partial_{\zeta_1}\langle\sigma\rangle_{-f+}=\langle\sigma\rangle_{_ +}^{(Y)}\,{1\over 4W_1}\left[(1-{4R^2\over\zeta_{21}^2})\sin{\textstyle{\Gamma\over 2}}+{2R\over\zeta_{21}}\sin\Theta\cos{\textstyle{\Gamma\over 2}}\right]\,{\sin\Theta\over R}\,,\\[2mm] \partial_{\zeta_1}\langle\epsilon\rangle_{-f+}=\langle\epsilon\rangle_{_+}^{(Y)}\,\left[(1-{4R^2\over \zeta_{21}^2})\sin\Gamma+{4R\over\zeta_{21}}\sin\Theta\cos\Gamma\right]\,{\sin\Theta\over R}\,,\\[2mm] \partial_{\zeta_1}\langle\sigma\rangle_{f+-}=\langle\sigma\rangle_{_+}^{(Y)}{1\over 4W_2}\left[{4R(R-\zeta_{21}\cos^2{\textstyle{\Theta\over 2}})\over\left\vert Z-\zeta_2\right\vert^2}-1\right]{\sin{\textstyle{\Theta\over 2}}\,\sin\Theta\over R}\,,\\[2mm] \partial_{\zeta_1}\langle\epsilon\rangle_{f+-}=\langle\epsilon\rangle_{_+}^{(Y)}\left[{4R\left(R-\zeta_{21}\cos\Theta\right)\over\left\vert Z-\zeta_2\right\vert^2}-1\right]{\sin^2\Theta\over R}\,. 
\end{array} \label{dPhiabcdzeta1} \end{equation} Here $W_1$ and $W_2$ are the square roots $W_1\equiv\left[\cos{\textstyle{\Gamma\over 2}}-(2Y/\zeta_{21})\sin{\textstyle{\Gamma\over 2}}\right]^{1/2}$ and $W_2=\left[\cos{\textstyle{\Theta\over 2}}-\,2\,\zeta_{21}R\left\vert Z-\zeta_2\right\vert^{-2} \sin{\textstyle{\Theta\over 2}}\sin\Theta\right]^{1/2}$ in Eqs.~(\ref{sigmfp}) and (\ref{sigfpm}), respectively. The trigonometric functions of $\Gamma$ in Eq.~(\ref{dPhiabcdzeta1}) can be expressed in terms of the Cartesian coordinates $X,Y$ using the relations \begin{equation} \cos\Gamma=\displaystyle{(X-\zeta_1)(X-\zeta_2)+Y^2\over R\vert Z-\zeta_2\vert}\,,\quad\sin\Gamma={Y(\zeta_2-\zeta_1)\over R\vert Z-\zeta_2\vert}\,, \label{trigGamma} \end{equation} which follow from Eq.~(\ref{defineGamma}) and correspond to Eqs.~(\ref{cosgammakell}) and (\ref{singammakell}). Using the explicit expressions for $F_{ab}^{(\phi)}$ and $\partial_{\zeta_1}\langle\Phi\rangle_{abc}$, given in Eqs.~(\ref{dsigandepsabdzeta1}), (\ref{Fabphi2}), and (\ref{dPhiabcdzeta1}), we have confirmed the consistency of the asymptotic behavior of the one and two-point averages, $\langle\phi\rangle_{abc}$ and $\langle\phi\Phi\rangle_{abc}$, shown in Eqs.~(\ref{MBOEone}) and (\ref{kumu}), respectively, with the exact results reported in Secs.~\ref{pfpetc} and \ref{mfp}. In Appendix~\ref{appendixcheckat} the consistency check for $\phi=\Phi=\sigma$ and $abc=+f+$ is carried out in some detail. \subsection{Distant-wall effects} At criticality, local behavior throughout the system is affected by the boundaries, even if they are distant. In a classic paper Fisher and de Gennes \cite{FdG} considered a critical fluid confined between infinite parallel plates or walls with separation ${\cal W}$. 
Calculating the density profile by minimizing a local free energy functional, they found that the correction to the profile near one wall due to the distant wall varies as ${\cal W}^{-d}$, where $d$ is the spatial dimension. The two-dimensional analog of the fluid between plates is an Ising strip of infinite length and width ${\cal W}$. Exact results for $\langle\sigma\rangle_{a\vert b}$ and $\langle\epsilon\rangle_{a\vert b}$, for boundary condition $a$ on one edge and $b$ on the other, obtained by conformally mapping the semi-infinite results (\ref{sigandepsab}) onto the strip geometry, confirm the ${\cal W}^{-2}$ variation of the distant-wall corrections; similar results were obtained for Potts spins; and a general connection in two-dimensional critical systems between the distant-wall corrections to the profiles and the Casimir force between the edges was explained in terms of conformal invariance \cite{TWBX,Cardydistantwall}. In these and other studies of distant-wall corrections (see \cite{TWBX,EEKD,RudnickJasnow,Upton} and references therein), the boundary condition on each wall is assumed to be uniform. Here we consider distant-wall effects in the critical Ising model defined on an infinitely long strip with {\it mixed} boundary conditions, thereby demonstrating the versatility of the boundary-operator approach. The lower boundary of the strip is the $x$ axis, and the upper boundary is parallel to the $x$ axis and a distance ${\cal W}$ above it. Imposing $ab\vert c$ boundary conditions, consisting of $ab$ boundary conditions with switching point $\zeta_1$ on the lower boundary and a uniform boundary condition $c$ on the upper boundary, we analyze the effect of the distant upper boundary on the profile $\langle\phi(x,y)\rangle_{ab\vert c}$ near the lower boundary, both away from and close to the switching point $\zeta_1$.
An important ingredient in our discussion is the average of the stress tensor in the strip geometry, given by \begin{eqnarray} \label{stressstrip} \left\langle T(z) \right\rangle_{ab\vert c}^{(\zeta_1)} = \left({\pi \over {\cal W}} \right)^2 \, \tau (\tilde{z}) \, , \quad \tau (\tilde{z}) = t_{ac} -{\hat{c} \over 24} + {t_{ab} \over (1-e^{-\tilde{z}})^2} + {t_{bc} - t_{ac}-t_{ab} \over 1-e^{-\tilde{z}}}\, \end{eqnarray} for an arbitrary two-dimensional critical system. Here $\tilde{z} \equiv \pi (z -\zeta_1)/{\cal W}$, and $\hat{c}$ is the central charge of the system in the conformal classification \cite{BPZ,CardyD-L}, which equals ${1\over 2}$ for the Ising model. Expression~(\ref{stressstrip}) follows from the conformal mapping $w(z)=\exp (\pi z / {\cal W})$ of the strip, with switching point $\zeta_1$, onto the upper half $w$ plane with two switches, from $c$ to $a$ at $w=0$ and from $a$ to $b$ at $w=\exp(\pi \zeta_1 / {\cal W})$. Combining this mapping with the average stress tensor (\ref{Tabc}) in the $w$ plane and the transformation property (\ref{Ttransform}) of the stress tensor leads to Eq.~(\ref{stressstrip}). \subsubsection{Expansion away from the switching point} Averaging expansion (\ref{BOE}) in the $ab\vert c$ strip geometry and in the $ab$ half-plane geometry, subtracting the two averages, and substituting the corresponding stress tensors (\ref{Tab}) and (\ref{stressstrip}), we obtain \begin{eqnarray} \label{MBOEwallaway} \langle \phi(x,y)\rangle_{ab\vert c}^{(\zeta_1)}-\langle \phi(x,y)\rangle _{ab}^{(\zeta_1)} \to \mu_h^{(\phi)}\, y^{-x_\phi}\left({\pi y\over{\cal W}}\right)^2 \, \left[ \tau (\tilde{x}) - {t_{ab}\over \tilde{x}^2}\right] \end{eqnarray} for the distant-wall correction. Here $h=a$ and $h=b$ for $x< \zeta_1$ and $x> \zeta_1$, respectively. The asymptotic form (\ref{MBOEwallaway}) holds for $y$ much smaller than $|x-\zeta_1|$ and ${\cal W}$, but with no restriction on the scaling variable $\tilde{x}=\pi (x -\zeta_1)/{\cal W}$. 
In the limit $\tilde{x} \to -\infty$, Eqs.~(\ref{stressstrip}) and (\ref{MBOEwallaway}) reproduce the distant-wall correction \cite{TWBX,Cardydistantwall}, \begin{equation} \langle \phi(x,y) \rangle_{a\vert c}-\langle \phi(x,y) \rangle _{a} \to {4x_\phi\over \hat{c}}\left({\hat{c}\over 24}-t_{ac}\right) \langle\phi(x,y)\rangle_a \left({\pi y\over{\cal W}}\right)^2\,,\label{FdGcorrection} \end{equation} to the profile $\langle \phi(x,y) \rangle_{a\vert c}$ in a strip with uniform boundary conditions $a$ and $c$ on the edges. Here $\langle\phi(x,y)\rangle _a\propto y^{-x_\phi}$ is the profile in the half plane with boundary condition $a$, and we have used Eq.~(\ref{relationmuhphi}). For $\tilde{x}\to\infty$, the corresponding result for $b\vert c$ boundaries is obtained. For $|\tilde{x}| \ll 1$, Eq.~ (\ref{MBOEwallaway}) yields \begin{equation} \langle \phi(x,y) \rangle_{ab\vert c}^{(\zeta_1)}-\langle \phi(x,y) \rangle _{ab} ^{(\zeta_1)}\to \mu_h^{(\phi)}\, y^{-x_\phi}\,{\pi(t_{bc}-t_{ac}) y^2\over (z-\zeta_1){\cal W}}\,,\label{correctionnearzeta1} \end{equation} to leading order in the small quantities $y/|x-\zeta_1|$ and $y/{\cal W}$. According to Eqs.~(\ref{FdGcorrection}) and (\ref{correctionnearzeta1}), the distant-wall correction to the profile of $\phi$ falls off with increasing distance as ${\cal W}^{-2}$ for homogeneous boundaries and as ${\cal W}^{-1}$ near the switching point of $ab\vert c$ boundaries. The entire, smooth crossover between these two limiting cases is described by Eqs.~(\ref{stressstrip}) and (\ref{MBOEwallaway}). 
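The crossover just described can be checked directly on the function $\tau$ of Eq.~(\ref{stressstrip}): evaluated on the boundary, the bracket of Eq.~(\ref{MBOEwallaway}) interpolates between the uniform-strip amplitudes and the $(t_{bc}-t_{ac})/\tilde{x}$ behavior of Eq.~(\ref{correctionnearzeta1}). A symbolic sketch of ours (the variable names are not from the text):

```python
import sympy as sp

x = sp.symbols('xtilde', real=True)
t_ab, t_bc, t_ac, chat = sp.symbols('t_ab t_bc t_ac chat')

# tau of Eq. (stressstrip), evaluated on the boundary ztilde = xtilde
tau = (t_ac - chat/24 + t_ab/(1 - sp.exp(-x))**2
       + (t_bc - t_ac - t_ab)/(1 - sp.exp(-x)))
bracket = tau - t_ab/x**2          # the bracket of Eq. (MBOEwallaway)

# |xtilde| -> infinity: uniform-strip values for a|c and b|c boundaries
assert sp.simplify(sp.limit(bracket, x, -sp.oo) - (t_ac - chat/24)) == 0
assert sp.simplify(sp.limit(bracket, x, sp.oo) - (t_bc - chat/24)) == 0

# |xtilde| << 1: the 1/xtilde**2 poles cancel and the leading term is
# (t_bc - t_ac)/xtilde, as in Eq. (correctionnearzeta1)
lead = sp.series(bracket, x, 0, 0).removeO()
assert sp.simplify(lead - (t_bc - t_ac)/x) == 0
```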
\subsubsection{Expansion at the switching point} The leading distant-wall correction to the profile in the neighborhood $\vert z-\zeta_1\vert \ll{\cal W}$ of the switching point follows from averaging the boundary-operator expansion (\ref{MBOE}) in the strip geometry, which yields \begin{eqnarray} \label{MBOEnearzeta1} \left\langle \phi(x,y) \right\rangle_{ab\vert c}^{(\zeta_1)}-\left\langle \phi(x,y) \right\rangle _{ab}^{(\zeta_1)} \to F_{ab}^{(\phi)}(x-\zeta_1 , y) \, \left\langle\Upsilon({\zeta_1})\right\rangle_{ab\vert c}^{(\zeta_1)}\, . \end{eqnarray} Here the second term on the left-hand side and the factor $F_{ab}^{(\phi)}(x-\zeta_1 , y)$ on the right are the same as in the half-plane geometry. On the right only the second factor $\left\langle\Upsilon({\zeta_1})\right\rangle_{ab\vert c}^{(\zeta_1)}$ depends on the upper boundary. The explicit form of $\left\langle\Upsilon({\zeta_1})\right\rangle_{ab\vert c}^{(\zeta_1)}$ follows from setting $\phi=T$ in Eq.~(\ref{MBOEnearzeta1}), substituting the average stress tensors (\ref{Tab}) and (\ref{stressstrip}) on the left-hand side, and then expanding the left-hand side to leading non-vanishing order in $\tilde{z}$. Substituting expression (\ref{FabT}) for $F_{ab}^{(T)}$ on the right-hand side and solving for $\langle\Upsilon({\zeta_1})\rangle_{ab\vert c}^{(\zeta_1)}\,$, we obtain \begin{eqnarray} \label{Upswall} \langle\Upsilon({\zeta_1})\rangle_{ab\vert c}^{(\zeta_1)} = {\pi(t_{bc} - t_{ac})\over{\cal W}}\, . \end{eqnarray} Thus, the distant-wall correction to the profile of $\phi$, where $\phi$ is either a primary operator or $T$, near the switching point is given by Eqs.~(\ref{MBOEnearzeta1}) and (\ref{Upswall}), together with the expressions for $F_{ab}^{(\phi)}$ in Eqs.~(\ref{FabT}) and (\ref{generalresult2text}), or, for the Ising model, in Eqs.~(\ref{dsigandepsabdzeta1}) and (\ref{Fabphi2}). 
The assumption $|z-\zeta_1|\ll {\cal W}$ made in this subsection and the assumption $y\ll |x-\zeta_1|$, $y\ll {\cal W}$ of the preceding subsection are both satisfied if $y\ll |x-\zeta_1|\ll{\cal W}$. Thus, for $y\ll |x-\zeta_1|\ll{\cal W}$, the distant-wall predictions (\ref{MBOEnearzeta1}) and (\ref{correctionnearzeta1}) of the boundary-operator expansions at and away from the switching point should coincide. Substituting Eq.~(\ref{Upswall}) and the asymptotic form (\ref{Fhom}) of $F_{ab}^{(\phi)}$ in Eq.~(\ref{MBOEnearzeta1}), we see that this is indeed the case. In the Ising model with $c=f$ and $ab=+-$ or $ab=-+$, $\langle \sigma \rangle$ is an odd function of $x-\zeta_1$, while $\langle \epsilon \rangle$ is even. That the corresponding $F_{ab}^{(\sigma)}$ and $F_{ab}^{(\epsilon)}$, given by Eqs.~(\ref{dsigandepsabdzeta1}) and (\ref{Fabphi2}), are even and odd, respectively, is inconsistent with Eq.~(\ref{MBOEnearzeta1}) unless $\langle \Upsilon \rangle_{ab\vert c}$ vanishes. According to Eq.~(\ref{Upswall}), $\langle \Upsilon \rangle_{ab\vert c}$ does indeed vanish in these two cases, and we conclude that the leading distant-wall correction is of higher order. For all other $ab\vert c$ with $a \neq b$, the expression for $\langle \Upsilon \rangle_{ab\vert c}$ in (\ref{Upswall}) is non-vanishing, and it is instructive to compare the signs of the predicted distant-wall corrections to $\langle\sigma\rangle$ and $\langle\epsilon\rangle$ with one's intuitive expectation. Finally, we have also confirmed the predictions (\ref{MBOEwallaway}), (\ref{MBOEnearzeta1}), and (\ref{Upswall}) of the two boundary-operator expansions for the Ising model by comparison with exact expressions for $\langle\sigma\rangle_{ab\vert c}$ and $\langle\epsilon\rangle_{ab\vert c}\,$, derived from half-plane results with the conformal mapping onto the strip discussed below Eq.
(\ref{stressstrip}). \section{Concluding remarks}\label{concludingremarks} In the first half of this paper (see Sec.~\ref{IsingConformal}), the semi-infinite critical Ising model with mixed boundary conditions $+f+f+\dots$ and $-f+$ is analyzed with conformal-invariance methods. Exact expressions for the one and two-point averages $\langle\sigma\rangle$, $\langle\epsilon\rangle$, $\langle T\rangle$, $\langle\sigma_1\sigma_2\rangle$, $\langle\epsilon_1\epsilon_2\rangle$, $\langle\sigma_1\epsilon_2\rangle$ are derived. The additional averages $\langle T_1T_2\rangle$, $\langle T_1\sigma_2\rangle$, $\langle T_1\epsilon_2\rangle$, $\langle T_1\sigma_2\sigma_3\rangle$, etc. are readily obtained by substituting these results into expressions (\ref{TTabcdots}), (\ref{TTbarabcdots}) for $\langle T_1T_2\rangle$ and the conformal Ward identity, e.g.\ Eq.~(\ref{GGWI}). The results of Sec.~\ref{IsingConformal} complement the predictions for $+-+-\dots$ boundary conditions in Ref.~\cite{TWBG2}. In our approach we profit from the fact that the amplitude $t_{+f}$ of the stress tensor $\langle T\rangle_{ab}$ and the scaling indices $\Delta_\sigma$ and $\Delta_\mu$ of the spin and disorder operators all have the same value ${1\over 16}$. Consequently, all the multi-spin averages $\langle\sigma_1\sigma_2\dots\sigma_n\rangle$ with $+f+f+\dots$ and $-f+$ boundary conditions can be expressed in terms of the known solutions (\ref{Gnsigma}) of the bulk conformal differential equations for $\Delta={1\over 16}$. To calculate averages involving $\epsilon$ from the multi-spin averages, we used the operator product expansion (\ref{OPEsigsig}) for $\sigma\sigma$. In future work we plan to consider other two-dimensional critical systems, such as the $Q$-state Potts and ${\rm O}(N)$ models, with mixed boundary conditions. The Potts profiles $\langle{\bf\sigma}\rangle_{ab}$ and $\langle\epsilon\rangle_{ab}$ for general $Q$ have already been determined \cite{TWBX}.
The second half of this paper (see Sec.~\ref{secMBOE}) is devoted to boundary-operator expansions in two-dimensional critical systems with mixed boundary conditions and is not limited to the Ising model. Two types of expansions, at and away from switching points of the boundary condition, are considered. Apart from the case of the order parameter near a free boundary, the leading boundary operator in the expansion away from a switching point is the complex stress tensor $T(x)$ at the surface, which has scaling dimension 2. In contrast, in the expansion at a switching point $\zeta_1$, the leading boundary operator $\Upsilon(\zeta_1)$ has scaling dimension 1. We demonstrate the utility of the two expansions in predicting the asymptotic behavior of many-point averages and distant-wall corrections to one-point averages in the strip geometry. Finally, we point out the utility of boundary-operator expansions, not only at switching points of the boundary condition, but also at points where the boundary bends abruptly, for example at the tip of a wedge or needle. The asymptotic behavior near the tip of a semi-infinite needle with a single boundary condition, immersed in a two-dimensional critical fluid, is analyzed with the help of a boundary-operator expansion in Appendix \ref{OEneedle}.
\section{Introduction} The theory of sets of finite perimeter and $BV$ functions in Wiener spaces, i.e., Banach spaces endowed with a Gaussian Borel probability measure $\gamma$, was initiated by Fukushima and Hino in \cite{fuk99,fuk2000_1,fuk2000_2}, and has been further investigated in \cite{hin09set,AMMP,AMP,ambfig10}. The basic question one would like to consider is the search for infinite-dimensional analogues of the classical fine properties of $BV$ functions and sets of finite perimeter in finite-dimensional spaces. The class of sets of finite Gaussian perimeter $E$ in a Gaussian Banach space $(X,\gamma)$ is defined by the integration by parts formula $$ \int_E\partial_h\phi\,d\gamma=-\int_X \phi\,d\langle D_\gamma\chi_E,h\rangle_H+\int_E\phi\hat{h}\,d\gamma $$ for all $\phi\in C^1_b(X)$ and $h\in H$. Here $H$ is the Cameron-Martin space of $(X,\gamma)$ and $D_\gamma\chi_E$ is an $H$-valued measure with finite total variation in $X$. When looking for the counterparts of De~Giorgi's and Federer's classical results in infinite-dimensional spaces, it was noticed in \cite{ambfig10} that the Ornstein-Uhlenbeck semigroup $$T_t\chi_E(x):=\int_X\chi_E(e^{-t}x+\sqrt{1-e^{-2t}}y)\,d\gamma(y)$$ can be used to rephrase the notion of density, the main result of that paper being \begin{equation}\label{eq:ambfig10} \lim_{t\downarrow 0}\int_X\Bigl|T_t\chi_E-\frac{1}{2}\Bigr|\,d|D_\gamma\chi_E|=0. \end{equation} According to this formula, we might say that $|D_\gamma\chi_E|$ is concentrated on the set of points of density $1/2$, where the latter set is not defined using volume ratios in balls (as in the finite-dimensional theory), but rather the Ornstein-Uhlenbeck semigroup. In this paper we improve \eqref{eq:ambfig10} as follows (we refer to Section~\ref{sec:halfspaces} for the notation relative to halfspaces): \begin{theorem} \label{main} Let $E$ be a set of finite perimeter in $(X,\gamma)$ and let $S(x)=S_{\nu_E(x)}$ be the halfspace determined by $\nu_E(x)$.
Then \begin{equation}\label{eq:main} \lim_{t\downarrow 0}\int_X\int_X\left|\chi_E(e^{-t}x+\sqrt{1-e^{-2t}}y)-\chi_{S(x)}(y)\right|\,d\gamma(y)\,d|D_\gamma\chi_E|(x)=0. \end{equation} \end{theorem} A nice interpretation of this result can be obtained stating it in terms of the Gaussian rescaled sets $$ E_{x,t}=\frac{E-e^{-t}x}{\sqrt{1-e^{-2t}}}, $$ namely \begin{equation}\label{eq:main1} \lim_{t\downarrow 0}\int_X\|\chi_{E_{x,t}}-\chi_{S(x)}\|_{L^1(\gamma)}\,d|D_\gamma\chi_E|(x)=0. \end{equation} Clearly, if we pull the modulus out of the integral in \eqref{eq:main} we recover \eqref{eq:ambfig10}, because the measure of halfspaces is $1/2$ and $T_t\chi_E(x)=\gamma(E_{x,t})$. More specifically, \eqref{eq:main1} formalizes the fact, established by De~Giorgi in finite dimensions, that on small scales a set of finite perimeter is close to a halfspace at almost every point (w.r.t. surface measure). The proof of \eqref{eq:main1} relies mainly on a combination of the careful finite-dimensional estimates of \cite{ambfig10} with a variant of the cylindrical construction performed in \cite{hin09set} (with respect to \cite{hin09set}, here we use the reduced boundary instead of the essential boundary of the finite-dimensional sections of $E$). \section{Preliminary results} We assume that $(X,\|\cdot\|)$ is a separable Banach space and $\gamma$ is a Gaussian probability measure on the Borel $\sigma$-algebra of $X$. We shall always assume that $\gamma$ is nondegenerate (i.e., all closed proper subspaces of $X$ are $\gamma$-negligible) and centered (i.e., $\int_X x\,d\gamma=0$). We denote by $H$ the Cameron-Martin subspace of $X$, that is $$ H:=\left\{\int_X f(x)x\,d\gamma(x):f\in L^2(X,\gamma)\right\}, $$ and, for $h \in H$, we denote by $\hat h \in L^2(X,\gamma)$ the Fomin derivative of $\gamma$ along $h$, namely \begin{equation}\label{Fomin} \int_X\partial_h\phi\,d\gamma=\int_X\hat{h}\phi\,d\gamma \end{equation} for all $\phi\in C^1_b(X)$.
Here and in the sequel $C^1_b(X)$ denotes the space of continuously differentiable cylindrical functions in $X$, bounded and with a bounded gradient. The space $H$ can be endowed with a Hilbertian norm $|\cdot |_H$ that makes the map $h\mapsto\hat{h}$ an isometry; furthermore, the injection of $(H,|\cdot |_H)$ into $(X,\|\cdot\|)$ is compact. We shall denote by $\tilde{H}\subset H$ the subset of vectors of the form \begin{equation}\label{defhstar} \int_X \langle x^*,x\rangle x\,d\gamma(x),\qquad x^*\in X^*. \end{equation} This is a dense (even w.r.t. the Hilbertian norm) subspace of $H$. Furthermore, for $h\in \tilde H$ the function $\hat{h}(x)$ is precisely $\langle x^*,x\rangle$ (and so, it is continuous). Given an $m$-dimensional subspace $F\subset \tilde{H}$ we shall frequently consider an orthonormal basis $\{h_1,\ldots,h_m\}$ of $F$ and the factorization $X=F\oplus Y$, where $Y$ is the kernel of the continuous linear map \begin{equation}\label{ammiss1} x\in X\mapsto \Pi_F(x):=\sum_{i=1}^m\hat{h}_i(x)h_i\in F. \end{equation} The decomposition $x=\Pi_F(x)+(x-\Pi_F(x))$ is well defined, thanks to the fact that $\Pi_F\circ \Pi_F=\Pi_F$ and so $x-\Pi_F(x)\in Y$; in turn this follows from $\hat{h}_i(h_j)=\langle\hat{h}_i,\hat{h}_j\rangle_{L^2}=\delta_{ij}$. Thanks to the fact that $|h_i|_H=1$, this induces a factorization $\gamma=\gamma_F\otimes\gamma_Y$, with $\gamma_F$ the standard Gaussian in $F$ (endowed with the metric inherited from $H$) and $\gamma_Y$ Gaussian in $(Y,\|\cdot\|)$. Furthermore, the orthogonal complement $F^\perp$ of $F$ in $H$ is the Cameron-Martin space of $(Y,\gamma_Y)$. \subsection{$BV$ functions and Sobolev spaces} Here we present the definitions of Sobolev and $BV$ spaces. Since we will consider bounded functions only, we shall restrict to this class for ease of exposition. Let $u:X\to\mathbb{R}$ be a bounded Borel function.
Motivated by \eqref{Fomin}, we say that $u\in W^{1,1}(X,\gamma)$ if there exists a (unique) $H$-valued function, denoted by $\nabla u$, such that $|\nabla u|_H\in L^1(X,\gamma)$ and $$ \int_X u\partial_h\phi\,d\gamma=-\int_X \phi\langle\nabla u,h\rangle_H\,d\gamma+\int_X u\phi\hat{h}\,d\gamma $$ for all $\phi\in C^1_b(X)$ and $h\in H$. Analogously, following \cite{fuk2000_1,fuk2000_2}, we say that $u\in BV(X,\gamma)$ if there exists a (unique) $H$-valued Borel measure $D_\gamma u$ with finite total variation in $X$ satisfying $$ \int_X u\partial_h\phi\,d\gamma=-\int_X \phi\,d\langle D_\gamma u,h\rangle_H+\int_X u\phi\hat{h}\,d\gamma $$ for all $\phi\in C^1_b(X)$ and $h\in H$. In the sequel we will mostly consider the case when $u=\chi_E:X\to\{0,1\}$ is the characteristic function of a set $E$, although some statements are more natural in the general $BV$ context. Notice the inclusion $W^{1,1}(X,\gamma)\subset BV(X,\gamma)$, given by the identity $D_\gamma u=\nabla u\,\gamma$. \subsection{The OU semigroup and Mehler's formula} In this paper, the Ornstein-Uhlenbeck semigroup $T_tf$ will always be understood as defined by the \emph{pointwise} formula \begin{equation}\label{mehler} T_tf(x):=\int_X f(e^{-t}x+\sqrt{1-e^{-2t}}y)\,d\gamma(y) \end{equation} which makes sense whenever $f$ is bounded and Borel. This convention will be important when integrating $T_t f$ against potentially singular measures. We shall also use the dual OU semigroup $T_t^*$, mapping signed measures into signed measures, defined by the formula \begin{equation} \langle T_t^*\mu,\phi\rangle:=\int_X T_t\phi\,d\mu\qquad\text{$\phi$ bounded Borel.} \end{equation} In the next proposition we collect a few properties of the OU semigroup needed in the sequel (see for instance \cite{boga} for the Sobolev case, and \cite{AMP} for the $BV$ case). \begin{proposition}\label{pammiss1} Let $u:X\to\mathbb{R}$ be bounded and Borel, and $t>0$. 
Then $T_tu\in W^{1,1}(X,\gamma)$ and: \begin{itemize} \item[(a)] if $u\in W^{1,1}(X,\gamma)$ then, componentwise, it holds $\nabla T_tu=e^{-t}T_t\nabla u$; \item[(b)] if $u\in BV(X,\gamma)$ then, componentwise, it holds $\nabla T_tu\, \gamma=e^{-t}T_t^*(D_\gamma u)$. \end{itemize} \end{proposition} The next result is basically contained in \cite[Proposition~5.4.8]{boga}, see also \cite[Proposition~2.2]{ambfig10} for a detailed proof. We state it in order to emphasize that, for $\gamma_Y$-a.e. $y \in Y$, the regular version of the restriction of $T_tf$ to $y+F$ (provided by the above proposition) is precisely the one pointwise defined in Mehler's formula. \begin{proposition} \label{pbogaregu} Let $u$ be a bounded Borel function and $t>0$. With the above notation, for $\gamma_Y$-a.e. $y\in Y$ the map $z\mapsto T_t u(z,y)$ is smooth in $F$. \end{proposition} The next lemma provides a rate of convergence of $T_t u$ to $u$ when $u$ belongs to $BV(X,\gamma)$; the proof follows the lines of the proof of Poincar\'e inequalities, see \cite[Lemma~2.3]{ambfig10}, \cite[Theorem~5.5.11]{boga}. \begin{lemma}\label{lpoincare} Let $u\in BV(X,\gamma)$. Then \begin{equation}\label{poincarestr} \int_X\int_X |u(x)-u(e^{-t}x+\sqrt{1-e^{-2t}}y)|\,d\gamma(x)d\gamma(y)\leq c_t |D_\gamma u|(X) \end{equation} with $c_t:=\sqrt{\frac{2}{\pi}}\int_0^t\frac{e^{-s}}{\sqrt{1-e^{-2s}}}\,ds$, $c_t\sim 2\sqrt{t/\pi}$ as $t\downarrow 0$. In particular $$ \int_X|T_tu-u|\,d\gamma\leq c_t|D_\gamma u|(X). $$ \end{lemma} Let us now recall the fundamental facts about sets of locally finite perimeter $E$ in $\mathbb{R}^m$. De~Giorgi called \emph{reduced} boundary of $E$ the set $\mathcal F E$ of points in the support of $|D\chi_E|$ satisfying $$ \exists\,\,\nu_E(x):=\lim_{r\downarrow 0}\frac{D\chi_E(B_r(x))}{|D\chi_E|(B_r(x))} \quad\text{and}\quad |\nu_E(x)|=1. $$ By the Besicovitch theorem, $|D\chi_E|$ is concentrated on $\mathcal F E$ and $D\chi_E=\nu_E|D\chi_E|$.
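Mehler's formula \eqref{mehler} and the identity $T_t\chi_E(x)=\gamma(E_{x,t})$ can be made concrete in one dimension. For the halfspace $E=[0,\infty)$ in $(\mathbb{R},\gamma_1)$ a direct computation gives $T_t\chi_E(x)=\Phi\bigl(e^{-t}x/\sqrt{1-e^{-2t}}\bigr)$, with $\Phi$ the standard Gaussian distribution function; in particular $T_t\chi_E=1/2$ identically at the boundary point $x=0$, consistent with \eqref{eq:ambfig10}. The sketch below (our own illustration, not part of the text) checks this closed form against a Monte Carlo evaluation of \eqref{mehler}:

```python
import math
import random

def Phi(u):
    """Standard Gaussian distribution function."""
    return 0.5*(1.0 + math.erf(u/math.sqrt(2.0)))

def mehler_halfspace(x, t):
    """Closed form of T_t chi_{[0,inf)}(x) in one dimension."""
    return Phi(math.exp(-t)*x/math.sqrt(1.0 - math.exp(-2.0*t)))

def mehler_mc(x, t, n=200000, seed=0):
    """Monte Carlo evaluation of Mehler's formula (mehler)."""
    rng = random.Random(seed)
    a, b = math.exp(-t), math.sqrt(1.0 - math.exp(-2.0*t))
    hits = sum(1 for _ in range(n) if a*x + b*rng.gauss(0.0, 1.0) >= 0.0)
    return hits/n

t = 0.1
assert mehler_halfspace(0.0, t) == 0.5   # density 1/2 at the boundary point
assert abs(mehler_mc(1.0, t) - mehler_halfspace(1.0, t)) < 5e-3
assert mehler_halfspace(1.0, 1e-4) > 0.99   # T_t chi_E -> 1 at interior points
```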
The main results of \cite{DeG1} are: first, the blown-up sets \begin{equation}\label{blowneu} \frac{E-x}{r} \end{equation} converge as $r\downarrow 0$ locally in measure, and therefore in $L^1(G_m\Leb{m})$, to the halfspace $S_{\nu_E(x)}$ having $\nu_E$ as inner normal; second, this information can be used to show that $\mathcal F E$ is \emph{countably $\Haus{m-1}$-rectifiable}, namely there exist countably many $C^1$ hypersurfaces $\Gamma_i\subset\mathbb{R}^m$ such that $$ \Haus{m-1}\biggl(\mathcal F E\setminus\bigcup_i\Gamma_i\biggr)=0. $$ In the following results we assume that $(X,\gamma)$ is an $m$-dimensional Gaussian space; if we endow $X$ with the Cameron-Martin distance $d$, then $(X,\gamma,d)$ is isomorphic to $(\mathbb{R}^m,G_m\Leb{m},\|\cdot\|)$, $\|\cdot\|$ being the euclidean distance. Under this isomorphism, we have $D_\gamma\chi_E=G_mD\chi_E$ whenever $E$ has finite Gaussian perimeter, so that $|D\chi_E|$ is finite on bounded sets and $E$ has locally finite Euclidean perimeter. Since this isomorphism is canonical, we can and shall use it to define $\mathcal F E$ also for sets with finite perimeter in $(X,\gamma)$ (although a more intrinsic definition along the lines of the appendix of \cite{ambfig10} could be given). Having in mind the Ornstein-Uhlenbeck semigroup, the scaling \eqref{blowneu} now becomes \begin{equation}\label{blowngau} E_{x,t}:=\frac{E-e^{-t}x}{\sqrt{1-e^{-2t}}}, \end{equation} so that $$ T_t\chi_E(x)=\gamma(E_{x,t}). $$ It corresponds to the scaling \eqref{blowneu} with $r=\sqrt{1-e^{-2t}}\sim\sqrt{2t}$ and with eccentric balls, whose eccentricity equals $x(e^{-t}-1)$. Since $e^{-t}-1=O(t)=o(r)$, this eccentricity has no effect in the limit and allows one to rewrite, arguing as in \cite[Proposition~3.1]{ambfig10}, the Euclidean statement in Gaussian terms: \begin{proposition} Let $(X,\gamma)$ be an $m$-dimensional Gaussian space and $E\subset X$ of finite Gaussian perimeter. Then, for $|D_\gamma\chi_E|$-a.e.
$x\in X$ the rescaled sets $E_{x,t}$ in \eqref{blowngau} converge in $L^2(\gamma)$ to $S_{\nu_E(x)}$. \end{proposition} This way, we easily obtain the finite-dimensional version of Theorem~\ref{main}. As in \cite{ambfig10}, the following lemma (stated with the outer integral in order to avoid measurability issues) plays a crucial role in the extension to infinite dimensions: \begin{lemma}\label{lem:3.4} Let $(X,\gamma)$ be a finite-dimensional Gaussian space, let $(Y,{\cal F},\mu)$ be a probability space and, for $t>0$ and $y\in Y$, let $g_{t,y}:X\to [0,1]$ be Borel maps. Assume also that: \begin{itemize} \item[(a)]$\{\sigma_y\}_{y\in Y}$ are positive finite Borel measures in $X$, with $\int_Y^*\sigma_y(X)\,d\mu(y)$ finite; \item[(b)] $\sigma_y=G_m\Haus{m-1}\res\Gamma_y$ for $\mu$-a.e. $y$, with $\Gamma_y$ countably $\Haus{m-1}$-rectifiable. \end{itemize} Then \begin{equation}\label{hino7} \limsup_{t\downarrow 0}\int_Y^*\int_X T_t g_{t,y}(x)\,d\sigma_y(x)d\mu(y)\leq\limsup_{t\downarrow 0} \frac{1}{\sqrt{t}}\int_Y^*\int_X g_{t,y}(x)\,d\gamma(x) d\mu(y). \end{equation} \end{lemma} The proof, given in detail in \cite[Lemma~3.4]{ambfig10}, relies on the heuristic idea that in an $m$-dimensional Gaussian space $(X,\gamma)$, for the adjoint semigroup $T_t^*$ (i.e. the one acting on measures) we have $$ \sqrt{t} T_t^*(G_m\Haus{m-1}\res\Gamma)\leq (1+o(1))\gamma $$ whenever $\Gamma$ is a $C^1$ hypersurface. This is due to the fact that, in the case when $\Gamma$ is flat, i.e. $\Gamma$ is an affine hyperplane, the asymptotic estimate above holds, and that for a non-flat surface only lower order terms appear. In the flat case, using invariance under rotation and factorization of the semigroup (see the next section) one is left with the estimate of $\sqrt{t}T_t^*\sigma$ when $X=\mathbb{R}$ and $\sigma$ is a Dirac mass.
Then, considering for instance $\sigma=\delta_0$, a simple computation gives $$ \sqrt{t}T_t^*(G_m(0)\delta_0)=\frac{1}{2\pi} \frac{\sqrt{t}}{\sqrt{1-e^{-2t}}} e^{-|y|^2/(1-e^{-2t})}\Leb{1}\leq \frac{1}{2\sqrt{2\pi}}\gamma+o(1) \quad\text{as $t\downarrow 0$.} $$ (See the proof of \cite[Lemma~3.4]{ambfig10} for more details.) \subsection{Factorization of $T_t$ and $D_\gamma u$} Let us consider the decomposition $X=F\oplus Y$, with $F\subset\tilde{H}$ finite-dimensional. Denoting by $T_t^F$ and $T_t^Y$ the OU semigroups in $F$ and $Y$ respectively, it is easy to check (for instance first on products of cylindrical functions on $F$ and $Y$, and then by linearity and density) that also the action of $T_t$ can be ``factorized'' in the coordinates $x=(z,y)\in F\times Y$ as follows: \begin{equation}\label{factorization} T_tf(z,y)=T_t^Y\bigl(w\mapsto T_t^F f(\cdot,w)(z)\bigr)(y) \end{equation} for any bounded Borel function $f$. Let us now discuss the factorization properties of $D_\gamma u$. First of all, we can write $D_\gamma u=\nu_u|D_\gamma u|$ with $\nu_u:X\to H$ a Borel vector field satisfying $|\nu_u|_H=1$ $|D_\gamma u|$-a.e. Moreover, given a Borel set $B$, define $$ B_y:=\left\{z\in F:\ (z,y)\in B\right\},\qquad B_z:=\left\{y\in Y:\ (z,y)\in B\right\}. $$ The identity \begin{equation}\label{hino440} \int_B|\pi_F(\nu_u)|\,d|D_\gamma u|=\int_Y|D_{\gamma_F}u(\cdot,y)|(B_y)\,d\gamma_Y(y) \end{equation} is proved in \cite[Theorem~4.2]{AMP} (see also \cite{AMMP,hin09set} for analogous results), where $\pi_F:H\to F$ is the orthogonal projection. Along similar lines, one can also show the identity \begin{equation}\label{hino441} \int_B|\pi_{F^\perp}(\nu_u)|\,d|D_\gamma u|=\int_F|D_{\gamma_Y}u(z,\cdot)|(B_z)\,d\gamma_F(z) \end{equation} with $\pi_F+\pi_{F^\perp}={\rm Id}$.
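The factorization \eqref{factorization} admits a quick numerical sanity check in the toy case $X=\mathbb{R}^2$, $F=Y=\mathbb{R}$. The sketch below is our own illustration, not part of the argument; the quadrature order and the test function are arbitrary choices. It evaluates $T_t f$ both directly on $\mathbb{R}^2$ and through the nested one-dimensional Mehler formulas, using Gauss--Hermite quadrature for the Gaussian integrals:

```python
import numpy as np

# Gauss-Hermite rule transported to the standard Gaussian weight:
# \int f dgamma  ~=  (1/sqrt(pi)) * sum_i w_i f(sqrt(2) x_i)
nodes, weights = np.polynomial.hermite.hermgauss(40)
pts = np.sqrt(2.0) * nodes
wts = weights / np.sqrt(np.pi)

def ou_1d(f, x, t):
    """Mehler formula: T_t f(x) = int f(e^{-t} x + sqrt(1-e^{-2t}) x') dgamma(x')."""
    r = np.sqrt(1.0 - np.exp(-2.0 * t))
    return np.sum(wts * f(np.exp(-t) * x + r * pts))

# A non-product test function on R^2 = F x Y, a base point and a time.
f = lambda z, y: np.cos(z + 2.0 * y) + z * y**2
z0, y0, t = 0.3, -0.7, 0.25

# Direct two-dimensional computation of T_t f(z0, y0).
r = np.sqrt(1.0 - np.exp(-2.0 * t))
direct = sum(
    wz * wy * f(np.exp(-t) * z0 + r * pz, np.exp(-t) * y0 + r * py)
    for pz, wz in zip(pts, wts)
    for py, wy in zip(pts, wts)
)

# Factorized computation: first T_t^F in the z-variable, then T_t^Y in y.
inner = np.vectorize(lambda w: ou_1d(lambda z: f(z, w), z0, t))
factorized = ou_1d(inner, y0, t)

print(abs(direct - factorized))  # agreement up to rounding errors
```

Since the two computations rearrange the same tensor-product quadrature sums, they agree up to floating-point rounding.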
In the particular case $u=\chi_E$, with the notation \begin{equation}\label{hino16} E_y:=\left\{z\in F:\ (z,y)\in E\right\},\qquad E_z:=\left\{y\in Y:\ (z,y)\in E\right\} \end{equation} the identities \eqref{hino440} and \eqref{hino441} read respectively as \begin{equation}\label{hino40} \int_B|\pi_F(\nu_E)|\,d|D_\gamma \chi_E|=\int_Y|D_{\gamma_F}\chi_{E_y}|(B_y)\,d\gamma_Y(y) \qquad\text{for all $B$ Borel,} \end{equation} \begin{equation}\label{hino41} \int_B|\pi_{F^\perp}(\nu_E)|\,d|D_\gamma \chi_E|=\int_F|D_{\gamma_Y}\chi_{E_z}|(B_z)\,d\gamma_F(z) \qquad\text{for all $B$ Borel} \end{equation} with $D_\gamma\chi_E=\nu_E|D_\gamma\chi_E|$. \begin{remark}\label{rtoomuch}{\rm Having in mind \eqref{hino40} and \eqref{hino41}, it is tempting to think that the formula holds for any orthogonal decomposition of $H$ (so, not only when $F\subset\tilde{H}$), or even when none of the parts is finite-dimensional. In order to avoid merely technical complications we shall not treat this issue here because, in this more general situation, the ``projection maps'' $x\mapsto y$ and $x\mapsto z$ are no longer continuous. However, the problem can be solved by removing sets of small capacity, see for instance \cite{feydelpra} for a more detailed discussion.} \end{remark} \subsection{Finite-codimension Hausdorff measures} Following \cite{feydelpra}, we start by introducing pre-Hausdorff measures which, roughly speaking, play the same role as the pre-Hausdorff measures $\Haus{n}_\delta$ in the finite-dimensional theory. Let $F\subset\tilde{H}$ be a finite-dimensional subspace of dimension $m$, and for $k\in \mathbb{N}$, $0 \leq k \leq m$, we define (with the notation of the previous section) \begin{equation}\label{feydel} \Haus{\infty-k}_F(B):=\int_Y\int_{B_y} G_m\,d\Haus{m-k}\,d\gamma_Y(y) \qquad\text{for all $B$ Borel,} \end{equation} where $G_m$ is the standard Gaussian density in $F$ (so that $\Haus{\infty-0}_F=\gamma$).
It is proved in \cite{feydelpra} that $y\mapsto\int_{B_y} G_m\,d\Haus{m-k}$ is $\gamma_Y$-measurable whenever $B$ is Suslin (so, in particular, when $B$ is Borel), therefore the integral makes sense. The first key monotonicity property noticed in \cite{feydelpra}, based on \cite[2.10.27]{fed}, is $$ \Haus{\infty-k}_F(B)\leq\Haus{\infty-k}_G(B)\qquad\text{whenever $F\subset G\subset\tilde{H}$}, $$ provided $\Haus{m-k}$ in \eqref{feydel} is understood as the \emph{spherical} Hausdorff measure of dimension $m-k$ in $F$. This naturally leads to the definition \begin{equation}\label{hino50} \Haus{\infty-k}(B):=\sup_F\Haus{\infty-k}_F(B),\qquad\text{$B$ Borel,} \end{equation} where the supremum runs among all finite-dimensional subspaces $F$ of $\tilde H$. Notice however that, strictly speaking, the measure defined in \eqref{hino50} does not coincide with the one in \cite{feydelpra}, since all finite-dimensional subspaces of $H$ are considered therein. We make the restriction to finite-dimensional subspaces of $\tilde{H}$ for the reasons explained in Remark~\ref{rtoomuch}. However, $\Haus{\infty-k}$ is still defined in a coordinate-free fashion. These measures have been related for the first time to the perimeter measure $D_\gamma\chi_E$ in \cite{hin09set}.
Hino defined the $F$-essential boundaries (obtained by collecting the essential boundaries of the finite-dimensional sections $E_y\subset F\times \{y \}$) \begin{equation}\label{hino17} \partial_F^*E:=\left\{(z,y):\ z\in\partial^* E_y\right\} \end{equation} and noticed another key monotonicity property (see also \cite[Theorem~5.2]{AMP}) \begin{equation}\label{hino19} \Haus{\infty-1}_F(\partial^*_FE\setminus\partial^*_GE)=0 \qquad\text{whenever $F\subset G\subset \tilde H$.} \end{equation} Then, choosing a sequence ${\cal F}=\{F_1,F_2,\ldots\}$ of finite-dimensional subspaces of $\tilde{H}$ whose union is dense, he defined \begin{equation}\label{hino70} \Haus{\infty-1}_{{\cal F}}:=\sup_n\Haus{\infty-1}_{F_n},\qquad \partial_{{\cal F}}^*E:=\liminf_{n\to\infty}\partial^*_{F_n}E, \end{equation} and showed that \begin{equation}\label{hino60} |D_\gamma\chi_E|=\Haus{\infty-1}_{{\cal F}}\res\partial_{{\cal F}}^*E. \end{equation} In order to prove our main result we will follow Hino's procedure, but working with the reduced boundaries in place of the essential boundaries. \subsection{Halfspaces}\label{sec:halfspaces} Let $h\in H$ and $\hat h$ be its corresponding element in $L^2(X,\gamma)$. Then there exists a linear subspace $X_0\subset X$ such that $\gamma(X\setminus X_0)=0$ and a representative of $\hat h$ which is linear in $X_0$. Indeed, let $\hat h_n\to\hat h$ in $L^2(X,\gamma)$ with $\hat h_n\in X^*$. It is not restrictive to assume that $\hat h_n \to\hat h$ $\gamma$-a.e. in $X$, so if we define $$ X_0:=\left\{x\in X:\ \text{$\hat{h}_n(x)$ is a Cauchy sequence}\right\} $$ we find that $X_0$ is a vector space of full $\gamma$-measure and that the pointwise limit of $\hat{h}_n$ provides a version of $\hat h$, linear in $X_0$. Having this fact in mind, it is natural to define halfspaces in the following way.
\begin{definition} Given a unit vector $h\in H$ we shall denote by $S_h$ the halfspace having $h$ as ``inner normal'', namely \begin{equation}\label{def:Sh} S_h:=\left\{x\in X:\ \hat{h}(x)>0\right\}. \end{equation} \end{definition} \begin{proposition} \label{prop:basichalfspace} For any halfspace $S_h$ we have $\gamma(S_h)=1/2$, $P(S_h)=\sqrt{1/(2\pi)}$, and $D\chi_{S_h}=h|D\chi_{S_h}|$. Furthermore, the following implication holds: $$ \lim_{n\to\infty}|h_n-h|=0\qquad\Longrightarrow\qquad \lim_{n\to\infty}\chi_{S_{h_n}}=\chi_{S_h}\quad\text{in $L^1(X,\gamma)$.} $$ \end{proposition} \begin{proof} Let us first show that convergence of $h_n$ to $h$ implies convergence of the corresponding halfspaces. Since for all $\varepsilon>0$ it holds $$ \{\hat{h}_n>0\}\setminus\{\hat{h}>0\}\subset \bigl(\{\hat{h}_n>0\}\setminus\{\hat{h}>-\varepsilon\}\bigr)\cup\{\hat{h}\in (-\varepsilon,0)\}\subset \{|\hat{h}_n-\hat{h}|>\varepsilon\}\cup\{\hat{h}\in (-\varepsilon,0)\} $$ and since the convergence of $\hat{h}_n$ to $\hat{h}$ in $L^2(X,\gamma)$ implies $\gamma(\{|\hat{h}_n-\hat{h}|>\varepsilon\})\to 0$ we obtain $$ \limsup_{n\to\infty}\gamma(\{\hat{h}_n>0\}\setminus\{\hat{h}>0\})\leq\gamma(\hat{h}^{-1}(-\varepsilon,0)). $$ Now, since $\hat{h}$ has a standard Gaussian law and $\varepsilon$ is arbitrary it follows that $\gamma(\{\hat{h}_n>0\}\setminus\{\hat{h}>0\})\to 0$. A similar argument (because the laws of all $\hat{h}_n$ are standard Gaussian) yields $\gamma(\{\hat{h}>0\}\setminus\{\hat{h}_n>0\})\to 0$. Now, if $\gamma$ is the standard Gaussian in $X=H=\mathbb{R}^n$ and $S_h$ is a halfspace, it is immediate to check that $\gamma(S_h)=1/2$. In addition, since $D_\gamma \chi_{S_h}=h|D_\gamma \chi_{S_h}|$ and $\hat{h}(x)=\langle h,x\rangle$, we can use $E=S_h$ and $\phi\equiv 1$ in the integration by parts formula $$ \int_{E}\partial_h\phi\, d\gamma+\int_X\phi\,d\langle h, D_\gamma\chi_E\rangle=\int_{E}\hat{h}\,\phi\,d\gamma $$ to get $|D_\gamma \chi_{S_h}|(X)=\int_{S_h}\langle h,x\rangle\,d\gamma(x)=\sqrt{1/(2\pi)}$.
By a standard cylindrical approximation we obtain that $\gamma(S_h)=\tfrac12$, $S_h$ has finite perimeter, and $D\chi_{S_h}=h|D\chi_{S_h}|$ in the general case. \end{proof} \subsection{Convergence to halfspaces}\label{sec:conv-halfspaces} In this section we prove Theorem~\ref{main}. We consider an increasing family of subspaces $F_n\subset\tilde H$ and, for any $n$, we consider the corresponding decomposition $x=(x_1,x_2)$ with $x_1\in F_n$ and $x_2\in Y_n$. Denote by $\gamma=\gamma_n\times\gamma_n^\perp$ the corresponding factorization of $\gamma$. Then, adapting the definition of boundary given in Hino's work \cite{hin09set} (with reduced in place of essential boundary) we define $$ \mathcal F_HE:=\liminf_{n\to\infty}B_n\qquad\text{where}\qquad B_n=\left\{x=(x_1,x_2):\ x_1\in\mathcal FE_{x_2}\right\} $$ (recall that $E_{x_2}=\left\{x_1\in F_n:\ (x_1,x_2)\in E\right\}$). We also set $C_n=\bigcap_{m\geq n}B_m$, so that $C_n\uparrow\mathcal F_HE$ as $n\to\infty$. Recall that by \eqref{hino440} the measure $\sigma_n:=|\pi_{F_n}(\nu_E)||D_\gamma\chi_E|$ is concentrated on $B_n$, because by De~Giorgi's theorem the derivative of finite-dimensional sets of finite perimeter is concentrated on the reduced boundary. Since $\sigma_n$ is nondecreasing with respect to $n$, $\sigma_n$ is concentrated on all sets $B_m$ with $m\geq n$, and therefore on $C_n$. It follows that $|D_\gamma\chi_E|=\sup_n\sigma_n$ is concentrated on $\mathcal F_HE$, one of the basic observations in \cite{hin09set}. Let us denote by $\nu_n(x)=\nu_n(x_1,x_2)$ the approximate unit normal to $E_{x_2}$ at $x_1$. Notice that, in this way, $\nu_n$ is pointwise defined at all points $x\in B_n$ and $D_{\gamma_n}\chi_{E_{x_2}}=\nu_n(x)|D_{\gamma_n}\chi_{E_{x_2}}|$ (again by De~Giorgi's finite-dimensional result).
Since the identity (an easy consequence of Fubini's theorem) $$ \pi_{F_n}(D_\gamma\chi_E)=D_{\gamma_n}\chi_{E_{x_2}}\gamma^\perp_n $$ and the definition of $\nu_n$ give $$ \pi_{F_n}(\nu_E)|D_\gamma\chi_E|=D_{\gamma_n}\chi_{E_{x_2}}\gamma^\perp_n=\nu_n|D_{\gamma_n} \chi_{E_{x_2}}|\gamma^\perp_n $$ we can use \eqref{hino440} once more to get $$ \pi_{F_n}(\nu_E)|D_\gamma\chi_E|=\nu_n|\pi_{F_n}(\nu_E)||D_\gamma\chi_E|, $$ so that $\nu_n=\pi_{F_n}(\nu_E)/|\pi_{F_n}(\nu_E)|$ $\sigma_n$-a.e. in $X$. Since $\sigma_n\uparrow |D_\gamma\chi_E|$ as $n\to\infty$, it follows that on each set $C_n$ the function $\nu_m$ is defined for $m\geq n$, and converges to $\nu_E$ as $m\to\infty$ $|D_\gamma\chi_E|$-a.e. on $C_n$. Then, Proposition \ref{prop:basichalfspace} and the convergence of $\nu_n$ give \begin{equation}\label{eq:error1} \lim_{n\to\infty}\int_X\int_X|\chi_{S_{\nu_n}}-\chi_{S_{\nu_E}}|\,d\gamma\,d\sigma_n=0. \end{equation} In addition, by the finite-dimensional result of convergence to halfspaces, we get \begin{equation} \lim_{t\downarrow 0}\int_X\int_{F_n}\left|\chi_{E_{x_2}}(e^{-t}x_1+\sqrt{1-e^{-2t}}x_1')-\chi_{\tilde S_{\nu_n(x)}}(x_1')\right|\,d\gamma_n(x_1')\,d\sigma_n(x)=0, \end{equation} where $\tilde S_{\nu_n}$ is the projection of $S_{\nu_n}$ on $F_n$. Now, notice that $S_{\nu_n}=\tilde S_{\nu_n}\times Y_n$, since $\nu_n\in F_n$. This observation, in combination with \eqref{eq:error1}, gives that $$ \limsup_{t\downarrow 0}\int_X\int_X\left|\chi_{E_{x_2}}(e^{-t}x_1+\sqrt{1-e^{-2t}}x_1')-\chi_{S_{\nu_E(x)}}(x')\right|\,d\gamma(x')\,d\sigma_n(x) $$ is infinitesimal as $n\to\infty$. Therefore to prove \eqref{eq:main} it suffices to show that \begin{equation}\label{eq:errorefinale} \limsup_{t\downarrow 0}\int_X\int_X\left|\chi_{E_{x_2}}(e^{-t}x_1+\sqrt{1-e^{-2t}}x_1')-\chi_E(e^{-t}x+\sqrt{1-e^{-2t}}x')\right| \,d\gamma(x')\,d\sigma_n(x) \end{equation} is infinitesimal as $n\to\infty$.
In order to show this last fact, using again $\sigma_n=|D_{\gamma_n}\chi_{E_{x_2}}|\gamma^\perp_n$, we can write the expression as $$ \limsup_{t\downarrow 0}\int_{Y_n}\int_{F_n}T^{F_n}_t g_t(x_1,x_2)\,d|D_{\gamma_n}\chi_{E_{x_2}}|(x_1)\,d\gamma^\perp_n(x_2) $$ with $g_t(x_1,x_2):=\int_{Y_n}\left|\chi_E(x_1,x_2')-\chi_E(x_1,e^{-t}x_2+\sqrt{1-e^{-2t}}x_2')\right|\,d\gamma_n^\perp(x_2')$. As in \cite{ambfig10} we now use Lemma~\ref{lem:3.4} and the rectifiability of the measures $|D_{\gamma_n}\chi_{E_{x_2}}|$ to bound the limsup above by \begin{equation}\label{eq:errorefinale1} \limsup_{t\downarrow 0}\int_{Y_n}\int_{F_n}\frac{g_t(x_1,x_2)}{\sqrt{t}}\,d\gamma_n(x_1)\,d\gamma^\perp_n(x_2). \end{equation} Now we integrate w.r.t. $\gamma_n$ the inequality (ensured by \eqref{poincarestr}) $$ \int_{Y_n}g_t(x_1,x_2)\,d\gamma_n^\perp(x_2)\leq c\sqrt{t}|D_{\gamma^\perp_n}\chi_{E_{x_1}}|(Y_n), $$ valid for all $x_1$ such that $E_{x_1}$ has finite perimeter in $(Y_n,\gamma^\perp_n)$, to bound the $\limsup$ in \eqref{eq:errorefinale1} by $$c\int_{F_n}|D_{\gamma^\perp_n}\chi_{E_{x_1}}|(Y_n)\,d\gamma_n(x_1)=c\int_X|\pi_{F_n}^\perp(\nu_E)|\,d|D_\gamma\chi_E|.$$ Since $|\pi_{F_n}^\perp\nu_E|\downarrow 0$ as $n\to\infty$, this concludes the proof. \bibliographystyle{plain}
\section{Introduction} Random key graphs are random graphs that belong to the class of random intersection graphs \cite{SingerThesis}; in fact, some authors call them uniform random intersection graphs \cite{GodehardtJaworski, GodehardtJaworskiRybarczyk}. They have appeared recently in application areas as diverse as clustering analysis \cite{GodehardtJaworski, GodehardtJaworskiRybarczyk}, collaborative filtering in recommender systems \cite{Marbach2008} and random key predistribution for wireless sensor networks (WSNs) \cite{EschenauerGligor}. In this last context, random key graphs naturally occur in the study of the following random key predistribution scheme introduced by Eschenauer and Gligor \cite{EschenauerGligor}: Before deployment, each sensor in a WSN is independently assigned $K$ distinct cryptographic keys which are selected at random from a pool of $P$ keys. These $K$ keys constitute the key ring of the node and are inserted into its memory. Two sensor nodes can then establish a secure link between them if they are within transmission range of each other and if their key rings have at least one key in common; see \cite{EschenauerGligor} for implementation details. If we assume {\em full visibility}, namely that nodes are all within communication range of each other, then secure communication between two nodes requires only that their key rings share at least one key. The resulting notion of adjacency defines the class of random key graphs; see Section \ref{sec:RandomKeyGraph} for precise definitions. Much effort has recently been devoted to developing zero-one laws for the property of connectivity in random key graphs. A key motivation can be found in the need to obtain conditions under which the scheme of Eschenauer and Gligor guarantees secure connectivity with high probability in large networks.
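The key predistribution model just described is straightforward to simulate. The following minimal sketch is our own illustration (function names and parameter values are arbitrary); it samples the key rings and the induced adjacency under the full visibility assumption:

```python
import itertools
import random

def random_key_graph(n, K, P, rng=random):
    """Sample K(n; theta), theta = (K, P): each node independently receives
    a key ring of K distinct keys drawn uniformly from a pool of P keys,
    and two nodes are adjacent iff their key rings intersect."""
    rings = [frozenset(rng.sample(range(P), K)) for _ in range(n)]
    edges = {
        (i, j)
        for i, j in itertools.combinations(range(n), 2)
        if rings[i] & rings[j]
    }
    return rings, edges

rings, edges = random_key_graph(n=10, K=3, P=40, rng=random.Random(0))
print(len(edges))  # number of secure links under full visibility
```

Note that when $P<2K$ any two key rings necessarily intersect, so the sampled graph is complete.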
An interesting feature of this work lies in the following fact: Although random key graphs are {\em not} equivalent to the classical Erd\H{o}s-R\'enyi graphs \cite{ER1960}, it is possible to transfer well-known zero-one laws for connectivity in Erd\H{o}s-R\'enyi graphs to random key graphs by asymptotically matching their edge probabilities. This approach, which was initiated by Eschenauer and Gligor in their original analysis \cite{EschenauerGligor}, has now been validated rigorously; see the papers \cite{BlackburnGerke, DiPietroManciniMeiPanconesiRadhakrishnan2008, Rybarczyk2009, YaganMakowskiISIT2009, YaganMakowskiConnectivity} for recent developments. Furthermore, Rybarczyk \cite{Rybarczyk2009} has shown that this transfer from Erd\H{o}s-R\'enyi graphs also works for a number of issues related to the giant component and its diameter. In view of these successes, it is natural to wonder whether the transfer technique can be applied to other graph properties. In particular, in the literature on random graphs there is a long-standing interest \cite{ER1960, JansonLuczakRucinski, KaronskiScheinermanSingerCohen, PenroseBook, SingerThesis} in the containment of certain (small) subgraphs, the simplest one being the {\em triangle}. This last case has some practical relevance since the number of triangles in a graph is closely related to its clustering properties \cite{YaganMakowskiChennai2009}. With this in mind, we study the zero-one law for the existence of triangles in random key graphs and identify the corresponding critical scaling. From these results we easily conclude that in the many-node regime, the expected number of triangles in random key graphs is always at least as large as the corresponding quantity in asymptotically matched Erd\H{o}s-R\'enyi graphs.
For the parameter range that is of practical relevance in the context of WSNs, this expected number of triangles can be orders of magnitude larger in random key graphs than in Erd\H{o}s-R\'enyi graphs, a fact also observed earlier via simulations in \cite{DiPietroManciniMeiPanconesiRadhakrishnan2008}. As a result, transferring results from Erd\H{o}s-R\'enyi graphs by matching their edge probabilities is not a valid approach in general, and can be quite misleading in the context of WSNs. The zero-one laws obtained here were announced in the conference paper \cite{YaganMakowskiAllerton2009}. The results are established by applying the method of first and second moments to the number of triangles in the graph. As the discussion amply shows, the technical details, especially for the one-law, are quite involved, and an outline of the proofs can be found in \cite{YaganMakowskiAllerton2009}. In line with developments currently available for other classes of graphs, e.g., Erd\H{o}s-R\'enyi graphs \cite[Chap. 3]{JansonLuczakRucinski} and geometric random graphs \cite[Chap. 3]{PenroseBook}, it would be interesting to consider the containment problem for small subgraphs other than triangles in the context of random key graphs. This is likely to be a challenging problem given the difficulties encountered in the simple case of triangles. The paper is organized as follows: In Section \ref{sec:RandomKeyGraph} we formally introduce the class of random key graphs while in Section \ref{sec:MainResults} we present the main results of the paper given as Theorem \ref{thm:ZeroLaw} and Theorem \ref{thm:OneLaw}. Section \ref{sec:ComparingWithERGs} compares these results with the corresponding zero-one law in Erd\H{o}s-R\'enyi graphs. The zero-one laws are established by an application of the method of first and second moments, respectively \cite[p. 55]{JansonLuczakRucinski}.
To that end, in Section \ref{sec:FirstMoment}, we compute the expected value of the number of triangles in random key graphs. Asymptotic results to be used in the proofs of several results are then collected in Section \ref{sec:UsefulAsymptotics} for easy reference. In Section \ref{subsec:ProofTheoremZero}, we give a proof of the zero-law (Theorem \ref{thm:ZeroLaw}) while an outline for the proof of the one-law (Theorem \ref{thm:OneLaw}) is provided in Section \ref{subsec:ProofTheoremOne}. The final sections of the paper, namely Sections \ref{sec:SecondMoment} through \ref{sec:FinalStep}, are devoted to completing the various steps of the proof of Theorem \ref{thm:OneLaw}. Additional technical derivations are given in Appendices \ref{Appendix:A}, \ref{Appendix:B} and \ref{Appendix:C}. A word on the notation and conventions in use: All limiting statements, including asymptotic equivalences, are understood with $n$ going to infinity. The random variables (rvs) under consideration are all defined on the same probability triple $(\Omega, {\cal F}, \mathbb{P})$. Probabilistic statements are made with respect to this probability measure $\mathbb{P}$, and we denote the corresponding expectation operator by $\mathbb{E}$. The indicator function of an event $E$ is denoted by $\1{E}$. For any discrete set $S$ we write $|S|$ for its cardinality. \section{Random key graphs} \label{sec:RandomKeyGraph} The model is parametrized by the number $n$ of nodes, the size $P$ of the key pool and the size $K$ of each key ring with $K \leq P$. We often group the integers $P$ and $K$ into the ordered pair $\theta \equiv (K,P)$ in order to simplify the notation. Now, for each node $i=1, \ldots , n$, let $K_i(\theta)$ denote the random set of $K$ distinct keys assigned to node $i$ and let $\mathcal{P}$ be the set of all keys. 
The rvs $K_1(\theta), \ldots , K_n(\theta)$ are assumed to be {\em i.i.d.} rvs, each of which is {\em uniformly} distributed with \begin{equation} \bP{ K_i(\theta) = S } = {P \choose K} ^{-1}, \qquad i=1,\ldots, n \label{eq:KeyDistrbution1} \end{equation} for any subset $S$ of $\mathcal{P}$ which contains exactly $K$ elements. This corresponds to selecting keys randomly and {\em without} replacement from the key pool. Distinct nodes $i,j=1, \ldots , n$ are said to be adjacent if they share at least one key in their key rings, namely \begin{equation} K_i (\theta) \cap K_j (\theta) \not = \emptyset , \label{eq:KeyGraphConnectivity} \end{equation} in which case an undirected link is assigned between nodes $i$ and $j$. The resulting random graph defines the {\em random key graph} on the vertex set $\{ 1, \ldots , n\}$, hereafter denoted $\mathbb{K}(n; \theta )$. For distinct $i,j =1, \ldots , n$, it is easy to check that \begin{equation} \bP{ K_i (\theta) \cap K_j (\theta) = \emptyset } = q(\theta) \end{equation} with \begin{equation} q (\theta) := \left \{ \begin{array}{ll} 0 & \mbox{if~ $P <2K$} \\ & \\ \frac{{P-K \choose K}}{{P \choose K}} & \mbox{if~ $2K \leq P$,} \end{array} \right . \label{eq:q_theta} \end{equation} whence the probability of edge occurrence between any two nodes is equal to $1-q(\theta)$. The expression given in (\ref{eq:q_theta}) is a simple consequence of the often used fact that \begin{equation} \bP{ S \cap K_i(\theta) = \emptyset } = \frac{{P- |S| \choose K}}{{P \choose K}}, \quad i=1, \ldots ,n \label{eq:Probab_key_ring_does_not_intersect_S} \end{equation} for every subset $S$ of $\{ 1, \ldots , P \}$ with $|S| \leq P-K$. Note that if $P<2K$ there exists an edge between any pair of nodes, so that $\mathbb{K}(n;\theta)$ coincides with the complete graph $K_{n}$. Also, we always have $0 \leq q(\theta) < 1 $ with $q(\theta)> 0$ if and only if $2K \leq P$. 
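The expression (\ref{eq:q_theta}) for $q(\theta)$ is easy to validate by Monte Carlo sampling of two independent key rings. The sketch below is our own check, with arbitrary parameter values and seed; it compares the combinatorial formula with an empirical frequency:

```python
import random
from math import comb

def q_theta(K, P):
    """q(theta): probability that two independent key rings are disjoint."""
    return comb(P - K, K) / comb(P, K) if P >= 2 * K else 0.0

rng = random.Random(1)
K, P, trials = 4, 30, 200_000
disjoint = sum(
    1
    for _ in range(trials)
    if not set(rng.sample(range(P), K)) & set(rng.sample(range(P), K))
)
print(q_theta(K, P), disjoint / trials)  # the two values should nearly agree
```

For $P<2K$ the rings cannot be disjoint, which the formula handles by returning $0$.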
\section{The main results} \label{sec:MainResults} Pick positive integers $K$ and $P$ such that $K \leq P$. Fix $n=3, 4, \ldots $ and for distinct $i, j, k= 1, \ldots, n$, define the indicator function \[ \chi_{n,{ijk}}(\theta) := \1{ {\rm Nodes~}i, j ~{\rm and~} k~{\rm form~a~triangle~in~} \mathbb{K}(n; \theta) }. \] The number of (unlabelled) triangles in $\mathbb{K}(n; \theta)$ is simply given by \begin{equation} T_n (\theta) := \sum_{(ijk)} \chi_{n,{ijk}}(\theta) \label{eq:NumberOfTriangles} \end{equation} where $\sum_{(ijk)}$ denotes summation over all distinct triples $ijk$ with $1 \leq i < j<k \leq n$. The event $T(n, \theta)$ that there exists at least one triangle in $\mathbb{K}(n; \theta )$ is then characterized by \begin{equation} T(n, \theta) := [ T_n (\theta) > 0 ] = [ T_n (\theta) = 0 ]^{c}. \label{eq:triangle_basis} \end{equation} The main result of the paper is a zero-one law for the existence of triangles in random key graphs. To state these results we find it convenient to make use of the quantity \begin{equation} \tau(\theta) := \frac{K^3}{P^2} + \left ( \frac{K^2}{P} \right )^3, \quad \theta = (K,P) \label{eq:Tau(Theta)} \end{equation} with positive integers $K$ and $P$ such that $K \leq P$. For simplicity of exposition we refer to any pair of functions $P,K: \mathbb{N}_0 \rightarrow \mathbb{N}_0$ as a {\em scaling} provided the natural conditions \begin{equation} K_n \leq P_n, \quad n=2, 3, \ldots \label{eq:ScalingCondition} \end{equation} are satisfied. The zero-law is given first. \begin{theorem} {\sl For any scaling $P,K: \mathbb{N}_0 \rightarrow \mathbb{N}_0$, we have the zero-law \begin{equation} \lim_{n \rightarrow \infty } \bP{T(n, \theta_n)} = 0 \label{eq:MainTheoremZero} \end{equation} under the condition \begin{equation} \lim_{n \rightarrow \infty } n^3 \tau(\theta_n) = 0 . \label{eq:ConditionForZero} \end{equation} } \label{thm:ZeroLaw} \end{theorem} The one-law given next assumes a more involved form. 
\begin{theorem} {\sl For any scaling $P,K: \mathbb{N}_0 \rightarrow \mathbb{N}_0$ for which the limit $\lim_{n \rightarrow \infty } q(\theta_n) = q ^\star $ exists, we have the one-law \begin{equation} \lim_{n \rightarrow \infty } \bP{T(n, \theta_n)} = 1 \label{eq:MainTheoremOne} \end{equation} either if $ 0 \leq q^\star < 1$ or if $q^\star =1$ under the condition \begin{equation} \lim_{n \rightarrow \infty } n^3 \tau(\theta_n) = \infty . \label{eq:ConditionForOne} \end{equation} } \label{thm:OneLaw} \end{theorem} Theorem \ref{thm:ZeroLaw} and Theorem \ref{thm:OneLaw} will be established by the method of first and second moments, respectively \cite[p. 55]{JansonLuczakRucinski}, applied to the count variables defined at (\ref{eq:NumberOfTriangles}). To facilitate comparison with Erd\H{o}s-R\'enyi graphs, we combine Theorem \ref{thm:ZeroLaw} and Theorem \ref{thm:OneLaw} into a symmetric, but somewhat weaker, statement. \begin{theorem} {\sl For any scaling $P,K: \mathbb{N}_0 \rightarrow \mathbb{N}_0$ for which $\lim_{n \rightarrow \infty } q(\theta_n) = 1$, we have \begin{eqnarray} \lim_{n \rightarrow \infty } \bP{ T(n; \theta_n ) } = \left \{ \begin{array}{ll} 0 & \mbox{if~ $\lim_{ n\rightarrow \infty } n^3 \tau (\theta_n) = 0 $} \\ & \\ 1 & \mbox{if~ $\lim_{ n\rightarrow \infty } n^3 \tau (\theta_n) = \infty $.} \end{array} \right . \label{eq:TriangleZeroOneLaw+RKG} \end{eqnarray} } \label{thm:TriangleZeroOneLaw+RKG} \end{theorem} \section{Comparing with Erd\H{o}s-R\'enyi graphs} \label{sec:ComparingWithERGs} In this section we compare Theorem \ref{thm:TriangleZeroOneLaw+RKG} with its analog for Erd\H{o}s-R\'enyi graphs. First some notation: For each $p$ in $[0,1]$ and $n=2,3, \ldots $, let $\mathbb{G}(n;p)$ denote the Erd\H{o}s-R\'enyi graph on the vertex set $\{ 1, \ldots , n\}$ with edge probability $p$. 
In analogy with (\ref{eq:NumberOfTriangles}) and (\ref{eq:triangle_basis}) let $T_n(p)$ denote the number of (unlabelled) triangles in $\mathbb{G}(n; p)$, and define $T(n,p)$ as the event that there exists at least one triangle in $\mathbb{G}(n; p)$, i.e., $T(n,p) = [ T_n(p) > 0 ]$. We also refer to any mapping $p: \mathbb{N}_0 \rightarrow [0,1]$ as a scaling for Erd\H{o}s-R\'enyi graphs. The following zero-one law for the existence of triangles in Erd\H{o}s-R\'enyi graphs is well known \cite{ER1960}. \begin{theorem} {\sl For any scaling $p: \mathbb{N}_0 \rightarrow [0,1]$, we have \begin{eqnarray} \lim_{n \rightarrow \infty } \bP{ T(n; p_n ) } = \left \{ \begin{array}{ll} 0 & \mbox{if~ $\lim_{ n\rightarrow \infty } n^3 \tau ^ \star (p_n) = 0 $} \\ & \\ 1 & \mbox{if~ $\lim_{ n\rightarrow \infty } n^3 \tau ^ \star (p_n) = \infty $} \end{array} \right . \label{eq:ERZeroOneLawTriangle} \end{eqnarray} where \begin{equation} \tau ^ \star (p) := p^3, \quad p \in [0,1] . \end{equation} } \label{thm:ERZeroOneLawTriangle} \end{theorem} As this result is also established by the method of first and second moments, its form is easily understood once we note that \begin{equation} \bE{ T_n(p) } = {n \choose 3} \tau ^ \star (p), \quad 0 \leq p \leq 1 \label{eq:ER+FirstMoment} \end{equation} for all $n=3,4, \ldots$. As mentioned earlier, random key graphs are {\em not} equivalent to Erd\H{o}s-R\'enyi graphs even when their edge probabilities are matched, i.e., $\mathbb{G}(n;p) \neq_{st} \mathbb{K}(n; \theta)$ with $p = 1 - q(\theta)$; see \cite{YaganMakowskiAllerton2009} for a discussion of similarities.
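The first-moment identity (\ref{eq:ER+FirstMoment}) can likewise be checked by simulation. In this sketch (our own illustration, with small parameters chosen only for speed) the empirical mean of the triangle count in $\mathbb{G}(n;p)$ is compared with ${n \choose 3}p^3$:

```python
import random
from itertools import combinations
from math import comb

def er_triangle_count(n, p, rng):
    """Number of triangles in one sample of the Erdos-Renyi graph G(n; p)."""
    adj = {e: rng.random() < p for e in combinations(range(n), 2)}
    return sum(
        adj[(i, j)] and adj[(j, k)] and adj[(i, k)]
        for i, j, k in combinations(range(n), 3)
    )

rng = random.Random(7)
n, p, runs = 12, 0.3, 4_000
empirical = sum(er_triangle_count(n, p, rng) for _ in range(runs)) / runs
print(empirical, comb(n, 3) * p**3)  # empirical mean vs. C(n,3) p^3
```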
However, in order to meaningfully compare the zero-one law of Theorem \ref{thm:ERZeroOneLawTriangle} with that contained in Theorem \ref{thm:TriangleZeroOneLaw+RKG}, we say that the scaling $p: \mathbb{N}_0 \rightarrow [0,1]$ (for Erd\H{o}s-R\'enyi graphs) is {\em asymptotically matched} to the scaling $P,K: \mathbb{N}_0 \rightarrow \mathbb{N}_0$ (for random key graphs) if \begin{equation} p_n \sim 1 - q(\theta_n). \label{eq:AsymptoticMatchingCondition} \end{equation} This is equivalent to requiring that the expected average degrees are asymptotically equivalent. Under the natural condition $\lim_{n \rightarrow \infty} q(\theta_n) = 1$, the matching condition (\ref{eq:AsymptoticMatchingCondition}) amounts to \begin{equation} p_n \sim \frac{K_n^2}{P_n} \label{eq:AsymptoticMatchingEquiv} \end{equation} by virtue of Lemma \ref{lem:AsymptoticEquivalence1}. The definitions readily yield \[ \frac{\tau(\theta_n)} {\tau^\star(p_n)} = \frac{1}{p_n^3} \cdot \left ( \frac{K^3_n}{P^2_n} \right ) + \frac{1}{p_n^3} \cdot \left ( \frac{K^2_n}{P_n} \right )^3, \quad n=2,3, \ldots \] whence \begin{equation} \frac{\tau(\theta_n)}{\tau ^ \star (p_n)} \sim 1 + \frac{P_n}{K_n^3} \label{eq:AsymptoticRatio} \end{equation} under (\ref{eq:AsymptoticMatchingEquiv}). By Proposition \ref{prop:AsymptoticEquivalence2}, this last statement is equivalent to \begin{equation} \frac{ \bE{ T_n(\theta_n) } } { \bE{ T_n(p_n) } } \sim 1 + \frac{P_n}{K_n^3} \label{eq:FirstMomentAsymptoticRatio} \end{equation} as we make use of the expressions (\ref{eq:ER+FirstMoment}) and (\ref{eq:FirstMoment}). In other words, for large $n$ the expected number of triangles in random key graphs is always at least as large as the corresponding quantity in asymptotically matched Erd\H{o}s-R\'enyi graphs. In the context of WSNs, it is natural to select the parameters $K_n$ and $P_n$ of the scheme of Eschenauer and Gligor such that the induced random key graph is {\em connected}. 
However, there is a tradeoff between connectivity and security \cite{DiPietroManciniMeiPanconesiRadhakrishnan2008}, which requires that $\frac{K_n ^ 2}{P_n}$ be kept as close as possible to the critical scaling $\frac{\log n}{n}$ for connectivity; see the papers \cite{BlackburnGerke, DiPietroManciniMeiPanconesiRadhakrishnan2008, Rybarczyk2009, YaganMakowskiISIT2009, YaganMakowskiConnectivity}. In the desired near-boundary regime, this amounts to \begin{equation} \frac{K_n ^ 2 }{ P_n } \sim c \cdot \frac{ \log n }{ n } \label{eq:K^2/P_sim_logn/n} \end{equation} with $c > 1$ but close to one, and from (\ref{eq:FirstMomentAsymptoticRatio}) we see that \begin{equation} \frac{ \bE{ T_n(\theta_n) } } { \bE{ T_n(p_n) } } \sim 1 \quad \mbox{if and only if} \quad K_n \gg \frac{n}{\log n} . \label{eq:NotPractical} \end{equation} The expected number of triangles in random key graphs is then of the same order as the corresponding quantity in asymptotically matched Erd\H{o}s-R\'enyi graphs with $ \bE{ T_n(\theta_n) } \sim \bE{ T_n(p_n) } \sim \frac{c^3}{6} \left ( \log n \right )^3$. This conclusion holds regardless of the value of $c$ in (\ref{eq:K^2/P_sim_logn/n}). However, given the limited memory and computational power of the sensor nodes, the key ring sizes at (\ref{eq:NotPractical}) are not practical. In addition, they will lead to {\em high} node degrees and this in turn will decrease network {\em resiliency} against node capture attacks. Indeed, in \cite[Thm.
5.3]{DiPietroManciniMeiPanconesiRadhakrishnan2008} it was proposed that security in WSNs be ensured by selecting $K_n$ and $P_n$ such that $\frac{K_n}{P_n} \sim \frac{1}{n}$, a requirement which then leads to \begin{equation} K_n \sim c \cdot \log n \label{eq:K_n_sim_logn} \end{equation} under (\ref{eq:K^2/P_sim_logn/n}), and (\ref{eq:FirstMomentAsymptoticRatio}) implies \begin{equation} \lim_{n \rightarrow \infty} \frac{ \bE{ T_n(\theta_n) } } { \bE{ T_n(p_n) } } = \lim_{n \rightarrow \infty} \left ( 1 + \frac{n}{(c \cdot \log n) ^2} \right ) = \infty. \end{equation} Hence, for realistic WSN scenarios the expected number of triangles in the induced random key graphs can be orders of magnitude larger than in Erd\H{o}s-R\'enyi graphs. This provides a clear example where transferring known results for Erd\H{o}s-R\'enyi graphs to random key graphs by asymptotically matching their edge probabilities can be misleading. \section{Computing the first moment} \label{sec:FirstMoment} With positive integers $K$ and $P$ such that $K \leq P$, define \begin{equation} \beta(\theta) := (1-q(\theta))^3 + q(\theta)^3 - q(\theta) r(\theta) \label{eq:beta(tetha)} \end{equation} where we have set \begin{equation} r (\theta) := \left \{ \begin{array}{ll} 0 & \mbox{if~ $P <3K$} \\ & \\ \frac{{P-2K \choose K}}{{P \choose K}} & \mbox{if~ $3K \leq P$.} \end{array} \right . \label{eq:r(tetha)} \end{equation} Direct inspection shows that \begin{equation} r(\theta) \leq q(\theta)^2 \label{eq:r(tetha)B} \end{equation} whence \begin{equation} \beta (\theta) \geq (1 - q(\theta) )^3 > 0 . 
\label{eq:r(tetha)C} \end{equation} \begin{lemma} {\sl For positive integers $K$ and $P$ such that $K \leq P$, we have \begin{equation} \bE{ T_n (\theta) } = {n \choose 3 } \beta ( \theta ), \quad n=3,4, \ldots \label{eq:FirstMoment} \end{equation} } \label{lem:FirstMoment} \end{lemma} To help derive (\ref{eq:FirstMoment}) we introduce the events \begin{equation} A(\theta) := [ K_1 (\theta) \cap K_2 (\theta) \neq \emptyset ] \cap [ K_1 (\theta) \cap K_3 (\theta) \neq \emptyset ] \end{equation} and \begin{eqnarray} B(\theta) &:= & [ K_1 (\theta) \cap K_2 (\theta) \neq \emptyset ] \cap [ K_1 (\theta) \cap K_3 (\theta) \neq \emptyset ] \cap [ K_2 (\theta) \cap K_3 (\theta) \neq \emptyset ] \nonumber \\ &=& A(\theta) \cap [ K_2 (\theta) \cap K_3 (\theta) \neq \emptyset ]. \end{eqnarray} The event $A(\theta)$ captures the existence of edges between node $1$ and the pair of nodes $2$ and $3$, respectively, in $\mathbb{K}(n;\theta)$, while $B(\theta)$ is the event that the nodes $1$, $2$ and $3$ form a triangle in $\mathbb{K}(n;\theta)$. \begin{lemma} {\sl The probability of the event $A(\theta)$ is given by \begin{equation} \bP{ A(\theta) } = (1-q(\theta))^2. \label{eq:A_theta} \end{equation} } \label{lem:A_theta} \end{lemma} In the proof of Lemma \ref{lem:A_theta} (as well as in other proofs) we omit the explicit dependence on $\theta$ when no confusion arises from doing so. \myproof Under the enforced independence assumptions we note that \begin{eqnarray} \bP{ A(\theta) } &=& \sum_{ |S| = K } \bP{ K_1 = S, S \cap K_2 \neq \emptyset, S \cap K_3 \neq \emptyset } \nonumber \\ &=& \sum_{ |S| = K } \bP{ K_1 = S} \bP{ S \cap K_2 \neq \emptyset } \bP{ S \cap K_3 \neq \emptyset } \nonumber \\ &=& \left ( 1 - q(\theta) \right )^2 \end{eqnarray} as we make use of (\ref{eq:Probab_key_ring_does_not_intersect_S}) with $\sum_{ |S| = K } \bP{ K_1 = S} = 1 $.
\myendpf In many of the forthcoming calculations we make repeated use of the fact that for any pair of events, say $E$ and $F$, we have \begin{equation} \bP{ E \cap F } = \bP{ E} - \bP{ E \cap F^c } . \label{eq:Set_Formula} \end{equation} In particular, we can now conclude from Lemma \ref{lem:A_theta} that \begin{eqnarray} \lefteqn{\bP{ K_1(\theta)\cap K_2(\theta) = \emptyset,\; K_1(\theta) \cap K_3(\theta) \neq \emptyset }} && \nonumber \\ &=& \bP{ K_1(\theta)\cap K_2(\theta) \neq \emptyset,\; K_1(\theta) \cap K_3(\theta) = \emptyset } \nonumber \\ &=& q(\theta)(1-q(\theta)) \label{eq:cor_q_1-q} \end{eqnarray} and \begin{equation} \bP{ K_1(\theta)\cap K_2(\theta) = \emptyset,\; K_1(\theta) \cap K_3(\theta) = \emptyset } = q(\theta)^2 . \label{eq:cor_q^2} \end{equation} These facts will now be used in computing the probability of $B(\theta)$. \begin{lemma} {\sl With $\beta (\theta)$ given at (\ref{eq:beta(tetha)}) we have \begin{equation} \bP{ B(\theta) } = \beta (\theta) . \label{eq:B_theta} \end{equation} } \label{lem:Prob_B} \end{lemma} \myproof Repeated use of (\ref{eq:Set_Formula}) yields \begin{eqnarray} \bP{ B(\theta) } &=& \bP{ K_1\cap K_2\neq\emptyset,\; K_1 \cap K_3 \neq\emptyset } \nonumber \\ & & ~ - \bP{ K_1\cap K_2\neq\emptyset,\; K_1 \cap K_3 \neq\emptyset,\; K_2 \cap K_3 =\emptyset } \nonumber \\ &=& \bP{ A(\theta) } - \bP{ K_1 \cap K_2 \neq\emptyset,\; K_2 \cap K_3 =\emptyset } \nonumber \\ & & ~ + \bP{ K_1 \cap K_2 \neq \emptyset,\; K_1 \cap K_3 = \emptyset,\; K_2 \cap K_3 =\emptyset } \nonumber \\ &=& (1-q(\theta))^2 - q(\theta)(1-q(\theta)) + \bP{ K_1 \cap K_3 = \emptyset,\; K_2 \cap K_3 =\emptyset } \nonumber \\ & & ~ - \bP{ K_1\cap K_2=\emptyset,\; K_1 \cap K_3 = \emptyset,\; K_2 \cap K_3 =\emptyset } \nonumber \\ &=& (1-q(\theta))^2 - q(\theta)(1-q(\theta)) + q(\theta)^2 \nonumber \\ & & ~ - \bP{ K_1\cap K_2=\emptyset,\; K_1 \cap K_3 = \emptyset,\; K_2 \cap K_3 =\emptyset } \end{eqnarray} as we recall (\ref{eq:A_theta}), (\ref{eq:cor_q_1-q}) 
and (\ref{eq:cor_q^2}). By independence we get \begin{eqnarray} \lefteqn{ \bP{ K_1\cap K_2=\emptyset,\; K_1 \cap K_3 = \emptyset,\; K_2 \cap K_3 =\emptyset } } & & \nonumber \\ &=& \bP{ K_1\cap K_2=\emptyset,\; (K_1 \cup K_2 ) \cap K_3 =\emptyset } \nonumber \\ &=& \sum_{|S|=|T| = K, S \cap T = \emptyset} \bP{ K_1 = S, K_2 = T } \bP{ (S \cup T ) \cap K_3 =\emptyset } \nonumber \\ &=& \sum_{|S|=|T|= K, S \cap T = \emptyset} \bP{ K_1 = S , K_2 = T } \cdot r(\theta ) \nonumber \\ &=& \bP{ K_1 \cap K_2 = \emptyset } \cdot r(\theta ) \end{eqnarray} by invoking (\ref{eq:Probab_key_ring_does_not_intersect_S}) (since $|S \cup T| = 2K$ under the constraints $|S|=|T|= K$ and $S \cap T = \emptyset$). Thus, \[ \bP{ B(\theta) } = (1-q(\theta))^2 - q(\theta)(1-q(\theta)) + q(\theta)^2 - q(\theta) r(\theta), \] and the desired result follows upon noting the relation \[ (1-q(\theta))^2 - q(\theta)(1-q(\theta)) + q(\theta)^2 = (1-q(\theta))^3 + q(\theta)^3 . \] \myendpf The proof of Lemma \ref{lem:FirstMoment} is now straightforward: Fix $n=3,4, \ldots $. Exchangeability yields \begin{equation} \bE{ T_n (\theta)} = {n \choose 3} \bE{ \chi_{n,{123}} (\theta) } \label{eq:FirstMomentExpression} \end{equation} and the desired conclusion follows as we make use of Lemma \ref{lem:Prob_B}. \section{Some useful asymptotics} \label{sec:UsefulAsymptotics} In this section we collect a number of asymptotic results that prove useful in establishing some of the results derived in this paper. The first result was already obtained in \cite{YaganMakowskiConnectivity}. 
\begin{lemma} {\sl For any scaling $P,K: \mathbb{N}_0 \rightarrow \mathbb{N}_0$, we have \begin{equation} \lim_{n \rightarrow \infty} q(\theta_n) = 1 \label{eq:Condition1} \end{equation} if and only if \begin{equation} \lim_{n \rightarrow \infty} \frac{K^2_n}{P_n} = 0, \label{eq:Condition2} \end{equation} and under either condition the asymptotic equivalence \begin{equation} 1 - q(\theta_n) \sim \frac{K^2_n}{P_n} \label{eq:AsymptoticsEquivalence1} \end{equation} holds. } \label{lem:AsymptoticEquivalence1} \end{lemma} Since $1 \leq K_n \leq {K_n}^2$ for all $n=1,2, \ldots $, the condition (\ref{eq:Condition2}) implies \begin{equation} \lim_{n \rightarrow \infty } \frac{K_n}{P_n} = 0 \label{eq:RatioConditionStrong+Consequence1} \end{equation} and \begin{equation} \lim_{n \rightarrow \infty } P_n = \infty . \label{eq:RatioConditionStrong+Consequence2} \end{equation} so that for any $c > 0$, we have \begin{equation} c K_n < P_n \label{eq:Condition0} \end{equation} for all $n$ sufficiently large in $\mathbb{N}_0$ (dependent on $c$). The following asymptotic equivalence will be crucial to stating the results in a more explicit form. \begin{proposition} {\sl For any scaling $P,K: \mathbb{N}_0 \rightarrow \mathbb{N}_0$ satisfying (\ref{eq:Condition1})-(\ref{eq:Condition2}), we have the asymptotic equivalence \begin{equation} \beta(\theta_n) \sim \tau(\theta_n) . \label{eq:AsymptoticsEquivalence2} \end{equation} } \label{prop:AsymptoticEquivalence2} \end{proposition} \myproof From (\ref{eq:beta(tetha)}), we get \[ \beta ( \theta_n ) = \left ( 1 - q(\theta_n) \right )^3 + q ( \theta_n )^3 \left( 1-\frac { r ( \theta_n ) } { q ^ 2 (\theta_n) } \right) . \] Under the enforced assumptions Lemma \ref{lem:AsymptoticEquivalence1} already implies \[ \left ( 1 - q(\theta_n) \right )^3 \sim \left ( \frac{K^2_n}{P_n} \right )^3 \] with $q ( \theta_n )^ 3 \sim 1$. 
It is now plain that the equivalence (\ref{eq:AsymptoticsEquivalence2}) will hold if we show that \begin{equation} 1 - \frac{r(\theta_n)}{q(\theta_n)^2} \sim \frac{K^3_n}{P^2_n} . \label{eq:AsymptoticsEquivalence2Reduced} \end{equation} This key technical fact is established in Appendix \ref{Appendix:A}. \myendpf The final result of this section also relies on Lemma \ref{lem:AsymptoticEquivalence1}, and will prove useful in establishing the one-law. \begin{proposition} {\sl For any scaling $P,K: \mathbb{N}_0\rightarrow\mathbb{N}_0$ satisfying (\ref{eq:Condition1})-(\ref{eq:Condition2}), we have \begin{equation} \lim_{n \to \infty} n^2 (1-q(\theta_n))=\infty \label{eq:n^2_1_q_to_inf} \end{equation} provided the condition (\ref{eq:ConditionForOne}) holds. } \label{prop:n^2_1_q_to_inf} \end{proposition} \myproof Consider a scaling $P,K: \mathbb{N}_0\rightarrow\mathbb{N}_0$ satisfying (\ref{eq:Condition1})-(\ref{eq:Condition2}). By Lemma \ref{lem:AsymptoticEquivalence1} the desired conclusion (\ref{eq:n^2_1_q_to_inf}) will be established if we show \begin{equation} \lim_{n \to \infty} n^2 \frac{K^2_n}{P_n} = \infty . \label{eq:n^2_1_q_to_infB} \end{equation} As condition (\ref{eq:ConditionForOne}) reads \[ \lim_{n \to \infty} n^3 \left( \frac{K_n^3}{P_n^2} + \left(\frac{K_n^2}{P_n}\right)^3 \right) = \infty , \] we immediately get (\ref{eq:n^2_1_q_to_infB}) from it by virtue of the trivial bounds \[ n^3 \left(\frac{K_n^2}{P_n}\right)^3 = \left(\frac{n K_n^2}{P_n}\right)^3 \leq \left(\frac{n^2 K_n^2}{P_n}\right)^3 \] and \[ n^3 \frac{K_n^3}{P_n^2} \leq n^4 \frac{K_n^4}{P_n^2} = \left(\frac{n^2 K_n^2}{P_n}\right)^2 \] valid for all $n=1,2, \ldots $. \myendpf Proposition \ref{prop:n^2_1_q_to_inf} will be used as follows: Pick $a>0$ and $b>0$, and consider a scaling $P,K: \mathbb{N}_0\rightarrow\mathbb{N}_0$ satisfying (\ref{eq:Condition1})-(\ref{eq:Condition2}). 
For each $n=2,3, \ldots $, we get \begin{eqnarray} \frac{1}{n^2} \cdot \frac{ \left ( 1 - q (\theta_n) \right )^a }{ \beta (\theta_n)^b } &\leq& \frac{1}{n^2} \cdot \frac{ \left ( 1 - q (\theta_n) \right )^a } { \left ( 1 - q (\theta_n) \right )^{3b} } \nonumber \\ &=& \frac{1}{n^2 \left ( 1 - q (\theta_n) \right )} \cdot { \left ( 1 - q (\theta_n) \right )^{a-3b+1} } . \end{eqnarray} Therefore, under condition (\ref{eq:ConditionForOne}) Proposition \ref{prop:n^2_1_q_to_inf} yields \begin{equation} \lim_{n \to \infty} \frac{1}{n^2} \cdot \frac{ \left ( 1 - q (\theta_n) \right )^a }{ \beta (\theta_n)^b } = 0 \quad \mbox{if~} a-3b+1 \geq 0 \label{eq:AsymptoticsForRatio} \end{equation} as we make use of (\ref{eq:Condition1})-(\ref{eq:Condition2}). \section{Proofs of Theorem \ref{thm:ZeroLaw} and Theorem \ref{thm:OneLaw}} \label{sec:ProofsTheoremsZeroOne} \subsection{A proof of Theorem \ref{thm:ZeroLaw}} \label{subsec:ProofTheoremZero} Fix $n=3,4, \ldots $. An elementary bound for $\mathbb{N}$-valued rvs yields \begin{equation} \bP{ T_n (\theta_n) > 0 } \leq \bE{ T_n(\theta_n) } , \end{equation} so that \begin{equation} \bP{ T (n, \theta_n) } \leq {n \choose 3} \beta (\theta_n) . \end{equation} The conclusion (\ref{eq:MainTheoremZero}) follows if we show that \begin{equation} \lim_{n \rightarrow \infty} {n \choose 3} \beta (\theta_n) = 0 \label{eq:ConditionForZero+A} \end{equation} under (\ref{eq:ConditionForZero}). The condition $\lim_{n \rightarrow \infty} n^3 \tau(\theta_n) = 0$ implies $\lim_{n \rightarrow \infty} \tau(\theta_n) = 0$, and since $\tau(\theta_n) \geq \left ( \frac{K^2_n}{P_n} \right )^3$, condition (\ref{eq:Condition2}) automatically holds. By Proposition \ref{prop:AsymptoticEquivalence2} we conclude $\beta(\theta_n) \sim \tau(\theta_n)$, whence $n^3 \beta(\theta_n) \sim n^3 \tau(\theta_n)$, and condition (\ref{eq:ConditionForZero}) is indeed equivalent to (\ref{eq:ConditionForZero+A}) since ${n \choose 3} \sim \frac{n^3}{6}$.
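To make the zero-law concrete, consider the illustrative scaling $K_n = 2$, $P_n = n^2$ (chosen here only for demonstration), for which $n^3 \tau(\theta_n) = 8/n + 64/n^3 \rightarrow 0$. The sketch below evaluates the first-moment bound ${n \choose 3}\beta(\theta_n)$ exactly and shows it vanishing:

```python
from fractions import Fraction
from math import comb

def triangle_bound(n, P, K):
    # first-moment bound C(n,3)*beta(theta) on Pr[K(n;theta) has a triangle]
    q = Fraction(comb(P - K, K), comb(P, K))
    r = Fraction(comb(P - 2 * K, K), comb(P, K)) if 3 * K <= P else Fraction(0)
    beta = (1 - q) ** 3 + q ** 3 - q * r
    return float(comb(n, 3) * beta)

# Illustrative scaling K_n = 2, P_n = n^2, under which n^3 * tau(theta_n) -> 0
bounds = [triangle_bound(n, n * n, 2) for n in (100, 1000, 10000)]
print(bounds)  # decays roughly like 4/(3n)
```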
\subsection{A proof of Theorem \ref{thm:OneLaw}} \label{subsec:ProofTheoremOne} Assume first that $q^\star$ satisfies $0 \leq q^\star < 1$. Fix $n=3,4, \ldots $ and partition the $n$ nodes into the $k_n+1$ non-overlapping groups $(1,2,3)$, $(4,5,6)$, $\ldots $, $(3k_n+1,3k_n+2,3k_n+3)$ with $k_n = \lfloor \frac{n-3}{3} \rfloor $. If $\mathbb{K}(n;\theta_n)$ contains no triangle, then {\em none} of these $k_n + 1$ groups of nodes forms a triangle. With this in mind we get \begin{eqnarray} \lefteqn{\bP{ T_n(\theta_n) = 0 } } & & \nonumber \\ &\leq& \bP{ \bigcap_{\ell=0}^{k_n} \left [ \begin{array}{c} \mbox{Nodes $3\ell+1,3\ell+2, 3\ell+3$ do not form } \\ \mbox{a triangle in $\mathbb{K}(n;\theta_n)$ } \\ \end{array} \right ] } \nonumber \\ &=& \prod_{\ell=0}^{k_n} \bP{ \begin{array}{c} \mbox{Nodes $3\ell+1,3\ell+2, 3\ell+3$ do not form } \\ \mbox{a triangle in $\mathbb{K}(n;\theta_n)$ } \\ \end{array} } \label{eq:IndependenceTriangle} \\ &=& \left ( 1 - \beta(\theta_n) \right )^{k_n+1} \nonumber \\ &\leq& \left ( 1 - (1-q(\theta_n) )^3 \right )^{k_n+1} \label{eq:OneLawInequality0} \\ &\leq& e^{- (k_n +1 ) (1-q(\theta_n) )^3 }. \label{eq:OneLawInequality1} \end{eqnarray} Note that (\ref{eq:IndependenceTriangle}) follows from the fact that the events \[ \left [ \begin{array}{c} \mbox{Nodes $3\ell+1,3\ell+2, 3\ell+3$ do not form } \\ \mbox{a triangle in $\mathbb{K}(n;\theta_n)$ } \\ \end{array} \right ], \quad \ell =0, \ldots , k_n \] are mutually independent due to the non-overlap condition, while the inequality (\ref{eq:OneLawInequality0}) is justified with the help of (\ref{eq:r(tetha)C}). Let $n$ go to infinity in the inequality (\ref{eq:OneLawInequality1}). The condition $q^\star < 1$ implies $\lim_{n \rightarrow \infty} \bP{ T(n,\theta_n )^c } = 0$ since $k_n \sim \frac{n}{3}$ so that $\lim_{n \rightarrow \infty} ( k_n + 1 ) (1-q(\theta_n) )^3 = \infty $. This establishes (\ref{eq:MainTheoremOne}). 
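Before turning to the case $q^\star = 1$, the exponential bound (\ref{eq:OneLawInequality1}) can be illustrated numerically. The sketch below uses the arbitrary fixed parameters $K_n \equiv 4$ and $P_n \equiv 30$ (so that $q^\star = q(\theta) < 1$) and evaluates $e^{-(k_n+1)(1-q(\theta))^3}$, which indeed vanishes as $n$ grows:

```python
from fractions import Fraction
from math import comb, exp

K, P = 4, 30  # fixed, illustrative parameters, so q* = q(theta) < 1
q = float(Fraction(comb(P - K, K), comb(P, K)))

def no_triangle_bound(n):
    # bound exp(-(k_n+1)(1-q)^3) on Pr[no triangle], with k_n = floor((n-3)/3)
    k_n = (n - 3) // 3
    return exp(-(k_n + 1) * (1 - q) ** 3)

bounds = [no_triangle_bound(n) for n in (30, 300, 3000)]
print(bounds)
```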
To handle the case $q^\star =1$, we use a standard bound which forms the basis of the second moment method \cite[Remark 3.1, p. 55]{JansonLuczakRucinski}. Here it takes the form \begin{equation} \frac{ \bE{ T_n (\theta_n)}^2}{ \bE{T_n (\theta_n)^2 }} \leq \bP{T_n (\theta_n) > 0 }, \quad n=3,4, \ldots \label{eq:SecondMoment+b} \end{equation} It is now plain that (\ref{eq:MainTheoremOne}) will be established in the case $q^\star =1 $ if we show the following result. \begin{proposition} {\sl For any scaling $P,K: \mathbb{N}_0 \rightarrow \mathbb{N}_0$ satisfying (\ref{eq:Condition1})-(\ref{eq:Condition2}), we have \begin{equation} \lim_{n \rightarrow \infty } \frac{\bE{T_n (\theta_n)^2}} {\bE{T_n (\theta_n)}^2} =1 \label{eq:OneLawConvergenceSecondMoment} \end{equation} under the condition (\ref{eq:ConditionForOne}). } \label{prop:OneLawConvergenceSecondMoment} \end{proposition} The remainder of the paper is devoted to establishing Proposition \ref{prop:OneLawConvergenceSecondMoment}. As will soon become apparent, this is quite a bit more involved than expected. \section{Computing the second moment} \label{sec:SecondMoment} A natural step towards establishing Proposition \ref{prop:OneLawConvergenceSecondMoment} consists in computing the second moment of the count variables (\ref{eq:NumberOfTriangles}).
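Incidentally, the bound (\ref{eq:SecondMoment+b}) quoted above admits a one-line justification via the Cauchy--Schwarz inequality: writing $T_n(\theta_n) = T_n(\theta_n) \cdot \mathbf{1}[ T_n(\theta_n) > 0 ]$ gives \[ \bE{ T_n (\theta_n) }^2 = \bE{ T_n (\theta_n) \cdot \mathbf{1}[ T_n (\theta_n) > 0 ] }^2 \leq \bE{ T_n (\theta_n)^2 } \cdot \bP{ T_n (\theta_n) > 0 }, \quad n=3,4, \ldots \] and rearranging yields (\ref{eq:SecondMoment+b}) whenever $\bE{ T_n (\theta_n)^2 } > 0$.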
\begin{proposition} {\sl For positive integers $K$ and $P$ such that $K \leq P$, we have \begin{eqnarray} \bE{ T_n (\theta)^2 } = \bE{ T_n (\theta) } &+& \left ( \frac{ {n-3\choose 3} }{ {n \choose 3} } + 3 \frac{ {n-3\choose 2} }{ {n \choose 3} } \right ) \cdot \bE{ T_n (\theta) }^2 \label{eq:SecondMomentFormula} \\ &+& {n \choose 3}{3 \choose 2}{n-3\choose 1} \cdot \bE{\chi_{n,123}(\theta) \chi_{n, 124} (\theta)} \nonumber \end{eqnarray} for all $n=3,4, \ldots$ with \begin{eqnarray} \lefteqn{ \bE{\chi_{n,123}(\theta) \chi_{n, 124} (\theta)} } & & \nonumber \\ &=& - (1-q(\theta))^5 + 2 \left ( 1- q(\theta) \right )^2 \beta (\theta) \nonumber \\ & & ~ - \frac{1}{q(\theta)} \left ( \beta (\theta) - (1-q(\theta))^3 \right )^2 + \sum_{k=0}^K c_k (\theta) - q(\theta)^4 \label{eq:SecondMomentCross} \end{eqnarray} where we have set \begin{equation} c_k(\theta) := \frac{ {K \choose k}{P-K \choose K-k} }{ {P \choose K} } \cdot \left ( \frac{ {P-2K+k \choose K} }{ {P \choose K} } \right )^2, \quad k=0,1, \ldots , K. \label{eq:SecondMomentCrossB} \end{equation} } \label{prop:SecondMoment} \end{proposition} A careful inspection of the definition (\ref{eq:subs5to6}) given for the quantities (\ref{eq:SecondMomentCrossB}) yields the probabilistic interpretation \begin{equation} c_k (\theta) = \bP{ |K_1(\theta) \cap K_2 (\theta) | = k, \left ( K_1(\theta) \cup K_2(\theta) \right ) \cap K_i(\theta) = \emptyset, \ i=3,4 } \label{eq:SecondMomentCrossBInterpretation} \end{equation} for each $k=0,1, \ldots , K$. \myproof Consider positive integers $K$ and $P$ such that $K \leq P$ and fix $n=3, 4, \ldots $.
By exchangeability and by the binary nature of the rvs involved we readily conclude that \begin{eqnarray} \bE{ T_n (\theta)^2 } &=& \sum_{(ijk)}\sum_{(abc)} \bE{\chi_{n,{ijk}} (\theta) \chi_{n,{abc}} (\theta) } \nonumber \\ &=& \bE{ T_n (\theta)} \label{eq:SecondMomentExpression} \nonumber \\ & & + {n \choose 3}{3 \choose 2}{n-3\choose 1} \bE{\chi_{n,123}(\theta) \chi_{n, 124} (\theta)} \nonumber \\ & & + {n \choose 3}{3 \choose 1}{n-3\choose 2} \bE{\chi_{n,123}(\theta) \chi_{n, 145} (\theta)} \nonumber \\ & & + {n \choose 3}{n-3\choose 3} \bE{\chi_{n,123}(\theta) \chi_{n, 456} (\theta)}. \label{eq:SecondMomentPart0} \end{eqnarray} Under the enforced independence assumptions the rvs $\chi_{n,123}(\theta)$ and $\chi_{n, 456} (\theta)$ are independent and identically distributed. As a result, \[ \bE{\chi_{n,123}(\theta) \chi_{n, 456} (\theta)} = \bE{\chi_{n,123}(\theta)} \bE{ \chi_{n, 456} (\theta)} = \beta (\theta)^2 \] so that \begin{equation} {n \choose 3}{n-3\choose 3} \bE{\chi_{n,123}(\theta) \chi_{n, 456} (\theta)} = \frac{ {n-3\choose 3} }{ {n \choose 3} } \cdot \bE{ T_n (\theta) }^2 \label{eq:SecondMomentPart1} \end{equation} as we make use of the relation (\ref{eq:FirstMoment}). On the other hand, we readily check that the indicator rvs $\chi_{n,123}(\theta)$ and $\chi_{n,145}(\theta)$ are independent and identically distributed {\em conditionally} on $K_1(\theta)$ with \[ \bP{ \chi_{n,123}(\theta) = 1 | K_1 (\theta ) = S } = \bP{ \chi_{n,123}(\theta) = 1 } = \beta (\theta), \quad S \in {\cal P}_K. \] A similar statement applies to $\chi_{n,145}(\theta)$, and the rvs $\chi_{n,123}(\theta)$ and $\chi_{n,145}(\theta)$ are therefore (unconditionally) independent and identically distributed so that \[ \bE{\chi_{n,123}(\theta) \chi_{n, 145} (\theta)} = \bE{\chi_{n,123}(\theta) } \bE{ \chi_{n, 145} (\theta)}. 
\] As before this last observation yields \begin{equation} {n \choose 3}{3 \choose 1}{n-3\choose 2} \bE{\chi_{n,123}(\theta) \chi_{n, 145} (\theta)} = 3 \frac{ {n-3\choose 2} }{ {n \choose 3} } \cdot \bE{ T_n (\theta) }^2 \label{eq:SecondMomentPart2} \end{equation} by virtue of (\ref{eq:FirstMoment}). The evaluation (\ref{eq:SecondMomentCross})--(\ref{eq:SecondMomentCrossB}) of the moment $\bE{\chi_{n,123}(\theta) \chi_{n, 124} (\theta)}$ is rather lengthy, although quite straightforward; details are given in Appendix \ref{Appendix:B}. Reporting (\ref{eq:SecondMomentCross})--(\ref{eq:SecondMomentCrossB}), (\ref{eq:SecondMomentPart1}) and (\ref{eq:SecondMomentPart2}) into (\ref{eq:SecondMomentPart0}) establishes Proposition \ref{prop:SecondMoment}. \myendpf In preparation of the proof of Proposition \ref{prop:OneLawConvergenceSecondMoment} we note that Proposition \ref{prop:SecondMoment} readily implies \begin{eqnarray} \frac{ \bE{ T_n (\theta)^2 } }{ \bE{ T_n (\theta) }^2 } = \frac{1}{ \bE{ T_n (\theta) } } &+& \left ( \frac{ {n-3\choose 3} }{ {n \choose 3} } + 3 \frac{ {n-3\choose 2} }{ {n \choose 3} } \right ) \label{eq:SecondMoment} \\ &+& \frac{ 3(n-3) }{ {n \choose 3} } \cdot \frac{ \bE{\chi_{n,123}(\theta) \chi_{n, 124} (\theta)} } { \bE{\chi_{n,123}(\theta) } ^2 } \nonumber \end{eqnarray} for all $n=3,4, \ldots$ as we make use of (\ref{eq:FirstMomentExpression}). \section{A proof of Proposition \ref{prop:OneLawConvergenceSecondMoment}} \label{sec:ProofPropositionOneLawConvergenceSecondMoment} Consider any scaling $P,K: \mathbb{N}_0 \rightarrow \mathbb{N}_0$ satisfying (\ref{eq:Condition1})-(\ref{eq:Condition2}). By Proposition \ref{prop:AsymptoticEquivalence2} we have $\lim_{n\rightarrow \infty} n^3 \beta (\theta_n) = \infty$ under the additional condition (\ref{eq:ConditionForOne}), whence \[ \lim_{n\rightarrow \infty} \bE{ T_n(\theta_n) } = \infty \] by virtue of (\ref{eq:FirstMomentExpression}).
As pointed out earlier the equivalent conditions (\ref{eq:Condition1})-(\ref{eq:Condition2}) imply \begin{equation} 3 K_n < P_n \label{eq:3KvsP} \end{equation} for all $n$ sufficiently large in $\mathbb{N}_0$. On that range (\ref{eq:SecondMoment}) is valid with $\theta$ replaced by $\theta_n$. Letting $n$ go to infinity in the resulting expression, we note that \[ \lim_{n \rightarrow \infty} \left ( \frac{ {n-3\choose 3} }{ {n \choose 3} } + 3 \frac{ {n-3\choose 2} }{ {n \choose 3} } \right ) = 1 \quad \mbox{and} \quad \frac{ {n \choose 3} }{ 3(n-3) } \sim \frac{n^2}{18}. \] It is plain that the convergence (\ref{eq:OneLawConvergenceSecondMoment}) will hold if we show that \begin{equation} \lim_{n \rightarrow \infty } \frac{1}{n^2} \frac{ \bE{\chi_{n,123}(\theta_n) \chi_{n, 124} (\theta_n)} } { \bE{\chi_{n,123}(\theta_n) }^2 } = 0. \label{eq:ToBeShown} \end{equation} In order to establish (\ref{eq:ToBeShown}) under the assumptions of Proposition \ref{prop:OneLawConvergenceSecondMoment} we proceed as follows: Recall from Lemma \ref{lem:FirstMoment} that \begin{equation} \bE{\chi_{n,123}(\theta_n) }^2 = \beta(\theta_n)^2 \geq \left ( 1 - q(\theta_n) \right )^6 , \end{equation} and from (\ref{eq:SecondMomentCross}) observe that \begin{eqnarray} \lefteqn{ \frac{1}{n^2} \cdot \frac{ \bE{\chi_{n,123}(\theta_n) \chi_{n, 124} (\theta_n)} } { \left ( \bE{\chi_{n,123}(\theta_n) } \right )^2 } } & & \nonumber \\ &=& - \frac{1}{n^2} \cdot \frac{ (1-q(\theta_n))^5 }{ \beta(\theta_n)^2 } + \frac{2}{n^2} \cdot \frac{ \left ( 1- q(\theta_n) \right )^2 }{ \beta(\theta_n) } \nonumber \\ & & - \frac{1}{n^2} \cdot \frac{1}{q(\theta_n)} \left ( \frac{ \beta (\theta_n) - (1-q(\theta_n))^3 } { \beta(\theta_n) } \right )^2 \nonumber \\ & & + \frac{1}{n^2} \cdot \frac{ \sum_{k=0}^{K_n} c_k (\theta_n) -q(\theta_n)^4 }{\beta(\theta_n)^2 } \label{eq:OnelawRatioBeforeTakingLimit} \end{eqnarray} for all $n=3,4, \ldots $. Let $n$ go to infinity in (\ref{eq:OnelawRatioBeforeTakingLimit}). 
Using (\ref{eq:AsymptoticsForRatio}) (once with $a=5$ and $b=2$, then with $a=2$ and $b=1$), we get \begin{equation} \lim_{n \rightarrow \infty} \frac{1}{n^2} \cdot \frac{ (1-q(\theta_n))^5 }{ \beta(\theta_n)^2 } = 0 \end{equation} and \begin{equation} \lim_{n \rightarrow \infty} \frac{2}{n^2} \cdot \frac{ \left ( 1- q(\theta_n) \right )^2 }{ \beta(\theta_n) } = 0 . \end{equation} The convergence \begin{equation} \lim_{n \rightarrow \infty} \frac{1}{n^2} \cdot \frac{1}{q(\theta_n)} \left ( \frac{ \beta (\theta_n) - (1-q(\theta_n))^3 } { \beta(\theta_n) } \right )^2 = 0 \end{equation} is immediate since \[ \left | \frac{ \beta (\theta_n) - (1-q(\theta_n))^3 } { \beta(\theta_n) } \right |^2 \leq 1, \quad n =2,3, \ldots \] and $\lim_{n \rightarrow \infty} q(\theta_n) = 1 $. Consequently the proof of Proposition \ref{prop:OneLawConvergenceSecondMoment} will be completed if we show \begin{proposition} {\sl For any scaling $P,K: \mathbb{N}_0 \rightarrow \mathbb{N}_0$ satisfying (\ref{eq:Condition1})-(\ref{eq:Condition2}), we have \begin{equation} \lim_{n \rightarrow \infty } \frac{1}{n^2} \cdot \frac{ \sum_{k=0}^K c_k (\theta_n) -q(\theta_n)^4 }{\beta(\theta_n)^2 } = 0 \label{eq:OneLawConvergenceSecondMomentReduction} \end{equation} under the condition (\ref{eq:ConditionForOne}). } \label{prop:OneLawConvergenceSecondMomentReduction} \end{proposition} The proof of Proposition \ref{prop:OneLawConvergenceSecondMomentReduction} will proceed in several steps which are presented in the next three sections. \section{The first reduction step} \label{sec:FirstReductionStep} We start with an easy bound. \begin{lemma} {\sl With positive integers $K$ and $P$ such that $2K \leq P$, we have \begin{equation} c_1(\theta)\leq 1-q(\theta) \label{eq:c_1_leq_1_q}. 
\end{equation} } \label{lem:c_1_leq_1_q} \end{lemma} \myproof Specializing (\ref{eq:SecondMomentCrossBInterpretation}) with $k=1$ we get \begin{eqnarray*} c_1 (\theta) &=& \bP{ |K_1(\theta) \cap K_2 (\theta) | = 1, \left ( K_1(\theta) \cup K_2(\theta) \right ) \cap K_i(\theta) = \emptyset, \ i=3,4 } \nonumber \\ &\leq& \bP{ |K_1(\theta) \cap K_2 (\theta) | = 1 } \nonumber \\ &\leq& \bP{ |K_1(\theta) \cap K_2 (\theta) | \geq 1 } \end{eqnarray*} and the conclusion is immediate as we identify \[ \bP{ |K_1(\theta) \cap K_2 (\theta) | \geq 1 } = \bP{ K_1(\theta) \cap K_2(\theta) \neq \emptyset } = 1 - q(\theta) . \] \myendpf \begin{lemma} {\sl With positive integers $K$ and $P$ such that $3K \leq P$, the monotonicity property \begin{equation} \frac{c_1(\theta) }{c_0(\theta)} \geq \frac{c_2(\theta) }{c_1(\theta)} \geq \ldots \geq \frac{c_{K}(\theta) }{c_{K-1}(\theta)} \label{eq:Monotonicity} \end{equation} holds. } \label{lem:Monotonicity} \end{lemma} \myproof Fix $k=0, \ldots , K-1$. From the expression (\ref{eq:SecondMomentCrossB}) we note that \begin{eqnarray} \frac{c_{k+1}(\theta)} {c_{k}(\theta)} &=& \frac{ {K \choose k+1}{P-K \choose K-k-1}{P-2K+k+1 \choose K}^2 }{ {K \choose k}{P-K \choose K-k}{P-2K+k \choose K}^2 } \nonumber \\ &=&\frac{1}{k+1} \cdot \frac{(K-k)^2}{P-3K+k+1} \cdot\frac{P-2K+k+1}{P-3K+k+1} \label{eq:RatioExpression} \end{eqnarray} and by considering each factor in this last expression we readily conclude that the ratio $\frac{c_{k+1}(\theta)}{c_{k}(\theta)}$ decreases monotonically with $k$.
\myendpf \begin{lemma} {\sl For any scaling $P,K: \mathbb{N}_0 \rightarrow \mathbb{N}_0$ satisfying (\ref{eq:Condition1})-(\ref{eq:Condition2}), we have \begin{equation} \frac{c_{2}(\theta_n)}{c_{1}(\theta_n)} \leq 1 - q (\theta_n) \label{eq:RatioVS1-q} \end{equation} for all $n$ sufficiently large in $\mathbb{N}_0$.} \label{lem:RatioVS1-q} \end{lemma} \myproof Pick a scaling $P,K: \mathbb{N}_0 \rightarrow \mathbb{N}_0$ satisfying (\ref{eq:Condition1})-(\ref{eq:Condition2}) so that (\ref{eq:3KvsP}) eventually holds. On that range replace $\theta$ by $\theta_n$ in (\ref{eq:RatioExpression}) with $k=1$ according to this scaling, yielding \begin{eqnarray} \frac{c_{2}(\theta_n)}{c_{1}(\theta_n)} = \frac{1}{2}\cdot\frac{(K_n-1)^2}{P_n-3K_n+2} \cdot \frac{P_n-2K_n+2}{P_n-3K_n+2} . \nonumber \end{eqnarray} The inequality \begin{eqnarray} \left ( 1 - q(\theta_n) \right )^{-1} \frac{c_{2}(\theta_n)}{c_{1}(\theta_n)} \leq \frac{1}{2} \cdot \left ( 1 - q(\theta_n) \right )^{-1} \frac{K^2_n}{P_n-3K_n} \cdot \frac{P_n-2K_n}{P_n-3K_n} \nonumber \end{eqnarray} readily follows. Now let $n$ go to infinity in this inequality: Recall the consequence (\ref{eq:RatioConditionStrong+Consequence1}) of the assumption (\ref{eq:Condition1})-(\ref{eq:Condition2}) and use the equivalence (\ref{eq:AsymptoticsEquivalence1}) to validate the limits \[ \lim_{n \rightarrow \infty} \left ( 1 - q(\theta_n) \right )^{-1} \frac{K^2_n}{P_n-3K_n} = 1 \] and \[ \lim_{n \rightarrow \infty} \frac{P_n-2K_n}{P_n-3K_n} = 1 . \] As a consequence, \[ \limsup_{n \rightarrow \infty} \left ( 1 - q(\theta_n) \right )^{-1} \frac{c_{2}(\theta_n)}{c_{1}(\theta_n)} \leq \frac{1}{2} \] and the desired conclusion is now immediate. \myendpf Combining Lemma \ref{lem:c_1_leq_1_q}, Lemma \ref{lem:Monotonicity} and Lemma \ref{lem:RatioVS1-q} will lead to the following key bounds. 
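Before stating them, the chain of estimates above can be verified exactly for sample parameters. The sketch below (illustrative only; $K=10$ and $P=1000$ are arbitrary values with $3K \leq P$) evaluates the quantities $c_k(\theta)$ of (\ref{eq:SecondMomentCrossB}) with exact rational arithmetic:

```python
from fractions import Fraction
from math import comb

K, P = 10, 1000  # arbitrary sample parameters with 3K <= P
q = Fraction(comb(P - K, K), comb(P, K))

def c(k):
    # c_k(theta) as defined in (eq:SecondMomentCrossB)
    return (Fraction(comb(K, k) * comb(P - K, K - k), comb(P, K))
            * Fraction(comb(P - 2 * K + k, K), comb(P, K)) ** 2)

ratios = [c(k + 1) / c(k) for k in range(K - 1)]  # c_{k+1}/c_k for k=0..K-2
print("c_1 <= 1 - q      :", c(1) <= 1 - q)
print("ratios decreasing :", all(a >= b for a, b in zip(ratios, ratios[1:])))
print("c_2/c_1 <= 1 - q  :", ratios[1] <= 1 - q)
print("c_k <= (1-q)^k    :", all(c(k) <= (1 - q) ** k for k in range(1, K + 1)))
```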
\begin{lemma} {\sl For any scaling $P,K: \mathbb{N}_0 \rightarrow \mathbb{N}_0$ satisfying (\ref{eq:Condition1})-(\ref{eq:Condition2}), we have \begin{equation} c_k(\theta_n)\leq\left(1-q(\theta_n)\right)^k, \quad k=1, 2, \ldots, K_n \label{eq:c_k_leq_1-q^k} \end{equation} for all $n$ sufficiently large in $\mathbb{N}_0$.} \label{lem:c_k_leq_1-q^k} \end{lemma} \myproof Pick a scaling $P,K: \mathbb{N}_0 \rightarrow \mathbb{N}_0$ satisfying (\ref{eq:Condition1})-(\ref{eq:Condition2}). For all $n$ sufficiently large in $\mathbb{N}_0$ (so that (\ref{eq:3KvsP}) holds), we can use Lemma \ref{lem:c_1_leq_1_q} and Lemma \ref{lem:Monotonicity} to conclude that \begin{eqnarray} c_k(\theta_n) &=& \prod_{\ell=1}^{k-1} \frac{c_{\ell+1}(\theta_n)} {c_\ell(\theta_n)} \cdot c_1 (\theta_n) \nonumber \\ &\leq& \left ( \frac{c_2(\theta_n)} {c_1(\theta_n)} \right )^{k-1} \cdot c_1 (\theta_n) \nonumber \\ &\leq& \left ( \frac{c_2(\theta_n)} {c_1(\theta_n)} \right )^{k-1} \cdot \left ( 1 - q(\theta_n) \right ) \end{eqnarray} with $k=1, \ldots , K_n$. The desired conclusion is now a simple consequence of Lemma \ref{lem:RatioVS1-q}. \myendpf We are now in a position to take the first step towards the proof of Proposition \ref{prop:OneLawConvergenceSecondMomentReduction}. \begin{proposition} {\sl For any scaling $P,K: \mathbb{N}_0 \rightarrow \mathbb{N}_0$ satisfying (\ref{eq:Condition1})-(\ref{eq:Condition2}), we have \begin{equation} \lim_{n \to \infty} \frac{1}{n^2} \cdot \frac{\sum_{k=5}^{K_n} c_k(\theta_n)} {\beta (\theta_n)^2} = 0 \label{eq:ReductionStep} \end{equation} under the condition (\ref{eq:ConditionForOne}). } \label{prop:ReductionStep} \end{proposition} \myproof Pick a scaling $P,K: \mathbb{N}_0 \rightarrow \mathbb{N}_0$ satisfying (\ref{eq:Condition1})-(\ref{eq:Condition2}). The result (\ref{eq:ReductionStep}) is trivially true if $K_n \leq 4$ for all $n$ sufficiently large in $\mathbb{N}_0$.
Thus, assume from now on that $K_n \geq 5$ for infinitely many $n$ in $\mathbb{N}_0$; in fact, there is no loss of generality in assuming $K_n \geq 5$ for all $n$ sufficiently large in $\mathbb{N}_0$. From Lemma \ref{lem:c_k_leq_1-q^k} it follows that \begin{eqnarray} \sum_{k=5}^{K_n} c_k(\theta_n) &\leq& \sum_{k=5}^{K_n} \left( 1-q(\theta_n) \right)^k \nonumber \\ &\leq & \sum_{k=5}^{\infty} \left( 1-q(\theta_n) \right)^k \nonumber \\ &=& \frac{ \left( 1-q(\theta_n) \right)^5 }{ q(\theta_n) } \end{eqnarray} for all $n$ sufficiently large in $\mathbb{N}_0$. Letting $n$ go to infinity in this last inequality we readily obtain (\ref{eq:ReductionStep}) as an immediate consequence of Proposition \ref{prop:n^2_1_q_to_inf}, to wit (\ref{eq:AsymptoticsForRatio}) (with $a=5$ and $b=2$). \myendpf \section{The second reduction step} \label{sec:SecondReductionStep} It is now plain from Proposition \ref{prop:ReductionStep} that the proof of Proposition \ref{prop:OneLawConvergenceSecondMomentReduction} will be completed if we show the following fact. \begin{proposition} {\sl For any scaling $P,K: \mathbb{N}_0 \rightarrow \mathbb{N}_0$ satisfying (\ref{eq:Condition1})-(\ref{eq:Condition2}), we have \begin{equation} \lim_{n \to \infty} \frac{1}{n^2} \cdot \frac{\sum_{k=0}^{4} c_k(\theta_n) - q(\theta_n) ^ 4} {\beta (\theta_n)^2} = 0 \label{eq:SecondStep} \end{equation} under the condition (\ref{eq:ConditionForOne}). } \label{prop:SecondStep} \end{proposition} To construct a proof of Proposition \ref{prop:SecondStep} we proceed as follows: Fix positive integers $K$ and $P$ such that $3K \leq P$.
By direct substitution we get \begin{eqnarray} \lefteqn{\sum_{k=0}^{4} c_k(\theta) - q(\theta) ^ 4 } & & \nonumber \\ &=& \sum_{k=0}^{4} \frac{ {K \choose k}{P-K \choose K-k} } { {P \choose K } } \left ( \frac{ {P-2K+k \choose K} } { {P \choose K } } \right )^2 - \left ( \frac{ {P-K \choose K} }{ {P \choose K } } \right )^ 4 \nonumber \\ &=& {P \choose K }^{-4} \left ( \sum_{k=0}^{4} {P \choose K } {K \choose k} {P-K \choose K-k} {P-2K+k \choose K}^2 - {P-K \choose K}^4 \right ) \nonumber \\ &=& \frac{F(\theta) }{ G(\theta) } \label{eq:SecondStepExpression} \end{eqnarray} where we have set \begin{eqnarray} & & F(\theta) \label{eq:F(teta)_defn} \\ &:=& (K!)^4 \left ( \sum_{k=0}^{4} {P \choose K } {K \choose k} {P-K \choose K-k} {P-2K+k \choose K}^2 - {P-K \choose K}^4 \right) \nonumber \end{eqnarray} and \begin{equation} G(\theta):= \left ( \frac{ P! }{ (P-K)! } \right )^4 = \prod_{\ell=0}^{K-1}(P-\ell)^4 . \label{eq:G(teta)_defn} \end{equation} In this new notation Proposition \ref{prop:SecondStep} can be given a simpler, yet equivalent, form. \begin{proposition} {\sl Consider any scaling $P,K: \mathbb{N}_0 \rightarrow \mathbb{N}_0$ satisfying (\ref{eq:Condition1})-(\ref{eq:Condition2}). The convergence (\ref{eq:SecondStep}) holds if and only if \begin{eqnarray} \lim_{n\to \infty} \frac{1}{n^2 \beta (\theta_n) ^ 2} \frac{F(\theta_n)}{P_n^{4K_n}} = 0 . \label{eq:FinalRatioToZero} \end{eqnarray} } \label{prop:FinalRatioToZero} \end{proposition} \myproof Pick a scaling $P,K: \mathbb{N}_0 \rightarrow \mathbb{N}_0$ satisfying (\ref{eq:Condition1})-(\ref{eq:Condition2}) and assume that (\ref{eq:ConditionForOne}) holds. The desired equivalence is an immediate consequence of the expression (\ref{eq:SecondStepExpression}) as we show below the equivalence \begin{equation} G(\theta_n) \sim P_n^{4K_n}.
\label{eq:SimpleFactForG(theta)} \end{equation} By (\ref{eq:G(teta)_defn}) this last equivalence amounts to \begin{equation} \lim_{n \to \infty} \prod_{\ell=0}^{K_n-1} \left( \frac{P_n - \ell }{P_n} \right)^4 = 1. \label{eq:P_P-l_to_1} \end{equation} To establish this convergence, fix $n=2,3, \ldots $ and note that \begin{eqnarray} \prod_{\ell=0}^{K_n-1} \left(\frac{P_n - \ell}{P_n}\right)^4 = \left ( \prod_{\ell=0}^{K_n-1} \left ( 1 - \frac{\ell}{P_n} \right ) \right )^4 . \label{eq:Identity+A} \end{eqnarray} The bounds \begin{equation} \left( 1-\frac{K_n}{P_n} \right)^{K_n} \leq \prod_{\ell=0}^{K_n-1} \left ( 1 - \frac{\ell}{P_n} \right ) \leq 1 \label{eq:boundsonProd_P_P_l} \end{equation} are straightforward, while simple calculus followed by a crude bounding argument yields \[ 1 - \left ( 1 - \frac{K_n}{P_n} \right )^{K_n} = \int_{ 1 - \frac{K_n}{P_n} }^1 K_n t^{K_n-1}dt \leq \frac{K_n^2}{P_n} . \] With the help of (\ref{eq:boundsonProd_P_P_l}) we now conclude that \begin{equation} 1 -\frac{K_n^2}{P_n} \leq \prod_{\ell=0}^{K_n-1} \left( 1-\frac{\ell}{P_n} \right) \leq 1 . \end{equation} Letting $n$ go to infinity in this last expression yields the conclusion \begin{equation} \lim_{n \to \infty} \prod_{\ell=0}^{K_n-1} \left( 1-\frac{\ell}{P_n} \right) = 1 \end{equation} by virtue of (\ref{eq:Condition2}), and this readily implies (\ref{eq:P_P-l_to_1}) via (\ref{eq:Identity+A}). \myendpf The following bound, which is established in Section \ref{sec:FinalStep}, proves crucial for proving the convergence (\ref{eq:FinalRatioToZero}) under the assumptions of Proposition \ref{prop:SecondStep}.
\begin{lemma} {\sl For any scaling $P,K: \mathbb{N}_0 \rightarrow \mathbb{N}_0$ satisfying (\ref{eq:Condition1})-(\ref{eq:Condition2}), we have \begin{equation} F(\theta_n) \leq K_n ^ {4} P_n ^ {4 K_n -3} \label{eq:F_theta_bound} \end{equation} for all $n$ sufficiently large in $\mathbb{N}_0$.} \label{lem:F_theta_bound} \end{lemma} While Lemma \ref{lem:F_theta_bound} is established in Section \ref{sec:FinalStep}, the proof of Proposition \ref{prop:SecondStep} can now be completed: Pick a scaling $P,K: \mathbb{N}_0 \rightarrow \mathbb{N}_0$ satisfying (\ref{eq:Condition1})-(\ref{eq:Condition2}) and assume that (\ref{eq:ConditionForOne}) holds. By Lemma \ref{lem:F_theta_bound} we get \begin{equation} \frac{1}{n^2 \beta ^ 2 (\theta_n)} \cdot \frac{F(\theta_n)}{P_n^{4K_n}} \leq \frac{1}{n^2 \beta ^ 2 (\theta_n)} \cdot \frac{K_n^4}{P_n^{3}} \label{eq:K^4/P^3_to_zero_pf_step1} \end{equation} for all $n$ sufficiently large in $\mathbb{N}_{0}$. Invoking Proposition \ref{prop:AsymptoticEquivalence2} we then conclude that \begin{eqnarray} \frac{1}{n^2 \beta ^ 2 (\theta_n)} \cdot \frac{K_n^4}{P_n^{3}} &\sim& \frac{1}{n^2 \tau (\theta_n)^ 2 } \cdot \frac{K_n^4}{P_n^{3}} \nonumber \\ &=& \frac{K_n^4}{n^2 P_n^3 \left( \frac{K_n^3}{P_n^2} + \left(\frac{K_n^2 }{P_n}\right)^3 \right)^2 } \nonumber \\ &\leq& \frac{K_n^4}{n^2 P_n^3 \left(\frac{K_n^3}{P_n^2}\right)^2} \nonumber \\ &=& \left ( n^2 \frac{K_n^2}{P_n} \right )^{-1} . \label{eq:AsymptoticInequality} \end{eqnarray} The validity of (\ref{eq:FinalRatioToZero}) follows upon letting $n$ go to infinity in (\ref{eq:K^4/P^3_to_zero_pf_step1}) and using (\ref{eq:AsymptoticInequality}) together with the consequence (\ref{eq:n^2_1_q_to_infB}) of (\ref{eq:ConditionForOne}) discussed in the proof of Proposition \ref{prop:n^2_1_q_to_inf}. The proof of Proposition \ref{prop:SecondStep} is completed with the help of Proposition \ref{prop:FinalRatioToZero}. 
\myendpf \section{Towards Lemma \ref{lem:F_theta_bound}} \label{sec:FinalStep} We are left with proving the key Lemma \ref{lem:F_theta_bound}. To do so we will need to exploit the structure of $F(\theta)$: Thus, fix positive integers $K$ and $P$ such that $3K \leq P$, and return to (\ref{eq:F(teta)_defn}). For each $k=0,1, \ldots , 4$, easy algebra shows that \begin{eqnarray} \lefteqn{ (K!)^4 {P \choose K } {K \choose k} {P-K \choose K-k} {P-2K+k \choose K}^2 } & & \nonumber \\ &=& \frac{P!}{k! (P-2K+k)!} \cdot \left ( \frac{(K!)^2 (P-2K+k)!}{K!(K-k)! (P-3K+k)!} \right )^2 \nonumber \\ &=& \frac{P!(P-2K+k)!}{k!} \cdot \left ( \frac{K!}{(K-k)! (P-3K+k)!} \right )^2 \nonumber \\ &=& k! {K \choose k}^2 \cdot b_{K,k} (\theta) \end{eqnarray} with \begin{equation} b_{K,k} (\theta) := \frac{P!(P-2K+k)!}{((P-3K+k)!)^2} . \label{eq:b_k(theta)} \end{equation} Next, it is plain that \begin{equation} b_K(\theta) := (K!)^4 {P-K \choose K}^4 = \left ( \frac{(P-K)!}{(P-2K)!} \right)^4 . \label{eq:b(theta)} \end{equation} Reporting these facts into (\ref{eq:F(teta)_defn}) we readily conclude \begin{eqnarray} F(\theta) &=& \sum_{k=0}^4 k! {K \choose k}^2 \cdot \frac{P!(P-2K+k)!}{((P-3K+k)!)^2} - \left ( \frac{(P-K)!}{(P-2K)!} \right )^4 \nonumber \\ &=& \left ( \sum_{k=0}^4 k! {K \choose k}^2 \cdot b_{K,k}(\theta) \right ) - b_K(\theta ) . \label{eq:F(teta)_A} \end{eqnarray} By direct inspection, using (\ref{eq:Factors1}) and (\ref{eq:Factors2}) in Appendix \ref{Appendix:C}, we check that $F(\theta)$ can be written as a polynomial in $P$ (of order $4K$), namely \begin{equation} F(\theta) = \sum_{\ell=0}^{4K} a_{4K -\ell} (K) P^\ell = \sum_{\ell=0}^{4K} a_{\ell} (K) P^{4K-\ell} \label{eq:F(theta)_polynom} \end{equation} where the coefficients are {\em integers} which depend on $\theta$ only through $K$. The first six coefficients can be evaluated explicitly. 
\begin{lemma} {\sl With positive integers $K$ and $P$ such that $3K \leq P$, we have \begin{equation} a_0(K)=a_1(K)=a_2(K)=0 \end{equation} and \begin{equation} a_3(K)=K^4 \label{eq:a_3} \end{equation} whereas \begin{equation} a_4(K)=-6K^6+6K^5-K^4 \label{eq:a_4} \end{equation} and \begin{eqnarray} a_5(K) &=& - \frac{1}{120}K^{10} + \frac{1}{6}K^9 + \frac{199}{12}K^8 - 34K^7 + \frac{1207}{120}K^6 \nonumber \\ & & ~ + \frac{161}{6}K^5 - \frac{209}{6}K^4 + 20K^3 - \frac{24}{5}K^2. \label{eq:a_5} \end{eqnarray} } \label{lem:first_six_coef} \end{lemma} The fact that (\ref{eq:a_5}) defines a polynomial expression in $K$ with rational coefficients does not contradict the integer nature of $a_5(K)$. In what follows we shall find it convenient to write \begin{equation} a_5^\star(K) = a_5(K) + \frac{1}{240} K^{10} . \label{eq:a_5Star} \end{equation} The proof of Lemma \ref{lem:first_six_coef} is tedious and is given in Appendix \ref{Appendix:C}. For the remaining coefficients, we rely on the following bounds which are also derived in Appendix \ref{Appendix:C}. \begin{lemma} {\sl With positive integers $K$ and $P$ such that $3K \leq P$, we have \begin{equation} | a_\ell (K) | \leq 2 \cdot (12 K^2) ^\ell, \quad \ell = 0, 1, \ldots, 4K. \label{eq:a_i_unif_bound} \end{equation} } \label{lem:a_i_unif_bound} \end{lemma} As expected, these bounds are in agreement with the exact expressions obtained in Lemma \ref{lem:first_six_coef} for $\ell =0, 1, \ldots , 5$. A proof of Lemma \ref{lem:F_theta_bound} can now be given: Pick a scaling $P,K: \mathbb{N}_0 \rightarrow \mathbb{N}_0$ satisfying (\ref{eq:Condition1})-(\ref{eq:Condition2}) and replace $\theta$ by $\theta_n$ in (\ref{eq:F(theta)_polynom}) according to this scaling.
As Lemma \ref{lem:first_six_coef} implies \begin{equation} F(\theta_n) = K_n^4 P_n^{4K_n-3} + \sum_{\ell=4}^{4K_n} a_\ell(K_n) P_n^{4K_n-\ell} \label{eq:F(theta)Reformulated} \end{equation} for all $n=2,3, \ldots $, the bound (\ref{eq:F_theta_bound}) follows if we show that \begin{equation} \sum_{\ell=4}^{4K_n} a_\ell(K_n) P_n^{4K_n-\ell} \leq 0 \label{eq:F_theta_bound_prf_start} \end{equation} for all $n$ sufficiently large in $\mathbb{N}_0$. To do so, apply (\ref{eq:a_i_unif_bound}) and use elementary arguments to get \begin{eqnarray} \left | \sum_{\ell=6}^{4K_n} a_\ell(K_n) P_n^{4K_n-\ell} \right | &\leq& \sum_{\ell=6}^{4K_n} \left | a_\ell(K_n) \right | P_n^{4K_n-\ell} \nonumber \\ &\leq& \sum_{\ell=6}^{4K_n} 2 \cdot (12 K_n^2)^\ell P_n^{4K_n-\ell} \nonumber \\ &=& 2 P_n^{4K_n} \sum_{\ell=6}^{4K_n} \left ( \frac{ 12 K_n^2 }{P_n} \right )^\ell \nonumber \\ &\leq& 2 P_n^{4K_n} \left ( \frac{ 12 K_n^2 }{P_n} \right )^6 \cdot \sum_{\ell=0}^{\infty} \left ( \frac{ 12 K_n^2 }{P_n} \right )^\ell \nonumber \\ &=& 2 P_n^{4K_n} \left ( \frac{ 12 K_n^2 }{P_n} \right )^6 \cdot \left ( 1 - \frac{ 12 K_n^2 }{P_n} \right )^{-1} \end{eqnarray} for all $n$ large enough to ensure $12 K_n^2 < P_n$, say $n \geq n_1^\star$ for some finite integer $n_1^\star$; this is a simple consequence of condition (\ref{eq:Condition1})-(\ref{eq:Condition2}). 
On that range, going back to (\ref{eq:F_theta_bound_prf_start}), we find \begin{eqnarray} \lefteqn{ \sum_{\ell=4}^{4K_n} a_\ell(K_n) P_n^{4K_n-\ell} } & & \nonumber \\ &\leq& a_4(K_n) P_n^{4K_n-4} + a_5(K_n) P_n^{4K_n-5} + \left | \sum_{\ell=6}^{4K_n} a_\ell(K_n) P_n^{4K_n-\ell} \right | \nonumber \\ &\leq& a_4(K_n) P_n^{4K_n-4} + a_5(K_n) P_n^{4K_n-5} + 2 P_n^{4K_n} \left ( \frac{ 12 K_n^2 }{P_n} \right )^6 \cdot \left ( 1 - \frac{ 12 K_n^2 }{P_n} \right )^{-1} \nonumber \\ &=& P_n^{4K_n-5} \cdot L_n \label{eq:InequalityGeom1} \end{eqnarray} where \[ L_n := a_{4} (K_n) P_n + a_5(K_n) + 2 (12)^6 K^{10}_n \cdot \frac{K_n^2} { P_n } \cdot \left ( 1 - \frac{ 12 K_n^2 }{P_n} \right )^{-1} . \] Therefore, (\ref{eq:F_theta_bound_prf_start}) will hold for all $n$ sufficiently large in $\mathbb{N}_0$ provided \begin{equation} L_n \leq 0 \label{eq:InequalityToSHOW+a} \end{equation} for all $n$ sufficiently large in $\mathbb{N}_0$. This last statement will be established by showing that $L=-\infty$ where \[ L := \limsup_{n \rightarrow \infty} L_n . \] That $L=-\infty$ can be seen as follows: We begin with the bound \begin{equation} a_{4} (K_n) = - K_n^4 (6K_n (K_n-1) +1) \leq -K_n^4 \label{eq:InequalityC} \end{equation} for all $n=1,2, \ldots $. Next, condition (\ref{eq:Condition1})-(\ref{eq:Condition2}) implies \begin{equation} \lim_{n \rightarrow \infty} \frac{K_n^2} { P_n } \cdot \left ( 1 - \frac{ 12 K_n^2 }{P_n} \right )^{-1} = 0 , \label{eq:Limit=Zero} \end{equation} whence there exists some finite integer $n_2^{\star}$ such that \begin{equation} 2 (12)^6 \frac{K_n^2} { P_n } \cdot \left ( 1 - \frac{ 12 K_n^2 }{P_n} \right )^{-1} \leq \frac{1}{240} , \quad n \geq n_2^{\star} . \label{eq:InequalityD} \end{equation} Now, set $n^\star = \max \left ( n_1^\star, n_2^\star \right )$, and recall the definition (\ref{eq:a_5Star}). 
On the range $n \geq n^\star$, both inequalities (\ref{eq:InequalityGeom1}) and (\ref{eq:InequalityD}) hold, and we obtain \begin{eqnarray} \lefteqn{a_{4} (K_n) P_n + a_5(K_n) + 2 (12)^6 K^{10}_n \cdot \frac{K_n^2} { P_n } \cdot \left ( 1 - \frac{ 12 K_n^2 }{P_n} \right )^{-1} } & & \nonumber \\ &=& a_{4} (K_n) P_n + a_5^\star (K_n) + \left ( - \frac{1}{240} + 2 (12)^6 \cdot \frac{K_n^2} { P_n } \cdot \left ( 1 - \frac{ 12 K_n^2 }{P_n} \right )^{-1} \right ) K_n^{10} \nonumber \\ &\leq& -K_n^4 P_n + a_5^\star (K_n) \label{eq:InequalityE} \end{eqnarray} upon making use of (\ref{eq:InequalityC}). To conclude, set \begin{equation} L^\star := \limsup_{n \rightarrow \infty} \left ( a_5^\star (K_n) \right ) \label{eq:InequalityF} \end{equation} and note that $L^\star$ is necessarily an element of $[-\infty, \infty )$, i.e., it is never the case that $L^\star = \infty$. This follows easily from the fact that the mapping $\mathbb{R}_+ \rightarrow \mathbb{R}: x \rightarrow a_5^\star (x) $ is a polynomial of degree $10$ whose leading coefficient ($-\frac{1}{240}$) is negative. As we recall (\ref{eq:RatioConditionStrong+Consequence2}) under (\ref{eq:Condition1})-(\ref{eq:Condition2}), it is now plain from (\ref{eq:InequalityE}) that $L = -\infty$ by standard properties of the lim sup operation. \myendpf Careful inspection of the proof of Proposition \ref{prop:SecondStep} given at the end of Section \ref{sec:SecondReductionStep} shows that the inequality (\ref{eq:F_theta_bound}) of Lemma \ref{lem:F_theta_bound} could be replaced without prejudice by the following weaker statement: For any scaling $P,K: \mathbb{N}_0 \rightarrow \mathbb{N}_0$ satisfying (\ref{eq:Condition1})-(\ref{eq:Condition2}), there exists some positive constant $C$ such that \begin{equation} F(\theta_n) \leq C K_n ^ {4} P_n ^ {4 K_n -3} \label{eq:F_theta_bound+Weaker} \end{equation} for all $n$ sufficiently large in $\mathbb{N}_0$.
Now, from only the knowledge of the first four coefficients in Lemma \ref{lem:first_six_coef} we can already conclude that \begin{equation} \lim_{P \rightarrow \infty} \frac{F(K,P)}{K^4 P^{4K-3} } = 1 \end{equation} for {\em each} $K=1,2, \ldots $, so that for each $\varepsilon > 0$ there exists a finite integer $P^\star ( \varepsilon, K) $ such that \begin{equation} F(K,P) \leq \left ( 1 + \varepsilon \right ) K^4 P^{4K-3}, \quad P \geq P^\star ( \varepsilon, K) \end{equation} Unfortunately, the threshold $P^\star ( \varepsilon, K) $ is not known to be uniform with respect to $K$, and the approach does {\em not} necessarily imply (\ref{eq:F_theta_bound+Weaker}) (with $C = 1 + \varepsilon$) {\em unless} the sequence $K: \mathbb{N}_0 \rightarrow \mathbb{N}_0$ is bounded. This technical difficulty is at the root of why more information on the coefficients $a_4(K)$ and $a_5(K)$ (as provided in Lemma \ref{lem:first_six_coef}) is needed, and paves the way for the subsequent arguments behind Lemma \ref{lem:F_theta_bound}.
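The combinatorial identities and coefficient formulas above lend themselves to a direct machine check. The following self-contained Python sketch (our verification aid, not part of the original argument) recomputes $F(\theta)$ in three independent ways (from the defining expression (\ref{eq:F(teta)_defn}), from the closed form (\ref{eq:F(teta)_A}), and as an exact integer polynomial in $P$), and then confirms the first coefficients of Lemma \ref{lem:first_six_coef} for small $K$, together with sample instances of the bound $c_k(\theta)\leq(1-q(\theta))^k$ of Lemma \ref{lem:c_k_leq_1-q^k}, with $c_k(\theta)$ and $q(\theta)$ read off from the expansion (\ref{eq:SecondStepExpression}).

```python
from fractions import Fraction
from math import comb, factorial

def F_def(K, P):
    # F(theta) from (eq:F(teta)_defn); terms with k > K vanish since C(K,k) = 0
    s = sum(comb(P, K) * comb(K, k) * comb(P - K, K - k) * comb(P - 2*K + k, K)**2
            for k in range(min(K, 4) + 1))
    return factorial(K)**4 * (s - comb(P - K, K)**4)

def F_closed(K, P):
    # closed form (eq:F(teta)_A): sum_k k! C(K,k)^2 b_{K,k} - b_K
    def b(k):  # b_{K,k} = P! (P-2K+k)! / ((P-3K+k)!)^2, an integer
        return (factorial(P) // factorial(P - 3*K + k)) * \
               (factorial(P - 2*K + k) // factorial(P - 3*K + k))
    bK = (factorial(P - K) // factorial(P - 2*K))**4
    return sum(factorial(k) * comb(K, k)**2 * b(k)
               for k in range(min(K, 4) + 1)) - bK

def polymul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def ff_poly(shift, m):
    # prod_{i=0}^{m-1} (P - shift - i) as a coefficient list, lowest degree first
    p = [1]
    for i in range(m):
        p = polymul(p, [-(shift + i), 1])
    return p

def F_poly(K):
    # exact coefficients of F as a polynomial in P; F_poly(K)[4K - l] = a_l(K)
    tot = [0] * (4*K + 1)
    for k in range(min(K, 4) + 1):
        term = polymul(ff_poly(0, 3*K - k), ff_poly(2*K - k, K))  # = b_{K,k}(P)
        c = factorial(k) * comb(K, k)**2
        for i, t in enumerate(term):
            tot[i] += c * t
    bK = ff_poly(K, K)                       # (P-K)!/(P-2K)!
    b4 = polymul(polymul(bK, bK), polymul(bK, bK))
    for i, t in enumerate(b4):
        tot[i] -= t
    return tot

def c_k(K, P, k):
    # c_k(theta) as read off from (eq:SecondStepExpression)
    return Fraction(comb(K, k) * comb(P - K, K - k), comb(P, K)) * \
           Fraction(comb(P - 2*K + k, K), comb(P, K))**2

def q(K, P):
    return Fraction(comb(P - K, K), comb(P, K))

# cross-check the algebra for small admissible pairs (K, P) with 3K <= P
for (K, P) in [(2, 10), (3, 15), (4, 20)]:
    assert F_def(K, P) == F_closed(K, P)
    assert sum(c * P**i for i, c in enumerate(F_poly(K))) == F_def(K, P)
a = F_poly(2)                                 # a_l(2) = a[8 - l]
assert a[8] == a[7] == a[6] == 0              # a_0 = a_1 = a_2 = 0
assert a[5] == 2**4                           # a_3(K) = K^4
assert a[4] == -6*2**6 + 6*2**5 - 2**4        # a_4(2) = -208
# sample instances of c_k(theta) <= (1 - q(theta))^k
for (K, P) in [(2, 10), (3, 15)]:
    for k in range(1, K + 1):
        assert c_k(K, P, k) <= (1 - q(K, P))**k
print("all checks passed")
```

For $K=1$ the script reproduces $F(\theta)=P-1$, in agreement with $a_3(1)=1$ and $a_4(1)=-1$; for $K=2$ it yields $a_3(2)=16$, $a_4(2)=-208$ and $a_5(2)=1056$, matching (\ref{eq:a_3})-(\ref{eq:a_5}).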
\section{Introduction} \label{sec:intro} Despite the great success of the Standard Model (SM), marked by the discovery of the $125~{\mbox{GeV}}$ Higgs-like boson~\cite{Aad:2012tfa,Chatrchyan:2012xdj} and the on-going measurements of its properties, how the SM is embedded into a larger theory still remains a mystery. Since the Higgs boson mass parameter is in general not protected against radiative corrections, a naive embedding would signal a high sensitivity of infrared (IR) parameters (the electroweak scale and the Higgs boson mass) to ultraviolet (UV) parameters (i.e. physical parameters defined at a high scale). Although this fine-tuned situation is logically possible, or might be explained to some extent by anthropic reasoning~\cite{Schellekens:2013bpa,Donoghue:2016tjk}, it is nevertheless natural to conjecture the existence of some systematic mechanism which protects the Higgs boson mass parameter from severe radiative instability. A well-known example of such a systematic mechanism is supersymmetry, which has the merit of being weakly-coupled and thus offers better calculability compared to scenarios based on strong dynamics. However, supersymmetry requires the introduction of a large number of new degrees of freedom, and a large number of new parameters associated with them, making the model quite cumbersome. None of the new degrees of freedom have been observed. It is therefore well-motivated to consider alternative but simpler mechanisms with weakly-coupled dynamics in their range of validity.
One candidate for such an alternative is the Little Higgs mechanism~\cite{ArkaniHamed:2001nc, ArkaniHamed:2002pa,ArkaniHamed:2002qx,ArkaniHamed:2002qy}\footnote{We refer the reader to ref.~\cite{Schmaltz:2005ky,Perelstein:2005ka} for reviews of Little Higgs models and ref.~\cite{Dercks:2018hgz,Reuter:2012sd,Reuter:2013iya,Han:2013ic} for some recent phenomenological analyses of Little Higgs models.}, in which the Higgs boson is a Goldstone boson of some spontaneous global symmetry breaking. The global symmetry is also explicitly broken in a collective manner\footnote{More specifically, the global symmetry is completely (explicitly) broken by a collection of spurions but not by any single spurion~\cite{ArkaniHamed:2002qy}.} such that the Higgs boson acquires a mass and at the same time the model is radiatively more stable. A very simple implementation of this collective symmetry breaking (CSB) idea is the Simplest Little Higgs (SLH) model~\cite{Kaplan:2003uc,Schmaltz:2004de}, in which the electroweak gauge group is enlarged to $SU(3)_L\times U(1)_X$, and two scalar triplets are introduced to realize the global symmetry breaking pattern \begin{align} & [SU(3)_1\times U(1)_1]\times[SU(3)_2\times U(1)_2] \nonumber \\ & \rightarrow[SU(2)_1\times U(1)_1]\times[SU(2)_2\times U(1)_2] \label{eq:gsb} \end{align} The global symmetry is also explicitly broken by gauge and Yukawa interactions, but in a collective manner to improve the radiative stability of the scalar sector. The particle content is quite economical. In particular, in the low energy scalar sector there exist only two physical degrees of freedom, one of which (denoted $H$) could be identified with the $125~{\mbox{GeV}}$ Higgs-like particle, while the other is a CP-odd scalar $\eta$ which is referred to as a pseudo-axion in the literature~\cite{Kilian:2004pp,Kilian:2006eh}.
In the SLH, the pseudo-axion $\eta$ is closely related to the electroweak symmetry breaking (EWSB) and therefore studying its phenomenology is well-motivated. According to the hidden mass relation derived in ref.~\cite{Cheung:2018iur}, the $\eta$ mass $m_\eta$ is anti-correlated with the top partner mass $m_T$, which is in turn related to the degree of fine-tuning in the model. The hidden mass relation is derived within an approach consistent with the continuum effective field theory (CEFT) and does not rely on assumptions about the contribution from the physics at the cutoff. Although the phenomenology of the $\eta$ particle has been studied by quite a few papers (e.g.~\cite{Kilian:2004pp,Kilian:2006eh, Cheung:2006nk,Cheung:2008zu,Han:2013ic}), their treatment was not based on the hidden mass relation, and most of the papers were written before the $125~{\mbox{GeV}}$ boson was discovered. It is thus timely to revisit the status of $\eta$ phenomenology in light of the discovery of the $125~{\mbox{GeV}}$ boson, taking into account the properly derived hidden mass relation and focusing on the parameter space favored by naturalness considerations. There is another important reason that warrants a reanalysis of the $\eta$ phenomenology. The SLH is usually written as a gauged nonlinear sigma model, in which the EWSB can be parametrized through vacuum misalignment. However, the vacuum misalignment also leads to the fact that, in the usual parametrization of the two scalar triplets, there exist scalar kinetic terms that are not canonically-normalized, and vector-scalar two-point transitions that are ``unexpected''~\cite{He:2017jjx}. A further field rotation, including an appropriate gauge-fixing procedure, is thus required to properly diagonalize the vector-scalar sector of the SLH model.
This subtlety had been overlooked in all related papers before ref.~\cite{He:2017jjx}, and if one carries out a proper diagonalization of the bosonic sector of the SLH, some of the $\eta$-related couplings will turn out to be different from what has been obtained in previous literature. This is the case for both the $ZH\eta$ coupling and the coupling of $\eta$ to a pair of SM fermions. The occurrence of the mass eigenstate antisymmetric $ZH\eta$ vertex (i.e. $Z_\mu(H\partial^\mu\eta-\eta\partial^\mu H)$) is postponed to $\mathcal{O}(\xi^3)$ (with $\xi\equiv\frac{v}{f}$, $v\approx 246~{\mbox{GeV}}$ and $f$ is the global symmetry breaking scale of Eq.~\eqref{eq:gsb}), and the couplings of $\eta$ to a pair of SM charged leptons, and to $b\overline{b},c\overline{c},u\overline{u}$ are found to vanish to all orders in $\xi$. This leads to significant changes in the $\eta$ phenomenology, which will be studied in detail in this work. When one tries to derive the $\eta$-related Lagrangian in the SLH, symmetric vector-scalar-scalar (VSS) vertices, e.g. $Z_\mu(H\partial^\mu\eta+\eta\partial^\mu H)$ naturally appear, which is a feature that is often present in models based on a nonlinearly-realized scalar sector. The effects of such symmetric VSS vertices contain some subtleties which, to our knowledge, have not been discussed before in literature. Therefore, we devote one section to the analysis of symmetric VSS vertices, which could also be helpful to clarify similar situations in other nonlinearly-realized models. In this work we do not aim to give a complete characterization of the $\eta$ phenomenology, which could be very complicated in certain corners of parameter space. Instead, we focus our attention on the parameter space favored by naturalness considerations. More specifically, we will consider $\eta$ mass in the region $2m_t\lesssim m_\eta\lesssim 1~{\mbox{TeV}}$, which is favored by naturalness.
We then calculate the $\eta$ decay and production at future high energy hadron colliders in various channels. It turns out that at the $14~{\mbox{TeV}}$ (HL-)LHC the detection of $\eta$ is quite challenging due to various suppression mechanisms. A $pp$ collider with higher energy and luminosity, such as the $27~{\mbox{TeV}}$ HE-LHC, or even the $100~{\mbox{TeV}}$ FCC-hh or SppC, is therefore motivated to capture a trace of such a pNGB. The paper is organized as follows. In Section~\ref{sec:slh} we review the basic ingredients of the SLH, including the crucial hidden mass relation obtained from a CEFT analysis, and present the mass eigenstate Lagrangian relevant for phenomenological studies. In Section~\ref{sec:vss} we clarify the effect of symmetric VSS vertices. Then in Section~\ref{sec:ewpt} we derive important constraints from electroweak precision observables relevant for the pseudo-axion phenomenology. Section~\ref{sec:prod} is dedicated to the study of $\eta$ decay and production at hadron colliders. In Section~\ref{sec:dnc} we present the discussion and conclusions. \section{The Simplest Little Higgs} \label{sec:slh} \subsection{Overview of the Simplest Little Higgs} In the SLH, the electroweak gauge group is enlarged to $SU(3)_L\times U(1)_X$. Two scalar triplets $\Phi_1,\Phi_2$ are introduced to realize the spontaneous global symmetry breaking pattern in Eq.~\eqref{eq:gsb}. They are parameterized as \begin{align} \Phi_1=\exp\left(\frac{i\Theta'}{f}\right) \exp\left(\frac{it_\beta\Theta}{f}\right) \begin{pmatrix} 0 \\ 0 \\ fc_\beta \end{pmatrix} \label{eq:phi1} \\ \Phi_2=\exp\left(\frac{i\Theta'}{f}\right) \exp\left(-\frac{i\Theta}{ft_\beta}\right) \begin{pmatrix} 0 \\ 0 \\ fs_\beta \end{pmatrix} \label{eq:phi2} \end{align} Here we have introduced the shorthand notation $s_\beta\equiv\sin\beta,c_\beta\equiv\cos\beta,t_\beta\equiv\tan\beta$. $f$ is the Goldstone decay constant.
$\Theta$ and $\Theta'$ are $3\times 3$ matrix fields, parameterized as \begin{align} \Theta=\frac{\eta}{\sqrt{2}}+ \begin{pmatrix} \textbf{0}_{2\times 2} & h \\ h^\dagger & 0 \end{pmatrix},\quad \Theta'=\frac{\zeta}{\sqrt{2}}+ \begin{pmatrix} \textbf{0}_{2\times 2} & k \\ k^\dagger & 0 \end{pmatrix} \label{eq:theta} \end{align} $\eta$ is the pseudo-axion, and $h$ and $k$ are parameterized as ($v$ denotes the vacuum expectation value (vev) of the Higgs doublet) \begin{align} h & =\begin{pmatrix} h^0 \\ h^- \end{pmatrix},\quad h^0=\frac{1}{\sqrt{2}}(v+H-i\chi) \label{eq:hdoub} \\ k & =\begin{pmatrix} k^0 \\ k^- \end{pmatrix},\quad k^0=\frac{1}{\sqrt{2}}(\sigma-i\omega) \label{eq:kdoub} \end{align} For future convenience, we introduce the notation \begin{equation} \hat{h}\equiv(h^\dagger h)^{1/2} \end{equation} We note that the spontaneous global symmetry breaking Eq.~\eqref{eq:gsb} should deliver 10 Goldstone bosons, which are parameterized here in $\Theta$ and $\Theta'$. The electroweak gauge group $SU(3)_L\times U(1)_X$ will eventually break to $U(1)_{EM}$, and therefore 8 Goldstone bosons will be eaten to make the associated gauge bosons massive. Only two Goldstone bosons remain physical, parameterized here as $h$ and $\eta$. The parametrization of these Goldstone fields actually has some freedom, for which we refer the reader to ref.~\cite{Cheung:2018iur}. In the SLH, under the full gauge group $SU(3)_C\times SU(3)_L\times U(1)_X$, $\Phi_1$ and $\Phi_2$ have quantum number $(\textbf{1},\textbf{3})_{-\frac{1}{3}}$.
The gauge kinetic term of $\Phi_1$ and $\Phi_2$ can thus be written as\footnote{ We note that Eq.~\eqref{eq:lgk} automatically satisfies the requirement of CSB.} \begin{equation} \mathcal{L}_{gk}=(D_\mu\Phi_1)^\dagger(D^\mu\Phi_1)+ (D_\mu\Phi_2)^\dagger(D^\mu\Phi_2) \label{eq:lgk} \end{equation} in which the covariant derivative can be expressed as \footnote{In this paper our convention agrees with Ref.~\cite{delAguila:2011wk} but differs from Ref.~\cite{Han:2005ru}. The conversion between the two conventions is discussed in Appendix~\ref{sec:cc}.} \begin{equation} D_\mu=\partial_\mu-igA_\mu^a T^a+ig_xQ_xB_\mu^x,\quad g_x=\frac{gt_W}{\sqrt{1-t_W^2/3}} \end{equation} In the above equation, $A_\mu^a$ and $B_\mu^x$ denote $SU(3)_L$ and $U(1)_X$ gauge fields, respectively. $g$ and $g_x$ denote the coupling constants of $SU(3)_L$ and $U(1)_X$ gauge groups, respectively. It is convenient to trade $g_x$ for $t_W\equiv\tan\theta_W$. $T^a=\frac{\lambda^a}{2}$ where $\lambda^a,a=1,...,8$ denote the Gell-Mann matrices. For $\Phi_1,\Phi_2$, $Q_x=-\frac{1}{3}$. 
Following ref.~\cite{delAguila:2011wk}, we parameterize the $SU(3)_L$ gauge bosons as \begin{align} A_\mu^a T^a &=\frac{A_\mu^3}{2} \begin{pmatrix} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 0 \end{pmatrix} \nonumber \\ & +\frac{A_\mu^8}{2\sqrt{3}} \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -2 \end{pmatrix} +\frac{1}{\sqrt{2}} \begin{pmatrix} 0 & W_\mu^+ & Y_\mu^0 \\ W_\mu^- & 0 & X_\mu^- \\ Y_\mu^{0\dagger} & X_\mu^+ & 0 \end{pmatrix} \end{align} with the \textit{first-order} neutral gauge boson mixing relation ($c_W\equiv\cos\theta_W,s_W\equiv\sin\theta_W$) \begin{align} \begin{pmatrix} A^3 \\ A^8 \\ B^x \end{pmatrix} = \begin{pmatrix} 0 & c_W & -s_W \\ \sqrt{1-\frac{t_W^2}{3}} & \frac{s_W t_W}{\sqrt{3}} & \frac{s_W}{\sqrt{3}} \\ -\frac{t_W}{\sqrt{3}} & s_W\sqrt{1-\frac{t_W^2}{3}} & c_W\sqrt{1-\frac{t_W^2}{3}} \end{pmatrix} \begin{pmatrix} Z' \\ Z \\ A \end{pmatrix} \label{eq:gbmixing} \end{align} Since the electroweak gauge group is enlarged to $SU(3)_L\times U(1)_X$, it is also necessary to enlarge the fermion sector in order that fermions transform properly under the enlarged group. We adopt the elegant anomaly-free embedding proposed in ref.~\cite{Kong:2003vm,Schmaltz:2004de,Kong:2004cv}. In the lepton Yukawa sector, the SM left-handed lepton doublets are enlarged to $SU(3)_L$ triplets $L_m=(\nu_L,\ell_L,iN_L)_m^T$ with $Q_x=-\frac{1}{3}$ ($m=1,2,3$ is the family index). There are also right-handed singlet lepton fields $\ell_{Rm}$ with $Q_x=-1$ and $N_{Rm}$ with $Q_x=0$. 
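As a quick consistency check (ours, not part of the original text), the first-order mixing matrix of Eq.~\eqref{eq:gbmixing} relating $(A^3,A^8,B^x)$ to $(Z',Z,A)$ is orthogonal for any weak mixing angle with $t_W^2<3$, as the following Python snippet verifies numerically:

```python
import math

def mixing_matrix(sw):
    """First-order (A^3, A^8, B^x) -> (Z', Z, A) rotation of Eq. (eq:gbmixing)."""
    cw = math.sqrt(1.0 - sw**2)
    tw = sw / cw
    r = math.sqrt(1.0 - tw**2 / 3.0)   # requires t_W^2 < 3
    s3 = math.sqrt(3.0)
    return [[0.0,      cw,            -sw],
            [r,        sw * tw / s3,  sw / s3],
            [-tw / s3, sw * r,        cw * r]]

def is_orthogonal(M, tol=1e-12):
    # M M^T should be the 3x3 identity matrix
    for i in range(3):
        for j in range(3):
            dot = sum(M[i][k] * M[j][k] for k in range(3))
            if abs(dot - (1.0 if i == j else 0.0)) > tol:
                return False
    return True

# physical value sin^2(theta_W) ~ 0.231; orthogonality holds for any admissible s_W
assert is_orthogonal(mixing_matrix(math.sqrt(0.231)))
```

Orthogonality guarantees that the rotated neutral gauge fields keep canonically normalized kinetic terms at this order.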
The lepton Yukawa Lagrangian can be written as~\cite{delAguila:2011wk} \begin{equation} \mathcal{L}_{LY}=i\lambda_N^m\bar{N}_{Rm}\Phi_2^\dagger L_m +\frac{i\lambda_\ell^{mn}}{\Lambda}\bar{\ell}_{Rm}\epsilon_{ijk}\Phi_1^i\Phi_2^j L_n^k +\text{h.c.} \label{eq:lly} \end{equation} In the quark sector, we have the following field content \begin{align} Q_1 & =(d_L,-u_L,iD_L)^T, \quad d_R,\quad u_R,\quad D_R \\ Q_2 & =(s_L,-c_L,iS_L)^T, \quad s_R,\quad c_R,\quad S_R \\ Q_3 & =(t_L,b_L,iT_L)^T, \quad t_R,\quad b_R,\quad T_R \end{align} Here $Q_1,Q_2$ transform under $\bar{\textbf{3}}$ representation of $SU(3)_L$ with $Q_x=0$. $Q_3$ transforms under $\textbf{3}$ representation of $SU(3)_L$ with $Q_x=\frac{1}{3}$. The right-handed quark fields are all $SU(3)_L$ singlets with various $U(1)_X$ charges. More specifically, $u_R,c_R,t_R,T_R$ carry $Q_x=\frac{2}{3}$ while $d_R,s_R,b_R,D_R,S_R$ carry $Q_x=-\frac{1}{3}$. The quark Yukawa Lagrangian can be written as~\cite{delAguila:2011wk} \begin{align} \mathcal{L}_{QY} & = i\lambda_1^t\bar{u}_{R3}^1\Phi_1^\dagger Q_3 +i\lambda_2^t\bar{u}_{R3}^2\Phi_2^\dagger Q_3 \nonumber \\ & +i\frac{\lambda_b^m}{\Lambda}\bar{d}_{Rm}\epsilon_{ijk}\Phi_1^i\Phi_2^jQ_3^k +i\lambda_1^{dn}\bar{d}_{Rn}^1Q_n^T\Phi_1 \nonumber \\ & +i\lambda_2^{dn}\bar{d}_{Rn}^2Q_n^T\Phi_2 +i\frac{\lambda_u^{mn}}{\Lambda}\bar{u}_{Rm}\epsilon_{ijk}\Phi_1^{*i}\Phi_2^{*j}Q_n^k +\text{h.c.} \label{eq:lqy} \end{align} In the above equation, $n=1,2$ is the family index for the first two generations of quark triplets. $d_{Rm}$ runs over $(d_R,s_R,b_R,D_R,S_R)$ and $u_{Rm}$ runs over $(u_R,c_R,t_R,T_R)$. $u_{R3}^1,u_{R3}^2$ are linear combinations of $t_R$ and $T_R$. $d_{Rn}^1,d_{Rn}^2$ are linear combinations of $d_R$ and $D_R$ for $n=1$ and of $s_R$ and $S_R$ for $n=2$. It is worth noting that in the dimension-4 part of the Eq.~\eqref{eq:lly} and Eq.~\eqref{eq:lqy} CSB is formally preserved. 
In contrast, in Eq.~\eqref{eq:lly} and Eq.~\eqref{eq:lqy}, the dimension-5 part formally violates CSB. Nevertheless the amount of violation is proportional to light fermion Yukawas and is thus negligible. We now turn to the scalar potential. Using a CEFT approach and combining tree level\footnote{At tree level we don't include a $(\Phi_1^\dagger\Phi_2)^2+\text{h.c.}$ term because it formally violates CSB. We note that introducing such a term may lead to spontaneous CP violation~\cite{Mao:2017hpp}. Furthermore, if both the $(\Phi_1^\dagger\Phi_2)^2+\text{h.c.}$ term and Majorana mass terms for $N_R$'s are introduced, the SLH light neutrino masses can be radiatively generated~\cite{delAguila:2005yi}.} and one-loop contributions, the scalar effective potential in the SLH is calculated to be~\cite{Cheung:2018iur} \begin{equation} V=-\mu^2(\Phi_1^\dagger\Phi_2+\Phi_2^\dagger\Phi_1) +\lambda |\Phi_1^\dagger\Phi_2|^2 +\Delta(\hat{h})\hat{h}^4 \label{eq:vrc} \end{equation} $\mu^2$ and $\lambda$ could be regarded as parameters to be determined from experiments, while $\Delta(\hat{h})$ is automatically finite, and could be expressed from Lagrangian parameters in the model \begin{align} \Delta(\hat{h}) & =\frac{3}{16\pi^2}\Bigg\{\lambda_t^4 \left[\ln\frac{M_T^2}{m_t^2(\hat{h})}-\frac{1}{2}\right] \nonumber \\ & -\frac{1}{8}g^4\left[\ln\frac{M_X^2}{m_W^2(\hat{h})}-\frac{1}{2}\right] \nonumber \\ & -\frac{1}{16}g^4(1+t_W^2)^2\left[\ln\frac{M_{Z'}^2}{m_Z^2(\hat{h})}-\frac{1}{2}\right] \Bigg\} \label{eq:dh} \end{align} $\lambda_t$ is defined as \begin{equation} \lambda_t\equiv\frac{\lambda_1^t\lambda_2^t} {\sqrt{\lambda_1^{t2} c_\beta^2+\lambda_2^{t2}s_\beta^2}} \label{eq:lt1} \end{equation} where $\lambda_1^t,\lambda_2^t$ are the two Yukawa couplings in the top sector, introduced in Eq.~\eqref{eq:lqy}. 
$M_T^2,M_X^2,M_{Z'}^2$ are defined as \begin{align} M_T^2 & \equiv(\lambda_1^{t2} c_\beta^2+\lambda_2^{t2}s_\beta^2)f^2 \\ M_X^2 & \equiv\frac{1}{2}g^2 f^2 \\ M_{Z'}^2 & \equiv\frac{2}{3-t_W^2}g^2 f^2 \label{eq:zpmass} \end{align} They are related to the physical squared masses of the relevant particles as follows \begin{align} M_T^2 & =m_T^2+m_t^2 \label{eq:MT} \\ M_X^2 & =m_X^2+m_W^2 \\ M_{Z'}^2 & =m_{Z'}^2+m_Z^2 \end{align} in which $m_T,m_t$ denote the physical masses of the heavy top $T$ and the top quark $t$, $m_X,m_W$ denote the physical masses of the $X$ boson and $W$ boson, and $m_{Z'},m_Z$ denote the physical masses of the $Z'$ boson and $Z$ boson, respectively. $m_t^2(\hat{h}), m_W^2(\hat{h}),m_Z^2(\hat{h})$ are field-dependent squared masses, for which we use the following leading-order (LO) expressions \begin{align} m_t^2(\hat{h}) & =\lambda_t^2\hat{h}^2 \\ m_W^2(\hat{h}) & =\frac{1}{2}g^2\hat{h}^2 \\ m_Z^2(\hat{h}) & =\frac{1}{2}g^2(1+t_W^2)\hat{h}^2 \end{align} With the above expressions for the scalar effective potential we are able to compute the electroweak vev, Higgs mass, pseudo-axion mass, etc. as functions of $\mu^2,\lambda$ and the other Lagrangian parameters of the model. Finally, we note that there of course exist gauge-invariant kinetic Lagrangians for the $SU(3)_L\times U(1)_X$ gauge fields and the fermion fields in the model, according to their representations. \subsection{Hidden Mass Relation, Unitarity and Naturalness} Before starting the phenomenological analysis of the SLH, it is important to note that there exist certain constraints that we have to take into account~\cite{Cheung:2018iur}. First, there exists a hidden mass relation which follows from an analysis of the scalar effective potential Eq.~\eqref{eq:vrc}. This is because, if we consider $g,t_W,\lambda_t$ as fixed, the scalar effective potential Eq.~\eqref{eq:vrc} is fully determined by five parameters, say $\mu^2,\lambda,f,t_\beta,m_T$. 
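As a rough numerical illustration, Eq.~\eqref{eq:dh} can be evaluated at the electroweak scale, where the field-dependent masses reduce to the physical ones. In the sketch below, the benchmark point $f=8~\text{TeV}$, $m_T=2~\text{TeV}$ and the SM-like values of $g$, $t_W^2$ and the masses are illustrative assumptions, not fits:

```python
import math

# Illustrative evaluation of Delta(h_hat) in Eq. (eq:dh) at the electroweak
# scale.  The benchmark f = 8 TeV, m_T = 2 TeV and the SM-like inputs
# below are assumptions chosen for illustration only.
g, tW2 = 0.653, 0.30                      # SU(2)_L coupling, tan^2(theta_W)
v, mt, mW, mZ = 246.0, 173.0, 80.4, 91.2  # GeV
f, mT = 8000.0, 2000.0                    # GeV (assumed benchmark)

lam_t = math.sqrt(2.0) * mt / v           # top Yukawa in this normalization
MT2 = mT**2 + mt**2                       # Eq. (eq:MT)
MX2 = 0.5 * g**2 * f**2
MZp2 = 2.0 * g**2 * f**2 / (3.0 - tW2)    # Eq. (eq:zpmass)

delta = 3.0 / (16.0 * math.pi**2) * (
    lam_t**4 * (math.log(MT2 / mt**2) - 0.5)
    - g**4 / 8.0 * (math.log(MX2 / mW**2) - 0.5)
    - g**4 / 16.0 * (1.0 + tW2)**2 * (math.log(MZp2 / mZ**2) - 0.5)
)
print(delta)  # positive and loop-suppressed
```

The gauge-boson terms enter with opposite sign but are numerically subdominant, so the top-partner logarithm dominates and $\Delta(\hat{h})$ comes out positive and loop-suppressed.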
Requiring the electroweak vev to be $246~{\mbox{GeV}}$ and the CP-even Higgs mass to be $125~{\mbox{GeV}}$ eliminates two parameters, leaving three independent parameters. For instance, we may choose $f,t_\beta,m_T$ as the three independent parameters; any other observable can then be expressed in terms of these three parameters. In particular, the pseudo-axion mass $m_\eta$ is determined from the following hidden mass relation derived in ref.~\cite{Cheung:2018iur} \begin{align} m_\eta^2=[m_h^2-v^2\Delta_A(3-2\theta t_{2\theta}^{-1}) +v^2 A(5-2\theta t_{2\theta}^{-1})]s_\theta^{-2} \label{eq:mr2} \end{align} Here $t_{2\theta}^{-1}\equiv\frac{1}{\tan(2\theta)}, s_\theta^{-2}\equiv\frac{1}{\sin^2\theta}$, and $\theta,A,\Delta_A$ are defined by \begin{align} \theta\equiv\frac{v}{\sqrt{2}fs_\beta c_\beta} \end{align} \begin{align} A\equiv\frac{3}{16\pi^2} \left[\lambda_t^4-\frac{g^4}{8}-\frac{g^4}{16}(1+t_W^2)^2\right] \label{eq:A} \end{align} \begin{align} \Delta_A & \equiv\frac{3}{16\pi^2}\Bigg[\lambda_t^4 \ln\frac{M_T^2}{m_t^2}-\frac{g^4}{8}\ln\frac{M_X^2}{m_W^2} \nonumber \\ & -\frac{g^4}{16}(1+t_W^2)^2\ln\frac{M_{Z'}^2}{m_Z^2}\Bigg] \end{align} The basic feature of this mass relation is that the pseudo-axion mass is anti-correlated with the top partner mass. Second, the SLH is meant to be only an effective field theory valid up to some energy scale, which can be revealed by an analysis of partial-wave unitarity. This is done in ref.~\cite{Cheung:2018iur}, and the unitarity cutoff is determined to be \begin{align} \Lambda_U=\sqrt{8\pi}\times\min\{fc_\beta,fs_\beta\} \end{align} Apart from the lepton Yukawa part, the SLH Lagrangian is manifestly symmetric with respect to the exchange $\Phi_1\leftrightarrow\Phi_2$ (with the corresponding exchange of all related coefficients), therefore without loss of generality we may restrict to $t_\beta\geq 1$. The resulting formulae have the $t_\beta\leftrightarrow\frac{1}{t_\beta}$ invariance. 
Nevertheless, the lepton Yukawa Lagrangian Eq.~\eqref{eq:lly} does not share this exchange symmetry, and the $t_\beta\leftrightarrow\frac{1}{t_\beta}$ invariance could be lost. However, as long as we do not deal directly with lepton-related vertices, the $t_\beta\leftrightarrow\frac{1}{t_\beta}$ invariance violation can only come from input-parameter corrections, which are all suppressed by $\frac{v^2}{f^2}$~\cite{Cheung:2018iur}, a very small quantity given the current bound on $f$. Therefore in the following, unless otherwise specified, we will assume $t_\beta\geq 1$. (Moreover, in Section~\ref{sec:ewpt} we will show that the $t_\beta<1$ case is disfavored by electroweak precision measurements for the natural region of parameter space.) Then we can express the unitarity cutoff as \begin{align} \Lambda_U=\sqrt{8\pi}fc_\beta \end{align} and we require all particle masses to be less than $\Lambda_U$. Since $\Lambda_U$ is determined by the smaller of the triplet vevs, while $m_{Z'}$ is determined by their quadrature sum, requiring $m_{Z'}\leq\Lambda_U$ leads to an upper bound on $t_\beta$ (besides our assumption $t_\beta\geq1$) \begin{align} 1\leq t_\beta\leq\sqrt{\frac{4\pi(3-t_W^2)}{g^2}-1}\approx 8.9 \label{eq:tbrange} \end{align} Third, the parameter $M_T$ has a lower bound which follows simply from the structure of the Yukawa Lagrangian~\cite{Han:2005ru} \begin{equation} M_T\geq \sqrt{2}\frac{m_t}{v}fs_{2\beta}\approx fs_{2\beta} \label{eq:MTmin} \end{equation} where $s_{2\beta}\equiv\sin(2\beta)$. $M_T$ is also bounded from above, by either $\Lambda_U$ or the requirement that $m_\eta^2$ obtained from Eq.~\eqref{eq:mr2} be positive. 
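The numerical estimate in Eq.~\eqref{eq:tbrange} and the approximation in Eq.~\eqref{eq:MTmin} are easy to reproduce; a minimal sketch, assuming SM-like inputs $g\approx 0.653$ and $t_W^2\approx 0.30$ (the last step also uses the bound $f\gtrsim 7.5~\text{TeV}$ quoted below):

```python
import math

# Reproduce the numerical estimates in Eqs. (eq:tbrange) and (eq:MTmin).
# SM-like inputs (assumptions): g ~ 0.653, tan^2(theta_W) ~ 0.30.
g, tW2 = 0.653, 0.30
v, mt = 246.0, 173.0  # GeV

# Upper bound on tan(beta) from m_Z' <= Lambda_U = sqrt(8 pi) f c_beta
tb_max = math.sqrt(4.0 * math.pi * (3.0 - tW2) / g**2 - 1.0)
print(round(tb_max, 1))  # ~ 8.9

# sqrt(2) m_t / v ~ 1, which justifies M_T >~ f s_{2beta} in Eq. (eq:MTmin)
coeff = math.sqrt(2.0) * mt / v
print(round(coeff, 2))   # ~ 0.99

# Lower bound on M_T for f = 7.5 TeV, with s_{2beta} minimized at tb_max
s2b_min = 2.0 * tb_max / (1.0 + tb_max**2)
MT_min = coeff * 7500.0 * s2b_min  # GeV
print(round(MT_min / 1000.0, 1))   # ~ 1.7 (TeV)
```

This makes explicit why the dilepton bound on $f$ translates into a top partner mass bound of roughly $1.7~\text{TeV}$.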
Finally, from LHC searches for a $Z'$ boson in the dilepton channel~\cite{Aaboud:2017buh,Sirunyan:2018exx}, we estimate the lower bound on $f$ as~\cite{Mao:2017hpp} \begin{align} f\gtrsim 7.5~{\mbox{TeV}} \end{align} We note that, when combined with Eq.~\eqref{eq:MTmin} and Eq.~\eqref{eq:tbrange}, this also leads to a lower bound on the top partner mass of around $1.7~{\mbox{TeV}}$, which is much more stringent than constraints from top partner searches at the LHC. Remarkably, the naturalness issue can also be analyzed in a CEFT approach, as done in ref.~\cite{Cheung:2018iur}. We define the total degree of fine-tuning at a certain parameter point as \begin{align} \Delta_{\text{TOT}}=\max\{\Delta_{\text{TOT}}^{\mu^2},\Delta_{\text{TOT}}^\lambda\} \end{align} where $\Delta_{\text{TOT}}^{\mu^2},\Delta_{\text{TOT}}^\lambda$ are defined by \begin{align} \Delta_{\text{TOT}}^\lambda\equiv\left|\frac{\lambda_U}{m_h^2}\frac{\partial m_h^2}{\partial \lambda_U}\right|,\quad \Delta_{\text{TOT}}^{\mu^2}\equiv\left|\frac{\mu_U^2}{m_h^2}\frac{\partial m_h^2}{\partial \mu_U^2}\right| \end{align} Here $\lambda_U,\mu_U^2$ denote the $\lambda,\mu^2$ parameters defined at the unitarity cutoff. These definitions reflect how the IR parameters (e.g. $m_h^2$) are sensitive to the UV parameters (e.g. $\lambda_U, \mu_U^2$), and thus may serve as a measure of the degree of fine-tuning in the allowed parameter space. We may follow ref.~\cite{Cheung:2018iur} to compute the degree of fine-tuning, and find several general features. One feature which is easy to understand is that, generally speaking, smaller $f$ and $m_T$ lead to a smaller degree of fine-tuning. \begin{figure}[ht] \begin{center} \includegraphics[width=2.2in]{DeltaTOT8.pdf} \end{center} \caption{\label{fig:dft8} Density plot of $\text{Log}\Delta_\text{TOT}$ in the $m_\eta-m_T$ plane for $f=8~{\mbox{TeV}}$. 
$\text{Log}$ means $\log_{10}$.} \end{figure} In Figure~\ref{fig:dft8} we present the density plot of $\text{Log}\Delta_\text{TOT}$ in the $m_\eta-m_T$ plane for $f=8~{\mbox{TeV}}$. Only the colored region is allowed by the various constraints. From the figure it is clear that the parameter region favored by naturalness considerations features a small $m_T$, with $m_\eta$ around $500~{\mbox{GeV}}$. A light $\eta$, with a mass less than $2m_t$, is unfortunately disfavored. \subsection{Fermion Mass Diagonalization and Flavor Assumption} Fermion mass diagonalization has been studied in refs.~\cite{Han:2005ru,delAguila:2011wk}. In the lepton sector, the fermion mass matrices can be diagonalized by the following field rotations: \begin{align} \begin{pmatrix} N_{Ln} \\ \nu_{Ln} \end{pmatrix} \rightarrow \begin{pmatrix} c_\delta & s_\delta \\ s_\delta & -c_\delta \end{pmatrix} \begin{pmatrix} N_{Ln} \\ \nu_{Ln} \end{pmatrix}, \nonumber \\ \quad n=1,2,3,\quad \delta\equiv\frac{v}{\sqrt{2}ft_\beta} \end{align} \begin{align} \begin{pmatrix} e_L \\ \mu_L \\ \tau_L \end{pmatrix} \rightarrow U_l \begin{pmatrix} e_L \\ \mu_L \\ \tau_L \end{pmatrix},\quad \begin{pmatrix} e_R \\ \mu_R \\ \tau_R \end{pmatrix} \rightarrow W_l \begin{pmatrix} e_R \\ \mu_R \\ \tau_R \end{pmatrix} \end{align} where $U_l,W_l$ are both $3\times 3$ unitary matrices. In this work, for simplicity, we will assume that $U_l,W_l$ are both identity matrices. This leads to a simplification of some Feynman rules associated with the heavy neutrino $N$. 
In the quark sector, we first perform field rotations in the right-handed sector as follows \begin{align} u_{R3}^1=\frac{-\lambda_2^t s_\beta t_R+\lambda_1^t c_\beta T_R} {\sqrt{\lambda_1^{t2}c_\beta^2+\lambda_2^{t2}s_\beta^2}},\quad u_{R3}^2=\frac{\lambda_1^t c_\beta t_R+\lambda_2^t s_\beta T_R} {\sqrt{\lambda_1^{t2}c_\beta^2+\lambda_2^{t2}s_\beta^2}} \end{align} \begin{align} d_{R1}^1=\frac{-\lambda_2^d s_\beta d_R+\lambda_1^d c_\beta D_R} {\sqrt{\lambda_1^{d2}c_\beta^2+\lambda_2^{d2}s_\beta^2}},\quad d_{R1}^2=\frac{\lambda_1^d c_\beta d_R+\lambda_2^d s_\beta D_R} {\sqrt{\lambda_1^{d2}c_\beta^2+\lambda_2^{d2}s_\beta^2}} \end{align} \begin{align} d_{R2}^1=\frac{-\lambda_2^s s_\beta s_R+\lambda_1^s c_\beta S_R} {\sqrt{\lambda_1^{s2}c_\beta^2+\lambda_2^{s2}s_\beta^2}},\quad d_{R2}^2=\frac{\lambda_1^s c_\beta s_R+\lambda_2^s s_\beta S_R} {\sqrt{\lambda_1^{s2}c_\beta^2+\lambda_2^{s2}s_\beta^2}} \end{align} For simplicity, the phenomenological studies in this work will be carried out under the following flavor assumptions on the quark Yukawa Lagrangian Eq.~\eqref{eq:lqy} \begin{equation} \lambda_u^{Tu}=\lambda_u^{Tc}=\lambda_u^{12}=\lambda_u^{21} =\lambda_u^{31}=\lambda_u^{32}=0 \label{eq:fa1} \end{equation} \begin{equation} \lambda_b^D=\lambda_b^S=\lambda_b^1=\lambda_b^2=0 \label{eq:fa2} \end{equation} These flavor assumptions turn off all generation-crossing quark flavor transitions and lead to a trivial CKM matrix, i.e. $V_{CKM}=\textbf{1}_{3\times 3}$, which is not realistic. Nevertheless, in this paper we are concerned with the direct production of new physics particles at high energy colliders rather than with quark flavor observables. Also, in the parameter region of interest, the phenomenology is not sensitive to the flavor assumptions adopted here, as long as the $\lambda$'s in Eq.~\eqref{eq:fa1} and Eq.~\eqref{eq:fa2}, which characterize the generation-crossing quark flavor-changing effects, are small. 
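These right-handed field redefinitions must be unitary so that the kinetic terms stay canonically normalized; a quick numeric sketch of this consistency check, where the Yukawa values and $\beta$ are arbitrary illustrative assumptions:

```python
import math

# Check that the right-handed rotation, e.g. (u_R3^1, u_R3^2) in terms of
# (t_R, T_R), is an orthogonal matrix.  lam1, lam2, beta are arbitrary
# illustrative values (assumptions), not model fits.
lam1, lam2, beta = 0.8, 1.1, 0.45
cb, sb = math.cos(beta), math.sin(beta)
N = math.sqrt(lam1**2 * cb**2 + lam2**2 * sb**2)

R = [[-lam2 * sb / N, lam1 * cb / N],   # u_R3^1 row
     [ lam1 * cb / N, lam2 * sb / N]]   # u_R3^2 row

dot = lambda a, b: sum(x * y for x, y in zip(a, b))
assert abs(dot(R[0], R[0]) - 1.0) < 1e-12  # rows normalized
assert abs(dot(R[1], R[1]) - 1.0) < 1e-12
assert abs(dot(R[0], R[1])) < 1e-12        # rows orthogonal
```

The same check applies verbatim to the $(d_{R1}^1,d_{R1}^2)$ and $(d_{R2}^1,d_{R2}^2)$ rotations with the corresponding $\lambda$'s.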
With the above flavor assumptions, it is then straightforward to show that, up to $\mathcal{O}(\frac{v}{f})$, after the right-handed field rotations we only need to perform the following field rotations in the left-handed sector to diagonalize the quark mass matrices \begin{align} \begin{pmatrix} t_L \\ T_L \end{pmatrix} & \rightarrow \begin{pmatrix} 1 & -\delta_t \\ \delta_t & 1 \end{pmatrix} \begin{pmatrix} t_L \\ T_L \end{pmatrix} \\ \begin{pmatrix} d_L \\ D_L \end{pmatrix} & \rightarrow \begin{pmatrix} 1 & -\delta_{Dd} \\ \delta_{Dd} & 1 \end{pmatrix} \begin{pmatrix} d_L \\ D_L \end{pmatrix} \\ \begin{pmatrix} s_L \\ S_L \end{pmatrix} & \rightarrow \begin{pmatrix} 1 & -\delta_{Ss} \\ \delta_{Ss} & 1 \end{pmatrix} \begin{pmatrix} s_L \\ S_L \end{pmatrix} \label{eq:dss} \end{align} In the above equations, the field rotation parameters $\delta_t,\delta_{Dd},\delta_{Ss}$ can be expressed using $f,\beta$ and the corresponding heavy fermion mass as follows\footnote{Our expression for $\delta_t,\delta_{Dd},\delta_{Ss}$ differs from the corresponding expression in Eq.(2.63) of ref.~\cite{delAguila:2011wk}. The expressions of $\delta_t,\delta_{Dd},\delta_{Ss}$ given by ref.~\cite{delAguila:2011wk} are not consistent with their counterparts in ref.~\cite{Han:2005ru}. Our calculation agrees with ref.~\cite{Han:2005ru}.} \begin{align} \delta_t & =\frac{v}{2\sqrt{2}fs_\beta c_\beta}\left( s_\beta^2-c_\beta^2\pm\sqrt{1-8\frac{m_t^2}{v^2}\frac{f^2}{M_T^2}s_\beta^2 c_\beta^2}\right) \\ \delta_{Dd} & =-\frac{v}{2\sqrt{2}fs_\beta c_\beta}\left( s_\beta^2-c_\beta^2\pm\sqrt{1-8\frac{m_d^2}{v^2}\frac{f^2}{M_D^2}s_\beta^2 c_\beta^2}\right) \\ \delta_{Ss} & =-\frac{v}{2\sqrt{2}fs_\beta c_\beta}\left( s_\beta^2-c_\beta^2\pm\sqrt{1-8\frac{m_s^2}{v^2}\frac{f^2}{M_S^2}s_\beta^2 c_\beta^2}\right) \label{eq:signchoice} \end{align} Note that in the above equations both the plus and minus signs in front of the square root give possible solutions, leading to a total of eight sign combinations. 
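The light-quark limits of these rotation parameters, quoted in Eq.~\eqref{eq:sc12} below, can be verified numerically; in the following sketch the values of $f$, $t_\beta$ and $M_D$ are illustrative assumptions:

```python
import math

# Check the m_d -> 0 limit of delta_Dd for both sign choices in front of
# the square root in Eq. (eq:signchoice).  f, t_beta, M_D are illustrative
# assumptions; v is the electroweak vev.
v, f, tb = 246.0, 8000.0, 3.0
beta = math.atan(tb)
sb, cb = math.sin(beta), math.cos(beta)

def delta_Dd(md, MD, sign):
    root = math.sqrt(1.0 - 8.0 * (md / v)**2 * (f / MD)**2 * sb**2 * cb**2)
    return -v / (2.0 * math.sqrt(2.0) * f * sb * cb) * (sb**2 - cb**2 + sign * root)

plus  = delta_Dd(0.0, 2000.0, +1)   # expect -v t_beta / (sqrt(2) f)
minus = delta_Dd(0.0, 2000.0, -1)   # expect  v / (sqrt(2) f t_beta)

assert abs(plus + v * tb / (math.sqrt(2.0) * f)) < 1e-12
assert abs(minus - v / (math.sqrt(2.0) * f * tb)) < 1e-12
```

The heavy-fermion mass drops out in this limit, which is why the two values depend only on $v$, $f$ and $t_\beta$.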
When we refer to the sign combination in these equations, we will list the signs in the order $\delta_t,\delta_{Dd},\delta_{Ss}$, e.g. $(+,+,+),(+,+,-)$, etc. $m_d,m_s,M_D,M_S$ denote the masses of $d,s,D,S$, respectively. In the following we will simply neglect the small $m_d,m_s$; then the expressions of $\delta_{Dd},\delta_{Ss}$ become identical, apart from a possible sign difference in front of the square root. We then obtain the simple expressions \begin{align} \delta_{Dd}^{+}=\delta_{Ss}^{+}=-\frac{vt_\beta}{\sqrt{2}f},\quad \delta_{Dd}^{-}=\delta_{Ss}^{-}=\frac{v}{\sqrt{2}ft_\beta} \label{eq:sc12} \end{align} where the superscripts indicate the sign choice for the corresponding rotation parameter. The rotation parameters $\delta_t,\delta_{Dd},\delta_{Ss}$ are important since they appear directly in the coefficients of various interaction vertices which affect the $\eta$ phenomenology, as we will see. \subsection{Lagrangian in the Mass Basis} We are now prepared to present the Lagrangian in the mass basis which is relevant for the investigation of $\eta$ phenomenology. However, let us first note that there is a subtle issue regarding the diagonalization in the bosonic sector. After EWSB, it can be shown that the CP-odd sector scalar kinetic matrix in terms of the $\eta,\zeta,\chi,\omega$ fields is not canonically normalized. Also, there exist ``unexpected'' two-point vector-scalar transition terms like $Z^\mu\partial_\mu\eta$ after expanding the covariant derivative terms of the scalar fields. Therefore, a further field rotation (including a proper gauge-fixing) is needed to diagonalize the bosonic sector. This subtle issue had been overlooked for a long time in the literature, and was only remedied in a recent paper~\cite{He:2017jjx}. 
In ref.~\cite{He:2017jjx}, an expression for the fraction of the mass eigenstate $\eta$ field contained in the $\eta,\zeta,\chi,\omega$ fields originally introduced in the parametrization Eq.~\eqref{eq:theta}, Eq.~\eqref{eq:hdoub}, Eq.~\eqref{eq:kdoub} was obtained, valid to all orders in $\xi\equiv\frac{v}{f}$, as follows (we collect the four fraction values into a four-component column vector $\Upsilon$) \begin{align} \Upsilon=\begin{pmatrix} c_{\gamma+\delta}^{-1} \\ \\ -c_{\gamma+\delta}^{-1}(s_\delta^2 t_\beta-s_\gamma^2 t_\beta^{-1}) \\ \\ \frac{v}{\sqrt{2}f}c_{\gamma+\delta}^{-1}(c_{2\delta}t_\beta-c_{2\gamma}t_\beta^{-1}) \\ \\ \frac{1}{2}c_{\gamma+\delta}^{-1}(s_{2\delta}t_\beta+s_{2\gamma}t_\beta^{-1}) \end{pmatrix} \label{eq:upsilon} \end{align} where \begin{align} \gamma\equiv\frac{vt_\beta}{\sqrt{2}f},\quad \delta\equiv\frac{v}{\sqrt{2}ft_\beta} \end{align} The $\Upsilon$ vector is involved in the derivation of all $\eta$-related mass eigenstate vertices. In particular, from the expression of $\Upsilon$ we see that there is an $\mathcal{O}(\xi)$ component of the mass eigenstate $\eta$ contained in $\chi$. This has the following consequences. 
If we parameterize the mass eigenstate $ZH\eta$ vertex as follows \begin{align} \mathcal{L}_{ZH\eta} & =c^{as}_{ZH\eta}Z^\mu(\eta\partial_\mu H-H\partial_\mu\eta) \nonumber \\ & +c^{s}_{ZH\eta}Z^\mu(\eta\partial_\mu H+H\partial_\mu\eta) \label{eq:lzhe} \end{align} where $c^{as}_{ZH\eta}$ denotes the coefficient of the anti-symmetric $ZH\eta$ vertex, and $c^{s}_{ZH\eta}$ denotes the coefficient of the symmetric $ZH\eta$ vertex, then it is shown in ref.~\cite{He:2017jjx} that \begin{align} c^{as}_{ZH\eta}=-\frac{g}{4\sqrt{2}c_W^3 t_{2\beta}}\xi^3+\mathcal{O}(\xi^5) \label{eq:casv} \end{align} \begin{widetext} \begin{align} c^{s}_{ZH\eta}=\frac{g}{\sqrt{2}c_W t_{2\beta}}\xi +\frac{g}{24\sqrt{2}c_W s_{2\beta}}\left[\frac{8}{s_{2\beta}t_{2\beta}}+3c_{2\beta}\left(8+\frac{6}{c_W^2}-\frac{1}{c_W^4}\right)\right]\xi^3 +\mathcal{O}(\xi^5) \label{eq:csv} \end{align} \end{widetext} We see that the anti-symmetric $ZH\eta$ vertex only shows up from $\mathcal{O}(\xi^3)$, in contrast to the results presented in refs.~\cite{Kilian:2004pp,Kilian:2006eh}, which claimed the existence of an anti-symmetric $ZH\eta$ vertex at $\mathcal{O}(\xi)$ due to the lack of an appropriate diagonalization in the bosonic sector. This subtle issue of diagonalization in the bosonic sector also has an impact on the $\eta$ coupling to fermions. For instance, if we consider the expansion of $\epsilon_{ijk}\Phi_1^i\Phi_2^j$, with the help of the expression for the $\Upsilon$ vector in Eq.~\eqref{eq:upsilon}, we find the following result for the neutral component \begin{align} \epsilon_{ijk}\Phi_1^i\Phi_2^j\supset -if\begin{pmatrix} 0 \\ fs_\beta c_\beta s_{\gamma+\delta}+\frac{1}{\sqrt{2}}c_{\gamma+\delta}H \\ 0 \end{pmatrix} \end{align} An important message from this is that $\epsilon_{ijk}\Phi_1^i\Phi_2^j$ does not contain any fraction of the mass eigenstate $\eta$ field, to all orders in $\xi$. 
Therefore, from Eq.~\eqref{eq:lly} we immediately conclude that $\eta$ does not couple to a pair of charged leptons to all orders in $\xi$. This point has been overlooked by previous studies~\cite{Cheung:2008zu,Kim:2011bv} which rely on $\eta\rightarrow\tau\tau$. In the following let us collect the other mass eigenstate vertices that are relevant for $\eta$ phenomenology, to the first nontrivial order in $\xi$. In the Yukawa sector, we have the following couplings of $H$ and $\eta$ to a pair of fermions: \begin{widetext} \begin{enumerate} \item $H$ and $\eta$ couplings to lepton sector: \begin{align} \mathcal{L}_{LY} & \supset-\sum_{n=1}^3\frac{m_{ln}}{\sqrt{2}fs_\beta c_\beta t_{\gamma+\delta}} H\bar{l}_{Rn}l_{Ln}+\sum_{n=1}^3\frac{M_{Nn}}{\sqrt{2}ft_\beta}H\bar{N}_{Rn}\nu_{Ln} \nonumber \\ & -i\sum_{n=1}^3\frac{M_{Nn}}{\sqrt{2}ft_\beta}c_{\gamma+\delta}\eta\bar{N}_{Rn}N_{Ln} -i\sum_{n=1}^3\frac{M_{Nn}}{\sqrt{2}ft_\beta}s_{\gamma+\delta}\eta\bar{N}_{Rn}\nu_{Ln}+\text{h.c.} \label{eq:lYu} \end{align} \item $H$ and $\eta$ couplings to up-type quark sector: \begin{align} \mathcal{L}_{QY} & \supset-\frac{m_u}{v}H\bar{u}_R u_L-\frac{m_c}{v}H\bar{c}_R c_L \nonumber \\ & -\frac{m_t}{v}H\bar{t}_R t_L+\frac{m_t}{v}\left(\frac{\sqrt{2}v}{ft_{2\beta}}+\delta_t \right)H\bar{t}_R T_L +\frac{M_T}{v}\delta_t H\bar{T}_R t_L+\frac{m_t^2}{vM_T}H\bar{T}_R T_L \nonumber \\ & -i\frac{m_t}{v}\delta_t\eta\bar{t}_R t_L-i\frac{m_t}{v}\eta\bar{t}_R T_L +i\frac{M_T}{v}\left(\frac{v^2}{2f^2}+\delta_t^2\right)\eta\bar{T}_R t_L +i\frac{M_T}{v}\delta_t\eta\bar{T}_R T_L \nonumber \\ &+\text{h.c.} \end{align} \item $H$ and $\eta$ couplings to down-type quark sector: \begin{align} \mathcal{L}_{QY} & \supset-\frac{m_b}{v}H\bar{b}_R b_L \nonumber \\ & -\frac{m_d}{v}H\bar{d}_R d_L+\frac{m_d}{v}\left(-\frac{\sqrt{2}v}{ft_{2\beta}}+\delta_{Dd}\right)H\bar{d}_R D_L +\frac{M_D}{v}\delta_{Dd}H\bar{D}_R d_L+\frac{m_d^2}{vM_D}H\bar{D}_R D_L \nonumber \\ & -\frac{m_s}{v}H\bar{s}_R 
s_L+\frac{m_s}{v}\left(-\frac{\sqrt{2}v}{ft_{2\beta}}+\delta_{Ss}\right)H\bar{s}_R S_L +\frac{M_S}{v}\delta_{Ss}H\bar{S}_R s_L+\frac{m_s^2}{vM_S}H\bar{S}_R S_L \nonumber \\ & -i\frac{m_d}{v}\delta_{Dd}\eta\bar{d}_R d_L-i\frac{m_d}{v}\eta\bar{d}_R D_L +i\frac{M_D}{v}\left(\frac{v^2}{2f^2}+\delta_{Dd}^2\right)\eta\bar{D}_R d_L +i\frac{M_D}{v}\delta_{Dd}\eta\bar{D}_R D_L \nonumber \\ & -i\frac{m_s}{v}\delta_{Ss}\eta\bar{s}_R s_L-i\frac{m_s}{v}\eta\bar{s}_R S_L +i\frac{M_S}{v}\left(\frac{v^2}{2f^2}+\delta_{Ss}^2\right)\eta\bar{S}_R s_L +i\frac{M_S}{v}\delta_{Ss}\eta\bar{S}_R S_L \nonumber \\ & +\text{h.c.} \label{eq:dYu} \end{align} \end{enumerate} \end{widetext} In the above equations, $m_{ln},n=1,2,3$ denote the masses of $e,\mu,\tau$ leptons, $M_{Nn},n=1,2,3$ denote the masses of the three heavy neutral leptons $N_n$. $m_u,m_c$ denote the masses of the $u,c$ quarks, respectively. $\eta$ can also be a decay product of the heavy fermions $N,T,D,S$, therefore we also list the relevant Lagrangian for the heavy fermion gauge interaction which enters the heavy fermion decays \begin{widetext} \begin{align} \mathcal{L}_{\text{matter}} & \supset\frac{gv}{2ft_\beta}W_\mu^+\bar{N}_{Lm}\gamma^\mu l_{Lm} -\frac{gv}{2\sqrt{2}c_W ft_\beta}Z_\mu\bar{N}_{Lm}\gamma^\mu\nu_{Lm} \nonumber \\ & -\frac{g\delta_t}{\sqrt{2}}W_\mu^+\bar{T}_L\gamma^\mu b_L -\frac{g\delta_t}{2c_W}Z_\mu\bar{T}_L\gamma^\mu t_L \nonumber \\ & -\frac{g\delta_{Dd}}{\sqrt{2}}W_\mu^+\bar{u}_L\gamma^\mu D_L +\frac{g\delta_{Dd}}{2c_W}Z_\mu\bar{d}_L\gamma^\mu D_L -\frac{g\delta_{Ss}}{\sqrt{2}}W_\mu^+\bar{c}_L\gamma^\mu S_L +\frac{g\delta_{Ss}}{2c_W}Z_\mu\bar{s}_L\gamma^\mu S_L \nonumber \\ & +\text{h.c.} \end{align} A further interesting possibility is that $\eta$ might come from the decay of a $Z'$ boson. 
The $Z'$-related parts of interaction Lagrangian are listed below: \begin{enumerate} \item $Z'$ couplings to leptons: \begin{align} \mathcal{L}_{\text{matter}} & \supset g\frac{1-t_W^2}{2\sqrt{3-t_W^2}}\bar{l}_{Ln}\gamma^\mu l_{Ln}Z'_\mu -g\frac{t_W^2}{\sqrt{3-t_W^2}}\bar{l}_{Rn}\gamma^\mu l_{Rn}Z'_\mu \nonumber \\ & +g\frac{1-t_W^2}{2\sqrt{3-t_W^2}}\bar{\nu}_{Ln}\gamma^\mu\nu_{Ln}Z'_\mu -g\frac{1}{\sqrt{3-t_W^2}}\bar{N}_{Ln}\gamma^\mu N_{Ln}Z'_{\mu} \label{eq:zpl} \end{align} \item $Z'$ couplings to 3rd generation quarks: \begin{align} \mathcal{L}_{\text{matter}} & \supset -g\frac{3-2t_W^2}{3\sqrt{3-t_W^2}}\bar{T}_L\gamma^\mu T_LZ'_\mu +g\frac{2t_W^2}{3\sqrt{3-t_W^2}}\bar{T}_R\gamma^\mu T_RZ'_\mu \nonumber \\ & +g\frac{3+t_W^2}{6\sqrt{3-t_W^2}}\bar{t}_L\gamma^\mu t_LZ'_\mu +g\frac{2t_W^2}{3\sqrt{3-t_W^2}}\bar{t}_R\gamma^\mu t_RZ'_\mu \nonumber \\ & +g\frac{3+t_W^2}{6\sqrt{3-t_W^2}}\bar{b}_L\gamma^\mu b_LZ'_\mu -g\frac{t_W^2}{3\sqrt{3-t_W^2}}\bar{b}_R\gamma^\mu b_RZ'_\mu \label{eq:zp3q} \end{align} \item $Z'$ couplings to 1st and 2nd generation quarks: \begin{align} \mathcal{L}_{\text{matter}} & \supset g\frac{\sqrt{3-t_W^2}}{3}\bar{D}_L\gamma^\mu D_LZ'_\mu -g\frac{t_W^2}{3\sqrt{3-t_W^2}}\bar{D}_R\gamma^\mu D_RZ'_\mu \nonumber \\ & -g\frac{\sqrt{3-t_W^2}}{6}\bar{d}_L\gamma^\mu d_LZ'_\mu -g\frac{t_W^2}{3\sqrt{3-t_W^2}}\bar{d}_R\gamma^\mu d_RZ'_\mu \nonumber \\ & -g\frac{\sqrt{3-t_W^2}}{6}\bar{u}_L\gamma^\mu u_LZ'_\mu +g\frac{2t_W^2}{3\sqrt{3-t_W^2}}\bar{u}_R\gamma^\mu u_RZ'_\mu \nonumber \\ & +\text{terms with}\quad u\rightarrow c, d\rightarrow s, D\rightarrow S \label{eq:zp12q} \end{align} \item $Z'$ couplings to bosons (relevant for $Z'$ decay): \begin{align} \mathcal{L}_{\text{gauge}} & \supset -ig\sqrt{3-t_W^2}(1-t_W^2)\frac{v^2}{8f^2}\Big\{ (\partial_\mu Z'_\nu)(W^{-\mu}W^{+\nu}-W^{+\mu}W^{-\nu}) \nonumber \\ & +Z'^\mu[(\partial_\mu W_\nu^+)W^{-\nu}-(\partial_\mu W_\nu^-)W^{+\nu}] +Z'^\nu[(\partial_\mu W_\nu^-)W^{+\mu}-(\partial_\mu W_\nu^+)W^{-\mu}]\Big\} 
\end{align} \begin{align} \mathcal{L}_{gk} & \supset -\frac{\sqrt{2}gv}{\sqrt{3-t_W^2}ft_{2\beta}}Z'^\mu(\eta\partial_\mu H-H\partial_\mu\eta) -\frac{\sqrt{3-t_W^2}gv}{\sqrt{2}ft_{2\beta}}Z'^\mu(\eta\partial_\mu H+H\partial_\mu\eta) \nonumber \\ & -\frac{g^2 v}{2c_W^2\sqrt{3-t_W^2}}\eta Z'^\mu\frac{(Y_\mu^{0\dagger}+Y_\mu^0)}{\sqrt{2}} +\frac{g^2 vc_{2W}}{2c_W^3\sqrt{3-t_W^2}}HZ'^\mu Z_\mu \end{align} \end{enumerate} \end{widetext} \section{Symmetric VSS Vertices} \label{sec:vss} In the derivation of the SLH Lagrangian in the mass basis we obtain the $ZH\eta$ vertex in the form of Eq.~\eqref{eq:lzhe}, which contains two parts: the antisymmetric part ($Z^\mu(\eta\partial_\mu H-H\partial_\mu\eta)$) and the symmetric part ($Z^\mu(\eta\partial_\mu H+H\partial_\mu\eta)$)\footnote{The Hermiticity requirement on the Lagrangian does not forbid the symmetric part. $Z_\mu,H,\eta$ are all real fields. $\partial_\mu$ does not lead to an additional minus sign under Hermitian conjugation because in quantum field theory the $x^\mu$'s are labels, not operators. This is not to be confused with the situation in ordinary quantum mechanics.}. An antisymmetric VSS vertex often appears in models based on a linearly-realized scalar sector, such as the usual two-Higgs-doublet model (2HDM). It is natural to ask whether symmetric VSS vertices can have any physical effect. We note that in a Lorentz-invariant $ZH\eta$ vertex, the $\partial_\mu$ may act on any of the three fields ($Z^\mu,H,\eta$). However, because a total derivative term $\partial_\mu(Z^\mu H\eta)$ has no physical effect, we expect at most two independent contributions from the interaction of one vector field with two scalar fields. 
If symmetric VSS vertices are present in a general theory and lead to distinct physical effects, it would mean that a vector field can interact with two scalar fields in a manner different from the usually expected antisymmetric pattern, which may further reveal interesting features of the enlarged scalar sector. Let us first note that the symmetric VSS Lagrangian $Z^\mu(\eta\partial_\mu H+H\partial_\mu\eta)$ can be written as \begin{eqnarray} Z_\mu\partial^\mu(H\eta) \label{eqn:ZdelhA} \end{eqnarray} via the Leibniz rule and is therefore (via integration by parts) equivalent to \begin{eqnarray} -(\partial^\mu Z_\mu)(H\eta) \label{eqn:delZhA} \end{eqnarray} in the Lagrangian formulation of the theory. A reflective reader might at this moment wonder whether terms like Eq.~\eqref{eqn:delZhA} indeed contribute to S-matrix elements if canonical quantization is adopted. Note that what matters in canonical quantization is the interaction Hamiltonian in the interaction picture (denoted $H_I^{\text{int}}$), and if $Z^\mu$ is a massive spin-1 field, then the corresponding interaction picture field operator $Z_{I}^\mu$ (the subscript ``$I$'' denotes interaction picture) will automatically satisfy~\cite{Weinberg:1995mt} \begin{eqnarray} \partial_\mu Z_{I}^\mu=0 \label{eqn:delZ0} \end{eqnarray} It is tempting to conclude that terms like Eq.~\eqref{eqn:delZhA} cannot contribute to S-matrix elements due to Eq.~\eqref{eqn:delZ0}. Actually this is not quite correct. 
The correct procedure from the classical Lagrangian to the interaction Hamiltonian in the interaction picture $H_I^{\text{int}}$ is to first identify appropriate canonical coordinates and their conjugate momenta, then perform a Legendre transformation to obtain the Hamiltonian and express it in terms of canonical coordinates and their conjugate momenta, then promote the canonical variables to field operators satisfying appropriate canonical commutation relations, and finally split the Hamiltonian into a free part and an interaction part and replace the Heisenberg-picture quantities with their interaction-picture counterparts~\cite{Weinberg:1995mt}. If this procedure is strictly followed, we find that only the spatial components of $Z^\mu$ can be treated as independent canonical coordinates, while $Z^0$ is dependent, because whether we start with Eq.~\eqref{eqn:ZdelhA} or Eq.~\eqref{eqn:delZhA} the derivative of the Lagrangian with respect to $\dot{Z}_0$ cannot be made to satisfy canonical commutation relations. To avoid the appearance of $\partial^0 Z_0$ in the Hamiltonian we can start with Eq.~\eqref{eqn:ZdelhA}, and then the problem turns out to be what has been treated in Section 7.5 of Ref.~\cite{Weinberg:1995mt}. Using the results there we see that Eq.~\eqref{eqn:ZdelhA} leads to a term \begin{eqnarray} -Z^\mu_I\partial_\mu(h_I A_I) \label{eqn:Hint} \end{eqnarray} in the interaction Hamiltonian in the interaction picture (barring a Lorentz non-covariant term which is not shown here). This leads to the vertex Feynman rule \begin{eqnarray} -k^\mu \end{eqnarray} where $k^\mu$ is the $Z$ momentum flowing into the vertex. This vertex Feynman rule can also be derived from Eq.~\eqref{eqn:ZdelhA} via the path-integral method. 
Notice that it is not legitimate to perform integration by parts in the interaction-picture Hamiltonian $H_I^{\text{int}}$ to obtain \begin{eqnarray} (\partial_\mu Z^\mu_I)(h_I A_I) \end{eqnarray} from Eq.~\eqref{eqn:Hint}\footnote{More specifically, integration by parts for the spatial components of $H_I^{\text{int}}$ should be fine if the fields are assumed to satisfy certain boundary conditions, which is usually the case. However, integration by parts for the temporal component of $H_I^{\text{int}}$ is problematic, since in the expression for the scattering operator $S=T\exp(-i\int_{-\infty}^{+\infty}H_I^{\text{int}}dt)$ the temporal integration is actually twisted by the time-ordering. No such problem exists if we adopt the path-integral method.}. The appearance of $\partial_\mu Z^\mu$ in Eq.~\eqref{eqn:delZhA} is reminiscent of covariant gauge-fixing in gauge field theories. Eq.~\eqref{eqn:delZhA} is not gauge-invariant; nevertheless, for the moment let us suppose that it can be deduced from a gauge-invariant operator. Because we are dealing with quantum field theory, it is important not to confuse the situation with that of classical field theory. In a classical gauge field theory a gauge-fixing condition (such as the Landau gauge condition $\partial_\mu Z^\mu=0$) is employed so that the solutions of the equations of motion are required to also satisfy the gauge-fixing condition. In quantum field theory all classical field configurations, regardless of whether they satisfy the classical equations of motion, are to be integrated over in the path integral. The usually-adopted covariant gauge, the general $R_\xi$ gauge, actually corresponds to a Gaussian smearing of a class of covariant gauge conditions and does not strictly force the classical field to satisfy a simple gauge-fixing equation. However, the limit $\xi\rightarrow 0$ makes the gauge-fixing functional act like a delta function imposing the Landau gauge condition $\partial_\mu Z^\mu=0$~\cite{Weinberg:1995mt}. 
Therefore it is natural to guess that in the Landau gauge, symmetric VSS vertices do not contribute to the S-matrix of the theory. However, we should not forget that in the Landau gauge it is necessary to take into account the Goldstone contribution to the S-matrix, as well as the associated ghost contribution when we go beyond tree level in perturbation theory. This observation suggests that at tree level, processes involving symmetric VSS vertices can be seen as purely Goldstone-mediated. \begin{figure}[ht] \begin{center} \includegraphics[width=2.6in]{hA_production.pdf} \end{center} \caption{Associated production of $h$ and $A$. \label{fig:hAproduction}} \end{figure} Physical effects of antisymmetric VSS vertices have been well studied in the literature. For example, in the 2HDM, a benchmark process which embodies the effect of antisymmetric VSS vertices is \begin{eqnarray} f+\bar{f}\rightarrow A+h \end{eqnarray} where $A$ and $h$ denote a generic CP-odd and CP-even 2HDM Higgs boson, respectively. The corresponding Feynman diagram is shown in Fig.~\ref{fig:hAproduction} in unitarity gauge. Now suppose we replace the antisymmetric VSS $ZhA$ vertex in Fig.~\ref{fig:hAproduction} by a completely symmetric VSS $ZhA$ vertex. It is obvious that if the $Z$ boson is on-shell, then the amplitude vanishes, since for an on-shell massive vector boson the momentum and polarization vector satisfy $p\cdot\epsilon=0$. It is then natural to proceed to the case in which the $Z$ boson is off-shell. The amplitude in this case can be examined from two perspectives. First, we can perform the calculation in unitarity gauge. In this gauge, the result of dotting the $Z$ momentum $p$ at the $ZhA$ vertex into the s-channel propagator is again proportional to the $Z$ momentum $p$ at the $Zf\bar{f}$ vertex. It is then obvious that only the axial-vector part of the $Zf\bar{f}$ vertex contributes to the amplitude, with a contribution proportional to the fermion mass $m_f$. 
Alternatively, we may perform the calculation in Landau gauge ($\xi=0$), in which the diagram shown in Fig.~\ref{fig:hAproduction} does not contribute to the amplitude; nevertheless, we need to take into account the s-channel Goldstone-mediated amplitude, which again gives a contribution proportional to the fermion mass $m_f$. Although usually $f$ is a light fermion with negligible mass effects, we might be interested in the case that $f$ is heavy with important mass effects, e.g. the top quark. If in this case the symmetric VSS vertex could lead to physical effects, we would seem to produce a paradox in the SLH. In the SLH there exists a symmetric $ZH\eta$ vertex; however, if we consider a linearly-realized SLH as a UV completion, then it cannot lead to symmetric VSS vertices and hence there will be no related physical effects. Since the usual nonlinearly-realized SLH can be related to a linearly-realized SLH via an appropriate field redefinition, the above discussion seems to violate the field redefinition invariance of the S-matrix element\footnote{The radial mode does not help since it does not have the required CP property.}. We can turn the argument around and use the field redefinition invariance to infer the existence of an additional contribution in the SLH which also contributes to the $f\bar{f}\rightarrow H\eta$ process, such that the field redefinition invariance is maintained.
In fact, if we examine the Yukawa part of the SLH Lagrangian, we would find the following four-point contact vertex ($m_f$ denotes the mass of $f$) \begin{align} \mathcal{L}\supset i\frac{2\sqrt{2}g_A}{ft_{2\beta}}\frac{m_f}{v}H\eta\bar{f}\gamma^5 f \end{align} Here $g_A$ is the axial coupling of the fermion $f$ which also appears in its interaction to $Z$ boson and the associated Goldstone $\chi$ as \begin{align} \mathcal{L}\supset \frac{g}{2c_W}Z^\mu\bar{f}\gamma_\mu(g_V+g_A\gamma^5)f +i\frac{2g_A m_f}{v}\bar{f}\gamma^5 f\chi \end{align} Now if we compute the amplitude for $f\bar{f}\rightarrow H\eta$ in $R_\xi$ gauge, we need to include three contributions: s-channel $Z$ exchange, s-channel $\chi$ exchange, and $ffH\eta$ contact interaction, as shown in Fig.~\ref{fig:ffHeta}. \begin{figure}[ht] \begin{center} \includegraphics[width=3.5in]{ffHeta.pdf} \end{center} \caption{Feynman diagrams in the SLH for $f\bar{f}\rightarrow H\eta$ in $R_\xi$ gauge. \label{fig:ffHeta}} \end{figure} The amplitudes corresponding to these three diagrams are computed to be (from left to right): \begin{align} i\mathcal{M}_I & =\frac{\sqrt{2}}{vft_{2\beta}}\frac{-\xi m_Z^2}{q^2-\xi m_Z^2}2g_A m_f\bar{v}(p_{\bar{f}})\gamma^5 u(p_f) \\ i\mathcal{M}_{II} & =\frac{\sqrt{2}}{vft_{2\beta}}\frac{q^2}{q^2-\xi m_Z^2}2g_A m_f\bar{v}(p_{\bar{f}})\gamma^5 u(p_f) \\ i\mathcal{M}_{III} & =-\frac{\sqrt{2}}{vft_{2\beta}}2g_A m_f\bar{v}(p_{\bar{f}})\gamma^5 u(p_f) \end{align} Here $p_f$ and $p_{\bar{f}}$ are the four-momenta of $f$ and $\bar{f}$, respectively and $q\equiv p_f+p_{\bar{f}}$. When we add the three contributions, we find \begin{align} i\mathcal{M}_I+i\mathcal{M}_{II}+i\mathcal{M}_{III}=0 \end{align} which is exactly what we would expect from field redefinition invariance. Moreover, we see that the $Z$ and $\chi$ contributions add to be gauge-independent, while the contact interaction contribution itself is gauge-independent. 
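The $\xi$-independence of the total amplitude can be checked symbolically. The minimal sketch below (using Python's sympy) strips off the common spinor factor $\frac{\sqrt{2}}{vft_{2\beta}}2g_A m_f\,\bar{v}(p_{\bar{f}})\gamma^5 u(p_f)$ and verifies that the three remaining prefactors cancel for arbitrary $q^2$ and $\xi$:

```python
# Check that the xi-dependent prefactors of the three f fbar -> H eta
# amplitudes cancel identically; the common spinor factor is stripped off.
import sympy as sp

q2, xi, mZ2 = sp.symbols('q2 xi mZ2', positive=True)

M_I   = -xi*mZ2/(q2 - xi*mZ2)   # s-channel Z exchange (xi-dependent part)
M_II  = q2/(q2 - xi*mZ2)        # s-channel Goldstone (chi) exchange
M_III = -1                      # ffHeta contact interaction

assert sp.simplify(M_I + M_II + M_III) == 0  # vanishes for any q^2, xi
```

Since $M_I+M_{II}=(q^2-\xi m_Z^2)/(q^2-\xi m_Z^2)=1$ exactly cancels the contact term, the cancellation holds off-shell and for every value of the gauge parameter, as stated above.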
Here we would like to mention a further subtle point related to the symmetric VSS vertex. It might still be somewhat counter-intuitive that the contribution from the symmetric $ZH\eta$ vertex is cancelled by the contribution from the $ffH\eta$ contact vertex, since the former contribution should know the position of the $Z$ pole and therefore vanish for an on-shell $Z$ boson, while the latter certainly does not ``feel'' the $Z$ pole. To illustrate this issue, we can include the effect of the $Z$ boson width $\Gamma_Z$ so that the $Z$ boson propagator in the unitarity gauge is written as \begin{align} \frac{-g^{\mu\nu}+\frac{q^\mu q^\nu}{m_Z^2}}{q^2-m_Z^2+im_Z\Gamma_Z} \label{eq:prop} \end{align} When this propagator is dotted into $q_\nu$ coming from the symmetric VSS Feynman rule, it will vanish at $q^2=m_Z^2$, which seems quite plausible given our previous argument that the symmetric VSS vertex does not contribute to processes in which the related vector boson is on-shell. However, this immediately leads to the paradoxical situation that near the on-shell region the field redefinition invariance is again violated, since the contribution from the $ffH\eta$ contact vertex certainly does not know about the $Z$ pole. The resolution of this paradox lies in the treatment of the particle width in its propagator. The naive treatment in Eq.~\eqref{eq:prop} is actually not quite correct and will in general lead to results that violate the Ward-Takahashi identities. A proper treatment can be made by e.g. employing the complex mass scheme, which properly retains gauge invariance. The final result is, of course, that no exotic structure appears near the $Z$ pole and the field redefinition invariance is maintained. \section{Constraints from Electroweak Precision Observables} \label{sec:ewpt} As discussed in Section~\ref{sec:slh}, in the study of the pseudo-axion phenomenology there are eight sign combinations for the rotation parameters $\delta_t,\delta_{Dd},\delta_{Ss}$.
Moreover, when the lepton sector is relevant, either $t_\beta\geq1$ or $t_\beta<1$ could be possible, leading to further complication. Nevertheless, as will be shown in this section, the number of possibilities is greatly reduced if we require \begin{enumerate} \item The parameter space under consideration is favored by naturalness considerations and thus embodies (to some extent) the original motivation of the SLH model. \item The parameter space under consideration is allowed by electroweak precision measurements. \end{enumerate} As discussed in Section~\ref{sec:slh}, the first requirement points to the region characterized by a small top partner mass. In the SLH, currently the lower bound on the top partner mass is derived from Eq.~\eqref{eq:MTmin}, where $f$ is stringently constrained by dilepton resonance searches. Constraints from direct searches for top partner production are not as competitive at the moment. For given $f$, a small top partner mass could be obtained by requiring a large $t_\beta$ (or $t_\beta^{-1}$ for $t_\beta<1$), which is in turn bounded by unitarity considerations. To summarize, the first requirement points to the region characterized by a small $f$ and large $t_\beta$ (or $t_\beta^{-1}$ for $t_\beta<1$). As to the second requirement, in the present work we consider the following electroweak observables: \begin{enumerate} \item The $W$ boson mass $m_W$. \item $R$ observables measured at the $Z$-pole: $R_b,R_c,R_e,R_{\mu},R_{\tau}$, which are defined by \begin{align} R_b & \equiv\Gamma(b\bar{b})/\Gamma(\text{had}),\quad R_c\equiv\Gamma(c\bar{c})/\Gamma(\text{had}),\nonumber \\ R_l & \equiv\Gamma(\text{had})/\Gamma(l^+l^-),\quad l=e,\mu,\tau \end{align} in which $\Gamma(\text{had})$ denotes the total hadronic width of the $Z$ boson, and $\Gamma(b\bar{b}), \Gamma(c\bar{c}), \Gamma(l^+l^-)$ denote the $Z$ boson partial widths into the $b\bar{b},c\bar{c},l^+l^-$ channels.
\end{enumerate} To set up the calculation we choose the fine structure constant $\alpha_{\text{em}}\equiv\frac{e^2}{4\pi}$ (defined at the $Z$-pole), the Fermi constant $G_F$ and the $Z$ boson mass $m_Z$ as input parameters. Expressed in terms of SM quantities, we have the tree-level relations \begin{align} e=g_{SM}s_{W,SM},\quad \frac{G_F}{\sqrt{2}}=\frac{g_{SM}^2}{8m_{W,SM}^2} \end{align} \begin{align} m_Z^2=\frac{g_{SM}^2 v_{SM}^2}{4c_{W,SM}^2},\quad m_{W,SM}^2=\frac{1}{4}g_{SM}^2 v_{SM}^2 \end{align} These relations are modified in the SLH to \begin{align} e=gs_W,\quad \frac{G_F}{\sqrt{2}}=\frac{g^2}{8m_{W,SLH}^2}\left(1 -\frac{v^2}{4f^2 t_\beta^2}\right)^2 \end{align} \begin{align} & m_Z^2=\frac{g^2 v^2}{4c_W^2}+\frac{g^2}{32c_W^2}\left[c_W^{-2}(3-t_W^2) -\frac{4}{3}s_\beta^{-2}c_\beta^{-2}\right]\frac{v^4}{f^2} \\ & m_{W,SLH}^2=\frac{1}{4}g^2 v^2+\frac{1}{24}g^2(3-s_\beta^{-2}c_\beta^{-2})\frac{v^4}{f^2} \end{align} Here we note that in the above equations, as in Section~\ref{sec:slh}, $g,v,s_W$ represent quantities in the SLH and are thus different from the SM quantities $g_{SM},v_{SM},s_{W,SM}$. From the above two sets of relations we may derive \begin{align} \frac{m_{W,SLH}^2}{m_{W,SM}^2}=1+\frac{1}{8}\left(1-t_{W,SM}^2+\frac{1-c_{W,SM}^2}{2c_{W,SM}^2-1} \frac{4}{t_\beta^2}\right)\frac{v_{SM}^2}{f^2} \end{align} \begin{align} \frac{s_W^2}{s_{W,SM}^2}=1-\frac{1}{8}\left(1-t_{W,SM}^2+\frac{c_{W,SM}^2}{2c_{W,SM}^2-1} \frac{4}{t_\beta^2}\right)\frac{v_{SM}^2}{f^2} \label{eq:swc} \end{align} To calculate the $R$ observables in the SLH we also need the modified $Z$ couplings to light fermions. Although the corrections relative to the SM come in at order $\frac{v^2}{f^2}$, they are still relevant since the $R$ observables have been measured to a few per mille precision. In such a case the diagonal entries in the rotation matrices in Eq.~\eqref{eq:dss} should be understood as $1-\frac{1}{2}\delta_{Dd}^2$ and $1-\frac{1}{2}\delta_{Ss}^2$, respectively.
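To get a feel for the size of the $W$-mass shift implied by the ratio $m_{W,SLH}^2/m_{W,SM}^2$ derived above, the expression can be evaluated numerically. In the sketch below, $s_W^2$, $v$ and $m_W$ are standard reference values, while the point $f=8~\text{TeV}$, $t_\beta=9$ is an illustrative assumption rather than a fit result:

```python
# Illustrative numeric estimate of the SLH W-mass shift from the ratio
# m_{W,SLH}^2/m_{W,SM}^2 derived above. s_W^2, v, m_W are standard
# reference values; f and t_beta are an example point, not a fit.
import math

sW2 = 0.2312               # weak mixing angle (SM reference value)
cW2 = 1.0 - sW2
tW2 = sW2 / cW2
v   = 246.0                # GeV, electroweak vev
mW  = 80.358               # GeV, SM prediction quoted in the text
f, t_beta = 8000.0, 9.0    # example SLH point (GeV, dimensionless)

bracket  = 1.0 - tW2 + (1.0 - cW2) / (2.0*cW2 - 1.0) * 4.0 / t_beta**2
ratio2   = 1.0 + bracket * v**2 / (8.0 * f**2)   # m_{W,SLH}^2 / m_{W,SM}^2
delta_mW = mW * (math.sqrt(ratio2) - 1.0)        # GeV

print(f"Delta m_W = {delta_mW*1e3:.2f} MeV")
```

For $f$ of order $8~\text{TeV}$ the shift is only a few MeV, i.e. comparable to the SM theory uncertainty quoted below, which anticipates why the $m_W$ constraint is weak at large $f$.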
Then the modified $Z$ couplings to light fermions in the SLH can be written as \begin{align} g'_{L,Z,f} & =g_{L,Z,f}+\delta_Z g_{L,Z',f}, \nonumber \\ g'_{R,Z,f} & =g_{R,Z,f}+\delta_Z g_{R,Z',f}, \nonumber \\ & \text{for } f=u,c,b,e,\mu,\tau \label{eq:zc1} \end{align} In the above equations, $\delta_Z$ is the $\mathcal{O}\left(\frac{v^2}{f^2}\right)$ $Z-Z'$ mixing angle, appearing in the mixing relation \begin{align} Z'=Z'_m+\delta_Z Z_m,\quad Z=Z_m-\delta_Z Z'_m \end{align} Here $Z_m,Z'_m$ denote the final mass eigenstates after the $\mathcal{O}\left(\frac{v^2}{f^2}\right)$ rotation while $Z,Z'$ denote the states before the $\mathcal{O}\left(\frac{v^2}{f^2}\right)$ rotation, as defined via Eq.~\eqref{eq:gbmixing}. In the process of gauge boson mass diagonalization, $\delta_Z$ is computed to be \begin{align} \delta_Z=-\frac{(1-t_W^2)\sqrt{3-t_W^2}}{8c_W}\frac{v^2}{f^2} \label{eq:deltaZ} \end{align} In Eq.~\eqref{eq:zc1}, $g_{L,Z,f}=\frac{g}{c_W}(T_3^f-Q_f s_W^2), g_{R,Z,f}=-\frac{g}{c_W}Q_f s_W^2$ are the leading-order coefficients of the Lagrangian terms $\bar{f}_L\gamma^\mu f_L Z_\mu,\bar{f}_R\gamma^\mu f_R Z_\mu$, where $T_3^f,Q_f$ denote the third component of the isospin and the electric charge of $f$, respectively. $g_{L,Z',f},g_{R,Z',f}$ are the leading-order coefficients of the Lagrangian terms $\bar{f}_L\gamma^\mu f_L Z'_\mu,\bar{f}_R\gamma^\mu f_R Z'_\mu$, which are given in Eq.~\eqref{eq:zpl}, Eq.~\eqref{eq:zp3q} and Eq.~\eqref{eq:zp12q}. $g'_{L,Z,f},g'_{R,Z,f}$ in Eq.~\eqref{eq:zc1} denote the corresponding coefficients of $\bar{f}_L\gamma^\mu f_L Z_\mu,\bar{f}_R\gamma^\mu f_R Z_\mu$ to $\mathcal{O}\left(\frac{v^2}{f^2}\right)$ precision.
For $f=d$ the modified $Z$ couplings in the SLH turn out to be \begin{align} g'_{L,Z,d} & =g_{L,Z,d}+\delta_Z g_{L,Z',d}+\delta_{Dd}^2 (g_{L,Z,D}-g_{L,Z,d}), \nonumber \\ g'_{R,Z,d} & =g_{R,Z,d}+\delta_Z g_{R,Z',d} \label{eq:zc2} \end{align} Obviously the additional correction is due to the left-handed $D-d$ mixing. The corresponding formulae for $f=s$ can be obtained by the replacement $d\rightarrow s,D\rightarrow S$. $g_{L,Z,D},g_{L,Z,S}$ are the leading-order coefficients of the Lagrangian terms $\bar{D}_L\gamma^\mu D_L Z_\mu,\bar{S}_L\gamma^\mu S_L Z_\mu$: \begin{align} g_{L,Z,D}=g_{L,Z,S}=\frac{1}{3}gs_W t_W \end{align} Now we have all the SLH couplings necessary to calculate the $R$ observables. It should be noted that in the above coupling formulae, $s_W,c_W,t_W$ are quantities in the SLH and are therefore different from their SM counterparts $s_{W,SM},c_{W,SM},t_{W,SM}$; see Eq.~\eqref{eq:swc}. Therefore, the modification of the $Z$ couplings to light fermions relative to the SM is caused by three factors: $Z-Z'$ mixing, left-handed $D-d,S-s$ mixing, and the correction of the weak mixing angle. A $95\%$ CL constraint can be obtained in the $f-t_\beta$ plane by performing a $\chi^2$-fit of the five $R$ observables. The $\chi^2$ is defined by \begin{align} \chi^2=\sum_{f=b,c,e,\mu,\tau} \frac{(R_{f,SLH}-R_f)^2}{\delta_{R_f}^2+\delta_{R_{f,SM}}^2} \end{align} \begin{figure*}[ht] \includegraphics[width=2.2in]{ppp.pdf} \includegraphics[width=2.2in]{ppm.pdf} \includegraphics[width=2.2in]{pmm.pdf} \\ \vspace{0.5cm} \includegraphics[width=2.2in]{mpp.pdf} \includegraphics[width=2.2in]{mpm.pdf} \includegraphics[width=2.2in]{mmm.pdf} \caption{\label{fig:ewpt}Constraints from $m_W$ and $R$ observables on the $f-t_\beta$ plane.
Upper left: $t_\beta\geq 1,\delta_{Dd}^+,\delta_{Ss}^+$, upper middle: $t_\beta\geq 1,\delta_{Dd}^+,\delta_{Ss}^-$ or $t_\beta\geq 1,\delta_{Dd}^-,\delta_{Ss}^+$, upper right: $t_\beta\geq 1,\delta_{Dd}^-,\delta_{Ss}^-$, lower left: $t_\beta\leq 1,\delta_{Dd}^+,\delta_{Ss}^+$, lower middle: $t_\beta\leq 1,\delta_{Dd}^+,\delta_{Ss}^-$ or $t_\beta\leq 1,\delta_{Dd}^-,\delta_{Ss}^+$, lower right: $t_\beta\leq 1,\delta_{Dd}^-,\delta_{Ss}^-$. See the text for a detailed description.} \vspace{0.5cm} \end{figure*} In the above equation, $R_f$ denotes the experimental value and $\delta_{R_f}$ the associated experimental uncertainty, while $R_{f,SM}$ is the SM theory prediction and $\delta_{R_{f,SM}}$ the associated theory uncertainty. Their values are listed in Table~\ref{table:rq}~\cite{Erler:2018rpp}. \begin{table}[t!] \begin{tabular}{|c|c|c|} \hline Quantity & Value & Standard Model \\ \hline $R_e$ & $20.804\pm0.050$ & $20.737\pm0.010$ \\ \hline $R_\mu$ & $20.785\pm0.033$ & $20.737\pm0.010$ \\ \hline $R_\tau$ & $20.764\pm0.045$ & $20.782\pm0.010$ \\ \hline $R_b$ & $0.21629\pm0.00066$ & $0.21582\pm0.00002$ \\ \hline $R_c$ & $0.1721\pm0.0030$ & $0.17221\pm0.00003$ \\ \hline \end{tabular} \caption{Experimental values and the SM predictions of the $R$ observables.} \label{table:rq} \end{table} As to the constraint from the $W$ boson mass, we treat it separately and consider the two most precise measurements~\cite{Erler:2018rpp} \begin{align} m_W &=80.387\pm0.016~{\mbox{GeV}}\quad \text{(Tevatron)} \\ m_W &=80.370\pm0.019~{\mbox{GeV}}\quad \text{(ATLAS)} \end{align} while we note that the SM prediction for $m_W$ is~\cite{Erler:2018rpp} \begin{align} m_{W,SM}=80.358\pm0.004~{\mbox{GeV}} \end{align} In Figure~\ref{fig:ewpt} the results of the electroweak precision analysis of $m_W$ and the $R$ observables are shown.
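As a baseline, the $\chi^2$ defined above can be evaluated at the SM point (i.e. setting $R_{f,SLH}\rightarrow R_{f,SM}$) directly from the numbers in Table~\ref{table:rq}; a minimal sketch:

```python
# Baseline chi^2 of the SM itself, computed from the experimental values,
# experimental errors, SM predictions and SM theory errors in the R table.
R = {
    'e':   (20.804,  0.050,   20.737,  0.010),
    'mu':  (20.785,  0.033,   20.737,  0.010),
    'tau': (20.764,  0.045,   20.782,  0.010),
    'b':   (0.21629, 0.00066, 0.21582, 0.00002),
    'c':   (0.1721,  0.0030,  0.17221, 0.00003),
}

chi2 = sum((sm - exp)**2 / (d_exp**2 + d_sm**2)
           for exp, d_exp, sm, d_sm in R.values())
print(f"chi^2(SM) = {chi2:.2f} for 5 observables")
```

The SM point gives $\chi^2\approx 4.3$ for five observables, a perfectly acceptable fit; the SLH regions shown in Figure~\ref{fig:ewpt} are those whose $\chi^2$ does not exceed this baseline by more than the $68\%$ or $95\%$ CL thresholds.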
To clarify the situation we present the results according to whether $t_\beta\geq 1$ and the sign combination of the rotation parameters $\delta_{Dd},\delta_{Ss}$ (see Eq.~\eqref{eq:sc12}). At first sight there are eight possibilities in total; however, it is immediately recognized that $\delta_{Dd}^+,\delta_{Ss}^-$ and $\delta_{Dd}^-,\delta_{Ss}^+$ make no difference in terms of constraints in the $f-t_\beta$ plane, reducing the number of possibilities to six. Therefore we obtain the six panels in Figure~\ref{fig:ewpt}, each panel showing one possibility as described in the caption. For all the panels, the green and yellow regions correspond to parameter points that are allowed by the $\chi^2$-fit of the $R$ observables at $68\%$ and $95\%$ CL, respectively. These allowed regions do not exhibit a $t_\beta\rightarrow t_\beta^{-1}$ symmetry (for example, the allowed regions in the upper right panel and the lower left panel still differ under the transformation $t_\beta\rightarrow t_\beta^{-1}$), since in the computation of the $R$ observables the correction of $s_W^2$ relative to its SM value has to be taken into account, as was pointed out previously. When $f$ is larger than about $17~{\mbox{TeV}}$ there will be a lower theoretical bound (from the mass relation) on $t_\beta$ or $t_\beta^{-1}$ which is larger than $1$, corresponding to the white region at large $f$ and small $t_\beta$ or $t_\beta^{-1}$ in each panel. The $2\sigma$ constraints from the $m_W$ measurements are simply implemented by requiring \begin{align} |m_{W,SLH}-m_W|<2\sqrt{\delta_{m_W}^2+\delta_{m_{W,SM}}^2} \end{align} In the above equation $m_W$ denotes the experimentally measured $W$ boson mass, and $\delta_{m_W}$ and $\delta_{m_{W,SM}}$ denote the associated experimental and theoretical uncertainties, respectively. We superimpose the constraint boundaries on the six plots as blue or red lines, representing constraints from the Tevatron or ATLAS measurements, respectively.
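As a sanity check of this $2\sigma$ criterion, the SM prediction itself passes against both measurements quoted above; a minimal sketch using only numbers from the text:

```python
# Check the 2-sigma m_W criterion for the SM prediction against the two
# measurements quoted in the text (all numbers in GeV).
import math

mW_SM, d_SM = 80.358, 0.004   # SM prediction and its theory uncertainty

measurements = {
    'Tevatron': (80.387, 0.016),
    'ATLAS':    (80.370, 0.019),
}

allowed = {
    label: abs(mW_SM - mW_exp) < 2.0 * math.sqrt(d_exp**2 + d_SM**2)
    for label, (mW_exp, d_exp) in measurements.items()
}
print(allowed)   # the SM passes at the 2-sigma level in both cases
```

For the Tevatron value the margin is narrow ($|80.358-80.387|=0.029$ against a $2\sigma$ band of about $0.033$), which is why the blue Tevatron boundary is the more restrictive line in Figure~\ref{fig:ewpt}.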
For all these $m_W$ constraint boundary lines, the regions on the right side of the lines are allowed at the $2\sigma$ level. \begin{figure*}[ht] \includegraphics[width=2.2in]{GametaA.pdf} \includegraphics[width=2.2in]{BretapmmA.pdf} \includegraphics[width=2.2in]{BretammmA.pdf} \caption{\label{fig:etadA}Total width $\Gamma$ and decay branching ratios of $\eta$ in Case A. } \end{figure*} \begin{figure*}[ht] \includegraphics[width=2.2in]{GametaB.pdf} \includegraphics[width=2.2in]{BretapmmB.pdf} \includegraphics[width=2.2in]{BretammmB.pdf} \caption{\label{fig:etadB}Total width $\Gamma$ and decay branching ratios of $\eta$ in Case B. } \end{figure*} As can be seen from Figure~\ref{fig:ewpt}, if $t_\beta<1$, then the region favored by naturalness considerations is disfavored by constraints from both the $R$ observables and the $W$ boson mass measurements, regardless of the sign combination of the rotation parameters $\delta_{Dd},\delta_{Ss}$. If $t_\beta\geq 1$, then the $W$ boson mass measurement does not constrain the parameter region favored by naturalness considerations. However, in this case constraints from the $R$ observables are significant when either of the rotation parameters $\delta_{Dd},\delta_{Ss}$ adopts the plus sign in Eq.~\eqref{eq:sc12}. This is because the choice of the plus sign leads to a large $t_\beta$ enhancement of the rotation parameter and therefore a larger deviation of the $Z$ couplings to the corresponding fermion. Although the lower bound on $f$ has been pushed to around $7.5~{\mbox{TeV}}$ by LHC dilepton resonance searches, the $R$ observable constraints still force us to avoid this $t_\beta$ enhancement, and consequently the only possibility left is $\delta_{Dd}^-,\delta_{Ss}^-$ with $t_\beta\geq 1$.
This result has important consequences for the pseudo-axion phenomenology, since the sign combination of $\delta_{Dd},\delta_{Ss}$ determines how $\eta$ interacts with the $D,S$ quarks, which in turn influences the decay and production of the $\eta$ particle, as will be discussed in more detail in the next section. In previous literature on the SLH model the $t_\beta\geq 1$ and $t_\beta<1$ cases are usually not distinguished, since a $t_\beta\rightarrow t_\beta^{-1}$ symmetry is tacitly assumed and then only the $t_\beta\geq 1$ case is considered. However, strictly speaking this symmetry is only valid when the leptonic sector is not considered. Here we have established clearly that, if we consider the region favored by naturalness considerations, the $t_\beta<1$ case is disfavored by measurements of $m_W$ and the $R$ observables. This is closely related to the breakdown of the $t_\beta\rightarrow t_\beta^{-1}$ symmetry in the lepton sector. Moreover, in previous literature~\cite{Han:2005ru,Reuter:2012sd}, the sign combination of the rotation parameters $\delta_{Dd},\delta_{Ss}$ was simply \emph{assumed} to be (effectively) $\delta_{Dd}^-,\delta_{Ss}^-$, in order to suppress contributions to the electroweak precision observables. Here we also firmly establish this choice based on constraints from the $R$ observables, combined with $m_W$ and naturalness considerations, keeping in mind that the constraint on $f$ has been pushed to around $7.5~{\mbox{TeV}}$ by updated LHC constraints. \section{Production and Decay of the Pseudo-Axion} \label{sec:prod} With the preparation made in the previous three sections we are now ready to calculate the production and decay of the pseudo-axion. We will restrict ourselves to the region $2m_t\lesssim m_\eta\lesssim 1~{\mbox{TeV}}$, which is favored by naturalness considerations. All the related partial width formulae are given in Appendix~\ref{sec:pwf}.
\subsection{Decay of the Pseudo-Axion} \begin{figure*}[ht] \includegraphics[width=2.2in]{BretapmmC.pdf} \includegraphics[width=2.2in]{BretammmC.pdf} \\ \includegraphics[width=2.2in]{BretapmmD.pdf} \includegraphics[width=2.2in]{BretammmD.pdf} \caption{\label{fig:etadCD}Decay branching ratios of $\eta$ in Case C and Case D. } \end{figure*} For $\eta$ in the mass range $2m_t\lesssim m_\eta\lesssim 1~{\mbox{TeV}}$, it can always decay into the $t\bar{t},gg,\gamma\gamma$ channels. (The $WW,ZZ,Z\gamma$ channels are also possible and may have branching ratios comparable to $\gamma\gamma$. However, from a detection viewpoint it is preferable to consider further decays into leptons in these channels, leading to an additional suppression by the leptonic branching. For simplicity we will not consider these channels further in this work.) $\eta\rightarrow ZH$ is highly suppressed, since the antisymmetric $ZH\eta$ vertex is suppressed to $\mathcal{O}\left(\frac{v^3}{f^3}\right)$ while the symmetric $ZH\eta$ vertex does not contribute, as pointed out in Section~\ref{sec:vss}. If the new fermions $D,S,N$ are heavy enough that they cannot appear as decay products of $\eta$, then we are left with only the $t\bar{t},gg,\gamma\gamma$ channels. Nevertheless, we should keep in mind that when $f$ and $m_\eta$ are given, the partial widths of these channels still depend on the masses of the additional heavy quarks $T,D,S$ which do not appear as decay products of $\eta$. First, the $\eta\rightarrow t\bar{t}$ decay is controlled by the rotation parameter $\delta_t$, which in turn depends on the top partner mass. The loop-induced decays $\eta\rightarrow gg,\gamma\gamma$ have contributions from both the top quark and the heavy quark partners $T,D,S$.
The top quark contribution again depends on $\delta_t$, while the $T,D,S$ contributions depend on the $\eta T\bar{T}, \eta D\bar{D},\eta S\bar{S}$ couplings, which are proportional to the corresponding rotation parameters times the quark partner mass. Experimentally, the current lower bound for the light-flavor quark partners $D$ and $S$ is around $700~{\mbox{GeV}}$~\cite{Sirunyan:2017lzl}. Thus for a heavy enough $\eta$ the $\eta\rightarrow Dd,Ss$ channels are still possible if the mass of $D$ or $S$ is close to the lower bound. To be definite, we will consider four benchmark scenarios: \begin{enumerate} \item Case A: $f=8~{\mbox{TeV}}, m_T=m_D=m_S=3~{\mbox{TeV}}, \text{all }m_N>m_\eta$. \item Case B: $f=8~{\mbox{TeV}}, m_\eta=500~{\mbox{GeV}}, m_D=m_S=m_T, \text{all }m_N>m_\eta$. \item Case C: $f=8~{\mbox{TeV}}, m_T=3~{\mbox{TeV}},m_D=700~{\mbox{GeV}},m_S=1~{\mbox{TeV}},\text{all }m_N=150~{\mbox{GeV}}$. \item Case D: $f=8~{\mbox{TeV}}, m_\eta=500~{\mbox{GeV}}, m_D=m_S=m_T, \text{all }m_N=150~{\mbox{GeV}}$. \end{enumerate} For each case, there are two allowed sign combinations for the rotation parameters $(\delta_t,\delta_{Dd},\delta_{Ss})$: $(+,-,-)$ and $(-,-,-)$. Other choices are excluded by electroweak precision measurements, if we are only interested in the parameter region favored by naturalness considerations. Therefore in the following we will use Case A$+$, Case A$-$, etc. to indicate the sign choice of $\delta_t$ in each case (see Eq.~\eqref{eq:signchoice}). \begin{figure*}[ht] \includegraphics[width=2.2in]{GamT.pdf} \includegraphics[width=2.2in]{BrTp.pdf} \includegraphics[width=2.2in]{BrTm.pdf} \caption{\label{fig:Tdecay}Total width $\Gamma$ and decay branching ratios of $T$ in the SLH. We assume $f=8~{\mbox{TeV}}$ and $m_\eta=500~{\mbox{GeV}}$.
Note that in the considered mass range $T\rightarrow bX, tY$ channels do not open.} \end{figure*} \begin{figure*}[htbp] \includegraphics[width=2.2in]{etascanmetapmm.pdf} \includegraphics[width=2.2in]{etascanmetammm.pdf} \\ \includegraphics[width=2.2in]{etascanmTpmm.pdf} \includegraphics[width=2.2in]{etascanmTmmm.pdf} \caption{\label{fig:ggFeta}Gluon fusion production cross section of $\eta$ as a function of $m_\eta$ (upper panel, assuming $m_T=m_D=m_S=3~{\mbox{TeV}}$) or $m_T$ (lower panel, assuming $m_D=m_S=m_T$ and $m_\eta=500~{\mbox{GeV}}$). The sign combination of $(\delta_t,\delta_{Dd}, \delta_{Ss})$ is indicated in each plot. } \end{figure*} The total width and branching ratios of $\eta$ are shown in Figure~\ref{fig:etadA} and Figure~\ref{fig:etadB} for Case A and Case B, respectively. In these two cases, the additional fermion partners $D,S,N$ are not light enough to appear as decay products of $\eta$ and therefore we are left with the standard $\eta\rightarrow t\bar{t},gg,\gamma\gamma$ channels. From the figures it is clear that $\eta$ can be viewed as a narrow-width particle; however, the width is not small enough to give rise to displaced vertices. In both Case A and Case B, and for both sign choices, $\eta$ decays almost $100\%$ to $t\bar{t}$, with only very small branching ratios to $gg$ ($\mathcal{O}(0.1\%)$) and $\gamma\gamma$ ($\mathcal{O}(0.001\%)$). Here (and in the following) all the partial widths are calculated at LO, but it is obvious that the inclusion of higher-order radiative corrections has little effect on the whole picture. From a detection point of view this situation is somewhat unfortunate, since the dominant channel $t\bar{t}$ suffers from a huge background at hadron colliders, while the clean channel $\gamma\gamma$ has an extremely small branching ratio. It is natural to ask how the situation will change if any of $D,S,N$ is light enough, such that exotic channels like $\eta\rightarrow NN, N\nu, Dd, Ss$ could open.
This is embodied in Cases C and D, and we show the corresponding branching ratio plots in Figure~\ref{fig:etadCD}. Nevertheless, the exotic channels contribute at most a few percent in terms of branching ratio, and are therefore of little use for $\eta$ detection even if any of $D,S,N$ is light enough. This can be understood from the interaction Lagrangian containing the $\eta Dd, \eta Ss$ and $\eta N\nu,\eta NN$ vertices. The $\eta Dd$ vertex is shown in Eq.~\eqref{eq:dYu}. When $\eta\rightarrow Dd$ is open, $\frac{M_D}{v}$ is an $\mathcal{O}(1)$ quantity, and therefore from Eq.~\eqref{eq:dYu} we may recognize that the $\eta Dd$ coupling can be considered as relatively suppressed by $\mathcal{O}(\frac{v}{f})$ compared to the $\eta t\bar{t}$ vertex. This leads to the suppression of the $\eta\rightarrow Dd$ channel. The $\eta N\nu$ coupling is relatively suppressed by $\mathcal{O}(\frac{v}{f})$ compared to the $\eta NN$ coupling, as can be seen from Eq.~\eqref{eq:lYu}. However, when $\eta\rightarrow NN$ is open, $M_{Nn}$ can be at most $\mathcal{O}(v)$. Moreover, the $\eta NN$ coupling suffers from a $t_\beta$ suppression. Therefore, numerically the $\eta\rightarrow NN$ channel is much suppressed compared to the $\eta\rightarrow t\bar{t}$ channel. \subsection{Decay of the Top Partner} \begin{figure*}[ht] \includegraphics[width=2.2in]{etattscanmetap.pdf} \includegraphics[width=2.2in]{etattscanmetam.pdf} \caption{\label{fig:tteta}Production cross section of $pp\rightarrow t\bar{t}\eta$ as a function of $m_\eta$. We assume $f=8~{\mbox{TeV}}$ and $m_T=3~{\mbox{TeV}}$.} \end{figure*} The pseudo-axion may appear as a decay product of some additional heavy particles in the model. Among the additional particles in the SLH, only $Z'$ and $T$ are closely related to EWSB, and naturalness favors small $Z'$ and $T$ masses within theoretical constraints. In this subsection we consider the decay of the top partner.
The possibility of $T\rightarrow t+a$, where $T$ and $a$ denote the top partner and a pNGB, has been investigated in the literature in the context of composite Higgs models~\cite{Bizot:2018tds,Serra:2015xfa,Kearney:2013cca}. Here we focus on the situation in the SLH. To be specific, we fix $f=8~{\mbox{TeV}}$ and $m_\eta=500~{\mbox{GeV}}$ and then plot the total width and branching ratios of $T$ as a function of the top partner mass $m_T$ in Figure~\ref{fig:Tdecay}. Both the $\delta_t^+$ and $\delta_t^-$ possibilities are considered. Note that when $m_T$ is also given, then according to the mass relation $t_\beta$ can be calculated, which in turn determines the total width and branching ratios. The relation ${\mathrm{Br}}(T\rightarrow bW)= 2{\mathrm{Br}}(T\rightarrow tH)=2{\mathrm{Br}}(T\rightarrow tZ)$ holds to a good approximation. In the $\delta_t^+$ case, ${\mathrm{Br}}(T\rightarrow t\eta)$ is small (not larger than $10\%$ for $m_T>2~{\mbox{TeV}}$) and decreases as $m_T$ increases. In the $\delta_t^-$ case, ${\mathrm{Br}}(T\rightarrow t\eta)$ is sizable and becomes dominant (larger than $50\%$) for $m_T\gtrsim 2.2~{\mbox{TeV}}$. Another interesting and important feature concerns the total width of $T$. In the $\delta_t^-$ case, the total width is around $20~{\mbox{GeV}}$, which makes the narrow width approximation valid to high precision. In the $\delta_t^+$ case, the total width increases with $m_T$. For $m_T\approx 3.5~{\mbox{TeV}}$ the total width increases to around $500~{\mbox{GeV}}$. In this case $\Gamma/M\lesssim 20\%$ and the narrow width approximation still roughly holds, if the phase space is large enough. The width will, however, have an appreciable impact on the invariant mass distribution of the $T$ decay products. \subsection{Direct Production of the Pseudo-Axion} The pseudo-axion can be directly produced via the gluon fusion mechanism at hadron colliders. The particles running in the loop now contain $t,T,D,S$.
In the calculation of the production cross section\footnote{For simplicity, in this work, all the cross sections are calculated at LO using MadGraph5\_aMC@NLO~\cite{Alwall:2014hca} and FeynRules~\cite{Alloul:2013bka}. We use the MSTW2008lo68cl PDF~\cite{Martin:2009iq}. For $2\rightarrow 1$ production, the renormalization and factorization scale is taken to be the rest mass of the s-channel resonance. Otherwise, the renormalization and factorization scale is taken to be the sum of the transverse masses of the final-state particles (before resonance decay) divided by two.}, we consider the $14~{\mbox{TeV}}$ (HL-)LHC, the $27~{\mbox{TeV}}$ HE-LHC and also the $100~{\mbox{TeV}}$ FCC-hh. The production cross sections are plotted in Figure~\ref{fig:ggFeta} as a function of $m_\eta$ or $m_T$, with other parameters described in the figure caption. Although the production cross section may reach $\mathcal{O}(\text{pb})$ in certain regions of parameter space, unfortunately, once combined with the $\eta$ branching ratios, $\eta$ turns out to be very difficult to detect in the gluon fusion channel. The dominant $t\bar{t}$ decay mode suffers from a huge background, while the $\gamma\gamma$ decay mode has only an $\mathcal{O}(10^{-5})$ branching ratio. \begin{figure*}[ht] \includegraphics[width=2.2in]{TjTTscanmTp.pdf} \includegraphics[width=2.25in]{TjTTscanmTm.pdf} \caption{\label{fig:TjTT}Production cross section of $pp\rightarrow T\bar{T}$ and $pp\rightarrow Tj$ as a function of $m_T$. We assume $f=8~{\mbox{TeV}}$ and $m_\eta=500~{\mbox{GeV}}$. For $pp\rightarrow Tj$, the contribution from $pp\rightarrow\bar{T}j$ is also included.} \end{figure*} Another way to directly produce $\eta$ is through the $pp\rightarrow t\bar{t}\eta$ channel. We plot the production cross section as a function of $m_\eta$ in Figure~\ref{fig:tteta}, for three center-of-mass energies and for both $\delta_t^+$ and $\delta_t^-$.
Here we fix $f=8~{\mbox{TeV}}$ and $m_T=3~{\mbox{TeV}}$, and therefore for given $m_\eta$, $t_\beta$ (and $\delta_t^\pm$) is also determined. The cross section in the $\delta_t^-$ case is much smaller than that in the $\delta_t^+$ case. Even in the $\delta_t^+$ case the detection of the $pp\rightarrow t\bar{t}\eta$ process is still very difficult. For instance, if we take $m_\eta=450~{\mbox{GeV}}$, then in the $\delta_t^+$ case the cross section reaches only about $0.6~{\mbox{fb}}$ at $14~{\mbox{TeV}}$ and $100~{\mbox{fb}}$ at $100~{\mbox{TeV}}$. When we consider the $\eta\rightarrow t\bar{t}$ decay, there exists the SM four-top production as an irreducible background, with a cross section of about $10~{\mbox{fb}}$ at $14~{\mbox{TeV}}$ and $5000~{\mbox{fb}}$ at $100~{\mbox{TeV}}$. Unfortunately, since $m_\eta$ is not far above the $2m_t$ threshold, we don't expect large differences in kinematical features between the $pp\rightarrow t\bar{t}\eta$ signal and the SM four-top background, making the discrimination very difficult. With larger $m_\eta$ (say $1~{\mbox{TeV}}$), the top pair from $\eta$ decay can be boosted, with an invariant mass distribution peaked around a high value, which can facilitate the discrimination from SM backgrounds. However, the cross section for such a heavy $\eta$ becomes very small. Therefore we don't expect $pp\rightarrow t\bar{t}\eta$ to be a promising channel for future $\eta$ detection in the SLH. \begin{figure*}[htbp] \includegraphics[width=2.2in]{TTtoetascanmTp.pdf} \includegraphics[width=2.2in]{TTtoetascanmTm.pdf} \\ \includegraphics[width=2.2in]{TjtoetascanmTp.pdf} \includegraphics[width=2.2in]{TjtoetascanmTm.pdf} \caption{\label{fig:TTTjtoeta} Cross section of $pp\rightarrow T\bar{T}\rightarrow\eta+\text{anything}$ and $pp\rightarrow Tj\rightarrow\eta+\text{anything}$ as a function of $m_T$. We assume $f=8~{\mbox{TeV}}$ and $m_\eta=500~{\mbox{GeV}}$. For $pp\rightarrow Tj$, the contribution from $pp\rightarrow\bar{T}j$ is also included.
} \end{figure*} \subsection{Pseudo-Axion Production from Top Partner Decay} The above discussion shows that it is very difficult to detect $\eta$ via the gluon fusion and $t\bar{t}\eta$ associated production channels. It is therefore natural to consider alternative $\eta$ production mechanisms, such as decay from heavier particles. In the SLH, the particles that can be heavier than $\eta$ are $T,D,S,N,Z',X$ and $Y$. Here we will concentrate on $T$, which is most tightly connected to EWSB. We will briefly comment on the possibility of detecting $\eta$ from other heavy particle decays in the next subsection. Under current constraints, the lower bound on $m_T$ is already larger than the largest possible value of $m_\eta$ plus $m_t$, and therefore the exotic decay channel $T\rightarrow t\eta$ will always be open. The branching fraction of $T\rightarrow t\eta$ has been discussed (see Figure~\ref{fig:Tdecay}). Here we focus on top partner production. The two major production mechanisms are pair production through the QCD interaction, and single production through the $TbW$ vertex. Pair production has the virtue of being model-independent, while single production depends on the value of $\delta_t$. In Figure~\ref{fig:TjTT} we present the cross section of $pp\rightarrow T\bar{T}$ and $pp\rightarrow Tj+\bar{T}j$ for both $\delta_t^+$ and $\delta_t^-$, as a function of $m_T$, fixing $f=8~{\mbox{TeV}}$ and $m_\eta=500~{\mbox{GeV}}$. Three center-of-mass energies ($14,27,100~{\mbox{TeV}}$) are considered. Whether pair or single production delivers the larger cross section depends on the sign choice for $\delta_t$ and the center-of-mass energy. In the $\delta_t^+$ case, for all three center-of-mass energies the single production cross section is larger. In the $\delta_t^-$ case, at $14~{\mbox{TeV}}$ single production is larger, since pair production is highly suppressed by phase space.
At $27~{\mbox{TeV}}$ pair production and single production become comparable, while at $100~{\mbox{TeV}}$ pair production dominates. \begin{figure*}[ht] \includegraphics[width=2.2in]{DDtoetascanmDm.pdf} \includegraphics[width=2.25in]{DDtoetascanmetam.pdf} \caption{\label{fig:Deta}Production cross section of $pp\rightarrow D\bar{D}\rightarrow\eta+\text{anything}$ as a function of $m_D$ (left) and $m_\eta$ (right). For the left plot, we assume $f=8~{\mbox{TeV}},m_T=3~{\mbox{TeV}},m_\eta=500~{\mbox{GeV}}$. For the right plot, we assume $f=8~{\mbox{TeV}},m_T=3~{\mbox{TeV}},m_D=700~{\mbox{GeV}}$.} \end{figure*} To detect $\eta$ we would also like to consider the top partner decay $T\rightarrow t\eta$ that follows the pair or single production of $T$. The associated cross sections are plotted as a function of $m_T$ in Figure~\ref{fig:TTTjtoeta}, using the narrow-width approximation, for both $\delta_t^+$ and $\delta_t^-$. For definiteness we take $f=8~{\mbox{TeV}},m_\eta=500~{\mbox{GeV}}$. To be precise, the plotted cross sections are defined by (for $pp\rightarrow Tj$, the contribution from $pp\rightarrow\bar{T}j$ is also included) \begin{widetext} \begin{align} \sigma(pp\rightarrow Tj\rightarrow\eta+\text{anything}) & =\sigma(pp\rightarrow Tj)\times{\mathrm{Br}}(T\rightarrow t\eta) \\ \sigma(pp\rightarrow T\bar{T}\rightarrow\eta+\text{anything}) & =2\sigma(pp\rightarrow T\bar{T})\times{\mathrm{Br}}(T\rightarrow t\eta)(1-{\mathrm{Br}}(T\rightarrow t\eta)) \nonumber \\ & +\sigma(pp\rightarrow T\bar{T})\times{\mathrm{Br}}^2(T\rightarrow t\eta) \end{align} \end{widetext} For the purpose of $\eta$ detection, let us consider using the $\eta\rightarrow t\bar{t}$ channel, which has an almost $100\%$ branching fraction. Then the $\eta$ production from top partner decays generically leads to a multi-top ($\geq 3$) signature. Moreover, the top quarks will be boosted since $m_T\gg m_t+m_\eta$.
For example, suppose a $2~{\mbox{TeV}}$ top partner is produced with little boost in the lab frame and then decays into $t+\eta$. At this step, $t$ and $\eta$ roughly share the rest energy of the top partner and therefore each carry about $1~{\mbox{TeV}}$ of energy. The $\eta$ boson then further decays into $t$ and $\bar{t}$, each of which has an energy of roughly $0.5~{\mbox{TeV}}$. All three top quarks are boosted: the first one will have a decay ($t \rightarrow b W$) cone size approximated by $\sim 2 m_t / E_t \simeq 0.4$, while the second and third tops have $\sim 2 m_t / E_t \simeq 0.8$. Furthermore, the second and third top quarks, which come from the $\eta$ decay, are close to each other, with a separation approximated by $\sim 2 m_\eta / E_\eta \simeq 0.8$. In the single production case, the signature will be $3t+j$, in which the first top is highly boosted while the second and third are still somewhat boosted and close to each other. One can make use of such kinematics to discriminate from QCD backgrounds. The most serious background is perhaps multi-top production. One may be able to reduce the background using {\it boosted-top} techniques~\cite{Kaplan:2008ie}. In the pair production case, if we consider one top partner decaying into $t\eta$ with the other decaying into $bW$, then we obtain a signature of $3t+b+W$ in which the top quarks and also the $W$ boson will be boosted. In both single and pair production channels, the invariant mass peaks at $m_T$ and $m_\eta$ will also be helpful in discriminating between the signal and background. Nevertheless, a full signal-background analysis using boosted-top techniques is beyond the scope of the present work. From Figure~\ref{fig:TTTjtoeta}, we see that the cross sections at the $14~{\mbox{TeV}}$ (HL-)LHC for all these channels are very small ($<1~{\mbox{fb}}$), making the detection very difficult. Nevertheless, with the increase of collider energy, the signal cross sections increase significantly.
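The back-of-envelope boost estimates described above can be reproduced in a few lines. This is only an order-of-magnitude sketch with the benchmark masses from the text; the cone-size values quoted in the text are rounded versions of these ultra-relativistic estimates.

```python
# Order-of-magnitude check of the boosted-top kinematics for
# T -> t eta, eta -> t tbar, with a 2 TeV top partner produced nearly
# at rest (benchmark values from the text; all energies in GeV).
m_T, m_eta, m_t = 2000.0, 500.0, 173.0

E_t1 = m_T / 2.0         # first top: ~half the top partner rest energy
E_eta = m_T / 2.0        # eta carries the other half
E_t23 = E_eta / 2.0      # each top from the eta decay

cone_t1 = 2.0 * m_t / E_t1      # decay cone of the first top, ~0.35
cone_t23 = 2.0 * m_t / E_t23    # decay cones of the second/third tops, ~0.7
sep_t23 = 2.0 * m_eta / E_eta   # separation of the eta decay products, ~1

print(cone_t1, cone_t23, sep_t23)
```

These numbers agree with the $\simeq 0.4$ and $\simeq 0.8$ values quoted above at the level of precision such approximations warrant.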
For example, at the $100~{\mbox{TeV}}$ FCC-hh, for both signs of $\delta_t$ and for both pair and single production channels, at relatively small $m_T$ the cross sections could reach $\mathcal{O}(100~{\mbox{fb}})$. In the $\delta_t^+$ case, single production (with the top partner decaying to $t\eta$) delivers a cross section of about $200~{\mbox{fb}}$, which is larger than that of the pair production channel. In the $\delta_t^-$ case, pair production (with one top partner decaying to $t\eta$) delivers a cross section of about $400~{\mbox{fb}}$, which is larger than that of the single production channel. In principle, top partner production and decay provide a way to measure $t_\beta$ (which is important for testing the SLH mass relation) and also to discriminate between the $\delta_t^+$ and $\delta_t^-$ cases. In practice, we may consider the partial width ratio $R_\eta\equiv\frac{\Gamma(T\rightarrow t\eta)}{\Gamma(T\rightarrow bW)}$ as both an indicator of the sign choice for $\delta_t$ and a way to measure $\delta_t$, which in turn determines $t_\beta$. $\delta_t$ can also be determined from $pp\rightarrow Tj$ production, since the cross section is proportional to $\delta_t^2$. Furthermore, in the $\delta_t^+$ case the total width of $T$ could reach $\mathcal{O}(100~{\mbox{GeV}})$, which may have an impact on the invariant mass distribution of the $T$ decay products (e.g.\ $bW$). Measurement of the $T$ total width could in principle also help determine the value of $\delta_t$. Even if $\delta_t$ is determined (including the sign choice), we should note, however, that the determination of $t_\beta$ and the test of the mass relation still require the measurement of $f$ and $m_\eta$, which can be obtained if we are able to measure the masses of the $Z'$ and $\eta$ particles.
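The narrow-width bookkeeping in the cross-section definitions above can be made concrete with a short sketch. The input cross section and branching ratio below are illustrative placeholders, not values computed in this work.

```python
# Sketch of the narrow-width-approximation combinatorics for eta production
# from top partner decays.  The inputs are hypothetical placeholder numbers
# chosen purely for illustration.
def sigma_eta_from_pair(sigma_TT, br):
    """pp -> T Tbar -> eta + anything: at least one T -> t eta."""
    exactly_one = 2.0 * sigma_TT * br * (1.0 - br)  # one T -> t eta
    both = sigma_TT * br**2                         # both T -> t eta
    return exactly_one + both                       # = sigma_TT*(1-(1-br)^2)

def sigma_eta_from_single(sigma_Tj, br):
    """pp -> Tj -> eta + anything."""
    return sigma_Tj * br

# Example with assumed numbers: sigma(T Tbar) = 1.0 (arbitrary units),
# Br(T -> t eta) = 0.3, giving 2*0.3*0.7 + 0.3^2 ~ 0.51.
print(sigma_eta_from_pair(1.0, 0.3))
```

The two terms in `sigma_eta_from_pair` match the two terms of the pair-production formula above, and their sum equals the "at least one $T\rightarrow t\eta$" probability times $\sigma(pp\rightarrow T\bar{T})$.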
\subsection{Comments on Other Channels} Currently the SLH is stringently constrained by the LHC $Z'\rightarrow ll$ search; nevertheless, this also means that if the SLH were realized in nature, the $Z'\rightarrow ll$ signature would be the first place where we might expect new physics to appear. It is then also important to consider whether we may detect $\eta$ as a decay product of $Z'$. Two channels might be conceived: $Z'\rightarrow\eta H$ and $Z'\rightarrow\eta Y$. However, it turns out that they have very small branching fractions: ${\mathrm{Br}}(Z'\rightarrow\eta H)<0.01$ and ${\mathrm{Br}}(Z'\rightarrow\eta Y)<10^{-4}$. This holds regardless of whether the $Z'\rightarrow DD, SS, NN$ channels are kinematically allowed. Therefore detecting $\eta$ from $Z'$ decay is not promising. If kinematically allowed, we might also consider the $D\rightarrow d\eta, S\rightarrow s\eta, N\rightarrow\nu\eta$ decays. However, these decay channels also suffer from small branching fractions, since the $\eta Dd,\eta Ss,\eta N\nu$ couplings are $\mathcal{O}(\frac{v}{f})$ suppressed compared to the $HDd,HSs,HN\nu$ couplings (see Eq.~\eqref{eq:lYu} and Eq.~\eqref{eq:dYu}). For example, $D$ will dominantly decay to $uW,dZ,dH$, with only ${\mathrm{Br}}(D\rightarrow d\eta)<1\%$, for the benchmark point $f=8~{\mbox{TeV}},m_T=2~{\mbox{TeV}},m_\eta=0.5~{\mbox{TeV}}$ and any value of $m_D$. Here $\delta_{Dd}^-$ is assumed, to be consistent with electroweak precision constraints. As to $D$ production, in the $\delta_{Dd}^-$ case there is a $t_\beta^{-1}$ suppression of single $D$ production, and therefore $D$ pair production is more promising. Moreover, the current collider constraint on the $D$ mass is not stringent, such that $m_D=700~{\mbox{GeV}}$ is still allowed~\cite{Sirunyan:2017lzl}.
Therefore, if $m_D$ is as light as $700~{\mbox{GeV}}$, the large $pp\rightarrow D\bar{D}$ production cross section could compensate for the small $D\rightarrow d\eta$ branching fraction, leading to a sizable $\eta$ production rate. At the $100~{\mbox{TeV}}$ FCC-hh, the $\eta$ production cross section from $D$ decay, $\sigma(pp\rightarrow D\bar{D}\rightarrow\eta+\text{anything})$, could reach more than $100~{\mbox{fb}}$ for $m_D$ not much larger than $700~{\mbox{GeV}}$ (see Figure~\ref{fig:Deta}). This is comparable with the $\eta$ cross section from top partner production, and in principle could also be used to measure $t_\beta$. The expected signature would be $t\bar{t}+2j+W/Z/H$, in which the $W/Z/H$ should be boosted. The existence of various intermediate resonances would be helpful in discriminating between signal and background. Nevertheless, we should be aware that naturalness does not offer any guidance on the preferred value of $m_D$. This is different from the case of $m_T$, for which naturalness clearly favors a lighter top partner. The case of $pp\rightarrow S\bar{S}$ production with $S\rightarrow s\eta$ decay is entirely analogous to the above discussion of $D$ production and decay. For $N$, ${\mathrm{Br}}(N\rightarrow\nu\eta)$ is also very small (less than $1\%$ for the benchmark point $f=8~{\mbox{TeV}},m_T=2~{\mbox{TeV}},m_\eta=0.5~{\mbox{TeV}}$ and any value of $m_N$). Moreover, $N$ does not have QCD pair production channels like $D,S$, and therefore it is difficult to detect $\eta$ from $N$ decay at hadron colliders. The $X,Y$ gauge bosons in the SLH may have decays like $X\rightarrow\eta W$ and $Y\rightarrow H\eta$. However, the single production cross sections of $X,Y$ at hadron colliders are highly suppressed, and we need to rely on production in association with other heavy particles (heavy gauge bosons or quark partners)~\cite{Han:2005ru}.
Since the $X,Y$ bosons are quite heavy (with masses of about $0.8m_{Z'}$), their production with other heavy particles would be limited by phase space, while their decays are expected to be dominated by fermionic final states. Therefore we don't consider $\eta$ production from $X,Y$ decays as promising channels for $\eta$ detection. \section{Discussion and Conclusions} \label{sec:dnc} \begin{table*}[ht] \begin{tabular}{|c|c|c|} \hline Channel & Cross section at the benchmark point ($\sqrt{s}=100~{\mbox{TeV}}$) (fb) & Signature \\ \hline $pp\rightarrow T\bar{T}\rightarrow\eta+\text{anything}$ & $84(\delta_t^+),\quad\quad379(\delta_t^-)$ & $3t+W+b$ or $4t+Z/H$ \\ \hline $pp\rightarrow Tj\rightarrow\eta+\text{anything}$ & $209(\delta_t^+),\quad\quad133(\delta_t^-)$ & $3t+j$ \\ \hline $pp\rightarrow D\bar{D}\rightarrow\eta+\text{anything}$ & 322 & $2t+W/Z/H+2j$ \\ \hline \end{tabular} \caption{Summary of $\eta$ production from $T,D(S)$ decays at the $100~{\mbox{TeV}}$ FCC-hh. For $pp\rightarrow Tj$, the contribution from $pp\rightarrow\bar{T}j$ is also taken into account. For the $T\bar{T},Tj$ channels, the benchmark point is $f=8~{\mbox{TeV}},m_T=2~{\mbox{TeV}}, m_\eta=500~{\mbox{GeV}}$, while for the $D\bar{D}$ channel, the benchmark point is $f=8~{\mbox{TeV}},m_T=3~{\mbox{TeV}},m_\eta=500~{\mbox{GeV}}, m_D=700~{\mbox{GeV}}$. When listing the signatures for the $T\bar{T},D\bar{D}$ channels we don't consider the situation in which both quark partners decay into $\eta+t$ or $\eta+j$, but this possibility is taken into account in the cross section values and plots.} \label{table:sumeta} \end{table*} The Simplest Little Higgs model provides a particularly simple way to concretely realize the collective symmetry breaking mechanism, which alleviates the Higgs mass naturalness problem.
In the scalar sector, its particle content is very economical: besides the CP-even Higgs, which serves as the $125~{\mbox{GeV}}$ Higgs-like particle, the only additional scalar is the pseudo-Nambu-Goldstone particle $\eta$ associated with a remnant global $U(1)$ symmetry. The detection of $\eta$ is important since its mass enters the crucial SLH mass relation, and it will also play an important role in discriminating the SLH from other new physics scenarios. In this work we are concerned with the production and decay of the $\eta$ particle at future hadron colliders. We found that in the natural region of parameter space $m_\eta$ is larger than $2m_t$, so that $\eta$ decays almost exclusively to $t\bar{t}$, while ${\mathrm{Br}}(\eta\rightarrow\gamma\gamma)$ is too small to be considered promising for detection. It is also very difficult to detect $\eta$ in the direct production channels $pp\rightarrow\eta$ (gluon fusion) and $pp\rightarrow t\bar{t}\eta$. Channels that are worth further consideration include $\eta$ production from heavy quark partner ($T,D,S$) decays, in which the heavy quark partner might be singly (for $T$) or pair produced. The corresponding $\eta$ production cross section at the $100~{\mbox{TeV}}$ FCC-hh could reach $\mathcal{O}(100~{\mbox{fb}})$ for a certain range of parameter space allowed by current constraints, while at the $14~{\mbox{TeV}}$ (HL-)LHC the rate might be too small for detection. However, the detection prospects in these channels (at $100~{\mbox{TeV}}$) might still be challenging, since the final states are quite complicated, involving multiple top quarks produced in association with other objects, one or more of which could be boosted, requiring sophisticated tagging techniques. At the same time, the SM background also increases substantially with the collider energy, in a more complicated hadronic environment.
The aim of this paper is to examine the $\eta$ production channels, with LO estimates of the $\eta$ cross sections in the relatively promising ones as a function of model parameters, keeping in mind the most up-to-date theoretical and experimental constraints (see Table~\ref{table:sumeta} for a summary). We do not attempt here to give a quantitative assessment of the collider sensitivities in these channels. The phenomenology of the $\eta$ particle in the SLH was studied long ago in several papers (e.g.~\cite{Kilian:2004pp,Kilian:2006eh, Cheung:2006nk,Cheung:2008zu}). Compared to all the previous studies, the present paper differs in a few crucial aspects: \begin{enumerate} \item Instead of working with the ad hoc assumption of no direct contribution to the scalar potential from the physics at the cutoff, we take into account in all calculations the crucial SLH mass relation Eq.~\eqref{eq:mr2}, which is a reliable prediction of the SLH. Therefore our prediction preserves all the correlations required by theoretical consistency but does not depend on the choice of any fixed cutoff value such as $4\pi f$. \item We have focused our attention on the parameter region favored by naturalness considerations. This region is characterized by small $m_T$ and large $t_\beta$ or $t_\beta^{-1}$. The favored $\eta$ mass is larger than $2m_t$. \item We have taken into account the recent collider constraint on $f$ ($f\gtrsim 7.5~{\mbox{TeV}}$), which is much more stringent than the constraints obtained in earlier studies. We also take into account the constraint from perturbative unitarity, which sets an upper bound on the allowed value of $t_\beta$ or $t_\beta^{-1}$. These two factors determine the current lower bound on $m_T$ and crucially affect the largest cross section that can be achieved in all channels. \item Our study is based on an appropriate treatment of the diagonalization of the vector-scalar system in the SLH, and especially the field redefinition related to $\eta$.
This affects the derivation of the $ZH\eta$ vertices and also the $\eta$ couplings to fermions, which were not treated properly in works prior to Ref.~\cite{He:2017jjx}. \item We also clarify the role played by the symmetric VSS vertices that appear in the Lagrangian, and how they are compatible with general principles such as field redefinition invariance and gauge independence. \end{enumerate} From our study it turns out that the detection of $\eta$ at the $14~{\mbox{TeV}}$ (HL-)LHC will be very difficult, and therefore a $pp$ collider with higher energy and larger luminosity, such as the $27~{\mbox{TeV}}$ HE-LHC or even the $100~{\mbox{TeV}}$ FCC-hh or SppC, is motivated to capture the trace of such an elusive particle. Moreover, we would generally expect some other SLH signatures (e.g.\ $Z'\rightarrow ll, T\rightarrow bW$ or $D\rightarrow uW$) to show up earlier than $\eta$ signatures, since $\eta$ signatures are usually very complicated (with multiple top quarks) and suffer from small rates. It is nonetheless important to study $\eta$ properties, since they are crucial in testing the SLH mass relation and also provide a basis for model discrimination. \subsection*{Acknowledgements} We thank Yue-Lin Sming Tsai for helpful discussion. P.Y.T. was supported by the World Premier International Research Center Initiative (WPI), MEXT, Japan. This work was supported in part by the Natural Science Foundation of China (Grants No. 11635001 and No. 11875072), the China Postdoctoral Science Foundation (Grant No. 2017M610992) and the MoST of Taiwan under Grants No. 105-2112-M-007-028-MY3 and No. 107-2112-M-007-029-MY3.
\section{Introduction} Landscape evolution on Earth is a competition between tectonics and rainfall \citep[e.g.][]{Burbank2011}. On Mars, both these factors have been negligible for at least 3\,Gyr, allowing slow landscape evolution through aeolian processes to dominate. Thus, Mars is a natural laboratory for exploring the co-evolution of wind and landscapes \citep[e.g.][]{Holt2010, Conway2012, Brothers2013, Brothers2016}. In this study, we focus on layered sediments in craters. Most of these sediments are indurated \citep{Malin2000}, and we refer to them as sedimentary rocks. Most of the known light-toned, post-Noachian sedimentary rocks on Mars take the form of mountains (mounds) within craters and canyons \citep{Hynek2003}, including Mt.\ Sharp in Gale crater, the target of the Mars Science Laboratory `Curiosity' rover \citep{Anderson2010, Milliken2010}. The other currently operating Mars rover, MER-B `Opportunity', is also exploring a crater that contains a sedimentary mound; the 22\,km diameter Endeavour crater \citep[e.g.][]{Squyres2012, Grant2016}. These mounds are distributed across the Martian surface, with most of the mapped intra-crater mounds located in the Arabia Terra region \citep{Malin2000, Fergason2008, Zabrusky2012, Bennett2016}. Figure~\ref{all_mounds} shows the locations of intra-crater mounds (the focus of this study), as well as mounds within the Valles Marineris canyon system \citep{Kite2016}, ice mounds in the north polar region \citep{Conway2012} and Medusa Fossae Formation mounds \citep{Bradley2002}. Visually, the intra-crater mounds mapped by \citet{Bennett2016} fall into three main types. There are mounds with a distinctive moat encircling them, mounds joined partly to the crater wall, and mounds forming a ramp down from the crater rim (see Figure~\ref{mound_examples}). 
The data suggest that there is a tendency for mounds completely encircled by moats (Figure~\ref{mound_examples}e,f) to become more frequent as the crater diameter increases, while mounds defined here as ramps (Figure~\ref{mound_examples}a,b) occur only in craters $<$\,60\,km in diameter (see Figure~\ref{mound_types}). \begin{figure*}[t] \begin{center} \noindent\includegraphics[width=1.0\textwidth]{fig01.png} \caption{Global distribution of mapped sedimentary mounds on shaded MOLA topography, showing intra-crater mounds \citep{Bennett2016}, Valles Marineris mounds \citep{Kite2016}, ice mounds \citep{Conway2012} and Medusae Fossae Formation mounds \citep{Bradley2002}. The two black stars show mounds in the Terby and Galle craters that were mapped but not included in \citet{Kite2016}.} \label{all_mounds} \end{center} \end{figure*} Despite the central role of mounds in the sedimentary-rock landscapes of Mars, the mechanisms responsible for mound formation and evolution remain unclear. One hypothesis for the presence of mounds is that they are the result of wind erosion of initially sediment-filled craters, with material preferentially eroded around the edges of the craters \citep{Malin2000, AndrewsHanna2010, Bennett2016}. Wind tunnel experiments carried out by \citet{Day2016} show that a mound and moat can be shaped by wind erosion, though these experiments used damp sand as opposed to sedimentary rock, with a crater model 30\,cm in diameter. Large eddy simulations \citep{Day2016, Anderson2017} suggest that vortical flows emanating from the upwind crater rim are responsible for moat excavation in sediment-filled craters, with a positive-feedback mechanism in which the erosion potential of the sediment increases the more the sediment erodes. \citet{Chan2017} present a wind-sculpted sandstone mound on Earth as an analogue to Gale crater, though it is $O(10^3)$ times smaller. 
Another hypothesis, motivated by outward dips in sedimentary mound strata, is that some mounds form in place by interspersed episodes of aeolian deposition and slope-wind erosion \citep{Kite2013, Kite2016}. In either hypothesis, winds play a vital role in the erosion of sedimentary deposits, and the transport of sediment within or away from the crater. \begin{figure*}[t] \begin{center} \noindent\includegraphics[width=1.0\textwidth]{fig02.png} \caption{Mars Orbiter Laser Altimeter (MOLA) elevation data of intra-crater mounds \citep[as listed in][]{Bennett2016} showing a variety of mound morphologies, with (a,b) mounds forming a ramp down from the rim, (c,d) mounds joined to the crater wall, and (e,f) mounds encircled by moats. Craters are unnamed apart from (e) Nicholson and (f) Gale. Crater diameters are listed above each panel.} \label{mound_examples} \end{center} \end{figure*} Wind erosion occurs on Mars today, as evidenced by dune field activity \citep[e.g.][]{Fenton2006, Silvestro2010, Silvestro2013, Chojnacki2011}. Observations of dune field morphologies and other aeolian features can be used to infer present-day and potential paleowind directions \citep[e.g.][]{Hobbs2010, Bridges2014, Day2016b}. Estimated sedimentary-rock erosion rates are between 0.01--50\,$\mu$m\,yr$^{-1}$, with the higher rates corresponding to vertical rock faces \citep[e.g.][]{Bridges2012, Farley2014, Golombek2014, Grindrod2014, Levy2016, Salese2016, Kite2017}. Rates $>$1\,$\mu$m\,yr$^{-1}$ allow for many kilometers of cumulative erosion \citep{Armstrong2005}. Some of the strongest winds within craters are slope winds on crater walls \citep[e.g.][]{Kite2013, Tyler2013, Tyler2015, Rafkin2016, Newman2017,Steele2017}. Due to the low density of the Martian atmosphere, the heating and cooling of the surface has a much larger impact on the near-surface atmosphere than on Earth.
Due to the correspondingly strong horizontal temperature gradients, the resulting slope winds are typically 2--3 times faster than on Earth \citep[e.g.][]{Ye1990, Savijarvi1993, Tyler2002, Spiga2009, Spiga2011a}. Indeed, the strong nighttime downslope winds can increase near-surface air temperatures by up to 20\,K \citep{Spiga2011b}. \begin{figure*}[t] \begin{center} \noindent\includegraphics[scale=0.85]{fig03.pdf} \caption{The fraction of craters from \citet{Bennett2016} in each size bin that have mounds displaying the characteristics of mounds encircled by moats, mounds joined to the rim or mounds forming a ramp down from the rim (see Figure~\ref{mound_examples} for mound types). The numbers in brackets under each size range show how many craters are in that bin. The Becquerel mound is ambiguous and is excluded.} \label{mound_types} \end{center} \end{figure*} Several processes may contribute to slope-wind erosion; (i) rock weakening and break-up by weathering and/or hydration state changes \citep[e.g.][]{Chipera2007,Wang2011}; (ii) mass wasting, followed by aeolian removal of talus to maintain steep slopes and allow continued mass wasting; (iii) aeolian erosion of weakly cemented sediments \citep{Shao2008}; and (iv) aeolian abrasion of bedrock \citep{Wang2011}. These processes range from transport-limited to detachment-limited, and predict correspondingly different shear-stress dependencies and thresholds for erosion. However, what they all have in common is the need for wind. Thus, in order to identify physical mechanisms involved in sedimentary mound formation and evolution, we need to obtain an understanding of the diurnal variation of slope winds, and the feedback between terrain evolution and circulation. To achieve this, we use a mesoscale model to simulate the circulation within craters of different morphologies. 
We assume detachment-limited erosion, where only the magnitude of the wind is of concern (as opposed to transport-limited erosion, where wind vectors are required for determining the transport of the eroded sediment). This is complementary to the large eddy simulations of \citet{Day2016} and \citet{Anderson2017}, where the focus was on vortical flows and not the radial slope winds. \section{Model description} \label{sec:model} Simulations are performed using the three-dimensional non-hydrostatic Mars Regional Atmospheric Modeling System (MRAMS) mesoscale model \citep{Rafkin2001}. This model has been used extensively to investigate many features of the Martian circulation \citep[e.g.][]{Michaels2006a, Michaels2006b, Michaels2008, PlaGarcia2016, Rafkin2002, Rafkin2003, Rafkin2009, Rafkin2016}. Two types of simulation were performed: `idealized' and `realistic'. The purpose of the idealized simulations is to isolate only those circulations related to crater topography. As such, the simulations have the Coriolis force and thermal tides removed, and are initialized without large-scale winds. This is similar to the approach used by \citet{Tyler2015}. Three nested grids are used, with the resolution of the innermost grid ranging between 0.5--4\,km, depending on the size of the crater being simulated (80 grid boxes span the crater diameter). There are 60 vertical levels, with the midpoint of the lowest level at 15\,m above the surface, and with 15 levels in the lowest kilometer. Tests were performed with increased numbers of vertical levels, but there were no significant changes in the strengths of the slope flows. Time steps in the outermost grid vary between 2--8\,s, depending on the crater diameter, and are reduced by a factor of two for each successive grid. The surrounding topography has constant albedo (0.23), thermal inertia (230 J\,m$^{-2}$\,K$^{-1}$\,s$^{-1/2}$) and aerodynamic surface roughness (3\,cm). 
For computational simplicity the transport of individual dust particles is not modeled here, and instead the visible dust optical depth at 610\,Pa is set to a constant value of 0.45. The water cycle is not included. Craters are located at 0$^\circ$N, 0$^\circ$E, at $L_\mathrm{S} = 135^\circ$, resulting in sunrise and sunset times of 05:30 and 17:30 respectively. Different times of year were tested, but the results changed little. For the `realistic' simulations we use five nested grids, with the size of the outer grid, \textit{O}($10^4$\,km), chosen so that the crater circulations that develop on the inner grids are not directly affected by the boundary conditions. The grid spacings of the outer and inner grids are 324\,km and 4\,km respectively (decreasing by a factor of three with each successive grid). A time step of 8\,s is used in the outer grid, and this is decreased by a factor of two for each successive grid. Surface properties are interpolated from TES nighttime thermal inertia and albedo data sets \citep{Putzig2007}, with the topography from MOLA 32 pixel per degree (ppd) data \citep{Smith2001}. Output from the LMD global circulation model \citep[e.g.][]{Forget1999} is used to provide the initial conditions and boundary conditions every 1.5 Mars hours at four different times of year: $L_\mathrm{S} = 45^\circ$, 135$^\circ$, 225$^\circ$ and 315$^\circ$. For both the `idealized' and `realistic' cases, simulations are performed for 7 sols, with model data output every 20 Mars minutes. The last sol is used for analysis, in order to give the model time to `spin up', though the atmospheric temperatures and circulations patterns are repeatable after around 3--4 sols. In this study we assume that saltation abrasion is the landscape-modifying mechanism, and that physical or chemical weathering processes break down the sediment, producing grains suitable for saltation. As such, we use the surface wind stress distributions from the simulations as a proxy for erosion. 
The surface wind stress is given by $\tau_* = \rho_\mathrm{a}u_*^2$, with $\rho_\mathrm{a}$ the density of the atmosphere at the surface and $u_*$ the friction velocity \citep[see][]{Kok2012}. Saltation, and hence erosion, is initiated when the wind stress is above a critical value. The saltation flux, $Q$, scales as $Q \propto \tau_\mathrm{ex}V$, where $\tau_\mathrm{ex} = \tau_* - \tau_{*\mathrm{it}}$ is the `excess' stress, $\tau_{*\mathrm{it}}$ is the impact threshold stress -- the minimum value required to sustain saltation -- and $V$ is the mean horizontal particle speed (see \citet{Kok2012} and \citet{Sullivan2017} for more details). If $V$ is assumed to increase linearly with $u_*$ then $Q \propto u_*\tau_\mathrm{ex}$, while if $V$ is constant with $u_*$ then $Q \propto \tau_\mathrm{ex}$. Previous work has assumed a linear increase of $V$ with $u_*$ \citep[e.g.][]{White1979, Armstrong2005, Almeida2008, Wang2015}, while recent work suggests the relation $Q \propto \tau_\mathrm{ex}$ should be used \citep{Sullivan2017}. The timing of mound erosion (relative to atmospheric loss) is currently not well understood \citep{Bennett2016}, nor is the climate at the time erosion might have occurred, as this can vary with orbital changes and atmospheric loss \citep[e.g.][]{Kite2014, Soto2015, Wordsworth2016, Ramirez2017}. Partly for these reasons, and partly because it allows us to compare our model results to reality, all simulations presented here have surface pressures similar to those of present-day Mars ($\sim$6\,hPa). In general, the surface stresses predicted by our simulations are not large enough to initiate saltation, a situation that also occurs in many other models \citep[see][]{Sullivan2017}. As such, we do not use an explicit erosion relation, and simply compare the magnitudes of the surface wind stress across the craters and mounds, relating regions of higher stress with increased potential erosion.
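The stress quantities above can be summarized in a short sketch. The density and threshold values below are assumed illustrative numbers for present-day Mars, not outputs of the simulations.

```python
# Sketch of the surface wind stress and excess stress used as an erosion
# proxy: tau_* = rho_a * u_*^2, and tau_ex = tau_* - tau_*it (zero below
# the impact threshold).  rho_a and tau_it are assumed placeholder values.
def surface_stress(rho_a, u_star):
    """Surface wind stress tau_* (Pa) from density (kg m^-3) and u_* (m/s)."""
    return rho_a * u_star**2

def excess_stress(tau, tau_it):
    """Excess stress tau_ex; saltation is sustained only when positive."""
    return max(tau - tau_it, 0.0)

rho_a = 0.015   # kg m^-3, near-surface density at ~6 hPa (assumed)
tau_it = 0.01   # Pa, impact-threshold stress (assumed)

for u_star in (0.5, 1.0, 1.5):          # friction velocities in m/s
    tau = surface_stress(rho_a, u_star)
    tau_ex = excess_stress(tau, tau_it)
    # The two saltation-flux scalings discussed in the text:
    q_linear_V = u_star * tau_ex        # Q proportional to u_* tau_ex (V ~ u_*)
    q_const_V = tau_ex                  # Q proportional to tau_ex (constant V)
    print(u_star, tau, q_linear_V, q_const_V)
```

The two flux proxies differ only by the factor of $u_*$, so the choice of scaling changes the relative weighting of the strongest winds rather than the location of the erosion threshold.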
\section{Results} It is not possible to simulate the entire mound formation process with a mesoscale model, but such a model can be used to take `snapshots' in time, to see how the circulation patterns would potentially erode the sediment within the crater. Initially we look at the circulation in axisymmetric craters with diameters of 40, 80 and 160\,km. This spans most of the range of mound-hosting craters cataloged by \citet{Bennett2016}, with only the smallest craters missing. The craters are surrounded by flat topography, with the results azimuthally-averaged from radial slices taken every 3$^\circ$ \citep[as in][]{Tyler2015}. Later we also look at craters covered with thick sedimentary layers, and idealized craters embedded within realistic topography. In these cases the results are not azimuthally-averaged, as the topography is not axisymmetric. \subsection{Erosion in craters filled with sedimentary deposits} We begin by looking at mound formation in craters containing horizontally-level sedimentary deposits. For diameters of 40, 80 and 160\,km, we assume sediment-free (basement) depths of 2.4, 3.5 and 5\,km, corresponding to data for pristine Mars craters \citep{Tornabene2018}. For each diameter, we model craters with floors that are (i) level with the surrounding flat plains, so only the crater rim protrudes, and (ii) 1.75\,km below the surrounding plains. For the 80 and 160\,km diameter craters we also consider crater floors 3.5\,km below the surrounding plains. These simulations represent different levels of sedimentary infill. We do not consider the possibility of a central peak (produced during crater formation) protruding from the sediment-filled craters. Instead we assume that if present, a central peak is below the sediment, either due to an initial small size, or through degradation \citep[e.g.][]{Robbins2012, Tornabene2014}. 
Figure~\ref{circulation_160} shows results from the 160\,km diameter simulations at (a--c) 14:00 and (d--f) 19:00 local time, as this is when the upslope winds and downslope winds (respectively) are typically at their strongest in these simulations (see Movie S1 in the supporting information for full diurnal results). Results from the smaller-diameter craters are similar, and are thus not shown. The shading shows the magnitude of the wind multiplied by the sign of the radial wind, i.e.\ $u = (u_\mathrm{r}/|u_\mathrm{r}|) \sqrt{u_\mathrm{r}^2 + u_\mathrm{v}^2}$, with $u_\mathrm{r}$ and $u_\mathrm{v}$ the radial and vertical components of the wind ($u_\mathrm{r}$ is typically an order of magnitude greater than $u_\mathrm{v}$). For a given crater diameter, as the depth of the crater increases, the strength of the wind on the crater rim increases. This is caused by the larger temperature and hence pressure differences across the crater, as noted by \citet{Tyler2015}. Figure~\ref{stress_160} shows the surface wind stress for these three craters, as a function of time of day and distance from the crater center. As can be seen in Figure~\ref{stress_160}a, away from the non-erodible crater rim the largest values of surface wind stress occur on the crater floor in the evening (18:00--22:00), and are associated with air moving down the walls of the crater rim and towards the center, as seen in Figure~\ref{circulation_160}d. The `lumpy' appearance of the surface wind stress from 18:00--22:00 is a result of the discrete model output every 20 Mars minutes. For a crater filled with sediment with only the rim protruding as in Figure~\ref{stress_160}a, it is likely that passing synoptic weather systems and localized strong gusts would lead to more erosion than the nighttime downslope flow. This would likely increase the depth of the crater at all locations, though vortical flows may preferentially erode more sediment near the crater walls \citep{Day2016, Anderson2017}. 
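The shading quantity defined above (wind magnitude signed by the direction of the radial component) can be written as a short helper; the function name is ours.

```python
import math

def signed_wind_speed(u_r, u_v):
    """Wind magnitude signed by the radial direction:
    u = (u_r / |u_r|) * sqrt(u_r^2 + u_v^2).
    Positive values indicate radial outflow (away from the crater
    center), negative values inflow."""
    if u_r == 0.0:
        return 0.0  # sign is undefined with no radial component
    return math.copysign(math.hypot(u_r, u_v), u_r)
```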
\begin{figure*}[t] \begin{center} \noindent\includegraphics[width=1.0\textwidth]{fig04.pdf} \caption{Azimuthally-averaged wind speed (shaded), wind direction (arrows) and potential temperature (contours) for three 160\,km diameter craters with floors at different depths, $z$, below the surrounding plains (labeled above each plot). Plots show values at 14:00 (a--c) and 19:00 (d--f) local time, with colored circles showing the maximum daytime (08:00--17:00) and nighttime (17:00--08:00) surface wind stress values in the top and bottom rows respectively. Potential temperature is contoured at 2\,K intervals in (a--c) and 4\,K intervals in (d--f).} \label{circulation_160} \end{center} \end{figure*} \begin{figure*}[t] \begin{center} \noindent\includegraphics[width=1.0\textwidth]{fig05.pdf} \caption{Surface wind stress, as a function of time of day (Mars hours) and distance from the crater center, for the three craters shown in Figure~\ref{circulation_160}. Horizontal dotted lines mark the top and bottom of the crater wall, while vertical dotted lines show the sunrise and sunset times.} \label{stress_160} \end{center} \end{figure*} In the case of a crater 1.75\,km deep (Figures~\ref{circulation_160}b,e and \ref{stress_160}b), it is clear that on the erodible crater floor there are two daily periods of increased surface wind stress. There is again the 18:00--22:00 period associated with downslope winds which would likely erode all locations equally, but there is also now a period centered around 13:00, which is associated with upslope winds and has stress values increasing towards the crater wall. It should be noted that the stress values here result from grid box-average winds, which do not explicitly take gustiness into account. In general there is increased gustiness during the daytime \citep[e.g.][]{Fenton2010}, so peak surface wind stresses are likely to be higher than represented in the model. 
Even without taking this into account, erosion associated with a stress distribution like that in Figure~\ref{stress_160}b would result in more sediment being removed towards the base of the crater wall, with erosion decreasing with distance towards the center, forming a mound (assuming detachment-limited sediment transport). Erosion from traveling synoptic systems would likely be less important than for the filled case in Figure~\ref{stress_160}a, since craters become increasingly isolated from the surrounding environment as they deepen \citep[e.g.][]{Rafkin2016}. At an even greater depth of 3.5\,km (Figures~\ref{circulation_160}c,f and \ref{stress_160}c), the surface wind stress associated with the nighttime downslope wind has increased, mainly because of stronger winds (as can be seen in Figure~\ref{circulation_160}f), but also due to the increasing atmospheric density in a deeper crater. However, now the stress during the daytime at the base of the crater wall is lower, and so the tendency to form a mound is reduced. Additional simulations at different depths were performed in order to understand this behavior, with the results shown in Figure~\ref{max_stress_day}. The surface wind stresses on the crater floor near the crater wall initially increase as the crater depth increases, up to $\sim$2\,km, and then start to decrease again. This maximum is a result of two competing factors. Firstly, as the crater depth increases, the daytime air over the crater at the same level as the surrounding plains gets cooler, as can be seen by the potential temperature contours in Figure~\ref{circulation_160}a--c. This results in a larger pressure difference, creating a stronger surge of air out of the crater at the rim \citep[see][]{Tyler2015}. The wind speed increases from the crater center to the rim, so initially as the depth increases the wind speeds and hence stress values on the crater floor increase. 
However, for crater walls of the same angle (10$^\circ$ in these simulations), as the crater gets deeper the base of the wall moves closer to the crater center (see Figure~\ref{stress_160}), into a region of slower winds. These two competing factors result in the behavior seen in Figure~\ref{max_stress_day}, where a crater depth of $\sim$2\,km is the most favorable for erosion by slope winds near the crater walls. \begin{figure*}[t] \begin{center} \noindent\includegraphics[scale=0.8]{fig06.pdf} \caption{Variation of maximum daytime surface wind stress on the crater floor, as a function of distance from the crater wall (colored lines, ranging from 0--24\,km). Values are shown for six 160\,km diameter craters of different depths (1--3.5\,km below the surrounding plains).} \label{max_stress_day} \end{center} \end{figure*} Similar behavior is seen in the simulations with diameters of 40\,km and 80\,km. Thus, it seems plausible that a mound can begin to form by slope wind erosion if a sediment-filled crater has a depth shallower than a certain value ($\sim$2\,km in these idealized simulations). If a process results in the depth of the sediment-filled crater being much larger than this value, then the reduced stress near the crater wall may result in either much slower mound formation, or possibly no mound formation at all if the threshold for sand transport is high. However, it may be possible for saltation to be initiated and maintained at lower wind speeds than the fluid threshold \citep{Kok2010, Sullivan2017}. \begin{figure*}[t] \begin{center} \noindent\includegraphics[width=1.0\textwidth]{fig07.pdf} \caption{Cartoon showing the proposed evolution of a sediment-filled crater, from initial level infill to mound formation. Red and blue arrows show the directions of the strongest daytime and nighttime winds respectively. 
Black arrows show how sediment erodes.} \label{cartoon} \end{center} \end{figure*} Figure~\ref{cartoon} shows a cartoon of our proposed method of mound formation in craters with initial flat infill. Erosion from nighttime downslope winds, as well as daytime wind gusts and dust devils which are not modeled, initially results in a gradual deepening of the crater (Figure~\ref{cartoon}a). As the crater gets deeper it gets more isolated from the surrounding environment and the upslope and downslope flows on the crater wall increase in strength, preferentially eroding sediment close to the crater wall (Figure~\ref{cartoon}b). Small dust particles can remain suspended in the air, and can be transported away from the crater. However, larger abrading clasts may accumulate at the low points on the crater floor (Figure~\ref{cartoon}c). This may increase the erosion in these areas, such as in the case of potholing in rivers on Earth \citep[e.g.][]{Pelletier2015}, leading to a positive feedback. Alternatively, a coarse-grained lag deposit can armor underlying softer rocks. Accumulations of larger particles in crater moats are observed, e.g.\ the Bagnold Dune Field in Gale crater \citep{Hobbs2010, Charles2017}. \subsection{Mound evolution} The results of the previous section suggest that mounds can form from craters with initial flat sedimentary infill. Thus, we next look at different mound profiles to see how they might evolve through wind erosion. Mound heights at a distance $r$ from the crater center are given by $h_\mathrm{mound}(r) = h_\mathrm{max}\cos(\pi r/2r_\mathrm{max})$, where $h_\mathrm{max}$ is the maximum height of the mound, and $r_\mathrm{max}$ is the maximum radius of the mound (which is $\le$ the crater floor radius). This profile provides a good match to the average slope of Mt.\ Sharp. \subsubsection{Mounds in craters 1.75\,km deep} We begin by considering two different mound shapes in craters of 40, 80 and 160\,km diameter, and 1.75\,km depth. 
The first mound profile begins at the base of the crater wall, and extends to 90\% of the crater depth. A mound of this type might emerge if the surface wind stress and hence erosion were maximum at the base of the crater wall and decreased towards the crater center, as suggested by Figure~\ref{stress_160}b. As the mounds are the same height but the crater diameters differ, the sides of the mound get steeper as the diameter gets smaller. The circulation patterns at 14:00, and the peak daytime stresses for these mound shapes, are shown in Figure~\ref{wind_2pm_diff}a--c (see Movie S2 in the supporting information for full diurnal results). As the crater diameter increases, the strength of the flow over the crater rim increases slightly, because of the larger temperature and hence pressure difference between the air over the crater and over the plains. Conversely, the upslope flow over the mound decreases in strength. This is because fractionally more air is lost from the smaller crater over the rim, resulting in increased downwelling, increased adiabatic warming of air in the crater, and stronger flow up the mound \citep[see][]{Tyler2015}. Due to the stronger winds blowing up the mound, the peak daytime surface wind stresses on the mound increase as crater diameter decreases. The peak stress values occur roughly 2/3 of the way up the mound in all cases. At the tops of the mounds, the peak stresses in the 40 and 80\,km craters are larger than in the moat (Figure~\ref{wind_2pm_diff}a,b), while in the 160\,km crater the peak stress at the top of the mound is lower than in the moat. 
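The cosine mound profile defined earlier, $h_\mathrm{mound}(r) = h_\mathrm{max}\cos(\pi r/2r_\mathrm{max})$, can be sketched as follows (the function name and example values are ours; units are arbitrary as long as they are consistent):

```python
import math

def mound_height(r, h_max, r_max):
    """Cosine mound profile: h(r) = h_max * cos(pi * r / (2 * r_max)).
    The mound falls from h_max at the crater center (r = 0) to zero
    at the mound edge (r = r_max)."""
    if r >= r_max:
        return 0.0  # beyond the mound edge (e.g. in the moat)
    return h_max * math.cos(math.pi * r / (2.0 * r_max))
```

For example, the first profile above in a 1.75\,km deep crater would use $h_\mathrm{max}$ of 90\% of the crater depth (1.575\,km), with $r_\mathrm{max}$ equal to the crater floor radius.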
\begin{figure*}[p] \begin{center} \noindent\includegraphics[width=1.0\textwidth]{fig08.pdf} \caption{Azimuthally-averaged wind speed (shaded), wind direction (arrows) and potential temperature (contours) at 14:00 local time for craters with diameters of 40, 80 and 160\,km, and with mounds of different fractional heights, $f_\mathrm{h}$, and radii, $f_\mathrm{r}$, (labeled above each plot, as fractions of the crater depth and crater floor radius). Colored circles show the maximum daytime (08:00--17:00) surface wind stress values. Potential temperature is contoured at 2\,K intervals.} \label{wind_2pm_diff} \end{center} \end{figure*} \begin{figure*}[p] \begin{center} \noindent\includegraphics[width=1.0\textwidth]{fig09.pdf} \caption{As Figure~\ref{wind_2pm_diff}, but for 19:00 local time, with the colored circles showing the maximum nighttime (17:00--08:00) surface wind stress values.} \label{wind_7pm_diff} \end{center} \end{figure*} Looking next at the circulation at 19:00, and the peak nighttime surface wind stresses (Figure~\ref{wind_7pm_diff}a--c; see also Movie S2) it can be seen that while the downslope flows on the crater walls are similar, the strength of the flow on the mound increases as the crater diameter increases. This is because the smaller crater cools more quickly, so by 19:00 the potential temperature contours are aligned horizontally, while in the larger craters the potential temperature contours are still terrain-following. This larger horizontal potential temperature gradient sustains the downslope flow for longer, resulting in larger surface wind stresses, with the peak value in the largest crater occurring about 2/3 of the way down the slope (Figure~\ref{wind_7pm_diff}c). In the smallest crater, the stresses are again larger towards the top of the mound compared to the moat (Figure~\ref{wind_7pm_diff}a). 
Daytime and nighttime stress distributions such as these suggest that, if all other factors were held equal, the mounds in the 40 and 80\,km craters would likely erode more at their tops than at their bases, eventually becoming more squat, while the mound in the 160\,km crater would erode more at the sides and base than at the top, becoming steeper-sided. This may be one of the reasons why larger diameter craters have a greater frequency of mounds surrounded by moats (see Figure~\ref{mound_types}). Figure~\ref{mound_heights} shows how mound heights in craters identified by \citet{Bennett2016} compare to their host craters. Heights for each crater were determined by taking radial slices through MOLA 128 ppd data every 0.5$^\circ$, and then calculating the maximum crater depth, the maximum mound height (ignoring central uplift peaks from crater formation), and the minimum, maximum and average rim heights. It can be seen that there is a tendency for smaller craters to have proportionally smaller mounds, suggesting more erosion of the mound tops. Indeed, from $10^5$ Monte Carlo bootstrap trials fitting a linear trend line to Figure~\ref{mound_heights}c, in only 36 cases did a negative slope result. This behavior is in agreement with the stress distributions, which show mounds in smaller craters experience greater surface wind stresses towards the tops of the mounds than do larger craters. \begin{figure*}[t] \begin{center} \noindent\includegraphics[width=1.0\textwidth]{fig10.pdf} \caption{Mound heights expressed as a fraction of the distance between the crater floor and (a) the minimum, (b) the average, and (c) the maximum rim height. 
Data are for mound-hosting craters listed by \citet{Bennett2016}, with the mound and rim heights determined from MOLA 128 ppd data (errors are smaller than the symbol sizes).} \label{mound_heights} \end{center} \end{figure*} We now consider a much smaller mound that extends to 75\% of the radius of the crater floor, and 30\% of the crater depth. This morphology may be the result of erosion over a long time scale in a crater initially filled with sediment, or early erosion in a crater only filled with a small layer of sediment. During the daytime (Figure~\ref{wind_2pm_diff}d--f; see also Movie S2) the upslope winds along the crater walls and out over the rim are slightly stronger than for the case with the larger mound (Figure~\ref{wind_2pm_diff}a--c). This is because a smaller mound results in a greater volume of air within the crater, and a greater average distance between the crater floor/mound (which heats up rapidly during the day) and the air that is level with the surrounding plains. As such, the air is cooler by $\sim$2--3\,K, and hence there is a larger pressure difference driving the outward surge of air away from the crater. As the smaller mounds sit lower in the crater, they are affected more by the downwelling and associated adiabatic warming discussed earlier, and hence near-surface temperatures are warmer than for the larger mounds. At the tops of the mounds, the temperatures are $\sim$5\,K warmer in the 40\,km diameter crater, and $\sim$2\,K warmer in the 80 and 160\,km diameter craters. However, the stronger downwelling and denser atmosphere results in weaker upslope winds on the mound flanks, and so surface wind stresses are lower ($\sim$20--50\% of the values on the larger mounds). For the 40\,km diameter crater the stress is still larger near the top of the mound than at the base, while in the 80 and 160\,km diameter craters the stress values are similar along the mounds. 
By 19:00 (Figure~\ref{wind_7pm_diff}d--f; see also Movie S2), near-surface temperatures over the mounds have cooled by around 10\,K, and downslope winds are at their strongest. Surface wind stresses on the mound in the smallest crater show little variation, with a slight increase towards the top of the mound. In the 80 and 160\,km diameter craters, the greater potential temperature difference near the mound flanks means downslope winds can exist for longer (as discussed earlier), resulting in stress values that increase towards the mound base. (The surface wind stresses are again $\sim$20--50\% of the values on the larger mounds.) These results suggest that as the mounds become more eroded and exist deeper within the craters, the weaker near-surface circulation produces less erosion. This may explain why intra-crater mounds persist today, rather than wind erosion removing them completely. \subsubsection{Mounds in craters 3.5\,km deep} We now briefly look at mounds in craters where the moat is 3.5\,km below the surrounding plains, focusing on craters with diameters of 160\,km. We consider mounds with radii extending to 70\% of the crater floor radius, and heights ranging from 25\% to 100\% of the crater depth. The maximum daytime and nighttime surface wind stresses are shown in Figure~\ref{max_stress_160}. The circulation patterns are not shown, but follow the behavior seen in Figures~\ref{wind_2pm_diff}c,f and \ref{wind_7pm_diff}c,f. \begin{figure*}[t] \begin{center} \noindent\includegraphics[width=1.0\textwidth]{fig11.pdf} \caption{Maximum daytime and nighttime surface wind stress (colors) from simulations of craters 160\,km in diameter and 3.5\,km deep. 
The symbol locations show the mound profiles, where $f_\mathrm{h}$ denotes the fractional height of the mound in relation to the crater depth (the fractional radius is 0.7 in all cases).} \label{max_stress_160} \end{center} \end{figure*} At the crater rim the temperature fields are nearly identical in the different simulations, and thus the daytime surges of air away from the craters are similar. Near-surface daytime temperatures increase as the mound height decreases (through adiabatic warming associated with downwelling) with temperatures over the shortest mound ($f_\mathrm{h} = 0.25$) being $\sim$5\,K warmer than over the tallest mound ($f_\mathrm{h} = 1$). As for the 1.75\,km deep craters, the upslope winds decrease in strength as the mounds become shorter, due to the combination of increased air density and downwelling. As such, the surface wind stresses also decrease (Figure~\ref{max_stress_160}a). In the 160\,km diameter, 1.75\,km deep crater, the stress at the top of the mound was the lowest of any point within the crater (see Figure~\ref{wind_2pm_diff}c). This is not true for the tallest two mounds in the 3.5\,km deep case, where stronger winds at the mound tops produce stresses larger than in the moat. At nighttime, the acceleration of the downslope winds causes the surface wind stress to increase towards the base of the mounds (Figure~\ref{max_stress_160}b). For the mounds with $f_\mathrm{h} = 0.25$ and $f_\mathrm{h} = 0.5$, the nighttime stress values at the mound base are the largest of any time of day. Such mounds may be expected to erode more on the flanks and become steeper-sided. For the mound with $f_\mathrm{h} = 1$, the maximum daytime stress from upslope winds is larger than the maximum nighttime value. Thus, this mound may be expected to erode more at the top, becoming more squat. If this were to occur, it may eventually reach a height where the stress was larger towards the base of the mound, and it would then erode more horizontally. 
(For the mound with $f_\mathrm{h} = 0.75$, the maximum daytime and nighttime stress values are about equal.) \subsection{Erosion in craters covered with sedimentary deposits} So far we have considered erosion in axisymmetric craters filled with sediment and surrounded by flat topography. However, it has been suggested that some intra-crater mounds are the result of the erosion of large sedimentary deposits that existed on top of craters, particularly for mounds in the Arabia Terra region \citep{Fergason2008, Bennett2016}. We have therefore performed simulations of 40, 80 and 160\,km diameter craters with a 1\,km thick sedimentary layer partially covering the craters to different extents (to represent the gradual erosion and retreat of the layer over time). Figure~\ref{sedim_topog} shows slices through the 160\,km diameter craters to highlight the morphologies, and Figure~\ref{sedim_layer} shows results for the same craters. In these simulations the sedimentary layer slopes in the east-west direction, with no variation in the north-south direction. The gradient is 0.06 (slope angle $\sim$3.4$^\circ$), which was chosen to be similar to the gradients of the sloping mounds seen in Figure~\ref{mound_examples}a,b. Three nested grids are used, with an inner grid spacing of 5\,km. \begin{figure*}[t] \begin{center} \noindent\includegraphics[width=1.0\textwidth]{fig12.pdf} \caption{Morphologies of three 160\,km diameter craters covered to different extents by a 1\,km deep sedimentary layer. This represents the retreat of the layer over time from west to east. 
(Time evolution between the three morphologies is not considered.)} \label{sedim_topog} \end{center} \end{figure*} \begin{figure*}[t] \begin{center} \noindent\includegraphics[width=1.0\textwidth]{fig13.pdf} \caption{Topography (left column) and surface wind stress values at three different times (remaining columns) for three 160\,km diameter craters covered to different extents by a 1\,km thick sedimentary layer (each row shows a different crater morphology). The dotted line shows the location of the crater rim, and the local times are labeled on each panel.} \label{sedim_layer} \end{center} \end{figure*} When the sedimentary layer covers almost the whole crater, leaving only the rim exposed, the meridional wind has little effect on the circulation, and it is the zonal wind blowing up and down the face of the sedimentary layer that causes the surface wind stress distributions (Figure~\ref{sedim_layer}a--d). As such, the layer would be expected to recede to the east uniformly over time. We assume that as the layer recedes the sediment within the crater will also be eroded, deepening the crater. We therefore next model a crater where the sedimentary layer covers 3/4 of the crater diameter, and the crater depth has increased (Figure~\ref{sedim_layer}e--h). In this case, the daytime flow up the face of the layer is still generally uniform across the crater, but there is increased surface wind stress on top of the layer near the crater rim due to the zonal wind being funneled by the topography. During the evening and night, the downslope flow along the crater walls and sedimentary layer results in increased surface wind stresses towards the crater walls (Figure~\ref{sedim_layer}h). Similar behavior is seen for the case where the sedimentary layer covers half the crater (Figure~\ref{sedim_layer}i--l), with the deeper crater allowing for stronger downslope winds and increased stresses towards the crater walls (Figure~\ref{sedim_layer}l). 
The surface wind stress patterns in these simulations suggest that as a sedimentary layer recedes across a crater, it will erode more at the edges of the crater, resulting in a crescent-shaped moat. The behavior shown for these 160\,km diameter craters also occurs for 80\,km diameter craters, but in the 40\,km diameter simulations the behavior suggests just a linear retreat of the sedimentary layer, with no clear signal for the formation of a crescent-shaped moat. Wind tunnel experiments by \citet{Day2016} showed that a crescent-shaped moat can form if there is a uni-directional wind blowing across a crater. However, their experiments used 30\,cm and 60\,cm diameter crater models, and so the mound at all times is impacted by the prevailing wind. In large diameter craters, the mound could be many tens of kilometers away from the crater rim, lessening the impact of the large-scale wind blowing across the crater. However, large eddy simulations suggest that vortical flows can also result in crescent-shaped moats \citep{Day2016, Anderson2017}. Thus, in smaller craters ($<40$\,km diameter) vortical flows can explain crescent-shaped moats, while in larger craters both vortical flows and the erosion of a covering sedimentary layer by slope winds are possible mechanisms. \subsection{Erosion in a realistic atmosphere} The simulations performed so far lack the Coriolis force, thermal tides, initial large-scale winds, and realistic topography. This was done to isolate the topography-windfield coupling. To compare these idealized simulations with reality, we performed additional `realistic' simulations using GCM boundary conditions (see section \ref{sec:model} for a description of the method). An idealized axisymmetric crater (160\,km in diameter and 3.5\,km deep, with a mound covering 70\% of the crater floor radius and the full depth of the crater) was placed at 0$^\circ$N, 0$^\circ$E (see Figure~\ref{real_nests}). 
This is close to the region in Arabia Terra where mound-hosting craters are common \citep{Lewis2014, Bennett2016, Tanaka2000, Hynek2017}. Simulations were performed at four different times of year ($L_\mathrm{S} = 45^\circ$, 135$^\circ$, 225$^\circ$ and 315$^\circ$). Results are similar in all periods, and the results from $L_\mathrm{S} = 315^\circ$ are shown in Figure~\ref{real_atmos}. (See Movies S3--S4 in the supporting information for full diurnal results at each $L_\mathrm{S}$.) \begin{figure*}[t] \begin{center} \noindent\includegraphics[width=1.0\textwidth]{fig14.pdf} \caption{Topography on grids 1, 3 and 5 of the `realistic' simulations (resolution 324, 36 and 4\,km respectively). Black contours show the latitude and longitude in intervals of 10$^\circ$, 5$^\circ$ and 2$^\circ$ respectively. Dotted lines show 0$^\circ$N, 0$^\circ$E.} \label{real_nests} \end{center} \end{figure*} It is evident that the behavior is similar to the idealized cases, with downslope winds at night and strong upslope winds during the afternoon which increase in strength as they travel up the crater walls and mound flanks. The main difference is that the external wind field, which is strongest in the morning and afternoon and blows from east to west (Figure~\ref{real_atmos}b,c), causes the stress field to be non-axisymmetric. The effect of this wind field is to increase the surface wind stress on the western crater wall and the leeward slope of the mound. Figure~\ref{real_atmos_circ} shows the circulation in a longitude-altitude plane taken across the center of the crater, with the times corresponding to those of Figure~\ref{real_atmos}. \begin{figure*}[t] \begin{center} \noindent\includegraphics[width=0.85\textwidth]{fig15.pdf} \caption{Surface wind stress (shading) at four different local times from a simulation of a 160\,km diameter and 3.5\,km deep axisymmetric crater at $L_\mathrm{S} = 315^\circ$. 
Arrows show the wind speed and direction, while the three black circles denote the locations of the crater rim and the bases of the crater and mound walls.} \label{real_atmos} \end{center} \end{figure*} \begin{figure*}[t] \begin{center} \noindent\includegraphics[width=0.99\textwidth]{fig16.pdf} \caption{Wind speed (shaded), wind direction (arrows) and potential temperature (contours) at four different times from longitude-altitude slices through the center of the crater shown in Figure~\ref{real_atmos} in an east-west direction. Potential temperature is contoured at 5\,K intervals.} \label{real_atmos_circ} \end{center} \end{figure*} At 06:00 the downslope wind is strongest on the eastern crater wall, as the slope is oriented in the same direction as the prevailing wind. By 09:00 the upslope flow over the mound flanks has developed, and is stronger than the flow on the crater walls. By 12:00 the upslope flows are fully developed. At the top of the mound there is convergence of the upslope flow, and the air is transported upwards away from the crater (Figure~\ref{real_atmos_circ}c), resulting in lower surface wind stresses at the top of the mound (Figure~\ref{real_atmos}c). At this time the upslope flow is strongest on the leeward slope of the mound. This is because the windward slope is affected by the prevailing wind, which, by the time it arrives at the mound, is traveling westward and downward. The subsiding component results in adiabatic warming, and thus the temperature contrast between the mound flank and the surrounding air is reduced (compare the potential temperature contours on either side of the mound in Figure~\ref{real_atmos_circ}c), limiting the strength of the upslope flow on the mound. The leeward slope is shielded from the prevailing easterly wind, and so the air temperatures are cooler, there is a larger temperature contrast between the mound flank and the surrounding air, and the upslope flow (and hence surface wind stress) can become stronger. 
By contrast, \citet{Day2016} found that the mound in their wind tunnel experiments was preferentially eroded on the windward flank. However, as noted earlier, the size of their model crater means the mound is more likely to feel the direct effects of the wind, and be eroded. Additionally, such small models cannot take into account the changes in temperature experienced within real craters. Again, the mechanism inferred by \citet{Day2016} may occur in smaller diameter craters, while mounds in larger diameter craters may experience different erosional patterns. An example of such a case is Gale crater, where Mt.\ Sharp is offset in the opposite direction to the prevailing wind direction \citep{Bennett2016}, which is the behavior suggested by the erosion patterns in our simulations. Indeed, if erosion follows the surface wind stress field shown in Figure~\ref{real_atmos}c, then our work suggests an explanation for the `bat-wing' shape of Mt.\ Sharp. \section{Discussion} Our results show that winds on topographic slopes can potentially erode intra-crater sedimentary deposits to produce mounds. Mound evolution depends on the size of the host crater, with erosion in smaller craters resulting in mounds that are more squat, and erosion in larger craters resulting in steeper-sided mounds surrounded by moats. This behavior agrees with the mound morphologies in craters mapped by \citet{Bennett2016}. If craters are initially covered in sedimentary layers, more complex erosion patterns emerge, and can result in crescent-shaped moats with mounds joined partly to the crater rim. Large-scale winds blowing over large mound-hosting craters can result in the mound eroding more on the leeward side, with the center of the mound appearing to `march upwind' over time. This would result in a mound offset towards the direction of the prevailing wind, such as is observed for Mt.\ Sharp in Gale crater. 
Due to the strong day/night cycle of slope winds within canyons \citep[e.g.][]{Kite2016}, the results presented here may also apply to the formation of mounds within canyon systems such as Valles Marineris. Implicit in these results is that saltation-abrasion is the landscape-modifying mechanism. We do not consider other processes that may have operated in a warmer or wetter environment, as erosion by liquid water has not been globally significant since the Late Noachian/Early Hesperian \citep{Golombek2006}. We assume detachment-limited erosion, i.e.\ that the timescale for weathering the sediment is much longer than the timescale to transport sediment out of the crater, and thus we do not follow the motion of individual particles. We know that small dust particles can remain suspended in the atmosphere of Mars in the present day, so it is likely that over time attrition will result in sedimentary particles becoming smaller, at which point they can be transported away from the crater in the daytime upslope winds. Larger abrading clasts may remain in the crater moat, as is evidenced in the Bagnold Dune Field in Gale crater \citep{Hobbs2010, Charles2017}. Behavior such as this might result in increased erosion of the moat, resulting in a positive feedback mechanism. The simulations performed here are for atmospheric conditions relevant to present-day Mars, whereas much of the erosion of sedimentary mounds likely occurred billions of years ago \citep{Thomson2011, Palucis2016, Kite2017} when the atmosphere may have been much more dense \citep{Jakosky2017}. However, the main features noted here that are responsible for the erosion -- the upslope and downslope winds -- will still occur in a denser atmosphere. For example, slope winds are a common feature on Earth \citep[e.g.][]{Renfrew2006, Whiteman2010, Haiden2011, Munoz2013, Villagrasa2013, Lehner2016, Shapiro2016}. 
Indeed, the diurnal variation of temperature profiles within Meteor Crater in Arizona \citep{Whiteman2010} is similar to that in our simulations. However, in small craters like Meteor Crater, the strength of slope flows is limited due to the shallow depth, and so erosion is likely to be caused by smaller-scale features, such as those noted in large eddy simulations \citep{Day2016, Anderson2017}. Thus, the main features and processes noted here are still likely to occur in a denser Martian atmosphere, though the strength of the winds, and hence the potential erosion rates, are likely to differ. It should also be noted that there are features of the circulation not modeled here, and which may potentially affect erosion over long timescales. For example, dust devil tracks have been observed in many craters \citep{Reiss2016}, and dust devils have been detected in-situ by rovers in Gusev and Gale crater \citep[e.g.][]{Greeley2006, Greeley2010, Moores2015, Kahanpaa2016, Steakley2016, Etxeberria2018}. However, simulations of crater circulations have shown that the boundary layer is suppressed within craters \citep[e.g.][]{Tyler2015, Rafkin2016}, which should limit the formation of dust devils in deep craters (indeed, fewer were detected in Gale crater compared to the shallower Gusev crater). Thus, while convective vortices have the ability to remove dust from the surface \citep[e.g.][]{Balme2006a, Balme2006b, Neakrase2016, Koester2017}, it is unlikely that dust devils contribute greatly to erosion rates within craters in the present-day. This may have been different in past climates, however \citep{Newman2005, Haberle2006}. In the future, our ideas could be tested and refined by better constraints on erosion rates and patterns using crater counts \citep{Kite2017} and cosmogenic isotope exhumation-age dating \citep{Farley2014}. 
\section{Conclusions} While sedimentary mounds exist in craters of many different sizes, data \citep{Bennett2016} suggest that there is a tendency for intra-crater mounds completely encircled by moats to become more frequent as the crater diameter increases, hinting at a characteristic length scale (crater diameter) for encircling moats. We have performed mesoscale simulations considering craters 40, 80 and 160\,km in diameter, with depths extending to 3.5\,km, and a variety of mound and crater morphologies, to understand the formation of these sedimentary mounds. \begin{enumerate} \item Using a physically self-consistent numerical model, we find that mounds can form through wind erosion from craters surrounded by flat topography and filled with sediment. For a crater that is shallow, erosion will be fairly constant across the crater floor, resulting in an increase in the crater depth. As the depth increases to $\sim$2\,km, slope winds become more important, and result in increased erosion near the crater walls, forming a mound. However, if the sediment-filled crater is much deeper than $\sim$2\,km, the erosion near the crater walls reduces, and mound formation would either slow or stop completely. \item Once a mound has formed, its evolution depends on the size of the host crater and its depth within the crater. For craters 40 and 80\,km in diameter, the surface wind stress distributions in the simulations (used as a proxy for erosion) suggest that mounds would erode more at their tops than at their bases, eventually becoming flatter. Conversely, mounds in the 160\,km crater would erode more at the sides and base than at the top, becoming thinner. This behavior is in agreement with observations: smaller craters tend to have proportionally shorter mounds. As mounds become more eroded and exist deeper in the crater, the weaker near-surface circulation reduces the surface wind stress, limiting the erosion. 
This may help to explain why mounds persist rather than being completely obliterated. \item In the case of a large-scale sedimentary layer covering the craters \citep[e.g.][]{Fergason2008, Bennett2016} the surface wind stress patterns in the simulations suggest that as the sedimentary layer recedes across a crater, it will erode more towards the edges of the crater, which could explain the appearance of some of the mounds that are still joined to the crater wall. \item When considering more realistic (GCM) meteorological boundary conditions, the main difference compared to the idealized simulations is the presence of a large-scale prevailing wind. The effect of this wind is to increase the surface wind stress values on the leeward side of the mound. The reason for this is that downwelling air on the windward side limits the strength of the daytime upslope flow. The leeward side experiences less downwelling air, and so the upslope wind can become stronger, increasing the surface wind stress and hence potential erosion. While most mounds are offset in the direction of the prevailing wind \citep{Bennett2016}, Mt.\ Sharp is offset in the opposite direction. The behavior in our simulations may offer an explanation for this offset, and for the `bat-wing' shape of Mt.\ Sharp. \end{enumerate} \section*{Acknowledgments} We thank Mackenzie Day and an anonymous reviewer for their helpful comments which improved this paper. We thank Jasper Kok and Rob Sullivan for discussions of saltation on Mars, Daniel Tyler and Jeffrey Barnes for providing simulation results for benchmarking our model, Scot Rafkin for providing assistance with simulations, and the University of Chicago Research Computing Center. This work was funded in part by NASA grant NNX15AH998G. Model output is available to download from https://psd-repo.uchicago.edu/kite-lab/mesoscale\_crater\_data.
\section{Introduction} We consider the planar restricted 3-body problem in rotating coordinates. We call the two primaries the sun and the earth and place the earth at the origin of our coordinate system. Then the describing Hamiltonian $H:\C\setminus\{0,1\}\times\C\to\R$ is given by \beq H(q,p)=\frac12|p|^2+\langle p,iq\rangle -\langle p,i\mu\rangle-\frac{1-\mu}{|q|}-\frac{\mu}{|q-1|}\;, \eeq where $\langle p,iq\rangle=p_2q_1 - p_1 q_2$. Also $\mu\in[0,1]$ is the mass ratio $\mu=\frac{m_S}{m_E+m_S}$ where $m_E$ is the mass of the earth and $m_S$ is the mass of the sun. The approximate value for the (real) sun/earth system is $\mu\approx0.999997$. For $\mu\notin \{0,1\}$ the Hamiltonian $H$ has five critical points $L_1,\ldots,L_5$, which we order by increasing value of $H$. These are the Lagrange points. For $\mu\in \{0,1\}$ the Hamiltonian $H$ has a critical manifold diffeomorphic to $S^1$. Note that the critical value $H(L_1)$ converges to $-\frac32$ as $\mu$ tends to either $0$ or $1$. If we choose the energy level $-c$ to be below the first Lagrange value $H(L_1)$, then the energy hypersurface $H^{-1}(-c)$ has three connected components: one is near the earth, one is near the sun, and one is near infinity. Throughout this article, we will focus on the component closest to the earth. Note that the components around the earth and sun are non-compact due to collisions with the respective primaries. However, it is well known that such two-body collisions can be regularized. Recall the Levi-Civita coordinates given by $q=2v^2$ and $p=\frac{u}{\bar{v}}$ in \cite{Levi_Civita}. These coordinates define a 2:1-map, which is symplectic up to a factor $4$. Indeed, $\Re(dq\wedge d\bar{p})=4\Re(dv\wedge d\bar{u})$. Transforming and regularizing the Hamiltonian function at energy $-c$ leads to \beq K_{\mu,c}(v,u):=|v|^2\big(H(v,u)+c\big)=\frac12|u|^2+2|v|^2\langle u,iv\rangle -\mu\Im(uv)-\frac{1-\mu}{2}-\frac{\mu|v|^2}{|2v^2-1|}+c|v|^2\;. 
\eeq For $\mu\notin\{0,1\}$ the component of the energy hypersurface $H^{-1}(-c)$ around the earth lifts to a compact component $\Sigma_{\mu,c}$ of the energy hypersurface $K_{\mu,c}^{-1}(0)$. This compact component $\Sigma_{\mu,c}$ is diffeomorphic to $S^3$. Next we recall a version of the definition of a surface of section. \begin{definition} Let $\Sigma$ be a smooth closed three-manifold equipped with a smooth flow without rest points. A \emph{global disk-like surface of section} consists of a topologically embedded closed disk ${\mathcal D}\subset \Sigma$ having the following properties: \begin{itemize} \item[(1)] The boundary $\partial{\mathcal D}$ is an (un-parametrized) periodic orbit, called the spanning orbit. \item[(2)] The interior of the disk $\dot{\mathcal D}={\mathcal D}\setminus\partial{\mathcal D}$ is a smooth submanifold of $\Sigma$ and is transversal to the flow. \item[(3)] Every orbit, other than the spanning orbit, intersects the (interior of the) disk in forward and backward time. \end{itemize} \end{definition} The above definition allows the disk ${\mathcal D}$ to be rather wild near its boundary. Given a global disk-like surface of section it follows that there exists a smooth map $\psi:\dot{\mathcal D}\rightarrow\dot{\mathcal D}$, called the global return map. In general $\psi$, which is defined on the interior of the disk, need not have an extension to the boundary. Note that there is not much one can say about a continuous self-map defined on an open disk. For example, Brouwer's fixed point theorem fails. However, much more can be said if the map is an area-preserving diffeomorphism. Indeed, a consequence of Brouwer's translation theorem is that such maps always have a fixed point. The notion of global surfaces of section goes back to Poincar\'e, and it is clear that they encode much of the dynamics on the energy surface. Later we shall describe some consequences of their existence, but presently we state our main result.
\begin{theorem}\label{thm:main} For every $c>\frac32$ there exists $\mu_0=\mu_0(c)\in [0,1)$ such that for all $\mu_0<\mu<1$ there exists a global disk-like surface of section for the component $\Sigma_{\mu,c}$ of the energy hypersurface $K_{\mu,c}^{-1}(0)$. \end{theorem} The existence of this global surface of section follows as a consequence of a global result in symplectic geometry, which is applicable provided the energy surface satisfies certain geometric conditions. Also note that it seems impossible to obtain this surface of section by the usual method of perturbing an understood model. Instead, to achieve our result we must verify that a certain convexity assumption holds. We make this precise below. \begin{definition} The \emph{convexity range} $\mathfrak{C}$ is defined to be the collection of all pairs $(c,\mu)$ with $c>\frac{3}{2}$ and $\mu\in (0,1)$ such that the energy surface $\Sigma_{\mu,c}$ bounds a strongly convex domain. \end{definition} Here we say a compact surface $\Sigma\subset \R^4$ bounds a strongly convex domain provided there exists a constant $\delta>0$ and a smooth convex function $C:{\mathbb R}^4\to\R$ such that $\Sigma=C^{-1}(1)$ and the matrix-valued function $D^2C(z)-\delta\,Id$ is positive definite for all $z\in \R^4$. An elementary exercise shows that a connected compact hypersurface $\Sigma\subset {\mathbb R}^{4}$ bounds a strongly convex domain $W$ whenever there exists a smooth function $\phi:{\mathbb R}^4\rightarrow {\mathbb R}$ with the following properties: \begin{enumerate} \item $\Sigma=\{\phi=0\}$ is a regular level of $\phi$. \item $W=\{z\in \R^4:\phi(z)\leq 0\}$ is bounded. \item $D^2\phi(z)(h,h)>0$ for each point $z\in W$ and for each non-zero vector $h$. \end{enumerate} We now state the main technical result of this article.
\begin{proposition}\label{prop1} For each $c>\frac{3}{2}$ there exists a number $\mu_0(c)\in (0,1)$ such that $$ \{(c,\mu)\ |\ c>{\textstyle \frac{3}{2}},\ \mu\in(\mu_0(c),1)\}\subset \mathfrak{C}. $$ \end{proposition} As the elementary proof of Proposition \ref{prop1} shows, it should be possible to use a computer to get a more precise idea of the convexity range. Observe that Theorem \ref{thm:main} now follows from Proposition \ref{prop1} and the following theorem which relies on a pseudoholomorphic curve theory for contact manifolds. The core idea is the construction of certain foliations, called finite energy foliations. \begin{theorem}[\cite{HWZ_the_dynamics_on_three_dimensional_strictly_conve_eneergy_surfaces}]\label{thm2} If $\Sigma$ is a smooth, regular, bounded energy surface in ${\mathbb R}^4$ bounding a strongly convex domain, then there exists a global disk-like surface of section ${\mathcal D}$ and an associated global return map $\psi:\dot{\mathcal D}\rightarrow \dot{\mathcal D}$, which is smoothly conjugated to a smooth area-preserving disk map $\Psi:\dot{D}\rightarrow \dot{D}$, where $\dot{D}$ is the open unit disk in the plane equipped with the Lebesgue measure. \end{theorem} A celebrated result by Franks, \cite{Franks_Geodesics_on_S2_and_periodic_points_of_annulus_homeomorphisms}, implies that $\Psi$ either has precisely one periodic point or infinitely many. This result then also holds for $\Sigma_{\mu,c}$ whenever $(c,\mu)\in \mathfrak{C}$. \begin{remark} Analyzing the proof of Theorem \ref{thm:main} in \cite{HWZ_the_dynamics_on_three_dimensional_strictly_conve_eneergy_surfaces} one should be able to obtain some refinements. First, one should be able to find a continuously differentiable ${\mathcal D}$ in such a way that the return map defined on $\dot{\mathcal D}$ has a continuously differentiable extension over the closed disk. This map should be conjugated to an area-preserving map $\Psi$ on the closed unit disk $D$. 
Recent results by Franks/Handel \cite{Franks_Handel_Periodic_points_of_Hamiltonian_surface_diffeomorphisms} and Le Calvez \cite{LeCalvez_Periodic_orbits_of_Hamiltonian_homeomorphisms_of_surfaces} then imply that for $\Psi$ one of the following holds: \begin{itemize} \item[(1)] $\Psi$ is a pseudo-rotation, i.e. it has precisely one periodic point. \item[(2)] Some iterate of $\Psi$ is the identity. \item[(3)] The minimal periods of periodic orbits of $\Psi$ are unbounded. \end{itemize} As shown in \cite{Bramham1,Bramham2,Bramham3} finite energy foliations can also be used to study area-preserving disk maps. We refer the reader for more details to \cite{Bramham_Hofer_First_steps_toward_a_symplectic_dynamics} where some of the recent results on area-preserving disk maps are surveyed. We leave the construction described in the remark to the interested reader. One should be able to prove that item (2) would imply integrability of the flow on the corresponding energy surface $\Sigma_{\mu,c}$, which seems unlikely. Also item (1) seems unlikely for most energy surfaces. Finally, observe that in the simplest case, namely the rotating Kepler problem, for which $\mu=0$, item (3) holds. \end{remark} {\bf Acknowledgments:} We thank B.~Bramham and E.~Belbruno for stimulating discussions. The research of P.~Albers, J.~Fish and H.~Hofer was partially supported by the NSF grants DMS-0903856, DMS-0802927, and DMS-1047602. U.~Frauenfelder was partially supported by the Basic Research Fund 2010-0007669 and O.~van Koert by the New Faculty Research Grant 0409-20100147 funded by the Korean government. P.~Albers, J.~Fish, U.~Frauenfelder and O.~van Koert thank the IAS for its hospitality. \section{History, known results and open questions} Near the end of his lifelong quest to find periodic orbits, Poincar\'e introduced the concept of an annulus-like global surface of section (see \cite{Poincare}).
In that same article, Poincar\'e observed that if a certain fixed point theorem (specifically Poincar\'e's last geometric theorem) holds true then the existence of such an annular surface of section implies the existence of periodic orbits. Shortly thereafter, Birkhoff proved Poincar\'e's last geometric theorem (see \cite{Birkhoff_Proof_of_Poincares_geometrict_heorem}) and then later generalized the notion of an annular surface of section to a surface of arbitrary genus and with an arbitrary number of boundary components (see \cite{Birkhoff_Dynamical_systems_with_two_degrees_of_freedom}). The above results of Poincar\'e and Birkhoff were then employed by Conley in \cite{Conley_On_some_new_long_periodic_solutins_of_the_plane_restricted_three_body_problem} to prove the existence of certain long periodic orbits in the planar restricted three-body problem. More precisely, Conley proved that there exists a sufficiently negative constant $E_0$, which is independent of the mass ratio $\mu$, with the property that each energy surface $\{H=E<E_0\}$ admits an annulus-like surface of section. Under this assumption there are two bounded Hill's regions, and regularizing the associated singularities gives bounded energy surfaces. Each component of these sufficiently negative energy levels is diffeomorphic to $\R P^3$, and it is heuristically clear that they are well modeled by a small perturbation of the regularized Kepler problem. It is then possible to construct surfaces of section for the regularized Kepler problem, which persist under small perturbations. Also note that an alternative approach using only canonical transformations can be found in \cite{Kummer_On_the_stability_of_Hills_solutions_of_the_plane_restricted_three_body_problem}. For sufficiently small mass ratio McGehee \cite{McGehee_PhD} constructs disk-like surfaces of section around the heavy primary for energies up to the energy of the first Lagrange point. 
He also uses the Levi-Civita regularization and works in the double covering, $S^3$, as we do. The purpose of this article is to prove the analogue of McGehee's theorem around the small primary. The surfaces of section in the articles by Conley and McGehee are perturbations of surfaces of section of the Kepler problem, which is completely integrable. We apply a result obtained by an entirely different method due to Hofer-Wysocki-Zehnder \cite{HWZ_the_dynamics_on_three_dimensional_strictly_conve_eneergy_surfaces}, which is based on holomorphic curve techniques. For this it suffices to prove convexity of the Levi-Civita embedding of the energy hypersurface into $\C^2$. We would like to raise the following question: \begin{question} Does there exist a global (disk-like) surface of section for each mass ratio $\mu$ and energy below the critical value $H(L_1)$ in both bounded energy components of the regularized problem? \end{question} The above question could be answered in the affirmative provided one could show that in the appropriate energy range the two bounded components of the regularized energy surfaces are dynamically convex, i.e.~all Conley-Zehnder indices are greater than or equal to $3$, see \cite{HWZ_the_dynamics_on_three_dimensional_strictly_conve_eneergy_surfaces}. There is an interesting recent paper by U.~Hryniewicz and P.~Salomao, which gives a necessary and sufficient condition for the existence of global disk-like surfaces of section which is relevant to our question, see \cite{Hryniewicz_Salomao_On_the_existence_of_disk_like_global_surfaces_of_section_for_Reeb_flows_on_the_tight_3_sphere}, and even goes beyond dynamical convexity. It was shown in \cite{HWZ_the_dynamics_on_three_dimensional_strictly_conve_eneergy_surfaces} that strong convexity implies dynamical convexity. Also note that in Appendix \ref{sec:appendix} below we show that for energies near $H(L_1)$ the energy surface near the large primary fails to be convex.
On the other hand we prove in \cite{Albers_Fish_Frauenfelder_Hofer_Koert_The_Conley_Zehnder_indices_of_the_rotating_Kepler_problem} that for the same energy levels, the energy surface near the primary at $0$ is \emph{dynamically} convex provided the mass ratio $\mu$ is sufficiently small; this corresponds to a heavy primary located at $q=0$. The method of the present paper can be used to check for a large class of pairs $(c,\mu)$ whether the energy hypersurface is indeed convex. For energies just a bit higher than the critical value $H(L_1)$, the topology of (the bounded component of) the energy hypersurface changes from a disjoint union of two copies of $\R P^3$ to a connected sum $\R P^3\#\R P^3$. For topological reasons, global surfaces of section of disk or annulus type do not exist for $\R P^3\#\R P^3$. However, the more general theory of finite energy foliations developed by Hofer-Wysocki-Zehnder in \cite{HWZ_Finite_energy_foliations_of_tight_three_spheres} still applies. We shall discuss this in an upcoming paper. We expect the following global picture for energy levels just above $H(L_1)$. It is well-known that in the neck region of the connected sum there exists a hyperbolic periodic orbit with Conley-Zehnder index 2; this is the Lyapunov or halo orbit. We expect there to exist a finite energy foliation where this Lyapunov orbit is one of at least three binding orbits. The existence of such a foliation would yield a structure theorem explaining in some detail the global behavior of the stable and unstable manifolds of the Lyapunov orbit. Furthermore it would give a geometric explanation of the existence of a well-known homoclinic orbit asymptotic to this Lyapunov orbit, see Conley \cite{Conley_Twist_mappins_linking_analyticity} and McGehee \cite{McGehee_PhD}. The reader should consult \cite[Theorem 1.9]{HWZ_Finite_energy_foliations_of_tight_three_spheres}, where the theory of finite energy surfaces is developed for contact-type flows on $S^3$.
It is possible to make this technology work also for the connected sum of ${\mathbb RP}^3$'s. Finally we would like to point out that convex energy surfaces have interesting symplectic and dynamical properties. For example, the smallest occurring action of a periodic orbit on a strongly convex energy surface is a symplectic capacity, which in turn is a crucially important concept in symplectic geometry. Also, it is very likely that in the case that $\Sigma\subset {\mathbb R}^4$, the surface of section is bounded by a periodic orbit of smallest action. This has been an open problem for quite some time. It is not too difficult to find these orbits by minimization of a dual action functional, a method which can be implemented numerically. The details of the latter remarks are explained in \cite{Hofer_Zehner_Book}. The relevant numerical methods are described in \cite{Goeing_Jaeschke_PhD}. \section{Convexity of the planar restricted 3-body problem} We recall that $\Sigma_{\mu,c}$ is the compact component of $K_{\mu,c}^{-1}(0)$ corresponding to the component of the energy hypersurface $H^{-1}(-c)$ around the earth, for a fixed mass ratio $\mu\in [0,1]$. \begin{proposition}\label{thm:convex} For every $c>\frac32$ there exists $\mu_0=\mu_0(c)\in (0,1)$ such that for all $\mu_0<\mu<1$ the component $\Sigma_{\mu,c}$ of the energy hypersurface $K_{\mu,c}^{-1}(0)$ bounds a strongly convex domain. \end{proposition} \begin{proof} We first compute the Hessian of \beq K_{\mu,c}(v,u)=\frac12|u|^2+c|v|^2+2|v|^2\langle u,iv\rangle-\mu\Im(uv)-\frac{\mu|v|^2}{|2v^2-1|}-\frac{1-\mu}{2}\;. \eeq In order to do so we need some auxiliary computations and consider the Hessian of \beq\nonumber g(v)=\frac{1}{|2v^2-1|}\;. \eeq For this we first set \beq\nonumber f(v)=|2v^2-1|^2=(2v^2-1)(2\bar{v}^2-1)=4|v|^4-4\Re(v^2)+1\;.
\eeq Then we see \beq\nonumber df(v)\hat{v}=16|v|^2 \langle v,\hat{v}\rangle-8\Re(v\hat{v})\;, \eeq and \beq\nonumber D^2f(v)[\hat{v},\hat{v}]=32\langle v,\hat{v}\rangle^2+16|v|^2|\hat{v}|^2-8\Re(\hat{v}^2)\;. \eeq Thus \bea\nonumber Dg(v)\hat{v}&=-\tfrac12f(v)^{-\frac32}df(v)\hat{v}\\ &=-\frac{1}{|2v^2-1|^3}\big(8|v|^2\langle v,\hat{v}\rangle -4\Re(v\hat{v})\big)\;, \eea and \bea\nonumber D^2g(v)[\hat{v},\hat{v}]&=\tfrac34f(v)^{-\frac52}\big(df(v)\hat{v}\big)^2-\frac12f(v)^{-\frac32}D^2f(v)[\hat{v},\hat{v}]\\ &=\frac{3}{4|2v^2-1|^5}\big(16|v|^2\langle v,\hat{v}\rangle -8\Re(v\hat{v})\big)^2\\ &-\frac{4}{|2v^2-1|^3}\big(4\langle v,\hat{v} \rangle^2+2|v|^2|\hat{v}|^2-\Re(\hat{v}^2)\big)\;.\\ \eea Therefore, we compute \bea\nonumber D\bigg(\frac{|v|^2}{|2v^2-1|}\bigg)\hat{v}&=-\frac{|v|^2}{|2v^2-1|^3}\big(8|v|^2 \langle v,\hat{v}\rangle-4\Re(v\hat{v})\big)\\ &+ \frac{2\langle v,\hat{v}\rangle}{|2v^2-1|}\;, \eea and finally \bea\nonumber D^2\bigg(\frac{|v|^2}{|2v^2-1|}\bigg)[\hat{v},\hat{v}]&=\frac{3|v|^2}{4|2v^2-1|^5}\big(16|v|^2\langle v,\hat{v}\rangle -8\Re(v\hat{v})\big)^2\\ &-\frac{4|v|^2}{|2v^2-1|^3}\big(4\langle v,\hat{v}\rangle^2+2|v|^2|\hat{v}|^2-\Re(\hat{v}^2)\big)\\ &-\frac{2\langle v,\hat{v}\rangle}{|2v^2-1|^3}\big(8|v|^2\langle v,\hat{v}\rangle-4\Re(v\hat{v})\big)\\ &+ \frac{2|\hat{v}|^2}{|2v^2-1|}\\ &-\frac{2\langle v,\hat{v}\rangle}{|2v^2-1|^3}\big(8|v|^2\langle v,\hat{v}\rangle-4\Re(v\hat{v})\big)\;. \eea We simplify this to \bea\nonumber D^2\bigg(\frac{|v|^2}{|2v^2-1|}\bigg)[\hat{v},\hat{v}]&=\frac{48|v|^2}{|2v^2-1|^5}\big(2|v|^2\langle v,\hat{v}\rangle-\langle\bar{v},\hat{v}\rangle\big)^2\\ &-\frac{4|v|^2}{|2v^2-1|^3}\big(4\langle v,\hat{v}\rangle^2+2|v|^2|\hat{v}|^2-\Re(\hat{v}^2)\big)\\ &-\frac{16\langle v,\hat{v}\rangle }{|2v^2-1|^3}\big(2|v|^2\langle v,\hat{v}\rangle -\langle\bar{v},\hat{v}\rangle\big)\\ &+ \frac{2|\hat{v}|^2}{|2v^2-1|}\;. 
\eea From this we conclude that the Hessian of \beq\nonumber K_{\mu,c}(v,u)=\frac12|u|^2+c|v|^2+2|v|^2\langle u,iv\rangle -\mu\Im(uv)-\frac{\mu|v|^2}{|2v^2-1|}-\frac{1-\mu}{2} \eeq is \bea\nonumber D^2K_{\mu,c}(u,v)[(\hat{u},\hat{v}),(\hat{u},\hat{v})]&=|\hat{u}|^2+2c|\hat{v}|^2+4\langle u,iv\rangle|\hat{v}|^2+8\langle v,\hat{v}\rangle\langle u,i\hat{v}\rangle\\ &+8\langle v,\hat{v}\rangle \langle \hat{u},iv\rangle+4|v|^2\langle \hat{u},i\hat{v}\rangle\\ &-2\mu\Im (\hat{u}\hat{v})\\ &-\frac{48\mu|v|^2}{|2v^2-1|^5}\big(2|v|^2\langle v,\hat{v}\rangle -\langle \bar{v},\hat{v}\rangle \big)^2\\ &+\frac{4\mu|v|^2}{|2v^2-1|^3}\big(4\langle v,\hat{v}\rangle ^2+2|v|^2|\hat{v}|^2-\Re(\hat{v}^2)\big)\\ &+\frac{16\mu\langle v,\hat{v}\rangle }{|2v^2-1|^3}\Big(2|v|^2\langle v,\hat{v}\rangle -\langle \bar{v},\hat{v}\rangle \Big)\\ &-\frac{2\mu|\hat{v}|^2}{|2v^2-1|}\;. \eea From now on we fix $c>\frac32$. We observe that in the limit $\mu\to1$ the energy hypersurface $\Sigma_{\mu,c}$ collapses onto the origin. To see this, first note that as $\mu\to 1$, the distance of $q$ to the Lagrange point goes to $0$. As was also observed in \cite{Albers_Frauenfelder_Koert_Paternain_Liouville_field_for_PCR2BP}, this distance provides an upper bound for the size of Hill's region in the projection to the $(q_1,q_2)$-plane. Since $q=2v^2$, we see that $|v|\to 0$ as $\mu\to 1$. Now observe that the level set $K_{\mu,c}=0$ may be written as $$ 0=\frac12|u|^2+|u|\left(2|v|^2\langle \frac{u}{|u|},iv \rangle -\mu\Im(\frac{u}{|u|} v ) \right ) +|v|^2 \left(c-\frac{\mu}{|1-2v^2|} \right) -\frac{1-\mu}{2}\;. 
$$ Regard this as a quadratic equation for $|u|$ which we can solve explicitly, \bean |u|=&-\left(2|v|^2\langle \frac{u}{|u|},iv \rangle -\mu\Im(\frac{u}{|u|} v ) \right )\\[1ex] &+ \sqrt{ \left(2|v|^2\langle \frac{u}{|u|},iv \rangle -\mu\Im(\frac{u}{|u|} v ) \right)^2-2\left ( |v|^2(c-\frac{\mu}{|1-2v^2|}) -\frac{1-\mu}{2}\right ) } \eean Since $|v|\to 0$ and $1-\mu\to 0$ as $\mu\to 1$, we see that $|u|\to 0$ as $\mu\to 1$. In other words, $|u|,|v|\to 0$ as $\mu\to 1$ as claimed. Consequently, for each $0<\epsilon<\frac14$ there exists $\mu_1=\mu_1(\epsilon)<1$ such that for $\mu_1\leq\mu\leq1$ we have \beq\nonumber |u|^2,\;|v|^2<\epsilon\quad\text{for all } (u,v)\in\Sigma_{\mu,c}\;. \eeq Thus, there exists a constant $C>3$ independent of $\epsilon$, $\mu$, and $c$ (provided that $c>\frac32$), such that \beq\nonumber D^2K_{\mu,c}(u,v)[(\hat{u},\hat{v}),(\hat{u},\hat{v})]\geq|\hat{u}|^2+2c|\hat{v}|^2-C\epsilon\big(|\hat{u}|^2+|\hat{v}|^2\big)-2\mu|\Im (\hat{u}\hat{v})|-\frac{2\mu|\hat{v}|^2}{|2v^2-1|}\;. \eeq Using again that $|v|^2<\epsilon<\tfrac14$ and the inequality $\frac{1}{1-x}\leq 1+2x$ for $0\leq x\leq\tfrac12$ we can estimate \beq\nonumber \frac{1}{|2v^2-1|}\leq1+4\epsilon\;, \eeq and thus \bea\nonumber D^2K_{\mu,c}(u,v)[(\hat{u},\hat{v}),(\hat{u},\hat{v})]&\geq|\hat{u}|^2+2c|\hat{v}|^2-C\epsilon\big(|\hat{u}|^2+|\hat{v}|^2\big)-2\mu|\Im (\hat{u}\hat{v})|-2\mu|\hat{v}|^2(1+4\epsilon)\;. \eea Estimating further, we note that \beq\nonumber 2|\Im(\hat{u}\hat{v})|\leq\Delta| \hat{u} |^2+\frac1\Delta| \hat{v} |^2 \eeq for each $\Delta>0$, and thus \bea\nonumber D^2K_{\mu,c}(u,v)[(\hat{u},\hat{v}),(\hat{u},\hat{v})]&\geq|\hat{u}|^2+2c|\hat{v}|^2-C\epsilon\big(|\hat{u}|^2+|\hat{v}|^2\big)-\mu(\Delta |\hat{u}|^2+\frac1\Delta|\hat{v}|^2)\\ &\phantom{\geq}-2\mu|\hat{v}|^2(1+4\epsilon)\\ &\geq|\hat{u}|^2(1-C\epsilon-\mu\Delta)+|\hat{v}|^2(2c-C\epsilon-\frac{\mu}{\Delta}-2\mu(1+4\epsilon))\;. 
\eea Next choose \beq\nonumber \epsilon<\min\left\{\frac{2c-3}{3C+8},\frac{1}{2C}\right\}\;, \eeq and fix $\Delta=1-C\epsilon$. Then by using $\mu<1$ we find \bea\nonumber D^2K_{\mu,c}(u,v)[(\hat{u},\hat{v}),(\hat{u},\hat{v})]&\geq|\hat{u}|^2(1-C\epsilon-\mu(1-C\epsilon))+|\hat{v}|^2(2c-C\epsilon-\frac{1}{1-C\epsilon}-2(1+4\epsilon))\\ &\geq|\hat{u}|^2(1-\mu)(1-C\epsilon)+|\hat{v}|^2(2c-2-(C+8)\epsilon-\frac{1}{1-C\epsilon})\;. \eea Using $\epsilon<\frac{1}{2C}$ we estimate as above \beq\nonumber \frac{1}{1-C\epsilon}\leq(1+2C\epsilon)\;, \eeq and thus \bea\nonumber D^2K_{\mu,c}(u,v)[(\hat{u},\hat{v}),(\hat{u},\hat{v})]&\geq|\hat{u}|^2(1-\mu)(1-C\epsilon)+|\hat{v}|^2(2c-2-(C+8)\epsilon-\frac{1}{1-C\epsilon})\\ &\geq|\hat{u}|^2(1-\mu)(1-C\epsilon)+|\hat{v}|^2(2c-2-(C+8)\epsilon-(1+2C\epsilon))\\ &=|\hat{u}|^2(1-\mu)(1-C\epsilon)+|\hat{v}|^2(2c-3-(3C+8)\epsilon)\;. \eea Finally since $\epsilon<\frac{2c-3}{3C+8}$ we obtain \bea\nonumber D^2K_{\mu,c}(u,v)[(\hat{u},\hat{v}),(\hat{u},\hat{v})]&\geq|\hat{u}|^2\underbrace{(1-\mu)(1-C\epsilon)}_{>0}+|\hat{v}|^2\underbrace{(2c-3-(3C+8)\epsilon)}_{>0}\;. \eea Thus, for a suitable $\delta=\delta(\mu,c)>0$ we have the desired estimate \bea\nonumber D^2K_{\mu,c}(u,v)[(\hat{u},\hat{v}),(\hat{u},\hat{v})]\geq \delta\cdot |(\hat{u},\hat{v})|^2\;. \eea In particular, the set $\Sigma_{\mu,c}$ bounds a strongly convex domain, which completes the proof of the proposition. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:main}] This follows from \cite[Theorem 1.3]{HWZ_the_dynamics_on_three_dimensional_strictly_conve_eneergy_surfaces} and Proposition \ref{thm:convex}. \end{proof}
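As noted after Proposition~\ref{prop1}, a computer can be used to explore the convexity range more precisely. The following sketch (our illustration only, not part of the proof; the sample pair $c=2$, $\mu=0.999$, the sampling radii, and the tolerances are arbitrary choices) numerically verifies the Levi-Civita identity $K_{\mu,c}(v,u)=|v|^2\big(H(q,p)+c\big)$ at random points and checks positive definiteness of $D^2K_{\mu,c}$ by finite differences near the origin, where $\Sigma_{\mu,c}$ concentrates as $\mu\to1$:

```python
import numpy as np

MU, C = 0.999, 2.0  # sample pair: c > 3/2, mu close to 1 (arbitrary choice)

def H(q, p, mu=MU):
    """Rotating-frame Hamiltonian; <p,iq> = p2*q1 - p1*q2 = Im(p*conj(q))."""
    return (0.5 * abs(p) ** 2 + (p * q.conjugate()).imag - mu * p.imag
            - (1 - mu) / abs(q) - mu / abs(q - 1))

def K(v, u, mu=MU, c=C):
    """Regularized Hamiltonian K_{mu,c} in Levi-Civita coordinates."""
    return (0.5 * abs(u) ** 2 + c * abs(v) ** 2
            + 2 * abs(v) ** 2 * (u * v.conjugate()).imag  # 2|v|^2 <u,iv>
            - mu * (u * v).imag                           # -mu Im(uv)
            - mu * abs(v) ** 2 / abs(2 * v ** 2 - 1)
            - (1 - mu) / 2)

rng = np.random.default_rng(0)

# 1) Levi-Civita identity: K(v,u) = |v|^2 * (H(2v^2, u/conj(v)) + c).
for _ in range(100):
    v = complex(*rng.uniform(-0.3, 0.3, 2))
    u = complex(*rng.uniform(-0.3, 0.3, 2))
    if abs(v) < 1e-3:
        continue  # the substitution p = u/conj(v) needs v != 0
    lhs = K(v, u)
    rhs = abs(v) ** 2 * (H(2 * v ** 2, u / v.conjugate()) + C)
    assert abs(lhs - rhs) < 1e-9

# 2) Positive definiteness of D^2 K near the origin, where Sigma_{mu,c}
#    lives for mu close to 1, via central finite differences.
def hessian(f, x, h=1e-4):
    n = len(x)
    Hm = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            ei, ej = np.zeros(n), np.zeros(n)
            ei[i], ej[j] = h, h
            Hm[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                        - f(x - ei + ej) + f(x - ei - ej)) / (4 * h * h)
    return Hm

K4 = lambda x: K(complex(x[0], x[1]), complex(x[2], x[3]))
for _ in range(20):
    x = rng.uniform(-0.03, 0.03, 4)  # small box around the origin
    lam_min = np.linalg.eigvalsh(hessian(K4, x)).min()
    assert lam_min > 0.1
print("Levi-Civita identity and local convexity checks passed")
```

Sampling points exactly on $\Sigma_{\mu,c}$ (via the explicit solution of the quadratic for $|u|$ above) and scanning over a grid of pairs $(c,\mu)$ would turn this sketch into a crude numerical map of $\mathfrak{C}$.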
\section{Introduction}\label{sec:intro} In many secret-communication applications, it is required not only that the adversary should not learn the content of the message being communicated, as in \cite{shannon49}, but also that it should not learn whether the legitimate parties are communicating at all or not. Such problems are often referred to as communication with \emph{low probability of detection (LPD)} or \emph{covert communication}. Depending on the application, they can be formulated in various ways. In \cite{houkramer14} the authors consider a wiretap channel model \cite{wyner75}, and refer to this LPD requirement as \emph{stealth}. They show that stealth can be achieved without sacrificing communication rate or using an additional secret key. In their scheme, when not sending a message, the transmitter sends some random noise symbols to simulate the distribution of a codeword. There are many scenarios, however, where this cannot be done, because the transmitter must be switched off when not transmitting a message. Indeed, the criterion is often that the adversary should not be able to tell whether the transmitter is on or off, rather than whether it is sending anything meaningful or not. It is the former criterion that is considered in the current paper. Our work is closely related to the recent works \cite{bashgoekeltowsley13,chebakshijaggi13,bloch16}. In \cite{bashgoekeltowsley13} the authors consider the problem of communication over an additive white Gaussian noise (AWGN) channel with the requirement that a wiretapper should not be able to tell with high confidence whether the transmitter is sending a codeword or the all-zero sequence. It is observed that the maximum amount of information that can be transmitted under this requirement scales like the \emph{square root} of the blocklength.\footnote{We adopt the usual terminology to use ``blocklength'' to refer to the total number of channel uses by a code. 
However, in the square-root case, the channel codes are not ``block codes'' in the traditional sense, because they cannot be used repeatedly. Indeed, repeated transmission would increase the eavesdropper's probability of detecting the communication.} In \cite{chebakshijaggi13} the authors consider a similar problem for the binary symmetric channel and show that the ``square-root law'' also holds. One major difference between \cite{bashgoekeltowsley13} and \cite{chebakshijaggi13} is that in the former the transmitter and the receiver use a secret key to generate their codebook, whereas in the latter no secret key is used. More recently, \cite{bloch16} studies the LPD problem from a resolvability perspective and improves upon \cite{bashgoekeltowsley13} in terms of secret-key length. In the current paper, we show that the square-root law holds for a broad class of discrete memoryless channels (DMCs).\footnote{The achievability part of the square-root law, but not the converse, is independently derived in \cite{bloch16}.} Furthermore, we provide exact characterizations for the scaling constant of the amount of information with respect to the square root of the blocklength for DMCs as well as AWGN channels, which is not done in \cite{bashgoekeltowsley13,chebakshijaggi13,bloch16}. We do not assume that the eavesdropper observes a noisier channel than the intended receiver; instead, we assume that they both observe the same channel outputs. Our reason for dropping the wiretap structure is that, unlike in secret communication where the assumption that the eavesdropper observes a noisier channel allows one to obtain information-theoretic secrecy without using a secret key, in LPD problems the wiretap assumption does not bring essential new insights.
In particular, the square-root law does not rely on the wiretap structure.\footnote{In fact, one can verify that the results in \cite{bashgoekeltowsley13} hold without the wiretap assumption; see Section~\ref{sec:AWGN} of the current paper for stronger results.} Hence, by putting the eavesdropper in the same position as the intended receiver, we allow ourselves to focus on the essence of the LPD-communication problem, while at the same time making our results more relevant in practice, the latter because in applications the legitimate parties usually cannot fully determine the statistical behavior of the eavesdropper's channel. We also note that extension of most of the results in the paper to wiretap channels is straightforward, part of which can be seen in \cite{bloch16}. Because we do not assume a wiretap structure, contrary to \cite{chebakshijaggi13}, in our setting LPD communication is impossible without a secret key. We assume that such a key is available, and are not concerned with its length within the scope of this paper. We assume that the receiver does know when the transmitter is sending a message. This is a realistic assumption because the transmitter and the receiver can use part of their secret key to perform synchronization prior to transmission: They choose a (large enough) number of input sequences of a certain length such that each sequence induces an output distribution that is sufficiently different from the output distribution when there is no input to the channel, while on average these sequences induce an output distribution that is sufficiently close to the output distribution when there is no input. Using part of the secret key they randomly pick one of these sequences, which the transmitter sends to the receiver as a synchronization signal before sending a message. 
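The key-based synchronization idea described above can be sketched in a few lines. The following Python fragment is purely illustrative: the pool size, sequence length, and PRNG-based selection are stand-ins of our own, not the paper's construction (which additionally requires each sequence to induce an output distribution far from the no-input distribution while the average stays close to it). It shows only that two parties holding the same key derive the same synchronization sequence with no communication:

```python
import random

def make_sync_pool(num_sequences, length, key_part):
    """Derive a pool of candidate binary sync sequences from shared key bits."""
    rng = random.Random(key_part)  # both parties seed with the same key portion
    return [tuple(rng.randint(0, 1) for _ in range(length))
            for _ in range(num_sequences)]

def pick_sync_sequence(pool, key_part):
    """Use a further portion of the key to select one sequence from the pool."""
    rng = random.Random(key_part)
    return pool[rng.randrange(len(pool))]

# Transmitter and receiver hold the same secret key, so they derive identical
# pools and pick the identical sequence without communicating.
key_pool, key_pick = 1234, 5678    # stand-ins for disjoint parts of the key
tx_pool = make_sync_pool(16, 8, key_pool)
rx_pool = make_sync_pool(16, 8, key_pool)
assert tx_pool == rx_pool
assert pick_sync_sequence(tx_pool, key_pick) == pick_sync_sequence(rx_pool, key_pick)
```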
One technical difference between \cite{bashgoekeltowsley13,chebakshijaggi13} and the present work is that the earlier works use total variation distance to measure probability of detection whereas we use \emph{relative entropy}, as in \cite{houkramer14,cachin04}. Note that, when the relative entropy is given, the total variation distance can be upper-bounded using Pinsker's inequality \cite{csiszarkorner81}. See \cite{houkramer14} for further discussions on the relation between relative entropy and detectability. In practice, which of the two quantities is more relevant may depend on the actual application,\footnote{The total variation distance would be the right quantity to look at if one assumes equal probabilities for the transmitter sending and not sending a message, because it would correspond to the minimum probability of detection error by the eavesdropper. However, such an assumption is clearly unrealistic in practice.} whereas for theoretical analysis relative entropy is clearly easier to handle. Summarizing the above discussions, we now briefly describe our setting: \begin{itemize} \item We consider a DMC whose input alphabet contains an ``off'' symbol. When the transmitter is switched off, it always sends this symbol. \item The transmitter and the receiver share a secret key that is sufficiently long. \item We assume that the adversary observes the same channel outputs as the intended receiver, i.e., there is no wiretap structure. \item The LPD criterion is that the relative entropy between the output distributions when a codeword is transmitted and when the all-zero sequence is transmitted must be sufficiently small. \end{itemize} The square-root law has been observed in various scenarios in \emph{steganography} \cite{ker07,fridrich09,fillerfridrich09}.
The setup in steganography that is most related to our work is as follows: a data file called the \emph{cover text} is generated according to some distribution, and a message must be concealed in this file subject to the constraint that the file should look almost unchanged. This is similar to the LPD setting in the sense that, when no message is to be conveyed, the encoder should not do anything; hence in steganography the output is the original data file, whereas in LPD communications the output is pure noise. But steganography and LPD communications are essentially different: in steganography the data file is generated first and shown to the encoder, whereas in LPD communications noise is added to the codeword after the latter is chosen by the encoder. Hence the two types of problems require different analyses. The rest of this paper is arranged as follows. In Section~\ref{sec:setup} we formulate the problem for DMCs and briefly analyze the case where the ``off'' input symbol induces an output distribution that can be written as a mixture of the other output distributions; the next two sections focus on the case where it cannot. In Section~\ref{sec:IXY} we derive formulas for characterizing the maximum amount of information that can be transmitted over any DMC under the LPD constraint. In Section~\ref{sec:var} we derive a simpler formula that is applicable to some DMCs. In Section~\ref{sec:AWGN} we formulate and solve the problem for AWGN channels. Finally, in Section~\ref{sec:conclusion} we conclude the paper with some remarks on future directions. \section{Problem Formulation for DMCs}\label{sec:setup} Consider a DMC of finite input and output alphabets $\set{X}$ and $\set{Y}$, and of transition law $W(\cdot|\cdot)$. Throughout this paper, we use the letter $P$ to denote input distributions on $\set{X}$ and the letter $Q$ to denote output distributions on $\set{Y}$.
Let $0\in\set{X}$ be the ``off'' input symbol; i.e., when the transmitter is not sending a message, it always transmits $0$. Denote \begin{equation} Q_0(\cdot) \triangleq W(\cdot|0). \end{equation} Without loss of generality, we assume that no two input symbols induce the same output distribution; in particular, $W(\cdot|x)=Q_0(\cdot)$ implies $x=0$. A (deterministic) code of blocklength $n$ for message set $\set{M}$ consists of an encoder $\set{M} \to \set{X}^n$, $m\mapsto x^n$ and a decoder $\set{Y}^n \to \set{M}$, $y^n\mapsto \hat{m}$. The transmitter and the receiver choose a \emph{random} code of blocklength $n$ for message set $\set{M}$ using a secret key shared between them. The adversary is assumed to know the distribution according to which the transmitter and the receiver choose the random code, but not their actual choice.\footnote{Note that we assume that the eavesdropper observes the same channel outputs as the intended receiver, so LPD communication is impossible with deterministic codes.} The random code, together with a message $M$ uniformly drawn from $\set{M}$, induces a distribution $Q^n(\cdot)$ on $\set{Y}^n$. We require that, for some constant $\delta>0$,\footnote{All logarithms in this paper are natural. Accordingly, information is measured in nats.} \begin{equation}\label{eq:LPD} D\left(\left. Q^n \right\| Q_0^{\times n}\right) \le \delta. \end{equation} Here $Q_0^{\times n}$ denotes the $n$-fold product distribution of $Q_0$, i.e., the output distribution over $n$ channel uses when the transmitter is off. At this point, we observe that an input symbol $x$ with $\mathsf{supp}(W(\cdot|x)) \not\subseteq \mathsf{supp}(Q_0)$, where $\mathsf{supp}(\cdot)$ denotes the support of a distribution, should never be used by the transmitter. Indeed, using such an input symbol with nonzero probability would result in $D\left(\left. Q^n \right\| Q_0^{\times n}\right)$ being infinity. 
Hence we can drop all such input symbols, as well as all output symbols that do not lie in $\mathsf{supp} (Q_0)$, reducing the channel to one where \begin{equation}\label{eq:suppQ0} \mathsf{supp} (Q_0) = \set{Y}. \end{equation} Throughout this paper we assume that \eqref{eq:suppQ0} is satisfied. Note that, for channels that cannot be reduced to one that satisfies \eqref{eq:suppQ0}, such as the binary erasure channel, nontrivial LPD communication is not possible. Our goal is to find the maximum possible value for $\log |\set{M}|$ for which a random codebook of length $n$ exists that satisfies condition \eqref{eq:LPD}, and whose average probability of error is at most $\epsilon$. (Later we shall require that $\epsilon$ be arbitrarily small.) We denote this maximum value by $K_n(\delta,\epsilon)$. We call an input symbol $x$ \emph{redundant} if $W(\cdot|x)$ can be written as a mixture of the other output distributions, i.e., if \begin{equation} W(\cdot|x) \in \mathsf{conv} \left\{ W(\cdot|x')\colon x'\in\set{X}, x'\neq x\right\}, \end{equation} where $\mathsf{conv}$ denotes the convex hull. As we shall show, $K_n(\delta,\epsilon)$ can increase either linearly with the blocklength $n$ or like $\sqrt{n}$, depending on whether $0$ is redundant or not. \subsection{Case~1: input symbol $0$ is redundant}\label{sub:redundant} This is the case where there exists some distribution $P$ on $\set{X}$ such that \begin{subequations}\label{eq:degenerate} \begin{IEEEeqnarray}{rCl} P(0) & = & 0 \label{eq:p00}\\ \sum_{x\in\set{X}} P(x) W(\cdot |x) & = & Q_0(\cdot) . 
\label{eq:average} \end{IEEEeqnarray} \end{subequations} In this case, a positive communication rate can be achieved: \begin{proposition}\label{prp:linear} If input symbol $0$ is redundant, then for any $\delta\ge 0$, \begin{equation} \lim_{\epsilon\downarrow 0} \lim_{n\to\infty} \frac{K_n(\delta,\epsilon)}{n}= \max I(P,W), \end{equation} where the maximum is taken over input distribution $P$ that satisfies \eqref{eq:degenerate}. \end{proposition} \begin{IEEEproof} First note that a random codebook generated IID according to $P$ that satisfies \eqref{eq:degenerate} yields $D(Q^n\|Q_0^{\times n}) = 0$. By the standard typicality argument~\cite{shannon48}, when the rate of the code is below $I(P,W)$, the probability of a decoding error can be made arbitrarily small as $n$ goes to infinity. Conversely, for a codebook whose empirical input distribution does not satisfy \eqref{eq:average}, $D(Q^n\|Q_0^{\times n})$ grows linearly in $n$ and is hence unbounded as $n$ goes to infinity. Finally, we check that any $P$ that does not satisfy \eqref{eq:p00} is suboptimal. Indeed, for any (nontrivial) $P$ that satisfies \eqref{eq:average} but not \eqref{eq:p00}, let $P'$ be $P$ conditional on $\set{X}\setminus \{0\}$, then $P'$ also satisfies \eqref{eq:average} and $I(P',W)>I(P,W)$. \end{IEEEproof} \begin{example} Binary symmetric channel with an additional ``off'' symbol. \end{example} Consider a binary symmetric channel with an additional ``off'' symbol as shown in Fig.~\ref{fig:BSC0}. Its optimal input distribution for LPD communication is uniform on $\{-1,1\}$, and its capacity under the LPD constraint \eqref{eq:LPD} is the same as its capacity without this constraint, and equals $1-H_{\textnormal{b}}(p)$, where $H_{\textnormal{b}}(\cdot)$ is the binary entropy function. 
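Proposition~\ref{prp:linear} can be illustrated numerically on the channel of Fig.~\ref{fig:BSC0}. The following Python sketch is an illustration of our own (the crossover probability $p=0.1$ is an arbitrary choice, and outputs are indexed $0,1$ in place of $-1,+1$): it checks that the uniform input distribution on $\{-1,1\}$ reproduces $Q_0$ exactly, so the LPD constraint \eqref{eq:LPD} is met with zero relative entropy, while the rate equals $1-H_{\textnormal{b}}(p)$ bits:

```python
from math import log

def kl(p_dist, q_dist):
    """Relative entropy D(p||q) in nats (terms with zero mass contribute 0)."""
    return sum(pi * log(pi / qi) for pi, qi in zip(p_dist, q_dist) if pi > 0)

p = 0.1                              # crossover probability (illustrative value)
W = {-1: (1 - p, p), 1: (p, 1 - p),  # rows of the channel law on the two outputs
     0: (0.5, 0.5)}                  # "off" symbol induces the uniform distribution
Q0 = W[0]

# Uniform input on {-1, 1} (so P(0) = 0) mixes the rows back to Q0 exactly ...
mix = tuple(0.5 * W[-1][y] + 0.5 * W[1][y] for y in range(2))
assert all(abs(mix[y] - Q0[y]) < 1e-12 for y in range(2))
assert kl(mix, Q0) < 1e-12           # ... so the LPD constraint holds with D = 0

# ... while achieving I(P, W) = log 2 - H_b(p) nats, i.e., 1 - H_b(p) bits.
H_b = -(p * log(p) + (1 - p) * log(1 - p))
I = log(2) - H_b
print(I / log(2))                    # rate in bits per channel use
```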
\begin{figure}[tbp] \center \vspace{-5mm} \includegraphics[width=0.3\textwidth]{BSC0.pdf} \vspace{-5mm} \caption{A binary symmetric channel on the alphabet $\{-1,1\}$ with cross-over probability $p$, with an additional ``off'' input symbol $0$ which induces a uniform output distribution. } \label{fig:BSC0} \end{figure} \subsection{Case 2: input symbol $0$ is not redundant}\label{sub:notredundant} This is the case where no $P$ satisfying \eqref{eq:degenerate} can be found. It is the focus of the next two sections. A simple example for this case is the binary symmetric channel in Fig.~\ref{fig:BSC}. \begin{figure}[tbp] \center \vspace{-3mm} \includegraphics[width=0.3\textwidth]{BSC.pdf} \vspace{-5mm} \caption{The binary symmetric channel with cross-over probability $p$. } \label{fig:BSC} \end{figure} We shall show that, in this case, $K_n$ grows like $\sqrt{n}$. Let \begin{equation}\label{eq:defL} L \triangleq \lim_{\epsilon\downarrow 0} \varliminf_{n\to\infty} \frac{K_n(\delta,\epsilon)}{\sqrt{n\delta}}, \end{equation} where $\varliminf$ denotes the limit inferior. Note that both $K_n(\delta,\epsilon)$ and $\delta$ have unit $\textnormal{nat}$, so $L$ has unit $\sqrt{\textnormal{nat}}$. We shall characterize $L$ in the next two sections. Note that, by definition, $L$ can be infinity, as it is in Case~1. At this point, we provide some intuition why positive communication rates cannot be achieved in this case. To achieve a positive rate, a necessary condition is that a non-vanishing proportion of input symbols used in the codebook should be different from the ``off'' symbol $0$. This would mean that the average marginal distribution $\bar{P}$ on $\set{X}$ has a positive probability at values other than $0$ and, since $Q_0$ cannot be written as a mixture of output distributions produced by nonzero input symbols, the average output distribution $\bar{Q}$ must be different from $Q_0$ so $D(\bar{Q}\| Q_0)>0$. 
This implies that $D(Q^n\|Q_0^{\times n})$ must grow without bound as $n$ tends to infinity, violating the LPD constraint \eqref{eq:LPD}. \section{General Expressions for $L$ for All DMCs}\label{sec:IXY} In this section we derive computable expressions for $L$. Our focus is on Case~2 where $0$ is not redundant, though some results also hold (in a trivial way) in Case~1 where $0$ is redundant. We first prove the following natural but nontrivial single-letter formula. \begin{theorem}\label{thm:IXY} For any DMC, \begin{equation}\label{eq:IXY} L =\max_{\{P_n\}} \varliminf_{n\to\infty} \sqrt{\frac{n}{\delta}} \,I(P_n,W) \end{equation} where the maximum is taken over sequences of joint distributions on $\set{X}\times \set{Y}$ induced by input distributions $P_n$ and channel $W$, whose marginals $Q_n$ on $\set{Y}$ satisfy \begin{equation}\label{eq:deltan} D(Q_n\| Q_0) \le \frac{\delta}{n}. \end{equation} \end{theorem} \emph{Remark:} Although the proof below does not guarantee that the limit inferior in \eqref{eq:IXY} can be replaced by the limit, this is indeed the case, as we show at the end of this section. \begin{IEEEproof}[Proof of Theorem~\ref{thm:IXY}] Proposition~\ref{prp:linear} shows that, when input symbol $0$ is redundant, $L=\infty$. This is consistent with Theorem~\ref{thm:IXY}. The rest of the proof focuses on Case~2 as in Section~\ref{sub:notredundant}, where $0$ is not redundant. We first prove the converse part. This is done via Fano's inequality and manipulation of the information quantities. Suppose there exists a sequence of random codes satisfying \eqref{eq:LPD}, where, at blocklength $n$, the size of the codebook is $\exp(K_n)$, and the error probability is $\epsilon_n$ which tends to zero as $n$ tends to infinity. By a standard argument using Fano's inequality \cite{coverthomas91}, \begin{equation} K_n (1-\epsilon_n) -1 \le I(X^n;Y^n). 
\label{eq:singleletter10} \end{equation} Let $\bar{P}_n$ denote the average input distribution on $\set{X}$, averaged over the codebook and over the $n$ channel uses. We upper-bound $I(X^n;Y^n)$ in the usual way: \begin{IEEEeqnarray*}{rCl} I(X^n;Y^n) & = & \sum_{i=1}^n I(X^n; Y_i|Y^{i-1})\\ & = & \sum_{i=1}^n \left( H(Y_i|Y^{i-1}) - H(Y_i|X^n, Y^{i-1}) \right)\\ & = & \sum_{i=1}^n \left( H(Y_i|Y^{i-1}) - H(Y_i|X_i) \right)\\ & \le & \sum_{i=1}^n I(X_i;Y_i)\\ & \le & n I(\bar{P}_n,W), \IEEEyesnumber \label{eq:singleletter15} \end{IEEEeqnarray*} where the last step follows because, when the channel law is fixed, mutual information is concave in the input distribution. Combining \eqref{eq:defL}, \eqref{eq:singleletter10}, and \eqref{eq:singleletter15} yields \begin{equation} L \le \varliminf_{n\to\infty} \sqrt{\frac{n}{\delta}}\, I(\bar{P}_n,W). \label{eq:single16} \end{equation} Next let $\bar{Q}_n$ denote the average output distribution on $\set{Y}$. Clearly, $\bar{Q}_n$ is the output distribution induced by $\bar{P}_n$ through $W$. Recall that $Q^n$ denotes the $n$-fold output distribution on $\set{Y}^n$. Further let $Q_{n,i}$ denote the marginal of $Q^n$ on the $i$th output $Y_i$. Let $Y^n$ have distribution $Q^n$, then (see also \cite{hou14}) \begin{IEEEeqnarray*}{rCl} D\left(\left. Q^n\right\|Q_0^{\times n} \right) & = & -H(Y^n) + \E[Q^n]{\log\frac{1}{Q_0^{\times n}(Y^n)}}\\ & = & \sum_{i=1}^n \left( -H(Y_i|Y^{i-1}) + \E[Q^n]{\log\frac{1}{Q_0(Y_i)}} \right)\\ & = & \sum_{i=1}^n \left( -H(Y_i|Y^{i-1}) + \E[Q_{n,i}]{\log\frac{1}{Q_0(Y_i)}} \right)\\ & \ge & \sum_{i=1}^n \left( -H(Y_i) + \E[Q_{n,i}]{\log\frac{1}{Q_0(Y_i)}} \right)\\ & = & \sum_{i=1}^n D(Q_{n,i}\|Q_0)\\ & \ge & n D(\bar{Q}_n\|Q_0)\IEEEyesnumber \end{IEEEeqnarray*} where the last step follows because relative entropy is convex. This combined with \eqref{eq:LPD} implies that \begin{equation} D(\bar{Q}_n\| Q_0) \le \frac{\delta}{n}.\label{eq:single23} \end{equation} Combining \eqref{eq:single16} and \eqref{eq:single23} proves the converse part of Theorem~\ref{thm:IXY}.
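The two inequalities in this chain, namely $D(Q^n\|Q_0^{\times n}) \ge \sum_{i=1}^n D(Q_{n,i}\|Q_0) \ge n D(\bar{Q}_n\|Q_0)$, can be spot-checked numerically. The Python sketch below does so for $n=2$ with an arbitrarily chosen correlated output law; all numbers are illustrative only:

```python
from math import log
from itertools import product

def kl(p, q):
    """Relative entropy D(p||q) in nats."""
    return sum(pi * log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

Q0 = (0.7, 0.3)                            # no-input output distribution on {0, 1}
# A correlated joint output law Q^2 over two channel uses (illustrative numbers).
Q2 = {(0, 0): 0.50, (0, 1): 0.15, (1, 0): 0.05, (1, 1): 0.30}
Q0_prod = {(a, b): Q0[a] * Q0[b] for a, b in product(range(2), repeat=2)}

lhs = sum(Q2[y] * log(Q2[y] / Q0_prod[y]) for y in Q2)      # D(Q^2 || Q0 x Q0)
marg1 = (Q2[0, 0] + Q2[0, 1], Q2[1, 0] + Q2[1, 1])          # law of Y_1
marg2 = (Q2[0, 0] + Q2[1, 0], Q2[0, 1] + Q2[1, 1])          # law of Y_2
mid = kl(marg1, Q0) + kl(marg2, Q0)                         # sum_i D(Q_{n,i} || Q0)
Qbar = tuple(0.5 * (marg1[y] + marg2[y]) for y in range(2)) # average marginal
rhs = 2 * kl(Qbar, Q0)                                      # n D(Qbar_n || Q0)

assert lhs >= mid >= rhs                   # the converse chain, checked at n = 2
```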
We next prove the achievability part. To this end, we randomly generate a codebook that satisfies \eqref{eq:LPD} and then show that, as the length of the codewords tends to infinity, the probability of a decoding error can be made arbitrarily small provided that the codebook has a size smaller than that determined by the right-hand side of \eqref{eq:IXY}. Let $\{P_n\}$ be a sequence of input distributions such that the induced output distributions $\{Q_n\}$ satisfy \eqref{eq:deltan}. For every $n$, we randomly generate a codebook by choosing the codewords IID according to $P_n$. The decoder performs joint-typicality decoding. It is clear that the output distribution on $\set{Y}^{\times n}$ for this code is $Q^n = Q_n^{\times n}$ and that \eqref{eq:LPD} is satisfied. It remains to show that, provided that the size of the codebook is smaller than $\exp\big(nI(P_n,W) - \sqrt{n}\epsilon_n\big)$ for some $\epsilon_n$ tending to zero as $n$ tends to infinity, the probability of a decoding error can be made arbitrarily small. This cannot be shown using the asymptotic equipartition property \cite{shannon48}, or the information-spectrum method \cite{verduhan94,han03}, because we are in a situation where communication rate is zero. However, by slightly varying the methods in \cite{verduhan94,han03}, or using the one-shot achievability bounds as in \cite{wangcolbeckrenner09,polyanskiypoorverdu10}, we can obtain that the sequence $\{K_n\}$ is achievable provided \begin{equation}\label{eq:Pliminf} \varliminf_{n\to\infty} \frac{K_n}{\sqrt{n}}\ge \textnormal{$P$-}\liminf_{n\to\infty} \frac{1}{\sqrt{n}} \log \frac{W(Y^n|X^n)}{Q_n^{\times n}(Y^n)}, \end{equation} where $P$-$\liminf$ denotes the \emph{limit inferior in probability}, namely, the largest number such that the probability that the random variable in consideration is greater than this number tends to one as $n$ tends to infinity. 
Recalling \eqref{eq:defL}, to prove the achievability part of Theorem~\ref{thm:IXY}, it now suffices to show that the right-hand side of \eqref{eq:Pliminf} is lower-bounded by $$ \varliminf_{n\to\infty} \sqrt{n} I(P_n,W).$$ We show a slightly stronger result which is \begin{equation}\label{eq:inprobability} \frac{1}{\sqrt{n}} \log\frac{W(Y^n|X^n)}{Q_n^{\times n}(Y^n)} - \sqrt{n}\, I(P_n,W) \to 0 \quad \textnormal{in probability} \end{equation} as $n$ tends to infinity. To this end, first note \begin{equation} \E{\frac{1}{\sqrt{n}} \log \frac{W(Y^n|X^n)}{Q_n^{\times n}(Y^n)}} = \frac{1}{\sqrt{n}} I(X^n; Y^n) = \sqrt{n} \, I(P_n, W). \end{equation} It then follows by Chebyshev's inequality that, for any constant $a>0$, \begin{IEEEeqnarray}{rCl} \lefteqn{\mathsf{Pr} \left[ \left| \frac{1}{\sqrt{n}} \log\frac{W(Y^n|X^n)}{Q_n^{\times n}(Y^n)} - \sqrt{n} \, I(P_n,W) \right| \ge a \right]}~~~~~~~~~~~~~~~~~~~~~\nonumber\\ & \le & \frac{1}{a^2} \mathsf{var} \left( \frac{1}{\sqrt{n}} \log \frac{W(Y^n|X^n)}{Q_n^{\times n}(Y^n)} \right).\IEEEeqnarraynumspace \end{IEEEeqnarray} Thus, to prove \eqref{eq:inprobability}, it suffices to show \begin{equation} \label{eq:tozero} \mathsf{var} \left( \frac{1}{\sqrt{n}} \log \frac{W(Y^n|X^n)}{Q_n^{\times n}(Y^n)} \right) \to 0 \end{equation} as $n$ tends to infinity. To show \eqref{eq:tozero}, we first simplify this variance to \begin{IEEEeqnarray}{rCl} \mathsf{var} \left( \frac{1}{\sqrt{n}} \log \frac{W(Y^n|X^n)}{Q_n^{\times n}(Y^n)} \right) & = & \frac{1}{n} \sum_{i=1}^n \mathsf{var} \left( \log\frac{W(Y_i|X_i)}{Q_n(Y_i)} \right) \nonumber \\ & = & \mathsf{var}\left( \log\frac{W(Y|X)}{Q_n(Y)}\right). 
\label{eq:var29} \end{IEEEeqnarray} The variance on the right-hand side of \eqref{eq:var29} is upper-bounded by the second moment: \begin{IEEEeqnarray}{rCl} \lefteqn{\mathsf{var}\left( \log\frac{W(Y|X)}{Q_n(Y)}\right)}~~~~~~~~\nonumber\\ & \le & \E[P_n\circ W]{\left( \log\frac{W(Y|X)}{Q_n(Y)}\right)^2} \nonumber\\ & = & P_n(0) \, \E[Q_0]{\left( \log\frac{Q_0(Y)}{Q_n(Y)}\right)^2} \nonumber\\ & & {} + \sum_{x\neq 0} P_n(x) \,\E[W(\cdot|x)]{\left(\log\frac{W(Y|x)}{Q_n(Y)}\right)^2}.\label{eq:var30}\IEEEeqnarraynumspace \end{IEEEeqnarray} Here we use $P_n\circ W$ to denote the joint distribution on $\set{X}\times\set{Y}$ induced by input distribution $P_n$ through channel $W$. To prove \eqref{eq:tozero}, it suffices to show that both terms on the right-hand side of \eqref{eq:var30} tend to zero as $n$ tends to infinity. For the first term, note that \eqref{eq:deltan} requires that \begin{equation}\label{eq:QntoQ0} Q_n\to Q_0 \end{equation} as $n$ tends to infinity, so \begin{equation} \lim_{n\to\infty} \log\frac{Q_0(y)}{Q_n(y)} = 0, \quad \forall y\in\set{Y}, \end{equation} which further implies (recall that $|\set{Y}|$ is finite so one can switch the order of limit and expectation) \begin{equation} \lim_{n\to\infty} \E[Q_0]{\left( \log\frac{Q_0(Y)}{Q_n(Y)}\right)^2} = 0. \end{equation} Thus, since $P_n(0)$ is bounded between $0$ and~$1$, the first term on the right-hand side of \eqref{eq:var30} tends to zero as $n$ tends to infinity. To analyze the second term on the right-hand side of \eqref{eq:var30}, recall our assumption that $Q_0$ cannot be written as a mixture of the other output distributions. Thus, to have \eqref{eq:QntoQ0} we need \begin{equation}\label{eq:Pntoone} \lim_{n\to\infty} P_n(0) = 1, \end{equation} so \begin{equation}\label{eq:Pntozero} \lim_{n\to\infty} P_n(x)=0, \quad \forall x\neq 0. 
\end{equation} We next use \eqref{eq:QntoQ0} to obtain (recall again that $|\set{Y}|$ is finite) \begin{IEEEeqnarray}{rCl} \lefteqn{\lim_{n\to\infty} \E[W(\cdot|x)]{\left(\log\frac{W(Y|x)}{Q_n(Y)}\right)^2}}~~~~~~\nonumber \\ & = & \E[W(\cdot|x)]{\left(\log\frac{W(Y|x)}{Q_0(Y)}\right)^2}, \end{IEEEeqnarray} which is finite for every $x\in\set{X}$, $x\neq 0$, because $Q_0(y)>0$ for every $y\in\set{Y}$; recall \eqref{eq:suppQ0}. This combined with \eqref{eq:Pntozero} implies that the second term on the right-hand side of \eqref{eq:var30} tends to zero as $n$ tends to infinity. We have now established that the right-hand side of \eqref{eq:var30} tends to zero as $n$ tends to infinity, which further establishes \eqref{eq:tozero} and, hence, \eqref{eq:inprobability}. This concludes the achievability part of Theorem~\ref{thm:IXY}. \end{IEEEproof} Using Theorem~\ref{thm:IXY} we derive the following computable expression for $L$. \begin{theorem}\label{thm:general} For any DMC satisfying \eqref{eq:suppQ0}, whose ``off'' input symbol $0$ is not redundant, and which has at least one input symbol other than $0$,\footnote{By our assumption, this input symbol induces an output distribution that is different from $Q_0$, so the channel is not trivial.} $L$ is positive and finite, and is given by \begin{equation}\label{eq:general} L = \max_{\tilde{P}\colon \tilde{P}(0)=0} \frac{ \sum_{x\in\set{X}} \tilde{P}(x) D\left( \left. W(\cdot|x) \right\| Q_0\right)}{\sqrt{\displaystyle \frac{1}{2}\sum_{y\in\set{Y}} \frac{\big(\tilde{Q}(y) - Q_0(y)\big)^2}{Q_0(y)}}}, \end{equation} where $\tilde{Q}$ is the output distribution induced by $\tilde{P}$ through $W$. \end{theorem} Before proving Theorem~\ref{thm:general} we note that, for some channels, such as the next example, \eqref{eq:general} is very easy to compute. \begin{example}\label{ex:BSC} Binary symmetric channel. \end{example} Consider the binary symmetric channel in Fig.~\ref{fig:BSC}.
Clearly, the only possible choice for $\tilde{P}$ in \eqref{eq:general} is $\tilde{P}(1)=1$. We thus obtain the value of $L$ as a function of $p$, which we plot in Fig.~\ref{fig:plot}. Not surprisingly, when $p$ approaches $0.5$, $L$ approaches zero, as does the capacity of the channel. It is however interesting to notice that, when $p$ approaches zero, $L$ also approaches zero, even though the capacity of the channel approaches $1$ bit per use. This is because, when $p$ is very small, it is very easy to distinguish the two input symbols $0$ and $1$ at the receiver end. Hence the LPD criterion requires that the transmitter must use $1$ very sparsely, limiting the number of information bits it can send. The maximum of $L$ is approximately $0.94$ $\sqrt{\textnormal{nat}}$, achieved at $p=0.083$. \begin{figure}[tbp] \center \includegraphics[width=0.4\textwidth]{plotBSC.pdf} \caption{The value of $L$ for the binary symmetric channel in Fig.~\ref{fig:BSC} as a function of $p$.} \label{fig:plot} \end{figure} \begin{IEEEproof}[Proof of Theorem~\ref{thm:general}] For every $n$, let \begin{equation} \hat{P}_n \triangleq \operatorname*{argmax}_{P_n} I(P_n, W) \end{equation} subject to \begin{equation}\label{eq:DQn} D(Q_n\|Q_0) \le \frac{\delta}{n}. \end{equation} Using the same argument as for \eqref{eq:Pntoone}, we have \begin{equation} \lim_{n\to\infty} \hat{P}_n(0) =1, \end{equation} hence $\hat{P}_n$ can be written as \begin{equation}\label{eq:mixture} \hat{P}_n = (1-\mu_n) P_0 + \mu_n \tilde{P}_n \end{equation} where $P_0$ is the deterministic distribution with $P_0(0)=1$, $\tilde{P}_n$ is a distribution with $\tilde{P}_n(0)=0$, and $\mu_n$ is positive and tends to zero as $n$ tends to infinity. Fix $\tilde{P}_n$ and consider $\hat{P}_n$ given by \eqref{eq:mixture} as a function of $\mu_n$, then \begin{equation} \left. 
\frac{\d I(\hat{P}_n,W)}{\d \mu_n} \right|_{\mu_n=0}= \sum_{x\in\set{X}} \tilde{P}_n(x) D( W(\cdot|x) \| Q_0), \end{equation} hence \begin{equation}\label{eq:general33} I(\hat{P}_n, W) = \mu_n \sum_{x\in\set{X}} \tilde{P}_n(x) D( W(\cdot|x) \| Q_0) + o(\mu_n), \end{equation} where the term $o(\mu_n)$ tends to zero faster than $\mu_n$ as $n$ tends to infinity. The output distribution resulting from feeding $\hat{P}_n$ given by \eqref{eq:mixture} into the channel $W$ is \begin{equation} \hat{Q}_n = (1-\mu_n) Q_0 + \mu_n \tilde{Q}_n \end{equation} where $\tilde{Q}_n$ is the output distribution induced by input distribution $\tilde{P}_n$ through $W$. The relative entropy $D(\hat{Q}_n\| Q_0)$ is approximated by the Fisher Information \cite{kullback59} with respect to parameter $\mu_n$: \begin{equation}\label{eq:fisher36} D(\hat{Q}_n\| Q_0) = \frac{\mu_n^2}{2} \sum_{y\in\set{Y}} \frac{\big(\tilde{Q}_n(y) - Q_0(y)\big)^2}{Q_0(y)} + o(\mu_n^2), \end{equation} where the term $o(\mu_n^2)$ tends to zero faster than $\mu_n^2$ as $n$ tends to infinity. By \eqref{eq:DQn} and \eqref{eq:fisher36}, $\mu_n$ should have the form \begin{equation}\label{eq:general36} \mu_n = \sqrt{\frac{\delta}{n}} \cdot \frac{1}{\sqrt{\displaystyle \frac{1}{2}\sum_{y\in\set{Y}} \frac{\big(\tilde{Q}_n(y) - Q_0(y)\big)^2}{Q_0(y)}}} + o\left(n^{-1/2}\right). \end{equation} Plugging \eqref{eq:general36} into \eqref{eq:general33} yields \begin{IEEEeqnarray}{rCl} I(\hat{P}_n,W) & = & \sqrt{\frac{\delta}{n}}\cdot \frac{ \sum_{x\in\set{X}} \tilde{P}_n(x) D\left( \left. W(\cdot|x) \right\| Q_0\right)}{\sqrt{\displaystyle \frac{1}{2}\sum_{y\in\set{Y}} \frac{\big(\tilde{Q}_n(y) - Q_0(y)\big)^2}{Q_0(y)}}} \nonumber\\ & & {} + o\left(n^{-1/2}\right).\label{eq:general37} \end{IEEEeqnarray} When $n$ tends to infinity, $I(\hat{P}_n,W)$ is dominated by the first term on the right-hand side of \eqref{eq:general37}, hence $\tilde{P}_n$ should tend to the (not necessarily unique) distribution that maximizes this term.
Recalling Theorem~\ref{thm:IXY}, this completes the proof of Theorem~\ref{thm:general}. \end{IEEEproof} From the proof of Theorem~\ref{thm:general} it follows that the limit inferior in \eqref{eq:IXY} can be replaced by the limit, yielding a more convenient expression for $L$: \begin{corollary}\label{cor:lim} For any DMC, \begin{equation}\label{eq:IXY2} L = \lim_{n\to\infty} \sqrt{\frac{n}{\delta}} \max_{P_n} I(P_n,W) \end{equation} where the maxima are subject to \eqref{eq:deltan}. \end{corollary} \begin{IEEEproof} We only need to show that the limit in \eqref{eq:IXY2} exists. When input symbol $0$ is redundant, this limit exists and is infinity. When $0$ is not redundant, the proof of Theorem~\ref{thm:general} shows that this limit also exists and equals the right-hand side of \eqref{eq:general}. \end{IEEEproof} \section{A Simpler but Less General Expression for $L$}\label{sec:var} In this section we consider channels that satisfy the following condition. \begin{condition}\label{con:all} There exists a capacity-achieving input distribution that uses all the input symbols. \end{condition} Note that Condition~\ref{con:all} implies that no input symbol is redundant; in particular, $0$ is not redundant. We next give a simple upper bound on $L$ under Condition~\ref{con:all}. Later we provide an additional condition under which this bound is tight. \begin{theorem}\label{thm:var} Consider a DMC that satisfies Condition~\ref{con:all}. Denote its capacity-achieving output distribution by $Q^*$, then \begin{equation}\label{eq:var} L \le \sqrt{ 2 \,\mathsf{var}_{Q_0} \left(\log\frac{Q_0(Y)}{Q^*(Y)}\right)}, \end{equation} where $\mathsf{var}_{Q_0}(\cdot)$ denotes the variance of a function of $Y$ where $Y$ has distribution $Q_0$. \end{theorem} The proof of Theorem~\ref{thm:var} utilizes the following lemma. \begin{lemma}\label{lem:KT} Let $Q^*$ denote the capacity-achieving output distribution for a DMC $W(\cdot|\cdot)$ of capacity $C$. 
Let $P'$ be any input distribution, and let $Q'$ denote the output distribution induced by $P'$ through $W$. Then \begin{equation}\label{eq:lem} I(P',W) \le C - D(Q'\| Q^*), \end{equation} where equality holds if $\mathsf{supp}(P') \subseteq \mathsf{supp}(P^*)$ for some capacity-achieving input distribution $P^*$. \end{lemma} \begin{IEEEproof} We have the following identity (see \cite{topsoe67}): \begin{IEEEeqnarray}{rCl} I(P',W) & = & \sum_{x\in\set{X}} P'(x) D(W(\cdot|x)\| Q')\nonumber\\ & = & \sum_{x\in\set{X}} P'(x) \E[W(\cdot|x)]{\log\frac{W(Y|x)}{Q'(Y)}}\nonumber\\ & = & \sum_{x\in\set{X}} P'(x) \left(\E[W(\cdot|x)]{\log\frac{W(Y|x)}{Q^*(Y)}} \right.\nonumber\\ & & ~~~~~~~\left. {}- \E[W(\cdot|x)]{\log\frac{Q'(Y)}{Q^*(Y)}}\right)\nonumber \\ & = & \sum_{x\in\set{X}} P'(x) D(W(\cdot|x)\|Q^*) - D(Q'\|Q^*).\IEEEeqnarraynumspace \label{eq:topsoe} \end{IEEEeqnarray} By the Kuhn-Tucker conditions for channel capacity \cite{csiszarkorner81}, \begin{equation} D(W(\cdot|x)\| Q^*) \le C \end{equation} where equality holds if $x\in\mathsf{supp}(P^*)$. We hence have \begin{IEEEeqnarray}{rCl} C & = & \sum_{x\in\set{X}} P^*(x) D (W(\cdot|x)\|Q^*)\nonumber\\ & \ge & \sum_{x\in\set{X}} P'(x) D (W(\cdot|x)\|Q^*), \label{eq:KKT} \end{IEEEeqnarray} where equality holds if $\mathsf{supp}(P') \subseteq \mathsf{supp}(P^*)$. Combining \eqref{eq:topsoe} and \eqref{eq:KKT} proves the lemma. \end{IEEEproof} \begin{IEEEproof}[Proof of Theorem~\ref{thm:var}] Since the channel satisfies Condition~\ref{con:all}, from Lemma~\ref{lem:KT} and Corollary~\ref{cor:lim} we have \begin{equation}\label{eq:DQQ} L = \lim_{n\to\infty} \sqrt{\frac{n}{\delta}} \left(C - \min D(Q_n\|Q^*)\right), \end{equation} where the minimum is over $Q_n\in\mathsf{conv}\{W(\cdot|x)\colon x\in\set{X}\}$ satisfying \eqref{eq:deltan}. To determine $L$, we need to find $Q_n$ that minimizes $D(Q_n\| Q_0)$ for a fixed $D(Q_n\|Q^*)$.
To find an upper bound on $L$, we drop the condition $Q_n\in\mathsf{conv}\{W(\cdot|x)\colon x\in\set{X}\}$ to consider all distributions on $\set{Y}$. Then the minimum is well known to be achieved by a distribution from the exponential family connecting $Q_0$ and $Q^*$ \cite{csiszarmatus03}: \begin{equation}\label{eq:Qlambda} Q_n (y) = \frac{Q_0(y)^{1-\lambda_n} Q^*(y)^{\lambda_n}}{\sum_{{y'}\in\set{Y}} Q_0(y')^{1-\lambda_n} Q^*(y')^{\lambda_n}},\quad y\in\set{Y} \end{equation} for some $\lambda_n\in[0,1]$. Indeed, if a distribution $Q_n$ minimizes $D(Q_n\| Q^*)$ for some fixed $D(Q_n\|Q_0)$, then it must minimize $$(1-\lambda_n)D(Q_n\|Q_0) + \lambda_n D(Q_n\|Q^*)$$ for some $\lambda_n\in[0,1]$. This sum can be written as \begin{IEEEeqnarray}{rCl} \lefteqn{(1-\lambda_n)D(Q_n\|Q_0) + \lambda_n D(Q_n\|Q^*)}~~~~~\nonumber\\ & = & D(Q_n\|R_n) - \log \sum_{y'\in\set{Y}} Q_0(y')^{1-\lambda_n} Q^*(y')^{\lambda_n},\IEEEeqnarraynumspace \end{IEEEeqnarray} where \begin{equation} R_n (y) \triangleq \frac{Q_0(y)^{1-\lambda_n} Q^*(y)^{\lambda_n}}{\sum_{{y'}\in\set{Y}} Q_0(y')^{1-\lambda_n} Q^*(y')^{\lambda_n}},\quad y\in\set{Y}. \end{equation} Hence the best choice is $Q_n = R_n$. It remains to compute $D(Q_n\| Q_0)$ and $D(Q_n\| Q^*)$, where $Q_n$ is of the form \eqref{eq:Qlambda}, for large $n$. When $n$ is large, $Q_n$ must be close to $Q_0$ and hence $\lambda_n$ must be close to zero. In this case, $D(Q_n \| Q_0)$ is approximated by the Fisher Information~\cite{kullback59} with respect to parameter $\lambda_n$: \begin{equation} D(Q_n\| Q_0) = \frac{\lambda_n^2}{2} \mathsf{var}_{Q_0} \left(\log\frac{Q_0(Y)}{Q^*(Y)}\right) + o(\lambda_n^2). \end{equation} This together with the requirement that $Q_n$ must satisfy \eqref{eq:deltan} implies that \begin{equation}\label{eq:lambda} \lambda_n \le \sqrt{\frac{2\delta}{\displaystyle n \,\mathsf{var}_{Q_0} \left(\log\frac{Q_0(Y)}{Q^*(Y)}\right)}} + o(n^{-1/2}). 
\end{equation} Next we compute the derivative of $D(Q_n\|Q^*)$, with $Q_n$ given in \eqref{eq:Qlambda}, with respect to $\lambda_n$ evaluated at $\lambda_n=0$ to be \begin{equation} \left.\frac{\d D(Q_n\|Q^*)}{\d \lambda_n} \right|_{\lambda_n=0} = - \mathsf{var}_{Q_0}\left(\log\frac{Q_0(Y)}{Q^*(Y)}\right). \end{equation} By Condition~\ref{con:all}, there exists a capacity-achieving input distribution that uses $0$, so \begin{equation} \lim_{\lambda_n\downarrow 0} D(Q_n\|Q^*) = D(Q_0\|Q^*) = C. \end{equation} Hence \begin{equation}\label{eq:1stderivative} C - D(R_n\| Q^*) = \lambda_n \mathsf{var}_{Q_0} \left(\log\frac{Q_0(Y)}{Q^*(Y)}\right) + o(\lambda_n). \end{equation} Combining \eqref{eq:DQQ}, \eqref{eq:lambda}, and \eqref{eq:1stderivative} proves \eqref{eq:var}. \end{IEEEproof} The bound \eqref{eq:var} is tight for many channels, e.g., the binary symmetric channel of Example~\ref{ex:BSC}. We next provide a sufficient condition for \eqref{eq:var} to be tight. Let $\mathbf{s}$ be the $|\set{Y}|$-dimensional vector given by \begin{equation} s(y) = Q_0(y)\left(\log\frac{Q^*(y)}{Q_0(y)}+C\right),\quad y\in\set{Y}. \end{equation} Consider the following system of linear equations with unknowns $\alpha_x$, $x\in\set{X}\setminus\{0\}$: \begin{equation}\label{eq:system} \sum_{x\in\set{X}\setminus\{0\}} \alpha_x \left( W(\cdot|x) - Q_0\right) = \vect{s}. \end{equation} Solving \eqref{eq:system} is a simple problem in linear algebra. \begin{theorem}\label{thm:equalvar} Suppose Condition~\ref{con:all} is satisfied. If \eqref{eq:system} has a nonnegative solution, then \eqref{eq:var} holds with equality: \begin{equation}\label{eq:equalvar} L = \sqrt{ 2 \,\mathsf{var}_{Q_0} \left(\log\frac{Q_0(Y)}{Q^*(Y)}\right)}. \end{equation} \end{theorem} The intuition behind Theorem~\ref{thm:equalvar} is the following: the vector $\vect{s}$ represents the tangent of the curve $Q_n(y)$ given by \eqref{eq:Qlambda} as a function of $\lambda_n$ at $\lambda_n=0$. 
That \eqref{eq:system} has a nonnegative solution means that $\vect{s}$ lies in the convex cone generated by $\{W(\cdot|x) - Q_0\colon x\in\set{X}\setminus\{0\}\}$. This further implies that, for small enough $\lambda_n$, $Q_n$ of the form given by \eqref{eq:system} is a valid output distribution, which, as can be seen in the proof of Theorem~\ref{thm:var}, guarantees \eqref{eq:var} to hold with equality. Along a different direction, we provide below a proof utilizing Theorem~\ref{thm:general}. \begin{IEEEproof}[Proof of Theorem~\ref{thm:equalvar}] We use Theorem~\ref{thm:general} to prove Theorem~\ref{thm:equalvar}. Let $\{\alpha_x\colon x\in\set{X}\setminus \{0\} \}$ be a nonnegative solution to \eqref{eq:system}, and let \begin{equation} A \triangleq \sum_{x\in\set{X}\setminus \{0\}} \alpha_x. \end{equation} Then the following constitutes a valid choice for $\tilde{P}$ in \eqref{eq:general}: \begin{equation}\label{eq:tildeP} \tilde{P}(x) = \frac{\alpha_x}{A},\quad x\in\set{X}\setminus \{0\}. \end{equation} The corresponding $\tilde{Q}$ is given by \begin{IEEEeqnarray}{rCl} \tilde{Q} & = & \sum_{x\in\set{X}\setminus \{0\}} \frac{\alpha_x}{A} W(\cdot|x)\nonumber\\ & = & Q_0+\frac{1}{A} \sum_{x\in\set{X}\setminus \{0\}} \alpha_x (W(\cdot|x) - Q_0)\nonumber\\ & = & Q_0 + \frac{\vect{s}}{A}. \label{eq:tildeQ} \end{IEEEeqnarray} We evaluate \eqref{eq:general} for this choice of $\tilde{P}$ to obtain a lower bound on $L$. We first compute the denominator, using \eqref{eq:tildeQ}: \begin{IEEEeqnarray}{rCl} \lefteqn{\sqrt{\displaystyle \frac{1}{2}\sum_{y\in\set{Y}} \frac{\big(\tilde{Q}(y) - Q_0(y)\big)^2}{Q_0(y)}}}~~~~~~~~~~ \nonumber\\ & = & \sqrt{\frac{1}{2A^2} \sum_{y\in\set{Y}} \frac{s(y)^2}{Q_0(y)} }\nonumber\\ & = & \sqrt{\frac{1}{2A^2} \sum_{y\in\set{Y}} Q_0(y) \left(\log \frac{Q^* (y)}{Q_0(y)} + C \right)^2}\nonumber\\ & = & \sqrt{\frac{1}{2A^2} \,\mathsf{var}_{Q_0} \left(\log\frac{Q^*(Y)}{Q_0(Y)}\right)}. 
\label{eq:equalvardeno} \end{IEEEeqnarray} We next compute the numerator: \begin{IEEEeqnarray}{rCl} \lefteqn{\sum_{x\in\set{X}\setminus\{ 0\}} \tilde{P}(x) D\left( \left.(W(\cdot|x) \right\| Q_0\right)}~~~\nonumber\\ & = & \sum_{x\in\set{X}\setminus\{ 0\}} \tilde{P}(x) \sum_{y\in\set{Y}} W(y|x) \log \frac{W(y|x)}{Q^*(y)} \nonumber\\ & & {} + \sum_{x\in\set{X}\setminus\{ 0\}} \tilde{P}(x) \sum_{y\in\set{Y}} W(y|x) \log\frac{Q^*(y)}{Q_0(y)}\nonumber\\ & = & \sum_{x\in\set{X}\setminus\{ 0\}} \tilde{P}(x) \cdot C + \frac{1}{A}\sum_{\substack{x\in\set{X}\setminus\{ 0\}\\y\in\set{Y}}} \alpha_x W(y|x) \log\frac{Q^*(y)}{Q_0(y)}\nonumber\\ & = & C + \frac{1}{A} \sum_{y\in\set{Y}} \log\frac{Q^*(y)}{Q_0(y)} \sum_{x\in\set{X}\setminus\{ 0\}} \alpha_x W(y|x)\nonumber\\ & = & C + \frac{1}{A} \sum_{y\in\set{Y}} \log\frac{Q^*(y)}{Q_0(y)} \bigl(A Q_0(y) + s(y)\bigr) \label{eq:equalvar61}\\ & = & C - D(Q_0\| Q^*) + \frac{1}{A} \sum_{y\in\set{Y}} s(y) \log\frac{Q^*(y)}{Q_0(y)} \nonumber\\ & = & C - C + \frac{1}{A} \sum_{y\in\set{Y}} Q_0(y) \log\frac{Q^*(y)}{Q_0(y)} \left( \log\frac{Q^*(y)}{Q_0(y)} + C \right)\nonumber\\ & = & \frac{1}{A} \mathsf{var}_{Q_0} \left(\log\frac{Q^*(y)}{Q_0(y)}\right), \label{eq:equalvarnume} \end{IEEEeqnarray} where \eqref{eq:equalvar61} follows from \eqref{eq:system}. Combining Theorem~\ref{thm:general}, \eqref{eq:equalvardeno}, and \eqref{eq:equalvarnume} yields \begin{equation}\label{eq:equalvar63} L \ge \sqrt{2\, \mathsf{var}_{Q_0} \left(\log\frac{Q^*(y)}{Q_0(y)}\right)}. \end{equation} Recalling Theorem~\ref{thm:var}, both \eqref{eq:var} and \eqref{eq:equalvar63} must hold with equality. \end{IEEEproof} \begin{example} A $k$-ary uniform-error channel. \end{example} Consider a channel with $\set{X}=\set{Y}=\{0,1,\ldots,k-1\}$ and \begin{equation}\label{eq:uniformerror} W(y|x) = \begin{cases} 1-p,& y=x\\ \displaystyle \frac{p}{k-1},&y\neq x\end{cases} \end{equation} where $p\in(0,1)$. Clearly, its capacity-achieving output distribution $Q^*$ is uniform. 
It is easy to check that \eqref{eq:system} has solution \begin{equation} \alpha_x= \frac{p(1-p) \bigl(\log((k-1)(1-p))-\log p\bigr)}{(k-1)(1-p)-p},\quad x\in\set{X}\setminus \{0\} \end{equation} which is nonnegative. We can hence use Theorem~\ref{thm:equalvar} to obtain \begin{equation} L = \sqrt{2 v(k,p)} \end{equation} where \begin{IEEEeqnarray}{rCl} v(k,p) & = & (1-p)\left( \log\frac{1}{1-p}\right)^2 + p \left(\log\frac{k-1}{p} \right)^2\nonumber\\ & & {}- \left( (1-p)\log\frac{1}{1-p}+p\log\frac{k-1}{p}\right)^2. \end{IEEEeqnarray} While one might speculate that \eqref{eq:equalvar} holds, for example, for all symmetric channels, this is, perhaps surprisingly, not the case. The following example demonstrates this. \begin{example}\label{ex:ternary} A ternary symmetric channel. \end{example} Consider a ternary symmetric channel where $\set{X}=\set{Y}=\{0,1,2\}$ and \begin{subequations} \begin{IEEEeqnarray}{rCl} W(\cdot| 0) & = & [ 0.37~~0.01~~0.62 ]\\ W(\cdot|1) & = & [ 0.62~~0.37~~0.01 ]\\ W(\cdot|2) & = & [ 0.01~~0.62~~0.37]. \end{IEEEeqnarray} \end{subequations} The right-hand side of \eqref{eq:equalvar} yields $0.66$ for this channel, but one can check that, in fact, $L=0.62$. This is because, as Fig.~\ref{fig:exponential} shows, the exponential family connecting $Q_0$ and $Q^*$ in the neighborhood of $Q_0$ does not lie in the set of possible output distributions $\mathsf{conv}\{W(\cdot|x)\colon x\in\set{X}\}$, or, roughly equivalently, $\vect{s}$ does not lie in the convex cone generated by $\{W(\cdot|x) - Q_0\colon x\in\set{X}\setminus\{0\}\}$. \begin{figure}[tbp] \center \includegraphics[width=0.5\textwidth]{exponential.pdf} \caption{The ternary symmetric channel in Example~\ref{ex:ternary}. The black triangle depicts the set of possible output distributions. The blue curves are the exponential families connecting the conditional output distributions and the capacity-achieving output distribution $Q^*$.
The exponential family connecting $Q_0$ and $Q^*$ (like the other two exponential families) has a part that lies outside the black triangle, which is why \eqref{eq:equalvar} does \emph{not} hold for this channel.} \label{fig:exponential} \end{figure} \section{AWGN Channels}\label{sec:AWGN} Consider an AWGN channel described by \begin{equation}\label{eq:AWGNmodel} Y = X+Z, \end{equation} where $X\in\Reals$ is the channel input, $Y\in\Reals$ is the channel output, and $Z\in\Reals$ has the zero-mean Gaussian distribution of variance $\sigma^2$, denoted $\Normal{0}{\sigma^2}$, and is independent of $X$. Let the ``off'' input symbol be $0$, so $Q_0$ is also $\Normal{0}{\sigma^2}$. The encoder and decoder generate a random code as in Section~\ref{sec:setup} subject to the LPD constraint \eqref{eq:LPD}, and $L$ is again defined as in \eqref{eq:defL}. Note that we do not impose any average- or peak-power constraint on the input, but imposing such constraints will not affect the value of $L$ due to the stronger LPD constraint \eqref{eq:LPD}.\footnote{The LPD constraint requires that the average input power tend to zero as $n$ tends to infinity, hence rendering any additional average-power constraint inactive. As for peak-power constraints, our choice of input distribution to achieve $L$ is zero-mean Gaussian with vanishing variance. The influence of cutting the tail of such a distribution to meet any peak-power constraint will vanish as $n$ tends to infinity.} \begin{theorem}\label{thm:AWGN} For an AWGN channel, \begin{equation}\label{eq:L1} L = 1\ \sqrt{\textnormal{nat}} \end{equation} irrespective of the noise power $\sigma^2$. \end{theorem} The proof of Theorem~\ref{thm:AWGN} is divided into the converse part and the achievability part, and is given below. \subsection{Converse for Theorem~\ref{thm:AWGN}} Examining the proof of Theorem~\ref{thm:IXY}, we see that its converse part is valid for the AWGN channel.
Hence \begin{equation}\label{eq:IXYAWGN} L \le \max_{\{P_n\}} \varliminf_{n\to\infty} \sqrt{\frac{n}{\delta}} I(P_n,W) \end{equation} where the maximum is taken over sequences of joint distributions on $(X,Y)\in \Reals\times\Reals$ induced by input distribution $P_n$ via the channel law $W$ resulting from the relation \eqref{eq:AWGNmodel}, such that the marginal distributions $Q_n$ for $Y$ satisfy \begin{equation}\label{eq:AWGNLPD} D(Q_n\|Q_0)\le \frac{\delta}{n}. \end{equation} Let the second moment of the distribution $P_n$ be denoted $\rho_n$. It is well known that the zero-mean Gaussian maximizes $I(P_n,W)$ among all distributions of the same second moment (see, e.g., \cite{coverthomas91}), so \begin{equation}\label{eq:AWGN65} I(P_n,W) \le \frac{1}{2}\log \left(1+\frac{\rho_n}{\sigma^2}\right). \end{equation} Because $X$ and $Z$ are independent, the second moment of the distribution $Q_n$ is $\rho_n+\sigma^2$, yielding \begin{IEEEeqnarray*}{rCl} D(Q_n\|Q_0) & = & -h(Q_n) + \E[Q_n]{\log\frac{1}{Q_0(Y)}}\\ & = & -h(Q_n) + \E[Q_n]{ \log \left( \sqrt{2\pi\sigma^2} \,e^{\frac{Y^2}{2\sigma^2}}\right)}\\ & = & -h(Q_n) + \frac{1}{2} \log \left(2\pi \sigma^2\right) + \E[Q_n]{\frac{Y^2}{2\sigma^2}}\\ & = & -h(Q_n) +\frac{1}{2} \log \left(2\pi \sigma^2\right) + \frac{\rho_n+\sigma^2}{2\sigma^2} \\ & \ge & -\frac{1}{2} \log \left(2\pi e (\rho_n+\sigma^2)\right) \\ & & {} +\frac{1}{2} \log \left(2\pi \sigma^2\right) + \frac{\rho_n+\sigma^2}{2\sigma^2} \\ & = & \frac{\rho_n}{2\sigma^2} -\frac{1}{2} \log\frac{\rho_n+\sigma^2}{\sigma^2}, \IEEEyesnumber\label{eq:AWGN66} \end{IEEEeqnarray*} where $h(\cdot)$ denotes the differential entropy, and where the inequality follows because the zero-mean Gaussian distribution maximizes differential entropy among all distributions of the same second moment.
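As a numerical sanity check of \eqref{eq:AWGN66}, note that the inequality is met with equality when $Q_n$ is the zero-mean Gaussian $\Normal{0}{\rho_n+\sigma^2}$. The following sketch (our own illustration; all function names are ours) compares the resulting closed form against a numerically integrated relative entropy:

```python
import math

def kl_closed_form(rho, sigma2):
    # rho/(2 sigma^2) - (1/2) log((rho + sigma^2)/sigma^2):
    # the right-hand side of the bound, attained when Q_n is Gaussian
    return rho / (2 * sigma2) - 0.5 * math.log((rho + sigma2) / sigma2)

def kl_numerical(rho, sigma2, steps=120_000, lim=30.0):
    # midpoint-rule approximation of D(N(0, rho+sigma2) || N(0, sigma2)),
    # working with log-densities to avoid underflow in the tails
    v1, v0 = rho + sigma2, sigma2
    dy = 2 * lim / steps
    total = 0.0
    for i in range(steps):
        y = -lim + (i + 0.5) * dy
        log_q1 = -y * y / (2 * v1) - 0.5 * math.log(2 * math.pi * v1)
        log_q0 = -y * y / (2 * v0) - 0.5 * math.log(2 * math.pi * v0)
        total += math.exp(log_q1) * (log_q1 - log_q0) * dy
    return total

# the closed form and the numerical integral agree
assert abs(kl_closed_form(0.3, 1.0) - kl_numerical(0.3, 1.0)) < 1e-5
```

Here $\rho_n = 0.3$ and $\sigma^2 = 1$ are arbitrary illustrative values; any positive pair behaves the same way.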
It follows from \eqref{eq:AWGN66} that, for $D(Q_n\|Q_0)$ to approach zero as $n$ tends to infinity, $\rho_n$ must tend to zero and \begin{equation} D(Q_n\|Q_0) \ge \frac{\rho_n^2}{4\sigma^4} + o(\rho_n^2). \end{equation} Combined with \eqref{eq:AWGNLPD}, this implies \begin{equation} \rho_n \le 2\sigma^2 \sqrt{\frac{\delta}{n}} + o(n^{-1/2}). \end{equation} Plugging this into \eqref{eq:AWGN65} we obtain \begin{IEEEeqnarray*}{rCl} I(P_n,W) & \le & \frac{1}{2}\log \left(1+\frac{\rho_n}{\sigma^2}\right)\\ & \le & \frac{\rho_n}{2\sigma^2}\\ & \le & \sqrt{\frac{\delta}{n}} + o(n^{-1/2}). \IEEEyesnumber\label{eq:AWGN68} \end{IEEEeqnarray*} Combining \eqref{eq:IXYAWGN} and \eqref{eq:AWGN68} yields \begin{equation} L \le 1. \end{equation} This concludes the proof of the converse part of Theorem~\ref{thm:AWGN}. \subsection{Achievability for Theorem~\ref{thm:AWGN}} The achievability proof of Theorem~\ref{thm:IXY} relies on the finiteness of the input and output alphabets and is therefore not applicable to the AWGN channel. Indeed, Theorem~\ref{thm:IXY} may not hold for a general continuous-alphabet channel. However, for the AWGN channel, we only need to prove an achievability result for Gaussian input distributions, which is much simpler than proving it for arbitrary input distributions. For blocklength $n$, we randomly generate a codebook such that every codeword is independent of every other codeword, and is IID $\Normal{0}{\rho_n}$ with \begin{equation} \rho_n \triangleq 2\sigma^2 \sqrt{\frac{\delta}{n}}. \end{equation} We first check that the LPD condition is met.
Indeed, the output sequence is IID $\Normal{0}{\rho_n+\sigma^2}$, so \begin{IEEEeqnarray*}{rCl} D\left(Q^n \left\| Q_0^{\times n} \right.\right) & = & n D\left(\Normal{0}{\rho_n+\sigma^2} \| \Normal{0}{\sigma^2}\right)\\ & = & n \left( \frac{\rho_n}{2\sigma^2} - \frac{1}{2} \log\frac{\rho_n+\sigma^2}{\sigma^2} \right)\\ & \le & n \left(\frac{\rho_n}{2\sigma^2} - \frac{1}{2} \left(\frac{\rho_n}{\sigma^2} - \frac{\rho_n^2}{2\sigma^4}\right)\right)\\ & = & \frac{n\rho_n^2}{4\sigma^4}\\ & = & \frac{n}{4\sigma^4} \cdot \left(2\sigma^2 \sqrt{\frac{\delta}{n}}\right)^2\\ & = & \delta, \IEEEyesnumber \end{IEEEeqnarray*} where for the inequality we use the fact \begin{equation}\label{eq:logineq} \log (1+a) \ge a - \frac{a^2}{2}, \quad a\ge 0. \end{equation} We next look at the maximum number of nats that can be reliably transmitted with this code. Similar to the DMC case, we can show that the sequence $\{K_n\}$ is achievable if \eqref{eq:Pliminf} holds, except that now $Q_n$ and $W$ are a density and a conditional density, respectively. The ratio between $W$ and $Q_n^{\times n}$ in \eqref{eq:Pliminf} can be evaluated as \begin{IEEEeqnarray*}{rCl} \lefteqn{\frac{W(y^n|x^n)}{Q_n^{\times n} (y^n)}}~~~~~~\\ & = & \frac{\displaystyle \prod_{i=1}^n \frac{1}{\sqrt{2\pi\sigma^2}} e^{-\frac{(y_i-x_i)^2}{2\sigma^2}}}{\displaystyle \prod_{i=1}^n \frac{1}{\sqrt{2\pi(\rho_n+\sigma^2)}} e^{-\frac{y_i^2}{2(\rho_n+\sigma^2)}}}\\ & = & \left(\frac{\rho_n+\sigma^2}{\sigma^2}\right)^\frac{n}{2} \exp \left( \frac{\sum_{i=1}^n y_i^2}{2(\rho_n+\sigma^2)} - \frac{\sum_{i=1}^n z_i^2}{2\sigma^2}\right).\IEEEyesnumber\IEEEeqnarraynumspace \end{IEEEeqnarray*} Hence \begin{IEEEeqnarray}{rCl} \frac{1}{\sqrt{n}} \log \frac{W(Y^n|X^n)}{Q_n^{\times n} (Y^n)} & = & \frac{\sqrt{n}}{2} \log \left(\frac{\rho_n + \sigma^2}{\sigma^2} \right) \nonumber\\ & & {} + \frac{1}{\sqrt{n}} \left( \frac{\sum_{i=1}^n Y_i^2}{2(\rho_n+\sigma^2)} - \frac{\sum_{i=1}^n Z_i^2}{2\sigma^2}\right).
\nonumber \\ \, \label{eq:yes73} \end{IEEEeqnarray} The mean of \eqref{eq:yes73} satisfies \begin{IEEEeqnarray*}{rCl} \lefteqn{ \E{\frac{1}{\sqrt{n}} \log \frac{W(Y^n|X^n)}{Q_n^{\times n} (Y^n)}}}~~~~\\ & = & \frac{\sqrt{n}}{2} \log \left(\frac{\rho_n+\sigma^2}{\sigma^2}\right) \\ & & {} + \frac{1}{\sqrt{n}} \left( \frac{\sum_{i=1}^n \E{Y_i^2}}{2(\rho_n+\sigma^2)} - \frac{\sum_{i=1}^n \E{Z_i^2}}{2\sigma^2}\right)\\ & = & \frac{\sqrt{n}}{2} \log \left(\frac{\rho_n+\sigma^2}{\sigma^2}\right) + 0 \\ & \ge & \frac{\sqrt{n}}{2} \left( \frac{\rho_n}{\sigma^2} - \frac{\rho_n^2}{2\sigma^4} \right)\\ & = & \sqrt{\delta} - \frac{\delta}{\sqrt{n}},\IEEEyesnumber \label{eq:AWGN75} \end{IEEEeqnarray*} where we again use \eqref{eq:logineq}. By \eqref{eq:AWGN75} we know that \begin{equation} \varliminf_{n\to\infty} \E{\frac{1}{\sqrt{n}} \log \frac{W(Y^n|X^n)}{Q_n^{\times n} (Y^n)}} \ge \sqrt{\delta}. \end{equation} It remains to show that \begin{equation}\label{eq:AWGNvartozero} \lim_{n\to\infty} \mathsf{var} \left(\frac{1}{\sqrt{n}} \log \frac{W(Y^n|X^n)}{Q_n^{\times n} (Y^n)} \right) = 0. \end{equation} Then, by Chebyshev's inequality, we can establish \begin{equation}\label{eq:AWGNliminf} P-\liminf_{n\to\infty} \frac{1}{\sqrt{n}} \log \frac{W(Y^n|X^n)}{Q_n^{\times n} (Y^n)} \ge \sqrt{\delta} \end{equation} and hence \begin{equation}\label{eq:AWGNachievability} L \ge 1. 
\end{equation} Using \eqref{eq:yes73}, the variance in \eqref{eq:AWGNvartozero} can be computed as: \begin{IEEEeqnarray*}{rCl} \lefteqn{ \mathsf{var} \left(\frac{1}{\sqrt{n}} \log \frac{W(Y^n|X^n)}{Q_n^{\times n} (Y^n)} \right)}~~~\\ & = & \mathsf{var}\left(\frac{1}{\sqrt{n}} \left( \frac{\sum_{i=1}^n Y_i^2}{2(\rho_n+\sigma^2)} - \frac{\sum_{i=1}^n Z_i^2}{2\sigma^2}\right) \right)\\ & = & \frac{1}{n} \sum_{i=1}^n \mathsf{var} \left( \frac{Y_i^2}{2(\rho_n+\sigma^2)} - \frac{Z_i^2}{2\sigma^2} \right)\\ & = & \mathsf{var} \left(\frac{Y^2}{2(\rho_n+\sigma^2)}-\frac{Z^2}{2\sigma^2} \right)\\ & = & \E{\left(\frac{Y^2}{2(\rho_n+\sigma^2)}-\frac{Z^2}{2\sigma^2} \right)^2}\\ & = & \frac{1}{4(\rho_n+\sigma^2)^2} \E{ \left(X^2+2XZ - \frac{\rho_n}{\sigma^2} Z^2 \right)^2}\\ & \le & \frac{1}{4\sigma^4}\E{ \left(X^2+2XZ - \frac{\rho_n}{\sigma^2} Z^2 \right)^2}.\IEEEyesnumber\label{eq:AWGN81} \end{IEEEeqnarray*} After expanding the square inside the expectation in \eqref{eq:AWGN81}, one can verify that the expectation of every summand tends to zero as $n$ tends to infinity, establishing \eqref{eq:AWGNvartozero}, and hence \eqref{eq:AWGNliminf} and \eqref{eq:AWGNachievability}, proving the achievability part of Theorem~\ref{thm:AWGN}. \section{Concluding Remarks}\label{sec:conclusion} A DMC in practice often represents discretization of a continuous-alphabet channel. For example, Figs.~\ref{fig:BSC0} and~\ref{fig:BSC} can result from two different discretizations of the same AWGN channel. In this sense, our results suggest that the optimal discretization may depend heavily on whether there is an LPD requirement or not. In practice, LPD communication systems of positive data rates often can be implemented even when the channel model does not seem to allow positive rates. Indeed, in such applications, the concern is often not that the transmitted signal should be sufficiently weak, but rather that it should have a wide spectrum and resemble white noise \cite{simon94}. 
We believe that one of the reasons why such systems may work is that realistic channels often have memory. For example, on a channel whose noise level varies with a coherence time that is longer than the length of a codeword, the transmitter and the receiver can use the adversary's ignorance of the actual noise level to communicate without being detected. One way to formulate this scenario is to assume that the channel has an unknown parameter that is fixed. This is discussed for the binary symmetric channel in \cite{chebakshichanjaggi14}. Further addressing this scenario is part of ongoing research. \section*{Acknowledgements} The authors thank Boulat Bash and Matthieu Bloch for helpful comments. \bibliographystyle{hieeetr}
\section{Introduction} The most common supervised task in machine learning is to learn a single-task, single-output prediction model. However, such a setting can be ill-adapted to some problems and applications. On the one hand, producing a single output can be undesirable when data is scarce and when producing reliable, possibly set-valued predictions is important (for instance in the medical domain where examples are very hard to collect for specific targets, and where predictions are used for critical decisions). Such an issue can be solved by using conformal prediction approaches~\cite{shafer2008tutorial}. It was initially proposed as a transductive online learning approach to provide set predictions (in the classification case) or interval predictions (in the case of regression) with a statistical guarantee depending on the probability of error tolerated by the user, but was then extended to handle inductive processes~\cite{papadopoulos2002inductive}. On the other hand, there are many situations where there are multiple, possibly correlated output variables to predict at once, and it is then natural to try to leverage such correlations to improve predictions. Such learning tasks are commonly called Multi-task in the literature~\cite{caruana1998dozen}. Most research work on conformal prediction for multi-task learning focuses on the problem of multi-label prediction~\cite{wang2015comparison,wang2020active}, where each task is a binary classification one. 
Conformal prediction for multi-target regression has been less explored, with only a few studies dealing with it: Kuleshov \emph{et al.}~\cite{kuleshov2018conformal} provide a theoretical framework to use conformal predictors within manifolds (e.g., to provide a mono-dimensional embedding of the multi-variate output), while Neeven and Smirnov~\cite{neeven2018conformal} use a straightforward multi-target extension of a conformal single-output $k$-nearest neighbor regressor~\cite{papadopoulos2011regression} to provide weather forecasts. However, the latter only ensures validity (i.e., well-calibrated outputs) for each individual target. Recently, we proposed a simple method to obtain approximate validity for the multi-variate prediction~\cite{messoudi2020conformal}, which generally provided overly conservative results. In this paper, we propose a new conformal prediction method fitted to multi-target regression that makes use of copulas~\cite{nelsen1999introduction} (a common tool to model dependence between multi-variate random variables) to provide valid multi-variate predictions. The interest of such a framework is that it remains very easy to apply while linking multi-variate conformal predictions to the theoretically sound framework of copulas. Experiments also show that it works quite well, and allows us to improve upon previous heuristics~\cite{messoudi2020conformal}. Section~\ref{sec:confMTR} provides a general overview of our problem: a brief introduction to conformal prediction and multi-target regression will be presented in Sections~\ref{sec:conform} and~\ref{sec:mtr}, before discussing the problem of applying conformal prediction to the multi-target regression setting in Section~\ref{sec:cpmtr}.
We will then present our setting in Section~\ref{sec:Cop-Conf-MTR}: we will first recall the needed basic principles and theorems of copulas in Section~\ref{sec:copulas}, before detailing our conformal multi-target approach in Section~\ref{sec:conform_multi}. The experiments and their results are described in Section~\ref{sec:expe}. \section{Inductive conformal prediction (ICP) for Multi-Target Regression} \label{sec:confMTR} This section recalls the basics of inductive conformal regression and multi-target regression, before introducing the issues we will tackle in this paper. \subsection{Inductive conformal regression} \label{sec:conform} In regression tasks, conformal prediction provides a statistical guarantee on the predictions by producing an interval prediction instead of a point prediction. By statistical guarantee, it is meant that the set-valued predictions cover the true value with a given frequency, i.e., they are calibrated. It was first introduced as a transductive online learning approach~\cite{gammerman2013learning} and then adapted to the inductive framework~\cite{papadopoulos2002inductive} where one uses a model induced from training examples to get conformal predictions for the new instances. The two desirable features in conformal regressors are (a) \textit{validity}, i.e. the error rate does not exceed $\epsilon $ for each chosen confidence level $1 - \epsilon $, and (b) \textit{efficiency}, meaning prediction intervals are as small as possible. Let $\lbag z_1=(x_1, y_1), z_2=(x_2, y_2), \dots , z_{n}=(x_{n}, y_{n})\rbag$ be the successive pairs of an object $x_i \in X$ and its real-valued label $y_i \in \mathbb{R}$, which constitute the observed examples. Assuming that the underlying random variables are exchangeable (a weaker condition than i.i.d.), we can predict $y_{n+1} \in \mathbb{R}$ for any new object $x_{n+1} \in X$ by following the inductive conformal framework.
The first step consists of splitting the original data set $Z = \lbag z_1, \dots, z_{n} \rbag$ into a \textit{training set} $Z^{tr} = \lbag z_1, \dots, z_l\rbag$ and a \textit{calibration set} $Z^{cal} = \lbag z_{l+1}, \dots, z_{n}\rbag$, with $|Z^{cal}|= n - l$. Then, an \textit{underlying algorithm} is trained on $Z^{tr}$ to obtain the \textit{non-conformity measure} $A_l$, a measure that evaluates the strangeness of an example compared to other examples of a bag, called the non-conformity score. Hence, we can calculate the non-conformity score ${\alpha}_k$ for an example $z_k$ compared to the other examples in the bag $\lbag z_1, \dots, z_l \rbag$ with ${\alpha}_k = A_l(\lbag z_1, \dots , z_l \rbag , z_k)$. By computing the non-conformity score ${\alpha}_i$ for each example $z_i$ of $Z^{cal}$ using this equation, we get the sequence ${\alpha}_{l+1}, \ldots , {\alpha}_{n}$. When making a prediction for a new example $x_{n+1}$, we use the underlying algorithm to associate to any possible prediction $\hat{y}$ its non-conformity score ${\alpha}^{\hat{y}}_{n+1}$, and calculate its \textit{p-value} which indicates the proportion of less conforming examples than $z_{n+1}$, with: \begin{equation} p(\hat{y}_{n+1}) = \frac{|\{ i = l+1, \dots , n, n+1 : {\alpha}_i \geq {\alpha}^{\hat{y}}_{n+1} \}|}{n - l + 1}. \label{eqconfo} \end{equation} The final step before producing the conformal prediction consists of choosing the \textit{significance level} $\epsilon \in (0, 1)$ to get a prediction set with a \textit{confidence level} of $1 - \epsilon$, which is the statistical guarantee of coverage of the true value $y_{n+1}$ by the interval prediction $\hat{\mathbf{y}}_{n+1}$ such that $$\hat{\mathbf{y}}_{n+1}=\{ \hat{y}_{n+1} \in \mathbb{R}: p(\hat{y}_{n+1}) > \epsilon\}.$$ The most basic non-conformity measure in a regression setting is the absolute difference between the actual value $y_i$ and the predicted value $\hat{y}_i$ by the underlying algorithm. 
The non-conformity score is then calculated as follows: \begin{equation} {\alpha}_i = |y_i - \hat{y}_i|. \label{eqbase} \end{equation} The sequence of non-conformity scores ${\alpha}_{l+1}, \ldots , {\alpha}_{n}$ for all examples in $Z^{cal}$ is obtained and sorted in descending order. Then, we compute the index of the $(1-\epsilon)$-percentile non-conformity score ${\alpha}_{s}$, based on the chosen significance level $\epsilon$, such that: \begin{equation} \mathbb{P}(|y_i - \hat{y_i}| \leq \alpha _s ) \geq 1 - \epsilon. \end{equation} Finally, the prediction interval for each new example $x_{n+1}$, which covers the true output $y_{n+1}$ with probability $1 - \epsilon$, is calculated as: \begin{equation} \hat{\mathbf{y}}_{n+1}=[{\hat{y}}_{n+1} - {\alpha}_{s}, {\hat{y}}_{n+1} + {\alpha}_{s}]. \end{equation} The drawback of this standard non-conformity measure is that all prediction intervals are equally sized ($2{\alpha}_{s}$) for a given confidence level. Adopting a \textit{normalized} non-conformity measure instead provides personalized individual bounds for each new example by scaling the standard non-conformity measure with ${\sigma}_{i}$, a term that estimates the difficulty of predicting $y_i$. This means that using a \textit{normalized} non-conformity measure gives a smaller prediction interval for ``easy'' examples, and a bigger one for ``hard'' examples. Thus, two distinct examples with the same ${\alpha}_{s}$ calculated by~\eqref{eqbase} will have two different interval predictions depending on their difficulty. In this case, the normalized non-conformity score is as follows: \begin{equation} {\alpha}_i = \frac{|y_i - \hat{y}_i|}{{\sigma}_{i}}. \label{eqnorm} \end{equation} Thus, we have: \begin{equation}\label{eq:epsilon_mono} \mathbb{P}\left( \frac{|y_i - \hat{y_i}|}{\sigma _i } \leq \alpha _s \right) \geq 1 - \epsilon, \end{equation} which becomes an equality if the method is perfectly calibrated.
For a new example $x_{n+1}$, the prediction interval becomes: \begin{equation} \hat{\mathbf{y}}_{n+1}=\left[ {\hat{y}}_{n+1} - {\alpha}_{s}{\sigma}_{n+1}, {\hat{y}}_{n+1} + {\alpha}_{s}{\sigma}_{n+1}\right]. \end{equation} The value ${\sigma}_{i}$ can be defined in various ways. A popular approach proposed by Papadopoulos and Haralambous~\cite{papadopoulos2011reliable} consists of training a small neural network to estimate the error of the underlying algorithm by predicting the value ${\mu}_i = \ln(|y_i - \hat{y}_i|)$. In this case, the non-conformity score is defined as: \begin{equation} {\alpha}_i = \frac{|y_i - \hat{y}_i|}{\exp({\mu}_{i})+\beta}, \label{eqnormexp} \end{equation} where $\beta \geq 0$ is a sensitivity parameter. With the significance level $\epsilon$, we have: \begin{equation} \mathbb{P}\left( \frac{|y_i - \hat{y_i}|}{\exp({\mu}_{i})+\beta} \leq \alpha _s \right) \geq 1 - \epsilon. \end{equation} For a new example $x_{n+1}$, the prediction interval is: \begin{equation} \hat{\mathbf{y}}_{n+1}=\left[{\hat{y}}_{n+1} - {\alpha}_{s}(\exp({\mu}_{n+1})+\beta), {\hat{y}}_{n+1} + {\alpha}_{s}(\exp({\mu}_{n+1})+\beta)\right]. \end{equation} Other approaches use different algorithms to normalize the non-conformity scores, such as regression trees~\cite{johansson2018interpretable} and $k$-nearest neighbors~\cite{papadopoulos2011regression}. Before introducing the problem of multi-target regression, let us first note that, assuming that our method is well-calibrated and that $|y_i - \hat{y_i}|/\sigma _i$ is associated with a random variable $Q$,~\eqref{eq:epsilon_mono} can be rewritten as \begin{equation}\label{eq:cumul_mono}\mathbb{P}( Q \leq \alpha _s ) = 1 - \epsilon := F_{Q}(\alpha_s),\end{equation} which will be instrumental when dealing with copulas and multi-variate outputs later on. Also note that this means that specifying a confidence level $\epsilon$ uniquely defines a value $\alpha_s$.
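The split-and-calibrate procedure described above can be sketched in a few lines. The following is a minimal illustration of the standard (non-normalized) case, written by us rather than taken from the cited works: compute absolute-residual scores \eqref{eqbase} on the calibration set, take the $(1-\epsilon)$-percentile score $\alpha_s$ with the usual $+1$ conformal correction, and return $[\hat{y}_{n+1} - \alpha_s, \hat{y}_{n+1} + \alpha_s]$:

```python
import math

def conformal_interval(cal_y_true, cal_y_pred, y_pred_new, epsilon):
    """Standard (non-normalized) inductive conformal interval.

    Scores are |y - y_hat| on the calibration set; alpha_s is the
    score at the (1 - epsilon)-percentile, with the usual +1
    conformal correction on the calibration-set size.
    """
    scores = sorted(abs(y - p) for y, p in zip(cal_y_true, cal_y_pred))
    n_cal = len(scores)
    # rank guaranteeing coverage at least 1 - epsilon
    k = min(math.ceil((1 - epsilon) * (n_cal + 1)), n_cal)
    alpha_s = scores[k - 1]
    return y_pred_new - alpha_s, y_pred_new + alpha_s

# toy calibration set with hand-picked residuals, 80% confidence
cal_true = [float(i) for i in range(1, 11)]
residuals = [0.5, -1.0, 2.0, -0.3, 1.5, -2.5, 0.1, 3.0, -0.8, 1.0]
cal_pred = [y + r for y, r in zip(cal_true, residuals)]
lo, hi = conformal_interval(cal_true, cal_pred, y_pred_new=5.0, epsilon=0.2)
assert (lo, hi) == (2.5, 7.5)  # alpha_s is the 9th smallest score, 2.5
```

The normalized variant \eqref{eqnorm} only changes the score to $|y_i - \hat{y}_i|/\sigma_i$ and rescales the interval half-width by $\sigma_{n+1}$.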
\subsection{Multi-target regression (MTR)} \label{sec:mtr} In multi-target regression, the feature space $X$ is the same as in standard regression, but the target space $Y \subset \mathbb{R}^m$ is made of $m$ real-valued targets. This means that observations are i.i.d. pairs $(x_i, y_i)$ drawn from a probability distribution on $X \times Y$, where each instance $x_i \in X$ is associated with an $m$-dimensional real-valued target $y_i = (y_i^1, \ldots , y_i^m) \in Y$. The usual objective of multi-target regression is then to learn a predictor $h: X \rightarrow Y$, i.e. to predict multiple outputs based on the input features characterizing the data set, which generalizes standard regression. There are two distinct approaches to MTR, called \textit{algorithm adaptation} and \textit{problem transformation} methods. In \textit{algorithm adaptation} approaches, standard single-output regression algorithms are extended to the multi-target regression problem. Many models have been adapted to the MTR problem, such as Support Vector Regressors~\cite{sanchez2004svm}, regression trees~\cite{de2002multivariate}, kernel methods~\cite{baldassarre2012multi} and rule ensembles~\cite{aho2009rule}. In \textit{problem transformation}, one usually decomposes the initial multi-variate problem into several simpler problems, thus allowing the use of standard regression methods without the need for an adaptation that can be tricky or computationally costly. A prototypical example of such a transformation is the chaining method~\cite{spyromitros2016multi}, where one predicts each target sequentially, using the outputs and predictions of previous targets as inputs for the next one, thus capturing some correlations between the targets. As our goal here is not to produce a new MTR method, but rather to propose a flexible means to make their predictions reliable through conformal prediction, we will not review those methods in more detail.
The reader interested in different methods can consult for instance~\cite{spyromitros2016multi}. Let us simply mention that exploiting the possible relationships between targets generally improves the performance of such methods~\cite{ruder2017overview,caruana1993multitask}. We will now detail how conformal prediction and MTR can be combined. \subsection{Inductive conformal prediction for Multi-Target Regression} \label{sec:cpmtr} As said before, previous studies on conformal MTR focused on providing valid and efficient inferences target-wise~\cite{neeven2018conformal}, thus neglecting the potential advantages of exploiting target relations. Our main goal in this paper is to provide a simple conformal MTR method that does exploit them. Within the MTR setting, we have a multi-dimensional output $\{ Y^1 , \ldots , Y^m\}$ (we will use superscripts to denote the dimensions, and subscripts to denote sample indices), with $Y^j \in \mathbb{R}$, $j \in \{ 1, \ldots , m \}$, the $m$ individual real-valued targets. Let $\underline{\hat{y}}_{n+1}^j,\overline{\hat{y}}_{n+1}^j$ be respectively the lower and upper bounds of the interval prediction given by the non-conformity measure for each target $Y^j$ given a new instance $x_{n+1}$. We define the hyper-rectangle $[\hat{\mathbf{y}}_{n+1}]$ as the following Cartesian product: \begin{equation}\label{eq:vol_hyper} [\hat{\mathbf{y}}_{n+1}]=\times_{j=1}^m [\underline{\hat{y}}_{n+1}^j,\overline{\hat{y}}_{n+1}^j]. \end{equation} This hyper-rectangle has volume $\prod_{j=1}^m (\overline{\hat{y}}_{n+1}^j - \underline{\hat{y}}_{n+1}^j)$, and a global prediction $y_{n+1}$ of a new example $x_{n+1}$ must belong to it in order to be valid, i.e. each single prediction $y_{n+1}^j$ for each individual target $Y^j$ should be between the bounds $\underline{\hat{y}}^j_{n+1},\overline{\hat{y}}^j_{n+1}$ of its interval prediction.
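To make the definition concrete, here is a toy sketch (not the paper's implementation) of the hyper-rectangle volume and of the component-wise validity check:

```python
import numpy as np

def volume(lower, upper):
    """Volume of the hyper-rectangle: product of the m interval widths."""
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    return float(np.prod(upper - lower))

def covers(y, lower, upper):
    """A multi-target prediction is valid iff every component y^j
    lies inside its own target-wise interval."""
    y = np.asarray(y, float)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    return bool(np.all((lower <= y) & (y <= upper)))
```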
With this view, the objective of the conformal prediction framework for MTR in the normalized setting is to satisfy a global significance level $\epsilon_g$ required by the user such that: \begin{equation} \mathbb{P}(y_{n+1} \in [\hat{\mathbf{y}}_{n+1}]) \geq 1 - \epsilon_g. \end{equation} This probability can also be written as follows: \begin{gather} \mathbb{P}(y_{n+1}^1 \in [\underline{y_{n+1}^1}, \overline{y_{n+1}^1}], \ldots , y_{n+1}^m \in [\underline{y_{n+1}^m}, \overline{y_{n+1}^m}]) \nonumber\\ = \mathbb{P}\left(\frac{|y_{n+1}^1 - \hat{y}_{n+1}^1|}{\sigma_{n+1}^1} \leq \alpha^1_s , \ldots , \frac{|y_{n+1}^m - \hat{y}_{n+1}^m|}{\sigma_{n+1}^m} \leq \alpha^m_s \right) \geq 1 - \epsilon_g. \end{gather} Thus, we need to find the individual non-conformity scores $\alpha^1_s , \ldots , \alpha^m_s$, defined for instance by target-wise significance levels $\epsilon_j$, such that we ensure a global confidence level $1 - \epsilon_g$. Extending~\eqref{eq:cumul_mono} and considering the random variables $Q^j = |y^j - \hat{y}^j|/\sigma ^j$, $j \in \{ 1, \ldots , m \}$, we get: \begin{equation} \mathbb{P}(Q^1 \leq \alpha^1_s , \ldots , Q^m \leq \alpha^m_s ) \geq 1 - \epsilon_g. \label{eqprobacop} \end{equation} Should we know the joint distribution in~\eqref{eqprobacop}, and therefore the dependence relations between target predictions, it would be relatively easy to get the individual significance levels\footnote{Note that there may be multiple choices for such individual levels. Here we will fix them to be equal for simplicity.} $\epsilon_j$ associated to the individual non-conformity scores $\alpha^j_s$ such that we satisfy the chosen confidence level $1 - \epsilon_g$. Yet, such a joint distribution is usually unknown. The next section proposes a simple and efficient way to estimate it, leveraging the connection between~\eqref{eqprobacop} and copulas.
Before doing that, note again that under the assumption that we are well calibrated, we can transform~\eqref{eqprobacop} into \begin{equation} F(\alpha^1_s , \ldots ,\alpha^m_s ) = 1 - \epsilon_g, \label{eq:probacopcum} \end{equation} where $F$ denotes here the joint cumulative distribution induced by $\mathbb{P}$. \section{Copula-based conformal Multi-Target Regression} \label{sec:Cop-Conf-MTR} This section introduces our approach to obtain valid and efficient conformal predictions in the multi-variate regression setting. We first recall some basics of copulas, referring to Nelsen~\cite{nelsen1999introduction} for a full introduction, before detailing how we apply them to conformal approaches. \subsection{Overview on copulas} \label{sec:copulas} A copula is a mathematical function that can describe the dependence between multiple random variables. The term ``copula'' was first introduced by Sklar~\cite{sklar1959fonctions} in his famous theorem, now known as Sklar's theorem, which is one of the foundations of copula theory. However, such tools had already been used before, for instance in Fr{\'e}chet's paper~\cite{frechet1951tableaux} and H{\"o}ffding's work~\cite{hoffding1940masstabinvariante, hoeffding1941masstabinvariante} (reprinted as~\cite{hoeffding1994scale}). Copulas are popular in statistics and finance~\cite{embrechts2002correlation}, but they are nowadays increasingly used in other domains as well, such as hydrology~\cite{favre2004multivariate}, medicine~\cite{nikoloulopoulos2008multivariate}, and machine learning~\cite{liu2019copula}. Let $\mathbf{Q} = (Q^1, \ldots , Q^m)$ be an $m$-dimensional random vector composed of the random variables $Q^1, \ldots , Q^m$. Let its cumulative distribution function (c.d.f.) be $F = F_Q : \mathbb{R}^m \rightarrow [0, 1]$. This c.d.f. carries two important pieces of information: \begin{itemize} \item The c.d.f. of each random variable $Q^j$ s.t.
$F_j(q^j) = \mathbb{P}(Q^j \leq q^j)$, for all $j \in \{1,\ldots m\}.$ \item The dependence structure between them. \end{itemize} The objective of copulas is to isolate the dependence structure from the marginals $Q^j$ by transforming them into uniformly distributed random variables $U^j$ and then expressing the dependence structure between the $U^j$'s. In other words, an $m$-dimensional copula $C: [0, 1]^m \rightarrow [0, 1]$ is a c.d.f. with standard uniform marginals. It is characterized by the following properties: \begin{enumerate} \item $C$ is grounded, i.e. if $u^j = 0$ for at least one $j \in \{1,\ldots , m\}$, then $C(u^1, \ldots , u^m) = 0 $. \item $C$ has uniform marginals, i.e. for all $j \in \{1,\ldots , m\}$ and all $u^j \in [0, 1]$, if all arguments of $C$ are equal to 1 except the $j$-th one, then $C(1, \ldots, 1, u^j , 1, \ldots, 1) = u^j$. \item $C$ is $m$-increasing, i.e., for all $\mathbf{a}, \mathbf{b} \in [0, 1]^m$ with $\mathbf{a} \leq \mathbf{b}$ : \begin{equation} {\Delta}_{(\mathbf{a},\mathbf{b}]}C = \sum_{j \in \{0, 1\}^m} (-1)^{\sum_{k=1}^{m} j_k}C(a_1^{j_1}b_1^{1-j_1},\dots, a_m^{j_m}b_m^{1-j_m}) \geq 0. \nonumber \end{equation} \end{enumerate} The last inequality simply ensures that the copula is a well-defined c.d.f. inducing non-negative probability for every event. The idea of copulas is based on probability and quantile transformations~\cite{mcneil2015quantitative}. Using the latter, we can see that every multivariate distribution function embeds a copula, and that we can combine univariate marginal distributions with a suitable copula to produce a multivariate distribution function. This is described in Sklar's theorem~\cite{sklar1959fonctions} as follows: \begin{theorem}[Sklar's theorem] For any $m$-dimensional cumulative distribution function (c.d.f.)
$F$ with marginal distributions $F_1,\dots, F_m$, there exists a copula $C: [0,1]^m \rightarrow [0,1]$ such that: \begin{equation} F(\mathbf{q})=F(q^1, \ldots ,q^m) = C(F_1(q^1), \ldots , F_m(q^m)), \quad\mathbf{q} \in \mathbb{R}^m. \label{eqsklar} \end{equation} If $F_j$ is continuous for all $j \in \{1, \ldots , m\}$, then $C$ is unique. \end{theorem} Denoting the pseudo inverse of $F_j$ as $F^{\leftarrow}_j$~\cite{mcneil2015quantitative}, we can get from~\eqref{eqsklar} that \begin{equation} C(\mathbf{u})=C(u^1, \ldots ,u^m) = F(F^{\leftarrow}_1 (u^1), \ldots , F^{\leftarrow}_m (u^m)). \end{equation} There are a few notable copulas, among which are: \begin{itemize} \item the product copula: $\Pi (\mathbf{u}) = \prod_{j=1}^{m} u^j$; \item the Fr{\'e}chet-H{\"o}ffding upper bound copula\footnote{$M$ is a copula for all $m \geq 2$.}: $M(\mathbf{u}) = \min_{1 \leq j \leq m}\{u^j\}$; \item the Fr{\'e}chet-H{\"o}ffding lower bound copula\footnote{$W$ is a copula if and only if $m = 2$.}: $W(\mathbf{u}) = \max\{\sum_{j=1}^{m} u^j - m + 1, 0\}$. \end{itemize} While the product copula corresponds to classical stochastic independence, the Fr{\'e}chet-H{\"o}ffding bound copulas play an important role as they correspond to extreme cases of dependence~\cite{schmidt2007coping}. Indeed, any $m$-dimensional copula $C$ is such that $W(\mathbf{u}) \leq C(\mathbf{u}) \leq M(\mathbf{u}), \mathbf{u} \in [0, 1]^m.$ Another important class of copulas is that of so-called Archimedean copulas, which are based on generator functions $\phi$ of specific kinds. More precisely, a continuous, strictly decreasing, convex function $\phi : [0, 1] \rightarrow [0, \infty]$ satisfying $\phi (1) = 0$ is known as an Archimedean copula generator. It is known as a strict generator if $\phi (0) = \infty$. The generated copula is then given by \begin{equation} C(u^1, \ldots ,u^m) = {\phi}^{[-1]}(\phi (u^1) + \ldots + \phi (u^m)).
\label{eqmultiarchicop} \end{equation} Table~\ref{archifamtb} provides examples and details of three one-parameter Archimedean copula families~\cite{mcneil2015quantitative}, which are particularly convenient in estimation problems (being based on a single parameter). \begin{table}[ht] \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline Family & Generator $\phi (t)$ & $\theta$ range & Strict & Lower & Upper \\ \hline Gumbel~\cite{gumbel1960distributions} & $(- \ln t)^{\theta}$ & $\theta \geq 1$ & Yes & $\Pi$ & $M$ \\ Clayton~\cite{genest1993statistical} & $\frac{1}{\theta}(t^{-\theta } - 1)$ & $\theta \geq - 1$ & $\theta \geq 0$ & $W$ & $M$ \\ Frank~\cite{frank1979simultaneous} & $-\ln\left(\frac{e^{- \theta t} - 1}{e^{- \theta} - 1}\right)$ & $\theta \in \mathbb{R}$ & Yes & $W$ & $M$ \\ \hline \end{tabular} \end{center} \caption{Archimedean copula families.} \label{archifamtb} \end{table} \subsection{Copula-based conformal Multi-Target Regression} \label{sec:conform_multi} Let us now revisit our previous problem of finding the significance levels $\epsilon_j$ for each target so that the hyper-rectangle prediction $[\hat{\mathbf y}]$ covers the true value with confidence $1-\epsilon_g$. Let us first consider~\eqref{eq:probacopcum}. Following Sklar's theorem, we have \begin{align*} F(\alpha^1_s , \ldots ,\alpha^m_s ) & = C(F_1(\alpha^1_s), \ldots , F_m(\alpha^m_s)) & \\ &= C(1-\epsilon^1, \ldots , 1-\epsilon^m)& \\ &= 1-\epsilon_g & \end{align*} where the second line is obtained from~\eqref{eq:epsilon_mono}. Clearly, if we knew the copula $C$, then we could search for values $\epsilon_j$ providing the desired global confidence. A major issue is then to obtain or estimate the copula modelling the dependence structure between the targets and their confidence levels. As copulas are classically estimated from multi-variate observations, a simple means that we will use here is to estimate it from the non-conformity scores generated from the calibration set $Z^{cal}$.
Namely, if $\alpha_i^j$ is the non-conformity score corresponding to the $j^{th}$ target of the $z_i$ example of $Z^{cal}$ for $i \in \{l+1, \ldots , n\}$, we simply propose to estimate a copula $C$ from the matrix \begin{equation} A = \begin{bmatrix} \alpha_{l+1}^1 & \alpha_{l+1}^2 & \dots \\ \vdots & \ddots & \\ \alpha_{n}^1 & & \alpha_{n}^m \end{bmatrix}. \end{equation} \subsection{On three specific copulas} \label{sec:3-cop} We will now provide some detail about the copulas we performed experiments on. They have been chosen to go from the one requiring the most assumptions to the one requiring the least assumptions. \subsubsection{The Independent copula} The Independent copula means that the $m$ targets are considered independent, with no relationship between them. It is a strong assumption, but it does not require any estimation of the copula. In this case, \eqref{eqprobacop} becomes: \begin{align} \Pi(F_1(\alpha ^1_s), \ldots , F_m(\alpha ^m_s)) &= \prod_{j=1}^{m} F_j(\alpha ^j_s) = \prod_{j=1}^{m} \mathbb{P}(Q^j \leq \alpha ^j_s) \nonumber\\ &\geq \prod_{j=1}^{m} (1 - \epsilon ^j) = 1 - \epsilon_g. \nonumber \end{align} If we assume that all $\epsilon ^1 , \ldots , \epsilon ^m$ are equal to the same value $\epsilon_t$, then: \begin{equation} \prod_{j=1}^{m} (1 - \epsilon ^j) = (1 - \epsilon_t)^m = 1 - \epsilon_g. \nonumber \end{equation} Thus, we simply obtain \begin{equation} \epsilon_t = 1 - \sqrt[m]{1 - \epsilon_g}. \label{eqcorrepsilon} \end{equation} This individual significance level $\epsilon_t$ is then used to calculate the different non-conformity scores $\alpha ^j_s$ for each target in the multi-target regression problem for the Independent copula. \subsubsection{The Gumbel copula} The Gumbel copula is a member of the Archimedean copula family which depends on only one parameter, and in this sense is a good representative of parametric copulas.
It corresponds to the generator $\phi(t) = (- \ln t)^{\theta}$, whose inverse is ${\phi}^{[-1]}(s) = \exp\left(-s^{1/\theta}\right)$; applying them to~\eqref{eqmultiarchicop} results in the expression \begin{equation} C^\theta_G(F_1(\alpha^1_s), \ldots , F_m(\alpha^m_s)) = \exp\left(-\left(\sum _{j = 1}^{m} \left(- \ln F_j(\alpha^j_s)\right)^{\theta}\right)^{1/ \theta}\right). \label{eqgumbel} \end{equation} In this case, we need to estimate the parameter $\theta$. Since the marginals $F_j(\alpha^j)$ are unknown, we also need to estimate them. In our case, we will simply use the empirical c.d.f. induced by the non-conformity scores $\alpha_i^j$ of matrix $A$. An alternative would be to also assume a parametric form of the $F_j$, but this seems in contradiction with the very spirit of non-conformity scores. In particular, we will denote by $\hat{F}_j$ the empirical cumulative distribution such that \begin{equation*} \hat{F}_j(\beta)=\frac{|\{\alpha^j_i:\alpha^j_i \leq \beta, i \in \{l+1,\ldots,n\}\}|}{n-l}, \quad \beta \in \mathbb{R}. \end{equation*} The parameter $\theta$ can then be estimated from matrix $A$ using the Maximum Pseudo-Likelihood Estimator~\cite{hofert2019elements} with a numerical optimization, for instance by using the Python library ``copulae''\footnote{https://pypi.org/project/copulae/}. Once this is obtained, we then get for a particular choice of $\epsilon_j$ that \begin{align}\label{eqgumbel2} C_G^{\hat{\theta}} & = \exp\left(-\left(\sum _{j=1}^m \left(- \ln (1-\epsilon_j)\right)^{\hat{\theta}}\right)^{1/ {\hat{\theta}}}\right) \\ & = \exp\left(-\left(\sum _{j=1}^m \left(- \ln F_j(\alpha^j_s)\right)^{\hat{\theta}}\right)^{1/{\hat{\theta}}}\right), \end{align} and we can then search for values $\epsilon_j$ that make this expression equal to $1-\epsilon_g$, using the estimates $\hat{F}_j$.
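The search for a common level $\epsilon_t$ can be sketched as follows, both for the product copula (closed form) and for a Gumbel copula restricted to its diagonal, where a bisection is justified by monotonicity. This is an illustrative sketch: the parameter $\theta$ would come from the pseudo-likelihood estimation, and all names are ours, not the paper's.

```python
import math

def independent_epsilon_t(epsilon_g, m):
    """Product copula: solve (1 - eps_t)^m = 1 - eps_g."""
    return 1.0 - (1.0 - epsilon_g) ** (1.0 / m)

def gumbel_epsilon_t(epsilon_g, m, theta, tol=1e-12):
    """Bisection on eps_t so that C_G(1-eps_t, ..., 1-eps_t) = 1 - eps_g
    for a Gumbel copula with parameter theta >= 1."""
    def diag(u):
        # Gumbel copula on the diagonal: exp(-(m * (-ln u)^theta)^(1/theta)).
        return math.exp(-((m * (-math.log(u)) ** theta) ** (1.0 / theta)))
    lo, hi = 0.0, 1.0 - 1e-12
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        # diag(1 - eps_t) decreases as eps_t grows: keep coverage >= target.
        if diag(1.0 - mid) >= 1.0 - epsilon_g:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Taking $\theta = 1$ recovers the independent solution, consistently with the Gumbel family's lower bound being $\Pi$.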
The solution is especially easy to obtain analytically if we consider that $\epsilon^1=\ldots=\epsilon^m=\epsilon_t$, as we then have that $$\epsilon_t= 1 - (1 - \epsilon_g)^{1/\sqrt[\theta]{m}},$$ and one can then obtain the corresponding non-conformity scores $\alpha^1_s , \ldots , \alpha^m_s$ by replacing $F_j$ by $\hat{F}_j$. We chose this particular family of Archimedean copulas because its lower bound is the Independent copula (as seen in Table~\ref{archifamtb}). We can easily verify this by taking $\hat{\theta} = 1$. Thus, we can capture independence if it is verified, and otherwise search in the direction of positive dependence. One reason for such a choice is that previous experiments~\cite{messoudi2020conformal} indicate that the product copula gives overly conservative results. \subsubsection{The Empirical copula} Parametric copulas, like all parametric models, have the advantage of requiring less data to be well estimated, at the possibly important cost of inducing some bias in the estimation, a bias that is likely to grow as the number of targets increases. The Empirical copula provides a non-parametric way of estimating the copula directly from the observations~\cite{ruschendorf1976asymptotic,ruymgaart1978asymptotic}. It is defined as follows~\cite{hofert2019elements}: \begin{equation} C_{E}(\mathbf{u}) = \frac{1}{n-l}\sum _{i=l+1}^{n}\mathbbm{1}_{\mathbf{u}_{i}\leq \mathbf{u}} = \frac{1}{n-l}\sum _{i=l+1}^{n} \prod _{j=1}^{m}\mathbbm{1}_{u_{i}^j\leq u^j}, \quad\mathbf{u} \in [0, 1]^m, \label{eqempcop} \end{equation} where $\mathbbm{1}_A$ is the indicator function of event $A$, and the inequalities $\mathbf{u}_{i}\leq \mathbf{u}$ for $i \in \{l+1, \ldots , n\}$ need to be understood component-wise.
The $\mathbf{u}_{i}$ are the pseudo-observations that replace the unknown marginal distributions, defined as: \begin{equation} \mathbf{u}_{i} = (u_{i}^1, \ldots , u_{i}^m) = (\hat{F}_{1}(\alpha_{i}^1), \ldots , \hat{F}_{m}(\alpha_{i}^m)), \quad i \in \{l+1, \ldots , n\}, \label{pseudoobs} \end{equation} where the distributions $\hat{F}_j$ are defined as before. Simply put, the Empirical copula corresponds to taking the empirical joint cumulative distribution as our joint probability. We then have that \begin{equation} C_E(F_1(\alpha^1_s), \ldots , F_m(\alpha^m_s)) = \frac{1}{n-l}\sum _{i=l+1}^{n} \prod _{j=1}^{m}\mathbbm{1}_{u_{i}^j\leq F_j(\alpha^j_s)}. \label{empcopexp} \end{equation} Using that $F_j(\alpha^j_s)=1-\epsilon_j$, we can then search for values of $\epsilon_j$, $j=1,\ldots,m$, that make~\eqref{empcopexp} equal to $1-\epsilon_g$. Note that in this case, even assuming that $\epsilon^1=\ldots=\epsilon^m=\epsilon_t$ will require an algorithmic search, which is however easy since $C_E$ is non-decreasing in each argument, meaning that we can use a simple dichotomic search. \section{Evaluation} \label{sec:expe} In this section, we describe the experimental setting (underlying algorithm, data sets and performance metrics) and the results of our study. \subsection{Experimental setting} We choose to work with a deep neural network as the underlying algorithm. We keep the same underlying algorithm for all non-conformity measures, since our focus is to compare the three copula functions chosen to get the different non-conformity scores. To compute the non-conformity scores over the calibration set, we use the normalized non-conformity score given by~\eqref{eqnormexp} as described in~\cite{papadopoulos2011reliable}, and predict ${\mu}_i = \ln(|y_i - \hat{y}_i|)$ simultaneously for all targets by a single multivariate multi-layer perceptron. In this case, ${\mu}_i $ represents the estimation of the underlying algorithm's error.
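For concreteness, the matrix $A$ of per-target normalized calibration scores used throughout could be computed as below (a sketch; the array names are illustrative and the arrays hold one row per calibration example, one column per target):

```python
import numpy as np

def score_matrix(Y_cal, Y_cal_pred, Mu_cal, beta=0.1):
    """Per-target normalized non-conformity scores alpha_i^j,
    with sigma_i^j = exp(mu_i^j) + beta as the difficulty estimate."""
    return np.abs(Y_cal - Y_cal_pred) / (np.exp(Mu_cal) + beta)
```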
As mentioned before, the approach can be combined with any conformal regression method. Experiments are conducted on normalized data with a mean of 0 and a standard deviation of 1 to simplify the deep neural network optimization, with a 10-fold cross-validation to avoid the impact of biased results, and with a calibration set equal to $10\%$ of the training examples for all data sets. We take the value $\beta = 0.1$ for the sensitivity parameter and do not optimize it when calculating the normalizing coefficient ${\mu}_i$. After getting the proper training data $(X^{tr}, Y^{tr})$, calibration data $(X^{cal}, Y^{cal})$ and test data $(X^{ts}, Y^{ts})$ for each fold, we follow the steps described below: \begin{enumerate} \item Train the underlying algorithm (a deep neural network) on the proper training data $(X^{tr}, Y^{tr})$. Its architecture is composed of a first dense layer applied to the input with ``selu'' activation (scaled exponential linear units~\cite{klambauer2017self}), three hidden dense layers with dropouts and ``selu'' activation, and a final dense layer with $m$ outputs and a linear activation. \item Predict $\hat{Y}^{cal}$ and $\hat{Y}^{ts}$ for calibration and test data respectively using the underlying algorithm. \item Train the normalizing multi-layer perceptron on the proper training data $(X^{tr}, \mu_{tr} = \ln (|Y^{tr} - \hat{Y}^{tr}|))$, corresponding to the error estimation of the underlying algorithm. The normalizing MLP consists of three hidden dense layers with ``selu'' activation and dropouts and a final dense layer with $m$ outputs for predicting all targets simultaneously. \item Predict $\mu_{cal}$ and $\mu_{ts}$ for calibration and test data respectively using the normalizing MLP. \item If needed, get an estimation\footnote{In the case of the Gumbel copula, we use a Maximum Pseudo-Likelihood Estimator with a numerical optimization using the BFGS algorithm.} of the copula $C$ from the matrix $A$ of calibration non-conformity scores.
\item For each global significance level $\epsilon_g$: \begin{itemize} \item Get the individual significance level $\epsilon_j = \epsilon_t$ for $j \in \{ 1, \ldots , m\}$ and calculate $\alpha _s = \{\alpha^1_s , \ldots , \alpha^m_s\}$ for all targets using calibration data, according to the methods mentioned in Section~\ref{sec:3-cop}. \item Get the interval predictions for the test data with: \begin{equation} \left[{\hat{Y}}^{ts} - {\alpha}_{s}(\exp({\mu}_{ts})+\beta), {\hat{Y}}^{ts} + {\alpha}_{s}(\exp({\mu}_{ts})+\beta)\right]. \end{equation} \end{itemize} \end{enumerate} \begin{remark}\label{rem:ident_conf}We choose $\epsilon_j = \epsilon_t$ for $j \in \{ 1, \ldots , m\}$ as we have no indication that individual targets should be treated with different degrees of cautiousness. However, since copulas are functions from $[0,1]^m$ to $[0,1]$, there is in principle no problem in considering different confidence degrees for different targets, if an application calls for it. How to determine and elicit such degrees is however, to our knowledge, an open question. \end{remark} The implementation was done using Python and Tensorflow. The copula part of our experiments was based on the book~\cite{hofert2019elements} and the Python library ``copulae''.
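The Empirical-copula search for $\epsilon_t$ in the last step can be sketched as follows (our own sketch, assuming no ties among the scores of a given target; the bisection is justified by $C_E$ being non-decreasing in each argument):

```python
import numpy as np

def empirical_epsilon_t(A, epsilon_g, tol=1e-6):
    """Common per-target level eps_t such that the Empirical copula,
    estimated from the calibration score matrix A (n_cal x m),
    satisfies C_E(1-eps_t, ..., 1-eps_t) ~= 1 - eps_g."""
    n, m = A.shape
    # Pseudo-observations u_i^j = F_hat_j(alpha_i^j), i.e. ranks / n
    # (assuming no ties among the scores of a given target).
    U = (np.argsort(np.argsort(A, axis=0), axis=0) + 1.0) / n
    def c_emp(u):
        # C_E(u, ..., u): fraction of rows whose components are all <= u.
        return float(np.mean(np.all(U <= u, axis=1)))
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if c_emp(1.0 - mid) >= 1.0 - epsilon_g:
            lo = mid      # coverage still high enough: eps_t can grow
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For comonotone targets the search returns $\epsilon_t \approx \epsilon_g$, while independent targets give $\epsilon_t \approx 1 - \sqrt[m]{1-\epsilon_g}$, matching the two extreme cases discussed earlier.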
\begin{table} \begin{center} \begin{tabular}{|l|l|l|l|} \hline \textbf{Names} & \textbf{Examples} & \textbf{Features} & \textbf{Targets} \\ \hline music origin~\cite{zhou2014predicting} & 1059 & 68 & 2 \\ \hline indoor loc~\cite{torres2014ujiindoorloc} & 21049 & 520 & 3 \\ \hline scpf~\cite{tsoumakas11a} & 1137 & 23 & 3 \\ \hline sgemm~\cite{nugteren2015cltune} & 241600 & 14 & 4 \\ \hline rf1~\cite{tsoumakas11a} & 9125 & 64 & 8 \\ \hline rf2~\cite{tsoumakas11a} & 9125 & 576 & 8 \\ \hline scm1d~\cite{tsoumakas11a} & 9803 & 280 & 16 \\ \hline scm20d~\cite{tsoumakas11a} & 8966 & 61 & 16 \\ \hline \end{tabular} \caption{Information on the used multi-target regression data sets.} \label{tabledatasets} \end{center} \end{table} We use eight data sets with different numbers of targets and varying sizes. They are summarized in Table~\ref{tabledatasets}. \subsection{Results} This section presents the results of our experiments, investigating in particular the validity and efficiency of the proposed approaches. Figures~\ref{fig:fig0} and~\ref{fig:fig2} detail these results for ``music origin'' and ``sgemm''. The figures for all other data sets can be found in~\ref{sec:appendix1}. To verify the validity of each non-conformity measure, we calculate the accuracy of each one and compare it with the calibration line. This line represents the case where the error rate is exactly equal to $\epsilon_g$ for a confidence level $1 - \epsilon_g$, which is the desired outcome of using conformal prediction. In multi-target regression, the accuracy is computed based on whether the observation $y$ belongs to the hyper-rectangle $[\hat{\mathbf{y}}]$ or not, depending on the significance level $\epsilon_g$. Thus, for a correctly predicted example, each individual component $y^j$ for each target $Y^j$ must lie in its corresponding interval prediction.
Concretely, for each considered confidence level $\epsilon_g$ and test example $x \in X^{ts}$, we obtain a prediction $[\hat{\mathbf{y}}]_{\epsilon_g}$. From this, we can compute the empirical validity as the percentage of times that $[\hat{\mathbf{y}}]_{\epsilon_g}$ contains the true observed value, i.e., $$\frac{\sum_{(x,y) \in Z^{ts}} \mathbbm{1}_{y \in [\hat{\mathbf{y}}]_{\epsilon_g}}}{|Z^{ts}|}.$$ Doing it for several values of $\epsilon_g$, we obtain a calibration curve that should be as close as possible to the identity function. \begin{figure} \centering \begin{subfigure}[b]{0.49\textwidth} \includegraphics[width=\textwidth]{Images/all_empirical_validity_music_origin.eps} \caption{Empirical validity} \label{fig:01} \end{subfigure} % \begin{subfigure}[b]{0.49\textwidth} \includegraphics[width=\textwidth]{Images/Hyperrectangle_box_plot_music_origin.eps} \caption{Hyper-rectangle median volume} \label{fig:02} \end{subfigure} \caption{Results for music origin.} \label{fig:fig0} \end{figure} \begin{figure} \centering \begin{subfigure}[b]{0.49\textwidth} \includegraphics[width=\textwidth]{Images/all_empirical_validity_sgemm.eps} \caption{Empirical validity} \label{fig:3} \end{subfigure} % \begin{subfigure}[b]{0.49\textwidth} \includegraphics[width=\textwidth]{Images/Hyperrectangle_box_plot_sgemm.eps} \caption{Hyper-rectangle median volume} \label{fig:4} \end{subfigure} \caption{Results for sgemm.} \label{fig:fig2} \end{figure} The results of the error rate or accuracy curves are shown in sub-figure a of each figure for the Independent, Gumbel and Empirical multivariate non-conformity measures. The outcomes clearly show that the best performance is obtained by using the Empirical copula, where the model is well calibrated. For most of the studied data sets, the Empirical copula accuracy curve is almost perfectly aligned with the calibration line, and thus almost exactly valid. 
This is due to the fact that the Empirical copula is estimated non-parametrically from the observations, which enables the model to better adapt to the dependence structure of each data set. This dependence structure is neglected when using an Independent copula-based non-conformity measure, as the $m$ targets are treated as if they were independent, and so the link between them is not exploited when computing $\epsilon_t$. This also means that the difference between the Empirical and the Independent copula-based non-conformity measures is larger when there is a strong dependence between the non-conformity scores, and is an indication of the strength of this dependence. For instance, we can deduce that the targets are strongly related for ``sgemm'' from the large gap between the Independent and Empirical accuracy curves (sub-figure~\ref{fig:3}). For the Gumbel copula, the accuracy curve is generally closer to the calibration line than the one for the Independent copula. This supports the existence of a dependence structure between the targets, since the lower bound of the Gumbel copula is the Independent copula, which means that if the targets were in fact independent, the two curves would perfectly match. This can be seen in sub-figure~\ref{fig:01} for ``music origin'', where the accuracy curves almost overlap everywhere, meaning that the targets are likely to be independent. From the empirical validity results, we also noticed that the Empirical copula non-conformity measure can sometimes be slightly invalid (sub-figure~\ref{fig:21} for ``scpf''). We attribute this to the smaller number of examples, in which case one could use a more regularized form than the Empirical copula.
However, when a lot of examples are available (for instance, more than 20000 observations for ``sgemm''), the validity curve of the Empirical copula non-conformity measure is perfectly aligned with the calibration line, meaning that this measure is exactly valid (sub-figure~\ref{fig:3}). In single-output regression, efficiency is measured by the size of the intervals, and a method is all the more efficient as its predicted intervals are small. To assess efficiency in multi-target regression, we can simply compute the volume of the obtained predictions $[\hat{\mathbf{y}}]_{\epsilon_g}$, following~\eqref{eq:vol_hyper}. For each experiment, we then compute the median value of those hyper-rectangle volumes (for the estimation to be robust against very large hyper-rectangles). Efficiency results are shown in sub-figure b for all data sets for $\epsilon_g = 0.1$. They show that, in general, the Independent copula yields a larger median hyper-rectangle volume compared to the Gumbel and Empirical copulas, especially in those cases where the existence of a dependence structure is confirmed by the calibration curves. This is due to the fact that using an Independent copula ignores the dependence between the non-conformity scores, which leads to over-conservative individual significance levels and thus to an over-sized global hyper-rectangle. This impact is avoided when using the Empirical copula, because it takes advantage of the dependence structure to construct better interval predictions. Another remark concerning efficiency is that the box plots for the Empirical copula are tighter than those of the other two, which shows that the values are homogeneous across folds, whereas for the Independent copula, for instance, the variation is much more visible. The empirical validity and hyper-rectangle median volume results are summarized in Tables~\ref{tabresval} and~\ref{tabreseff}. The validity simply provides the average difference between a perfect calibration (the identity function) and the observed curve for each copula.
This means, in particular, that a negative value indicates that the observed frequency is on average below the specified confidence degree. \begin{table} \begin{center} \begin{adjustbox}{width={\textwidth},totalheight={\textheight},keepaspectratio}% \begin{tabular}{|l|c|c|c|} \hline Data sets & Independent & Gumbel & Empirical \\ \hline music origin & $7.06 \times 10^1 \pm 5.12$ & $8.48 \times 10^1 \pm 5.72$ & $\mathbf{2.90 \times 10^1 \pm 5.48}$ \\ \hline indoor loc & $2.99 \pm 1.17$ & $2.00 \pm 1.28$ & $\mathbf{3.24 \times 10^{-1} \pm 1.28}$ \\ \hline scpf & $9.04 \pm 5.07$ & $2.73 \pm 5.64$ & $\mathbf{-1.42 \pm 4.16}$ \\ \hline sgemm & $2.54 \times 10^1 \pm 1.00$ & $3.26 \pm 6.53 \times 10^{-1}$ & $\mathbf{-1.35 \times 10^1 \pm 3.00 \times 10^{-1}}$ \\ \hline rf1 & $5.60 \pm 1.59$ & $3.46 \pm 1.56$ & $\mathbf{-9.35 \times 10^{-3} \pm 1.51}$ \\ \hline rf2 & $6.09 \pm 1.86$ & $2.19 \pm 2.27$ & $\mathbf{-3.61 \times 10^{-1} \pm 2.14}$ \\ \hline scm1d & $1.44 \times 10^1 \pm 1.82$ & $1.03 \times 10^1 \pm 2.98$ & $\mathbf{-7.03 \times 10^{-1} \pm 2.32}$ \\ \hline scm20d & $1.68 \times 10^1 \pm 1.43$ & $1.02 \times 10^1 \pm 2.35$ & $\mathbf{-1.34 \pm 2.25}$ \\ \hline \end{tabular} \end{adjustbox} \caption{Validity (average gap between the empirical validity curve and the calibration line in percentage) summarized results for all data sets.} \label{tabresval} \end{center} \end{table} \begin{table} \begin{center} \begin{adjustbox}{width={\textwidth},totalheight={\textheight},keepaspectratio}% \begin{tabular}{|l|c|c|c|} \hline Data sets & Independent & Gumbel & Empirical \\ \hline music origin & $\mathbf{1.97 \times 10^1 \pm 2.99}$ & $3.19 \times 10^1 \pm 1.73 \times 10^1$ & $2.90 \times 10^1 \pm 1.39 \times 10^1$ \\ \hline indoor loc & $1.70 \times 10^{-1} \pm 5.12 \times 10^{-2}$ & $9.54 \times 10^{-2} \pm 2.04 \times 10^{-2}$ & $\mathbf{8.69 \times 10^{-2} \pm 1.86 \times 10^{-2}}$ \\ \hline scpf & $5.10 \pm 5.31$ & $3.06 \pm 3.7$ & $\mathbf{2.39 \pm 3.67}$ \\ \hline
sgemm & $1.17 \times 10^{-3} \pm 5.69 \times 10^{-4}$ & $2.56 \times 10^{-4} \pm 1.95 \times 10^{-4}$ & $\mathbf{2.20 \times 10^{-4} \pm 1.60 \times 10^{-4}}$ \\ \hline rf1 & $1.18 \times 10^{-2} \pm 1.52 \times 10^{-2}$ & $5.52 \times 10^{-3} \pm 1.05 \times 10^{-2}$ & $\mathbf{3.61 \times 10^{-3} \pm 6.00 \times 10^{-3}}$ \\ \hline rf2 & $2.56 \times 10^{-3} \pm 1.87 \times 10^{-3}$ & $7.48 \times 10^{-4} \pm 8.44 \times 10^{-4}$ & $\mathbf{7.00 \times 10^{-4} \pm 8.48 \times 10^{-4}}$ \\ \hline scm1d & $3.49 \times 10^4 \pm 2.89 \times 10^4$ & $1.28 \times 10^4 \pm 1.20 \times 10^4$ & $\mathbf{1.15 \times 10^3 \pm 1.22 \times 10^3}$ \\ \hline scm20d & $5.43 \times 10^6 \pm 4.43 \times 10^6$ & $1.80 \times 10^5 \pm 2.15 \times 10^5$ & $\mathbf{4.14 \times 10^4 \pm 6.66 \times 10^4}$ \\ \hline \end{tabular} \end{adjustbox} \caption{Efficiency (hyper-rectangle median volume for $\epsilon_g = 0.1$) summarized results for all data sets.} \label{tabreseff} \end{center} \end{table} The numbers confirm our previous observations on the graphs, as the average gap is systematically higher for the Independent copula and lower for the Empirical one, with Gumbel in-between. We note, however, that while the Empirical copula provides the best results, it is also often a bit under the calibration line, indicating that if conservativeness is to be sought, one may prefer the Gumbel copula. Much the same conclusions hold for efficiency, with the Empirical copula giving the best results and the Independent one the worst. \section{Conclusion and discussion} In this paper, we provided a simple and flexible way to obtain valid conformal predictions in a multi-variate regression setting. We did so by exploiting a link between non-conformity scores and copulas, a commonly used tool to model multi-variate distributions.
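As a concrete illustration of how a copula turns a global confidence level $1-\epsilon_g$ into per-dimension confidence levels, the copula diagonal can be inverted in closed form for the two parametric families used above. The sketch below is illustrative only; the function names and the Gumbel parameter $\theta$ are our own choices, not part of the experimental protocol:

```python
def indep_level(eps_g, d):
    # Independent copula diagonal: C(u, ..., u) = u^d = 1 - eps_g
    return (1.0 - eps_g) ** (1.0 / d)

def gumbel_level(eps_g, d, theta):
    # Gumbel copula diagonal: C(u, ..., u) = u^(d^(1/theta)) = 1 - eps_g;
    # theta = 1 recovers the Independent copula
    return (1.0 - eps_g) ** (d ** (-1.0 / theta))

# With eps_g = 0.1 (as in the efficiency table) and d = 2 output dimensions:
u_ind = indep_level(0.1, 2)        # ~0.949 per dimension
u_gum = gumbel_level(0.1, 2, 2.0)  # ~0.928, between 0.9 and the independent level
```

Stronger positive dependence (larger $\theta$) lets each marginal be calibrated at a level closer to $1-\epsilon_g$ itself, and thus yields tighter per-dimension intervals, which is one way to read the efficiency gains reported in Table~\ref{tabreseff}.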
Experiments on various data sets for a small choice of representative copulas show that the method indeed allows us to improve upon the naive independence assumption. Those first results indicate in particular that while simple parametric copulas may provide valid results for some data sets, more complex copulas may be needed in general to obtain well calibrated predictions, at the cost that good estimates of such copulas require a lot of calibration data. As future lines of work, we would like to explore further the flexibility of our framework, for instance by exploring the possibility of using vines~\cite{joe2011dependence} to model complex dependencies, or by proposing protocols allowing one to obtain $\epsilon_g$ from different individual, user-defined confidence degrees, building on our Remark~\ref{rem:ident_conf}. Finally, while we mostly focused on multi-variate regression in the present paper, it would be interesting to try to extend the current approach to other multi-task settings, such as multi-label problems. A possibility could be to make such problems continuous, as proposed for instance by Liu~\cite{liu2019copula}. \section{Acknowledgments} This research was supported by the UTC foundation. \bibliographystyle{unsrt}
\section{introduction} Carbon nanotube and graphene-based nanomechanical resonators (GNMRs) have attracted significant attention from the scientific community; see the recent reviews of Refs.~\onlinecite{EkinciKL,ArlettJL,EomK,BartonRA}. Experiments have demonstrated that single-walled carbon nanotube nanomechanical resonators can serve as a mass sensor that is capable of detecting individual atoms or molecules~\cite{JensenK,LassagneB,ChiuHY}, which is made possible by the high stiffness and low mass of carbon nanotubes. Graphene also possesses high stiffness~\cite{LeeCG,JiangJW2009,JiangJW2010}, but may prove to be a superior mass sensor to nanotubes due to its significantly larger surface area on which molecules can attach. The performance of GNMR mass sensors is closely related to their quality (Q)-factor, which reflects the energy that is lost during the mechanical oscillation of the resonator. In the first reported study of GNMRs, Bunch \emph{et al.} observed very low ($<100$) Q-factors for GNMRs working in the megahertz range~\cite{BunchJS}. In a later experiment, the same group reported a dramatic increase of the Q-factor at lower temperatures, with the Q-factors reaching up to 9000 at 10 K for GNMRs produced from the chemical vapor deposition growth method~\cite{Zande}. Chen {\it et al.} also found that the Q-factor of GNMRs increases with decreasing temperature, and reaches $10^{4}$ at 5 K~\cite{ChenC}. More recently, Eichler {\it et al.}~\cite{EichlerA} found that the Q-factors of GNMRs can reach values of $10^{5}$, which is very close to the theoretical Q-factor upper bound of about $10^{6}$ that has been predicted for GNMRs with all edges fixed coupled with idealized mechanical actuation~\cite{KimSY,JiangJW2012}. Because a key sensing objective for GNMRs is to enable detection of individual molecules or atoms, it is critical to determine methods of enhancing their mass sensitivity.
For example, Eichler {\it et al.} have shown that it is possible to increase the resonant frequency (and thus the mass sensitivity) of GNMRs by driving their mechanical oscillation into the nonlinear regime, which was explained using a continuum mechanics model~\cite{EichlerA}. Using similar continuum models, several groups have theoretically studied the nonlinear effect on the mass sensitivity of different nanoresonators~\cite{BuksE,DaiMD,AtalayaJ}, with only a single, recent study related to graphene~\cite{eom2012}. Besides nonlinear resonance, mechanical strain is another possible method to improve the mass sensitivity of GNMRs, as the resonant frequency can be enhanced by mechanical tension~\cite{KimSY2008,KimSYapl,DaiMD,KimSYnanotechnology,QiZ}. In this letter, we investigate the utility of inducing nonlinear oscillations to enhance the mass sensitivity of GNMRs using classical molecular dynamics (MD) simulations. Our simulations show that the adsorption-induced frequency shift resulting from a single Au atom can be enhanced by increasing the actuation energy, when the actuation energy parameter $\alpha$ is below a critical value $\alpha_{c}=2.0875\pm 0.0125$. For actuation energies above the critical value, the adsorbed Au atom exhibits significant diffusion, which reduces the Q-factor of the GNMR by more than three orders of magnitude and significantly degrades the mass sensitivity. In contrast to the widely used continuum mechanics explanation, we show that the nonlinear frequency enhancement results from the `effective strain' that is induced in the oscillating GNMR.
We derive an analytic expression for the effective strain $\epsilon_{\alpha}= \frac{3}{4}\pi^{2}\alpha\frac{E_{k}^{0}}{m\omega^{2}L^{2}}$ (here $E_{k}^{0}$ is the kinetic energy, $m$ the mass, $\omega$ the angular frequency, and $L$ the length of the resonator) that enables us to directly link the equivalence of applied mechanical tensile strain and the strain induced by nonlinear oscillations to the resonant frequencies of the GNMRs. \section{simulation details} The graphene sample in our simulations has dimensions $(L_{x}, L_{y}) = (34, 38)$~{\AA}, and is composed of 504 carbon atoms. The atoms at the $+x$ and $-x$ ends of the GNMR are fixed, while periodic boundary conditions are applied in the $y$ direction. The interactions of the carbon atoms are described by the Brenner (REBO-II) potential~\cite{Brenner}. For the cases where a single Au atom is adsorbed on the GNMR, the interaction between the Au atom and the GNMR is modeled by a Lennard-Jones potential with length parameter $\sigma$=2.9943~{\AA} and energy parameter $\epsilon$=0.02936~{eV}~\cite{KimSYnanotechnology}. The Newtonian equations of motion are integrated in time using the velocity Verlet algorithm with a time step of 1 fs. Our simulations are performed as follows. First, the Nos\'e-Hoover\cite{Nose,Hoover} thermostat is applied to thermalize the system to a constant temperature of 4.2~{K} within the NVT ensemble, which is run for $10^{5}$ MD steps. The mechanical oscillation of the resonator is then actuated by adding a velocity distribution to the system, which follows the morphology of the first flexural vibrational mode of graphene~\cite{JiangJW2012}. The imposed velocity distribution, or actuation energy, is $\Delta E=\alpha E_{k}^{0}$, where $E_{k}^{0}$ is the total kinetic energy in the GNMR after thermalization but just before its actuation and $\alpha$ is the actuation energy parameter.
After the actuation energy is applied, the system is allowed to oscillate freely within the NVE ensemble for $2^{19}$ MD steps. The data from the NVE ensemble is used to analyze the mechanical oscillation of the GNMR. \section{results} During the free vibration period, the energy in the GNMR switches between kinetic and potential energy. The frequency of the switching is $2f$, with $f$ being the frequency of mechanical oscillation. Fig.~\ref{fig_fft_alpha} shows the resonant peaks that are obtained by taking a Fourier transformation of the time history of the kinetic energy; the resonant peaks are used to extract the resonant frequency $f$ at liquid helium temperature (4.2 K) for GNMRs with different actuation energies $\alpha=$ 0.1, 0.5, 1.0, 1.5, and 2.0, where for reference, $\alpha=1$ means that the actuation energy equals the total kinetic energy in the GNMR after thermalization. It should be noted that the amplitude in Fig.~\ref{fig_fft_alpha} is large because of the long simulation time. We focus on the liquid helium temperature because this temperature is commonly utilized in experiments involving GNMRs (e.g., 90~{mK} or 4.0~{K} in Ref.~\onlinecite{EichlerA}). Fig.~\ref{fig_f_au_pure} shows the resonant frequency as a function of actuation energy $\alpha$. Panel (a) shows that the resonant frequency is enhanced by increasing $\alpha$. This enhancement is due to the increasingly nonlinear behavior of the GNMR induced by the increasing oscillation amplitude. Adsorption of a single Au atom on top of the GNMR causes a considerable reduction of the resonant frequency, due to the resulting increase in effective mass of the GNMR. This frequency shift is what is measured experimentally to detect the adsorbed mass, and it is seen in Panel (b) that the magnitude of the frequency shift increases by a factor of three for large actuation energies ($\alpha\approx2.5$) as compared to actuation with a small $\alpha$.
Panel (b) shows that the frequency shift can be increased by applying larger actuation energies, which also results in increased mass sensitivity. However, as shown in the same figure, the frequency shift does not increase monotonically, and a decrease with increasing actuation energy is observed when $\alpha$ is large. The reduction in the frequency shift with increasing $\alpha$ is determined to be a result of diffusion of the adsorbed Au atom, which was previously observed to occur at GNMR temperatures exceeding about 30~{K}~\cite{KimSYnanotechnology}. To confirm the diffusion mechanism, we calculate in Fig.~\ref{fig_meanfreepath} the mean free path for the Au atom as a function of $\alpha$, where a sharp jump in the mean free path is observed at a critical value of actuation energy $\alpha_{c}=2.0875\pm0.0125$. For $\alpha<\alpha_{c}$, the mean free path of the Au atom is around 0.5~{\AA/ps} with small fluctuations. In contrast, for $\alpha>\alpha_{c}$, the mean free path increases to a value around 1.0~{\AA/ps}, which implies diffusion of the Au atom. The top left inset of Fig.~\ref{fig_meanfreepath} shows resonant curves for smaller actuation energies $\alpha=$ 0.1 and 1.5. These smooth curves provide evidence that the mechanical oscillation of the GNMR is the only vibrational mode in the system. The bottom right inset of Fig.~\ref{fig_meanfreepath} shows a significant amount of diffusion-induced noise in the resonant curves for large actuation energies $\alpha=$ 4.0 and 10.0. The noise corresponds to other vibrations that are induced by the diffusion of the Au atom. Fig.~\ref{fig_trajectory} illustrates the trajectory history of the Au atom with $\alpha=$ 1.0, 2.075, 2.1, and 4.0. The diffusion is clearly observed for actuation energies above the critical value $\alpha_{c}$. The thermal noise provides important energy damping channels for the mechanical oscillation of the GNMR.
As a result, the Q-factor of the GNMR is greatly reduced as shown in Fig.~\ref{fig_quality_factor_alpha}. There is almost no energy dissipation in the GNMR for actuation energies below $\alpha_{c}$, leading to extremely high Q-factors. For actuation energies above the critical value, the Q-factor is reduced by more than three orders of magnitude. The time history of the kinetic energy is shown in Fig.~\ref{fig_quality_factor_alpha}(b) for $\alpha=2.075$ and in Fig.~\ref{fig_quality_factor_alpha}(c) for $\alpha=2.1$, from which the Q-factor has been extracted~\cite{JiangJW2012}. Fig.~\ref{fig_quality_factor_absorbate} shows the Q-factor vs. the actuation energy parameter $\alpha$ for graphene nanomechanical resonators with three different adsorbates. In these three systems, the adsorbate mass $m$ and the Lennard-Jones interaction parameter $\epsilon$ are ($m$, $\epsilon$), ($m/2$, $\epsilon$), and ($m$, $\epsilon/2$). For the first system ($m$, $\epsilon$), the mass of the adsorbate is $m=197$ and the Lennard-Jones potential parameter is $\epsilon=0.02936$~{eV}, which are the actual values for the Au adsorbate. For the second system ($m/2$, $\epsilon$), the mass of the adsorbate is reduced by half to $m=99$ while $\epsilon$ (the interaction strength between the adsorbate and graphene) remains unchanged. In the third system ($m$, $\epsilon/2$), the mass of the adsorbate is unchanged at $m=197$ while the interaction strength $\epsilon$ between the adsorbate and graphene is reduced by half. In all three systems, there is a sharp decrease in the Q-factor at different values of the actuation energy parameter $\alpha$, which results from the diffusion of the adsorbate. These results also follow physical intuition. For example, diffusion happens at smaller $\alpha$ for adsorbates that have a weaker bonding strength with graphene.
In contrast, diffusion happens at larger $\alpha$ for adsorbates with less mass, since those atoms have a smaller kinetic energy at the same temperature as larger mass adsorbates, and therefore more energy via the nonlinear actuation parameter $\alpha$ is needed to induce diffusion for the smaller mass atoms. These results may serve as useful guidelines for experimentalists to verify our theoretical predictions, as different types of adsorbates (of different mass and/or different interaction strengths with graphene) are usually observed experimentally. \section{discussion} The above discussion has established that the mass sensitivity of the GNMR can be enhanced by driving the mechanical oscillations into the nonlinear regime using a larger actuation energy. The remainder of this article will be devoted to explaining the mechanism that enables the nonlinear-induced enhancement of the resonant frequency, and thus of the mass sensitivity. As illustrated in Fig.~\ref{fig_cfg}, the GNMR is initially flat (gray points on the horizontal line) with length $L$. After exciting the mechanical oscillations via the actuation energy $\Delta E=\alpha E_{k}^{0}$, the GNMR oscillates with amplitude $A=\sqrt{\frac{2\Delta E}{m\omega^{2}}}$, where $\omega=2\pi f$ is the angular frequency of the mechanical oscillation and $m$ is the total mass of the system. In the derivation of the oscillation amplitude, the thermal vibrations at 4.2~{K} have been ignored due to the very low temperature conditions. We focus on a particular point in the GNMR, i.e., the atom at the midpoint, which oscillates as $u=A\sin\omega t$. The mean oscillation amplitude of this point is $\sqrt{<u^{2}>}=A/\sqrt{2}$, and so this point can effectively be regarded as a stationary point located at $A/\sqrt{2}$. Similarly, the shape of the oscillating GNMR is equivalent to a stationary sine function, but with amplitude $A/\sqrt{2}$.
This effective shape is shown as big red points in Fig.~\ref{fig_cfg}, where the length of the effective shape is \begin{eqnarray} S=\frac{\sqrt{2}L}{\pi}\sqrt{a^{2}+3}\,K(k), \end{eqnarray} where $a=\pi A/\sqrt{2}L$, $k=a^{2}/(a^{2}+1)$, and $K(k)$ is the complete elliptic integral of the first kind. In analyzing the sine-shaped form of the oscillating GNMR (red dots) in Fig.~\ref{fig_cfg}, our key insight is that the oscillating GNMR is under an `effective strain' as compared to the original, flat GNMR (gray points on the horizontal line). This effective strain is denoted by $\epsilon_{\alpha}$, where the subscript indicates that the effective strain is induced by the actuation energy $\alpha$, and where $\epsilon_{\alpha}=(S-L)/L$. Approximating the elliptic integral up to second order, we can derive an analytic expression for the oscillation-induced effective strain: \begin{eqnarray} \epsilon_{\alpha}= \frac{3}{4}\pi^{2}\alpha\frac{E_{k}^{0}}{m\omega^{2}L^{2}}. \label{eq_effective_strain} \end{eqnarray} Substituting all structural and physical parameters into the expression, we get a concise relationship between the effective strain and the actuation energy that is applied to the GNMR: $\epsilon_{\alpha}\approx 0.216 \% \times \alpha$. The effective strain for the Au adsorbed GNMR is obtained analogously. As previously discussed, the resonant frequency of the GNMR can be enhanced by applying mechanical tension due to the enhanced stiffening of the structure that results~\cite{KimSY2008,KimSYapl,DaiMD,KimSYnanotechnology,QiZ}. In this sense, we propose that the enhancement of the resonant frequency that results from driving the GNMR into the nonlinear oscillation regime is actually due to the effective strain that is induced in the oscillating GNMR. In Fig.~\ref{fig_f_alpha_strain}, we show that the effective strain is indeed responsible for the enhancement of the resonant frequency with increasing $\alpha$.
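The order of magnitude of Eq.~(\ref{eq_effective_strain}) can be checked against the quoted $\epsilon_{\alpha}\approx 0.216\%\times\alpha$ by evaluating it with the simulation parameters given above. Note that the fundamental frequency of the unstrained sheet is not stated in the text, so the value used below is an assumption (as is the equipartition estimate of $E_{k}^{0}$); the sketch is a plausibility check, not a reproduction:

```python
import math

# Plausibility check of eps_alpha = (3/4) * pi^2 * alpha * Ek0 / (m * w^2 * L^2).
kB = 1.380649e-23            # Boltzmann constant, J/K
amu = 1.66054e-27            # atomic mass unit, kg
N, T, L = 504, 4.2, 34e-10   # number of C atoms, temperature (K), length (m)
m = N * 12.011 * amu         # total mass of the graphene sheet
Ek0 = 1.5 * N * kB * T       # ASSUMPTION: equipartition kinetic energy at 4.2 K
f = 180e9                    # ASSUMPTION: fundamental frequency, not given in text
w = 2.0 * math.pi * f

eps_per_alpha = 0.75 * math.pi ** 2 * Ek0 / (m * w ** 2 * L ** 2)
# eps_per_alpha comes out at a few parts in 10^3 per unit alpha, the same
# order as the quoted 0.216% per unit alpha
```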
We present the resonant frequency for the GNMR in (a) and the GNMR with adsorption of a single Au atom in (b). The circles represent the resonant frequency of the GNMR due to different actuation energies $\alpha$, i.e., $f(\alpha)$, while the dashed lines (blue online) are the resonant frequencies $f(\epsilon_{\alpha})$ for the GNMR that was pre-stretched with a tensile strain $\epsilon_{\alpha}$ before actuation, where the strain $\epsilon_{\alpha}$ for actuation energy $\alpha$ is calculated by Eq.~(\ref{eq_effective_strain}). A very small actuation energy of $\alpha=0.001$ is used to actuate the stretched GNMR such that only the strain effect is important in the GNMR under mechanical tension. Both Figs.~\ref{fig_f_alpha_strain}(a) and (b) show good agreement between the resonant frequencies obtained from increasing the actuation energy $\alpha$ and by applying tensile strain $\epsilon_{\alpha}$. This agreement verifies that the effective strain is the cause for the increase of the resonant frequency that we have previously observed by increasing $\alpha$. The agreement is particularly good for $\alpha<1.0$, though there is an increasing discrepancy for larger $\alpha$. The solid lines (red online) show that this discrepancy is greatly reduced by considering a high-order correction in the square of the oscillation amplitude $A^{2}$. This high-order correction is demonstrated in Fig.~\ref{fig_f_alpha_strain}(c), where $u_{z}$ on the $y$-axis of Fig.~\ref{fig_f_alpha_strain}(c) is the oscillation of the midpoint of the GNMR obtained from the MD simulation, and where Fig.~\ref{fig_f_alpha_strain}(c) is plotted using a log-log scale. $<u_{z}^{2}>$ deviates from linear behavior because of the phonon-phonon scattering. Although the temperature is quite low (4.2~{K}), the phonon-phonon scattering is still important for the first bending mode (i.e., the mode of mechanical oscillation) because this mode is in a highly non-equilibrium state.
In other words, because the first bending mode has been mechanically actuated with very large amplitude, it is driven far away from its thermal equilibrium state at 4.2~{K}. Our MD simulations give $<u_{z}^{2}>\propto \alpha^{0.79}$ for the pure GNMR. However, we have assumed $<u_{z}^{2}>\propto \alpha^{1.0}$ in our derivation of the effective strain. The effective strain $\epsilon_{\alpha}\propto <u_{z}^{2}>$, which is why we have $\epsilon_{\alpha} \propto \alpha$. Therefore, the effective strain must be modified by: $\epsilon'_{\alpha}=\epsilon_{\alpha}/\alpha^{0.21}$, where $\epsilon_{\alpha}$ is the effective strain without correction. A similar correction factor of $\alpha^{0.28}$ can be applied to the effective strain of the GNMR with a single adsorbed Au atom. We note that Fig.~\ref{fig_f_alpha_strain}(c) shows a small oscillation amplitude ($\sim 0.8$~{\AA}) for the system simulated in this work, due to its small size. The amplitude $U$ actually depends on the length $L$ of the graphene as $U=2.43\times\frac{\sqrt{\alpha T}}{\omega(L)}$~{\AA}, where the frequency $\omega$ is length-dependent. For the system we have simulated, this formula gives $U\approx0.8$~{\AA} at $\alpha=1.0$, which is in good agreement with the MD simulation results in Fig.~\ref{fig_f_alpha_strain}(c). Considering the flexural property of the bending mode in graphene, i.e., $\omega\propto 1/L^{2}$, we can obtain the oscillation amplitude for an arbitrary system length as $U=U_{0}(L/L_{0})^{2}$, where $L_{0}=34$~{\AA} and $U_{0}=0.8$~{\AA} for our simulated system at $\alpha=1.0$. For instance, the mechanical oscillation amplitude becomes 67~nm if the system length is 100~nm. If the system length is 1~{$\mu$m}, then the mechanical oscillation amplitude is very large, i.e., 6.7~{$\mu$m}. These results show that $\alpha=1$ is large enough to induce oscillations that can be clearly distinguished from random thermal fluctuations in experiments.
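The quoted length scaling can be checked directly; a minimal sketch of $U=U_{0}(L/L_{0})^{2}$, where small differences from the quoted 67~nm and 6.7~$\mu$m values come from rounding of $U_{0}$:

```python
# Oscillation amplitude scaling U = U0 * (L / L0)^2, with U0 = 0.8 Angstrom
# at L0 = 34 Angstrom and alpha = 1, following omega ~ 1/L^2 for the
# fundamental flexural mode.
U0, L0 = 0.8, 34.0  # Angstrom

def amplitude(L_angstrom):
    return U0 * (L_angstrom / L0) ** 2  # Angstrom

U_100nm = amplitude(1000.0) / 10.0    # nm for L = 100 nm; ~69 nm
U_1um = amplitude(10000.0) / 1.0e4    # um for L = 1 um;  ~6.9 um
```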
Previously, we have established that the effective strain mechanism can explain the enhancement of the resonant frequency that occurs by increasing $\alpha$. However, we also needed to perform a high-order correction for large actuation energies to account for the phonon-phonon scattering that is still present even at the low (4.2~{K}) temperature due to the highly non-equilibrium behavior resulting from the large actuation energy. If the temperature is instead reduced to 0~{K}, there is no phonon-phonon scattering for the first bending mode because this mode cannot decay without the assistance of other vibrations according to symmetry selection rules~\cite{BornM}. As a result, at 0~{K}, the enhancement of the resonant frequency by applying tensile strain before actuation should agree with that obtained by increasing the actuation energy, without any high-order correction. Indeed, we observe this result in Fig.~\ref{fig_f_alpha_strain}~(d). We emphasize that for these two simulations of the GNMR, the GNMR exists initially in an energy minimized configuration at 0~{K}, and the applied actuation energy is $\Delta E=\alpha E_{k}^{0}$. Because the kinetic energy $E_{k}^{0}$ is 0 at 0~{K}, we use $E_{k}^{0}=E_{k}(4.2K)$, where $E_{k}(4.2K)$ is the total kinetic energy at 4.2~{K}, for both simulations shown in Fig.~\ref{fig_f_alpha_strain}~(d). The solid line is the kinetic energy for the GNMR that is actuated by the large actuation energy parameter $\alpha=2.0$. The dashed line (blue online) is the kinetic energy of the GNMR that is pre-stretched in tension by $0.4\%$ before actuation, where $0.4\%$ is chosen as it is also the effective strain that results from the actuation energy $\alpha=2.0$. The pre-stretched GNMR is actuated with a very small actuation energy $\alpha=0.001$ so that the effect from $\alpha$ can be ignored and only strain is important in this case.
We observe exactly the same resonant frequency for the mechanical oscillation in these two very different situations. From the Fourier transformation of the time histories of these two kinetic energies, we get the same resonant frequency of 270 GHz, which further establishes the equivalent effect of applied tensile strain and different actuation energies on the resonant frequencies of GNMRs. \section{conclusion} In conclusion, we used classical MD simulations to demonstrate that a significant enhancement of the mass sensitivity of graphene nanomechanical resonators can be achieved by driving them into the nonlinear oscillation regime. The enabling mechanism was determined to be the effective strain that is induced in the nanoresonators due to the nonlinear oscillations. A simple analytic expression relating the effective strain to the actuation energy was obtained, and shown to be quite accurate for moderate actuation energies. The key implication is that it should be possible for experimentalists to directly incorporate the present findings to enhance the sensitivity of graphene-based mass sensors simply by actuating the graphene nanomechanical resonators into the nonlinear oscillation regime. \textbf{Acknowledgements} The work is supported by the German Research Foundation (DFG). HSP acknowledges support from the Mechanical Engineering Department of Boston University.
\section{introduction} Recent experiments \cite{zhu2009,feldman2009,fuhrer2010,junzhu2010,herrero2010,ki2010} have revealed an intriguingly strong (and anomalous) ``insulating'' temperature dependence in the measured electrical conductivity of bilayer graphene (BLG) samples, not only at the charge neutrality point (CNP) where the electron-hole bands touch each other (with vanishing average carrier density), but also at carrier densities as high as $10^{12}$ cm$^{-2}$ or higher. (``Insulating'' temperature dependence of conductivity $\sigma(T)$ simply means an increasing $\sigma$ with increasing temperature at a fixed gate voltage, which is, in general, considered unusual in a nominally metallic system where the resistivity, not the conductivity, should increase with temperature.) Such an anomalous insulating temperature dependence of $\sigma(T)$ is typically not observed in monolayer graphene (MLG) away from the CNP, although the gate voltage (or equivalently, the density) dependence of MLG and BLG conductivities are very similar, with both manifesting linear-in-density conductivity away from the CNP and an approximately constant minimum conductivity around the CNP \cite{morozov2008,xiao2009}. In this Letter we theoretically establish that this anomalous insulating BLG $\sigma(T)$ behavior is likely to be caused by the much stronger BLG density inhomogeneity \cite{dassarma2010} (compared with MLG), which gives rise to a qualitatively new type of temperature dependence in graphene transport, namely, the intriguing coexistence of both metallic and activated transport, hitherto not discussed in the literature. We therefore predict that the observed temperature dependence of BLG $\sigma(T)$ arises from the same charged impurity induced puddles in the system which are responsible for the minimum conductivity plateau at the CNP \cite{dassarma2010b}. We provide an analytic theory which appears to be in excellent qualitative agreement with the existing experimental results.
One direct prediction of our theory, the suppression of the anomalous insulating temperature dependence in high mobility samples with lower disorder, seems to be consistent with experimental observations. As a direct corollary of our theory, we find, consistent with experimental observation \cite{castro2007,oostinga2008,mak2009}, that a gapped BLG (with the gap at the CNP induced, for example, by an external electric field) would typically manifest a transport activation gap substantially smaller than the intrinsic spectral gap (i.e. the energy band gap) unless the band gap is much larger than the typical puddle-induced potential energy fluctuations. Our theory is based on a physically motivated idea: In the presence of large potential fluctuations $V({\bf r})$, the local Fermi level, $\mu({\bf r}) = E_F - V({\bf r})$, would necessarily have large spatial fluctuations [particularly when $E_F \alt s$, where $s=V_{rms}$ is the root-mean-square fluctuation or the standard deviation of $V({\bf r})$], leading to a complex temperature dependence of transport, since both metallic and activated transport would be present due to the random local gap. Below we carry out an analytical theory implementing this physical idea. We will see that it leads to the possible coexistence of metallic and activated transport, which explains the observed temperature dependence of BLG transport. We start by assuming that the disorder-induced potential energy fluctuations in the BLG are described by a distribution function $P(V)$, where $V=V({\bf r})$ is the fluctuating potential energy at the point ${\bf r}\equiv (x,y)$ in the 2D BLG plane. We approximate the probability $P(V)dV$ of finding the local electronic potential energy within a range $dV$ about $V$ to be of Gaussian form, i.e., $P(V) = \frac{1}{\sqrt{2\pi s^2}} \exp(-V^2/2s^2)$, where $s$ is the standard deviation (or equivalently, the strength of the potential fluctuation).
Then in the presence of electron-hole puddles the density of states (DOS) is reduced by the allowed electron region fraction and given by $D(E) = \int_{-\infty}^{E}D_0P(V)dV = {D_0}{\rm erfc}(-E/\sqrt{2}s)/2$, where erfc$(x)$ is the complementary error function and $D_0 = {g_sg_v m}/(2\pi \hbar^2)$ is the DOS of a homogeneous system, where $m$ is the band effective mass, and $g_s=2$ and $g_v=2$ are the spin and valley degeneracies, respectively. We have $D_0=2.8\times 10^{10}$ cm$^{-2}$/meV with the effective mass $m=0.033m_e$ (where $m_e$ is the bare electron mass). Note that the tail of the DOS is determined by the potential fluctuation strength $s$. Since BLG is a gapless semiconductor, the electron density at finite temperature increases due to the direct thermal excitation from the valence band to the conduction band, and this thermal excitation is an important source of temperature-dependent transport. Thus, we first consider the temperature dependence of the thermally excited electron density. The total electron density is given by \begin{equation} n_e = \int_{-\infty}^{\infty}D(E)\frac{dE}{e^{\beta(E-E_F)}+1}, \end{equation} where $\beta=1/k_BT$ and $E_F$ is the Fermi energy. When the Fermi energy is zero (or at the CNP) all electrons are located in the band tail at $T=0$, and the electron density in the band tail is given by $n_0=n_e(E_F=0)= {D_0 s}/{\sqrt{2\pi}}$. Note that the electron density in the band tail is linearly proportional to the standard deviation $s$. At finite temperatures we find the asymptotic behavior of $n_e(T)$. The low-temperature ($k_BT/s \ll 1$) behavior of the electron density at the CNP becomes \begin{equation} n_e(T) = n_0 \left [ 1 + \frac{\pi^2}{6} \left ( \frac{k_BT}{s} \right )^2 \right ]. \label{eq:den_0} \end{equation} Thus, the electron density increases quadratically in the low-temperature limit. For homogeneous BLG with a constant DOS the electron density at finite temperatures is given by $n_e(T)=D_0\ln(2)k_BT$.
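The band-tail density $n_0$ and its low-temperature expansion in Eq.~(\ref{eq:den_0}) can be verified by direct numerical integration of the broadened DOS against the Fermi function; a minimal check in units where $D_{0}=s=1$:

```python
import math

# Check n0 = D0*s/sqrt(2*pi) and Eq. (2), n_e(T) = n0*(1 + (pi^2/6)*(kT/s)^2),
# by integrating the broadened DOS against the Fermi function (D0 = s = 1).
def dos(E):
    # D(E) = D0 * erfc(-E / (sqrt(2) s)) / 2 with D0 = s = 1
    return 0.5 * math.erfc(-E / math.sqrt(2.0))

def density(kT, EF=0.0, lo=-15.0, hi=15.0, n=60000):
    # trapezoidal integration of D(E) * f(E)
    h = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        E = lo + i * h
        wgt = 0.5 if i in (0, n) else 1.0
        total += wgt * dos(E) / (math.exp((E - EF) / kT) + 1.0)
    return total * h

n0 = 1.0 / math.sqrt(2.0 * math.pi)  # zero-T band-tail density
kT = 0.1
n_num = density(kT)                  # numerical density at the CNP
n_low = n0 * (1.0 + (math.pi ** 2 / 6.0) * kT ** 2)
# n_num and n_low agree up to O((kT/s)^4) corrections
```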
The presence of the band tail suppresses the thermal excitation of electrons and gives rise to the quadratic behavior. However, at high temperature the density increases linearly with the same slope as in the homogeneous system, i.e., \begin{equation} n(T) \sim D_0 \left [ \ln(2) k_BT + \frac{1}{8}\frac{s^2}{(k_BT)^2} \right ]. \label{eq:den_0h} \end{equation} In Fig.~\ref{fig:den}(a) we show the temperature dependent electron density at CNP for different standard deviations. \begin{figure} \epsfysize=1.8in \epsffile{fig_1a.eps} \epsfysize=1.8in \epsffile{fig_1b.eps} \epsfysize=1.8in \epsffile{fig_1c.eps} \caption{ (Color online) (a) The electron density at CNP as a function of temperature for different $s$. At $T=0$ the density is given by $n_0=D_0 s/\sqrt{2\pi}$. (b) The temperature dependent electron density at finite $E_F$ for different $s$. For $s/E_F \neq 0 $ the leading order behavior is quadratic while at $s=0$ the density is exponentially suppressed. (c) Total electron densities (solid lines) and hole densities (dashed lines) as a function of $E_F$ for two different $s=30$ meV and 70 meV. The linear line represents the density difference $n=n_e-n_h=D_0E_F$, which linearly depends on the Fermi energy. The densities at the band tails are given by $n_e(E_F=0)=n_h(E_F=0)=D_0 s/\sqrt{2\pi}$. \label{fig:den} } \end{figure} In the case of finite doping (or gate voltage), i.e., the Fermi level away from CNP, $E_F\neq 0$, the electron density of the homogeneous BLG for $s=0$ is given by \begin{equation} n_{0e}(T)=D_0E_F \left [1+ t \ln \left (1+e^{-1/t} \right ) \right ], \end{equation} where $t=T/T_F$ and $T_F = E_F/k_B$. At low temperatures ($T \ll T_F$) the thermal excitation is exponentially suppressed due to the Fermi function, but at high temperatures ($T \gg T_F$) it increases linearly. 
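The homogeneous-limit expression $n_{0e}(T)=D_0E_F[1+t\ln(1+e^{-1/t})]$ can likewise be checked by integrating the constant DOS against the Fermi function; a minimal sketch in units where $D_{0}=E_{F}=1$:

```python
import math

# Check n_0e(T) = D0*E_F*[1 + t*ln(1 + exp(-1/t))], t = T/T_F, against direct
# integration of a constant DOS over the conduction band (D0 = E_F = 1).
def n_homog(t, hi=40.0, n=80000):
    # trapezoidal integration of f(E) from the band bottom upward
    h = hi / n
    total = 0.0
    for i in range(n + 1):
        E = i * h
        wgt = 0.5 if i in (0, n) else 1.0
        total += wgt / (math.exp((E - 1.0) / t) + 1.0)
    return total * h

pairs = [(n_homog(t), 1.0 + t * math.log(1.0 + math.exp(-1.0 / t)))
         for t in (0.1, 0.5, 1.0)]
# numerical and closed-form values match at all three temperatures
```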
In the presence of electron-hole puddles ($s \neq 0$) we have the electron density at zero temperature for the inhomogeneous system: \begin{equation} n_e(0) = {D_0 E_F} \left [ \frac{1}{2}{\rm erfc} \left( \frac{-1}{\sqrt{2}\tilde{s}} \right ) + \frac{\tilde{s}}{\sqrt{2\pi}} e^{-1/2\tilde{s}^2} \right ], \end{equation} where $\tilde{s}=s/E_F$. At low temperatures ($T \ll T_F$) the asymptotic behavior of the electron density is given by \begin{equation} n_e(T) = n_e(0) + D_0 E_F \frac{\pi^2 }{12\sqrt{2}}\frac{e^{-1/2\tilde{s}^2}}{\tilde{s}} \left ( \frac{T}{T_F} \right )^2. \label{eq:den_mu} \end{equation} The leading-order term shows the same quadratic behavior as in undoped BLG ($E_F=0$), but the coefficient is strongly suppressed by the fluctuations. In the case of $s > E_F$, the existence of electron-hole puddles gives rise to a notable quadratic behavior [see Fig.~\ref{fig:den}(b)]. At high temperatures ($T \gg T_F$) we find \begin{equation} n_e(T)= n_{0e}(T) +\frac{D_0E_F}{(1+e^{\beta E_F})^2} \frac{\tilde{s}^2}{2} \frac{T_F}{T}. \end{equation} At CNP ($E_F=0$) electrons and holes are equally populated. As the Fermi energy increases, electrons occupy an increasingly larger proportion of space [see Fig.~\ref{fig:den}(c)]. For $E_F \gg s$ nearly all space is allowed to the electrons, and the conductivity of the system approaches that of a homogeneous material. In the presence of electron-hole puddles, metallic and thermally activated transport can coexist. When electron puddles occupy more space than hole puddles, most electrons follow continuous metallic paths extending throughout the system, but at finite temperature electrons can also be transported across the hole puddles by thermal activation. Likewise, holes propagate freely within hole puddles, but when they meet electron puddles they conduct across them only by thermal activation.
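As a quick consistency check, the zero-temperature expression for $n_e(0)$ above follows from integrating the broadened DOS up to $E_F$. The illustrative snippet below (ours, in units $D_0=1$) compares the closed form with direct numerical integration:

```python
import math

def ne0_closed(EF, s):
    # n_e(0)/D0 = EF*[erfc(-1/(sqrt(2)*st))/2 + (st/sqrt(2*pi))*exp(-1/(2*st^2))]
    st = s / EF
    return EF * (0.5 * math.erfc(-1.0 / (math.sqrt(2.0) * st))
                 + st / math.sqrt(2.0 * math.pi) * math.exp(-1.0 / (2.0 * st ** 2)))

def ne0_numeric(EF, s, steps=200000):
    # n_e(0)/D0 = integral_{-inf}^{EF} erfc(-E/(sqrt(2)*s))/2 dE, trapezoidal rule
    lo = -12.0 * s
    h = (EF - lo) / steps
    total = 0.0
    for k in range(steps + 1):
        E = lo + k * h
        w = 0.5 if k in (0, steps) else 1.0
        total += w * 0.5 * math.erfc(-E / (math.sqrt(2.0) * s))
    return total * h
```

The two limits behave as stated in the text: for $s \ll E_F$ the closed form tends to the homogeneous value $D_0E_F$, and for $E_F \ll s$ it tends to the band-tail density $D_0 s/\sqrt{2\pi}$.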
Carrier transport within each puddle is described by weak-scattering transport theory \cite{dassarma2010}. The activated carrier transport across prohibited regions, where the local potential energy $V$ is greater (less) than the Fermi energy for electrons (holes), is proportional to the Fermi factor. If $\sigma_e$ and $\sigma_h$ are the average conductivity of electron and hole puddles, respectively, then the activated conductivities are given by \begin{subequations} \begin{eqnarray} \sigma_e^{(a)}(V) & = & \sigma_e \exp[\beta (E_F-V)], \\ \sigma_h^{(a)}(V) & = &\sigma_h \exp[\beta (V-E_F)], \end{eqnarray} \end{subequations} where the density and temperature dependent average conductivities ($\sigma_e$ and $\sigma_h$) are given within the Boltzmann transport theory \cite{dassarma2010} by $\sigma_{e} = {n_e e^2 \langle \tau \rangle}/{m}$ and $\sigma_{h} = {n_h e^2 \langle \tau \rangle}/{m}$, where $n_e$ and $n_h$ are average electron and hole densities, respectively, and $\langle \tau \rangle$ is the transport relaxation time, which depends explicitly on the scattering mechanism \cite{dassarma2010}. Now we denote the electron (hole) puddle as region `1' (`2'). In region 1 electrons occupy more space than holes when $E_F>0$. The fraction of the total area occupied by electrons with Fermi energy $E_F$ is given by $p=\int_{-\infty}^{E_F}P(V)dV$. Then the total conductivity of region 1 can be calculated as \begin{eqnarray} \sigma_1 & = & \frac{1}{p}\int^{E_F}_{-\infty}(\sigma_e + \sigma_h^{(a)})P(V) dV, \nonumber \\ &=& \sigma_{e}+\frac{\sigma_{h}}{2p} e^{ \frac{\beta^2s^2}{2} -\beta E_F } {\rm erfc} \left ( -\frac{E_F}{\sqrt{2} s} + \frac{\beta s}{\sqrt{2}} \right).
\label{eq:sig1} \end{eqnarray} At the same time the holes occupy the area with a fraction $q=1-p$ and the total conductivity of region 2 becomes \begin{eqnarray} \sigma_2 & = &\frac{1}{q}\int_{E_F}^{\infty}(\sigma_e^{(a)} + \sigma_h)P(V) dV \nonumber \\ &=& \sigma_{h}+\frac{\sigma_{e}}{2q} e^{ \frac{\beta^2s^2}{2} +\beta E_F } {\rm erfc} \left ( \frac{E_F}{\sqrt{2} s} + \frac{\beta s}{\sqrt{2}} \right). \label{eq:sig2} \end{eqnarray} The conductivities $\sigma_1$ and $\sigma_2$ are distributed according to a binary distribution. The conductivity of the binary system can be calculated using the effective medium theory of conductance in mixtures \cite{kirkpatrick1973}. The result for a 2D binary mixture of components with conductivity $\sigma_1$ and $\sigma_2$ is given by \cite{kirkpatrick1973} \begin{equation} \sigma_t = (p-\frac{1}{2})\left [ (\sigma_1 -\sigma_2) + \sqrt{(\sigma_1-\sigma_2)^2+\frac{4\sigma_1 \sigma_2}{(2p-1)^2}} \right ]. \label{eq:sig_tot} \end{equation} This result applies for all Fermi energies. In the high-doping limit, in which the hole puddles disappear, we have $p=1$ and $\sigma_2=0$, and Eq.~(\ref{eq:sig_tot}) reduces to $\sigma_t = \sigma_1$, i.e., the conductivity of electrons in a homogeneous system. \begin{figure} \includegraphics[width=6.5cm]{fig_2.eps} \caption{(Color online) $\sigma_t(T)$ at the charge neutral point for different $s$. The inset shows the thermally activated conductivity as a function of temperature. \label{fig:sig_mu0} } \end{figure} \begin{figure}[tb] \begin{center} \includegraphics[width=6.5cm]{fig_3a.eps} \includegraphics[width=6.5cm]{fig_3b.eps} \caption{ (Color online) (a) $\sigma_t(T)$ for $E_F=55$ meV and for different $s$. (b) $\sigma_t(T)$ for $s=50$ meV and for several $E_F=18$, 36, 55, 78 meV, which correspond to the densities $n=0.5,$ 1.0, 1.5, 2.0$\times 10^{12}$ cm$^{-2}$. } \label{fig:sig_tot} \end{center} \end{figure} We first consider the conductivity at CNP ($E_F=0$).
The conductivities in each region are given by \begin{subequations} \begin{eqnarray} \sigma_1 & = & \sigma_{e} \left [ 1 + \frac{\eta}{2p} e^{\beta^2 s^2/2} {\rm erfc} (\beta s/\sqrt{2}) \right ], \\ \sigma_2 & = & \sigma_{h} \left [ 1 + \frac{1}{2q\eta} e^{\beta^2 s^2/2} {\rm erfc} (\beta s/\sqrt{2}) \right ], \end{eqnarray} \end{subequations} where $\eta = n_h/n_e$ is the ratio of the hole density to the electron density. Since the electrons and holes are equally populated we have $p=q=1/2$ and $\sigma_{e}=\sigma_{h}$, and the total conductivity becomes $\sigma_{t} = \sqrt{\sigma_1 \sigma_2} = \sigma_1$. The asymptotic behavior of the conductivity at low temperatures ($k_BT \ll s$) becomes \begin{equation} \sigma_t(T) = \sigma_{e} \left [1 + \sqrt{ \frac{2}{\pi}} \frac{k_BT}{s} - \frac{2}{\sqrt{\pi}}\frac{(k_BT)^3}{s^3} \right ]. \end{equation} The activated conductivity increases linearly with a slope $\sqrt{2/\pi}k_B/s$ as temperature increases. Because $s$ is typically smaller in higher-mobility samples, high-mobility samples show a stronger insulating behavior at low temperatures. The next-order temperature correction to the conductivity arises from the thermal excitation given in Eq.~(\ref{eq:den_0}), which gives a $T^2$ correction. Thus in the low-temperature limit the total conductivity at CNP is given by \begin{equation} \sigma_t(T) = \sigma(0) \left [1+ \sqrt{\frac{2}{\pi}}\frac{k_BT}{s} + \frac{\pi^2}{6} \left ( \frac{k_BT}{s} \right )^2 \right ]. \end{equation} At high temperatures ($k_BT \gg s$) we have \begin{equation} \sigma_t = \sigma_e \left [ 2 - \sqrt{ \frac{2}{\pi}} \frac{s} {k_BT} + \frac{s^2}{2 (k_BT)^2} \right ]. \end{equation} The activated contribution to the total conductivity approaches a limiting value, and all remaining temperature dependence comes from the thermal excitation through the change of carrier density given in Eq.~(\ref{eq:den_0h}).
Thus at very high temperatures ($T\gg s/k_B$) the BLG conductivity at the charge neutral point increases linearly with a universal slope $\ln(2)$ regardless of the sample quality. In Fig.~\ref{fig:sig_mu0} we show the calculated temperature dependent conductivity at the charge neutral point. At finite doping ($E_F > 0$) the temperature dependent conductivities are very complex because three energy scales ($E_F$, $s$, and $k_BT$) compete. In particular, when $k_BT \ll s$, regardless of $E_F$, we have the asymptotic behavior of the conductivities in regions 1 and 2 from Eqs.~(\ref{eq:sig1}) and (\ref{eq:sig2}), respectively, \begin{subequations} \begin{eqnarray} \sigma_1 & = & \sigma_{e} \left [ 1+ \frac{\eta}{2p} e^{-1/2\tilde{s}^2} \sqrt{\frac{2}{\pi}} \frac{1}{\tilde{s}/t-1/\tilde{s}} \right ], \\ \sigma_2 & = & \sigma_{h} \left [ 1+ \frac{1}{2q \eta} e^{-1/2\tilde{s}^2} \sqrt{\frac{2}{\pi}} \frac{1}{\tilde{s}/t+1/\tilde{s}} \right ], \end{eqnarray} \end{subequations} where $\tilde{s}=s/E_F$ and $t=T/T_F$. The leading-order correction is linear, but the coefficient is exponentially suppressed by the term $\exp(-E_F^2/2s^2)$. This indicates that in high-mobility samples with small $s$, the activated conductivity is only weakly temperature dependent except around the CNP, i.e., for $E_F < s$. Since the density increase from thermal excitation is also exponentially suppressed by the same factor [see Eq.~(\ref{eq:den_mu})], the dominant temperature dependence of the conductivity arises from the scattering time \cite{dassarma2010}. On the other hand, for a low-mobility sample with a large $s$, the linear temperature dependence due to thermal activation can be observed even at high densities $E_F \agt s$. In Fig.~\ref{fig:sig_tot} we show the total conductivities (a) for a fixed $E_F$ and several $s$ and (b) for a fixed $s$ and several $E_F$. In the total conductivity the activated insulating behavior competes with the metallic behavior arising from the temperature-dependent screening effect.
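The total conductivities in Figs.~\ref{fig:sig_mu0} and \ref{fig:sig_tot} follow from the effective-medium expression, Eq.~(\ref{eq:sig_tot}). A minimal sketch of that formula (ours, with the $p=1/2$ case handled as its analytic limit $\sqrt{\sigma_1\sigma_2}$) makes the limiting behavior easy to verify:

```python
import math

def sigma_t(p, s1, s2):
    # 2D effective-medium conductivity of a binary mixture [Eq. (sig_tot)];
    # p is the area fraction of component 1
    if abs(p - 0.5) < 1e-12:
        return math.sqrt(s1 * s2)   # analytic p -> 1/2 limit
    d = s1 - s2
    return (p - 0.5) * (d + math.sqrt(d * d + 4.0 * s1 * s2 / (2.0 * p - 1.0) ** 2))
```

At $p=1$ and $\sigma_2=0$ this returns $\sigma_1$, reproducing the homogeneous high-doping limit quoted after Eq.~(\ref{eq:sig_tot}), and at $p=1/2$ it reduces to $\sqrt{\sigma_1\sigma_2}$ as used for the CNP analysis.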
When $s$ is small the activated behavior is suppressed. As a result the total conductivity manifests the metallic behavior \cite{dassarma2010}. However, for large $s$ the activated temperature dependence overwhelms the metallic temperature dependence, and the system shows insulating behavior. Finally, we discuss three important issues: (1) The same physics, of course, also applies to monolayer graphene (MLG), but the quantitative effects of inhomogeneity (i.e., the puddles) are much weaker since simple estimates show that the dimensionless potential fluctuation strength $\tilde s$ ($\equiv s/E_F$) is much smaller in MLG than in BLG because of the linear (MLG) versus constant (BLG) DOS of the two systems. In particular, $\tilde s_{BLG}/ \tilde s_{MLG} \sim 32 \sqrt{\tilde n}$ where $\tilde n = n/10^{10}$, and therefore $\tilde s_{BLG} \gg \tilde s_{MLG}$ up to $n=10^{13}$ cm$^{-2}$. Direct calculations \cite{dassarma2010} show that the self-consistent values of $s$ tend to be much larger in BLG than in MLG for identical impurity disorder. In very low mobility MLG samples, where $s$ is very large, the insulating behavior of the temperature-dependent resistivity can be observed even at high densities \cite{tan2007,heo2010}. (2) We have neglected all quantum tunneling effects in our consideration because they are unimportant except at very low temperatures. In particular, Klein tunneling is strongly suppressed in strong disorder \cite{rossi2010}. (3) In the presence of a BLG gap ($\Delta_g$), the situation becomes extremely complicated since four distinct energy scales ($s$, $E_F$, $k_BT$, $\Delta_g$) compete, and any conceivable temperature dependence may arise depending on the relative values of these four energy scales. It is, however, obvious that any experimental measurement of the activation gap ($\Delta_a$) in such an inhomogeneous situation will produce $\Delta_a \ll \Delta_g$ unless $\Delta_g \gg s$.
The system is now dominated by a random local gap arising from the competition among $s$, $\Delta_g$, and $E_F$, and no simple activation picture would apply. This is precisely what is observed experimentally \cite{castro2007,oostinga2008,mak2009}. Work supported by ONR-MURI and NRI-NSF-SWAN.
\section{Link operations} \label{3.4Sec} This section extends the multi-level data structure to solve our most general dynamic nca problem. The algorithm processes $m$ $nca$ and {\it link} operations on a set of $n$ nodes in time $O(m\alpha(m,n)+n)$ and linear space $O(n)$. The multilevel structure shares many details with that of the previous section: The levels $\ell=L,\ldots, 1$, the notions of $\ell$-tree, $\ell$-node, and $\ell$-subtree are all unchanged. A difference is that a tree $T_\ell$ at level $\ell>1$ gives its level $\ell-1$ counterpart $T_{\ell-1}$ by contracting {\em every} $\ell$-subtree, i.e., no subtrees are deleted. The notations $\Px., \fx.$, and $\gx.$ are defined without change. $nca$ operations are implemented using the $c$ and $\widehat{\,c\,}$ routines as in the last section. $link$ operations are implemented using a recursive routine $l$ similar to $a$ of the last section. The analog of \wa\ for link is folded into $l$, i.e., there is no $\wl$. It is convenient to use an extra argument for $l$: We write $l(r,x,y,\ell)$ where $r$ is the root of the $\ell$-tree containing $x$. Call a tree built up by {\it link} operations a {\em link tree}. The operation $link(x,y)$ is performed by $l(\rho, x,y,L)$ for $\rho$ the root of the link tree containing $x$. For motivation we start by sketching the two simplest versions of our algorithm. \long\def\example #1. #2{\bigskip \noindent{\bf Example: #1.} {#2}\bigskip} \example {Algorithm 1}. {Every link tree is represented as an incremental tree. The {\em link} operation uses {\em add\_leaf} and {\em add\_root} operations to transfer the nodes of the smaller tree into the larger (for trees of equal size break the tie arbitrarily). It then discards the incremental tree of the smaller tree. The number of node transfers is $O(n\ifmmode \,{ \rm log}\,\else{\it log }\fi n)$.
So Theorem \ref{3.1Thm} shows the total time is $O(m+n\ifmmode \,{ \rm log}\,\else{\it log }\fi n)$.} \def\stage.{\mathy{\sigma}} \def\hskip-2pt\uparrow\hskip-2pt{\hskip-2pt\uparrow\hskip-2pt} The analysis of Algorithm 1 is based on what we will call the ``stage'' of the link tree: A tree in stage \stage. has between $2^{\stage.}$ and $2^{\stage.+1}$ vertices. We view the analysis as charging a vertex $O(1)$ to advance from one stage to the next. (This accounts for the total time spent on {\it add\_leaf}$\,$\ and $add\_root$ operations, since Theorem \ref{3.1Thm} shows the time spent on a tree that ultimately grows to $n_s$ nodes is $O(n_s)$, i.e., the time is proportional to the number of node transfers.) Our more efficient algorithms maintain explicit stages, and these stages will require faster growth in the tree size. \example {Algorithm 2}. {Algorithm 1 can be improved using a 2-level strategy similar to previous ones. Level 2 classifies each tree as stage 1 or 2: A 2-tree is in stage 1 if it has $<\ifmmode \,{ \rm log}\,\else{\it log }\fi n$ nodes and stage 2 if it has $\ge \ifmmode \,{ \rm log}\,\else{\it log }\fi n$ nodes. A stage 2 2-tree is partitioned into 2-subtrees, each of which contains $\ge \ifmmode \,{ \rm log}\,\else{\it log }\fi n$ nodes. Each 2-subtree is represented as an incremental tree, using the data structure of Theorem \ref{3.1Thm}. Contracting all these 2-subtrees gives the corresponding 1-tree. A stage 1 2-tree is also a 1-tree. For consistent terminology in stage 1, view each 2-node as a 2-subtree. Level 1 uses Algorithm 1 on all 1-trees. The $l$ routine works as follows on level 2: It sets $\pi(y)\gets x$. Then, letting $X$ and $Y$ denote the 2-trees containing $x$ and $y$ respectively, it executes the case below that applies: \case {Both trees are in stage 2} Link the level 1 trees using Algorithm 1. \case {Only one tree is stage 2} If $X$ is stage 2, transfer the nodes of $Y$ to \wx., using {\it add\_leaf}$\,$\ operations.
Then discard the data structures for $Y$ on levels 1 and 2. If $Y$ is stage 2 do the same, using appropriate $add\_root$ operations in the transfer of $X$ to $\widehat y$. \case {Both trees are stage 1} If the combined trees contain $\ge \ifmmode \,{ \rm log}\,\else{\it log }\fi n$ nodes initialize the 2-tree as a new stage 2 tree, with one 2-subtree consisting of all nodes of $X$ and $Y$. Discard the data structures for $X$ and $Y$ on both levels. Otherwise link the level 1 trees using Algorithm 1. \bigskip The total time is dominated by the time spent for all incremental trees on both levels 1 and 2. On level 2 a 2-subtree that grows to contain $n_i$ nodes (as in the last two cases) uses time $O(n_i)$ for all {\it add\_leaf}$\,$\ and $add\_root$ operations. So all 2-subtrees use total time $O(n)$. Consider level 1. The 1-trees for stage 2 2-trees collectively contain $\le n/\ifmmode \,{ \rm log}\,\else{\it log }\fi n$ nodes. Each node is transferred by Algorithm 1 at most $\ifmmode \,{ \rm log}\,\else{\it log }\fi n$ times. So the total time is $O(n)$. The stage 1 2-trees collectively contain $n$ nodes. Each is transferred $\le \ifmmode \,{ \rm log}\,\else{\it log }\fi \ifmmode \,{ \rm log}\,\else{\it log }\fi n$ times by Algorithm 1. So the total time is $O(n\ifmmode \,{ \rm log}\,\else{\it log }\fi\log n)$. This term strictly dominates the algorithm's time bound. Clearly we can improve this algorithm by adding another stage, for 2-trees with $\le \ifmmode \,{ \rm log}\,\else{\it log }\fi \ifmmode \,{ \rm log}\,\else{\it log }\fi n$ nodes. The time becomes $O(n \ifmmode \,{ \rm log}\,\else{\it log }\fi^{(3)} n)$. Continuing in this fashion we can achieve time $O(n\ifmmode \,{ \rm log}\,\else{\it log }\fi ^* n)$.\footnote{$\ifmmode \,{ \rm log}\,\else{\it log }\fi^{(i)} n$ and $\ifmmode \,{ \rm log}\,\else{\it log }\fi ^* n$ are defined as in \cite{CLRS}.} Let us sketch this algorithm.
(The detailed version of the algorithm is the case $\ell=2$ of algorithm ${\cal A}_\ell$ presented below.) It is convenient to switch notation from small functions like $\ifmmode \,{ \rm log}\,\else{\it log }\fi$ to large ones like exponentiation. Recall the superexponentiation function, defined by $2\hskip-2pt\uparrow\hskip-2pt 1 = 2$, $2\hskip-2pt\uparrow\hskip-2pt(s+1) = 2^{(2\uparrow s)}$. In Algorithm 2 level 2 has $\ifmmode \,{ \rm log}\,\else{\it log }\fi^* n$ stages. A stage \stage. 2-tree has between $2\hskip-2pt\uparrow\hskip-2pt \stage.$ and $2\hskip-2pt\uparrow\hskip-2pt (\stage.+1)$ nodes. It is partitioned into 2-subtrees, each of which contains $\ge 2\hskip-2pt\uparrow\hskip-2pt\stage.$ nodes. The remaining properties of 2-trees are essentially the same as the previous algorithm. The $l$ routine uses new criteria to determine the cases, but is otherwise unchanged. In more detail, let $X$ be in stage $\sigma(X)$ and similarly for $\sigma(Y)$, and let $\sigma=\max\{\sigma(X),\sigma(Y)\}$. If the combined trees contain $\ge 2\hskip-2pt\uparrow\hskip-2pt(\sigma+1)$ nodes a new stage $\sigma+1$ tree is initialized (as in the last case above). Otherwise if $\sigma(X)\ne \sigma(Y)$ the nodes of the smaller 2-tree are transferred to the larger (as in the middle case above). Otherwise ($\sigma(X)= \sigma(Y)$) Algorithm 1 links the images of the two 2-trees (as in the first and last cases). The time for all $link$ operations is $O(n\ifmmode \,{ \rm log}\,\else{\it log }\fi^*n)$. This holds because the time on each stage is $O(n)$. Let us sketch a proof. Consider level 2. As before, a 2-subtree that grows to contain $n_i$ nodes uses time $O(n_i)$ for all {\it add\_leaf}$\,$\ and $add\_root$ operations. This gives $O(n)$ time total for each stage on level 2. There are $\ifmmode \,{ \rm log}\,\else{\it log }\fi^*n$ stages so the total time is $O(n\ifmmode \,{ \rm log}\,\else{\it log }\fi^* n)$. As for level 1, a 1-node is a contracted 2-subtree. 
A fixed stage $\stage.$ of level 2 contains a total of $\le n/(2\hskip-2pt\uparrow\hskip-2pt \stage.)$ 2-subtrees. Thus over the entire algorithm stage $\stage.$ has $\le n/(2\hskip-2pt\uparrow\hskip-2pt \stage.)$ 1-nodes $x$. After being transferred $2\hskip-2pt\uparrow\hskip-2pt \stage.$ times by Algorithm 1, $x$'s 1-tree has grown to $\ge 2^{ 2\uparrow \stage.}=2\hskip-2pt\uparrow\hskip-2pt (\stage.+1)$ 1-nodes. So the 2-tree containing $\gx.$ has advanced to stage $\stage.+1$. So $O(2\hskip-2pt\uparrow\hskip-2pt \stage.)$ time total is spent on $x$ in level 1. Thus the time for Algorithm 1 to process all stage $\stage.$ 1-nodes is $O( \frac{n}{2 \hskip-2pt\uparrow\hskip-2pt \stage.}\,\cdot\, { 2\hskip-2pt\uparrow\hskip-2pt \stage. } )=O(n)$. Again there are $\ifmmode \,{ \rm log}\,\else{\it log }\fi^*n$ stages so the total time on level 1 is $O(n\ifmmode \,{ \rm log}\,\else{\it log }\fi^* n)$. We conclude that Algorithm 2 uses total time $O(m+n\ifmmode \,{ \rm log}\,\else{\it log }\fi^* n)$. } \iffalse Note that Algorithm 1 achieves linear time if $m=\Omega(n\ifmmode \,{ \rm log}\,\else{\it log }\fi n)$. A slight variant of Algorithm 2 achieves linear time if $m=\Omega(n\il c n)$ for any constant $c$: Assume $m=o(n\ifmmode \,{ \rm log}\,\else{\it log }\fi n)$. Define stage 2 to have prenodes of size $\ge 2^{m/n}$; later stages are as before (i.e., if prenodes have size $\ge S$ in one stage they have size $\ge 2^S$ in the next). In stage 1, a 1-node advances $\le m/n$ times before its tree has size $\ge 2^{m/n}$ and so advances to stage 2. The time to advance all 1-nodes is $O(\frac{m}{n} \,\cdot\,n)=O(m)$. The other stages are analyzed as before, so the total link time is $O(m+n)$ per stage. 
The number of stages is $\ifmmode \,{ \rm log}\,\else{\it log }\fi^* n - \ifmmode \,{ \rm log}\,\else{\it log }\fi^* (m/n)$, since $\ifmmode \,{ \rm log}\,\else{\it log }\fi^* n$ is the smallest $k$ such that $2\hskip-2pt\uparrow\hskip-2pt k\ge n$ and $\ifmmode \,{ \rm log}\,\else{\it log }\fi^* (m/n)$ is the smallest $k$ such that $2\hskip-2pt\uparrow\hskip-2pt k\ge m/n$. So the total time is $O((m+n) (\ifmmode \,{ \rm log}\,\else{\it log }\fi^*n -\ifmmode \,{ \rm log}\,\else{\it log }\fi^* (m/n))$. If $m=\Omega(n\il c n)$ then $ \ifmmode \,{ \rm log}\,\else{\it log }\fi^*n \le c+\ifmmode \,{ \rm log}\,\else{\it log }\fi^* (m/n)$ (since $ \ifmmode \,{ \rm log}\,\else{\it log }\fi^*n = c+\ifmmode \,{ \rm log}\,\else{\it log }\fi^* (\il c n)$). Thus for any constant $c$ the time is $O(m)$. \fi This construction can be repeated, using Algorithm 2 to get even faster Algorithm 3, etc. We now present the formal details. Define Ackermann's function $A_i(j)$ for $i,j\ge 1$ by \begin{eqnarray*} A_1(j)&= &2^j, \hbox{\ for\ } j \ge 1;\\ A_i(1)&=&2, \hbox{\ for\ } i \ge 2;\\ A_i(j) &= &A_{i-1}(A_i(j-1) ), \hbox{\ for\ } i,j \ge 2. \end{eqnarray*} \noindent Define two inverse functions, \begin{eqnarray*} a_i(n)&=&\min\set {j}{A_i(j)\ge n};\\ \alpha(m,n)&=&\min\set{i}{A_i(4\c{m/n})\ge n}, \hbox{\ for\ } m,n \ge 1. \end{eqnarray*} These definitions differ slightly from those of \cite{T83} but this does not change asymptotic estimates. The most significant difference is that our function $A_i(1)$ is constant compared to a rapidly growing function in \cite{T83}. This makes for a more convenient treatment of the base case in our algorithms. 
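To make these definitions concrete, here is a small illustrative sketch (ours, not part of the algorithm) of this variant of Ackermann's function and its two inverses. Since the true values grow astronomically, results are clipped at a cap, in the same spirit as tabulating only the values $\le n$:

```python
def A(i, j, cap=10 ** 30):
    # A_1(j) = 2^j;  A_i(1) = 2;  A_i(j) = A_{i-1}(A_i(j-1)).
    # Results are clipped at `cap`; since A_{i-1}(x) >= x, once a value
    # reaches the cap it stays there, so the clipping is safe.
    if i == 1:
        return cap if j >= 100 else min(2 ** j, cap)
    val = 2                        # A_i(1)
    for _ in range(j - 1):
        if val >= cap:
            return cap
        val = A(i - 1, val, cap)
    return min(val, cap)

def a_inv(i, n):
    # a_i(n) = min { j : A_i(j) >= n }
    j = 1
    while A(i, j, n) < n:
        j += 1
    return j

def alpha(m, n):
    # alpha(m, n) = min { i : A_i(4*ceil(m/n)) >= n }
    i = 1
    while A(i, 4 * -(-m // n), n) < n:
        i += 1
    return i
```

For example, $A_2(4)=2^{16}=65536$ and $A_3(3)=A_2(A_3(2))=A_2(4)=65536$, and for $m=n=10^6$ the loop yields $\alpha(m,n)=3$.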
We use some very weak properties of Ackermann's function including these inequalities, which are proved in Appendix \ref{AckAppendix}: \begin{eqnarray} \label{4Eqn} A_i(j+1) &\ge &2A_i(j), \hbox{\ for\ } i,j \ge 1;\\ \label{5Eqn} A_{i+1}(j) &\ge &A_i(2j), \hbox{\ for\ } i \ge1,j \ge 4;\\ \label{6Eqn} \alpha(m',n')&\ge&\alpha(m,n)-1, \hbox{\ for\ } m'\le 2m, n'\ge n. \iffalse m'\le \max \{2m,n'\} or m'=n', So 4\c{m'\over n'} =4\le 4\c{m\over n} By definition A_{\alpha(m,n)-1}(4\c{m\over n})\le n \le n' Sice a is increasing, \alpha(m',n') \ge \alpha(m,n). \[a_\ell(n_{i+1})\le 4 \c{ {m_{i} \over n_{i}} } +1 \le {4m_{i} \over n_{i}} +5.\] The first inequality follows since $n_{i+1}\le 2n_{i}$ (Observation \ref{ReorgLemma}) implies $$A_\ell(4\c{ {m_{i} \over n_{i}} } +1) \ge 2A_\ell(4\c{ {m_{i} \over n_{i}} } )\ge 2n_{i}\ge n_{i+1}.$$ \fi \end{eqnarray} \iffalse \ell=\alpha(m_i,n_i)-1$. n \ge A_{\alpha(m,n)-1}(4\c{m\over n})\ge A_{\alpha(m,n)-1}(4)\ge \[a_\ell(n_{i+1})\le 4 \c{ {m_{i} \over n_{i}} } +1 \le {4m_{i} \over n_{i}} +5.\] The first inequality follows since $n_{i+1}\le 2n_{i}$ (Observation \ref{ReorgLemma}) implies $$A_\ell(4\c{ {m_{i} \over n_{i}} } +1) \ge 2A_\ell(4\c{ {m_{i} \over n_{i}} } )\ge 2n_{i}\ge n_{i+1}.$$ \fi \iffalse \noindent \eqref{4Eqn} is proved using $A_i(j)\ge 2^j$. \eqref{5Eqn} uses this as well as $2^{2j}\ge 2j+2$ for $j\ge 1$. Note \eqref{5Eqn} needn't hold for $j<4$, e.g., $A_2(3)=16<A_1(6)=64$. \eqref{6Eqn} holds since $ A_{\ell-2}(4 \c{ {m' \over n'} } ) \le A_{\ell-2}(8\c{ m\over n })\le A_{\ell-1}(4\c{ m \over n })<n.$ Here we use \eqref{5Eqn}, which applies since $4\c{ m \over n }\ge 4$. (A slightly more involved calculation shows $\alpha(m',n')\le\alpha(m,n)+2$ when $m'\ge m, n'\le 2n $ but we do not use this fact.) \fi A preprocessing step tabulates the relevant values of Ackermann's function. We use the values $A_i(j)$ that are $\le n$ for $i\le \alpha(m,n)$. 
Define an array $ackermann[1..\ifmmode \,{ \rm log}\,\else{\it log }\fi n,1..\ifmmode \,{ \rm log}\,\else{\it log }\fi n]$: If $A_i(j)\le n$ then $ackermann[i,j]= A_i(j)$, else $ackermann[i,j]= \epsilon$. This table also allows us to find $\alpha(m,n)$, which is $\le \ifmmode \,{ \rm log}\,\else{\it log }\fi n$ (since \eqref{5Eqn} shows $A_{\ifmmode \,{ \rm log}\,\else{\it log }\fi n}(4)\ge A_1(2^{1+\ifmmode \,{ \rm log}\,\else{\it log }\fi n})$). The table is initialized, and $\alpha(m,n)$ is found, in time $O(\ifmmode \,{ \rm log}\,\else{\it log }\fi^2 n)$. The table allows any desired value of Ackermann's function to be found in $O(1)$ time. We use the linear-time incremental tree data structure of Theorem \ref{3.1Thm}. Call a tree that is represented by this data structure an {\it incremental tree}. The preprocessing step computes all the tables for this algorithm in time $O(n)$. The approach is similar to that of \cite{G85b} for a list-splitting problem. We construct a family of algorithms ${\cal A}_\ell$, $\ell\ge 1$. ${\cal A}_\ell$ is a multi-level algorithm based on the function $A_\ell$. It calls ${\cal A}_{\ell-1}$ if $\ell> 1$. ${\cal A}_\ell$ runs in time $O( m\ell+na_\ell(n) )$. Algorithm ${\cal A}_\ell$ works on level $\ell$. The terms {\it $\ell$-node} and {\it $\ell$-tree} refer to the objects manipulated by ${\cal A}_\ell$. Every link tree corresponds to an $L$-tree with the same nodes and edges. Every level $\ell$ has $a_\ell(n)$ {\it stages} $\sigma$, $\sigma=0,\ldots, a_\ell(n)-1$. Each $\ell$-tree $T$ belongs to a unique stage $\sigma$ defined as follows. \case {$|V(T)|<4$} $T$ is in stage $\sigma=0$. Stage 0 uses a trivial algorithm so an invocation of $c$ or $l$ uses time $O(1)$. \case {$|V(T)|\ge 4$} $T$ is in the stage $\sigma\ge 1$ satisfying $|V(T)|\in [2A_\ell(\sigma),2A_\ell(\sigma+1))$. (This is possible since $A_\ell( 1)=2$.) An {\it $\ell$-subtree in stage $\sigma$} is a subtree that has $\ge 2A_\ell(\sigma)$ nodes.
The nodes of $T$ are partitioned into $\ell$-subtrees. If $\ell>1$ then $T$, with each $\ell$-subtree contracted, is represented on level $\ell-1$. Note that the contracted tree on level $\ell-1$ may be a trivial tree in stage 0 (of level $\ell-1$). Also if $\ell=1$ there is no need to store the contracted tree since $T$ has only one $\ell$-subtree. This follows since an $\ell$-subtree has $\ge 2A_1(\sigma)=2^{\sigma+1}$ nodes and $|V(T)|<2A_1(\sigma+1)=2^{\sigma+2}$ nodes. \bigskip Algorithm ${\cal A}_\ell$ uses the following data structure. Each $\ell$-tree and $\ell$-subtree is represented by its root. An $\ell$-tree $T$ is stored using parent pointers and children lists. If $r$ is the root of $T$ then $s(r)$ equals the size of $T$ (the number of its $\ell$-nodes). For any node $x$, $\sigma(x)$ equals the stage of $x$'s $\ell$-tree; if $\sigma(x)>0$ then $\widehat x$ points to the $\ell$-subtree containing $x$. Each $\ell$-subtree is represented as an incremental tree. Recall (Theorem \ref{3.1Thm}) that it has a root pointer $\varrho$ which is updated by $add\_root$ operations. We turn to the $link$ and $nca$ operations. Initially every node is a singleton link tree, in stage 0 of level $L$. Recall the operation $link(x,y)$ is processed by invoking the recursive algorithm $l(r,x,y,\ell)$ with arguments $\ell=L$ and $r$ equal to the root of the link tree containing $x$. $r$ is found by a simple recursive algorithm: If $\rho(\wx.)$ is the root of its $\ell$-tree then it is $r$. Otherwise recursively compute $r$ as the root of the $(\ell-1)$-tree containing $\fx.$ and set $r\gets \gz r.$. The algorithm for $l(r,x,y,\ell)$ is as follows. Let $X$ and $Y$ denote the $\ell$-trees with root $r$ and $y$ respectively, on entry to $l$. \bigskip \noindent {\em Combine Step}: Combine $X$ and $Y$ to a new $\ell$-tree $T_\ell$ by setting $\pi(y)\gets x$ and adding $y$ to the child list of $x$.
\bigskip The rest of the algorithm determines the stage of $T_\ell$ and its decomposition into $\ell$-subtrees. Start by increasing $s(r)$ by $s(y)$. Let $\sigma=\max\{\sigma(x), \sigma(y)\}$. Execute the first of the following cases that applies and then return. \numcase 1 {$s(r)\ge 2A_\ell(\sigma+1)$} Make $T_\ell$ a new stage $\sigma+1$ $\ell$-tree consisting of one $\ell$-subtree, as follows: Initialize a new incremental tree $\widehat r$. Traverse $T_\ell$ top-down; when visiting a node $v$ do an \al. operation to add $v$ to $\widehat r$. Discard the data structures for $X$ and $Y$ on all levels $\le \ell$. If $\ell >1$ then create an $(\ell-1)$-tree in stage 0 for $T_\ell$, consisting of one node. \numcase 2 {$\sigma(x)>\sigma(y)$} Traverse $Y$ top-down, doing \al. operations to add each node of $Y$ to the incremental tree $\widehat x$. Discard the data structures for $Y$ on all levels $\le \ell$. \numcase 3 {$\sigma(x)<\sigma(y)$} Traverse the path from $x$ to $r$ in $X$, doing $add\_root$ operations to add each node to the incremental tree $\widehat y$. Then traverse $X$ top-down, doing \al. operations to add the other nodes to $\widehat y$. Discard the data structures for $X$ on all levels $\le \ell$ and set $\sigma(r)\gets\sigma(y)$. \numcase 4 {$\sigma(x)=\sigma(y)$} If $\sigma>0$ then do $l(\fz r., \fx., \fy.,\ell-1)$. If $\sigma=0$ then combine $X$ and $Y$ using a trivial algorithm. } \begin{figure}[t] \centering \input{Alpha.pstex_t} \caption{Examples for $link$. $L=3$. (a) Trees for $link(x,y)$: (a.1) 3-trees $X$ and $Y$. (a.2) 3-subtrees for $X$ and $Y$. $X$ has one 3-subtree. (a.3) 2-trees for $X$ and $Y$. (b) $T_3$ formed for $link (x,y)$, and link tree $Z$. (b.1)--(b.3) 3-trees, 3-subtrees, and 2-trees, as before. (c) $T_3$ formed for $link(z,r)$. (c.1)--(c.3) 3-tree, 3-subtrees, 2-tree. (c.4) 2-subtree contains entire 2-tree.} \label{AlphaFig} \end{figure} \bigskip Fig.\ref{AlphaFig} illustrates two $link$ operations. 
$link(x,y)$ starts with the trees of Fig.\ref{AlphaFig}(a) and executes Case 3 in level 3. The resulting link tree $link(x,y)$ has root $r$, which is the nonroot node $\varrho$ in the incremental tree of Fig.\ref{AlphaFig}(b.2). $link(z,r)$ starts with the trees of Fig.\ref{AlphaFig}(b) and executes Case 4 in level 3. Then it executes Case 1 for the two stage 0 trees in level 2. The resulting 2-tree (Fig.\ref{AlphaFig}(c.3)) is in stage 1 since it has $5\ge 4$ nodes. All 2-nodes are in one 2-subtree, as in Fig.\ref{AlphaFig}(c.4). Fig.\ref{AlphaFig}(b.2) and (c.2) illustrate that in general, any incremental tree may have its root pointer $\varrho$ pointing to a node at arbitrary depth. \begin{lemma} The algorithm for $link(x,y)$ preserves all the defining properties of the data structure. \end{lemma} \begin{proof} We sketch the argument giving only the most interesting details. Assume the bookkeeping fields $\sigma(v)$ and \wv. are updated when node $v$ is added to a new incremental tree. Consider a link tree $T$. For every level $\ell\in [1..L]$ $T_\ell$ denotes the corresponding tree as defined by the data structure's parent and child pointers. $T$ is represented correctly by $T_L$, by a simple induction using the Combine Step. Furthermore for every level $\ell \in [2..L]$ $T_{\ell-1}$ is formed from $T_{\ell}$ by contracting every $\ell$-subtree. (Notice that in Case 2 $T_{\ell-1}$ does not change since $Y$ is absorbed into $\Px.$. Similarly for Case 3 and $\Py.$.) This implies that the vertex $y$ is the root of its tree $T_L$, and $\fy.$ and all lower images are roots of their $\ell$-trees (see especially Fig.\ref{AlphaFig}(b.2)). This justifies the argument $\fy.$ in the recursive call of Case 4. Now consider the four cases that determine the stage and the $\ell$-subtrees. \case {\rm 1} The new $\ell$-tree belongs in stage $\sigma+1$ because $s(r)< 4A_\ell(\sigma+1)\le 2A_\ell(\sigma +2)$ by \eqref{4Eqn}. 
\case {\rm 2} The incremental tree $\widehat x$ exists, since $x$ is in a positive stage. Similarly in Case 3 $\widehat y$ exists. \case {\rm 4} The new $\ell$-tree $T_\ell$ (formed in the Combine Step) is correctly partitioned into $\ell$-subtrees, since $X$ and $Y$ were. Also Case 4 always has $\ell>1$. (So level ${\ell-1}$ actually exists.) This is because if $\ell=1$ and $\sigma(r)=\sigma(y)=\sigma$ then Case 1 applies, since $s(r)\ge 2(2A_1(\sigma))= 2^{\sigma+2}=2A_1(\sigma+1)$. If $\sigma=0$ then $s(r)<4$, so having updated $T_\ell$ to the combined tree we are done. \end{proof} The algorithm for $ca(x,y)$ is trivial in universe zero. In positive universes it is the multi-level algorithm $c(x,y,\ell)$ of Section \ref{3.3Sec}. The first case of the $c$ algorithm is always used (since every $\ell$-subtree is contracted to an $(\ell - 1)$-node). It executes the code of Fig.\ref{MultiAlg}. \begin{lemma} \label{3.6Lemma} Algorithm ${\cal A}_\ell$ executes a sequence of $m$ $ca$ and {\it link} operations on a set of $n$ nodes in $O(m\ell+na_\ell(n))$ time and $O(na_\ell(n))$ space. \end{lemma} \begin{proof} First consider the time. A $ca$ query uses $O(\ell)$ time in a positive universe, since $O(1)$ time is spent on each of the $\ell$ levels of recursion. The time is $O(1)$ in universe zero. The time for {\it links} is estimated as follows. Charge each {\it link} operation $O(\ell)$ time to account for the initial computation of the root $r$ plus the $\ell$ levels of recursion and associated processing in routine $l$ (e.g., Case 4 and the Combine Step).
So far all charges are included in the term $O(m\ell)$ of the lemma. For the rest of the time call an \al. or $add\_root$ operation an $add$ operation and define \[ \eta = \text{the total number of $add$ operations.} \] ($\eta$ includes all $add$ operations done in recursive calls.) The rest of the time for $l$ is proportional to $\eta$. Here we are using Theorem \ref{3.1Thm}, which shows that an incremental tree that grows to contain $n_i$ nodes uses time $O(n_i)$ for all {\it add\_leaf}$\,$\ and $add\_root$ operations. (Also note that discarding data structures in Cases 1--3 is just a no-op.) For the time bound of the lemma it suffices to show $\eta=O(na_\ell(n))$. In fact we will show by induction on $\ell$ that \begin{equation} \label{EtaEqn} \eta\le 2 na_\ell(n). \end{equation} First consider the $add$ operations in Cases 1--3 of level $\ell$, i.e., we exclude the operations that result from a recursive call made in Case 4 from level $\ell$. Each such $add$ is done for a node previously in a lower stage of level $\ell$. So at most one $add$ is done for each node in each stage. This gives $\le na_\ell(n)$ $add$s total. (In particular this establishes the base case of the induction, $\ell=1$.) To bound the number of $add$s in all levels $<\ell$, fix a stage $\sigma>0$ of level $\ell$. We will show there are $\le n$ $add$s total in recursive calls made from stage $\sigma$ of level $\ell$. So the $a_\ell(n)$ stages contribute a total of $\le na_\ell(n)$ $add$s in levels $<\ell$. Adding together the two bounds gives \eqref{EtaEqn} and completes the induction. First note an approach that does {\em not} work. The inductive assumption holds for $\A._{\ell-1}$.
So we could estimate the total number of $(\ell-1)$-nodes, say $n_{\ell-1}$, and use the inductive bound $2 n_{\ell-1} a_{\ell-1}(n_{\ell-1})$. But this overestimates the number of $add$s, since $\A._\ell$ discards the entire data structure for an $\ell$-tree as soon as it moves to a higher stage, i.e., when it has size $\ge 2A_\ell(\sigma+1)$ in Case 1, or earlier in Cases 2 and 3. So instead, our approach is to count the number of $add$s for each maximal $\ell$-tree $M_i$ of stage $\sigma$. Let $M_i$ have $n_i$ vertices and $s_i$ $\ell$-subtrees. Then \begin{equation} \label{PSubiEqn} s_i\le n_i/2A_\ell(\sigma) \le A_\ell(\sigma+1)/A_\ell(\sigma) \end{equation} where the first inequality uses the lower bound $2A_\ell(\sigma)$ on the size of an $\ell$-subtree and the second inequality uses the upper bound $2A_\ell(\sigma+1)$ on the size of an $\ell$-tree. The $(\ell-1)$-tree for $M_i$ has $s_i$ nodes. The inductive assumption shows the number of $add$s to form this $(\ell-1)$-tree is $\le 2s_i a_{\ell-1}(s_i)$. Using the second inequality of \eqref{PSubiEqn} gives \[ a_{\ell-1}(s_i) \le a_{\ell-1}(A_{\ell}(\sigma+1)/A_\ell(\sigma) )\le a_{\ell-1}(A_\ell(\sigma+1))= a_{\ell-1}( A_{\ell-1}(A_\ell(\sigma)))=A_\ell(\sigma). \] Using this and the first inequality of \eqref{PSubiEqn} shows the total number of $add$s for all $(\ell-1)$-trees of stage $\sigma$ is at most \[\sum_i 2s_i a_{\ell-1}(s_i) \le 2A_\ell(\sigma)\sum_i s_i \le 2A_\ell(\sigma)\sum_i n_i/2A_\ell(\sigma) =\sum_i n_i\le n. \] This bound of $n$ $add$s per stage implies $\le n a_\ell(n)$ recursive $add$s total. This completes the induction. Now consider the space. There are initially $n$ nodes on level $L$. Additional nodes are only created in Case 1. The number of these nodes is obviously bounded by the number of \al. operations and so is $\le \eta$. Theorem \ref{3.1Thm} shows the space used for incremental trees is proportional to $\eta$.
(As usual all space is allocated from one global array $S$ using Lemma \ref{SpaceDoublingLemma}.) So \eqref{EtaEqn} implies the desired space bound. \end{proof} The remaining issue is how to choose the number of levels $\ell$. Consider the usual case where bounds on $m$ and $n$ are known when the algorithm begins. Take $\ell= \alpha(m,n)$. By definition $a_{\alpha(m,n)}(n)\le 4\c{m/n}\le 4m/n+4$. So the lemma implies the following. \begin{theorem} \label{3.2Thm} A sequence of $\le m$ $nca$ and $link$ operations on a universe of $\le n$ nodes can be processed in time $O(m\alpha(m,n)+n)$ and space $O(m+n)$. \hfill$\Box$\end{theorem} Now we show that the same time bound can be achieved when $m$ and $n$ are not known in advance. In this setting we allow the operation {\it make\_node}$(x)$ which creates a new node $x$ in a singleton tree. It is convenient to assume that such a node $x$ is not counted in $n$ until it is involved in a $link$ operation. (Since $\alpha(m,n)$ is increasing with $n$, this can only make the desired time bound of Theorem \ref{3.2Thm} stronger.) Since a $link$ increases $n$ by $\le 2$ we always have \begin{equation*} \label{mn2Eqn} m\ge n/2. \end{equation*} We achieve the desired bound $ O(m\alpha(m,n)+n)$ using a doubling strategy.
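To make the choice of $\ell$ concrete, here is a small Python sketch of the hierarchy $A_\ell$ and the inverse functions $a_\ell$ and $\alpha$. The base cases $A_1(j)=2^j$ and $A_\ell(0)=1$ are an assumed normalization, inferred from the computations above (e.g., $2A_1(\sigma)=A_1(\sigma+1)$ in Case 4 and the identity $a_{\ell-1}(A_{\ell-1}(A_\ell(\sigma)))=A_\ell(\sigma)$); the paper's exact definition is given earlier and may differ in constants. The names `a_inv` and `alpha` are ours.

```python
# Sketch of the Ackermann-type hierarchy A_ell and its inverses, under the
# assumed base cases A_1(j) = 2^j, A_ell(0) = 1, with the recurrence
# A_ell(j+1) = A_{ell-1}(A_ell(j)).  Values are capped at `cap` so the
# explosive growth stays computable; monotonicity makes the cap safe,
# since we only ever compare A_ell(j) against the cap.

def A(ell, j, cap):
    """A_ell(j), truncated to cap."""
    if ell == 1:
        # 2^j, avoiding astronomically large powers
        return cap if j >= cap.bit_length() else min(2 ** j, cap)
    val = 1                          # A_ell(0) = 1
    for _ in range(j):
        if val >= cap:
            return cap
        val = A(ell - 1, val, cap)   # A_ell(j+1) = A_{ell-1}(A_ell(j))
    return min(val, cap)

def a_inv(ell, n):
    """a_ell(n) = min { j : A_ell(j) >= n }."""
    j = 0
    while A(ell, j, n) < n:
        j += 1
    return j

def alpha(m, n):
    """alpha(m,n) = least ell with A_ell(4*ceil(m/n)) >= n,
    so that a_{alpha(m,n)}(n) <= 4*ceil(m/n)."""
    q = 4 * (-(-m // n))             # 4 * ceil(m/n)
    ell = 1
    while A(ell, q, n) < n:
        ell += 1
    return ell
```

With these definitions, setting $\ell=\alpha(m,n)$ in Lemma \ref{3.6Lemma}'s bound $O(m\ell+na_\ell(n))$ gives $na_\ell(n)\le 4m+4n$ and hence the $O(m\alpha(m,n)+n)$ bound of Theorem \ref{3.2Thm}.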
The desired conclusion is not immediately clear because of two main difficulties: First, $\alpha(m,n)$ is decreasing with $m$. So the time bound for a $ca$ operation can decrease as the algorithm progresses. Second, the term $n a_\ell(n)$ in the bound of Lemma \ref{3.6Lemma} does not change in a predictable way. \def\nlop.{$nca$/$link$} We begin by describing our new procedure. It uses algorithm ${\cal A}_\ell$ where $\ell$ is repeatedly modified. In precise terms the sequence of operations is divided into {\it periods}. The parameters $n$ and $m$ denote their values at the start of a period. The period processes $nca$ and $link$ operations using algorithm ${\cal A}_\ell$ where $\ell=\alpha(m,n)$. (Note that the first period begins with the execution of the first $link$, i.e., $m=1$, $n=2$, $\ell=1$.) The period continues as long as the value of $\alpha$ remains in $\{\ell-1,\ell\}$. In other words we declare a new period whenever $n'$ and $m'$ (the current values of the counts $n$ and $m$) and $\ell'=\alpha(m',n')$ satisfy $\ell'>\ell$ or $\ell'<\ell-1$. The last period ends at the conclusion of the algorithm ($\alpha$ need not have changed). Precisely, the algorithm is as follows. \bigskip {\narrower {\parindent=0pt Before executing the current $nca$/$link$ operation, update $n',m'$ and $\ell'$ to include that operation. If $\ell'\in \{\ell-1,\ell\}$ then execute the operation. Otherwise do the following: \bigskip Set $n\gets n',\ m\gets m',\ \ell \gets \ell'$. Reorganize the entire data structure to use algorithm ${\cal A}_{\ell}$. Do this by making each current link tree $T$ an incremental tree and placing it in the appropriate stage for $\A._\ell$. If $\ell>1$ add a corresponding node in stage 0 of level $\ell-1$. Finally execute the current $nca$/$link$ operation. } } \bigskip This procedure clearly handles $link$s and $nca$s correctly. The time to start a new period is $O(n)$ for the new value of $n$.
(This includes the time to compute a new $ackermann$ table, find $\ell$, compute new incremental tree tables and find the stage for each incremental tree. All tables are computed for the value $2n$, e.g., $ackermann$ stores all values $A_i(j)\le 2n$.) Now we prove the procedure achieves our goal. Note that the resource bounds of the following corollary are essentially the same as Theorem \ref{3.2Thm}. \begin{corollary} \label{3.2Cor} A sequence of $nca$ and $link$ operations can be processed in time $O(m\alpha(m,n))$ and space $O(m)$. Here $m$ is the number of $nca$s and $link$s, $n$ is the number of $link$s, and neither is known in advance. \end{corollary} \def\bn.{\mathy{\bar n}} \def\bm.{\mathy{\bar m}} \begin{proof} The bulk of the argument establishes the time bound. We will charge processing time to the counts $n$ and $m$. To do this define a {\em unit} of time to be enough to pay for any constant amount of computing in the algorithm's time bound. As in the algorithm, the analysis uses $n$, $m$, and $\ell$ to denote the values when the first \nlop. operation of the period is executed. In addition we use $\bn.$ and $\bm.$ to denote the counts up to and including the last operation of the period. (So $\alpha(\bm.,\bn.)\in \{\ell-1,\ell\}$.) We also use $n',m',\ell'$ to denote those values for the first operation of the next period. (So $m'=\bm.+1$. This holds for the last period by convention. We also take $n'=\bn.$ for the last period.) Note that $m'-m$ is the number of operations in the period. The proof of the time bound consists of these three claims. \claim 1 {The time for any period is at most $(m'-m)\ell + 12 \bm.$ units.} \nclaim 2 {The total time from the start of the algorithm to the end of any period but the last is at most $(12\ell'+50)\bm.$ units.
(As above, $\ell'=\alpha(m',n')$.)} \claim 3 {When the last period ends, the total time for the entire algorithm is at most $(12 \ell+62)\bm.$ units. The time bound of the corollary holds.} \noindent In Claim 1, all parameters (e.g., $\ell$) are defined for the period under consideration. In Claim 3, all parameters are defined for the last period. \bigskip \noindent {\bf Proof of Claim 1.} Lemma \ref{3.6Lemma} shows the time for a period is $O((m'-m)\ell +\bn. a_\ell(\bn.))$. So we need only show $\bn. a_\ell(\bn.)\le 12\bm.$. First observe \begin{equation} \label{aEllEqn} a_\ell(\bn.)\le 4 \c{\bm.\over \bn.}. \end{equation} This is equivalent to $A_\ell( 4\c{\bm.\over \bn.})\ge \bn.$, which holds by definition if $\alpha(\bm.,\bn.)=\ell$. The other possibility is $\alpha(\bm.,\bn.)=\ell-1$. But this also implies the same inequality, since $A_\ell( 4\c{\bm.\over \bn.})\ge A_{\ell-1}( 4\c{\bm.\over \bn.}) \ge \bn.$. The right-hand side of \eqref{aEllEqn} is $< 4 (\bm./\bn. +1)$. Thus $\bn. a_\ell(\bn.) \le 4 (\bm. +\bn.)$. Using $\bm.\ge \bn./2$ the last quantity is bounded by $4\bm.+8\bm.=12\bm.$. \hfill $\diamondsuit$\\ We prove Claims 2 and 3 by charging each $nca$/$link$ at most $\gamma$ time units, where $\gamma$ is $12 \ell' +50$ in Claim 2 and $12 \ell+62$ in Claim 3. \bigskip \noindent {\bf Claim 2 implies Claim 3.} Claim 2 shows the total time from the start of the algorithm to the beginning of the last period is accounted for by a charge of $\gamma= 12\ell+50$, where $\ell=\alpha(m,n)$ is the value used in the data structure of the last period. (This holds {\em a fortiori} if the last period is actually the first period.) Account for the time in the last period in two steps.
First account for the term $(m'-m)\ell$ in Claim 1 by charging each $nca$/$link$ of the last period $\ell$ units. Since $\ell<\gamma$ every $nca$/$link$ is charged $\le \gamma$ units. Next account for the term $12\bm.$ in Claim 1 by increasing $\gamma$ by 12, so the new charge is $12 \ell+62$ units. This gives the first part of Claim 3. The final value of $\alpha$ is $\ge \ell-1$. So using the first part of Claim 3 and changing $\bm.$ to the parameter $m$ of the corollary, the time bound for the entire algorithm is $O(m(\alpha(m,n)+1)) = O(m \alpha(m,n))$. Claim 3 is now completely proved. \hfill $\diamondsuit$\\ \noindent {\bf Proof of Claim 2.} Assume Claim 2 holds at the end of the previous period. Now switch to the notation of the current period. (The value $\ell'$ in Claim 2 becomes the parameter of the current period, $\ell$.) So each operation preceding the current period is charged $\gamma= 12 \ell +50$ units. Let $\ell'$ now denote the value of $\alpha$ after the current period ends. We wish to show the total time is accounted for by charging every operation $\gamma'=12\ell' +50$ units. Consider the two possibilities for $\ell'$. \case {$\ell'\ge \ell+1$} Account for the first term of Claim 1 by charging each $nca$/$link$ of the current period $\ell$ units. Certainly $\ell< 12\ell+50$. So now every $nca$/$link$ is charged $\le \gamma$ units. Account for the second term by increasing $\gamma$ to $\gamma+12= 12(\ell+1)+50\le 12 \ell'+50$. This gives Claim 2 for the current period. (This case applies when the current period is the first period, since $\ell=1$.) \case {$\ell'\le \ell -2$} If $m'\le 2m$ then \eqref{6Eqn} implies $\ell'=\alpha(m',n')\ge \alpha(m,n)-1=\ell-1$. So \[m'>2m.\] \eqref{6Eqn} also implies $\ell'=\alpha(m',n')\ge \alpha(\bm.,\bn.)-1$ since $m'=\bm.+1\le 2\bm.$.
Using $\alpha(\bm.,\bn.)\ge \ell-1$ gives $\ell'=\alpha(m',n')\ge \ell-2$. With the inequality assumed for the current case, we get \[\ell'=\ell-2.\] Since $m'/2 >m$, we have $m'-m > m'/2$. So we can account for the second term of Claim 1 by charging each $nca$/$link$ of this period 24 units ($24(m'/2)=12m'>12\bm.$). The charge for the first term of Claim 1 is $\ell$. So the total charge to each operation of the current period is $\ell +24= (\ell'+2)+24=\ell'+26$. Each of the first $m-1$ operations is currently charged $\gamma= 12\ell+50=12(\ell'+2)+50$. Transfer 24 units from each such operation to an operation of the current period. (Permissible since $m'-m> m>m-1$.) The first $m-1$ operations are now each charged $\gamma'=12\ell'+50$. The operations of the current period are each charged $\le (\ell'+26)+24<12\ell'+50=\gamma'$. So $\gamma'$ accounts for all the time so far. \hfill $\diamondsuit$\\ Finally consider the space. Lemma \ref{3.6Lemma} shows the space for each period is $O(\bn. a_\ell(\bn.))$. The argument of Claim 1 shows any period has $\bn.a_\ell(\bn.) =O(\bm.)$. Since $\bm.$ is at most the final value of $m$, and the space for a period is always reused, the space bound follows. \end{proof} \iffalse
\fi The multi-level method we have used can be applied to achieve the same time and space bounds for several other problems. As mentioned above, \cite{G85b} applies it to solve the list splitting problem that arises in {\it expand} steps of Edmonds' algorithm.
The technique was rediscovered by Han La Poutr\'e: \cite{LaP} presents a multi-level algorithm for the set merging problem (this application is noted in \cite[p.\ 99]{G85b}); also a result similar to Corollary \ref{3.2Cor} was arrived at independently [J.A. La Poutr\'e, personal communication]. Other applications include the static cocycle problem introduced in \cite{GS}, both for graphic matroids and the job scheduling matroid; the former is useful for various problems involving spanning trees. \iffalse \iffalse
Thus $\ell$ decreases by $\le 1$. (A similar calculation shows $\ell$ always increases by $\le 2$ but we do not use this fact.) \fi \iffalse $$ A_{\ell-2}(4 \c{ {m_{i+1} \over n_{i+1}} } ) \le A_{\ell-2}(4 \c{ {2m_i \over n_i} } ) \le A_{\ell-2}(8\c{ m_i \over n_i })\le A_{\ell-1}(4\c{ m_i \over n_i })<n_i.$$ \fi } ================================================ \iffalse ; more precisely the vertices contained in $S$ form a subtree that is an $(\ell-1)$-node. Any $(\ell-1)$-node arises in this way. Any $\ell$-prenode with root node $x\ne \mathy{\rho}(T)$ has $p_\ell(x)$ in a full $\ell$-prenode. \fi \iffalse An $S_2$-subtree subtree is {\em full} if it contains exactly $\mu$ nodes. We use superscripts to navigate nodes between $T$ and $T_1$: If $x$ is a vertex, $x^-$ denotes the 1-node containing $x$, if such exists. If $x$ is a 1-node, $x^+$ denotes the root vertex of the $S$-tree corresponding to $x$. in contrast no superscript indicates an object in $T$. For instance if $x$ is a vertex, $x^1$ denotes the 1-node containing $x$, if such exists. and $S(x^1)$ denotes the $S$-subtree corresponding to a 1-node $x^1$. A multi-level algorithm works on a small number of levels designated $\ell =1,\ldots, L$. (Later in this section we choose $L=3$.) Each level $\ell$ has an associated partition of a subtree of $T$ into subtrees called {\it $\ell$-nodes}. Contracting all $\ell$-nodes transforms the subtree of $T$ into a tree $T_\ell$. Thus there are $|V(T_\ell)|$ $\ell$-nodes. Level $\ell$ has an associated partition of $T_\ell$ into subtrees called {\it $\ell$-trees}. Each level has an algorithm and corresponding data structures to compute the characteristic ancestors of any two $\ell$-nodes in the same $\ell$-tree. (These characteristic ancestors are $\ell$-nodes.) 
\fi \iffalse \begin{figure}[t] \centering \input{2nca.pstex_t} \caption{2-level $nca$ algorithm.} \label{2ncaFig} \end{figure} \fi \iffalse Since $T_1$ has $O(n/\ifmmode \,{ \rm log}\,\else{\it log }\fi n)$ nodes. Hence the space for ancestor tables is $O(n)$. To find $nca(x,y)$ first assume $x$ and $y$ do not belong to the same full pretree. In $T_1$ let $ca(x^-,y^-)=(c,c_{x^-},c_{y^-})$. Thus the subtree $P_c$ in $T$ contains $nca(x,y)$. More precisely this subtree contains vertices $\pi(c^+_{x^-})$ and $\pi(c^+_{y^-})$, and $nca(x,y)$ is the $nca$ of these two vertices. Each pretree has a table that gives $nca(x,y)$ for every possible pair of its vertices. It is not hard to ensure that these tables use $O(n\ifmmode \,{ \rm log}\,\else{\it log }\fi n)$ space. We omit the details since our goal is to eliminate these tables and use linear space. MAYBE MORE DETAILS CURRENTLY IN THE LINEAR SPACE VERSION TRANSFER TO THE TABLE VERSION -- I SUSPECT NO Alternatively we can eliminate the tables and use linear space. \fi \iffalse THE ASSUMPTIONS IN THIS PAR SEEM UNNECESSARY , because i think the algorith adapts correctly to any sequence First any operation $make\_edge(x,y)$ has $x$ as the new outer vertex. We start with three trivial assumptions about $make\_edge$ operations (they are easily incorporated into the search routine that invokes $make\_edge$). Second all $make\_edge$ operations are done in the natural order, specifically, in a blossom step first all $merges$ are performed, after which every new outer vertex $x$ is scanned and $make\_edge(x,y)$ is executed for every edge joining $x$ to an outer vertex. Finally, when a new outer blossom $B$ is created in a grow, expand or blossom step, the last operation $make\_edge(x,y)$ for some $x\in V(B)$ is marked as last. This mark will allow our algorithm to complete the processing of all the new edges from $B$. 
============================================= We make a natural assumption about $make\_edge$ operations that is easily incorporated into the search routine: Recall that when the search creates a new outer blossom $B$, the new outer vertices $x\in B$ are scanned and $make\_edge(x,y)$ is executed for every outer vertex $y$ adjacent to $x$. Assume that the last of these $make\_edge$ operations for $B$ is marked as such. This mark to will allow our algorithm to complete the processing of all the new edges from $B$. \fi An $L$-node is a vertex. (Thus $T_L=T$.) Each level $\ell$ has an integral size parameter $\mu_\ell$ (which depends on $n$). With one exception, an $\ell$-tree $S$ contains between $\mu_\ell$ and $2\mu_\ell$ $\ell$-nodes. Futhermore $S$ corresponds to an $(\ell-1)$-node, i.e., the vertices contained in $S$ form a subtree that is an $(\ell-1)$-node. The exception is when there is only one $\ell$-tree. Such a tree $S$ may have less than $\mu_\ell$ $\ell$-nodes, but then $S$ does not correspond to an $(\ell-1)$-node. Take $\mu_1=n$ so there is always at most one 1-tree. ---------------------------------------- There are no difficulties in implementing this rule (for instance note that there is no need to store the root node as an entry in an ancestor table). ---------------------------------------- First note that the doubling rule implies $m_i+n_i\ge 2(m_{i-1}+n_{i-1})$. Letting $k_i$ be the number of operations since the between the $(i-1)$st and $i$th operations, $k_i=(m_i+n_i)- (m_{i-1}+n_{i-1}$, we have $n_i=O(k_i)$, since $n_i= n_{i-1}+(n_i-n_{i-1})$. \f \section{Computing logarithms} \label{LogAppendix} We show how to compute $\f{\ifmmode \,{ \rm log}\,\else{\it log }\fi_\beta r}$ for a given integer $r\in [1..cn^e]$ in time $O(1)$. Here $\beta=a/b$ is a fixed rational number for positive integers $a>b$, $c$ and $e$ are fixed integers, $c\ge 1, e>1$. Let $k=\f{\ifmmode \,{ \rm log}\,\else{\it log }\fi_\beta n}$. 
We precompute these values:

\bigskip

$\bullet$ $k, a^k, b^k$.

$\bullet$ a table $\ell[1.. n]$ with $\ell[r] = \f{\ifmmode \,{ \rm log}\,\else{\it log }\fi_\beta r}$ for $r\in [1..n]$.

\bigskip

\noindent We show the precomputation time is $O(n)$. The following code precomputes the $\ell$ table:

\bigskip

{\parindent=40pt

$a'=1; \ b'=1; \ k=-1$

{\bf for } $r=1$ {\bf to } $n$ {\bf do }

{\advance \parindent by 20pt

{\bf while }{$r\ge a'/b'$} {\bf do }

{\advance \parindent by 20pt

$a'=aa';\ b'=bb';\ k=k+1$

}

$\ell[r]=k$

}

}

\bigskip

\noindent On exit $k$ is the desired value $\f{\ifmmode \,{ \rm log}\,\else{\it log }\fi_\beta n}$, and the desired values $a^k$ and $b^k$ are given by $a'/a$ and $b'/b$ respectively. It is clear that the time is $O(n)$.

Now we give the algorithm to compute $\f{\ifmmode \,{ \rm log}\,\else{\it log }\fi_\beta r}$ for a given integer $r\in [1..cn^e]$. Let $h$ be the unique integer satisfying
\[\beta^{hk} \le r <\beta^{(h+1)k}.\]
So
\begin{equation}
\label{LogEqn}
\ifmmode \,{ \rm log}\,\else{\it log }\fi_{\beta}r=hk + \ifmmode \,{ \rm log}\,\else{\it log }\fi_\beta (r/{\beta^{hk}}).
\end{equation}
Taking floors gives the desired value. We find $h$ by testing successive values $\beta^{hk}$. The desired $h$ is at most $e + (e+\ifmmode \,{ \rm log}\,\else{\it log }\fi_\beta c) /k=O(1)$.
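For concreteness, the table precomputation above can be sketched in Python. The function name and the integer-pair representation of powers of $\beta$ are illustrative, but the loop mirrors the pseudocode: the comparison $r\ge a'/b'$ is done in exact integer arithmetic as $rb'\ge a'$.

```python
# Sketch of the l-table precomputation: l[r] = floor(log_beta r) for r in 1..n,
# where beta = a/b > 1 is rational. All arithmetic is exact integer arithmetic.

def build_log_table(a, b, n):
    l = [None] * (n + 1)
    ap, bp, k = 1, 1, -1          # invariant: ap/bp = beta^(k+1)
    for r in range(1, n + 1):
        while r * bp >= ap:       # r >= beta^(k+1), so advance k
            ap, bp, k = ap * a, bp * b, k + 1
        l[r] = k                  # now beta^k <= r < beta^(k+1)
    # on exit k = floor(log_beta n); a^k = ap/a and b^k = bp/b (exact divisions)
    return k, ap // a, bp // b, l
```

The total work is $O(n)$ since $k$ only increases over the whole loop.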
This follows for $r\le c n^e$ since $c=\beta^{\ifmmode \,{ \rm log}\,\else{\it log }\fi_\beta c}$ and $n<\beta^{k+1}$ implies $n^e<\beta^{ke+e}$. Using the values $a^k,b^k$ the time is $O(e)=O(1)$.

The desired floor of the logarithmic term in \eqref{LogEqn} is $\f{\ifmmode \,{ \rm log}\,\else{\it log }\fi_\beta \f{r/{\beta^{hk}}}}$. This corresponds to an entry in the $\ell$ table since $r/{\beta^{hk}}<\beta^{k}\le n$. The desired entry is found as $\ell[rb^{hk}/a^{hk}]$ (since division is truncating). Again the time is $O(1)$.

\section{Simple inequalities for Ackermann's function}
\label{AckAppendix}

\noindent {\em Proof of \eqref{4Eqn}, $A_i(j+1) \ge 2A_i(j)$ for $i,j \ge 1$}: First note the trivial inequality $2^i\ge 2i$ for $i\ge 1$. Also for every $i\ge 1$, $A_i(2)=4$. Next we show $A_i(j)\ge 2^j$ by induction on $i$, with the inductive step inducting on $j$. The base case $j=2$ of the inductive step is $A_i(2)=4=2^2$. For the inductive step
\[A_i(j) = A_{i-1}(A_i(j-1) )\ge 2^{A_i(j-1) }\ge 2^{2^{j-1}} \ge 2\cdot 2^{j-1}=2^j.\]
Now \eqref{4Eqn} itself follows from the first $\ge$ relation displayed above and $2^{A_i(j-1) }\ge 2A_i(j-1)$.

\bigskip

\noindent {\em Proof of \eqref{5Eqn}, $A_{i+1}(j) \ge A_i(2j)$ for $i \ge1,j \ge 4$}: Note that \eqref{5Eqn} needn't hold for $j<4$: Recalling that $A_2$ is superexponentiation $A_2(j)=2\uparrow j$, $A_2(3)=16<64=A_1(6)$. However \eqref{5Eqn} holds for $i=1,j=4$: $A_2(4)=2^{16}>2^8=A_1(8)$. In general for $i\ge 1$,
\[A_{i+1}(4)= A_i(A_{i+1}(3))=A_i(A_i(A_{i+1}(2)))= A_i(A_i(4)).\]
Also $A_i$ is an increasing function by \eqref{4Eqn}. We prove \eqref{5Eqn} by induction on $j$. For the base case, $A_{i+1}(4)= A_i(A_i(4))\ge A_i(2^4)>A_i(8)$. For the inductive step, $A_{i+1}(j+1) = A_{i}(A_{i+1}(j) )$.
The argument to $A_i$ is $A_{i+1}(j) \ge A_{i}(2j) \ge 2^{2j} \ge 2(2j)\ge 2j+2$, using the inductive hypothesis and $A_i(2j)\ge 2^{2j}$. Since $A_i$ is increasing this gives $A_{i+1}(j+1)\ge A_i(2j+2)=A_i(2(j+1))$, completing the induction.

\bigskip

\noindent {\em Proof of \eqref{6Eqn}, $\alpha(m',n')\ge\alpha(m,n)-1$ for $m'\le 2m, n'\ge n$}: Let $i=\alpha(m,n)$. We wish to show $\alpha(m',n')\ge i-1$, i.e., $A_{i-2}(4 \c{ {m' \over n'} } ) <n'$. This follows since
\[ A_{i-2}(4 \c{ {m' \over n'} } ) \le A_{i-2}(8\c{ m\over n })\le A_{i-1}(4\c{ m \over n })<n\le n'.\]
The first inequality uses $\c{m'/n'}\le \c{2m/n}\le 2\c{m/n}$. The second uses \eqref{5Eqn}, which applies since $4\c{ m \over n }\ge 4$. (A slightly more involved calculation shows $\alpha(m',n')\le\alpha(m,n)+2$ when $m'\ge m, n'\le 2n $ but we do not use this fact.)

To state the dual adjustment step we first review the linear program for perfect matching. Its variables are given by the function $x:E \to \mathbb {R_+}$, which indicates whether or not an edge is matched. The following linear program for maximum matching uses our summing convention, e.g., $x(\delta(v))=\sum_{e\in \delta(v)}x(e)$.

\hskip60pt maximize $\sum_{e\in E} w(e)x(e)$ subject to

\[\begin{array}{llll}
x(\delta(v))&=&1&\hbox{for every } v\in V\\
x(\gamma(B))&\le& \f{|B|\over 2}&\hbox{for every } B\subseteq V\\
x(e)&\ge& 0&\hbox{for every } e\in E
\end{array}
\]

The dual LP uses dual functions $y:V \to \mathbb {R}$, $z:2^V\to \mathbb {R_+}$. Define $\H{yz}:E\to \mathbb {R}$ by
\begin{equation}
\H{yz}(e) = y(e) + z\set {B} {e \subseteq B}.
\end{equation}
\noindent (Note for $e=vw$, $y(e)$ denotes $y(v)+y(w)$ and $z\set {B} {e \subseteq B}$ denotes $\sum_{e \subseteq B} z(B)$.)
\hskip60pt minimize $y(V) + \sum_{B\subseteq V} \f{|B|\over 2}\, z(B) $ subject to

\[\begin{array}{llll}
\H{yz}(e) &\ge& w(e)&\hbox{for every } e\in E\\
z(B)&\ge& 0&\hbox{for every } B\subseteq V
\end{array}
\]

\noindent $e$ is {\em tight} when equality holds in its constraint, i.e., $\H{yz}(e) = w(e)$. The algorithm maintains the complementary slackness conditions:

$x(e)>0 \ifmmode {\ \Longrightarrow \ }\else{$\ \Longrightarrow \ $}\fi e$ is tight.

$z(B)>0 \ifmmode {\ \Longrightarrow \ }\else{$\ \Longrightarrow \ $}\fi$ $x(\gamma(B)) = \f{|B|\over2}$.

\noindent In addition every edge in a blossom subgraph is tight (so blossoms can be rematched). It is easy to see the following dual adjustment step maintains these conditions.

\begin{algorithm}
\DontPrintSemicolon
$\delta_1\gets\min \set{y(e)-w(e)}{e=uv \mbox{ with $u$ outer, } v\notin \S.}$\;
$\delta_2\gets\min \set{(y(e)-w(e))/2}{e=uv \mbox{ with $u,v$ in distinct outer blossoms}}$\;
$\delta_3\gets\min \set{z(B)/2}{B \mbox{ an inner blossom of }\os.}$\;
$\delta\gets\min \{\delta_1,\delta_2,\delta_3\}$\;
\lFor{every vertex $v\in \S.$}\\
\Indp\lIf{$v$ is inner}{$y(v)\gets y(v)+\delta$} \lElse{$y(v)\gets y(v)-\delta$}\;
\Indm\lFor{every blossom $B$ in \os.}\\
\Indp\lIf{$B$ is inner}{$z(B)\gets z(B) -2\delta$} \lElse{$z(B)\gets z(B) +2\delta$}\;
\caption{Dual adjustment step in Edmonds' algorithm.}
\label{DualEdmonds}
\end{algorithm}

\section{Details for $b$-matching and $f$-factor algorithms}
\label{bfAppendix}

The
LPs for $b$-matching are the obvious generalizations of ordinary matching:

\hskip60pt maximize $\sum_{e\in E} w(e)x(e)$ subject to

\[\begin{array}{llll}
x(\delta(v))+2x(\gamma(v))&=&b(v)&\hbox{for every } v\in V\\
x(\gamma(B))&\le& \f{b(B)\over 2}&\hbox{for every } B\subseteq V\\
x(e)&\ge& 0&\hbox{for every } e\in E
\end{array}
\]

\hskip60pt minimize $\sum_{v\in V} b(v)y(v) + \sum_{B\subseteq V} \f{b(B)\over2}\, z(B) $ subject to

\[\begin{array}{llll}
\H{yz}(e) &\ge& w(e)&\hbox{for every } e\in E\\
z(B)&\ge& 0&\hbox{for every } B\subseteq V
\end{array}
\]

The complementary slackness conditions are essentially the same as ordinary matching:

$x(e)>0 \ifmmode {\ \Longrightarrow \ }\else{$\ \Longrightarrow \ $}\fi e$ is tight.

$z(B)>0 \ifmmode {\ \Longrightarrow \ }\else{$\ \Longrightarrow \ $}\fi$ $x(\gamma(B)) = \f{b(B)\over2}$.

As mentioned in Section \ref{bBlossomSec} complementary slackness requires that a blossom $B$ with $z(B)>0$ has precisely one incident matched edge, i.e., \eqref{CSforBMatchingEqn} holds. Let us review this fact. Our LP constraint $x(\gamma(B))\le\f{b(B)/2}$ is redundant if $b(B)$ is even (since $2x(\gamma(B))\le x\set{\delta(v)}{ v\in B } +2x\set{\gamma(v)}{v\in B} =b(B)$). So we can assume $b(B)$ is odd. Now equality in the constraint amounts to \eqref{CSforBMatchingEqn}. The dual adjustment step differs from ordinary matching only in allowing a loop to cause a blossom (Fig.\ref{DualbMatch}). Like ordinary matching, the numerical quantities in our algorithm are always half-integers. More precisely, assume all given weights $w(e)$ are integral. Assume either every initial $y$-value is integral or every initial $y$-value is integral plus $1/2$; furthermore every initial $z$-value is integral. This assumption holds for common initializations, e.g., $y\equiv \max_{e\in E}w(e)/2$ and $z\equiv 0$. It also holds for the initialization in our strongly polynomial algorithm, Section \ref{bStrongSec}.
(Note the $y$-values for $BG$, i.e., the transportation problem, are integral-valued. So \eqref{yDefnTransportationEqn} gives integral $y$-values for our algorithm assuming we double the given weight function.) We will show that throughout the algorithm \begin{equation} \label{yzIntegralEqn} (\forall v^{\in V})(y(v)\in \mathbb{Z}/2) \hbox{ and } (\forall B^{\subseteq V})(z(B)\in \mathbb{Z}). \end{equation} To prove \eqref{yzIntegralEqn} assume it holds before a dual adjustment. Examining the changes of Fig.\ref{DualbMatch} shows it suffices to prove $\delta$ is a half-integer. Clearly $\delta_1$ and $\delta_3$ are half-integers. We will show any edge joining two vertices of \S. has integral $y$-value. This makes $\delta_2$ half-integral and completes the proof. Any tight edge has $\H{yz}(e)=w(e)$. So \eqref{yzIntegralEqn} (specifically the integrality of $z$) implies $y(e)\in \mathbb{Z}$. Any vertex $v$ in \S. is joined to a free vertex $x$ by a path $P$ of tight edges. Thus $y(v) +2y\set{u}{u\in P-v-x}+y(x)\in \mathbb{Z}$, i.e., $y(v) +y(x)\in \mathbb{Z}$. Taking any other vertex $v'$ of \S. with similar relation $y(v') +y(x')\in \mathbb{Z}$ gives $y(v) +y(v')+y(x)+y(x')\in \mathbb{Z}$. A free vertex is always outer, so its $y$-value always decreases by $\delta$. So the initialization implies $y(x)+y(x')\in \mathbb{Z}$. Thus $y(v) +y(v')\in \mathbb{Z}$ as desired. The magnitude of numbers computed by the algorithm can be bounded as follows. Let $W$ be the largest magnitude of an edge weight. Assume all initial $y$ values are $\le W$ and $z\equiv 0$. We claim the largest value of $\Delta$ is $\le W b(V)$. Clearly this implies every $y$ and $z$ value is $\le 2Wb(V)$. To prove the claim consider any point in the algorithm. Let $b'(v)$ be the remaining degree requirement at $v$, i.e., $b'(v)=b(v)-d(v,M)$ for $M$ the current matching. 
Since every matched edge is tight,
\begin{equation}
\label{DualToMEqn}
w(M)=\sum_{e\in M} \H{yz}(e)= \sum_{v\in V} d(v,M)y(v)+\sum_{B\in\B.} \f{b(B)/2}z(B).
\end{equation}
Thus we can rewrite the current value of the dual objective function as $\sum_{v\in V} b'(v)y(v)+ w(M)$. The dual adjustment preserves tightness of the edges of $M$. So \eqref{DualToMEqn} holds and the updated dual objective can be rewritten the same way. Thus the dual adjustment decreases the dual objective value by $b'(V)\delta\ge 2\delta$. The initial dual objective is $\le b(V)W$. The final objective is the weight of a maximum $b$-matching, which is $\ge -Wb(V)/2\ge -Wb(V)$. So we always have $\Delta=\sum \delta\le b(V)W$.

\begin{algorithm}[h]
\DontPrintSemicolon
$\delta_1\gets\min \set{y(e)-w(e)}{e=uv\notin M \mbox{ with $B_u$ outer, } B_v\notin \S.}$\;
$\delta_2\gets\min \set{(y(e)-w(e))/2}{e=uv\notin M \mbox{ with $B_u,B_v$ outer, either $B_u\ne B_v$ or $u=v$ atomic}}$\;
$\delta_3\gets\min \set{z(B)/2}{B \mbox{ an inner blossom of }\os.}$\;
$\delta\gets\min \{\delta_1, \delta_2, \delta_3\}$\;
\lFor{every vertex $v\in \S.$}\\
\Indp\lIf{$B_v$ is inner}{$y(v)\gets y(v)+\delta$} \lElse{$y(v)\gets y(v)-\delta$}\;
\Indm\lFor{every blossom $B$ in \os.}\\
\Indp\lIf{$B$ is inner}{$z(B)\gets z(B) -2\delta$} \lElse{$z(B)\gets z(B) +2\delta$}\;
\caption{Dual adjustment step for $b$-matching.}
\label{DualbMatch}
\end{algorithm}

As with ordinary matching, other versions of weighted $b$-matching have LPs that are minor modifications of the original. Correspondingly, minor modifications of our algorithm find such matchings. We illustrate with maximum cardinality maximum weight $b$-matching (defined in Section \ref{bMAnalSec}). It is convenient to treat the more general problem of finding a $b$-matching of maximum weight subject to the constraint that it contains exactly $k$ edges. The primal LP relaxes the vertex degree constraint to
\[ x(\delta(v))+2x(\gamma(v))\le b(v)\hskip 20pt \hbox{for every } v\in V \]
and adds the cardinality constraint
\[ x(E)= k. \]
The dual problem has a variable $c$ for the cardinality constraint, the left-hand side of the dual edge constraint changes from $\H{yz}(e)$ to $\H{yz}(e)+c$, and the nonnegativity constraint $y(v)\ge 0$ is added.
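As an illustration, one dual adjustment in the style of the $b$-matching step above can be sketched in Python on a simplified data model: flat dictionaries instead of the blossom structure, every edge listed with its outer endpoint first, and the "distinct outer blossoms" test omitted. All names are illustrative, not part of the algorithm's actual data structures.

```python
# Toy sketch of one dual adjustment. Vertices carry y-values and a status in
# {'outer', 'inner', None}; blossoms carry z-values and a status. Simplified:
# edges are (outer endpoint, other endpoint) pairs, and blossom membership of
# endpoints is ignored when forming delta_2.

INF = float('inf')

def dual_adjust(y, status, w, z, bstatus):
    """y, status: dicts over vertices; w: dict edge->weight;
    z, bstatus: dicts over blossoms. Returns the adjustment delta."""
    slack = lambda e: y[e[0]] + y[e[1]] - w[e]
    d1 = min((slack(e) for e in w
              if status[e[0]] == 'outer' and status[e[1]] is None), default=INF)
    d2 = min((slack(e) / 2 for e in w
              if status[e[0]] == 'outer' and status[e[1]] == 'outer'), default=INF)
    d3 = min((z[B] / 2 for B in z if bstatus[B] == 'inner'), default=INF)
    delta = min(d1, d2, d3)
    for v in y:                        # y: inner +delta, outer -delta
        if status[v] == 'inner':   y[v] += delta
        elif status[v] == 'outer': y[v] -= delta
    for B in z:                        # z: inner -2*delta, outer +2*delta
        if bstatus[B] == 'inner':  z[B] -= 2 * delta
        elif bstatus[B] == 'outer': z[B] += 2 * delta
    return delta
```

Note how half-integral values arise from the $\delta_2$ and $\delta_3$ terms even when all weights and initial duals are integral.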
The additional complementary slackness constraint is
\[ y(v)>0 \ifmmode {\ \Longrightarrow \ }\else{$\ \Longrightarrow \ $}\fi x(\delta(v))+2x(\gamma(v))= b(v)\hskip20pt \hbox{for every } v\in V.\]
To find such a matching we initialize our algorithm using a common value for every $y(v)$. The algorithm halts after the search that increases the matching size to $k$. For maximum cardinality maximum weight $b$-matching, this is the first time a search fails. To get an optimal LP solution, let $Y$ be the common final value for $y(v)$, $v$ free, or 0 if no such vertex exists. (Fig.\ref{DualbMatch} implies that throughout the algorithm all free vertices have the same $y$-value, and this value is the minimum $y$-value.) Decrease all $y$ values by $Y$ and set $c=2Y$. This solves the new LP. (In the dual edge constraint the new $y$-values decrease $\H{yz}(e)$ by $2Y$, which is balanced by the new LP term $c=2Y$.) We conclude that our algorithm is correct. This also proves the LP formulation is correct.

\bigskip

The LPs for $f$-factors incorporate limits on the number of copies of an edge as well as the $I(B)$ sets of blossoms. (The graph may have parallel edges; wlog each copy occurs at most once in the $f$-factor.)
\hskip60pt maximize $\sum_{e\in E} w(e)x(e)$ subject to

\[\begin{array}{llll}
x(\delta(v))+2x(\gamma(v))&=&f(v)&\hbox{for every } v\in V\\
x(\gamma(B) \cup I )&\le& \f{f(B)+|I|\over 2}&\hbox{for every } B\subseteq V,\, I\subseteq \delta(B)\\
x(e)&\le& 1&\hbox{for every } e\in E\\
x(e)&\ge& 0&\hbox{for every } e\in E
\end{array}
\]

The dual LP uses dual functions $y:V \to \mathbb {R}$, $z:2^V \times 2^E\to \mathbb {R_+}$. Define $\H{yz}:E\to \mathbb {R}$ by
\begin{equation}
\label{fHyzEqn}
\H{yz}(e) = y(e) + z\set {(B,I)} {e \in \gamma(B)\cup I}.
\end{equation}

\hskip60pt minimize $\sum_{v\in V} f(v)y(v) + \sum_{B\subseteq V,I\subseteq \delta(B)} \f{f(B)+|I|\over2}\, z(B,I) +u(E)$ subject to

\[\begin{array}{llll}
\H{yz}(e) +u(e)&\ge& w(e)&\hbox{for every } e\in E\\
u(e)&\ge& 0&\hbox{for every } e\in E\\
z(B,I)&\ge& 0&\hbox{for every } B\subseteq V,\,I\subseteq \delta(B)
\end{array}
\]

In our algorithm every nonzero $z$ value has the form $z(B,I(B))$ for $B$ a mature blossom. So we use the notation $z(B)$ as a shorthand for $z(B,I(B))$. Say that $e$ is {\em dominated, tight,} or {\em underrated} depending on whether $\H{yz}(e)$ is $\ge w(e)$, $= w(e)$, or $\le w(e)$, respectively; {\em strictly dominated} and {\em strictly underrated} refer to the possibilities $>w(e)$ and $< w(e)$ respectively. The complementary slackness conditions for optimality can be written with $u$ eliminated as

$x(e)>0 \ifmmode {\ \Longrightarrow \ }\else{$\ \Longrightarrow \ $}\fi e$ is underrated.

$x(e)=0 \ifmmode {\ \Longrightarrow \ }\else{$\ \Longrightarrow \ $}\fi e$ is dominated.

$z(B)>0 \ifmmode {\ \Longrightarrow \ }\else{$\ \Longrightarrow \ $}\fi$ $x(\gamma(B) \cup I(B) )= \f{f(B)+|I(B)|\over 2}$.

The numbers computed by the algorithm are analyzed similarly to $b$-matching. The same argument applies to show the algorithm always works with half-integers. The same bound holds for the magnitude of numbers.
The only addition to the analysis is to account for the term $u(E)$ in the dual objective function. Clearly the optimum $u$ function is defined by setting $u(e)$ equal to the slack in $e$, $w(e)-\H{yz}(e)$, for every edge $e\in M$. So \eqref{DualToMEqn} has the analog
\[w(M)= \sum_{e\in M} (\H{yz}(e)+u(e))= \sum_{v\in V} d(v,M)y(v)+\sum_{B\in\B.} \f{f(B)+|I(B)|\over 2}z(B)+u(E).\]
This equation holds both before and after the dual adjustment. (Note the dual adjustment will change $u$ values also, and each $u(e)$ may increase.) The dual objective function can be rewritten just as before, as $\sum_{v\in V} f'(v)y(v)+ w(M)$, both before and after the adjustment step. The rest of the analysis is identical to $b$-matching.

Similarly to $b$-matching, our algorithm extends to variants of the maximum $f$-factor problem. We again illustrate with maximum cardinality maximum weight partial $f$-factors. The LP is modified exactly as in $b$-matching. Our modified algorithm and the definition of new LP variables are exactly the same. The only difference in the analysis is that the new complementary slackness conditions for edges are

$x(e)>0 \ifmmode {\ \Longrightarrow \ }\else{$\ \Longrightarrow \ $}\fi \H{yz}(e)+c \le w(e)$

$x(e)=0 \ifmmode {\ \Longrightarrow \ }\else{$\ \Longrightarrow \ $}\fi \H{yz}(e)+c \ge w(e)$.

\noindent As before the quantity $\H{yz}(e)+c$ equals the algorithm's value of $\H{yz}(e)$, so these conditions are equivalent to the original ones.

\section{Grow/Expand steps}
\label{GrowExpandSection}
\def\zI{Z\_TO\_I}
\def\ZY{DEL}

We give a simple data structure to handle grow and expand steps. First consider ordinary matching. At any point in a search, for any vertex $v\in V$ define $slack(v)$ to be the smallest slack in an unmatched edge from an outer node to $v$. If $v\notin \S.$ and $slack(v)<\infty$, dual adjustments reduce $slack(v)$. When $slack(v)$ becomes 0 a grow step can be performed to make $B_v$ inner.
But if $B_v$ is a blossom, it may become inner before $slack(v)$ becomes 0. This blossom may later get expanded, and $v$ may leave \S.. If not, some smaller blossom containing $v$ may get expanded, causing $v$ to leave \S.. Continuing in this fashion $v$ may oscillate in and out of \S., becoming eligible and ineligible for grow steps. This makes tracking potential grow steps nontrivial. Note there is no such complication for grow steps using a matched edge to add a new outer node, since matched edges are always tight and outer nodes never leave \S.. The same overview applies to $b$-matching. $f$-factors are more general, since matched edges need not be tight. We first present the algorithm that applies to ordinary matching and $b$-matching. Then we extend the algorithm to $f$-factors.

\paragraph*{Data structures}
As in Section \ref{EdAlgSec} for ordinary matching and \ref{bBlossomSec} for $b$-matching and $f$-factors, we use a tree representing the laminar structure of blossoms. Specifically at the start of a search the current blossoms (from previous searches) form a tree $\B.$. The root of \B. corresponds to $V$, and each leaf corresponds to a vertex of $G$. The children of $B$ in \B. are the blossoms and atoms in the cycle $C(B)$ forming $B$. The subtree of a blossom $B$ has size $O(|V(B)|)$, as in Sections \ref{EdAlgSec} and \ref{bBlossomSec}.

Recall (Section \ref{TBMAlgSec}) the {\em rank} of a \B.-node $B$ is $r(B)=\f{\ifmmode \,{ \rm log}\,\else{\it log }\fi |V(B)|}$. A \B.-child of $B$ is {\em small} if it has rank $<r(B)$, else {\em big}. Clearly $B$ has at most one big child. So the rank $r(B)$ descendants of $B$ form a path $P$ starting at $B$. Each node on $P$ except $B$ is the big child of its parent.%
\footnote{$P$ is a slight variant of the ``heavy path'' of \cite{HT, T79}.}
The data structure marks each node as big or small. We also use this notion: A child of a node on the above path $P$ is a {\em small component} of $B$. Clearly a small component of $B$ is a small child of its parent. If $B$ is a blossom then $V(B)=\cup \set{V(A)}{A \text{ a small component of $B$}}$. (This fails if $B$ is a leaf of \B.. Such a $B$ has no children or components.)

The main task for the data structure is tracking $slack(v)$ values. Obviously this requires tracking $B_v$ (as usual $B_v$ denotes the currently maximal blossom or atom containing $v$). The values $node(v)$ defined below allow identifying $B_v$ in $O(1)$ time. $node(v)$ values are also used in blossom and augment steps to compute paths in \os..
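The rank/big-child decomposition of \B. described above can be sketched as follows. This is a sketch only: the children lists and vertex counts are an assumed toy representation of \B., not the paper's data structure, and leaf edge cases are ignored.

```python
# Sketch of the rank / big-child decomposition of the blossom tree.
# rank(B) = floor(log2 |V(B)|); a child is big if its rank equals the
# parent's rank (at most one such child exists). The small components of B
# are the small children of the nodes on B's path of rank-r(B) descendants.

import math

def rank(size):
    return int(math.log2(size))

def small_components(B, children, size):
    """Return the small components of node B by walking the path of
    rank-r(B) descendants and collecting each node's small children."""
    r, comps, node = rank(size[B]), [], B
    while node is not None:
        big = None
        for c in children[node]:
            if rank(size[c]) == r:   # the (unique) big child stays on the path
                big = c
            else:
                comps.append(c)      # small child = small component of B
        node = big
    return comps
```

For a blossom $B$ the collected components partition $V(B)$, matching the identity $V(B)=\cup \set{V(A)}{A \text{ a small component of } B}$.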
Recall the data structure for numerical quantities given in the last subsection of Section \ref{bMAnalSec}, in particular these definitions: $\Delta$ is the sum of all dual adjustment quantities $\delta$ in the current search. Any outer vertex $v$ has a quantity $Y(v)$, such that the current value of $y(v)$ is $Y(v)-\Delta$. A global Fibonacci heap \F. has entries for candidate grow, blossom, and expand steps, with key equal to the value of $\Delta$ when the corresponding edge becomes tight. To compute current $y$ and $z$ values for nonouter nodes, we use an auxiliary quantity $\ZY(B)$ that tracks $z$-values of expanded blossoms that have been converted into $y$-values. To define this quantity let $y_0$ and $z_0$ denote the dual functions at the start of the current search. The algorithm stores the quantity \[Y(v)=y_0(v).\] \iffalse If $B$ is a currently maximal nonouter blossom, the $z$ values of all its ancestors in \B. have been used to increase the $y$ values of vertices in $V(B)$. To track this \fi Every node $B$ of \B. is labelled with the quantity \begin{equation} \ZY(B)=\mbox{\small{$\frac{1}{2}$}}\,z_0\set {A} {A \text{ a proper ancestor of $B$ in \B.}}. \end{equation} Observe that when $B$ is a maximal blossom, $\ZY(B)$ is equal to the total of all dual adjustments made while $B$ was properly contained in an inner blossom. At any point in time current $y$ values are computed by \begin{equation} \label{yYZtoYeqn} y(v)=\begin{cases} Y(v)+\ZY(B_v)&B_v \text{ not in \os.}\\ Y(v)+\ZY(B_v)+\Delta-\Delta_0(B_v)&B_v \text{ an inner node} \end{cases} \end{equation} where $\Delta_0(B)$ denotes the value of $\Delta$ when blossom $B$ became an inner node (blossom or atom). We will compute $y(v)$ in $O(1)$ time when it is needed. To do this we must identify $B_v$ in $O(1)$ time. This is done using the pointer $node(v)$, as we will describe below. We track the best candidate edges for grow steps from outer nodes using a system of Fibonacci heaps. 
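The offset bookkeeping can be sketched in a few lines of Python. This is a minimal illustration with invented names (\verb|Duals|, \verb|status|), not the paper's implementation: a dual adjustment touches only the single counter $\Delta$, and any $y(v)$ is reconstructed on demand from the stored offsets, following \eqref{yYZtoYeqn} and the recalled rule $y(v)=Y(v)-\Delta$ for outer vertices.

```python
class Duals:
    # Hypothetical sketch of the offset representation of dual values.
    def __init__(self):
        self.Delta = 0      # sum of all dual adjustments this search
        self.Y = {}         # Y(v), fixed when v (re)enters an outer node
        self.ZY = {}        # ZY(B) per blossom-tree node
        self.Delta0 = {}    # value of Delta when B became an inner node
        self.status = {}    # 'outer', 'inner' or 'free' per maximal node

    def adjust(self, delta):
        self.Delta += delta          # O(1) work per dual adjustment

    def y(self, v, B_v):
        # Current y(v); B_v is the maximal blossom/atom containing v
        # (identified via node(v) in the real data structure).
        s = self.status[B_v]
        if s == 'outer':
            return self.Y[v] - self.Delta
        if s == 'inner':
            return self.Y[v] + self.ZY[B_v] + self.Delta - self.Delta0[B_v]
        return self.Y[v] + self.ZY[B_v]   # B_v not in the search structure
```

The point of the representation is that `adjust` is constant time regardless of the number of vertices, while each `y` query remains $O(1)$.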
At any point in the algorithm every maximal nonouter blossom $B$ has a Fibonacci heap $\F._B$. The nodes of $\F._B$ are the small components of $B$. Thus if $B$ is not a node of \os., the smallest slack of an unmatched edge for a grow step to $B$ is the smallest value $slack(v)$, $v$ a vertex in $V(A)$, $A$ a blossom or atom with a node in $\F._B$. The data structure must also handle maximal nonouter atoms $B$. For uniformity we assume atoms are handled like blossoms -- they have a Fibonacci heap of one node, the atom itself. We will not dwell on this case; the reader can make the obvious adjustments for maximal nonouter atoms. Returning to the general case, the data structure does not explicitly store values $slack(v)$, since they change with every dual adjustment. Instead we store offsetted versions of related quantities as follows. Observe that whenever $B_v$ is not in \os., the slack in an unmatched edge $uv$ with $B_u$ outer is \[y(u)+y(v)-w(uv)=(Y(u)-\Delta)+(Y(v)+\ZY(B_v))-w(uv). \] (Note this relation holds regardless of prior history, i.e., regardless of when $u$ first entered an outer node and of the pattern of $v$'s movement in and out of \S..) So the data structure stores the quantity \[ SLACK(v)=\min \set{Y(u)+Y(v)-w(uv)}{B_u \text{ outer}, uv \in E-M} \] for every vertex $v$ where $B_v$ is not outer. Note that the expression for a given edge $uv$ never changes in value, even as $B_u$ changes. The data structure also records the minimizing edge $uv$. $SLACK(v)$ and its minimizing edge are updated as new outer nodes are created. At any point in time when $v$ is not in \S., the current value of $slack(v)$ is \begin{equation} \label{SlackEqn} slack(v)= SLACK(v)-\Delta+\ZY(B_v). \end{equation} The key of a node $A$ in $\F._B$ is \begin{equation} \label{AKeyEqn} key(A,\,\F._B)=\min \set{SLACK(v)}{v\in V(A)}. \end{equation} At any point in time when $B$ is not in \os., the current smallest slack of an unmatched grow step edge to $B$ is $find\_min(\F._B ) -\Delta+\ZY(B)$.
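A minimal sketch of this $SLACK$ bookkeeping follows. It is illustrative Python with invented names (\verb|GrowSlack|, \verb|new_outer_edge|), not the paper's implementation, and it omits the $decrease\_key$ propagation into $\F._B$ and \F.; it shows only why the stored quantity is time-invariant while the true slack is recovered by \eqref{SlackEqn}.

```python
import math

class GrowSlack:
    # Sketch of offsetted slack bookkeeping for grow steps (unmatched edges).
    def __init__(self, Y, ZY):
        self.Y, self.ZY = Y, ZY   # fixed offsets Y(v) and ZY(B)
        self.Delta = 0            # sum of dual adjustments so far
        self.SLACK = {}           # stored offsetted slack per vertex

    def new_outer_edge(self, u, v, w):
        # Scan unmatched edge uv when B_u becomes outer; the stored
        # expression Y(u)+Y(v)-w(uv) never changes, even as B_u changes.
        val = self.Y[u] + self.Y[v] - w
        if val < self.SLACK.get(v, math.inf):
            self.SLACK[v] = val   # would trigger decrease_key in F_{B_v}

    def slack(self, v, B_v):
        # Current slack(v) = SLACK(v) - Delta + ZY(B_v), valid while v
        # is not in the search structure.
        return self.SLACK[v] - self.Delta + self.ZY[B_v]
```

Note that after the edge is scanned once, every later dual adjustment is reflected through `Delta` alone; nothing per-edge is ever recomputed.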
Thus a grow step for $B$ can be done when $\Delta=find\_min(\F._B )+\ZY(B)$. So the key of $B$ in the global heap \F. is $find\_min(\F._B )+\ZY(B)$, if $B$ is not a node of \os.. For every vertex $v\in V$, $node(v)$ is the unique ancestor of $v$ that is currently a node of some heap $\F._B$. $node(v)$ is used in (\ref{AKeyEqn}) to maintain keys in $\F._B$ (i.e., $node(v)$ gives $A$ in (\ref{AKeyEqn})). $node(v)$ is also used in \eqref{yYZtoYeqn} to determine the current blossom $B_v$. Specifically $node(v)$ is in the heap $\F._{B_v}$. \paragraph*{Algorithms} When a new outer node $B$ is created, every unmatched edge $uv$ ($u\in B$) is examined. $SLACK(v)$ is decreased if appropriate. This may trigger a $decrease\_key$ for $node(v)$ in $\F._{B_v}$. This may in turn trigger a $decrease\_key$ for $B_v$ in $\F.$, if $B_v$ is currently not in \os.. When a grow step adds a blossom $B$ to \os., the node for $B$ in \F. is deleted. Note that whether $B$ becomes inner or outer, it never gets reinserted in \F. in this search. If $B$ becomes inner the value $\Delta_0(B)$ is recorded.
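The two-level selection logic just described (keys in each $\F._B$ feeding keys in the global heap \F.) can be caricatured as follows. Plain dictionaries stand in for the Fibonacci heaps, so \verb|next_grow| below is a linear scan; the stated time bounds of course rely on Fibonacci-heap $find\_min$ and $O(1)$ amortized $decrease\_key$. All names here are our own.

```python
class HeapOfHeaps:
    # Sketch: each nonouter maximal blossom B has a heap F_B over its
    # small components; the global heap F keys B by find_min(F_B) + ZY(B).
    def __init__(self):
        self.FB = {}   # B -> {component A: key(A, F_B)}
        self.ZY = {}   # B -> ZY(B)

    def decrease_key(self, B, A, new_key):
        if new_key < self.FB[B].get(A, float('inf')):
            self.FB[B][A] = new_key

    def global_key(self, B):
        # the value of Delta at which a grow step to B becomes possible
        return min(self.FB[B].values()) + self.ZY[B]

    def next_grow(self):
        # the blossom whose grow step becomes tight at the smallest Delta
        return min(self.FB, key=self.global_key)
```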
If $B$ becomes outer, the values $y(v), v\in V(B)$ are required to redefine $Y(v)$ (recall from Section \ref{bMAnalSec}). This is done using the first alternative of \eqref{yYZtoYeqn}. If $B$ becomes inner and later becomes outer in a blossom step, $Y(v)$ is redefined using the second alternative of \eqref{yYZtoYeqn}. Consider an expand step for an inner blossom $B$. The \B.-children of $B$ (i.e., the nodes of $C(B)$) become maximal blossoms or atomic, and we must update the data structure for them. Let $B'$ be the big \B.-child of $B$, if it exists. For every \B.-child $A\ne B'$ of $B$, delete the node $A$ of $\F._B$. Initialize a new F-heap $\F._A$ as follows (modifying appropriately if $A$ is atomic): \bigskip {\narrower {\parindent=0pt For each small component $D$ of $A$, create a node in $\F._A$. For every $v\in V(D)$ update $node(v)$ to $D$. Assign $key(D,\F._A)\gets\min\set{ SLACK(v)}{v\in V(D)}$. }} \bigskip \noindent Let the new heap $\F._{B'}$ be the (updated) heap $\F._B$. Insert the \B.-children of $B$ that are no longer in \S. as entries in \F.. For the \B.-children that are inner nodes of \os. record their $\Delta_0$ value. Process \B.-children that are outer nodes of \os. as above. \bigskip The main observation for correctness of the expand procedure is that $\F._{B'}$ is the desired heap for $B'$. This follows since the small components of $B'$ are those of $B$ minus the small children of $B$. It is easy to see the total time used in the course of an entire search is $O(m+n\ifmmode \,{ \rm log}\,\else{\it log }\fi n)$. When a small child $A$ becomes maximal it is charged $O(\ifmmode \,{ \rm log}\,\else{\it log }\fi n)$ to account for its deletion from $\F._B$. For $D$ a small component of $A$, each vertex $v\in V(A)$ is charged $O(1)$ for resetting $node(v)$ and examining $SLACK(v)$. (The new $node(v)$ values are easily found by traversing the subtree of $A$ in the blossom tree \B.. 
The traversal uses time proportional to the number of leaves, i.e., $O(1)$ time for each vertex $v$.) $v$ moves to a new small component $O(\ifmmode \,{ \rm log}\,\else{\it log }\fi n)$ times so this charge totals $O(n\ifmmode \,{ \rm log}\,\else{\it log }\fi n)$. Finally and most importantly, $decrease\_key$ uses $O(1)$ amortized time in a Fibonacci heap. \paragraph*{$f$-factors} Two new aspects of $f$-factors are that matched edges needn't be tight and edges can be in $I$-sets. We will use some simple facts about $I$-sets. \def\xi{($i$) } \def\xii{($ii$) } \begin{lemma} \label{IBconIALemma} Consider blossoms $A,B$ with $V(A)\subseteq V(B)$, and edge $e\in \delta(A)\cap \delta(B)$. \xi $e=\eta(A)\ifmmode {\ \Longleftrightarrow \ }\else{$\ \Longleftrightarrow \ $}\fi e=\eta(B)$. \xii $e\in I(A) \ifmmode {\ \Longleftrightarrow \ }\else{$\ \Longleftrightarrow \ $}\fi e\in I(B)$. \end{lemma} \begin{proof} \xi Consider three cases for $A$. \case{$A\not\subseteq \alpha(B)$} This makes $\eta(A)\in \gamma(B)$. So $e\in \delta(B)$ implies $e\ne \eta(A)$. Also $e\in \delta(A)$ implies $e\ne \eta(B)$. \case{$A= \alpha(B)$} This makes $\eta(A)= \eta(B)$. Hence $e=\eta(A)$ iff $e= \eta(B)$. \case{$A\subset \alpha(B)$} Edge $e$ of the hypothesis is in $\delta(A)\cap \delta(\alpha(B))$. By induction $e=\eta(A)\ifmmode {\ \Longleftrightarrow \ }\else{$\ \Longleftrightarrow \ $}\fi e=\eta(\alpha(B))$. Since $ \eta(\alpha(B))=\eta(B)$ this implies \xi. \bigskip \iffalse The two possibilities are $\eta(A)\in \gamma(B)$ and $\eta(A)\ne \eta(B)$ or $\eta(A)\in \delta(B)$ and $\eta(A)=\eta(B)$. Now take edge $e\in \delta(A)\cap \delta(B)$. It is easy to check $e=\eta(A)\ifmmode {\ \Longleftrightarrow \ }\else{$\ \Longleftrightarrow \ $}\fi e=\eta(B)$ in both cases.
\fi ($ii$) By ($i$) there are two possibilities: \case{$e\ne \eta(A),\eta(B)$} $e\in I(A)\ifmmode {\ \Longleftrightarrow \ }\else{$\ \Longleftrightarrow \ $}\fi e\in M \ifmmode {\ \Longleftrightarrow \ }\else{$\ \Longleftrightarrow \ $}\fi e\in I(B)$. \case{$e=\eta(A)=\eta(B)$} $e\in I(A)\ifmmode {\ \Longleftrightarrow \ }\else{$\ \Longleftrightarrow \ $}\fi e\notin M \ifmmode {\ \Longleftrightarrow \ }\else{$\ \Longleftrightarrow \ $}\fi e\in I(B)$. \end{proof} Now observe an edge $e=uv\in I(B_v)$ has \begin{equation} \label{zContributionEqn} z_0\set{A}{V(A)\subseteq V(B_v),\, e \in I(A)}= z_0\set{A}{v\in V(A)\subseteq V(B_v)}=2(\ZY(v)-\ZY(B_v)). \end{equation} The second equation is trivial and the first follows immediately from part ($ii$) of the lemma. The analog of the previous definition of $slack$ is \begin{equation} \label{fFactorSlackDefnEqn} slack(v)=\min\set {|\H{yz}(uv)-w(uv)| }{uv\in E \mbox{ eligible at $u$}}. \end{equation} (Recall Lemma \ref{AlwaysEligible} and its terminology.) As in Lemma \ref{fDualsPreservedLemma} define a sign $\sigma$ as $-1$ if $uv\in M$ else $+1$, so any edge $uv$ has $|\H{yz}(uv)-w(uv)| =\sigma(\H{yz}(uv)-w(uv))$. The highest level outline of the data structure is as before: We track $slack$ by maintaining the invariant \eqref{SlackEqn}, where the stored quantity $SLACK(v)$ will be defined below. We define keys in $\F._B$ and $\F.$ exactly as before, e.g., (\ref{AKeyEqn}). The invariant implies that for any blossom $B$ not in \os., the current smallest $slack$ of a grow step edge to $B$ is $find\_min(\F._B ) -\Delta+\ZY(B)$. So the data structure gives the correct value for the next dual adjustment. Our definition of $SLACK(v)$ involves two quantities $IU(uv)$ and $IV(uv)$ that account for the contributions of $I$-edges to the slack of $uv$, $IU$ at the $u$ end and $IV$ at the $v$ end. We will define $IU$ and $IV$ to be fixed, stored quantities so the following relations hold.
At any time when $v\notin \S.$, and $B_v$ is the maximal blossom/vertex currently containing $v$, \begin{equation} \label{vSlackInvariantEqn} y(v)+z\set{A}{v\in V(A),\, uv \in I(A)}=Y(v)+IV(uv)+\sigma\ZY(B_v). \end{equation} At any time after $uv$ becomes eligible at $u$, \begin{equation} \label{uSlackInvariantEqn} y(u)+z\set{A}{u\in V(A),\, uv \in I(A)}=Y(u)+IU(uv)-\sigma\Delta. \end{equation} We reiterate that the only terms on the right-hand side of these two equations that change with time are $\ZY(B_v)$ and $\Delta$. Now define \[SLACK(v)= \min \set{\sigma(Y(u)+Y(v)+IU(uv)+IV(uv)-w(uv))} {uv\in E \text{ eligible at }u}.\] Let us show the above relations imply the desired invariant \eqref{SlackEqn} for $SLACK$. Adding the two equations and multiplying by $\sigma$ implies that at any point in time when $uv$ is eligible and $v\notin S$, \begin{equation*} \label{vNotinSEqn} |\H{yz}(uv)-w(uv)|= \sigma(Y(u)+IU(uv)+Y(v)+IV(uv)-w(uv))-\Delta+\ZY(B_v). \end{equation*} Applying this for every edge $uv$ in the definition of $SLACK$ gives \eqref{SlackEqn} as desired. \iffalse Let us first give a simple convention to handle the case where a vertex $x\in \{u,v\}$ is atomic. For atomic $B_x$ define $\eta(B_x)=\emptyset$ and $I(B_x)=\delta(x,M)$. This causes no harm since $z(B_x)=0$. \fi It remains to give $IV$ and $IU$. The contribution at the nonouter end $v$ is defined by \[IV(uv)=\begin{cases} 0&uv\notin M\cup\eta(B_v)\\ 2\ZY(v)&uv\in M-\eta(B_v)\\ 2\ZY(B_v)&uv=\eta(B_v)\in M\\ 2(\ZY(v)-\ZY(B_v))&uv=\eta(B_v)\notin M. \end{cases} \] To discuss this definition we will use the following terminology. Recall that the algorithm computes $IV(uv)$ when $uv$ becomes eligible at $u$. $IV(uv)$ is defined using the blossom/vertex $B_v$ at that time. However we must verify \eqref{vSlackInvariantEqn} whenever $v\notin \S.$, so $B_v$ may change. 
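For concreteness, the four cases of the $IV$ definition can be transcribed into a hypothetical helper. The boolean arguments record whether $uv\in M$ and whether $uv=\eta(B_v)$ for the defining $B_v$, and \verb|zy_v|, \verb|zy_Bv| stand for $\ZY(v)$ and $\ZY(B_v)$; these names are ours, not the paper's.

```python
def IV(uv_matched, uv_is_base, zy_v, zy_Bv):
    # The four cases of the definition of IV(uv) at the nonouter end v.
    if not uv_matched and not uv_is_base:
        return 0                      # uv not in M and uv != eta(B_v)
    if uv_matched and not uv_is_base:
        return 2 * zy_v               # uv in M - eta(B_v)
    if uv_matched:
        return 2 * zy_Bv              # uv = eta(B_v), uv in M
    return 2 * (zy_v - zy_Bv)         # uv = eta(B_v), uv not in M
```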
To keep the two cases straight say the {\em defining $B_v$} is used to compute $IV(uv)$, and a {\em useful $B_v$} is one that may be required later on in \eqref{vSlackInvariantEqn} to establish the invariant \eqref{SlackEqn} for the algorithm. The defining $B_v$ is useful iff $v\notin \S.$ when $IV(uv)$ is computed. Clearly a useful $B_v$ is a subset of the defining $B_v$, but we shall see that not every such $B_v$ is useful. To prove the definition is correct we will analyze each of its four cases separately. We will show that if the defining $B_v$ is in that case, so is every useful $B_v$. Then we will show \eqref{vSlackInvariantEqn} is satisfied for every useful $B_v$. To do this we will compute the value of the left-hand side of \eqref{vSlackInvariantEqn} and deduce the correct value of $IV(uv)$ by comparing to the right-hand side. To begin the analysis, note that whenever $v\notin \S.$ the current value of $y(v)$ is \[Y(v)+\ZY(B_v)\] since every dual adjustment increases $y(v)$ by $\delta$. Also when $uv\in I(B_v)$ the $z$ contribution to the left-hand side of \eqref{vSlackInvariantEqn} is \[z_0\set{A}{v\in V(A)\subseteq V(B_v)}=2(\ZY(v)-\ZY(B_v)),\] by \eqref{zContributionEqn}. \case{$uv\notin M\cup\eta(B_v)$} We assume this case holds for the defining $B_v$. So for any useful $B_v$, say $B$, $uv$ is an unmatched edge and $uv\ne \eta(B)$ (by Lemma \ref{IBconIALemma}\xi). So this case holds for every useful $B$. Now we establish \eqref{vSlackInvariantEqn} for any useful $B_v$. The contribution to the left-hand side of \eqref{vSlackInvariantEqn} is $y(v)=Y(v)+\sigma\ZY(B_v)$. This follows since this case has $uv\notin I(B_v)$ (so there is no $z$ contribution) and $\sigma=1$ (since $uv\notin M$). Comparing to the right-hand side of \eqref{vSlackInvariantEqn} shows $IV(uv)=0$, as desired. \case{$uv\in M-\eta(B_v)$} We assume this holds for the defining $B_v$. So any useful $B_v$ has $uv$ matched and not its base edge (by Lemma \ref{IBconIALemma}\xi). 
Thus this case holds for any useful $B_v$. Now consider any useful $B_v$. If $B_v$ is a blossom then $uv\in I(B_v)$. So the $z$ contribution is $2(\ZY(v)-\ZY(B_v))$. This also holds if $B_v$ is atomic, since the $z$ contribution is 0. Since $uv\in M$, $\sigma=-1$. Adding the $y$ and $z$ contributions to the left-hand side of \eqref{vSlackInvariantEqn} gives total contribution \[(Y(v)+\ZY(B_v))+2(\ZY(v)-\ZY(B_v))= Y(v)+2\ZY(v) +\sigma\ZY(B_v).\] Thus $IV(uv)=2\ZY(v)$, independent of $B_v$. \bigskip The next two cases have $uv=\eta(B_v)$ for the defining $B_v$. If $v\in \S.$ at this point then wlog $B_v$ is inner. Since $v=\beta(B_v)$, $v$ will remain in \S. for the rest of the search. So $uv$ is irrelevant to the data structure. If $v\notin \S.$ then $B_v$ is itself the first useful $B_v$. The first time this $B_v$ becomes a node of \os., the preceding argument applies. It shows there are no other useful $B_v$'s. In summary we have shown for the next two cases, every useful $B_v$ belongs to the same case. \case{$uv=\eta(B_v)\in M$} Since $uv\notin I(B_v)$ there is no $z$ contribution (by Lemma \ref{IBconIALemma}\xii). So the total contribution is $y(v)=Y(v)+\ZY(B_v)=Y(v)+2\ZY(B_v) +\sigma\ZY(B_v)$. Thus $IV(uv)= 2\ZY(B_v)$. \case{$uv=\eta(B_v)\notin M$} This makes $uv\in I(B_v)$ so there is a $z$ contribution. The total contribution is \[(Y(v)+\ZY(B_v))+2(\ZY(v)-\ZY(B_v))= Y(v)+2(\ZY(v)-\ZY(B_v))+\sigma\ZY(B_v).\] Thus $IV(uv)=2(\ZY(v)-\ZY(B_v))$. \bigskip \begin{figure}[t] \centering \input{fBlossomB4.pstex_t} \caption{Precursor to structure of Fig.\ref{fBlossomFig}.} \label{fBlossomFigB4} \end{figure} \remark {It might seem that the cases for $uv=\eta(B_v)$ are subject to a simplification because this edge is often tight. Specifically if $B_v$ was not a maximal blossom at the beginning of the current search then $\eta(B_v)$ is tight when the search starts. So $\eta(B_v)$ will be tight when $B_v$ becomes maximal. 
However this need not be the case when $\eta(B_v)$ becomes eligible. For instance suppose a search starts out with the structure of Fig.\ref{fBlossomFigB4}. Then the inner blossom $B_5$ gets expanded to give part of Fig.\ref{fBlossomFig}, where $\alpha_2=\eta_4=\eta(B_4)$. As mentioned (in the Examples section after Fig.\ref{fAlgExFig}) a dual adjustment makes $\alpha_2$ strictly underrated. A subsequent expansion of $B_3$ may make $\alpha_2$ eligible, but still underrated.} \bigskip The contribution at the \os. end $u$ is \[IU(uv)=\begin{cases} \ZY(B_u)-\Delta_0(B_u)&B_u \text{ inner}, uv\in M\\ 2\ZY(u)-\ZY(B_u)+\Delta_0(B_u)&B_u \text{ inner}, uv=\eta(B_u)\notin M\\ 0&B_u \text{ outer}, uv\notin M\\ 2\big(\ZY(u)-\ZY(B_u)-2\Delta_0(O_u)+\Delta_0(B_u)\big)&B_u \text{ outer}, uv\in M. \end{cases} \] $O_u$ is defined below. To verify correctness let $\Delta_0$ be the value of $\Delta$ when $uv$ first becomes eligible for (any) $B_u$. We will show \eqref{uSlackInvariantEqn} holds at that point. Thereafter, $uv$ remains eligible (Lemma \ref{AlwaysEligible}), so \eqref{DdeMeqn} shows the left-hand side of \eqref{uSlackInvariantEqn} changes by $-\sigma\delta$ in every dual adjustment, as does the right-hand side. Thus \eqref{uSlackInvariantEqn} continues to hold in every dual adjustment. \case{$B_u \text{ inner}, uv\in M$} This makes $uv\notin I(B_u)$. (There are two cases: If $B_u$ is a blossom then $uv=\eta(B_u)$ since $uv$ is eligible. If $B_u$ is atomic then $I(B_u)=\emptyset$.) Thus the contribution is \[y(u)=Y(u)+\ZY(B_u) =Y(u)+ \ZY(B_u) -\Delta_0(B_u)-\sigma\Delta_0.\] Thus $IU(uv)=\ZY(B_u)-\Delta_0(B_u)$. \case{$B_u \text{ inner}, uv=\eta(B_u)\notin M$} This makes $B_u$ a blossom and $uv\in I(B_u)$. The contribution for $y(u)$ is the same as the previous case. The contribution for $z$ is $2( \ZY(u)-\ZY(B_u))$. The total contribution is $(Y(u)+\ZY(B_u))+2(\ZY(u)-\ZY(B_u)) = Y(u)+2\ZY(u)-\ZY(B_u)+\Delta_0(B_u) -\sigma\Delta_0$. 
Thus $IU(uv)=2\ZY(u)-\ZY(B_u)+\Delta_0(B_u)$. \bigskip We are left with the case where $uv$ first becomes eligible when $u$ enters an outer node. Furthermore $uv\ne \eta(B_u)$ when $B_u$ is a blossom. To prove the latter, the preceding two cases apply if blossom $B_u$ enters \os. as inner. If $B_u$ enters as outer clearly $\eta(B_u)=\tau(B_u)\in \os.$. \iffalse It is helpful to enumerate the possiblities for this case: \bigskip $B_u$ becomes inner in a grow step: A subsequent blossom step creates $O_u\supset B_u$. If $B_u$ is a blossom then $uv\ne \eta(B_u)$ (since $\eta(B_u)\in \gamma(B_u)$). If $B_u$ is an atom then $uv\notin M$ (since $uv\in M$ was treated in the first case). $B_u$ becomes outer in a grow step: So $B_u=O_u$. If $B_u$ is a blossom then $uv\ne \eta(B_u)$ (since $B_u$ is outer). If $B_u$ is atomic then $uv\notin M$. \bigskip \fi Let $O_u$ be the first outer node that contains $B_u$. Let $\Delta_0(O_u)$ be the value of $\Delta$ when $O_u$ is formed. So $\Delta_0=\Delta_0(O_u)$. Recall that when $O_u$ is formed we redefine $Y(u)$ to be the current value of $y(u)$ plus $\Delta_0(O_u)$. Hence at any time after $O_u$ is formed $B_u$ is outer and \[y(u)=Y(u)-\Delta.\] Also the only $z$ contribution comes from $B_u$ (since we assume $\Delta=\Delta_0$). \iffalse Note this case applies to $B_u$ that is a blossom that becomes outer in a grow step, or a blossom that becomes inner in a grow step (If it were Since $uv\notin I(O_u)$ \fi \case{$uv$ becomes eligible for $O_u$, $uv\notin M$} There is no $z$ contribution. (This is by definition if $B_u$ is atomic. If $B_u$ is a blossom we have noted $uv\ne \eta(B_u)$.) So the total contribution is $y(u)=Y(u)-\Delta_0(O_u)= Y(u)-\sigma\Delta_0$. Thus $IU(uv)=0$. \case{$uv$ becomes eligible for $O_u$, $uv\in M$} First suppose $B_u$ is a blossom. This case makes $uv\in I(B_u)$. 
When $B_u$ becomes an \os.-node (outer or inner) the $z$ contribution is \[z_0\set{A}{u\in V(A)\subseteq V(B_u)}.\] If $B_u$ enters as an inner node and is later absorbed in an outer node, this $z$ contribution decreases by \[2(\Delta_0(O_u)-\Delta_0(B_u)).\] This also holds if $B_u$ enters as outer. (The latter may occur in a grow step that adds $B_u=O_u$, or in an expand step that makes $B_u$ maximal and outer.) It is possible that $B_u$ is an atom. We must have $B_u$ outer, by the first case. An atom has no $z$ contribution. This is consistent with the two displayed $z$ contributions, since they are both 0 for an atom $B_u$ ($B_u=O_u$). So in all cases, the left-hand side of \eqref{uSlackInvariantEqn} is \begin{eqnarray*} &&(Y(u)-\Delta_0(O_u)) + 2( \ZY(u)-\ZY(B_u)-(\Delta_0(O_u)-\Delta_0(B_u)) )\\ &=&Y(u) +2(\ZY(u)-\ZY(B_u)-2\Delta_0(O_u)+\Delta_0(B_u)) -\sigma\Delta_0. \end{eqnarray*} Thus $IU(uv) =2(\ZY(u)-\ZY(B_u)-2\Delta_0(O_u)+\Delta_0(B_u)) $. \bigskip \iffalse We now define $SLACK(v)$ using an auxiliary quantity $s(uv)$: \begin{eqnarray} \label{SlackDfnsEqn} SLACK(v)&=& \min \set{s(uv)} {uv\in E \text{ eligible at }u}\\ s(uv)&=&\sigma(Y(u)+Y(v)+IU(uv)+IV(uv)-w(uv)) +\begin{cases} 0&v\notin \S. \text{ when $uv$ becomes eligible}\\ \Delta_0(B_v)+z_0(B_v)/2 &v\in \S. \text{ when $uv$ becomes eligible.} \end{cases} \end{eqnarray} In this definition $B_v$ is the blossom containing $v$ when $uv$ becomes eligible. We must show this definition satisfies the invariant \eqref{SlackEqn}, We have done this already for the first case of s(uv). So now suppose v\in \S. \text{ when $uv$ becomes eligible. B_v gets expanded when \Delta_0(B_v)+z_0(B_v)/2. 
At that point \eqref{uSlackInvariantEqn} shows y(u)+z\set{A}{u\in V(A),\, uv \in I(A)}=Y(u)+IU(uv)-\sigma \Delta_0(uv) Since uv is eligible for B_u at any \Delta\ge \Delta_0(uv) y(u)+z\set{A}{u\in V(A),\, uv \in I(A)}=Y(u)+IU(uv)-\sigma \Delta_0(uv) -\sigma(\Delta-\Delta_0(uv)) Combining with \eqref{vSlackInvariantEqn} as before shows \label{vNotinSEqn} |\H{yz}(uv)-w(uv)|= \sigma(Y(u)+IU(uv)+Y(v)+IV(uv)-w(uv))-\Delta_0(uv)+\ZY(B_v). The expression for a given edge $uv$ is computed as follows. If $B_v\notin \os.$ we use the quantities $IU(uv)$, $IV(uv)$ as defined above. If $B_v$ is an inner node then we must compute this expression at the time $B_v$ gets expanded and $v$ leaves \os. ($v$ leaves \S. in this expand step (Fig.\ref{fMAlg}), even if it gets added back to \S. (Fig.\ref{fExpandAlg}). It is possible that $B_v$ is never expanded, but this occurs only if the search ends before the projected expansion time. Clearly using the projected expansion time in this case poses no harm.) A simple approach is to delay the computation until the expansion occurs. Alternatively we can maintain the organization of updating $SLACK$ when the edges become eligible, as follows. The expansion occurs when $\Delta=\Delta_0(B_v)+z_0(B_v)/2$ so $IV(uv)$ is computed for the blossom $B'_v$ containing $v$ at this time -- $B'_v$ is the maximal blossom/vertex containing $v$ that is properly contained in $B_v$. {} \fi The only changes to the algorithm are the obvious ones for examining edges: Matched edges must be examined and added to the data structure. $IU$ and $IV$ quantities must be computed. It is easy to see the latter uses $O(1)$ time per edge. So the timing estimate is not affected. \iffalse $uv$ when an inner node $B_u$ is added to \os.. If $B_u$ is a blossom the new eligible edge $uv=\eta(B_u)$ is examined. If $B_u$ is atomic then every matched edge incident to $u$ is examined. \fi \iffalse Next consider an arbitrary new node $B_u$. 
When any newly eligible edge $uv$ is examined the value for updating $SLACK(v)$ involves the quantity $IV(uv)$. It chosen as follows . \fi \iffalse Let $B'_v$ be the blossom/vertex containing $v$ when $B_v$ ceases to be an inner node. ($B'_v$ exists as long as the search does not halt with $B_v$ inner.) If $B'_v$ is outer (i.e., a blossom step puts $v$ in the new outer blossom) then $uv$ is no longer in the data structure (it is eligible at both ends). Otherwise the search is executing an expand step for $B_v$, and $B'_v$ is the maximal blossom/vertex containing $v$ and properly contained in $B_v$. As noted above $B'_v\notin \os.$ when the expand step removes $B_v$. So we compute $s(uv)$ by there are two possibilities for $B_v$ We compute $i_v(uv)$ using blossom $B'_v\notin \os.$. We compute $i_u(uv)$ as defined, but increase $s(uv)$ by the amount the contribution at the $u$ end has increased after $uv$ has become eligible. Every dual adjustment changes the contribution by $-\sigma \delta$, and the total of all dual adjustments made until $B_v$ gets expanded is $z(B_v)/2-(\Delta_0-\Delta_0(B_v))$. So we increase $s(uv)$ by $-\sigma(z(B_v)/2+\Delta_0(B_v)-\Delta_0)$. To simplify this expression compute $i_v(uv)$ using blossom $B_v$. The term $\sigma\ZY(B_v)$ in \eqref{vSlackInvariantEqn} equals $\sigma\ZY(B'_v)-z_0(B_v)/2$ in \fi \iffalse For the time bound note that $B'_v$ is easily found in $O(1)$ time: Suppose $node(v)$ is in the F-heap $\F._B$. If $B$ is the parent of $node(v)$ in \B. then $B'_v=node(v)$. Otherwise $B'_v$ is the big child of $B$. \fi \iffals It remains to define $SLACK_u$ and $SLACK_m$. This involves the second new aspect which creates 4 possibilities -- an edge $uv$ may or may not belong to each of the sets $I(B_u)$, $I(B_v)$. ($B_u$ or $B_v$ may be atomic, so to simplify notation we assume atoms have empty $I$ sets.) \iffalse Since $uv$ can be in $I(B)$ for $B$ equal to $B_u$ or $B_v$ there are 4 possibilities. 
Since such $B$ are heavy with $uv=\eta(B)$, $uv\in I(B_u)$ requires $B_u$ to be an inner blossom. \fi For any $\B.$-node $B$ with a vertex $u\in V(B)$ we define the amount that $B$ and \B.-blossoms contained in $B$ contribute to $\H{yz}(uv)$, for every edge $uv\in I(B)$, at the start of the search: \[ \zI(u,B)=z_0\,\set{A}{A \text{ a blossom},\ u\in V(A)\subseteq V(B)} =2(Z\_TO\_Y(u)-Z\_TO\_Y(B)). \] This formula follows from the fact that $u\in V(A)\subseteq V(B)$ implies $\delta(u,I(B)) \subseteq \delta(u,I(A))$. (This in turn follows from the observation that $uv=\eta(A) $ iff $uv=\eta(B)$.) $SLACK_u$ and $SLACK_m$ include quantities $i_u(uv)$ and $i_v(uv)$ that gives the contribution to $\H{yz}(uv)$ from sets $I(B_u)$ and $I(B_v)$ respectively. Generalizing the previous definition of $SLACK$ gives \begin{equation*} \label{SlackDfnsEqn} SLACK_u(v)&=&\min \set{Y(u)+Y(v)+i_u(uv)+i_v(uv)-w(uv)}{uv\in E-M \text{ eligible for }B_u}\\ SLACK_m(v)&=&\min \set{w(uv)- (Y(u)+Y(v)+i_u(uv)+i_v(uv))} {uv\in M \text{ eligible for }B_u}. \end{equation*} Unlike the definition of $SLACK$ these definitions include quantities that can change over the course of the algorithm -- blossom $B_u$ and the $i(uv)$ terms. We shall verify that there will be no need to recompute $SLACK_u$ or $SLACK_m$: the choice of $B_u$ does not change the term for $uv$. The algorithm will add $uv$ to the set defining $SLACK_u$ or $SLACK_m$ when the first qualifying $\overline G$-node $B_u$ is formed. $SLACK_u$ or $SLACK_m$ \[ i(uv)=\begin{cases} 0&uv\notin I(B_u)\cup I(B_v)\\ \zI(v,B_v)&uv\in I(B_v)-I(B_u)\\ Z\_TO\_Y(B_u)+\zI(u,B_u)+\Delta_0(B_u)&uv\in I(B_u)-I(B_v)\\ Z\_TO\_Y(B_u)+\zI(u,B_u)+\Delta_0(B_u)+\zI(v,B_v) &uv\in I(B_u)\cap I(B_v) \end{cases} \] The definition of $i(uv)$ depends on whether or not $uv$ is matched. First consider grow steps for unmatched edges $uv$. \iffalse To define $i(uv)$ we first review the definition of sets $I(B_u)$ and $I(B_v)$ in our context. 
\bigskip \[\begin{array}{ll} uv \in I(B_v) &\text{if $B_v$ is a heavy blossom with $uv=\eta(B_v)$}\\ uv \notin I(B_v)&\text{if $B_v$ is an outer atom or a blossom with $uv\ne \eta(B_v)$}\\ uv \in I(B_u) &\text{if $B_u$ is a heavy inner blossom with $uv=\eta(B_u)$}\\ uv \notin I(B_u) &\text{if $B_u$ is an atom or a blossom with $uv\ne \eta(B_u)$} \end{array} \] \bigskip \fi Recall that an unmatched edge is in a set $I(B)$ only if it is the base edge of $B$ (so $B$ is a heavy blossom). When $uv$ is eligible for $B_u$ this means $B_u$ must be an inner blossom. For any currently inner blossom $B$ let $\Delta_0(B)$ be the value of $\Delta$ when $B$ became inner (in a grow or expand step). \[ i(uv)=\begin{cases} 0&uv\notin I(B_u)\cup I(B_v)\\ \zI(v,B_v)&uv\in I(B_v)-I(B_u)\\ Z\_TO\_Y(B_u)+\zI(u,B_u)+\Delta_0(B_u)&uv\in I(B_u)-I(B_v)\\ Z\_TO\_Y(B_u)+\zI(u,B_u)+\Delta_0(B_u)+\zI(v,B_v) &uv\in I(B_u)\cap I(B_v) \end{cases} \] Note the algorithm can determine which case applies to $uv$ since the data structure of Section \ref{fBlossomSec} records the base edge $\eta(B)$, as well as the matching. \iffalse \[ i(uv)=\begin{cases} 0&B_u \text{ outer}, uv\ne \eta(B_v)\\ z\_to\_I(v,B_v)&B_u \text{ outer}, uv=\eta(B_v)\\ Z\_TO\_Y(B_u)+z\_to\_I(u,B_u)+\Delta_0(B_u)&B_u \text{ inner}, uv\ne \eta(B_v)\\ Z\_TO\_Y(B_u)+z\_to\_I(u,B_u)+\Delta_0(B_u)+z\_to\_I(v,B_v) &B_u \text{ inner}, uv=\eta(B_v) \end{cases} \] \fi To prove the invariant (\ref{SlackEqn}), it clearly holds in the first two alternatives, which have $B_u$ outer, so assume $uv\in I(B_u)$. This makes $B_u$ an inner blossom. $B_u$ becomes inner when $\Delta=\Delta_0(B_u)$. The definitions of $Z\_TO\_Y$ and $z\_to\_I$ easily show (\ref{SlackEqn}) holds at that point. Thereafter each dual adjustment by $\delta$ decreases $\H{yz}(uv)$ by $\delta$. So for any $\Delta\ge \Delta_0(B_u)$, both sides of (\ref{SlackEqn}) decrease by $\Delta-\Delta_0(B_u)$. Thus (\ref{SlackEqn}) continues to hold. 
\iffalse , and after that $y(u)+z(B_u)$ increases by $(\Delta-\Delta_0(B_u)) - 2(\Delta-\Delta_0(B_u))= \Delta_0(B_u)-\Delta$. Now it is easy to check (\ref{SlackEqn}). \fi Note this argument may apply to various inner blossoms $B_u$. Let us show that as mentioned, the term for $uv$ in $SLACK_u(v)$ does not change with $B_u$. Suppose the current $B_u$ changes to $B$. This occurs when $B_u$ is either expanded or absorbed in a blossom step. If $B_u$ is absorbed in a blossom then $uv=\eta(B_u)$ is in the blossom cycle, so $v$ is no longer part of the grow step data structure. If $B_u$ is expanded, $z(B_u)$ has decreased to 0. $B$ is either another inner blossom with $uv=\eta(B)$ or an outer atom (by Definition \ref{FBlossomDefn}). For $B$ inner $uv$ is eligible for $B$ and $i(uv)$ does not change ($Z\_TO\_Y$ increases by $z_0(B_u)/2$, $z\_to\_I$ decreases by $z_0(B_u)$, $\Delta_0$ increases by $z_0(B_u)/2$). For $B_u$ outer $uv$ remains eligible, and we already know that the new $i(uv)$ gives the desired value for the current point dual variables, just as it did for $B_u$. \iffalse we have already verified that each the term for each $B_u$ gives the value of $\Delta$ when a grow step can be done for $uv$ (assuming This is based on the assumption that $B_u$ does not change and every \fi \iffalse Note also that various blossoms $B_u$ can satisfy the condition $B_u$ inner with $uv=\eta(B_u)$. But when $v$ is not in \S. all such $B_u$ give rise to the same quantity $i(uv)$, so it suffices to update $SLACK(v)$ for $uv$ the first time the condition holds.) \fi \iffalse Recall that an unmatched edge is in a set $I(B)$ only if it is the base edge of $B$ (so $B$ is a heavy blossom). When $uv$ is eligible for $B_u$ this means $B_u$ must be an inner blossom. To handle this consider a blossom $B$ of \B. and a vertex $u\in V(B)$. 
For an edge $uv\in I(B)-M$ we define the amount that blossoms contained in $B$ contribute to $\H{yz}(uv)$ at the start of the search: \[ z\_to\_I(u,B)=\sum\set{z_0(A)}{A\in \B.,\ u\in V(A)\subseteq V(B)} =2(Z\_TO\_Y(u)-Z\_TO\_Y(B)). \fi Now consider grow steps for matched edges $uv$. We define $SLACK_m$ similarly to $SLACK_u$: A matched edge is in a set $I(B)$ only if it is not the base edge of $B$, which must be a blossom. When $uv$ is eligible for $B_u$ this implies $B_u$ is an outer blossom. If $u$ is in an outer node $B_u$ let $O_u$ be the maximal \B.-node with $u\in V(O_u)\subseteq B_u$. (Note $O_u$ may be an atom.) In other words $O_u$ either became an outer node in a grow step, or $O_u$ was an inner node that was part of the cycle in a blossom step. Let $\Delta_0(O_u)$ be the value of $\Delta$ when this grow or blossom step was executed. Note that $\Delta_0(O_u)$ is the value $\Delta_0$ used to define $Y(u)$ (Section \ref{bAlgSec}). If $B_u$ is an inner blossom then, as before, $\Delta_0(B_u)$ is the value of $\Delta$ when $B_u$ became inner. \[ i(uv)=\begin{cases} Z\_TO\_Y(B_u)-\Delta_0(B_u)& uv\notin I(B_u)\cup I(B_v)\\ Z\_TO\_Y(B_u)-\Delta_0(B_u)+\zI(v,B_v)&uv\in I(B_v)- I(B_u)\\ \zI(u,O_u) -2\Delta_0(O_u)&uv\in I(B_u)- I(B_v)\\ \zI(u,O_u) -2\Delta_0(O_u)+\zI(v,B_v)&uv\in I(B_u)\cap I(B_v) \end{cases} \] As before the algorithm can determine which case applies to $uv$. To justify this definition first consider the case $uv\notin I(B_u)$ (i.e., the first two alternatives). With $uv$ eligible for $B_u$ this means $B_u$ is inner (either atomic or a blossom). The argument is the same as before: $B_u$ becomes inner when $\Delta=\Delta_0(B_u)$, and it is easy to see (\ref{SlackEqn}) holds at that point. Thereafter each dual adjustment preserves (\ref{SlackEqn}). As before, if $B_u$ is an inner blossom that gets expanded, the term for $uv$ does not change (the new node containing $u$ is inner, $Z\_TO\_Y$ and $\Delta_0$ both increase by $z_0(B_u)/2$).
Now consider the case $uv\in I(B_u)$, i.e., $B_u$ an outer blossom. The \B.-node $O_u$ contributes $z\_to\_I(u,O_u)$ to $z$ terms in $\H{yz}(uv)$, and the outer blossoms containing $O_u$ contribute $2(\Delta-\Delta_0(O_u))$. \iffalse Also $y(u)$ decreases by $\Delta-\Delta_0(B_u)$ in the outer blossoms containing $B^0_u$. \fi This gives total contribution to $y$ and $z$ terms of $z\_to\_I(u,O_u)$ plus $(Y(u)-\Delta)+ 2(\Delta-\Delta_0(O_u))= Y(u) + \Delta -2\Delta_0(O_u)$, so the last 2 alternatives of $i(uv)$ are correct. Note also that we can compute $i(uv)$ in these 2 cases at the moment $uv$ becomes eligible. \iffalse after $B_u$ becomes outer, when $\Delta=\Delta_0(B_u)$, $y(u)+z(B_u)$ increases by $ (\Delta_0(B_u) -\Delta) +2(\Delta-\Delta_0(B_u)) = \Delta-\Delta_0(B_u)$, Since $Y(u)=y_0(u)+\Delta_0$, the term $y_0(u)=Y(u)-\Delta_0$, $Y(u)+i(uv)$ gives the correct contribution of $B_u$ to $slack(v)+\Delta$. $Y(u)-\Delta_0$ is the contribution of $B_u$ to ,$y_0(u)=Y(u)-\Delta_0$, $= \Delta_0(B_u)-\Delta$. Note also that various blossoms $B_u$ can satisfy the condition $B_u$ inner with $uv=\eta(B_u)$. But when $v$ is not in \S. all such $B_u$ give rise to the same quantity $i(uv)$, so it suffices to update $SLACK(v)$ for $uv$ the first time the condition holds.) \fi \iffalse Every maximal blossom that is not in \os. has an entry in \F. whose key specifies when the blossom can be made inner in a grow step. The key is updated each time we scan an unmatched edge from a new outer node B. An expand step for a blossom B does split operations on the list corresponding to replace $L_B$ by the sublists corresponding to the blossoms in $\A.(B)$. the csts of these lists give the key for the correspondin grow step. Each blossom $B$ has a corresponding list $L_B$. It is the concatenation of the lists of all its subblossoms. Introduce an artificial root to is used to number the vertices so every blossom (maximal or not( Each current list has a unique identifier.
Elements of the list Each element $x$ has a value $\ell(x)$ that specifies its current list. These values are maintained using the ``relabel-the-smaller-half'' strategy -- split(x) assigns a new identifier to the sublist that is smaller, and it updates L(x) for every x in that list. An array c[1..n] stores the cost of each element. The costs The array is partitioned into packets of \ifmmode \,{ \rm log}\,\else{\it log }\fi n consecutive entries. A packet is ful So each list corresponds to a set of consecutive entries that is consists of a short set of <\ifmmode \,{ \rm log}\,\else{\it log }\fi n elements in the first pacekt, a number of full sets of \ifmmode \,{ \rm log}\,\else{\it log }\fi n elements constituting a packet, and a short set of <\ifmmode \,{ \rm log}\,\else{\it log }\fi n elements in the last packet. Any of these sets may be empty. \fi \iffalse an edge of the graph. (This occurs either when the search begins, or when a blossom containing both $u$ and $v$ is expanded and $u$ is not in the new $B_v$.) \fi \iffalse $uv \in I(B) \ifmmode {\ \Longleftrightarrow \ }\else{$\ \Longleftrightarrow \ $}\fi uv \in I(A) \text{ for every blossom A with }v\in A\subseteq B$. (This follows since $uv=\eta(B) \ifmmode {\ \Longleftrightarrow \ }\else{$\ \Longleftrightarrow \ $}\fi uv=\eta(A) \text{ for every blossom A with }v\in A\subseteq B$. so $uv \in \delta(B,M) \oplus\eta(B) \ifmmode {\ \Longleftrightarrow \ }\else{$\ \Longleftrightarrow \ $}\fi uv \in \delta(A,M) \oplus\eta(A)$. \fi \iffalse Similar to the analysis for $v$ we will write the value of the left-hand side of \eqref{uSlackInvariantEqn} in the form \[Y(u)+\chi-\sigma\Delta\] for an expression $\chi$, and then conclude that $\chi$ is the correct value of $i_u(uv)$. \fi \iffalse Thus the contribution is \[y(u)=Y(u)+\ZY(B_u)+(\Delta-\Delta_0(B_u))=Y(u)+ \ZY(B_u)-\Delta_0(B_u) -\sigma\Delta.\] Thus $i_u(uv)=\ZY(B_u)-\Delta_0(B_u)$. \fi \iffalse This makes $uv\in I(B_u)$. 
The contribution for $y(u)$ is the same as the previous case. The contribution for $z$ is $2( \ZY(u)-\ZY(B_u) -(\Delta-\Delta_0(B_u) )$. The total contribution is $(Y(u)+\ZY(B_u)+(\Delta-\Delta_0(B_u)) +2(\ZY(u)-\ZY(B_u)-(\Delta-\Delta_0(B_u))= Y(u)+2\ZY(u)-\ZY(B_u)+\Delta_0(B_u) -\sigma\Delta$. Thus $i_u(uv)=2\ZY(u)-\ZY(B_u)+\Delta_0(B_u)$. \fi \iffalse \case{$B_u$ \text{ outer}, uv\notin M$} Since $uv\notin I(B_u)$ there is no $z$ contribution. So the total contribution is $y(u)=Y(u)-\sigma\Delta$, and $\chi=0=i_u(uv)$. \fi \iffalse \case{$B_u \text{ outer}, uv\in M$}. This makes $uv\in I(B_u)$. When $B_u$ becomes an \os.-node the $z$ contribution is $z_0\set{A}{u\in V(A)\subseteq V(B_u)}$. If $B_u$ enters as an inner node and is later absorbed in an outer node, the $z$ contribution decreases by $2(\Delta_0(O_u)-\Delta_0(B_u))$; this also holds if $B_u$ enters as outer. Once $u$ is in an outer node it remains so, increasing the $z$ contribution by $2(\Delta-\Delta_0(O_u))$. \iffalse So the $z$ contribution is $2(\ZY(u)-\ZY(B_u)-(\Delta_0(O_u)-\Delta_0(B_u)) + \Delta-\Delta_0(O_u)) \fi So the left-hand side of \eqref{uSlackInvariantEqn} is $(Y(u)-\Delta) + 2(\ZY(u)-\ZY(B_u)-(\Delta_0(O_u)-\Delta_0(B_u)) + \Delta-\Delta_0(O_u)) =Y(u) +2(\ZY(u)-\ZY(B_u)-2\Delta_0(O_u)+\Delta_0(B_u)) -\sigma\Delta$. So $\chi=2(\ZY(u)-\ZY(B_u)-2\Delta_0(O_u)+\Delta_0(B_u)) =i_u(uv)$. \fi \iffalse the moment $uv$ becomes eligible. Furthermore they will both hold at any future point when $v$ is not in \S. but may have transitioned in and out of \S. any number of times. In proof, clearly \eqref{uSlackInvariantEqn} holds (at this future value of $\Delta$). And no matter what the value of $\Delta$, the totality of change to $v$'s slack (i.e., the left-hand side of \eqref{vSlackInvariantEqn}) is accounted for by updating the term $\ZY(B_v)$ to the new $B_v$. 
\fi \iffalse*********** \case{$v\in \S.$ when $uv$ becomes eligible} Let $B_v$ be the maximal blossom/vertex containing $v$ when $uv$ becomes eligible. Let $B'_v$ be the \B.-child of $B_v$ that is an ancestor of $v$. \eqref{vSlackInvariantEqn} becomes valid for $B'_v$ when $B_v$ gets expanded and $v$ leaves \S.. This occurs when $\Delta=\Delta_0(uv)=\Delta_0(B_v) +z_0(B_v)/2$. Clearly \eqref{uSlackInvariantEqn} is valid for this value of $\Delta$. at any future point when $v$ is not in \S. but may have transitioned in and out of \S. any number of times. In proof, clearly \eqref{uSlackInvariantEqn} shows the slack in u has decreased by $\Delta-\Delta_0(uv)$. holds (at this future value of $\Delta$). And no matter what the value of $\Delta$, the totality of change to $v$'s slack (i.e., the left-hand side of \eqref{vSlackInvariantEqn}) is accounted for by updating the term $\ZY(B_v)$ to the new $B_v$. the maximal blossom/vertex containing We calculate the slack at the instant $v$ leaves \S., i.e., when $B_v$ is projected to expand. This occurs when $\Delta=\Delta_0(B_v) +z_0(B_v)/2$. Applying \eqref{uSlackInvariantEqn} with this $\Delta$ and \eqref{vSlackInvariantEqn} at $B'_v$, the above slack becomes the last two terms in the above slack become -(\Delta_0(B_v) +z_0(B_v)/2) +\ZY(B'_v)= -\Delta_0(B_v) +\ZY(B_v) Thus at any time when $\vfill \notin \S.$ \[|\H{yz}(uv)-w(uv)|= \sigma(Y(u)+i_u(uv)+Y(v)+i_v(uv)-w(uv)) -\Delta_0(B_v) +\ZY(B_v) -\Delta+ (\Delta_0(B_v) +z_0(B_v)/2 )) -(z_0(B_v)/2 - (\Delta_0-\Delta_0(B_v)))+\ZY(B'_v)= \Delta_0-\Delta_0(B_v) +\ZY(B_v). Thus at any time when $\vfill \notin \S.$ \[|\H{yz}(uv)-w(uv)|= \sigma(Y(u)+i_u(uv)+Y(v)+i_v(uv)-w(uv)) +\Delta_0-\Delta_0(B_v) +\ZY(B_v) +(\Delta-(\Delta_0(B_v) +z_0(B_v)/2 ) \]****************** \fi \iffalse We now define the $IU$ and $IV$ quantities. A superficial reading of the definitions suggests the quantities depend on time -- in particular they involve the current blossoms $B_u, B_v$. 
To remedy this we assume the algorithm computes them at the first possible time and never recomputes them. A closer reading shows that the quantities do not depend on the time of evaluation, but this property is not needed in our exposition. \fi \iffalse There is an important difference between this definition and the previous one for $b$-matching. In the latter, the expression for $uv$ (i.e., $Y(u)+Y(v)-w(uv)$) does not change over time. Correctness of the algorithm presented for $b$-matching depends on this property. We will use essentially the same algorithm for $f$-factors. The definitions for $i_u$ and $i_v$ presented below do not immediately have the necessary time-invariance. We shall elaborate on the meaning of the definitions, and their time-invariance, when we prove the definitions are correct. For now we are content to just state the definitions. \fi \iffalse (We are using the fact that as in ordinary matching and $b$-matching, if $uv$ is eligible for $B_u$ and it gives a grow step, every dual adjustment by a quantity $\delta$ decreases $|\H{yz}(uv)-w(uv)|$ by $\delta$. For instance if $uv\notin M$ and $uv=\eta(B_u)$ then the dual adjustment increases $y(u)$ by $\delta$ and decreases $z(B_u)$ by $2\delta$, so $\H{yz}(uv)$ decreases by $\delta$.) \fi \iffalse We shall compute a quantity $s(uv)$ at that point and use it to update the value of $SLACK(v)$, which is defined as \fi \iffalse Here $\Delta_0(uv)$ is the value of $\Delta$ the first time that $v\notin \S.$ after $uv$ becomes eligible for $B_u$. (If $v\in \S.$ when $uv$ becomes eligible, $\Delta_0(uv)$ is the projected value of $\Delta$ when an expand step will be executed for $B_v$. Note that $v$ leaves \S. in this expand step (Fig.\ref{fMAlg}), even if it gets added back to \S. (Fig.\ref{fExpandAlg}). It is possible that $B_v$ is never expanded, but this occurs only if the search ends before the projected expansion time. Clearly using the projected expansion time in this case poses no harm.) 
\fi \iffalse gives the contribution at the end $u$ with $B_u\in \os.$ and $i_v(uv)$ We compute $s(uv)$ using two quantities that account for the contributions of $I$-edges to the slack: $i_u(uv)$ gives the contribution at the end $u$ with $B_u\in \os.$ and $i_v(uv)$ gives the contribution at the end $v$ where $B_v$ is nonouter. Define a sign $\sigma$ as $-1$ if $uv\in M$ else $+1$. We define \begin{eqnarray*} \label{SlackDfnsEqn} s(uv)=\sigma(Y(u)+Y(v)+i_u(uv)+i_v(uv)-w(uv)) +\begin{cases} 0&v\notin \S. \text{ when $uv$ becomes eligible}\\ z_0(B_v)/2&v\in \S. \text{ when $uv$ becomes eligible.} \end{cases} \end{eqnarray*} \iffalse \begin{eqnarray*} \label{SlackDfnsEqn} s(uv)=\begin{cases} |Y(u)+Y(v)+i_u(uv)+i_v(uv)-w(uv)|&$v\notin \S.$ \text{when $uv$ becomes eligible}\\ |Y(u)+Y(v)+i_u(uv)+i_v(uv)+z_0(B_v)/2-w(uv)|&$v\in \S.$ \text{when $uv$ becomes eligible}\\ \end{cases} \end{eqnarray*} \fi \fi \iffalse Our main auxiliary tree is the compressed tree, so we begin by reviewing the basic definitions \cite{HT, T79}. Let $T$ be a tree with root $\mathy{\rho}(T)$. The {\it size} $s(v)$ of a node $v$ is the number of its descendants. (As usual a node is a descendant of itself.) A child $w$ of $v$ is {\it light} if $s(v)\ge 2s(w)$; otherwise it is {\it heavy}. Deleting each edge from a light child to its parent partitions the nodes of $T$ into paths of nonnegative length, called the {\it heavy paths of $T$}. (An isolated node is considered a heavy path.) A node is an {\it apex} if it is not a heavy child (e.g., $\mathy{\rho}(T)$). Equivalently an apex is the highest node on some heavy path. Consider any partition of $V(T)$ into a family $\cal P$ of disjoint paths of nonnegative length. Call the highest node on each path its apex. The {\it compressed tree for $T$ and $\cal P$}, $C(T,{\cal P} )$, has nodes $V(T)$ and root $\mathy{\rho}(T)$; the parent of a node $v\ne \mathy{\rho}(T)$ is the first proper ancestor of $v$ that is an apex. 
Any apex $v$ has $s_{C(T,{\cal P} )}(v)=s_T(v)$. When $\cal P$ consists of the heavy paths of $T$ we call this the {\it compressed tree \(for $T$\)}, denoted $C(T)$. As extreme examples $C(T)$ is $T$ if $T$ is a complete binary tree, and $C(T)$ has height 1 if $T$ is a path. --------------------------------------------------------- We give a simple algorithm for the list-splitting problem introduced in \cite{} to model the expand step. Our algorithm runs in time $O(m+n\ifmmode \,{ \rm log}\,\else{\it log }\fi n)$. Asymptotically faster algorithms are presented in \cite_+ the problem is defined on a universe of $n$ elemets partitioned into linear lists. each element x is oin a list L(x) and has a cost c(x), a real number or infinity. The ost c(L) of a list L is the smallest cost of an element of L. We must process on line a sequence of operation s of 2 types: {\parindent=40pt \narrower $decrease\_cost(x,\gamma)$ -- set $c(x)$ to $\min \{c(x),\, \gamma\}$, updating $c(L(x))$ if necessary $split(x)$ -- replace $L(x)$ by the sublist of all elements up to and including $x$, and the sublist of elements following $x$, defining the costs of these lists appropriately } \fi \fi \subsection{Link operations} also in alpha.tex, alg for 0 seems missing: go to \fi \setlength{\topmargin}{-.5in} \setlength{\oddsidemargin}{0in} \setlength{\evensidemargin}{0in} \setlength{\textwidth}{6.5in} \setlength{\textheight}{9in} \usepackage{latexsym} \usepackage{epsfig} \usepackage{color \usepackage{amssymb,amsmath} \usepackage[noline,nofillcomment,noend,noresetcount,boxed,figure]{algorithm2e} \numberwithin{equation}{section} \newtheorem{theorem}{Theorem}[section] \newtheorem{lemma}[theorem]{Lemma} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{observation}[theorem]{Observation} \newtheorem{definition}{Definition}[section] \newtheorem{corollary}[theorem]{Corollary} \newtheorem{example}{Example}[section] \newtheorem{problem}[theorem]{Problem} \def\remark #1{\noindent{\bf Remark:} 
#1\\} \long\def\remarks #1{\noindent{\bf Remarks:} #1\\} \long\def\claim #1 #2{\bigskip\noindent{\bf Claim {#1}} {\it #2}\bigskip} \long\def\nclaim #1 #2{\noindent{\bf Claim {#1}} {\it #2}} \def\xclaim #1 #2{\noindent{\bf Claim {#1}} {\it #2}\bigskip} \newenvironment{proof}{\noindent{\bf Proof:}}{\hfill $\Box $\\} \newenvironment{nproof}{\noindent{\bf Proof:}}{\hfill $\Box $} \newcommand{\noindent{\bf Proof: }}{\noindent{\bf Proof: }} \newcommand{\hfill $\Box $\\}{\hfill $\Box $\\} \newcommand{\hfill $\Box $}{\hfill $\Box $} \newcommand{\hfill $\diamondsuit$\\}{\hfill $\diamondsuit$\\} \newcommand{\hfill $\diamondsuit\ \Box$\\}{\hfill $\diamondsuit\ \Box$\\} \newcommand{\hfill $\diamondsuit$}{\hfill $\diamondsuit$} \newcommand{\hfill $\triangle$\\}{\hfill $\triangle$\\} \newcommand{\noindent{\bf Remark: }}{\noindent{\bf Remark: }} \newcommand{\hfill \\}{\hfill \\} \renewcommand{\thetheorem}{\arabic{section}.\arabic{theorem}} \def\sqr#1#2{{\vcenter{\vbox{\hrule height .#2pt \hbox{\vrule width .#2pt height#1pt \kern#1pt \vrule width .#2pt} \hrule height .#2pt}}}} \def\ncas #1 {\noindent {\bf Case #1.}\ } \def\bipart #1 #2{\bigskip \noindent {\bf #1} {\it #2}} \def\xbipart #1 #2{\noindent {\bf #1} {\it #2}} \def\iipart #1 #2{\bigskip \noindent {\it #1} {\it #2}} \def\xiipart #1 #2{\noindent {\it #1} {\it #2}} \def\brpart #1 #2{\bigskip \noindent {\bf #1} {#2}} \def\xbrpart #1 #2{\noindent {\bf #1} {#2}} \def\irpart #1 #2{\bigskip \noindent {\it #1} {#2}} \def\xirpart #1 #2{\noindent {\it #1} {#2}} \def\overline {\overline} \def\epsilon{\epsilon} \def{\rm{\ mod\;}}{{\rm{\ mod\;}}} \def\case #1{\bigskip\noindent{{\bf Case} {\em #1}:}} \def\subcase #1{\bigskip\noindent{{\bf Subcase} {\em #1}:}} \def\numcase #1 #2{\bigskip\noindent{{\bf Case #1} {\em #2}:}} \def\bigskip \noindent{\bf Claim: }{\bigskip \noindent{\bf Claim: }} \def\rem #1{\noindent{\bf Remark:} #1 \bigskip} \newcommand{\hfill $\spadesuit$}{\hfill $\spadesuit$} \newcommand{\smallskip\noindent{\bf Proof: 
}}{\smallskip\noindent{\bf Proof: }} \newcommand{\hfill $\Box $}{\hfill $\Box $} \def\obs #1 {\bigskip\noindent{\bf Observation #1: }} \iffalse Data Structures for Weighted Matching and Nearest Common Ancestors with Linking, and Extensions to $b$-matching and $f$-factors% \fi \begin{document} \title{A Data Structure for Nearest Common Ancestors with Linking% \thanks{A preliminary version of results in this paper appeared in {\em Proc.~1st Annual ACM-SIAM Symp.~on Disc.~Algorithms}, 1990 \cite{G90}.}} \author{Harold N.~Gabow% \thanks{Department of Computer Science, University of Colorado at Boulder, Boulder, Colorado 80309-0430, USA. Research supported in part by NSF Grant No. CCR-8815636. E-mail: {\tt hal@cs.colorado.edu}}} \date{December 18, 2014; revised November 18, 2016} \maketitle \input prelude \begin{abstract} Consider a forest that evolves via $link$ operations that make the root of one tree the child of a node in another tree. Intermixed with $link$ operations are $nca$ operations, which return the nearest common ancestor of two given nodes when such exists. This paper shows that a sequence of $m$ such $nca$ and $link$ operations on a forest of $n$ nodes can be processed on-line in time $O(m\alpha(m,n)+n)$. This was previously known only for a restricted type of $link$ operation. The special case where a $link$ only extends a tree by adding a new leaf occurs in Edmonds' algorithm for finding a maximum weight matching on a general graph. Incorporating our algorithm into the implementation of Edmonds' algorithm in \cite{G17} achieves time $O(n(m + n\ifmmode \,{ \rm log}\,\else{\it log }\fi n))$ for weighted matching, an arguably optimum asymptotic bound ($n$ and $m$ are the number of vertices and edges, respectively).
\end{abstract} \def0{0} \ifcase 0 \input introII \input nca \input multi \input alpha \or \input intro \input newed \input bnotes \input nca \input alpha \input bmatch \input code \input strong \fi \iffalse \input intro \input ed \fi \ifcase 1 \or \section*{Acknowledgments} The author thanks an anonymous referee for a careful reading and many suggestions. \section{Multi-level incremental-tree algorithms} \label{3.3Sec} This section begins by giving the details of the multi-level approach illustrated in Fig.\ref{EdLevelsFig}(b). Then it presents a 3-level algorithm to solve the incremental-tree nca problem in time $O(1)$ for $nca$ queries, total time $O(n)$ for {\it add\_leaf}$\,$\ and $add\_root$ operations, and space $O(n)$. That algorithm is used in the next section to construct our most general nca algorithm. It uses the multi-level structure presented in this section, with unbounded number of levels, and some changes that we note. \iffalse \begin{figure}[t] \centering \input{MultiLevel.pstex_t} \caption{Consecutive levels in multilevel data structure.} \label{MultiLevelFig} \end{figure} \fi \paragraph*{The framework} This section gives the high-level organization of an incremental nca algorithm with an arbitrary number of levels. We will only need 3 levels in the next section but the number of levels is unbounded in Section \ref{3.4Sec}. The terms {\it vertex} and {\it the incremental tree} $T$ refer to the objects of the given problem, e.g., an operation {\it add\_leaf}$\,$$(x,y)$ makes vertex $x$ the parent of vertex $y$ in $T$. A multi-level algorithm works on a number of levels designated $\ell =L,L-1,\ldots, 1$. The incremental tree $T$ is represented by a tree $T_\ell$ on every level (a small tree $T$ may have $T_\ell$ empty for levels $\ell$ less than some threshold). $T_L$ is $T$. Every other $T_\ell$ is a smaller tree derived from $T_{\ell+1}$ by deletions and contractions. 
Each $T_\ell$ is composed of {\em $\ell$-nodes} (called {\em nodes} if the level is clear). The algorithm maintains a partition of the nodes of $T_\ell$ into subtrees called {\em $\ell$-subtrees}. Each level is provided with given algorithms that solve the incremental problem in any $\ell$-subtree; the multi-level algorithm described in this section sews these given algorithms together to solve the incremental problem on the given tree $T$. Every level $\ell$ has an integral size parameter $\mu_\ell$.% \footnote{In Section \ref{3.4Sec} these size parameters are replaced by a notion of ``stage''.} Every $\ell$-subtree $S$ contains $\le\mu_\ell$ $\ell$-nodes. $S$ is {\it full} if equality holds. $\mu_1=n+1$, so $T_1$ is always nonfull (if it exists). For $L\ge \ell>1$, $T_{\ell-1}$ is formed from $T_{\ell}$ by discarding every nonfull $\ell$-subtree and contracting the full ones. The fact that any $T_\ell$ is a tree (i.e., not a forest) follows from this invariant: For every level $\ell$, the nonfull $\ell$-subtrees of $T_\ell$ are at its frontier, i.e., any node $x$ in a nonfull $\ell$-subtree $S$ has all its $T_\ell$-children in $S$. Efficiency of a multi-level algorithm is achieved using the shrinkage of the tree from level to level. Specifically, an $\ell$-node with $\ell<L$ contains $\Pi_{\ell+1}^L \mu_i$ vertices of $T$. So the number of $\ell$-nodes is \begin{equation} \label{NumEllNodes} |V(T_\ell)|\le n/\Pi_{\ell+1}^L \mu_i . \end{equation} We use this additional notation: Let $x$ be an $\ell$-node, $L\ge \ell\ge1$. \Px. denotes the $\ell$-subtree containing $x$. If $\Px.$ is full (in particular $\ell>1$) then $\fx.$ denotes the ($\ell-1$)-node that is the contraction of $\Px.$. If $\ell<L$ then $x$ is the contraction of an $(\ell+1)$-subtree $S$, and $\gx.$ denotes the root node of $S$. As before we write functions of nodes like $\pi(x)$, relying on context (i.e., the identity of argument $x$) to indicate which tree $T_\ell$ is referenced.
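The shrinkage bound \eqref{NumEllNodes} is simple arithmetic and can be sanity-checked directly. The following toy calculation (names are hypothetical, not part of the algorithm; the level parameters mimic the 3-level choice $\mu_3=\mu_2=\c{\ifmmode \,{ \rm log}\,\else{\it log }\fi n}$, $\mu_1=n+1$ made later for $n=1024$):

```python
from math import prod

def max_nodes_at_level(n, mus, ell):
    """Upper bound |V(T_ell)| <= n / prod(mu_i, i = ell+1 .. L),
    since an ell-node with ell < L is the contraction of that many
    vertices of T.  mus maps each level to mu_level; L = max(mus)."""
    L = max(mus)
    return n // prod(mus[i] for i in range(ell + 1, L + 1))

# Toy 3-level instance: mu_3 = mu_2 = ceil(log n) = 10 for n = 1024.
mus = {3: 10, 2: 10, 1: 1025}
assert max_nodes_at_level(1024, mus, 3) == 1024   # T_3 is T itself
assert max_nodes_at_level(1024, mus, 2) == 102    # at most n / log n
assert max_nodes_at_level(1024, mus, 1) == 10     # at most n / log^2 n
```

The point is only that each contraction divides the node count by the subtree size parameter of the level above, which is what drives the $O(m+n)$ totals below.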
Each $T_\ell$ uses this data structure: Let $x$ be an $\ell$-node. If $x$ is the root of its $\ell$-subtree it stores the subtree size $|V(\Px.)|$ ($1\le |V(\Px.)|\le \mu_\ell$). It also has pointers to $\gx.$ if $\ell<L$ and $\fx.$ if $\ell>1$. A nonroot $x$ has a pointer to its $\ell$-subtree root. Every $x$ has a parent pointer. The main routines for incremental trees are \[a(x,y,\ell),\ \wa(x,y,\ell),\ c(x,y,\ell),\ \widehat{\,c\,}(x,y,\ell). \] $a(x,y,\ell)$ is a recursive routine that performs the {\it add\_leaf}$\,$\ operation for $\ell$-node $x$ and new $\ell$-node $y$. It may call $a(\fx.,\fy.,\ell-1)$. For vertices $x,y$ in the given graph the operation ${\it add\_leaf}$\,$(x,y)$ is performed by $a(x,y,L)$. The $a$ routine makes use of $\wa$ to grow $\ell$-subtrees. Specifically for $\wa(x,y,\ell)$, $x$ is an $\ell$-node and $y$ a new $\ell$-node that is made a child of $x$ in the $\ell$-subtree $\Px.$. The $ca$ operation is organized similarly. It uses the recursive routine $c(x,y,\ell)$, which returns the characteristic ancestors of $\ell$-nodes $x$ and $y$, possibly invoking $c(\fx.,\fy.,\ell-1)$. For vertices $x,y$ in the given graph the operation $nca(x,y)$ is performed by $c(x,y,L)$. The $c$ routine makes use of $\widehat{\,c\,}$, which returns the characteristic ancestors of $\ell$-nodes $x$ and $y$ that belong to the same $\ell$-subtree $\wx.=\wy.$. We will extend these operations below to allow $add\_root$. Also looking ahead, in Section \ref{3.4Sec} the $nca$ routine will use $c$ and $\widehat{\,c\,}$ with some obvious modifications. The {\em link} routine will use the same overall structure as $a$ and $\wa$ for {\it add\_leaf}$\,$. \iffalse We begin by presenting the routine for $nca$'s and then describe {\it add\_leaf}$\,$'s. Both routines have a similar recursive organization. and $\widehat{\,c\,}(x,y,\ell)$. are The routines for the multi-level algorithm uses two routines for $nca$'s, $c(x,y,\ell)$ and $\widehat{\,c\,}(x,y,\ell)$. 
are \[ c(x,y,\ell),\ a(x,y,\ell),\ \widehat{\,c\,}(x,y,\ell), \wa(x,y,\ell). \] This section describes the first two routines ($c$ and $a$) treating the last two (\widehat{\,c\,} and \wa) as primitive operations. The next section gives the details of those primitives. Section \ref{3.4Sec} uses the $c$ algorithm described here The details of last two depend on the problem being solved. The strategy for $\widehat{\,c\,}(x,y,\ell)$ and $\wa(x,y,\ell)$ may vary with $\ell$. \fi Now we describe the two recursive algorithms starting with $a(x,y,\ell)$: {\narrower {\parindent=0pt \case {$\Px.$ is full} Make $x$ the parent of node $y$, and make $y$ a singleton $\ell$-subtree. \case {$\Px.$ is nonfull} Execute $\wa(x,y,\ell)$. If $\Px.$ is still not full we are done but suppose $\Px.$ has become full. Create a new $(\ell-1)$-node $z$. Make $z$ the node $\fx.$. Now there are two subcases: \subcase {$\Px.=T_\ell$} Make $z$ the unique $(\ell-1)$-node, as well as a singleton $(\ell-1)$-subtree. \subcase {$\Px.\ne T_\ell$} $w=\pi(\mathy{\rho}(\Px.))$ is in a full $\ell$-subtree. Execute $a(\fz w.,z,\ell-1)$ to add $z$ as a new $(\ell-1)$-leaf. }} \bigskip The {\it add\_leaf}$\,$\ algorithm preserves the defining properties of $T_\ell$ trees and so is correct. The total time for all {\it add\_leaf}$\,$\ operations is dominated by the time used by $\wa$ to build all the $\ell$-subtrees, $L\ge \ell\ge 1$. (This includes the time to create a new singleton subtree.) We turn to the $c$ routine. The high-level strategy is simple -- use $c(\fx.,\fy.,\ell-1)$ to find the $\ell$-subtree $S$ containing $nca(x,y)$, and then use the $\widehat{\,c\,}$ routine in $S$ to find the desired level $\ell$ characteristic ancestors $(c,c_x,c_y)$. There are various special cases, depending on whether or not \wx. and \wy. are full, whether or not $S$ contains $c_x$ or $c_y$, etc. The details of $c(x,y,\ell)$ are as follows. 
{\narrower {\parindent=0pt \case {$\Px.$ and $\Py.$ are both full} Fig.\ref{MultiAlg} gives pseudocode for this case. It handles special cases such as $x=y$ or $\wx.=\wy.$. \begin{figure} { \def{\bf for }{{\bf for }} \def{\bf to }{{\bf to }} \def{\bf while }{{\bf while }} \def{\bf do }{{\bf do }} \def{\bf if }{{\bf if }} \def\Then{{\bf then }} \def\Return{{\bf return }} \def\g #1.{\mathy{\overleftarrow{#1}}} \def\myif #1 #2 {{\bf if} #1 {\bf then} #2} \def\myifel #1 #2 #3 {{\bf if} #1 {\bf then} #2 {\bf else} #3} \parindent=40pt $(a,a_x,a_y) \gets c(\fx.,\fy.,\ell-1)$ \myif {$a_x\ne a$} {$x \gets \pi(\gz {a_x}.)$; } \myif {$a_y\ne a$} {$y \gets \pi(\gz {a_y}.)$} {$(b,b_x,b_y)\gets \widehat{\,c\,}(x,y,\ell)$} \myif {$b_x=b$ and $a_x\ne a$} {$b_x\gets \gz {a_x}.$; } \myif {$b_y=b$ and $a_y\ne a$} {$b_y\gets \gz {a_y}.$} \Return {$(b,b_x,b_y)$} } \caption{Procedure for $c(x,y,\ell)$ when $\wx.$ and $\wy.$ are full.} \label{MultiAlg} \end{figure} \case {One or both of $\Px.,\Py.$ is nonfull} If $\wx.=\wy.$ we use $\widehat{\,c\,}(x,y,\ell)$ directly. (This includes the special case where there are no $(\ell-1)$-nodes.) Assuming $\Px.\ne \Py.$, when $\wx.$ is nonfull we replace $x$ by $\pi(\mathy{\rho}(\Px.))$, and similarly for $y$. We then execute the code of Fig.\ref{MultiAlg}. If the returned $b_x$ is the replacement for $x$ we change $b_x$ to $\mathy{\rho}(\Px.)$, and similarly for $y$. }} \bigskip The analysis of this algorithm is similar to {\it add\_leaf}$\,$: Correctness follows from the defining properties of $T_\ell$ trees. The time for an operation $nca(x,y)$ is dominated by the time used by the routines $\widehat{\,c\,}(x,y,\ell)$, $L\ge \ell\ge 1$. We extend the routines to allow $add\_root$ similar to the extension for Corollary \ref{LogSquaredIncNCACor}, as follows. $add\_root$ is still implemented by \eqref{AddRootEqn}, where now $\varrho$ is a pointer to a node in the tree $T_L$. 
\iffalse For general $T_\ell$, $\ell>1$, assuming $T_{\ell-1}$ exists its root pointer is $\fz r.$ for $r$ equal to $\varrho$ if $\widehat \varrho$ is full else $\pi(\rho(\widehat \varrho))$. \fi The routine for $nca(x,y)$, instead of immediately calling $c(x,y,L)$, is modified to use Lemma \ref{RootedCalemma}($ii$) as before. Specifically it calls $c(x,y,L),c(x,\varrho,L)$, and $c(y,\varrho,L)$, and chooses $nca(x,y)$ according to the lemma. \iffalse We extend the routines to allow $add\_root$ as follows. Each tree $T_\ell$ maintains a pointer $\varrho$ to its root as updated by $add\_root$ operations. For $T_L$ $\varrho$ is the vertex $y$ in the last operation $add\_root(y)$. For general $T_\ell$, $\ell>1$, assuming $T_{\ell-1}$ exists its root pointer is $\fz r.$ for $r$ equal to $\varrho$ if $\widehat \varrho$ is full else $\pi(\rho(\widehat \varrho))$. In Fig.\ref{MultiAlg} the routine $\widehat{\,c\,}(x,y,\ell)$ is implemented using Lemma \ref{RootedCalemma}($ii$) (so three calls $\widehat{\,c\,}(x,y,\ell),\widehat{\,c\,}(x,\varrho,\ell),\widehat{\,c\,}(y,\varrho,\ell)$ are made). \fi \paragraph*{Linear-time incremental trees} Take $L=3$ levels with \begin{equation} \label{muDefEqn} \mu_3= \mu_2= \c{\ifmmode \,{ \rm log}\,\else{\it log }\fi n},\ \mu_1=n+1. \end{equation} Level 1 uses the incremental-tree algorithm of Section \ref{3.1Sec}. It uses $O(m+n)$ time and $O(n)$ space. This follows since \eqref{NumEllNodes} shows there are $\le {n\over \ifmmode \,{ \rm log}\,\else{\it log }\fi^2 n }$ 1-nodes. Thus Corollary \ref{LogSquaredIncNCACor} shows the time on level 1 is $O(m+{ n\ifmmode \,{ \rm log}\,\else{\it log }\fi^2 n \over \ifmmode \,{ \rm log}\,\else{\it log }\fi^2 n } )=O(m+n)$. The space in level $1$ is $O( { n\ifmmode \,{ \rm log}\,\else{\it log }\fi n \over \ifmmode \,{ \rm log}\,\else{\it log }\fi^2 n } )=O(n)$. Levels 3 and 2 both use the bitstring data structure of the previous section.
For level 2 we must extend the $nca$ algorithm to compute all characteristic ancestors $ca$. We precompute a table that gives least-significant bits. Specifically for any bitstring $b\ne 0$ of $\ifmmode \,{ \rm log}\,\else{\it log }\fi n$ bits, $lsb[b]$ is the index of the least significant bit of $b$. The operation $ca(x,y)$ is implemented as \[ {\bf if\ } nca(x,y)=x {\bf\ then\ } c_x=x {\bf \ else\ } c_x = v\,[lsb\,(anc[x]\land \neg anc[y])]. \] This discussion shows that levels 3 and 2 use $O(m+n)$ time. The space is kept linear, $O(n)$, by using the doubling strategy of Lemma \ref{SpaceDoublingLemma} for the $v$ tables of nonfull $\ell$-subtrees. This completes the 3-level incremental tree algorithm. \begin{theorem} \label{3.1Thm} The incremental-tree nearest common ancestors problem with {\it add\_leaf}$\,$, $add\_root$ and $ca$ operations can be solved in $O(m+n)$ time and $O(n)$ space.\hfill$\Box$ \end{theorem} As in Corollary \ref{LogSquaredIncNCACor} the theorem does not require $n$ (the number of {\it add\_leaf}$\,$\ and $add\_root$ operations) to be known in advance. To achieve this first consider \eqref{muDefEqn} defining the $\mu_i$. One approach is to update these values every time $n$ doubles. Instead we will simply interpret $n$ in \eqref{muDefEqn} to be $N$, the maximum integer that can be represented in the RAM. So $\mu_3=\mu_2=\ifmmode \,{ \rm log}\,\else{\it log }\fi N$ is the number of bits in a RAM word. The timing estimates are unchanged. For instance the time in level 1 is $O(m+{ n\ifmmode \,{ \rm log}\,\else{\it log }\fi^2 n \over \ifmmode \,{ \rm log}\,\else{\it log }\fi^2 N } )=O(m+n)$. The space in level $1$ is $O( { n\ifmmode \,{ \rm log}\,\else{\it log }\fi n \over \ifmmode \,{ \rm log}\,\else{\it log }\fi^2 N } )=O(n)$. The space for all three levels is maintained in one array $S$, using Lemma \ref{SpaceDoublingLemma}. 
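The $lsb$ computation above can be illustrated with a toy sketch (hypothetical names; Python integers stand in for the $\ifmmode \,{ \rm log}\,\else{\it log }\fi n$-bit words, and the class is an illustration, not the paper's packed data structure). It assumes, as in the microset scheme, that the $i$th vertex added receives identifier $i$, so identifiers strictly increase along any root-to-leaf path; then the most significant common bit of the two ancestor masks is the nca, and the least significant bit of $anc[x]\land \neg anc[y]$ is $c_x$, the child of the nca on $x$'s path:

```python
def lsb(b):
    # index of the least significant set bit of b (b != 0)
    return (b & -b).bit_length() - 1

class IncTree:
    """Toy incremental tree: anc[i] is a bitmask of the identifiers of
    vertex i's ancestors (i included).  Identifiers are assigned in
    insertion order, so they increase along root-to-leaf paths."""

    def __init__(self):
        self.v = []     # v[i] = vertex with identifier i
        self.anc = []   # anc[i] = ancestor bitmask of vertex i

    def add_leaf(self, parent):
        # the first call creates the root; 'parent' is then ignored
        i = len(self.v)
        self.v.append(i)
        self.anc.append((self.anc[parent] if i else 0) | (1 << i))
        return i

    def ca(self, x, y):
        """Characteristic ancestors (c, c_x, c_y) of x and y."""
        common = self.anc[x] & self.anc[y]
        c = common.bit_length() - 1          # msb = deepest common ancestor
        cx = x if c == x else lsb(self.anc[x] & ~self.anc[y])
        cy = y if c == y else lsb(self.anc[y] & ~self.anc[x])
        return c, cx, cy
```

For instance, adding a root $0$, children $1,2$ of the root, and children $3,4$ of vertex $1$, `ca(3, 2)` returns $(0, 1, 2)$: the nca is the root, and the characteristic children are $1$ and $2$.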
\iffalse OLD VERSION, STILL CORRECT \paragraph*{Linear-time incremental trees} Take $L=3$ levels with \[\mu_3= \c{(\il 2 n)^2},\ \mu_2= \c{\ifmmode \,{ \rm log}\,\else{\it log }\fi^2 n},\ \mu_1=n+1. \] Levels 2 and 1 use the incremental-tree algorithm of Section \ref{3.1Sec}. Both levels use $O(m+n)$ time and $O(n)$ space. To prove this, Lemma \ref{3.5Lemma} shows that for $\ell<3$, a $\mu_\ell$-tree containing $k$ $\ell$-nodes processes $p$ {\it ca} operations in $O(p+k \ifmmode \,{ \rm log}\,\else{\it log }\fi^2\mu_\ell)$ time. In general an $\ell$-node with $\ell<L$ contains $\prod_{i=\ell+1}^L \mu_i$ $\Prod_{i=\ell+1}^L \mu_i$ $\prod_{\ell+1}^L \mu_i$ $\pi_{\ell+1}^L \mu_i$ $\Pi_{\ell+1}^L \mu_i$ vertices of $T$. So there are $\le {n\over \mu_{\ell+1} }$ $\ell$-nodes. Thus for $\ell<3$ the total time on level $\ell$ is $O(m+ { n\ifmmode \,{ \rm log}\,\else{\it log }\fi^2\mu_\ell \over \mu_{\ell+1} } )$. For $\ell=1,2$ we have ${ \ifmmode \,{ \rm log}\,\else{\it log }\fi^2\mu_\ell \over \mu_{\ell+1} } =O(1)$. Thus levels 1 and 2 use $O(m+n)$ time. A similar argument using Lemma \ref{3.5Lemma} shows that the space for level $\ell=1,2$ is $O( { n\ifmmode \,{ \rm log}\,\else{\it log }\fi \mu_\ell \over \mu_{\ell+1} } )=O(n)$. The level 3 algorithms $\wa (x,y,3)$ and $\widehat{\,c\,}(x,y,3)$, for $x$ and $y$ vertices in the same $\mu_3$-tree, are based on the microset technique of \cite{GT85}. Consider a fixed $\mu_3$-tree $M$. Each vertex $x$ of $M$ has an identifier $id[x]$, an integer between 1 and $\mu_3$. The $i$th vertex added to $M$ is assigned the identifier $i$. $M$ has an associated array $v[1..\mu_3]$ that translates identifiers to their corresponding vertex, i.e., $v[i]$ specifies the vertex of $M$ whose identifier is $i$. Each vertex $x\in M$ has an {\it ancestor list} denoted $ancestor[x]$. This is the sequence of identifiers of its ancestors in $M$, starting with the root and descending to $x$. 
Since a vertex has $\le \mu_3$ ancestors, an ancestor list can be represented in $O(\mu_3 \ifmmode \,{ \rm log}\,\else{\it log }\fi \mu_3) = O((\il 2 n)^3)$ bits. Since we assume a random access machine with a word size of $\ifmmode \,{ \rm log}\,\else{\it log }\fi n$ bits, $ancestor[x]$ fits in $O(1)$ words. Store each ancestor list left-justified in its word, the rest of the word filled out with 0's, and each identifier written as a string of precisely $\f{\ifmmode \,{ \rm log}\,\else{\it log }\fi \mu_3}+1$ bits. The algorithm for $\wa(x,y,3)$ constructs $ancestor[y]$ by adding $y$'s identifier in the appropriate position to $ancestor[x]$. This can be done in $O(1)$ time using arithmetic operations. In addition the algorithm maintains the $v$ array: It adds an entry for $y$. We use a doubling strategy to keep the total space linear: When $M$ grows to $2^i$ vertices, allocate a new $v$ array of size $2^{i+1}$, copying the current $v$ array into the first $2^i$ entries. The algorithm for $\widehat{\,c\,}(x,y,3)$ finds the characteristic ancestors $(c,c_x,c_y)$ as follows. Form the boolean {\it exclusive or} of $ancestor[x]$ and $ancestor[y]$. Since $x$ and $y$ both descend from the root of $M$, it begins with one or more bit fields that are identically 0. Extract the id that occurs in the last of these fields (from $ancestor[x]$ or $ancestor[y]$) and call it $c\_id$. Then $c$ is the vertex $v[c\_id]$. If $c=x$ then $c_x=x$, otherwise $c_x$ is found in a similar way from the bit field following $c\_id$ in $ancestor[x]$. $c_y$ is similar. \iffalse The most significant bit of the result occurs within the bit field that stores $c_x$ in $ancestor[x]$ and $c_y$ in $ancestor[y]$. Consider $x$. If the bit field is the field following the id of $x$ then The fields for $a_x$ is extracted, call it $ax\_id$, and $v[ax\_id]$ gives $a_x$. $a_y$ is similar., and $nca(x,y)$ is $\pi(a_x)$. 
\fi All boolean operations needed -- appending an id for $\wa$, and for $\widehat{\,c\,}$ performing {\it exclusive or}, finding the most significant bit, and extracting the desired fields, can be done in $O(1)$ time by table look-up. The appropriate tables are generated in $O(n)$ time. A more detailed discussion of similar algorithms involving table look-up can be found in \cite{AHU, GT85}. \fi \iffalse THIS IS THE BEST WRITEUP OF THE 2 LEVEL ALGORITHM We close this section by sketching a simpler version of our algorithm that has the same time bound but applies only to the problem of finding nearest common ancestors in static trees. The data structure uses 2 levels instead of 3. The algorithm seems to be simpler than the static tree algorithm of \cite{HT}, which has the same asymptotic efficiency but again uses 3 levels (called ``plies''). Take $\mu_2= \c{{ \ifmmode \,{ \rm log}\,\else{\it log }\fi n \over 4 }}$. Construct the 1- and 2-trees recursively as follows: Let $S$ be the given static tree. If $|V(S)|<\mu_2$ then make $S$ a 2-prenode. Otherwise let $S_0$ be a subtree containing $\mathy{\rho}(S)$ and having $\mu_2$ nodes; make $S_0$ a 2-prenode and a 1-node, and process the trees of forest $S-S_0$ recursively. The unique 1-prenode has $O(n/\ifmmode \,{ \rm log}\,\else{\it log }\fi n)$ nodes. Use the incremental-tree algorithm of Section \ref{3.1Sec} on it. Lemma \ref{3.3Lemma} shows the preprocessing time and space on level 1 is $O(n)$. \def\lp.{{\tt (}}\def\rp.{{\tt )}} Level 2 uses a microset data structure based on the Euler tour technique of \cite{TV}. Represent a prenode $P$ of $b$ nodes by a string $\beta$ of balanced parentheses of length $2b-2$. $\beta$ represents a depth-first traversal of $P$ -- \lp. corresponds to when the search descends along an edge, \rp. corresponds to when the search ascends the edge, having explored its subtree. 
Each node $x\in V(P)$ has a canonical representation as the string $\beta_x$, the shortest prefix of $\beta$ that leads to it. $nca(x,y)$ is found as follows: Wlog $x\ne y$ and $\beta_y$ is longer than $\beta_x$. Let $\beta_{y,x}$ be the suffix of $\beta_y$ following $\beta_x$. Let $\beta_{y,x,\lp.}$ be the shortest prefix of $\beta_{y,x}$ that ends with an unmatched \lp.. This string models the path taken by the dfs as it moves from $x$ to the first vertex that is an ancestor of $y$ but not $x$. So $nca(x,y)$ corresponds to the penultimate vertex on this path. The algorithm can be implemented to use $O(1)$ time by table lookup. We sketch the ideas. The crucial step is the computation of $\beta_{y,x,\lp.}$ from $\beta_{y,x}$. Our tables will have entries for every possible bitstring, even though some strings do not have the form we are interested in. Note that $\beta_{y,x}$ is a concatenation $\gamma\,\lp.\,\delta$ where $\gamma$ ($\delta$) is a concatenation of 0 or more strings, each each of which is either an \rp. or a string of balanced parentheses. Call a string of this form {\em well-formed}. Our tables will have entries for every possible bitstring, but the algorithm will only access the entries for well-formed strings. For any string of parentheses $\sigma$ define $\Delta(\sigma)$ to be the number of \lp.'s minus the number of \rp.'s. Also define $\Delta^*(\sigma)$ to be $\max \set{\Delta(\tau)} {\tau \text{ a suffix of }\sigma}$. Then $\sigma$ begins with an unmatched \lp. iff $\Delta(\sigma) = \Delta^*(\sigma') +1$ for $\sigma'$ equal to $\sigma$ with its leading symbol deleted. Then for $\sigma$ isa well-formed string, $\beta_{y,x,\lp.}$ is from write $\beta_{y,x}=\gamma \lp. \delta$. where $\lp. \delta$ is the shortest suffix of $\beta_{y,x}$ having $\Delta^*(\beta_{y,x})=\Delta^*(\lp. \delta)$. 
If $b$ is a bitstring with most significant bit $2^k$, then it is an easy matte to compute $\Delta[b]$ and $\Delta^*[b]$ and $i[b]$ from the $2^{k-1}$ bitof $b$ and $i[b-2^k]$. $\Delta[c]$ $\Delta^*[c]$ \fi \iffalse We use 2 bits to encode a parenthesis, representing ``(`` and ``)'' by 10 and 11 respectively. For a tree $P$ with $\le \mu_2$ nodes, $\beta$ fits into one word. It is not hard to design a set of tables that can be precomputed in $O(n)$ time, such that for any prenode $P$, $\widehat{\,c\,}(x,y)$ can be found in $O(1)$ time from $\beta_x$ and $\beta_y$. This gives a 2-level, static tree algorithm that uses $O(n)$ preprocessing time, $O(n)$ space, and performs a $ca$ query in $O(1)$ time. The ideas for a set of tables to implmenet this agoithm The computation of \delta from \gamma can be implemented using the following table. For every string $b$ of $\ifmmode \,{ \rm log}\,\else{\it log }\fi n$ bits define 2 values, r[b] the number of unmatched ``)``'s preceding the first unmatched ``('' r[b]\ge 0 r[b] =\epsilon if this ( does not exist i[b]= the index of the vertex corresponding to the ( in \gamma the index of the ( in \beta is |\beta_x|+i[b] evev array gives the vertex being visited at the ithste pof the dfs A prenode nonfull prenode can use the bitstrign obtained by padding the defs with visits a number ofto children of the root following all the real children. \fi \iffalse In the next section uses a number of incremental tree data structures simultaneously, each growing to sizes not known in advance. The same doubling strategy keeps the total storage linear. We make this precise in the following lemma. 
Consider a collection of arrays $A_i[1..n_i], i=1,\ldots, k$ that is manipulated by some process, which also uses these two operations: \bigskip $add\_array$ -- $k\gets k+1,\ n_k\gets 1$, open a new array $A_k[1..1]$ $add\_entry(i)$ -- $n_i\gets n_i+1$, expand $A_i$ by one entry \bigskip The $add\_entry$ operation must preserve all values in $A_i$ (except for the new entry). At any point in time let $n$ denote $\sum_i n_i$. \begin{lemma} \label{SpaceDoublingLemma} Any sequence of $add\_array$ and $add\_entry$ operations can be processed using additional time $O(n)$ and storing all arrays $A_i$ in an array $S[1..4n]$. \end{lemma} \begin{proof} We will maintain values $\eta_i, i=1,\ldots,k$ with $\eta_i\le n_i$, and $\eta=\sum_i \eta_i$. The arrays are stored in $S[1..4\eta]$. The current version of $A_i$ is an array $A_i[1..2\eta_i]$. Assuming this holds, the $add$ operations are implemented as follows. $add\_entry$ Let $n_i$ denote its value before it is incremented for the new entry. If $n_i<2\eta_i$ the space for the new entry is available. Suppose the opposite, i.e., the current $A_i$ is full. Define $\eta'_i=2\eta_i=n_i$. Allocate a new array $A_i[1..2\eta'_i]$ by expanding $S$ with $2\eta'_i$ new entries. Copy the current $n_i$ entries of $A_i$ into the new space. This leaves $\eta'_i$ vacant entries in the new space. The time for this procedure is $O(\eta'_i)$. Charge this to the $\eta'_i/2$ entries that have been added to $A_i$ since the last reallocation. Thus the total time is linear as claimed in the lemma. The total space has changed from $\eta=\sum_i\eta_i$ to $\eta+ increased to $4\sum_j\eta_j +2\eta'_i$. Since $\eta'_i=2\eta_i$ this equals $4\sum_i\eta'_i$. $4\eta'$. Since we still have $n_i\ge \eta'_i$, $n\ge \eta'$. \end{proof} $add\_array$ is easily implemented by setting increasing $\eta_k=1$ and allocating 2 cells at the end of S. 
\fi \section{Fat preorder for dynamic trees} \label{3.1Sec} This section introduces the basic idea for dynamic trees, the fat preorder numbering that generalizes preorder numbering of trees. It starts with an algorithm to find nearest common ancestors on a tree that is given in advance. Then it extends that algorithm to solve the incremental-tree nca problem in $O(m+n\ifmmode \,{ \rm log}\,\else{\it log }\fi^2n)$ time. Our main auxiliary tree is the compressed tree, so we begin by reviewing the basic definitions \cite{HT, T79}. Let $T$ be a tree with root $\mathy{\rho}(T)$. The {\it size} $s(v)$ of a node $v$ is the number of its descendants. (As usual a node is a descendant of itself.) A child $w$ of $v$ is {\it light} if $s(w)\le s(v)/2$; otherwise it is {\it heavy}. Deleting each edge from a light child to its parent partitions the nodes of $T$ into paths of nonnegative length, called the {\it heavy paths of $T$}. The highest node on a heavy path is the {\em apex} of the path; since each node has at most one heavy child, these pieces are indeed paths. (So an isolated node is considered a heavy path, and a heavy path has only one node at each depth.) A node is an {\it apex} if it is not a heavy child (e.g., $\mathy{\rho}(T)$). Equivalently an apex is the highest node on its heavy path. To generalize this consider an arbitrary partition of $V(T)$ into a family \P. of disjoint paths of nonnegative length. Length 0 paths are allowed, and we require that each path has only one node at each depth. Call the highest node on each path its apex. The {\it compressed tree for $T$ and $\cal P$}, $C(T,{\cal P} )$, has nodes $V(T)$; its root is $\mathy{\rho}(T)$ and the parent of a node $v\ne \mathy{\rho}(T)$ is the first proper ancestor of $v$ that is an apex. Any apex $v$ has the same descendants in $C(T,{\cal P} )$ and $T$, so in particular $s_{C(T,{\cal P} )}(v)=s_T(v)$. When $\cal P$ consists of the heavy paths of $T$ we call $C(T,{\cal P} )$ the {\it compressed tree \(for $T$\)}, denoted $C(T)$.
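A small sketch may help fix these definitions. Given parent pointers (assuming, for this sketch only, that vertices are numbered so a parent precedes its children), the following computes sizes bottom-up, marks the apexes, and links each node to its first proper apex ancestor, which is its parent in $C(T)$.

```python
def compressed_tree(parent):
    """Apex flags and C(T) parents from parent pointers.
    Assumes parent[v] < v for v > 0, and parent[0] is None (root 0)."""
    n = len(parent)
    size = [1] * n
    for v in range(n - 1, 0, -1):        # accumulate subtree sizes, leaves upward
        size[parent[v]] += size[v]
    # v is an apex iff it is not a heavy child of its parent
    apex = [v == 0 or 2 * size[v] <= size[parent[v]] for v in range(n)]
    cparent = [None] * n                 # parent in C(T): first proper apex ancestor
    for v in range(1, n):
        p = parent[v]
        cparent[v] = p if apex[p] else cparent[p]
    return apex, cparent
```

On a path the compressed tree flattens to height 1, while a complete binary tree is its own compressed tree, matching the extreme examples mentioned next.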
As extreme examples $C(T)$ is $T$ if $T$ is a complete binary tree, and $C(T)$ has height 1 if $T$ is a path. Let $T$ be an arbitrary tree and let $C$ be its compressed tree $C(T)$. $C$ has height $\le \f{\ifmmode \,{ \rm log}\,\else{\it log }\fi n}$. This follows from the fact that in $C$, the parent $v$ of node $w$ has $s_C(v)\ge 2s_C(w).$ (This is clear if $w$ is a light child in $T$. If $w$ is a heavy child then $s_C(w)=1$ and $s_C(v)\ge 2$.) \iffalse see below that for any nodes $x,y\in V(T)$, $nca_T(x,y)$ is easily found from the characteristic ancestors $ca_C(x,y)$. But we will start by concentrating just on $nca_C(x,y)$. \fi For any nodes $x,y$ in the given tree $T$, we compute $nca_T(x,y)$ by starting with the corresponding node $nca_C(x,y)$ in the compressed tree $C$. \cite{HT} computes $nca_C(x,y)$ in $O(1)$ time by embedding $C$ in a complete binary tree $B$; $ncas$ in $B$ are calculated using the binary expansion of the inorder numbers of the nodes. \cite{SV} uses a similar approach. We now present a different strategy. It seems to give simpler algorithms (see Section \ref{3.2Sec}). \begin{figure}[t] \centering \input{FPreorder.pstex_t} \caption{Fat preorder interval for vertex $v$. The two empty intervals guard the preorder numbers of the vertices of $T_v$, which are in $[p(v),q(v))$.} \label{FPreorderFig} \end{figure} Our main tool, the fat preordering, is defined for a tree $C$ that again generalizes the compressed tree. Choose a real-valued constant $\beta>1$ and integers $e>1$, $c>2$ such that \begin{equation} \label{2Eqn} {2\over \beta^{e-1}-1}\le c-2\le \beta^e. \end{equation} Note the left inequality is satisfied when $\beta^{e-1}\ge 2$ and $c\ge 4$, so a convenient choice is $e=\beta=2$ and $c=4$. Let $\sigma:V(C)\to [1..n]$ \footnote{Here $n$ is not necessarily equal to $|V(C)|$.
Eventually we will have $n$ either equal to $|V(C)|$ with $\sigma=s_C, \beta=2$ or $n\le |V(C)|$ and $\sigma$ a variant of $s_C$.} be such that every node $v$ with child $w$ satisfies \begin{equation} \label{1Eqn} \sigma(v) \ge \beta \sigma(w). \end{equation} For functions $p,q,\overline p,\overline q: V(C) \to \mathbb{Z}_+$, $p$ is a {\it fat preorder numbering of $C$} if for any node $v$, as illustrated in Fig.\ref{FPreorderFig}, \iffalse an injective function $p:V(C)\to \mathbb{Z_+}$ together with functions :V(C)\to \mathbb{Z_+}$ such that for any node $v$, Let $\sigma:V(C)\to [0..n]$ be a function satisfying (\ref{1Eqn}) for $s=\sigma$. A {\it fat preorder numbering of $C$} is an injective function $p:V(C)\to \mathbb{Z_+}$ together with functions $q,\overline p,\overline q:V(C)\to \mathbb{Z_+}$ such that ====================================== Let $\sigma,p,q,\overline p,\overline q$ be functions from $V(C)$ into $[0..n]$ such that (\ref{1Eqn}) is satisfied for $s=\sigma$. \fi \bigskip ($i$) the descendants of $v$ in $C$ are the nodes $w$ with $p(w)\in [p(v),q(v) )$; ($ii$) no node $w$ has $p(w)\in [\overline p(v),p(v))\cup [ q(v),\overline q(v) )$; ($iii$) $\overline q(v)-\overline p(v)= c\sigma(v)^e$ and $p(v)- \overline p(v),\ \overline q(v)-q(v)= \sigma(v)^{e}$. \bigskip \noindent Note that ($i$) is equivalent to $p$ being a preorder numbering. Also the definition allows ``guarding intervals'' to overlap, e.g., we may have $\overline q(w)\in [p(v),q(v))$ for $w$ not descending from $v$. However our algorithms will maintain the intervals $[\overline p(v),\overline q(v))$ as a laminar family. Without loss of generality we can assume $\overline p (\mathy{\rho}(C))=0$, so all $p$-numbers are in $[0,cn^e)$. The fat preorders that we construct take $ \sigma$ to be the size function $s_C$ (for static trees) or a close variant of $s_C$ (for dynamic trees).
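As a sanity check on the definition, the following hypothetical helper verifies properties ($i$)--($iii$) for a candidate numbering (a sketch only; it assumes vertices are indexed with parents before children, and all names are invented here).

```python
def check_fat_preorder(parent, sigma, p, q, pbar, qbar, c=4, e=2):
    """Verify properties (i)-(iii) of a fat preorder numbering."""
    n = len(parent)
    desc = [{v} for v in range(n)]
    for v in range(n - 1, 0, -1):          # assumes parent[v] < v
        desc[parent[v]] |= desc[v]
    for v in range(n):
        # (i): descendants of v are exactly the w with p(w) in [p(v), q(v))
        assert {w for w in range(n) if p[v] <= p[w] < q[v]} == desc[v]
        # (ii): the two guarding intervals contain no p-number
        assert all(not (pbar[v] <= p[w] < p[v] or q[v] <= p[w] < qbar[v])
                   for w in range(n))
        # (iii): total length c*sigma^e, with guards of length sigma^e each
        assert qbar[v] - pbar[v] == c * sigma[v] ** e
        assert p[v] - pbar[v] == sigma[v] ** e == qbar[v] - q[v]
    return True
```

For instance, with $e=\beta=2$, $c=4$, a root of size 4 with three leaf children can be numbered $\overline p,p,q,\overline q = 0,16,48,64$ for the root and $17,18,20,21$ (and the two translates by 4 and 8) for the children; the checker accepts this numbering.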
Given a fat preordering, the following high-level algorithm returns $nca_C(x,y)$: \bigskip {\narrower {\parindent=0pt Let $a$ be the first ancestor of $x$ that has $ (c-2) \sigma(a)^{e}>|p(x)-p(y)|$. If $a$ is an ancestor of $y$ (i.e., $p(a)\le p(y)< q(a)$) then return $a$ else return $\pi_C(a)$ (the parent of $a$). } } \begin{lemma} \label{3.1Lemma} The {\em nca}$_{\rm C}$ algorithm is correct. \end{lemma} \begin{proof} We first show that any common ancestor $b$ of $x$ and $y$ satisfies \[ (c-2) \sigma(b)^{e}>|p(x)-p(y)|.\] By ($i$) the interval $ [p(b),q(b) )$ contains both $p(x)$ and $p(y)$ so its length is at least $1+|p(x)-p(y)|$. By ($ii$) its length is $ \le (c-2) \sigma(b)^{e}$. The above inequality follows. Now to prove the lemma we need only show that $a'=\pi_C(a)$ is a common ancestor. (Clearly we can assume $a$ is not the root.) This amounts to showing $a'$ is an ancestor of $y$. By ($i$) -- ($iii$) a nondescendant of $a'$ and a descendant of $a'$ differ in $p$-number by more than $\sigma(a')^e$. Using (\ref{1Eqn}) and the right inequality of (\ref{2Eqn}), $\sigma(a')^e\ge \beta^e \sigma(a)^e\ge (c-2) \sigma(a)^{e}>|p(x)-p(y)|$. Since $x$ descends from $a'$, so must $y$. \end{proof} We implement the high-level algorithm using the following data structure. Each vertex $x$ stores an {\it ancestor table}, $ancestor_x[0..\f{\ifmmode \,{ \rm log}_\beta \else{\it log XX }\fi cn^e}]$, where \[ancestor_x[i] \text{ is the last ancestor $b$ of $x$ that has $(c-2) \sigma(b)^{e}< \beta^i$.}\] If no such ancestor exists, i.e., $(c-2) \sigma(x)^{e}\ge \beta^i$, then the entry is $\epsilon$. Before implementing the high-level algorithm we note that computing $nca$s using the compressed tree (and other auxiliary trees that we shall see) requires more than just the $nca$ node. Fix any tree and consider nodes $x,y$. Let $a=nca(x,y)$.
For $z=x,y$, let $a_z$ be the ancestor of $z$ immediately preceding $a$; if $a=z$ then take $a_z=a$. Define $ca(x,y)$, the {\it characteristic ancestors of $x$ and $y$}, as the ordered triplet $(a,a_x,a_y)$. Our $nca$ algorithms actually compute $ca$. We now implement the high-level $nca$ algorithm in $C$, in fact finding all the characteristic ancestors, in $O(1)$ time. We seek $nca_C(x,y)$ and the characteristic ancestor $a_x$; $a_y$ is symmetric. \bigskip It suffices to find the ancestor $a$ of the $nca_C$ algorithm, plus the ancestor of $x$ that precedes $a$ if $a\ne x$. (If $a=x$ then either $a= nca_C(x,y)=x$ giving $a_x=x$, or $nca_C(x,y)=\pi_C(a)$ again giving $a_x=a=x$.) Let $i= \f{\ifmmode \,{ \rm log}_\beta \else{\it log XX }\fi |p(x)-p(y)| }$. Clearly the first ancestor $w$ of $x$ that has \[(c-2) \sigma(w)^{e}\ge \beta^i\] is either $a$ or a descendant of $a$ (possibly $w=a$). Find $w$ as follows. Let $v= ancestor_x[i]$. $v\ne\mathy{\rho}(C)$ since $(c-2)\sigma(\mathy{\rho}(C))^e > |p(x)-p(y)|\ge \beta^i$, where the first inequality holds for arbitrary $x,y\in V$. If $v\ne \epsilon$ then $w=\pi(v)$. If $v= \epsilon$ then $w=x$. First suppose $w\ne\mathy{\rho}(C)$. For $w'=\pi(w)$, (\ref{1Eqn}) and $e\ge 1$ show $(c-2) \sigma(w')^{e}\ge \beta^{i+1}$. Since $\beta^{i+1}> |p(x)-p(y)| $, the desired $a$ is either $w$ or $w'$. This also holds if $w=\mathy{\rho}(C)$, since that makes $a=w$. If $a=w'$ the ancestor preceding $a$ is $w$. If $a=w$ and $v\ne \epsilon$ the desired ancestor is $v$. In the remaining case $a=w=x$, so the ancestor preceding $a$ is not needed. \bigskip The time for this procedure is $O(1)$. (The value $i$ is computed as in Appendix \ref{LogAppendix}.) We apply the procedure twice (for $a_x$ and $a_y$). Thus we have shown how to compute $ca_C(x,y)$ in $O(1)$ time. Now let $C$ be a tree on $n$ nodes satisfying (\ref{1Eqn}) for $\sigma=s=s_C$. (An example is the compressed tree, with $\beta=2$.) 
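Before turning to the construction, here is a toy sketch of the pieces so far, under the convenient parameters $e=\beta=2$, $c=4$. The numbering helper assigns leftmost intervals, anticipating the procedure of the next lemma; the query simply walks parent pointers to find the ancestor $a$ (the ancestor tables make this step $O(1)$; the walk is used here only for simplicity). All names are illustrative.

```python
def assign_fat_preorder(children, size, root, c=4, e=2):
    """Leftmost-interval fat preorder.  Requires size[u] >= beta*size[w]
    for each child w, with beta=2 (e.g. sizes in a compressed tree)."""
    p, q = {}, {}
    def visit(u, lo):                    # u owns [lo, lo + c*size^e)
        p[u] = lo + size[u] ** e
        q[u] = lo + (c - 1) * size[u] ** e
        nxt = p[u] + 1
        for w in children.get(u, []):
            visit(w, nxt)
            nxt += c * size[w] ** e
    visit(root, 0)
    return p, q

def nca_fat(parent, p, q, size, x, y, c=4, e=2):
    """High-level query: find the first ancestor a of x with
    (c-2)*size[a]**e > |p(x)-p(y)|; the nca is a or its parent."""
    a = x
    while (c - 2) * size[a] ** e <= abs(p[x] - p[y]):
        a = parent[a]
    return a if p[a] <= p[y] < q[a] else parent[a]
```

Running the checker-style properties mentally: each vertex's interval has guards of length $\mathit{size}^e$ on both sides, so the query's distance test is exactly the argument of Lemma \ref{3.1Lemma}.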
We show that a fat preordering of $C$ exists and can be constructed in $O(n)$ time. We use a recursive numbering procedure. It traverses $C$ top-down. When visiting a node $u$, $u$ will have already been assigned an interval $[\overline p(u), \overline q(u))$ with $\overline q(u)-\overline p(u)=cs(u)^e$. Initially assign $\mathy{\rho}(C)$ the interval $[0,cn^e)$. Each child of $u$ will get the leftmost possible interval, i.e., the intervals of $u$'s children will form a partition of an interval beginning at $p(u)+1$. To visit $u$ execute the following procedure: \bigskip {\narrower {\parindent=0pt Assign $p(u)\gets \overline p(u)+ s(u) ^e$ and $q(u)\gets \overline q(u)-s(u)^e$. Then assign intervals to the children of $u$, starting at $p(u)+1$, as follows: \smallskip For each child $v$ of $u$, assign the next interval in $[p(u),q(u))$ of length $cs(v)^e$ to $v$, and then visit $v$. }} \begin{lemma} \label{3.2Lemma} The numbering algorithm gives a valid fat preordering with $\sigma=s_C$ when $C$ is a tree satisfying (\ref{1Eqn}) for $\sigma=s_C$. The time is $O(n)$ for $n=|V(C)|$. \end{lemma} \begin{proof} It is clear that the algorithm achieves properties ($i$) -- ($iii$) of fat preorder, and it runs in $O(n)$ time. We must show that the intervals assigned by $u$ all fit into the interval $[\overline p(u),\overline q(u))$ given to $u$. For $u$ a leaf this holds since the interval's size is $c\ge 3$. Assume $u$ is an interior node and let $U$ denote the set of children of $u$. Starting with the relation $s(u)=1+\sum_{v \in U} s(v)$, multiply by $s(u)^{e-1}$ and use (\ref{1Eqn}) (and its consequence $s(u)\ge\beta$) to obtain $s(u)^{e}\ge \beta^{e-1}+\beta^{e-1}\sum _{v\in U} s(v)^e$. This implies $cs(u)^{e}/\beta^{e-1} \ge 1+\sum_{v\in U} cs(v)^e$. The right-hand side is the total size of intervals assigned in $[p(u),q(u) )$ (the term 1 accounts for the number $p(u)$).
Since $[p(u),q(u) )$ has length $(c-2)s(u)^e$ it suffices to have $c-2\ge c/\beta^{e-1}$. This is equivalent to the left inequality of (\ref{2Eqn}). \end{proof} After numbering the tree in fat preorder we construct the ancestor tables: For every node $x$ we find successive entries of $ancestor_x[i]$ by traversing the path from $x$ to $\rho(C)$. The time and space for this is $O(n \ifmmode \,{ \rm log}\,\else{\it log }\fi n)$ and dominates the entire algorithm. The last important step in the $nca$ algorithm is a procedure (due to \cite{HT}) that computes characteristic ancestors in $T$ from those in the compressed tree $C(T)$. To state it let $C=C(T,\cal P)$ for an {\em arbitrary} set of paths $\cal P$. Suppose the characteristic ancestors $ca_C(x,y)=(c,c_x,c_y)$ are known and we seek the characteristic ancestors $ca_T(x,y)=(a,a_x,a_y)$. \bigskip Let $P$ be the path of $\cal P$ with apex $c$. The definition of $C$ implies that $a=nca_T(x,y)$ is the first common ancestor of $x$ and $y$ on $P$. For $z\in \{x,y\}$ let $b_z$ denote the first ancestor of $c_z$ on $P$, i.e., $b_z$ is $c_z$ or $\pi_T(c_z)$. Then $a$ is the shallower vertex of $b_x$ and $b_y$. \iffalse , wlog say $a=b_x$. Note that if $b_x=c_x$ then $c_x=x$ since If $c_x$ is a leaf this follows since $c_x$ is an ancestor of $x$. If $c_x$ is not a leaf then it is the unique vertex of $P$ that is not a leaf, i.e., $c_x=c$, so by definition $x$ is the $nca$. \fi Next we show how to find $a_x$ (the same procedure applies to $a_y$). Consider three cases. \case{$a\ne b_x$} This makes the predecessor $a^-$ of $a$ on $P$ an ancestor of $b_x$ (possibly $a^-=b_x$). Thus $a^-$ is an ancestor of $x$. Clearly this makes $a_x=a^-$. \case{$a=b_x \ne c_x$} This makes $b_x=\pi_T(c_x)$. Combining equations gives $a=b_x=\pi_T(c_x)$. So by definition $a_x=c_x$. \case{$a=b_x =c_x$} We will show \[c_x=x.\] Combining this with the case definition gives $a=x$. So the definition of $ca_T$ shows $a_x=x$.
(This also makes $a_x=c_x$ as in the previous case.) If $c_x$ is a leaf of $C$ the displayed equation follows since $c_x$ is an ancestor of $x$. If $c_x$ is not a leaf of $C$ then $c_x$ is the unique vertex of $P$ that is nonleaf, i.e., $c_x=c$. The definition of $ca_C$ shows $x=c$. Combining gives $c_x=c=x$. \iffalse \case{$a=b_x$} We show $a_x=c_x$: \subcase{$b_x\ne c_x$} This makes $b_x=\pi_T(c_x)$. Hence $a=b_x=\pi_T(c_x)$. So by definition $a_x=c_x$. \iffalse Next suppose $b_x=c_x$. By transitivity $a=c_x$. Thus $c_x$ is not a leaf (by the assumption $x\ne y$). The only nonleaf on $P$ is $c$. So $c_x=c$. The definition of $ca_C$ shows $x=c$. So $x=c_x$. Transitivity implies $a=x$. So by definition $a_x=x$. \fi \subcase{$b_x=c_x$} First observe $c_x=x$: If $c_x$ is a leaf of $C$ this follows since $c_x$ is an ancestor of $x$. If $c_x$ is not a leaf of $C$ then it is the unique vertex of $P$ that is nonleaf, i.e., $c_x=c$. The definition of $ca_C$ shows $x=c$. Combining gives $c_x=c=x$. Combining again gives $a=b_x=c_x=x$. So the definition of $ca_T$ shows $a_x=x$. (This also makes $a_x=c_x$.) \fi \iffalse ======== Thus c_x is on P By transitivity $a=c_x$. Recall that $c$ is not a leaf (since $x\ne y$). So $c_x$ must be the apex $c$. Thus c_x=c. So a_x=a=c_x. $a=x$, so by definition $a_x=x$. \fi \bigskip \iffalse Otherwise ($a=b_x=c_x$) $a_x$ is the predecessor of $a$ on $P$. For $a_y$ we can assume This implies $a_x$ is \fi \bigskip Putting these pieces together gives our algorithm for $nca$ queries on a static tree. Let us summarize the algorithm. A preprocessing step computes the compressed tree $C=C(T)$. It is numbered in fat preorder ($\beta=2$). In addition the order of nodes in each heavy path is recorded (so we can find $a^-$ in the first case above). The ancestor tables for $C$ are constructed. The query algorithm computes characteristic ancestors, by finding $ca_C(x,y)$ and using it to find $ca_T(x,y)$. 
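The summary above can be sketched end to end. In this sketch brute-force ancestor walks stand in for the fat-preorder query (so it shows the case analysis, not the $O(1)$ bound), and the recorded path order is kept in `paths`, `path_of` and `pos`, all names invented here.

```python
def ca_brute(parent, x, y):
    """Characteristic ancestors (a, a_x, a_y) by walking to the root."""
    def chain(v):                              # root ... v
        out = [v]
        while parent[out[-1]] is not None:
            out.append(parent[out[-1]])
        return out[::-1]
    cx, cy = chain(x), chain(y)
    i = 0
    while i < min(len(cx), len(cy)) and cx[i] == cy[i]:
        i += 1
    a = cx[i - 1]
    return a, (cx[i] if i < len(cx) else a), (cy[i] if i < len(cy) else a)

def ca_T_from_ca_C(parentT, path_of, pos, paths, caC):
    """Translate ca_C(x,y) = (c, c_x, c_y) into ca_T(x,y) = (a, a_x, a_y)."""
    c, cx, cy = caC
    P = paths[path_of[c]]                      # the path of P with apex c
    def onto(z):                               # first ancestor of z on P
        return z if path_of[z] == path_of[c] else parentT[z]
    bx, by = onto(cx), onto(cy)
    a = bx if pos[bx] <= pos[by] else by       # shallower of b_x and b_y
    def side(bz, cz):
        if a != bz:
            return P[pos[a] + 1]               # a^-: neighbor of a on P toward x
        return cz                              # covers the two remaining cases
    return a, side(bx, cx), side(by, cy)
```

The `side` helper collapses the last two cases of the text, since both conclude $a_z=c_z$ (in the third case $c_z=z$).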
\begin{lemma} \label {3.3Lemma} A tree $T$ with $n$ nodes can be preprocessed using $O(n\ifmmode \,{ \rm log}\,\else{\it log }\fi n)$ time and space so that {\em ca} queries can be answered in $O(1)$ time.\hfill$\Box$ \end{lemma} Note that the preprocessing time and the space are both $O(n)$ except for the resources needed to compute and store the ancestor tables. \subsection*{Incremental trees} We extend these ideas to trees that grow by \al. operations. It is then easy to complete the incremental tree data structure by incorporating $add\_root$ operations. We start by presenting the high-level strategy. Then we give the data structure and detailed algorithm, and finally prove that it works correctly. We use a dynamic version $D$ of the compressed tree, maintaining a fat preordering and computing $ca$s as before. In more detail, $D$ is maintained to be $C(T,\P.)$ for a time-varying collection of paths $\cal P$ that always partitions $V(T)$. \al. makes the new leaf a singleton path of \P.. The algorithm to maintain $D$ is based on this operation: Let $v$ be an apex of $D$ (i.e., the shallowest vertex on some path of \P.). Thus $V(D_v)=V(T_v)$. To {\it recompress $v$} means to replace $D_v$ in $D$ by $C(T_v)$. As usual $C(T_v)$ is defined using the heavy paths of $T_v$, and the recompression changes \P. accordingly. Each node of $T_v$ gets {\it reorganized} in this recompression. Recompressing $v$ updates the fat preordering of $D$ as follows. Let $s$ be the size function on the recompressed subtree $D_v=C(T_v)$. The fat preordering will take $\sigma$ to be $s$. The other parameters of the ordering are specified below. If $v=\mathy{\rho}(D)$ the recompression uses the (static) fat preordering algorithm to assign new numbers to the nodes of $D_v$ in the interval $[0,cs(v)^e)$. If $v\ne\mathy{\rho}(D)$ let $u$ be the parent $u=\pi_D(v)$. Let $\overline Q(u)$ be the currently largest value $\overline q(z)$ for a child $z$ of $u$ in $D$.
The {\it expansion interval} for $u$ is $[\overline Q(u),q(u) )$. Clearly no numbers have been assigned in this interval. Use the fat preordering algorithm to assign new numbers to the nodes of $T_v$, in the interval $[\overline Q(u), \overline Q(u)+cs(v)^e )$. This updates $\overline Q(u)$ to $\overline q(v)$, decreasing the size of $u$'s expansion interval by $cs(v)^e $. The old interval for $v$, $[\overline p(v),\overline q(v) )$, will no longer be used, in effect it is discarded. The last part of the high-level description is based on a parameter $\alpha$, $3/2>\alpha>1$. For any node $u$ let $s(u)$ denote its current size in $D$ and let $\sigma(u)$ denote the $\sigma$-value for $u$ in the current fat preordering (i.e., $u$'s interval $[\overline p (u),\overline q(u))$ has size $c\sigma(u)^e$). $s(u)$ equals $\sigma(u)$ plus the number of descendants that $u$ has gained since its reorganization. $D$ is maintained to always have \[s(u)<\alpha \sigma(u)\] for every node $u$. We turn to the data structure. In $D$ the data structure maintains the values of $\overline p, p, q , \overline q, \overline Q, s$ and $\sigma$ for every vertex $u$. It also marks the apexes. The $D$ tree is represented by parent pointers. The tree $T$ is represented by children lists (i.e., each node has a list of its children). Also each path of \P. is recorded (for the $ca$ algorithm). Now we give the detailed algorithms. The $ca$ algorithm is the same as the static case. {\it add\_leaf}$\,$$(x,y)$ proceeds as follows: \bigskip {\narrower {\parindent=0pt Add $y$ to the list of children of $x$ in $T$. Make $y$ a singleton path of $\cal P$ by marking it an apex and setting $\pi_D(y)$ to $x$ if $x$ is an apex, else $\pi_D(x)$. Increase $s(a)$ by 1 for each ancestor $a$ of $y$ in $D$. Let $v$ be the last ancestor of $y$ that now has $s(v)\ge\alpha \sigma(v)$. (This condition holds for $v=y$ by convention.) Recompress $v$. (Start by computing the values $s_T(z)$ for $z\in T_v$.) 
Then construct the new ancestor table for each node of $T_v$. } } \bigskip Note that if $v=y$ in this algorithm, recompressing $v$ does not change $D$ but assigns $v$ its fat preorder values. Finally we give the parameters for the fat preorder. As we shall show, they are selected so the above strategy can be carried out, in particular expansion intervals are large enough. Starting with the above parameter $\alpha\in (1,3/2)$, we will use \[\beta= {2\over 2\alpha-1}\] as the constant of (\ref{1Eqn}) and in addition to inequality (\ref{2Eqn}) we require \begin{equation} \label{3Eqn} c {(\alpha-1/2)^e +1/2^e\over1-1/\alpha^e} \le c-2. \end{equation} Notice $\alpha\in (1,3/2)$ implies the fraction on the left-hand side approaches 0 as $e\to \infty$ so this inequality can always be achieved. For example take $\alpha=6/5$, $\beta=10/7$, $e=4$, $c=5$. (Then (\ref{2Eqn}) amounts to $1.1\le 3\le 4.1$ and (\ref{3Eqn}) amounts to $2.93\le 3$.) The fat preorder must satisfy the defining properties ($i$) -- ($iii$) for these parameters. \iffalse fraction is ${(3/4)^5 +(1/2)^5\over1-(4/5)^5}= (Since $1<\alpha<3/2$ the fraction is less than its value with $e=4$, which is ${(3/4)^4 +(1/2)^4\over1-(4/5)^4}= {81/256 + 16/256\over 1-256/625}=97/256 \over 369/625 \fi \iffalse It is straightforward but tedious to verify that the left inequality of (\ref{3Eqn}) implies the left inequality of (\ref{2Eqn}). Alternatively the reader can simply use the above example values, which satisfy (\ref{2Eqn}) and (\ref{3Eqn}) and suffice to define the data structure. \fi \iffalse Finally we give the parameters for the fat preorder. As we shall show, they are selected so the above strategy can be carried out, in particular expansion intervals are large enough.
For the above parameter $\alpha$, we will use \[\beta= 1+ {1\over 2\alpha-1}\] as the constant of (\ref{1Eqn}) and we replace inequality (\ref{2Eqn}) by \begin{equation} \label{3Eqn} {(\alpha-1/2)^e +(1/2)^e\over1-(1/\alpha)^e} \le c-2\le \beta^e \end{equation} for integral $e,c$. For example take $\alpha=5/4$, $\beta=5/3$, $e=4$, $c=6$. It is straightforward but tedious to verify that the left inequality of (\ref{3Eqn}) implies the left inequality of (\ref{2Eqn}). Alternatively the reader can simply use the above example values, which satisfy (\ref{2Eqn}) and (\ref{3Eqn}) and suffice to define the data structure. The fat preorder continues to satisfy the defining properties ($i$) } \def\xi{($i$) -- ($iii$) } \def\xiii{($iii$) for these parameters. \fi To show the algorithm is correct we must first ensure that expansion intervals are large enough. More precisely consider an apex $u$ of $D$ that has just been reorganized. The algorithm adds $<(\alpha -1)\sigma(u)$ descendants of $u$ in $D$ before recompressing $D_u$. The additions may cause various children $v$ of $u$ to get recompressed and thus assigned new intervals in $[p(u),q(u))$. We must show the total length of all intervals ever assigned to children of $u$ is $<q(u)-p(u)=(c-2)\sigma(u)^e$. Here strict inequality accounts for the fact that the integer $p(u)$ is assigned to $u$. Also note that the ``total length'' includes both the original intervals assigned when $u$ is reorganized, and the new intervals. We will use the following notation. We continue to let $\sigma(u)$ denote its value when $u$ is reorganized. Let $v$ be a child of $u$ in $D$. If $v$ is an apex it may get recompressed some number of times, say $\ell$ times. Let $\sigma_i(v)$, $i=0,\ldots,\ell$ be the various values of $\sigma(v)$. For example $\sigma_0(v)$ is the value of $\sigma(v)$ when $u$ is reorganized. For $i\ge 1$, \begin{equation} \label{ReorganizeEqn} \sigma_i(v)\ge \alpha \sigma_{i-1}(v). 
\end{equation} (If $v$ is not an apex then $\ell=0$ and $\sigma_0(v)=1$.) \begin{lemma} From when $u$ is reorganized until its next reorganization, the total length of all intervals ever assigned to children of $u$ is $<q(u)-p(u)$. \end{lemma} \begin{proof} For any child $v$ of $u$, \eqref{ReorganizeEqn} implies $\sigma_\ell(v)\ge \alpha^i\sigma_{\ell-i}(v)$ for every $i=0,\ldots, \ell$. Thus the total size of all intervals ever assigned to $v$ is strictly less than \[c\sigma_\ell(v)^e (1+ (1/\alpha)^e+(1/\alpha)^{2e}+\ldots\ ) = { c\sigma_\ell(v)^e \over 1-1/\alpha^e }.\] Obviously this holds for children with $\ell=0$ too. So, letting $U$ denote the set of all children of $u$, the total size of intervals ever assigned to children of $u$ is strictly less than \[S={c\sum_{v\in U} \sigma_\ell(v)^e \over 1-1/\alpha^e }.\] We can assume $u$ has at least two children when it is initially reorganized. (If $u$ starts with $\le 1$ child, $u$ gets reorganized as soon as it gains a child, since $\alpha\sigma(u) < (3/2)\sigma(u) \le \sigma(u)+1$.) Right after $u$ was reorganized, every child $v$ had $\sigma_0(v)=s(v)\le s(u)/2=\sigma(u)/2$ descendants in $D$. $u$ gets $<(\alpha-1)\sigma(u)$ new descendants before its next reorganization. The sum of all $\sigma_\ell(v)$ values is less than $\alpha \sigma(u)$. Since $e>1$ simple calculus shows $S$ is maximized when $u$ starts with exactly two children, each with $\le \sigma(u)/2$ descendants, and every new node descends from the same initial child. (For any initial configuration, $S$ is maximized when all new nodes descend from the child that starts with the greatest value $\sigma_0(v)$. This value in turn is maximized when there are only two children, each starting with $\le \sigma(u)/2$ descendants.) Thus the maximum value of $S$ is \[c\sigma(u)^e { (\alpha-1/2)^e + 1/2^e \over 1-1/\alpha^e }.\] Inequality (\ref{3Eqn}) implies $S\le (c-2)\sigma(u)^e$ as desired. \end{proof} Now we complete the correctness proof. \begin{lemma} The {\em add\_leaf} and {\em ca} algorithms are correct. \end{lemma} \begin{proof} We start by verifying (\ref{1Eqn}), i.e., at any time when $v$ is a child of $u$ in $D$, the current $\sigma$ function satisfies $\sigma(u)\ge \beta \sigma(v)$. Immediately after $u$ was last reorganized we have $\sigma(u)=s(u)\ge 2s(v)=2\sigma(v)$. This inequality holds even if $v$ has not been added by {\it add\_leaf}$\,$, since we take $\sigma(v)=0$. After the reorganization $\sigma(u)$ does not change, and $u$ gets less than $(\alpha-1)\sigma(u)$ new descendants.
Thus at any time $s(v)\le \sigma(u)/2+ (\alpha-1)\sigma(u)= (\alpha-1/2)\sigma(u)$. We always have $\sigma(v)\le s(v)$. Thus $\sigma(v)/ (\alpha-1/2)\le \sigma(u)$, i.e., $\beta\sigma(v)\le \sigma(u)$. The defining properties ($i$)--($iii$) of fat preorder numbers hold since recompression uses the static preorder numbering algorithm. (Since $n$ is nondecreasing, the current $\sigma$ always has values $\le n$.) The rest of the data structure consists of the $ancestor$ tables and the orderings of the paths of \P.. If $x\in T_v$, $ancestor_x$ is constructed in its entirety in the recompression. If $x\notin T_v$ the recompression does not change $ancestor_x$. This table remains valid since no ancestor $b$ of $x$ is in $T_v$, so $\sigma(b)$ does not change. Similarly a path of \P. with an apex not in $T_v$ is vertex-disjoint from $T_v$. So it does not change when $v$ is recompressed and the data structure representing it remains valid. The $ca$ algorithm works correctly since the data structure is correct. \end{proof} \begin{lemma} \label{3.5Lemma} The nearest common ancestors problem with {\it add\_leaf}$\,$\ and {\em ca} operations can be solved in $O(m+n\ifmmode \,{ \rm log}\,\else{\it log }\fi^2n)$ time and $O(n\ifmmode \,{ \rm log}\,\else{\it log }\fi n)$ space. \end{lemma} \begin{proof} A $ca$ operation uses $O(1)$ time, so the $ca$s use $O(m)$ time total. In {\it add\_leaf}$\,$$(x,y)$ examining each ancestor of $y$ uses $O(\ifmmode \,{ \rm log}\,\else{\it log }\fi n)$ time. Recompressing a node $v$ uses $O(s(v))$ time for all processing except constructing the new ancestor tables, which uses $O(s(v)\ifmmode \,{ \rm log}\,\else{\it log }\fi n)$ time. Hence the time for an \al. operation is $O(s(v)\ifmmode \,{ \rm log}\,\else{\it log }\fi n)$. The algorithm's recompression strategy implies $s(v)<1+\alpha \sigma(v)\le (1+\alpha) \sigma(v)$. So the time is \[O(\sigma(v)\ifmmode \,{ \rm log}\,\else{\it log }\fi n).\] When $v$ is recompressed $\ge(\alpha-1)\sigma(v)$ descendants $y'$ of $v$ have been added since the last reorganization of $v$. Charge the above time $O(\sigma(v)\ifmmode \,{ \rm log}\,\else{\it log }\fi n)$ to these new descendants $y'$, at the rate of $O(\ifmmode \,{ \rm log}\,\else{\it log }\fi n)$ per node. Since $\alpha>1$ this accounts for the time recompressing $v$.
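(As an aside, the recompression rule in this accounting is easy to exercise in isolation. The following standalone Python sketch is an illustration only, not the paper's data structure: ``recompressing'' here merely resets $\sigma:=s$ on the subtree, the tree-building names are hypothetical, and the example parameters $\alpha=6/5$, $e=4$, $c=5$ from above are used to re-check inequality (3) numerically:)

```python
ALPHA = 6 / 5  # the example parameter alpha from the text

class Node:
    def __init__(self, parent=None):
        self.parent = parent
        self.children = []
        self.s = 1      # current number of descendants, counting the node itself
        self.sigma = 1  # value of s at the node's last recompression

def recompress(v):
    # stand-in for rebuilding the fat preorder of D_v: reset sigma := s below v
    stack = [v]
    while stack:
        u = stack.pop()
        u.sigma = u.s
        stack.extend(u.children)

def add_leaf(x):
    y = Node(parent=x)
    x.children.append(y)
    v = y  # "the condition holds for v = y by convention"
    a = x
    while a is not None:  # increase s(a) for every ancestor a of y
        a.s += 1
        if a.s >= ALPHA * a.sigma:
            v = a  # remember the topmost ancestor that must be recompressed
        a = a.parent
    recompress(v)  # restores s(u) < ALPHA * sigma(u) for every node u
    return y

# numeric check of inequality (3) for the worked example (value is about 2.92)
E, C = 4, 5
LHS3 = C * ((ALPHA - 0.5) ** E + 0.5 ** E) / (1 - ALPHA ** (-E))
```

Recompressing the topmost violator restores the invariant $s(u)<\alpha\sigma(u)$ everywhere, which is exactly the bound the charging argument above relies on.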
A given node $y'$ gets charged at most once from a node $v$ that was an ancestor of $y'$ when \al. added $y'$. (In proof, recompressing $v$ reorganizes every new ancestor $w$ of $y'$. So $y'$ will not be charged from $w$. In other words after $y'$ is charged from $v$ it is only charged from proper ancestors of $v$.) Thus (\ref{1Eqn}) implies any $y'$ is charged $\le \ifmmode \,{ \rm log}_\beta \else{\it log }\fi cn^e$ times total. So the total time charged to $y'$ is $O(\ifmmode \,{ \rm log}\,\else{\it log }\fi^2 n)$. The time bound follows. For the space bound note that the ancestor tables use space $O(n\ifmmode \,{ \rm log}\,\else{\it log }\fi n)$. The remaining space (as specified in the data structure for $D$) is linear. \end{proof} The lemma does not require $n$ (the number of {\it add\_leaf}$\,$\ operations) to be known in advance. (This is the case in most of our applications of this algorithm, although not in the implementation of Edmonds' algorithm.) The timing analysis still applies verbatim. The use of ancestor tables necessitates a storage management system to achieve the lemma's space bound. We use a standard doubling strategy, which we make precise in the following lemma. Consider a collection of arrays $A_i[1..n_i]$ that grow by adding entries. Each such operation enlarges $A_i$ with a new entry, i.e., $n_i$ increases by 1 and the contents of old entries in $A_i$ do not change. We implement this operation by allocating space for all arrays $A_i$ sequentially within a larger array $S$. When the current version of $A_i$ in $S$ becomes full we allocate a new version at the end of $S$ with twice the size. We will also allow an operation that creates a new array $A_i$. \begin{lemma} \label{SpaceDoublingLemma} A collection of arrays $A_i[1..n_i], i=1,\ldots, k$ that grows by adding new array entries and creating new arrays can be maintained within an array $S[1..4n]$, for $n=\sum_i n_i$. The extra time is $O(n)$. \end{lemma} \begin{proof} We maintain the invariant that each current array $A_i$ has size $2s_i$ in $S$ when $n_i\in [s_i+1..2s_i]$, and furthermore the total amount of space in $S$ used for versions of $A_i$ is $\le 4s_i$. (Thus at every point in time every array $A_i$ has used $\le 4s_i <4n_i$ entries of $S$, i.e., all allocated storage is within $S[1..4n]$.) When a new $A_i$ is created we set $s_i=1$, allocate 2 cells, and set $n_i=2$. When a new entry is added to $A_i$ and $n_i=2s_i$ we allocate a new copy of $A_i$ of size $4s_i$ at the end of $S$, copying the contents of $A_i[1..n_i]$ into it. Setting $s'_i=2s_i$ and $n_i=2s_i+1$ the new $A_i$ has size $4s_i=2s'_i$ with $n_i=s'_i+1$. $A_i$ has used $\le 4s_i+4s_i=4s'_i$ entries of $S$. So the invariant is preserved.
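(For concreteness, here is a standalone Python sketch of the doubling scheme. `ArrayStore` and its field names are illustrative, and its bookkeeping differs slightly from the proof's -- a newly created array starts empty with two allocated cells -- while still keeping all versions inside a store of at most $4n$ cells:)

```python
class ArrayStore:
    """All arrays live back-to-back in one big list S (the lemma's S[1..4n])."""
    def __init__(self):
        self.S = []          # the shared backing store
        self.arrays = []     # per-array bookkeeping: [offset, capacity, length]

    def create(self):
        # a new array gets capacity 2 at the end of S
        off = len(self.S)
        self.S.extend([None, None])
        self.arrays.append([off, 2, 0])
        return len(self.arrays) - 1

    def append(self, i, value):
        off, cap, n = self.arrays[i]
        if n == cap:
            # full: allocate a fresh region of twice the size at the end of S,
            # copy the old contents, and abandon the old region
            new_off = len(self.S)
            self.S.extend([None] * (2 * cap))
            self.S[new_off:new_off + n] = self.S[off:off + n]
            off, cap = new_off, 2 * cap
        self.S[off + n] = value
        self.arrays[i] = [off, cap, n + 1]

    def get(self, i, j):
        off, _, n = self.arrays[i]
        assert j < n
        return self.S[off + j]
```

Since the capacities of successive versions of one array form a doubling geometric series, the cells ever used by an array with $n_i\ge 1$ entries total less than $4n_i$, so the whole store stays within $4n$ cells.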
The time for the operation (copying $n_i$ entries) is $O(n_i)$ and can be charged to the $s_i=n_i/2$ elements added to $A_i$ since the last allocation. \end{proof} Returning to Lemma \ref{3.5Lemma} when the final value of $n$ is unknown, note the space usage consists of ancestor tables and single values associated with each vertex. (The children lists for $T$ are stored as vertex values -- $v$ points to its first child, and each child of $v$ points to the next child. The paths of $\cal P$ are also stored as vertex values $pred(v)$ -- a vertex $v$ on a heavy path has $pred(v)$ equal to the predecessor of $v$ on the heavy path.) The single values are updated in {\it add\_leaf}$\,$\ operations and recompressions. Lemma \ref{SpaceDoublingLemma} is used to manage all the space (including single vertex values). We conclude that Lemma \ref{3.5Lemma} holds in its entirety when $n$ is not known in advance. Now we extend these algorithms to allow $add\_root$ operations in addition to \al.. We show that the general incremental tree problem reduces to \al. and $ca$ operations. First extend the characteristic ancestor operation. For an arbitrary node $r$ of $T$, let $nca(x,y;r)$ denote the nearest common ancestor of $x$ and $y$ when $T$ is rerooted at vertex $r$. Define $ca(x,y;r)$ similarly. All other terminology is unchanged, e.g., $ca(x,y)$ denotes the characteristic ancestors in $T$ with its original root, and similarly for the term ``ancestor''. The following lemma shows we can compute $ca(x,y;r)$ just using the $ca$ functions on $T$. \begin{lemma} \label{RootedCalemma} ($i$) Any 3 nodes $x,y,z$ in a tree have $|\{nca(x,y), nca(x,z), nca(y,z)\}|\le 2$. ($ii$) $ca(x,y;z) = \begin{cases} ca(x,y) & \text{if }nca(x,z)= nca(y,z),\\ (a, \pi(a), a_y)& \text{if }nca(x,z)=nca(x,y)\ne nca(y,z) \text{ and } ca(y,z)=(a,a_y, a_z). \end{cases}$ \end{lemma} \remark{Part ($i$) and symmetry of $x$ and $y$ show that part ($ii$) gives a complete definition of $ca(x,y;z)$.} \begin{proof} ($i$) Let $a$ be the shallowest of the three nodes $nca(x,y)$, $nca(x,z)$, $nca(y,z)$. So wlog $a=nca(x,y)$, and let $ca(x,y)=(a,a_x,a_y)$. If $a\ne nca(x,z)$ then $nca(x,z)$ is an ancestor of $x$ deeper than $a$. So $a_x\ne a$ and $z$ descends from $a_x$. Thus the path from $y$ to $z$ goes through $a$ and $a_x$, so $a=nca(y,z)$. ($ii$) Let $ca(x,z)=(b,b_x,b_z)$. Suppose $nca(x,z)= nca(y,z)$. $b=nca(x,z)$ is an ancestor of $y$.
If $b_x\ne b$ and $b_x$ is an ancestor of $y$ then $nca(y,z)$ descends from $b_x$, contradicting $b=nca(y,z)$. Thus $b=nca(x,y)$ and $b=nca(x,y;z)$. Clearly $ca(x,y;z)=ca(x,y)$. Next suppose $nca(x,z)=nca(x,y)\ne nca(y,z)$. The equality implies $b$ is an ancestor of $y$, and the inequality implies $b\ne b_z$ and $b_z$ is an ancestor of $y$. Thus $nca(x,y;z)=nca(y,z)$. Let $ca(y,z)=(a,a_y,a_z)$. Then $ca(x,y;z)=(a, \pi(a), a_y)$. \end{proof} It is now a simple matter to implement $add\_root$ operations in terms of \al.: We never change the child lists of the data structure representing $T$.
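(As an aside, the case analysis of Lemma \ref{RootedCalemma} can be checked against brute-force rerooting. The sketch below is purely illustrative -- it uses naive pointer-chasing $nca$s rather than this section's constant-time structure, computes only the first component $nca(x,y;z)$, and the helper names are hypothetical:)

```python
import random

def build_random_tree(n, seed=1):
    # parent[i] < i, with node 0 as the root
    random.seed(seed)
    return [None] + [random.randrange(i) for i in range(1, n)]

def depth(parent, x):
    d = 0
    while parent[x] is not None:
        x = parent[x]; d += 1
    return d

def nca(parent, x, y):
    dx, dy = depth(parent, x), depth(parent, y)
    while dx > dy: x = parent[x]; dx -= 1
    while dy > dx: y = parent[y]; dy -= 1
    while x != y:
        x, y = parent[x], parent[y]
    return x

def nca_rerooted(parent, x, y, z):
    # Lemma (ii): pick the rerooted nca from the three pairwise ncas
    axy, axz, ayz = nca(parent, x, y), nca(parent, x, z), nca(parent, y, z)
    if axz == ayz:
        return axy
    if axz == axy:
        return ayz
    return axz  # symmetric case: ayz == axy != axz

def reroot(parent, r):
    # brute force: recompute parent pointers with r as the root
    n = len(parent)
    adj = [[] for _ in range(n)]
    for v, p in enumerate(parent):
        if p is not None:
            adj[v].append(p); adj[p].append(v)
    newpar = [None] * n
    stack, seen = [r], {r}
    while stack:
        u = stack.pop()
        for w in adj[u]:
            if w not in seen:
                seen.add(w); newpar[w] = u; stack.append(w)
    return newpar
```

Comparing `nca_rerooted` with `nca` on the explicitly rerooted tree, over all triples of a small random tree, exercises every case of the lemma.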
Instead we maintain a pointer $\varrho$ that gives the current root of $T$ as defined by $add\_root$ operations. The operation $add\_root(y)$ is implemented as \begin{equation} \label{AddRootEqn} add\_leaf(\varrho,y);\,\varrho\gets y. \end{equation} The algorithm for $ca(x,y)$ is $ca(x,y;\varrho)$. We close this section by noting that $add\_root$ can be implemented directly, without the general reduction. The main observation is that $add\_root$ changes the compressed tree in a simple way: Let $T'$ be the result of performing $add\_root(y)$ on $T$, a tree with root $x$. If $|V(T)|>1$ then $y$ plus the heavy path with apex $x$ in $T$ forms a heavy path in $T'$. Thus $C(T')$ can be constructed from $C(T)$ by changing the name of the root from $x$ to $y$ and giving the root a new child named $x$. (This works when $|V(T)|=1$ too.) This transformation is easily implemented in our data structure. \begin{corollary} \label{LogSquaredIncNCACor} The nearest common ancestors problem with {\it add\_leaf}$\,$, $add\_root$ and {\em ca} operations can be solved in $O(m+n\ifmmode \,{ \rm log}\,\else{\it log }\fi^2n)$ time and $O(n\ifmmode \,{ \rm log}\,\else{\it log }\fi n)$ space. \end{corollary} As before the corollary does not require that $n$ be known in advance. \section{{\em nca}'s for Edmonds' algorithm} \label{3.2Sec} This section gives a simple algorithm to find the nca's needed in Edmonds' matching algorithm. Each nca operation uses $O(1)$ time, and the total time for all {\it add\_leaf}$\,$'s is $O(n\ifmmode \,{ \rm log}\,\else{\it log }\fi n)$. The extra space is $O(n)$. So this completes our efficient implementation of the weighted matching algorithm. (Readers interested only in matching needn't go beyond this section.) This section also introduces the multi-level approach, wherein the incremental tree algorithm is used on a number of trees derived from the given tree, each at a given ``level''. We use three versions of the approach. 
The simplest is for Edmonds' algorithm, and the two more elaborate versions are presented in the next two sections. \begin{figure}[t] \centering \input{LevelPics.pstex_t} \caption{(a) Data structure for $nca$s in Edmonds' algorithm. $\mu=\f{\ifmmode \,{ \rm log}\,\else{\it log }\fi n}$. $y$ is a 1-node, \wz. is nonfull. (b) Generalization to $L$ levels, $T_L=T$, $L\ge \ell >1$.} \label{EdLevelsFig} \end{figure} The idea is to reduce the number of tree nodes by contracting small subtrees. The terms {\it vertex} and {\it the incremental tree} $T$ refer to the objects of the given problem, e.g., an operation {\it add\_leaf}$\,$$(x,y)$ makes vertex $x$ the parent of vertex $y$ in $T$. We use two trees, illustrated in Fig. \ref{EdLevelsFig}(a). $T_2$ is the incremental tree $T$, enhanced with its data structure. $T_1$ is a smaller version of $T$, derived by contractions and deletions. This indexing is a special case of our second multi-level structure, illustrated in Fig. \ref{EdLevelsFig}(b): A tree $T$ is represented on $L>1$ levels, with $T=T_L$, and for each level $\ell \in [2..L]$ tree $T_{\ell-1}$ is a minor of $T_{\ell}$. Edmonds' algorithm uses $L=2$. For Edmonds' algorithm define $\mu=\f{\ifmmode \,{ \rm log}\,\else{\it log }\fi n}$. The algorithm maintains a partition of the vertices of $T=T_2$ into subtrees of $\le \mu$ vertices called {\em 2-subtrees}. A 2-subtree containing exactly $\mu$ vertices is {\em full}. $T_1$ is formed from $T=T_2$ by discarding the nonfull 2-subtrees and contracting the full ones. A node of $T_1$ (i.e., a contracted 2-subtree) is called a {\em 1-node}. We use this additional notation, illustrated in Fig. \ref{EdLevelsFig}: For any vertex $x$, \wx. denotes the 2-subtree containing $x$. As an example, 2-subtrees are created and maintained in {\it add\_leaf}$\,$\ as follows: In {\it add\_leaf}$\,$$(x,y)$ if \wx. is a full 2-subtree then $y$ is made the root of a new 2-subtree; otherwise $y$ is added to \wx.. (Note this description guarantees that $T_1$ is a tree and not a forest.) When \wx. is full, \fx. denotes the 1-node containing $x$.% \footnote{The forward arrow notation corresponds to the arrows in Fig. \ref{EdLevelsFig}.} If $x$ is a 1-node, i.e.~the contraction of a 2-subtree $S$, \gx. denotes the root vertex of $S$ (\gx. is a vertex of $T_2=T$). Also note that we write functions of nodes like $\pi(x)$ and $ca(x,y)$, relying on context (i.e., the identity of arguments $x$ and $y$) to indicate which tree $T_i$ is being used. $T_1$ is processed using the incremental-tree $nca$ algorithm of Lemma \ref{3.5Lemma}.
Clearly there are $O(n/\ifmmode \,{ \rm log}\,\else{\it log }\fi n)$ 1-nodes, so the time spent on $T_1$ is $O(m+n\ifmmode \,{ \rm log}\,\else{\it log }\fi n)$ and the space is $O(n)$. $T$ uses a simple data structure: Each root $x$ of a 2-subtree is marked as such, and stores the size of its tree $|V(\Px.)|$. If $\Px.$ is full then $x$ has a pointer to its 1-node $y=\fx.$; also $y$ has a pointer to $x=\gy.$. Each nonroot has a pointer to its 2-subtree root. Each node $x$ of $T$ has a parent pointer, as well as child pointers; the children of $x$ that belong to $\Px.$ all occur before the children not in $\Px.$. Fig.\ref{StaticAlg} gives the algorithm for $nca(x,y)$. Note the $ca$ operation takes place in $T_1$ and the $nca$ operation is in a 2-subtree, in $T_2$. \begin{figure} {\parindent=40pt \def{\bf for }{{\bf for }} \def{\bf to }{{\bf to }} \def{\bf while }{{\bf while }} \def{\bf do }{{\bf do }} \def{\bf if }{{\bf if }} \def\Then{{\bf then }} \def\Return{{\bf return }} \def\g #1.{\mathy{\overleftarrow{#1}}} {\bf if } $\wx.\ne \wy.$ \Then {\advance \parindent by 20pt $/*$ set $x$ and $y$ so $\wx.= \wy.$ and $nca(x,y)$ is unchanged $*/$ {\bf if } {\wx. is nonfull} \Then {$x \gets \pi(\rho(\wx.))$}; {\bf if } {\wy. is nonfull} \Then {$y \gets \pi(\rho(\wy.))$} $(a,a_x,a_y) \gets ca(\fx.,\fy.)$ {\bf if } {$a\ne a_x$} \Then {$x \gets \pi(\g {a_x}.)$}; {\bf if } {$a\ne a_y$} \Then {$y \gets \pi(\g {a_y}.)$} } \Return {$nca(x,y)$} } \caption{Finding nca's in Edmonds' algorithm.} \label{StaticAlg} \end{figure} We complete the data structure by showing how to process {\it add\_leaf}$\,$\ and $nca$ operations in 2-subtrees. We do this by maintaining a representation of ancestors as bitstrings in trees that grow by {\it add\_leaf}$\,$\ operations, assuming their size remains $\le \ifmmode \,{ \rm log}\,\else{\it log }\fi n$. Edmonds' algorithm uses this data structure on every 2-subtree. The details of the data structure are as follows. 
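(As a preview, the whole bitstring scheme fits in a few lines of Python. In this illustrative sketch Python integers stand in for RAM words, `int.bit_length` replaces the precomputed $msb$ table, and an `add_root` helper seeds the first node:)

```python
class SmallTree:
    """nca via ancestor bitstrings, for trees of at most word-size many nodes."""
    def __init__(self):
        self.v = [None]        # v[i] = node with identifier i (ids start at 1)
        self.anc = {}          # anc[x] = bitmask of the identifiers of x's ancestors

    def add_root(self, x):
        self.v.append(x)
        self.anc[x] = 1 << 1   # bit 1 is set: the root is its own ancestor
        return 1

    def add_leaf(self, x, y):
        self.v.append(y)
        i = len(self.v) - 1    # identifier of y, assigned sequentially
        self.anc[y] = self.anc[x] | (1 << i)
        return i

    def nca(self, x, y):
        common = self.anc[x] & self.anc[y]
        # the most significant common bit is the deepest common ancestor
        return self.v[common.bit_length() - 1]
```

Both operations are constant time, matching the text; the only difference from the RAM-model description is that `bit_length` plays the role of the $msb$ table.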
Let $T$ be a tree that grows by {\it add\_leaf}$\,$\ operations. The nodes of $T$ are numbered sequentially as they get added, starting at 1. The number of node $x$ is stored as its ``identifier'' $id[x] \in [1..|V(T)|]$. $T$ also has an array $v[1..|V(T)|]$ that translates identifiers to their corresponding node, i.e., $v[i]$ specifies the vertex of $T$ whose identifier is $i$. Each vertex $x\in T$ has a RAM word $anc[x]$ that stores a string of $\le \ifmmode \,{ \rm log}\,\else{\it log }\fi n$ bits. The $i$th bit of $anc[x]$ (corresponding to $2^{i}$) is 1 iff node number $i$ is an ancestor of $x$. So for example bits 1 and $id[x]$ are always 1. The key property is that reading the bits most-significant-first gives the ancestors of $x$ in their proper order, i.e., decreasing depth. For {\it add\_leaf}$\,$\ we maintain a value $s$ as the current size of $T$. {\it add\_leaf}$\,$$(x,y)$ is implemented as \[\pi(y)\gets x;\; id[y],\, s\gets s+1;\; v[s]\gets y;\; anc[y]\gets anc[x]+2^s.\] We precompute a table that gives most-significant bits. Specifically for any bitstring $b\ne 0$ of $\ifmmode \,{ \rm log}\,\else{\it log }\fi n$ bits, $msb[b]$ is the index of the most significant bit of $b$. The operation $nca(x,y)$ is implemented as \[ v\,[msb\,[anc[x]\land anc[y]]]. \] It is easy to see that {\it add\_leaf}$\,$\ and $nca$ both use $O(1)$ time. To use this data structure in Edmonds' algorithm, we keep space usage linear by using the doubling strategy of Lemma \ref{SpaceDoublingLemma} on the collection of $v$ arrays of all 2-subtrees. \paragraph*{Two-hop back edges} The last part of the data structure is a system of lists: Each vertex $p$ of $T$ has a list $L(p)$ that will hold various edges $(x,y)$ that have a pending $make\_edge(x,y)$ operation. Also various edges $(x,p)$ will be created, and will have a list $L(x,p)$ of similar pending edges for $make\_edge$. (See Cases 2 and 3 below.) For simplicity we omit obvious details concerned with opening new lists and discarding used lists. We turn to the algorithms. The procedure for {\it add\_leaf}$\,$$(x,y)$ follows from the description of the data structure (again for simplicity we omit some obvious boundary cases): \bigskip {\narrower {\parindent=0pt Make $x$ the parent of $y$ and $y$ the last child of $x$. If $\Px.$ was full (before this addition of $y$) make $y$ the root of a new prenode of size 1. Otherwise add $y$ to $\Px.$ and increment the size of $\Px.$. If this makes $\Px.$ full let $r$ be the root of $\Px.$ and $p=\pi(r)$. Make a new node $r^-$ and perform {\it add\_leaf}$\,$$(p^-, r^-)$. } } \bigskip Clearly the time for this procedure, excluding the $add\_leaf$ in $T_1$, is $O(1)$. Now consider $make\_edge$ and the associated $nca$ operations. There is not enough information in $T$ to compute arbitrary $nca$s in $O(1)$ time. Our second idea is to settle for approximate $nca$s, in the following sense: Previously an edge $(x,y)$ was represented by back edges directed from $x$ and $y$ to $a=nca(x,y)$. Case 2 below replaces $a$ by $a'=nca(b(B_x),y)$. Clearly this change gives rise to the same blossoms. In Case 1 $a\in B_x$. We replace $a$ by a vertex of $B_x$, again getting an equivalent back edge. In Case 3 $make\_edge$ uses an ``incomplete'' representative edge $xa'$ where $a'$ is on the $xa$-path of $T$. If a future $merge$ adds $a'$ to $B_x$ there will be enough information to enlarge \E.
with back edges that complete the representation of $(x,y)$. The details are as follows. Suppose a grow, expand or blossom step creates a new outer blossom $B$ and issues a number of operations $make\_edge(x,y)$. Wlog assume $x$ is always a vertex of $B$. Process each such operation as follows. First assume $\Px.$ and $\Py.$ are both full. Let $ca(x^-,y^-)=(c,c_{x^-},c_{y^-})$. (Possibly one or both of $c_{x^-}$, $c_{y^-}$ are equal to $c$.) Let $r=c^+$, the root of the prenode containing $nca(x,y)$. Execute the unique case that applies: \numcase 1 {$r\in B$} Add $(y,r)$ to \E.. \numcase 2 {$b(B) \in \Pr.-r$} Let $p$ be the first ancestor of $y$ in \Pr.: If $y\in \Pr.$ then $p=y$ else $p=\pi(c^+_{y^-})$. Add $(x,y)$ to $L(p)$. (The processing of $L(p)$ edges is completed in the dfs below.) \numcase 3 {Neither Case 1 nor 2 holds} This makes $b(B)$ a proper descendant of a vertex of \Pr., specifically vertex $p=\pi(c^+_{x^-})$. Clearly $x\notin \Pr.$ and $(x,p)$ joins two distinct blossoms. Add $(x,p)$ to \E., marking it ``incomplete'' and setting $L(x,p)=\{(x,y)\}$. \bigskip When $\Px.$ or $\Py.$ is nonfull we use the same procedure but with a modified definition of the triplet $(c,c_{x^-},c_{y^-})$: If $\Px.=\Py.$ then define $r$ to be the root of $\Px.$. Case 1 or 2 will apply. Now assume $\Px.\ne \Py.$. Let $f$ be the first ancestor of $x$ in a full prenode, i.e., $f=\pi (\rho(\Px.))$ if $\Px.$ is nonfull and $f=x$ if $\Px.$ is full. Define $g$ similarly as the first ancestor of $y$ in a full prenode. Then take $ca(f^-,g^-)$ as the triplet $(c,c_{x^-},c_{y^-})$. This triplet gives the true characteristic ancestors of $x$ and $y$ because any nonfull prenode is at the frontier of $T$. The processing of Case 2 edges (on lists $L(p)$) is completed in the $find\_min$ operation that follows all the $make\_edge$s for the new outer blossom $B$. (When the search of Edmonds' algorithm has executed an expand step, there may be a number of such blossoms $B$.
Each one is processed separately in $find\_min$.) $find\_min$ sets $r$ to the root of the prenode containing ${b(B)}$ and processes the Case 2 edges for $B$ as follows. \iffalse Recall that each Case 2 edge $(x,y)$ has a corresponding marked vertex $p$ in \Pr., which in turn points to $(x,y)$; there may be more than 1 such $(x,y)$ for a given $p$. The marked vertices $p$ in \Pr. are processed as follows: \fi We will transfer the edges of $L(\beta)$ to the appropriate vertices of \wbeta.: For each vertex $z\in \wbeta.$ define $\varepsilon(z)$ as an edge $(x,y)$ of minimum cost where $(x,y,c_y)\in L(\beta)$ has $c_y=z$; $\varepsilon(z)=\Lambda$ if no such edge exists. We will ignore edges of $L(\beta)$ that are not $\varepsilon$-values; clearly this is valid, i.e., no blossom step need be done for them. \bigskip \def\fm.{$min\_edge$} \iffalse \fm. -- \twolines {return an edge $vw$ of $\cal E$ that has minimum cost subject to the constraint} {that $v$ and $w$ are (currently) in distinct blossoms.} }} \fi {\narrower {\parindent=0pt Initialize every value $\varepsilon(z), z\in \wbeta. $ to $\Lambda$. Scan the list $L(\beta)$ and use each tuple $(x,y,c_y)\in L(\beta)$ to update the value of $\varepsilon(c_y)$. Do a depth-first search of \wbeta.. Start by traversing the path from the root $\rho(\wbeta.)$ to $\beta$. During the entire search keep track of the most recent ancestor $a$ of $\beta$ that has been reached. When the dfs visits any vertex $z$, clearly $a=nca(\beta,z)$. So if $\varepsilon(z)$ is an edge $(x,y)$ add edges $xa,ya$ to \E.. Finally discard $L(\beta)$ and all values $\varepsilon(z)$. \iffalse Since $x\in B$, each marked vertex $p$ has $nca(x,y)$ equal to the current vertex $a$ when $p$ is reached. So add the corresponding 1 or 2 edges $xa,ya$ to \E.. \fi } } \bigskip When the dfs terminates \E. contains the desired back edges for every edge $(x,y)$ covered by Cases 1 or 2. A Case 3 edge $(x,y)$ is represented in \E.
by an edge $(x, p)$ for $p=\pi(c^+_{x^-})$. This edge is not a loop, but it only partially represents the desired edge $xa$, $a=nca(x,y)$; also $ya$ is not represented at all. As promised, the representation of a Case 3 edge $(x,y)$ can be completed if a future $merge$ adds $p$ to $B_x$: At that point the base $b(B_x)$ is an ancestor of $p$. So now $(x,y)$ is a Case 1 or 2 edge, and reissuing $make\_edge(x,y)$ will complete its processing. But carrying out this plan requires a small change to the $merge$ algorithm of Section \ref{TBMAlgSec}. Let us begin by observing the various possibilities for Case 3 edges. \iffalse We now describe the remaining processing of this edge $(x,p)$ (and its corresponding Case 3 edge $(x,y)$). \fi \E. may contain many copies of the same edge $(x,p)$. Furthermore these copies may be added to \E. in many different blossom steps (for different blossoms each containing $x$). Also \E. may contain many edges $(x',p')$ where $x'\in B_x$ and $p'$ may be $p$ or a different vertex in the same prenode as $p$. \iffalse different Case 3 edges $(x,y)$ that correspond to the same edge $(x,p)$. Furthermore the corresponding $make\_edge(x,y)$ operations may be issued in many different blossom steps (for different blossoms $B_x$). Also a given blossom $B_x$ may contain other vertices $x'$ with Case 3 edges $(x',y')$ corresponding to $(x',p')$ where $p'$ may be $p$ or a different vertex in the same prenode as $p$. \fi Every incomplete edge $(x,p)\in \E.$ must be preserved for possible future reissues of $make\_edge(x,y)$, $(x,y)\in L(x,p)$. But the algorithm $merge(X,Y)$ (given right before Lemma \ref{MergeCreditLemma}) starts by discarding edges: For the new blossom $Z$, edges with both ends in $Z$ are discarded and only the smallest cost edge joining $Z$ to a given blossom $B\ne Z$ is retained. (See the gather operation on the list $EX_0$. 
This is required by our charging scheme -- the proof of Lemma \ref{MergeCreditLemma} requires that only 1 edge joins any two blossoms.) We remedy this using a system of lists. Every incomplete edge $(x,p)$ currently in \E. has a list of Case 3 edges $(x',y')$ whose corresponding incomplete edge $(x',p')$ has $x'\in B_x, p'\in B_p$ (for the current blossoms $B_x, B_p$). We modify the gather operation on $EX_0$ in two obvious ways: First, for any blossom $B\ne Z$, as before only the smallest cost incomplete edge $(x,p)$ with $x\in Z, p\in B$ is retained. Any other such edge $(x',p')$ joining $Z$ and $B$ is discarded, but $L(x',p')$ is appended to $L(x,p)$. Second, consider the incomplete edges $(x,p)$ in $EX_0$ that have both ends in $Z$. $(x,p)$ is discarded from \E., but a $make\_edge$ operation is issued for every edge in $L(x,p)$. The last modification for tree-blossom merging is in $find\_min$: If $find\_min$ selects an incomplete edge $(x, p)$, we return the corresponding edge $(x,y)$ of smallest cost. This completes the description of the algorithm. Correctness of this algorithm follows from 3 simple observations. Consider an incomplete edge $(x,p)$ and a corresponding edge $(x,y)$. If $find\_min$ selects $(x, p)$ it is clearly correct, although (as always) this may or may not lead to a blossom step for $(x,y)$. \iffalse (The latter possibility implies $(x,y)$ may be returned by many different $find\_mins$, and these $find\_mins$ needn't be consecutive.) \fi If the search halts with $x$ and $p$ in distinct blossoms, again the implementation is correct (even though \E. has no representative of $ya$). The last possibility is that some $merge$ operation enlarges $B_x$ to a blossom containing $p$ and $make\_edge(x,y)$ is reissued. Either $y\in B_x$ or $(x,y)$ is a Case 1 or 2 edge, as mentioned. To estimate the time note that the dfs of $\Pr.$ uses $O(\ifmmode \,{ \rm log}\,\else{\it log }\fi n)$ time. There are $O(n)$ such searches, $\le 1$ for each blossom. 
So the dfs's use total time $O(n\ifmmode \,{ \rm log}\,\else{\it log }\fi n)$. The rest of the processing is accounted for by charging $O(1)$ to each $make\_edge$ operation (as noted above an edge $(x,y)$ has $\le 2$ operations $make\_edge(x,y)$). \fi Using the results of \cite{G17}, which leaves the incremental-tree nca problem as the last detail of Edmonds' algorithm, we get the following. \begin{theorem} \label{EdmondsThm} A search of Edmonds' algorithm can be implemented in time $O(m+n\ifmmode \,{ \rm log}\,\else{\it log }\fi n)$ and space $O(m)$. \hfill$\Box$\end{theorem} \iffalse \subsection{Multi-level nca algorithms} \label{3.3Sec} This section begins by generalizing the approach of the previous section to an arbitrary number of trees (which we now call levels). Then it presents a 3-level algorithm to solve the incremental-tree nca problem in linear time and space $O(m+n)$. That algorithm is used in the next section for our most general nca algorithm. \paragraph*{The framework} General multi-level nca algorithms are quite similar to the 2-level algorithm of last section. This section gives the high-level organization of an incremental-tree nca algorithm with an arbitrary number of levels. That organization is also used in Section \ref{3.4Sec} to solve the general problem of nca's with linking operations. The terms {\it vertex} and {\it the incremental tree} $T$ refer to the objects of the given problem, e.g., an operation {\it add\_leaf}$\,$$(x,y)$ makes vertex $x$ the parent of vertex $y$ in $T$. A multi-level algorithm works on a (relatively small) number of levels designated $\ell =1,\ldots, L$. At any point in the algorithm a tree that has been built up by $link$ operations is called a {\em link tree}. Each link tree $T$ is represented by a tree $T_\ell$ on every level (although a ``small'' link tree may have $T_\ell$ empty for levels $\ell$ less than some value).
We call each of these $T_\ell$s a link tree too. $T_L$ is the tree $T$. Every other $T_\ell$ is a smaller tree derived from $T_{\ell+1}$ by deletions and contractions. Each $T_\ell$ is composed of {\em $\ell$-nodes} (called {\em nodes} if the level is clear). Every node on level $\ell$ arises this way. The algorithm maintains a partition of the nodes of $T_\ell$ into subtrees called {\it $\ell$-prenodes} ({\em prenodes} if the level is clear). Each level has given algorithms that solve the problem on prenodes (see below). \iffalse has an associated partition of a subtree of $T$ into subtrees called {\it $(\ell-1)$-prenodes}, or {\em prenodes} if the level is clear. Contracting all $\ell$-nodes transforms the subtree of $T$ into a tree $T_\ell$. Thus there are $|V(T_\ell)|$ $\ell$-nodes. Level $\ell$ has an associated partition of $T_\ell$ into subtrees called {\it $\ell$-trees}. Each level has an algorithm and corresponding data structures to compute the characteristic ancestors of any two $\ell$-nodes in the same $\ell$-tree. (These characteristic ancestors are $\ell$-nodes.) \fi Each level $\ell$ has an integral size parameter $\mu_\ell$. Any $\ell$-prenode $P$ has $|V(P)|< 2\mu_\ell$. $P$ is {\it big} if $|V(P)|\ge \mu_\ell$. $P$ is {\em medial} if it is not big but it contains a node $\pi(r)$ for $r$ the root of a big prenode. $P$ is {\em small} if it is not big and it does not contain any node $\pi(r)$ for $r$ the root of a prenode, i.e., $P$ is at the frontier. This accounts for every prenode, i.e., we maintain the invariant that a prenode $P$ with $|V(P)|< \mu_\ell$ does not contain $\pi(r)$ for $r$ the root of a medial or small prenode. We take $\mu_1=n+1$. Thus each link tree has at most one 1-prenode and it is small. For $1\le \ell<L$, the $\ell$-nodes are the contractions of the $(\ell+1)$-prenodes. $T_\ell$ is formed from $T_{\ell+1}$ by contracting every big or medial prenode. The invariant implies that $T_\ell$ is a tree (i.e., not a forest). 
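A minimal sketch (ours, not from the source) of the big/medial/small trichotomy just defined, with the two structural tests on $T_\ell$ abstracted into boolean arguments:

```python
def classify_prenode(size, mu, contains_parent_of_big_root, contains_parent_of_any_root):
    """Classify an ell-prenode P. All argument names are illustrative:
    size is |V(P)| (invariant: size < 2*mu), mu is the level parameter mu_ell,
    and the booleans say whether P contains pi(r) for r the root of a big
    prenode, respectively of any prenode."""
    assert size < 2 * mu
    if size >= mu:
        return "big"
    if contains_parent_of_big_root:
        return "medial"
    # the maintained invariant: a non-big prenode contains no pi(r) for r
    # the root of a medial or small prenode
    assert not contains_parent_of_any_root
    return "small"
```

With $\mu=4$, a prenode of 5 nodes is big; one of 3 nodes containing the parent of a big prenode's root is medial; one of 2 nodes at the frontier is small.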
A small prenode in $T_{\ell+1}$ is not represented in levels $\ell$ and lower. \def\wl{\widehat{\,l\,}} \def\wc{\widehat{\,c\,}} We use this additional notation: Let $x$ be an $\ell$-node, $1\le \ell\le L$. \Px. denotes the prenode containing $x$. If $\Px.$ is not small (in particular $\ell>1$) then $x^-$ denotes the ($\ell-1$)-node that is the contraction of $\Px.$. If $\ell<L$ then $x$ is the contraction of an $(\ell+1)$-prenode $P$, and $x^+$ denotes the root node of that subtree $P$. As before we write functions of nodes like $\pi(x)$, relying on context (i.e., the identity of argument $x$) to indicate which tree $T_\ell$ is being used. Each $T_\ell$ uses this data structure: Each root of a prenode $P$ is marked as such, and stores the size of the prenode $|V(P)|$. Let $x$ be an $\ell$-node in $T_\ell$. $x$ has a parent pointer. $x$ has a pointer to $x^+$ if $\ell<L$, and a pointer to its contracted node $x^-$ if $\ell>1$. Unlike the previous section a node $x$ does not have a pointer to its prenode root $\mathy{\rho}(\wx.)$. A prenode is called {\em nonroot} if it does not contain the root of its link tree. A node in a nonroot prenode has a pointer to its prenode root. Nodes in small prenodes that constitute their entire link tree do not have $O(1)$-time access to their prenode root. \iffalse Unlike the previous section a node $x$ does not have a pointer to its prenode root $\mathy{\rho}(\wx.)$. It does not seem possible to maintain general prenode root pointers within our desired time bound (we elucidate this when we present the details of the link routine $l$ below). If $x$ is in a link tree $T_\ell$, $\ell>1$, $x$ can access its prenode root in $O(1)$ time by computing $(x^-)^+$. In fact this is the reason that small prenodes get contracted to an $x^-$ node. A 1-node $x$ does not have such access to its link tree root, but we shall see this is not needed. \fi (Note that prenodes are defined by equality of the prenode root).
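The per-node fields just listed can be summarized in a record sketch; all names here are illustrative stand-ins for the pointers described above ($x^+$, $x^-$, the parent, and the prenode-root pointer of nodes in nonroot prenodes):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class EllNode:
    """Sketch (ours) of the per-node record of an ell-node in T_ell."""
    parent: Optional["EllNode"] = None        # parent pointer in T_ell
    up: Optional["EllNode"] = None            # x^+: root of the contracted (ell+1)-prenode (ell < L)
    down: Optional["EllNode"] = None          # x^-: contraction of P_x on level ell-1 (ell > 1)
    prenode_root: Optional["EllNode"] = None  # only for nodes in nonroot prenodes
    is_prenode_root: bool = False             # roots are marked as such ...
    prenode_size: int = 0                     # ... and store |V(P)|

def same_prenode(u: "EllNode", v: "EllNode") -> bool:
    # prenodes are defined by equality of the prenode root, as the text notes
    ru = u if u.is_prenode_root else u.prenode_root
    rv = v if v.is_prenode_root else v.prenode_root
    return ru is not None and ru is rv
```

This mirrors the convention that a prenode root is marked and stores the prenode size, while membership queries compare prenode-root pointers.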
\iffalse If $\wx.$ is small then $x^-$ is undefined, so we can check this condition in $O(1)$ time. But nodes in small prenodes do not have $O(1)$-time access to their prenode root. \fi \iffalse the children of $x$ that belong to $\Px.$ all occur before the children not in $\Px.$. (This convention allows us to perform a dfs of $\Px.$ in time $O(|V(\Px.)|)$.) \fi The routines used in our data structure are \[ c(x,y,\ell),\ l(x,y,\ell),\ \widehat{\,c\,}(x,y),\ \wl(x,y). \] $c(x,y,\ell)$ is a recursive routine that performs the ca operation on $\ell$-nodes $x$ and $y$; it calls $c(x^-,y^-,\ell-1)$. Similarly $l(x,y,\ell)$ performs the link operation on $\ell$-nodes $x$ and $y$, calling $l(x^-,y^-,\ell-1)$. So for vertices $x,y$ in the given graph the operations $ca(x,y)$ and $link(x,y)$ are implemented by $c(x,y,L)$ and $l(x,y,L)$ respectively. $\widehat{\,c\,}(x,y)$ is a given algorithm. Its arguments $x$, $y$ are nodes in the same prenode at some level $\ell$ (so actually there may be $L$ different $\widehat{\,c\,}$ algorithms, one for each level). $\widehat{\,c\,}$ returns the characteristic ancestors of $x$ and $y$ (which are nodes in $\wx.=\wy.$). $\wl(x,y)$ is a given algorithm, with arguments $x$, $y$ nodes in the same level $\ell$. Node $y$ is the root of a link tree at level $\ell$ and $x$ is a node not in that tree. $\wl(x,y)$ updates the partition of $\ell$-nodes into prenode subtrees so that $x$ is the parent of $y$. \iffalse The routines used in our data structure are \[ l(x,y,\ell),\ \wl(x,y),\quad c(x,y,\ell),\ \widehat{\,c\,}(x,y). \] $l(x,y,\ell)$ is a recursive routine that performs the link operation on $\ell$-nodes $x$ and $y$; it calls $l(x^-,y^-,\ell-1)$. Similarly $c(x,y,\ell)$ performs the ca operation on $\ell$-nodes $x$ and $y$, calling $c(x^-,y^-,\ell-1)$.
So for vertices $x,y$ in the given graph the operations $link(x,y)$ and $ca(x,y)$ are implemented by $l(x,y,L)$ and $c(x,y,L)$ respectively. $\widehat{\,c\,}(x,y)$ is a given algorithm. Its arguments $x$, $y$ are nodes in the same prenode at some level $\ell$ (so actually there may be $L$ different $\widehat{\,c\,}$ algorithms, one for each level). $\widehat{\,c\,}$ returns the characteristic ancestors of $x$ and $y$ (which are nodes in $\wx.=\wy.$). $\wl(x,y)$ is a given algorithm, with arguments $x$, $y$ nodes in the same level $\ell$. Node $y$ is the root of a link tree at level $\ell$ and $x$ is a node not in that tree. $\wl(x,y)$ updates the partition of $\ell$-nodes into prenode subtrees so that $x$ is the parent of $y$. ============================================== \[ link(x,y),\ l(x,y,\ell),\ \wl(x,y),\quad ca(x,y),\ c(x,y,\ell),\ \widehat{\,c\,}(x,y). \] $link$ and $ca$ have arguments that are vertices, and they implement the solution to the overall problem. $l(x,y,\ell)$ is a recursive routine that performs the link operation on $\ell$-nodes $x$ and $y$; it calls $l(x^-,y^-,\ell-1)$. Thus $link(x,y)$ is the value of $l(x,y,L)$. Similarly $c(x,y,\ell)$ is a recursive routine that performs the ca operation on $\ell$-nodes $x$ and $y$; it calls $c(x^-,y^-,\ell-1)$. ================================================= and $ca(x,y)$, relying on context (i.e., the identity of arguments $x$ and $y$) to indicate which tree $T_\ell$ is being used. \fi On level $L$ the link trees partition the vertices. So every vertex is in a prenode. Now we describe the 2 recursive algorithms starting with $l(x,y,\ell)$. Assume the prenodes \Px. and \Py. exist.
This is true for every $L$-node, and we shall see it holds in the recursive calls ($\ell<L$). If $x$ is not in a link tree, i.e., it is not in a prenode, declare $\Px.$ equal to $\{x\}$ and create an $(\ell-1)$-node $x^-$ if $\ell\ge 2$; similarly for $y$. Start by making $x$ the parent of $y$. Neither $\Px.$ nor $\Py.$ is small: Do $l(x^-, y^-,\ell-1)$. $\Py.$ is small but $\Px.$ is not. $\Px.$ is small but $\Py.$ is not. \Py. is big: \Px. becomes medial. Let $r$ be the root of \Px.. If $\pi(r)$ exists set $\pi(r^-)\gets \pi(r)^-$. Then (in either case) do $l(x^-,y^-,\ell-1)$. \Py. is medial: Let $r$ be the root of \Px. (recall this involves node $x^-$). Then discard node $x^-$ and for every $z\in \Px.$ reset $z^-\gets y^-$. Reset $(y^-)^+\gets r$. Do $\wl(x,y)$. (The resulting tree becomes $\wz.$ for every $z\in \wx.\cup \wy.$. $\wz.$ is either big or medial.) Both $\Px.$ and $\Py.$ are small: Do $\wl(x,y)$. If $\Px.$ is nonroot then define the root pointer of every node in $\Py.$. If this makes $\Px.$ big then create a node $x^-$ and add it to level $\ell-1$ with an $l$ ({\it add\_leaf}$\,$) operation; if $\Px.$ is big but is a root prenode, just add the node. If $\Px.$ is full then make $y$ a singleton $\ell$-prenode, make $x$ the parent of node $y$ and halt. ($\Py.$ remains a valid prenode, still at the frontier if it is nonfull.) Suppose $\Px.$ is nonfull; execute the level $\ell$ algorithm {\it add\_leaf}$\,$$(x,y)$. If $\Px.$ is still not full then halt. Otherwise if $\Px.$ is the unique $\ell$-prenode then make $x^-$ the unique $(\ell-1)$-node and a singleton $(\ell-1)$-prenode, and halt. In the remaining case observe that $w=p_\ell(\mathy{\rho}(\Px.))$ is in a full $\ell$-prenode. Call $al(w^-,x^-,\ell-1)$ to complete the processing. This algorithm is correct because it clearly preserves the defining properties of the data structure. As mentioned above $ca(x,y)$ denotes the overall algorithm, i.e., it is defined for vertices $x,y$.
This operation is implemented using the recursive routine $ca(x,y,\ell)$, which computes the characteristic ancestors of distinct $\ell$-nodes $x$, $y$ in $T_\ell$. Thus for vertices $x$ and $y$, $ca(x,y)$ equals $ca(x,y,L)$. Finally each level $\ell$ has an algorithm $ca(x,y,P)$, where $x$ and $y$ are $\ell$-nodes in the same $\ell$-prenode. It computes the characteristic ancestors of $x$ and $y$, which is a triplet of $\ell$-nodes in $P$. The high-level outline of $ca(x,y,\ell)$ is to first use $ca(x^-,y^-,\ell-1)$ to find the $\ell$-prenode $P$ that contains $nca(x,y)$, and then use $ca(x,y,P)$ to find the desired level $\ell$ characteristic ancestors. Now we give the detailed description of $ca(x,y,\ell)$. First consider the case where both $\ell$-prenodes $\Px., \Py.$ are full. If $\Px.=\Py.$ then simply return $\widehat{\,c\,}(x,y)$. Otherwise $\Px.$ and $\Py.$ correspond to two distinct $(\ell-1)$-nodes. Set $(a, a_x, a_y)\gets ca(x^-, y^-,\ell-1)$. Thus the $\ell$-prenode $\widehat{a^+}$ contains $nca(x,y)$. Set $b_x$ to the first $\ell$-node ancestor of $x$ in $a$: If $a_x=a$ then $b_x=x$ else $b_x=\pi(a^+_x)$. Define $b_y$ similarly. If $b_x\ne b_y$ then the desired characteristic ancestors are $\widehat{\,c\,}(b_x,b_y)$. If $b_x= b_y$ then $b_x=nca(x,y)$. It is easy to find the two other characteristic ancestors -- they are among the nodes $b_x$, $\mathy{\rho}(a_x)$ and $\mathy{\rho}(a_y)$. It remains to consider the case when one or both of $\Px.$, $\Py.$ is nonfull. As above, if $\Px.=\Py.$ then return $ca(x,y,\Px.)$. Otherwise for $z=x,y$ set $c_z$ to the first $\ell$-node ancestor of $z$ in a full prenode: If $z$ is in such a prenode then $c_z=z$ else $c_z=p_\ell(\mathy{\rho}(P_z))$. If $c_x\ne c_y$ then the desired characteristic ancestors are those of $c_x$ and $c_y$ in $T_\ell$. They are found using the procedure for the first case.
If $c_x=c_y$ then $c_x=nca(x,y)$ and the other characteristic ancestors are among the nodes $c_x$, $\mathy{\rho}(\Px.)$ and $\mathy{\rho}(\Py.)$. This concludes the algorithm for $ca$. Correctness is clear from the discussion. Next consider {\it add\_leaf}$\,$$(x,y)$. We use a similar recursive routine $al(x,y,\ell)$ where $\ell$-node $x\in T_\ell$ is to be made the parent of a new $\ell$-node $y$. If $\Px.$ is full then make $y$ a singleton $\ell$-prenode, make $x$ the parent of node $y$ and halt. Otherwise $\Px.$ is nonfull; execute the level $\ell$ algorithm {\it add\_leaf}$\,$$(x,y)$. If $\Px.$ is still not full then halt. Otherwise if $\Px.$ is the unique $\ell$-prenode then make $x^-$ the unique $(\ell-1)$-node and a singleton $(\ell-1)$-prenode, and halt. In the remaining case observe that $w=p_\ell(\mathy{\rho}(\Px.))$ is in a full $\ell$-prenode. Call $al(w^-,x^-,\ell-1)$ to complete the processing. This algorithm is correct because it clearly preserves the defining properties of the data structure. \paragraph*{Linear-time incremental trees} For incremental trees take $L=3$ levels with \[\mu_3= \c{(\il 2 n)^2},\ \mu_2= \c{\ifmmode \,{ \rm log}\,\else{\it log }\fi^2 n}. \] Levels two and one use the incremental-tree algorithm of Section \ref{3.1Sec}. The algorithm for level three is specified below. First however observe that levels two and one both use $O(m+n)$ time and $O(n)$ space. To prove this, Lemma \ref{3.5Lemma} shows that for $\ell<3$, an $\ell$-prenode containing $k$ $\ell$-nodes processes $p$ {\it ca} operations in $O(p+k \ifmmode \,{ \rm log}\,\else{\it log }\fi^2\mu_\ell)$ time. For $\ell<L$ an $\ell$-node contains $\prod_{i=\ell+1}^L \mu_i$ vertices. So there are $\le {n\over \mu_{\ell+1} }$ $\ell$-nodes. Thus the total time on level $\ell$ is $O(m+ { n\ifmmode \,{ \rm log}\,\else{\it log }\fi^2\mu_\ell \over \mu_{\ell+1} } )$. For $\ell=1,2$ we have ${ \ifmmode \,{ \rm log}\,\else{\it log }\fi^2\mu_\ell \over \mu_{\ell+1} } =O(1)$.
Thus levels one and two use $O(m+n)$ time. A similar argument using Lemma \ref{3.5Lemma} shows that the space for level $\ell=1,2$ is $O( { n\ifmmode \,{ \rm log}\,\else{\it log }\fi \mu_\ell \over \mu_{\ell+1} } )=O(n)$. It remains only to specify the level 3 algorithms {\it add\_leaf}$\,$$(x,y,P)$ and $ca(x,y,P)$, for $x$ and $y$ vertices in prenode $P$. We use a technique similar to microsets \cite{GT85}. Consider a 3-prenode $P$. Each vertex $x$ of $P$ has an identifier $id[x]$, an integer between 1 and $\mu_3$. The $i$th vertex added to $P$ is assigned the identifier $i$. $P$ has 2 associated arrays. The $v$ array translates identifiers to their corresponding vertex, i.e., $v[i]$ specifies the vertex whose identifier is $i$. Each vertex $x$ has an {\it ancestor list} denoted $ancestor[x]$. This is the sequence of identifiers of its ancestors in $P$, starting with the root and ending with $x$. Since a vertex has at most $\mu_3$ ancestors, an ancestor list can be represented in $O(\mu_3 \ifmmode \,{ \rm log}\,\else{\it log }\fi \mu_3) = O((\il 2 n)^3)$ bits. Since we assume a random access machine with a word size of $\ifmmode \,{ \rm log}\,\else{\it log }\fi n$ bits, $ancestor[x]$ fits in $O(1)$ words. Store each ancestor list left-justified in its word, with each identifier written as a string of precisely $\f{\ifmmode \,{ \rm log}\,\else{\it log }\fi \mu_3}+1$ bits. The algorithm for {\it add\_leaf}$\,$$(x,y,P)$ constructs $ancestor[y]$ by adding $y$'s identifier in the appropriate position to $ancestor[x]$. This can be done in $O(1)$ time using arithmetic operations. In addition the algorithm maintains the $v$ array: It adds an entry for $y$. We use a doubling strategy to keep the total space linear: When $P$ grows to $2^i$ vertices, we allocate a new $v$ array of size $2^{i+1}$, copying the current $v$ array into the first $2^i$ entries. The algorithm for $ca(x,y,P)$ forms the boolean {\it exclusive or} of $ancestor[x]$ and $ancestor[y]$. 
The most significant bit of the result occurs within the bit field that stores $a_x$ in $ancestor[x]$ and $a_y$ in $ancestor[y]$. All boolean operations needed -- {\it exclusive or}, finding the most significant bit, and recovering the appropriate fields for $a_x$ and $a_y$ -- can be done in $O(1)$ time by table look-up. The appropriate tables are generated in $O(n)$ time. A more detailed discussion of similar algorithms involving table look-up can be found in \cite{AHU, GT85}. This discussion implies that level 3 uses $O(m+n)$ time and $O(n)$ space. This completes the 3-level incremental tree algorithm. \begin{theorem} \label{3.1Thm} The incremental-tree nearest common ancestors problem with $ca$ operations can be solved in $O(m+n)$ time and $O(n)$ space.\hfill$\Box$ \end{theorem} \iffalse We close this section by sketching a simpler version of our algorithm. It achieves the same asymptotic efficiency but applies only to the problem of nearest common ancestors for static trees. In this case the data structure can be simplified from three levels to two. This algorithm seems to be simpler than the static tree algorithm of \cite{HT}, which has the same asymptotic efficiency but uses three levels (called ``plies''). Take $\mu_2= \c{{ \ifmmode \,{ \rm log}\,\else{\it log }\fi n \over 2 }}$. Construct the 1- and 2-trees recursively as follows: If the tree $S$ has less than $\mu_2$ nodes then make it a 2-tree. Otherwise let $S_0$ be a subtree containing $\mathy{\rho}(S)$ and having $\mu_2$ nodes; make $S_0$ a 2-tree and a 1-node, and process the trees of forest $S-S_0$ recursively. The unique tree on level one has $O(n/\ifmmode \,{ \rm log}\,\else{\it log }\fi n)$ nodes. Use the incremental-tree algorithm of Section \ref{3.1Sec} on it. Lemma \ref{3.3Lemma} shows the preprocessing time and space on level one is $O(n)$. Level two uses a different microset data structure, related to the Euler tour technique of \cite{TV}.
Represent a tree $S$ of $b$ nodes by a string $\beta$ of balanced parentheses of length $2b-2$. $\beta$ represents a depth-first traversal of $S$ -- ``('' corresponds to when the search descends along an edge for the first time, ``)'' corresponds to when the search ascends along an edge, having explored a subtree. Each node $x$ of $S$ has a canonical representation as the string $\beta_x$, the shortest prefix of $\beta$ that leads to it. Observe that $nca(x,y)$ can be found as follows: Without loss of generality assume that $x\ne y$ and $\beta_y$ is longer than $\beta_x$. Let $\gamma$ be the suffix of $\beta_y$ following $\beta_x$. Let $\delta$ be the shortest prefix of $\gamma$ ending with a ``('' that does not have a matching ``)'' in the entire string $\gamma$. ($\gamma$ may contain some unmatched right parentheses. However $\delta$ is still well-defined.) It is easy to see that $\delta$ exists, and $nca(x,y)$ corresponds to the string $\beta_x \delta'$, where $\delta'$ is $\delta$ with its ending left parenthesis removed. For a tree $S$ with at most $\mu_2$ nodes, $\beta$ fits into one word. It is not hard to design a set of tables that can be precomputed in $O(n)$ time, such that for any tree $S$, $ca_S(x,y)$ can be found in $O(1)$ time from $\beta_x$ and $\beta_y$. This gives a two level, static tree algorithm that uses $O(n)$ preprocessing time, $O(n)$ space, and performs a $ca$ query in $O(1)$ time. \fi \subsection{Link operations} \label{3.4Sec} This section extends the multi-level data structure to solve our most general dynamic nca problem. The algorithm processes $m$ $nca$ and {\it link} operations on a set of $n$ nodes in time $O(m\alpha(m,n)+n)$ and space $O(n)$. Define Ackermann's function $A(i,j)$ for $i,j\ge 1$ by \begin{eqnarray*} A(1,j)&= &2^j, \hbox{\ for\ } j \ge 1;\\ A(i,1)&=&2, \hbox{\ for\ } i \ge 2;\\ A(i,j) &= &A (i-1, A(i,j-1) ), \hbox{\ for\ } i,j \ge 2.
\end{eqnarray*} \noindent Define two inverse functions, \begin{eqnarray*} a(i,n)&=&\min\set {j}{A(i,j)\ge n};\\ \alpha(m,n)&=&\min\set{i}{A(i,4\c{m/n})\ge n}, \hbox{\ for\ } m,n \ge 1. \end{eqnarray*} These definitions differ slightly from those of [T83] but this does not change asymptotic estimates. The most significant difference is that our function $A(i,1)$ is constant compared to a rapidly growing function in [T83]. This makes for a more convenient treatment of the base case in our algorithms. We use some simple properties of Ackermann's function including these inequalities: \begin{eqnarray} \label{4Eqn} A(i,j+1) &\ge &2A(i,j), \hbox{\ for\ } i,j \ge 1;\\ A(i+1,j) &\ge &A(i,2j), \hbox{\ for\ } i \ge1,j \ge 4. \end{eqnarray} \noindent We use the linear-time incremental tree data structure of Theorem \ref{3.1Thm}. Call a tree that is represented by this data structure an {\it incremental tree}. The approach is similar to that of [G85b] for a list splitting problem. We construct a multi-level algorithm recursively. For $\ell\ge 1$ the algorithm with $\ell$ levels is denoted ${\cal A}_\ell$. It calls ${\cal A}_{\ell-1}$ if $\ell> 1$. Algorithm ${\cal A}_\ell$ runs in time $O(m\ell+na(\ell,n))$. Assume that any value $A(i,j)$ that is at most $n$ can be found in $O(1)$ time. This assumption is justified later. Algorithm ${\cal A}_\ell$ is said to work on level $\ell$. The terms {\it node} and {\it link tree} refer to the objects manipulated by ${\cal A}_\ell$, i.e., an instruction {\it link}$(x,y)$ operates on nodes $x,y$ to produce a new link tree. Level $\ell$ has the following structure. There are $a(\ell,n)$ {\it universes} $u$, $u=0,\ldots, a(\ell,n)-1$. Each link tree $T$ is in some universe $u$. If $|V(T)|<4$ then $u=0$. If $|V(T)|\ge 4$ then $T$ is in a universe $u\ge 1$, chosen so that $|V(T)|\in [2A(\ell,u)..2A(\ell,u+1))$.
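The definitions of $A$, $a$ and $\alpha$ above transcribe directly into code. The sketch below (function names ours) recomputes values recursively rather than using the $O(1)$-time table lookup the algorithm assumes, and $A(i,j)$ explodes rapidly, so it is only usable for tiny arguments:

```python
def A(i, j):
    """Ackermann's function as defined above: A(1,j)=2^j, A(i,1)=2,
    A(i,j)=A(i-1, A(i,j-1)). Grows explosively; keep i, j tiny."""
    if i == 1:
        return 2 ** j
    if j == 1:
        return 2
    return A(i - 1, A(i, j - 1))

def a_inv(i, n):
    """a(i,n) = min{ j : A(i,j) >= n }."""
    j = 1
    while A(i, j) < n:
        j += 1
    return j

def alpha(m, n):
    """alpha(m,n) = min{ i : A(i, 4*ceil(m/n)) >= n }."""
    q = 4 * ((m + n - 1) // n)  # 4 * ceil(m/n)
    i = 1
    while A(i, q) < n:
        i += 1
    return i
```

For example $A(2,3)=16$, $A(3,3)=65536$, and $a(2,100)=4$ since $A(2,3)=16<100\le A(2,4)=65536$.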
(This is possible since $A(\ell, 1)=2$.) An {\it $\ell$-tree \(in universe $u$\)} is a subtree that has at least $2A(\ell,u)$ nodes. It is represented as an incremental tree. The nodes of $T$ are partitioned into $\ell$-trees. If $T$ contains more than one $\ell$-tree then $T$, with each $\ell$-tree contracted, is represented using the data structure for algorithm ${\cal A}_{\ell-1}$. If $\ell=1$ observe that $T$ has only one $\ell$-tree, since $T$ has less than $2A(1,u+1)=2^{u+2}$ nodes and an $\ell$-tree has at least $2A(1,u)=2^{u+1}$ nodes. Algorithm ${\cal A}_\ell$ uses the following data structure for level $\ell$. Each link tree and $\ell$-tree is represented by its root. A link tree $T$ is stored using parent pointers and children lists. If $r$ is the root of $T$ then $s(r)$ equals the size of $T$. For any node $x$, $u(x)$ equals the universe that contains $x$'s link tree; if $u(x)>0$ then $\widehat x$ designates the $\ell$-tree containing $x$. We implement the operation $link(x,y)$ by a recursive algorithm $l(r,x,y)$, where $r$ is the root of the link tree containing $x$. (Recall that $y$ is the root of its link tree.) To make the initial call to $l$ we must find the root $r$. This is done by a straightforward recursive algorithm: Suppose $u(x)>0$. First set $R$ to the $\ell$-tree containing the root. To do this set $R=\wx.$; if this is not the desired $\ell$-tree then recursively compute $R$ as the root of the tree containing \wx. on level $\ell-1$. Then return the root of the $\ell$-tree $R$. The algorithm for $l(r,x,y)$ is as follows. Let $T_r$ and $T_y$ denote the link trees with root $r$ and $y$ respectively, before the link operation. Set the parent of $y$ to $x$ and add $y$ to the child list of $x$. Increase $s(r)$ by $s(y)$. Let $u=\max\{u(x), u(y)\}$. Execute the first one of the following cases that applies and then return. 
\bigskip \case {$s(r)\ge 2A(\ell,u+1)$} Make the new link tree $T$ into an $\ell$-tree in universe $u+1$: Traverse $T$ top-down; when visiting a node $v$ do an \al. operation to add $v$ to the new incremental tree. Discard the data structures for $T_r$ and $T_y$. \case {$u(x)>u(y)$} Traverse $T_y$ top-down, doing \al. operations to add each node to the incremental tree $\widehat x$. Discard the data structure for $T_y$. \case {$u(x)<u(y)$} Traverse the path from $x$ to $r$, doing $add\_root$ operations to add each node to the incremental tree $\widehat y$. Then traverse $T_r$ top-down, doing \al. operations to add the other nodes to $\widehat y$. Discard the data structure for $T_r$ and set $u(r)$ to $u$. \case {$u(x)=u(y)$} If $u>0$ then do $l(\widehat r, \widehat x, \widehat y)$ in the data structure for ${\cal A}_{\ell-1}$. } \bigskip This algorithm is correct because it preserves the invariants of the data structure. (We assume the bookkeeping fields $u(v)$ and \wv. are updated when node $v$ is added to a new incremental tree.) In particular note these points. In the first case the new link tree belongs in universe $u+1$ because $s(r)< 4A(\ell,u+1)\le 2A(\ell,u+2)$, by (4). In the second case the incremental tree $\widehat x$ exists, since $x$ is in a positive universe. Similarly in the third case $\widehat y$ exists. The fourth case always has $\ell>1$. (This guarantees that the algorithm ${\cal A}_{\ell-1}$ that is called actually exists.) This is because if $\ell=1$ and $u(r)=u(y)$ then the first case applies, since $s(r)\ge 2^{u+2}=2A(\ell,u+1)$. The algorithm for $ca(x,y)$ is trivial in universe zero. In positive universes it is the multi-level algorithm $c(x,y,\ell)$ given in Section 3.2. Each $\ell$-tree is, in the terminology of that section, an $(\ell - 1)$-node. Hence the first case of the $c$ algorithm is used. 
It executes $ c(\wx., \wy.,\ell-1)$ and then finds the desired characteristic ancestors by executing the incremental tree $ca$ algorithm on the appropriate $\ell$-tree. The details are in Section 3.2. \begin{lemma} \label{3.6Lemma} Algorithm ${\cal A}_\ell$ executes a sequence of $m$ $ca$ and {\it link} operations on a set of $n$ nodes in $O(m\ell+na(\ell,n))$ time and $O(n)$ space. \end{lemma} \begin{proof} First consider the time. A $ca$ query uses $O(\ell)$ time in a positive universe, since $O(1)$ time is spent on each of $\ell$ levels of recursion. The time is $O(1)$ in universe zero. Estimate the time for {\it links} as follows. Charge each {\it link} operation $O(\ell)$ time to account for the initial computation of root $r$ and the $\ell$ levels of recursion and associated processing in routine $l$. Now observe that the rest of the time for $l$ is proportional to the number of \al. and $add\_root$ operations (including those in recursive calls). This relies on Theorem \ref{3.1Thm}, which shows that each such operation uses (amortized) time $O(1)$. Thus it suffices to show that the total number of \al. and $add\_root$ operations is $O(na(\ell,n))$. In fact we show by induction on $\ell$ that there are at most $ 2 na(\ell,n)$ such operations. Consider first all \al. and $add\_root$ operations except those done in recursive calls. Observe that each such operation is done for a node previously in a lower universe. Thus at most one operation is done for each node in each universe. This gives at most $na(\ell,n)$ operations total. (In particular this establishes the base case of the induction, $\ell=1$.) To bound the operations in recursive calls to ${\cal A}_{\ell-1}$, consider any universe $u>0$. 
By induction the number of operations associated with each $\ell$-tree in universe $u$ is at most twice the quantity $$a(\ell-1,2A(\ell,u+1)/2A(\ell,u) )\le a(\ell-1,A(\ell,u+1))= a(\ell-1, A(\ell-1,A(\ell,u)))=A(\ell,u).$$ There are at most $n/2A(\ell,u)$ $\ell$-trees in universe $u$. Thus the total number of operations in recursive calls associated with universe $u$ is at most $n$. This gives at most $na(\ell,n)$ operations total. Adding together the two cases shows there are at most $2na(\ell,n)$ \al. and $add\_root$ operations as desired. Next consider the space. At any time, the space is proportional to the number of nodes on all levels. We show by induction on $\ell$ that this number is at most $2n$. There are $n$ nodes on level $\ell$. This establishes the base case $\ell=1$, since there are no lower levels. Suppose $\ell>1$. For $u\ge 1$, over the entire algorithm there are at most $ n/2A(\ell,u)$ distinct $\ell$-trees in universe $u$. By induction the total number of nodes associated with universe $u$, on levels $\ell-1$ and lower, is at most $n/A(\ell,u)$. Since $A(\ell,u)\ge 2^u$, the total number of nodes is at most $n+(n/2+n/4+\ldots\ )\le 2n$. This proves the space bound. \end{proof} The remaining issue is how to choose the number of levels $\ell$. Suppose first that $m$ and $n$ are known when the algorithm begins. Take $\ell= \alpha(m,n)$. Observe that $a(\alpha(m,n),n)\le 4\c{m/n}$ since $A(\alpha(m,n),4\c{m/n})\ge n$. Thus the total time for algorithm ${\cal A}_\ell$ is $ O(m\alpha(m,n)+n)$, the desired bound. We must justify the assumption that any value $A(i,j)$ that is at most $n$ can be found in $O(1)$ time. As part of the initialization the algorithm computes a table $ackermann[i,j]$ for $i, j\le \ifmmode \,{ \rm log}\,\else{\it log }\fi n$; if $A(i,j)\le n$ then $ackermann[i,j]= A(i,j)$, else $ackermann[i,j]= \epsilon$. Thus all desired values of Ackermann's function can be found by table look-up.
The desired value of $\ell=\alpha(m,n)$ can also be found. The time to initialize the table and find $\ell$ is clearly $O(\ifmmode \,{ \rm log}\,\else{\it log }\fi^2 n)$. Now we show that the same time bound can be achieved when $m$ and $n$ are not known in advance. In this setting we allow the operation {\it make\_node}$(x)$ which creates a new node $x$ in a singleton tree. At any time $n$ denotes the total number of {\it make\_node} operations and $m$ denotes the total number of $ca$ and {\it link} operations. The procedure works by using algorithm ${\cal A}_\ell$ where $\ell$ is repeatedly modified. The sequence of operations is divided into {\it periods}. The value of $n$ and $m$ at the beginning of the $i$th period is denoted $n_i$ and $m_i$ respectively. Initialize $i=n_0=m_0=0$ and $\ell=1$. After processing an operation, if $m\ge 1$ and either $n\ge 2n_i$ or $m\ge \max\{2m_i,n\}$ then end the current period $i$ as follows. Set $i\gets i+1$, $n_i\gets n,\ m_i\gets m$. If $\ell \ne \alpha(m,n)$ then assign $\ell \gets \alpha(m,n)$ and reorganize the entire data structure to use algorithm ${\cal A}_\ell$. Do this by making each link tree $T$ an incremental tree and placing it in the correct universe. Note that the reorganization procedure is correct -- $T$ does not have a data structure on level $\ell-1$. Thus the time to reorganize $n$ nodes is $O(n)$. (This includes the time to compute a new table $ackermann[i,j]$, to find $\ell$ and find the universe for each incremental tree. The table stores all values $A(i,j)\le 2n$.) The time to end a period that does not reorganize the data structure is $O(1)$. Now we analyze the total time for the algorithm. Period zero ends after the first {\it ca} or {\it link} operation. Clearly it uses $O(n)$ time, so we restrict attention to periods $i\ge 1$. A reorganization decreases $\ell$ by at most one. To show this consider a period $i\ge 1$ with parameters $n_i$, $m_i$ and $\ell=\alpha(m_i,n_i)$.
If $\ell$ decreases then $\c { m_{i+1}\over n_{i+1} }> \c { m_i\over n_i }$. Thus $m_{i+1}> n_{i+1}$ and inspecting the algorithm shows $m_{i+1}\le 2m_i$. {\def\c#1{\Big\lceil {#1}\Big\rceil} Using inequalities (4) shows that $$ A(\ell-2,4 \c{ {m_{i+1} \over n_{i+1}} } ) \le A(\ell-2,4 \c{ {2m_i \over n_i} } ) \le A(\ell-2,8\c{ m_i \over n_i })\le A(\ell-1,4\c{ m_i \over n_i })<n_i.$$ Thus $\ell$ decreases by at most one. A similar calculation proves that $\ell$ increases by at most two, but we do not need this fact. } \begin{theorem} \label{3.2Thm} A sequence of $m$ $nca$ and $link$ operations on a universe of $n$ nodes can be processed in time $O(m\alpha(m,n)+n)$ and space $O(n)$. \end{theorem} \begin{proof} We need only prove the time bound when $m$ and $n$ are not known in advance. For convenience assume that the last operation of the algorithm is followed by a reorganization. This can be ensured by extending the input sequence with a sufficient number of {\it make\_node} operations; this at most doubles $n$ and so does not change the asymptotic time bound. We account for the time by charging each operation a time-varying amount. Define a charge of one unit to be large enough to account for any constant amount of computing. The following invariants are maintained. Each {\it make\_node} operation is charged at most one unit if it is in the current period else at most two units. Each {\it link} or {\it ca} operation is charged at most $\ell$ units if it is in the current period else at most $\ell+3$ units, where $\ell$ denotes the current value of that parameter. After each reorganization the total time (for the entire algorithm) is at most the number of units charged. Clearly these invariants imply the total time is $O(m\ell+n)$. Since the last reorganization ensures $\ell=\alpha(m,n)$ this equals the desired bound. Charge the operations as follows. When a {\it make\_node} operation is executed charge it one unit; this accounts for the $O(1)$ time it uses. 
When a {\it ca} or {\it link} operation is executed charge it $\ell $ units. This accounts for all the time used by {\it cas} and part of the time used by {\it links}. The remaining time for {\it links} is associated with \al. and $add\_root$ operations, as indicated in the above timing analysis. It is charged in the next reorganization. Consider the end of period $i$. If there is no reorganization then no new charges are made. If the data structure is reorganized then we make charges for $O(m_{i} + n_{i})$ units of computing. Let us show that in accordance with the invariants, after a reorganization the total time has been accounted for. The new charges must account for the remaining time for {\it links} and also the time for reorganization. The former is certainly bounded by $O(n_{i+1} a(\ell,n_{i+1}) )$ where $\ell=\alpha(m_{i},n_{i})$. This expression also bounds the time for reorganization, $O(n_{i+1})$. To show this expression is the desired quantity $O(m_{i} + n_{i})$ it suffices to show that $a(\ell, n_{i+1})\le 4 \c{ {m_{i} \over n_{i}} } +1$, since $n_{i+1}\le 2n_{i}$. The latter follows since $A(\ell,4\c{ {m_{i} \over n_{i}} } +1) \ge 2A(\ell,4\c{ {m_{i} \over n_{i}} } )\ge 2n_{i}\ge n_{i+1}$. To make the charges when the data structure is reorganized, consider two cases. Suppose first that $n_{i+1}= 2n_{i}$. Observe that the value of $\ell$ increases in the reorganization. For this it suffices to show $ \c { {m_ {i+1} \over n_ {i+1} } } \le \c{{m_{i} \over n_{i} }} $, since this inequality implies that $\ell$ does not decrease. If $m_ {i+1} \le n_ {i+1}$ then $\c{m_ {i+1} \over n_ {i+1} }=1$, which gives the inequality. The other possibility is that $m_ {i+1} \le 2m_ {i}$. This implies $ {m_ {i+1} \over n_ {i+1} } \le {m_{i} \over n_{i}} $ so again the inequality holds. Make the timing charge $O( m_{i} + n_{i})$ as follows. For the charge of $O(n_{i})$, charge one unit to each of the $n_{i}$ new {\it make\_node} operations in period $i$.
Each such operation is now charged two, preserving the invariant. For the charge of $O(m_{i})$, charge one more unit to each {\it link} or {\it ca} operation in all periods up to $i$. This preserves the invariant since $\ell$ increases. Now suppose $m_{i+1}\ge \max\{2m_{i},n_{i+1} \}$. The number of new operations, $m_{i+1}-m_{i}$, satisfies $m_{i+1}-m_{i} \ge m_{i}$ and $m_{i+1}-m_{i}\ge m_{i+1}/2\ge n_{i+1}/2\ge n_{i}/2$. Thus the timing charge $O( m_{i} + n_{i})$ can be made by charging one more unit to each {\it ca} or {\it link} operation in period $i$. In addition we must preserve the invariant for {\it link} and {\it ca} operations. Suppose the value of $\ell$ decreases in the reorganization. As indicated above, it decreases to $\ell-1$. Charge one less unit to each of the $m_{i}$ {\it link} or $ca$ operations before period $i$, and one more unit to each such operation in period $i$. This does not decrease the total charge since as already noted $m_{i+1}-m_{i} \ge m_{i}$. It preserves the invariant, since the operations before period $i$ are now charged at most $\ell+2$, the operations in period $i$ are charged exactly $\ell +2$, and $\ell +2=(\ell-1)+3$. If the value of $\ell$ increases in the reorganization, the invariant holds {\it a fortiori}. \end{proof} The multi-level method used for Theorem \ref{3.2Thm} can be applied to achieve the same time and space bounds for several other problems. As mentioned above, [G85b] applies it to solve the list splitting problem that arises in {\it expand} steps of Edmonds' algorithm. The technique was recently rediscovered by Han La Poutr\'e: [LaP] presents a multi-level algorithm for the set merging problem (this application is noted in [G85b, p.\ 99]) and a result similar to Theorem 3.2 was independently arrived at [J.A. La Poutr\'e, personal communication]. 
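To make the recursive definitions concrete, the following Python sketch (ours, not part of the paper) evaluates the function $A(i,j)$ and the inverses $a(i,n)$ and $\alpha(m,n)$ defined at the start of this section, together with the universe assignment for a link tree. It uses the recurrence implied by the identities in the text ($A(1,j)=2^j$, $A(i,1)=2$, $A(i,j)=A(i-1,A(i,j-1))$), and, mirroring the table $ackermann[i,j]$ that stores only values at most $n$, every value is clamped to ${\rm cap}+1$ as soon as it exceeds a cap, so the astronomically large intermediate values are never built.

```python
def A(i, j, cap):
    # A(i,j), clamped to cap+1 as soon as it exceeds cap.
    # A(1,j) = 2^j; A(i,1) = 2; A(i,j) = A(i-1, A(i,j-1)).
    if i == 1:
        v = 2 ** min(j, cap.bit_length() + 1)   # avoid huge exponents
        return v if v <= cap else cap + 1
    v = 2                        # A(i,1) = 2
    for _ in range(j - 1):       # A(i,k+1) = A(i-1, A(i,k))
        v = A(i - 1, v, cap)
        if v > cap:
            return cap + 1       # A is monotone, so later values exceed cap too
    return v

def a_inv(i, n):
    # a(i,n) = min{ j : A(i,j) >= n }; clamping at cap = n keeps comparisons valid
    j = 1
    while A(i, j, n) < n:
        j += 1
    return j

def alpha(m, n):
    # alpha(m,n) = min{ i : A(i, 4*ceil(m/n)) >= n }, for m, n >= 1
    q = 4 * -(-m // n)           # 4 * ceil(m/n)
    i = 1
    while A(i, q, n) < n:
        i += 1
    return i

def universe(size, ell, n):
    # Universe of a link tree on level ell: u = 0 if size < 4, otherwise
    # the u >= 1 with 2*A(ell,u) <= size < 2*A(ell,u+1).
    if size < 4:
        return 0
    u = 1
    while 2 * A(ell, u + 1, n) <= size:
        u += 1
    return u
```

For example, $a(1,n)$ comes out as $\lceil \log_2 n\rceil$ since $A(1,j)=2^j$, and for $m=n$ the value $\alpha(n,n)$ is already 2 for moderate $n$ because $A(2,4)=65536$.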
Other applications include the static cocycle problem introduced in [GS], both for graphic matroids and the job scheduling matroid; the former is useful for various problems involving spanning trees. We will discuss these and other applications in a forthcoming paper. \fi
\section{Introduction} Coronal mass ejections (CMEs) are large-scale, spontaneous ejections of plasma and magnetic flux from the lower solar corona into interplanetary space and are major drivers of space weather near earth \citep[e.g.][]{hundhausen1993, lindseyetal1999, webbetal2000}. CMEs and eruptive flares are believed to result from a sudden, explosive release of the free magnetic energy stored in the previously quasi-equilibrium, twisted/sheared coronal magnetic field \citep[see e.g. reviews by][]{forbesetal_2006,chen2011}. Using idealized constructions, both analytical studies and numerical simulations have been carried out to understand the basic underlying magnetic field structures of the eruption precursors, and the physical mechanisms of their sudden eruption \citep[e.g.][]{mikic_linker1994,antiochosetal1999,forbes_priest1995, linetal1998,amarietal2000,sturrock2001,roussevetal2003, toeroek_kliem2005,toeroek_kliem2007,fan_gibson2007,isen_forbes2007,fan2010, aulanieretal2010,demoulin_aulanier2010}. Magneto-hydrodynamic (MHD) models of observed CME events have also been constructed to determine the actual magnetic field evolution and causes for the eruption and the properties of the magnetic ejecta, which are critical for determining the geo-effectiveness of the resulting interplanetary coronal mass ejections (ICMEs) \citep[e.g.][]{mikicetal2008, titovetal2008,kataokaetal2009}. The eruptive event in active region 10930 on December 13, 2006 produced an X3.4 flare and a fast, earth-directed CME with an estimated speed of at least 1774 km/s. The ICME reached the Earth on 14-15 December 2006, with a strong and prolonged southward directed magnetic field in the magnetic cloud, causing a major geomagnetic storm \citep[e.g.][]{liuetal2008,kataokaetal2009}. This event is particularly well observed by Hinode for both the coronal evolution as well as the photospheric magnetic field evolution over a period of several days preceding, during, and after the eruption. 
The photospheric magnetic field evolution of AR 10930 was characterized by an emerging $\delta$-sunspot with a growing positive polarity, which displayed substantial (counter-clockwise) rotation and eastward motion as it grew (see e.g. the movie provided at the NAOJ website http://solar-b.nao.ac.jp/news/070321Flare/me\_20061208\_15arrow\_6fps.mpg and see also \citet{min_chae2009}). This is indicative of the emergence of a twisted magnetic flux rope with the positive rotating spot being one of its photospheric footpoints. The total rotation of the positive, growing sunspot prior to the onset of the flare is measured to be $240^{\circ}$ by \citet{zhangetal2007} and $540^{\circ}$ by \citet{min_chae2009}, which gives an estimate of the minimum amount of twist that has been transported into the corona in the emerged flux rope. Several studies based on non-linear force-free field extrapolations from the photospheric vector magnetic field measurement for AR 10930 have been carried out to study the coronal magnetic field and the associated free magnetic energy before and after the flare \citep[e.g.][]{schrijveretal2008, inoueetal2008}. In this paper, we present an MHD simulation that models the coronal magnetic field evolution associated with the onset of the eruptive flare in AR 10930 on December 13, 2006. The simulation assumes the emergence of an east-west oriented magnetic flux rope into a pre-existing coronal magnetic field constructed based on the SOHO/MDI full-disk magnetogram of the photospheric magnetic field at 20:51:01 UT on December 12. Our simulated coronal magnetic field first achieves a quasi-equilibrium phase during which the coronal flux rope rises quasi-statically as more twisted flux is being transported into the corona through a slow flux emergence. The evolution is then followed by a dynamic eruption, where the erupting flux rope accelerates to a final steady speed of about 830 km/s.
The erupting flux rope is found to undergo substantial writhing or rotational motion, and the erupting trajectory is non-radial, being deflected southward and eastward from the local radial direction of the source region. The coronal magnetic field structure just prior to the onset of the eruption reproduces qualitatively the observed morphology and connectivity of the coronal magnetic field, including the formation of an inverse-S shaped pre-eruption sigmoid, as seen in the Hinode XRT images. After the onset of the eruption, the evolution of the post-reconnection loops and their foot-points resulting from the simulated magnetic field is also in qualitative agreement with the morphology of the observed X-ray post-flare brightening and the evolution of the chromosphere flare ribbons. We organize the remainder of the paper as follows. In Section \ref{sec:model}, we describe the MHD numerical model and how the simulation is set up. In Section \ref{sec:result} we describe the resulting evolution of the simulated coronal magnetic field and compare with observations. We summarize the conclusions and discuss future directions for improving the model in Section \ref{sec:conc}. 
\section{Model Description\label{sec:model}} For the simulation carried out in this study, we solve the following magneto-hydrodynamic equations in a spherical domain: \begin{equation} \frac{\partial \rho}{\partial t} + \nabla \cdot ( \rho {\bf v}) = 0 , \label{eqcont} \end{equation} \begin{equation} \rho \left ( \frac{\partial {\bf v}}{\partial t} + ({\bf v} \cdot \nabla ) {\bf v} \right ) = - \nabla p - \rho \frac{G M_{\odot}}{r^2} \hat{\bf r}+ \frac{1}{4 \pi} ( \nabla \times {\bf B} ) \times {\bf B}, \label{eqmotion} \end{equation} \begin{equation} \frac{\partial {\bf B}}{\partial t} = \nabla \times ({\bf v} \times {\bf B}), \label{eqinduc} \end{equation} \begin{equation} \nabla \cdot {\bf B} = 0, \label{eqdivb} \end{equation} \begin{equation} \frac{\partial e}{\partial t} = - \nabla \cdot \left [ \left ( \varepsilon + \rho \frac{v^2}{2} + p \right ) {\bf v} - \frac{1}{4 \pi} ({\bf v} \times {\bf B} ) \times {\bf B} \right ] - \rho {\bf v} \cdot \frac{G M_{\odot}}{r^2} \hat{\bf r} , \label{eqetot} \end{equation} \begin{equation} p = \frac{\rho R T}{\mu}, \label{eqstate} \end{equation} where \begin{equation} \varepsilon = {p \over {\gamma - 1} } , \end{equation} and \begin{equation} e=\varepsilon + \rho \frac{v^2}{2} + \frac{B^2}{8 \pi} . \end{equation} In the above ${\bf v}$, ${\bf B}$, $\rho$, $p$, $T$, $\varepsilon$, $e$, $R$, $\mu$, $\gamma$, $G$, and $M_{\odot}$ denote respectively the velocity field, the magnetic field, density, pressure, temperature, the internal energy density, the total energy density (internal+kinetic+magnetic), the gas constant, the mean molecular weight, the ratio of specific heats, the gravitational constant, and the mass of the Sun. We have assumed an ideal polytropic gas with $\gamma = 1.1$ for the coronal plasma. The above MHD equations are solved numerically without any {\it explicit} viscosity, magnetic diffusion, and non-adiabatic effects.
However, numerical dissipations are present, and since we are solving the total energy equation in conservative form, the numerical dissipation of kinetic and magnetic energy is effectively being put back into the internal energy. The basic numerical schemes we use to solve the above MHD equations are as follows. The equations are discretized in a spherical domain with $r$, $\theta$, $\phi$ coordinates using a staggered finite-difference scheme \citep{stone_norman1992a}, and advanced in time with an explicit, second order accurate, two-step predictor-corrector time stepping. A modified, second order accurate Lax-Friedrichs scheme similar to that described in \citet[][see eq. (A3) in that paper]{rempeletal2008} is applied for evaluating the fluxes in the continuity and energy equations. Compared to the standard second order Lax-Friedrichs scheme, this scheme significantly reduces numerical diffusivity for regions of smooth variation, while retaining the same robustness in regions of shocks. The standard second order Lax-Friedrichs scheme is used for evaluating the fluxes in the momentum equation. A method of characteristics that is upwind in the Alfv\'en waves \citep{stone_norman1992b} is used for evaluating the ${\bf v} \times {\bf B}$ term in the induction equation, and the constrained transport scheme is used to ensure $\nabla \cdot {\bf B} = 0$ to the machine precision. The simulation is set up such that we drive the emergence of a part of a twisted magnetic torus at the lower boundary into a pre-existing coronal potential field, constructed based on the MDI full-disk magnetogram from 20:51:01 UT on December 12, 2006 (Figure \ref{fig1}a). First, from the full-disk MDI magnetogram, a region centered on the $\delta$-spot (the white box in Figure \ref{fig1}a), with a latitudinal extent of $30^{\circ}$ and a longitudinal extent of $45^{\circ}$, is extracted as the lower boundary of the spherical simulation domain.
In terms of the simulation coordinates, the domain spans $r \in [R_{\odot}, 6.25 R_{\odot}]$, $\theta \in [75^{\circ}, 105^{\circ}]$, $\phi \in [-22.5^{\circ},22.5^{\circ}]$, with the center of its lower boundary, $\theta= 90^{\circ}$ and $\phi=0^{\circ}$, corresponding to the center of the white-boxed area in Figure \ref{fig1}a. This domain is resolved by a grid of $512 \times 352 \times 528$, with 512 grid points in $r$, 352 grid points in $\theta$, and 528 grid points in $\phi$. The grid is uniform in the $\theta$ and $\phi$ directions but non-uniform in $r$, with a uniform grid spacing of $dr = 1.028$ Mm in the range of $r= R_{\odot}$ to about $1.6 R_{\odot}$ and a geometrically increasing grid spacing above $1.6 R_{\odot}$, reaching about $dr = 173.4$ Mm at the outer boundary. We assume perfectly conducting walls for the side boundaries, and for the outer boundary we use a simple outward extrapolating boundary condition that allows plasma and magnetic field to flow through. The lower boundary region extracted from the MDI full disk magnetogram (as viewed straight-on) is shown in Figure \ref{fig1}b, where we simply take the interpolated line-of-sight flux density from the full-disk magnetogram and assume that the magnetic field is normal to the surface to obtain the $B_r$ shown in the Figure. The region contains roughly all the flux of the $\delta$-spot and the surrounding pores and plages, to which some of the flux of the $\delta$-spot is connected. The peak field strength in the region is about $3000$ G. A smoothing using a Gaussian filter is carried out on the lower boundary region until the peak field strength is reduced to about $200$ G.
This is necessary since the simulation domain corresponds to the corona, with the lower boundary density assumed to be that of the base of the corona, and thus a significant reduction of the field strength from that measured on the photosphere is needed to avoid unreasonably high Alfv\'en speeds, which would put too severe a limit on the time step of numerical integration. After the smoothing, the magnetic flux in a central area, which roughly encompasses the region of the observed flux emergence (including the rotating, positive sunspot), is zeroed out (see Figure \ref{fig1}c) to be the area where the emergence of an idealized, twisted magnetic torus is driven on the lower boundary. The potential field constructed from this lower boundary normal flux distribution in Figure \ref{fig1}c is assumed to be the initial coronal magnetic field for our simulation, which is shown in Figure \ref{fig2}. We zero out the normal flux in the area for driving the flux emergence so that we can specify analytically the subsurface emergence structure in a field-free region without the complication of the subsurface extension of a pre-existing flux in the same area. The initial atmosphere in the domain is assumed to be a static polytropic gas: \begin{equation} \rho = \rho_0 \left [ 1 - \left ( 1- \frac{1}{\gamma} \right ) \frac{GM_{\odot}}{R_{\odot}} \frac{\rho_0}{p_0} \left ( 1 - \frac{R_{\odot}}{r} \right ) \right ]^{\frac{1}{\gamma-1}} \end{equation} \begin{equation} p = p_0 \left [ 1 - \left ( 1- \frac{1}{\gamma} \right ) \frac{GM_{\odot}}{R_{\odot}} \frac{\rho_0}{p_0} \left ( 1 - \frac{R_{\odot}}{r} \right ) \right ]^{\frac{\gamma}{\gamma-1}} \end{equation} where $\rho_0 = 8.365 \times 10^{-16} $ g ${\rm cm}^{-3}$, and $p_0 = 0.152$ dyne ${\rm cm}^{-2}$ are respectively the density and pressure at the lower boundary of the coronal domain, and the corresponding assumed temperature at the lower boundary is 1.1 MK.
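As a quick numerical cross-check (our own sketch, not code from the paper), the quoted boundary values $\rho_0$ and $p_0$ reproduce the boundary sound speed of 141 km/s quoted in the text, and with an assumed mean molecular weight $\mu \approx 0.5$ (our assumption; the paper does not state $\mu$) the ideal-gas law recovers the stated 1.1 MK boundary temperature:

```python
import math

# Boundary values quoted in the text (cgs units)
rho0 = 8.365e-16      # density, g cm^-3
p0 = 0.152            # pressure, dyne cm^-2
gamma = 1.1           # ratio of specific heats used in the simulation
R_gas = 8.314e7       # gas constant, erg mol^-1 K^-1
mu = 0.5              # mean molecular weight (our assumption, fully ionized H)

# Adiabatic sound speed c_s = sqrt(gamma * p / rho)
cs_kms = math.sqrt(gamma * p0 / rho0) / 1e5

# Ideal gas law p = rho * R * T / mu  =>  T = p * mu / (rho * R)
T_MK = p0 * mu / (rho0 * R_gas) / 1e6

print(f"sound speed ~ {cs_kms:.0f} km/s")  # ~141 km/s, as stated in the text
print(f"temperature ~ {T_MK:.1f} MK")      # ~1.1 MK, as stated in the text
```

This consistency between $\rho_0$, $p_0$, the 141 km/s sound speed, and the 1.1 MK temperature suggests the boundary state is a self-consistent ideal gas with $\mu$ close to 0.5.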
The initial magnetic field in the domain is potential, and thus does not exert any forcing on the atmosphere which is in hydrostatic equilibrium. Figure \ref{fig3} shows the height profiles of the Alfv\'en speed and the sound speed along a vertical line rooted in the peak $B_r$ of the main pre-existing negative polarity spot. For the initial state constructed, the peak Alfv\'en speed is about 24 Mm/s, and the sound speed is 141 km/s at the bottom and gradually declines with height. In most of the simulation domain, the Alfv\'en speed is significantly greater than the sound speed. At the lower boundary (at $r=R_{\odot}$), we impose (kinematically) the emergence of a twisted torus ${\bf B}_{\rm tube}$ by specifying a time dependent transverse electric field ${\bf E}_{\perp}|_{r=R_{\odot}}$ that corresponds to the upward advection of the torus with a velocity ${\bf v}_{\rm rise}$: \begin{equation} {\bf E}_{\perp}|_{r=R_{\odot}} = {\hat{\bf r}} \times \left [ \left ( - \frac{1}{c} \, {\bf v}_{\rm rise} \times {\bf B}_{\rm tube} (R_{\odot}, \theta, \phi, t) \right ) \times {\hat{\bf r}} \right ]. \label{eq_emf} \end{equation} The magnetic field ${\bf B}_{\rm tube}$ used for specifying ${\bf E}_{\perp}|_{r=R_{\odot}}$ is an axisymmetric torus defined in its own local spherical polar coordinate system ($r'$, $\theta'$, $\phi'$) whose polar axis is the symmetric axis of the torus. In the sun-centered simulation spherical coordinate system, the origin of the ($r'$, $\theta'$, $\phi'$) system is located at ${\bf r} = {\bf r}_c = (r_c, \theta_c, \phi_c)$, and its polar axis (the symmetric axis of the torus) is in the plane of the ${\hat {\bf \theta}}$ and ${\hat {\bf \phi}}$ vectors at position ${\bf r}_c$ and tilted from the $- {\hat {\bf \theta}}$ direction clockwise (towards the ${\hat {\bf \phi}}$ direction) by an angle $\delta$. 
In the ($r'$, $\theta'$, $\phi'$) system, \begin{equation} {\bf B}_{\rm tube} = \nabla \times \left ( \frac{A(r',\theta')}{r' \sin \theta' } \hat{\bf \phi'} \right ) + B_{\phi'} (r', \theta') \hat{\bf \phi'}, \end{equation} where \begin{equation} A(r',\theta') = \frac{1}{4} q a^2 B_t \left( 1 - \frac{\varpi^2(r',\theta')}{a^2} \right)^2 , \label{eqafunc} \end{equation} \begin{equation} B_{\phi'} (r', \theta') = \frac{a B_t}{r' \sin \theta'} \left( 1 - \frac{\varpi^2(r',\theta')}{a^2} \right). \label{eqbph} \end{equation} In the above, $a$ is the minor radius of the torus, $\varpi = (r'^2 + R'^2 -2r'R' \sin \theta')^{1/2}$ is the distance to the curved axis of the torus, where $R'$ is the major radius of the torus, $q$ denotes the angular amount (in rad) of field line rotation about the axis over a distance $a$ along the axis, and $B_t a/R'$ gives the field strength at the curved axis of the torus. The magnetic field ${\bf B}_{\rm tube}$ is truncated to zero outside of the flux surface whose distance to the torus axis is $\varpi = a$. We use $a = 0.035 R_{\odot}$, $R' = 0.063 R_{\odot}$, $q/a = - 0.0308$ rad ${\rm Mm}^{-1}$, $B_t a/R' = 111$ G. The torus center is assumed to be initially located at ${\bf r}_c = (r_c = 0.902 R_{\odot}, \, \theta_c = 90^{\circ}, \, \phi_c = 0^{\circ} ) $, and the tilt of the torus $\delta=0$. Thus the torus is initially entirely below the lower boundary and is in the azimuthal plane. For specifying ${\bf E}_{\perp}|_{r=R_{\odot}}$, we assume that the torus moves bodily towards the lower boundary at a velocity ${\bf v}_{\rm rise} = v_{\rm rise} \hat{{\bf r}}_c$, where $v_{\rm rise}$ is described later. The imposed velocity field at the lower boundary is a constant ${\bf v}_{\rm rise}$ in the area where the emerging torus intersects the lower boundary and zero in the rest of the area. The resulting normal flux distribution on the lower boundary after the imposed emergence has stopped is shown in Figure \ref{fig1}d. 
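The torus parametrization can be checked numerically. The following sketch (function and variable names are ours, not the simulation code's) evaluates the flux function $A$ and the toroidal component $B_{\phi'}$ with the quoted parameters:

```python
# Illustrative check of the torus field in its local (r', theta', phi')
# frame, using the quoted parameters. Names are illustrative.
import math

R_sun = 6.96e10                # cm
a = 0.035 * R_sun              # minor radius of the torus
Rp = 0.063 * R_sun             # major radius R'
B_axis = 111.0                 # G, axial field strength (= B_t a / R')
B_t = B_axis * Rp / a
q = (-0.0308 / 1.0e8) * a      # q/a = -0.0308 rad/Mm, converted to rad

def varpi(rp, thetap):
    """Distance to the curved axis of the torus."""
    return math.sqrt(rp**2 + Rp**2 - 2.0 * rp * Rp * math.sin(thetap))

def A(rp, thetap):
    """Flux function; the field is truncated to zero outside varpi = a."""
    w = varpi(rp, thetap)
    if w > a:
        return 0.0
    return 0.25 * q * a**2 * B_t * (1.0 - w**2 / a**2) ** 2

def B_phi(rp, thetap):
    """Toroidal component B_phi'(r', theta')."""
    w = varpi(rp, thetap)
    if w > a:
        return 0.0
    return (a * B_t / (rp * math.sin(thetap))) * (1.0 - w**2 / a**2)
```

On the curved axis ($r' = R'$, $\theta' = 90^{\circ}$, i.e. $\varpi = 0$) the toroidal field equals $B_t a/R' = 111$ G, and both $A$ and $B_{\phi'}$ vanish smoothly at $\varpi = a$, consistent with the truncation described above.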
In this resulting flux distribution (Figure \ref{fig1}d), an east-west oriented bipolar pair has emerged, where the positive spot represents the emerging, rotating positive sunspot at the south edge of the dominant negative spot in Figure \ref{fig1}b, and the negative spot corresponds to the flux in the fragmented pores and plages to the west of the rotating positive sunspot in Figure \ref{fig1}b. An observational study by \citet{min_chae2009} found that the minor, fragmented pores of negative polarity emerged and moved westward while the positive rotating sunspot moved eastward, suggesting that they are the counterpart to which the positive rotating sunspot is {\it at least partly} connected (see Figure 2 in \citet{min_chae2009}). This is one of the reasons that we model the coronal magnetic field in this study with the emergence of an east-west oriented twisted flux rope. After the emergence has stopped, the transverse electric field on the lower boundary (eq. [\ref{eq_emf}]) is set to zero and the magnetic field is line-tied at the lower boundary. At the end of the emergence, the peak normal field strength in the emerged bipolar region on the lower boundary reaches 121 G, compared to the 178 G peak normal field strength in the dominant negative pre-existing spot in the initial lower boundary field. Due to the substantial smoothing of the observed normal magnetic flux density, the total unsigned flux on the lower boundary of our simulation is only about 30\% of that on the photosphere in the boxed area shown in Figure \ref{fig1}a. However, the ratio of the emerged flux (in the flux rope) to the total flux on the lower boundary, $\sim 10$\%, in our simulation is about the same as the ratio of the observed emerged flux (in the positive rotating sunspot) to the total flux in the boxed area in Figure \ref{fig1}a. Note that although the coronal temperature and density are used at the lower boundary, the dynamic property of the lower boundary reflects the property of the photosphere. 
The lower boundary is assumed to be ``infinitely heavy'' such that the magnetic stress exerted on it from the corona does not result in any motion of the field line foot-points (field anchoring or line-tying) and that the lower boundary evolves in a prescribed way by a kinematically imposed flux emergence associated with the upward advection of a twisted flux rope. Thus dynamically the lower boundary is meant to approximate the photosphere, which can support cross-field currents and the resulting magnetic stresses. However the thermodynamic conditions of the corona (instead of the photosphere) are used for the lower boundary so that (1) we do not have to resolve the small (about 150 km) photospheric pressure scale height in a simulation of the large scale coronal evolution of a CME (size scale on the order of a solar radius), and (2) we avoid solving the complex energy transport associated with coronal heating, radiative cooling, and thermal conduction, which would be required if we were to include the thermodynamics of the photosphere-chromosphere-corona system in the simulation. Here for modeling the large scale, magnetically dominated dynamic evolution of the CME initiation, we greatly simplify the thermodynamics (assuming an ideal polytropic gas for the coronal plasma throughout the domain), and focus on the magnetic field evolution of the corona in response to the imposed flux emergence and field-line anchoring representative of the heavy photospheric lower boundary. In the remainder of the paper, quantities are expressed in the following units unless otherwise specified: $R_{\odot} = 6.96 \times 10^{10} \, {\rm cm}$, $\rho_0 = 8.365 \times 10^{-16} \, {\rm g/cm^3}$, $B_0 = 20 \, {\rm G}$, $v_{a0} = B_0 / \sqrt{4 \pi \rho_0 } = 1951 \, {\rm km/s}$, $\tau_{a0} = R_{\odot} / v_{a0} = 356.8 \, {\rm s}$, as units for length, density, magnetic field, velocity and time respectively. 
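The unit system is internally consistent and can be verified with a few lines (a sketch, not the paper's code):

```python
# Consistency check of the paper's unit system.
import math

R_sun = 6.96e10        # unit of length, cm
rho0 = 8.365e-16       # unit of density, g/cm^3
B0 = 20.0              # unit of magnetic field, G

v_a0 = B0 / math.sqrt(4.0 * math.pi * rho0)   # unit of velocity, cm/s
tau_a0 = R_sun / v_a0                          # unit of time, s

# The fast driving speed 0.05 v_a0 is then ~98 km/s, as stated below.
v_rise_fast = 0.05 * v_a0
```

This reproduces the quoted $v_{a0} = 1951$ km/s and $\tau_{a0} = 356.8$ s.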
Due to the large peak Alfv\'en speed ($\sim 12 v_{a0} \sim 24,000$ km/s) in the domain (see Figure \ref{fig3}), we initially drive the emergence of the twisted torus through the lower boundary at a fairly high speed over a period of $t=0$ to $t=1.2$ with $v_{\rm rise} = 0.05 v_{a0} \approx 98$ km/s, which is just under the sound speed at the lower boundary but significantly slower than the Alfv\'en speed. In this way we build up the pre-eruption coronal magnetic field approximately quasi-statically and yet fast enough to minimize numerical diffusion. After $t=1.2$, we significantly reduce the driving speed of the flux emergence at the lower boundary to $v_{\rm rise} = 0.01 v_{a0}$ and thus allow the coronal magnetic field to evolve quasi-statically until it erupts dynamically. \section{Results\label{sec:result}} Figures \ref{fig4} and \ref{fig5} show snapshots of the 3D coronal magnetic field evolution (as viewed from 2 different perspectives) after the initial stage of relatively fast emergence has ended at $t=1.2$, and the speed for driving the flux emergence at the lower boundary has been reduced to $v_{\rm rise} = 0.01 v_{a0}$. The view shown in Figure \ref{fig4} corresponds to the observation perspective at the time of the flare, for which the center of the emerging region (also the center of the simulation lower boundary) is located at $7.1^{\circ}$S and $24^{\circ}$W from the solar disk center (or the line-of-sight). GIF movies for the evolution shown in Figure \ref{fig4} and Figure \ref{fig5} are available in the electronic version of the paper. We see that the emerged coronal flux rope settles into a quasi-static rise phase and then undergoes a dynamic eruption. Figure \ref{fig6} shows the evolution of the rise velocity $v_r$ measured at the apex of the tracked axis of the emerged flux rope (triangle points), and also measured at the leading edge of the flux rope (crosses). 
After the emergence is slowed down at $t=1.2$, the rise velocity at the apex of the flux rope axis slows down, and undergoes some small oscillations as the flux rope settles into a quasi-static rise. The quasi-static rise phase extends from about $t=1.2$ until about $t=2.5$, over a time period of $1.3$, long compared to the dynamic time scale of $\sim 0.1$ for the estimated Alfv\'en crossing time of the flux rope. At about $t=2.5$, the flux rope axis starts to accelerate significantly and a dynamic eruption ensues. The flux emergence is stopped at $t=2.8$, after which the flux rope continues to accelerate outward. We are able to follow the acceleration of the axial field line up to $v_r = 0.54$ ($\approx 1050$ km/s) at $t=3$, when the axial field line undergoes a reconnection and we are subsequently unable to track it. Figure \ref{fig6} also shows $v_r$ measured at the leading edge of the low density cavity (as shown in Figure \ref{fig7}), corresponding to the expanding flux rope. We find that by the time of about $t=3.2$, a shock front followed by a condensed sheath has formed ahead of the flux rope cavity (see Figure \ref{fig7} at $t=3.25$), and the $v_r$ measured at the front edge of the cavity (or the inner edge of the sheath) reaches a steady speed of about 0.425 or 830 km/s (see crosses in Figure \ref{fig6}). When the flux rope begins significant acceleration (at $t \approx 2.5$), the decay index $n \equiv - \, d \ln | {\bf B}_p | / d \ln h$, which describes the rate of decline of the corresponding potential field ${\bf B}_p$ with height $h$, is found to be $n \approx 1.2$ at the apex of the flux rope axis, and $n \approx 1.4$ at the apex of the flux rope cavity. 
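The decay index can be evaluated numerically from any model field. The following sketch (illustrative, not the paper's potential-field calculation) applies the definition, with the conventional minus sign so that a field declining with height gives positive $n$, to a point-dipole-like profile $|{\bf B}_p| \propto h^{-3}$, for which $n = 3$ everywhere, well above the $n \approx 1.2$--$1.4$ found at eruption onset here:

```python
# Numerical decay index on an illustrative field profile (not the
# simulation's potential field).
import numpy as np

h = np.linspace(0.1, 1.0, 200)           # height (arbitrary units)
B = h ** -3.0                             # stand-in for |B_p|(h)

n = -np.gradient(np.log(B), np.log(h))    # decay index profile
```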
These values are smaller than the critical value of $n_{\rm crit} = 1.5$ for the onset of the torus instability for a circular current ring \citep{bateman1978, kliem_toeroek2006, demoulin_aulanier2010}, although there is a range of variability for the critical value $n_{\rm crit}$, which can be as low as $1$, depending on the shape of the current channel of the flux rope \citep[e.g.][]{demoulin_aulanier2010}. For a 3D anchored flux rope, as is the case here, it is difficult to obtain an analytical determination of $n_{\rm crit}$ for the instability or loss of equilibrium of the flux rope \citep{isen_forbes2007}. The exact critical point for the onset of the torus instability would depend on the detailed 3D magnetic field configuration. On the other hand, a substantial amount of twist has been transported into the corona at the onset of eruption. At $t=2.5$, the self-helicity of the emerged flux rope reaches about $-1.02 \Phi_{\rm rope}^2$, where $\Phi_{\rm rope}$ is the total magnetic flux in the rope, corresponding to field lines in the flux rope winding about the central axis by about 1.02 rotations between the anchored foot points. This suggests the possible development of the helical kink instability of the flux rope \citep[e.g.][]{hood_priest1981, toeroek_kliem2005, fan_gibson2007}. The erupting flux rope is found to undergo substantial writhing or kinking motion as can be seen in the sequences of images (also the movies in the electronic version) in Figures \ref{fig4} and \ref{fig5}. We also find that the trajectory for the eruption of the flux rope is not radial because of the ambient coronal magnetic field: the erupting flux rope is deflected southward and eastward from the local radial direction (see Figures \ref{fig4}, \ref{fig5}, and \ref{fig7} and the associated movies). 
Using the apex location of the erupting flux rope cavity at $t=3.25$ (Figure \ref{fig7}), we find that the erupting trajectory at that time is deflected by $2.3^{\circ}$ southward and $1.3^{\circ}$ eastward from the local radial direction at the center of flux emergence, and further deflection of the trajectory continues with time. Since the local radial direction at the center of the flux emergence corresponds to $7.1^{\circ}$S and $24^{\circ}$W from the solar disk center (or the line-of-sight), the deflection during the eruption is sending the flux rope towards the line-of-sight in the east-west direction, but further southward away from the line-of-sight in the north-south direction. This is consistent with the observed halo of the CME seen in LASCO C2 and C3 coronagraphs (Figure 2 in \citet{kataokaetal2009} and Figure 1 in \citet{ravindra_howard2010}), where the north-south and east-west asymmetries of the halo distribution indicate that the direction of ejection is more southward and less westward than what would have been expected for a radial ejection from the location of the source region on the solar disk. Figure \ref{fig8} shows the coronal magnetic field as viewed from the side (panels a and b) and viewed from the observing perspective (panels c and d) just before the onset of eruption at $t=2.45$, compared with the Hinode XRT image of the region (panel e) just before the flare. We see that the morphology of the coronal magnetic field and its connectivity are very similar to those shown in the X-ray image. To understand the nature of the bright X-ray sigmoid in the image, we have identified the region of significant magnetic energy dissipation and heating in the simulated magnetic field using both the electric current density $J \equiv | \nabla \times {\bf B} |$ and the increase of entropy $\Delta S = C_v \Delta \ln (p/{\rho}^{\gamma} )$. 
As pointed out in Section \ref{sec:model}, since we are solving the total energy equation in conservative form, numerical dissipation of magnetic energy and kinetic energy due to the formation of current sheets and other sharp gradients is being implicitly put back into the thermal energy of the plasma, resulting in an increase of the entropy. We have identified regions where there is significant entropy increase with $\Delta S / C_v > 1.15$ and also high electric current density concentration with $J/B > 1/l$, where $l = 10$ times the grid size. Such regions are outlined by the orange iso-surfaces in panels (a) and (c) of Figure \ref{fig8}, and they appear as an inverse-S shaped layer (as viewed from the top), which likely corresponds to the formation of an electric current sheet underlying the anchored flux rope \citep[e.g.][]{td1999, low_berger2003, gibsonetal2006}. We have also plotted field lines (purple field lines shown in panels b and d) going through the region of the current layer, which are preferentially heated and are expected to brighten throughout their lengths (due to the high heat conduction along the field lines) in soft X-ray, producing the central dominant X-ray sigmoid seen in the Hinode XRT image (panel e). Thus our quasi-equilibrium coronal magnetic field resulting from the emergence of a nearly east-west oriented magnetic flux rope could reproduce the observed overall morphology and connectivity of the coronal magnetic field, including the presence of the observed pre-eruption X-ray sigmoid. We find that both $J/B$ as well as $\Delta S$ peak along the ``left elbow'' portion of the current layer, where the positive polarity flux of the emerged flux rope comes in contact with the flux of the dominant pre-existing negative polarity sunspot, consistent with the brightness distribution along the observed X-ray sigmoid (panel e of Figure \ref{fig8}). 
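The two-threshold selection of the strongly heated current layer can be sketched as a simple mask on gridded data (mock arrays; field names and grid are illustrative, not the simulation's data structures):

```python
# Sketch of the selection criteria J/B > 1/l and Delta S / C_v > 1.15
# on mock uniform-grid data.
import numpy as np

rng = np.random.default_rng(0)
shape = (32, 32, 32)
dx = 1.0                      # grid spacing (arbitrary units)
l = 10.0 * dx                 # threshold length scale: 10 grid cells

J = rng.random(shape)                 # mock |curl B|
B = 0.5 + rng.random(shape)           # mock |B|
dS_over_Cv = 2.0 * rng.random(shape)  # mock entropy increase

heated_layer = (J / B > 1.0 / l) & (dS_over_Cv > 1.15)
```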
Reconnections in this part of the current layer cause some of the flux in the emerged flux rope to become connected with the major negative sunspot (see the green field lines connecting between the dominant negative spot and the emerging positive spot in panel (d) of Figure \ref{fig8}). We have also done a few simulations where we varied the tilt of the emerging flux rope, and found that to reproduce the observed orientation of the sigmoid, the emerging flux rope needs to be nearly east-west oriented. With the onset of the eruptive flare, the soft X-ray observation first shows a transient brightening of the sigmoid, and subsequently the emission is completely dominated by the brightness of the post-flare loops (see panels (a)(c)(e) of Figure \ref{fig9}). In the simulated coronal magnetic field, we find that the current density in the inverse-S shaped current layer intensifies as the flux rope begins to erupt. We can deduce qualitatively the evolution of the post-reconnection (or post-flare) loops from our modeled magnetic field evolution. We traced field lines (see the red field lines in panels (b)(e)(h) of Figure \ref{fig10} and panels (b)(d)(f) of Figure \ref{fig11}) whose apexes are located in the layer of the most intense current density and heating. These field lines are the ones that have just reconnected at their apexes and would slingshot downwards, corresponding to the downward collapsing post-flare loops. The layer of the most intense current density and heating, as outlined by the orange iso-surfaces in panels (a)(d)(g) of Figure \ref{fig10} and panels (a)(c)(e) of Figure \ref{fig11}, is identified as where $J/B > 1/l$, with $l = $ 5 times the grid size, and where $\Delta S / C_v > 2.3$. This most intense current layer is found to rise upward with the eruption of the flux rope. The associated post-reconnection field lines are initially low lying and form a narrow sigmoid-shaped bundle, as can be seen in Figures \ref{fig10}(b) and \ref{fig11}(b). 
With time, the post-reconnection loops broaden and rise up, showing cusped apexes (Figures \ref{fig10}(e)(h) and Figures \ref{fig11}(d)(f)). The morphology of the post-reconnection loops, which transition from an initially narrow low-lying sigmoid bundle to a broad, sigmoid-shaped row of loops with cusped apexes, is in qualitative agreement with the observed evolution of the post-flare X-ray brightening shown in Figures \ref{fig9}(a)(c)(e). The foot points of the post-reconnection loops (panels (c)(f)(i) of Figure \ref{fig10}) can be compared qualitatively with the evolution of the flare ribbons in the lower solar atmosphere as shown in the Hinode SOT observation (panels (b)(d)(f) of Figure \ref{fig9}). The ribbon corresponding to the positive polarity foot points (orange ribbon in Figures \ref{fig10}(c)(f)(i)) of the post-reconnection loops is found to sweep southward across the newly emerged positive polarity spot, similar to the apparent movement of the positive polarity ribbon seen in the observation (panels (b)(d)(f) of Figure \ref{fig9}) in relation to the observed positive emerging spot. For the ribbon corresponding to the negative polarity foot points (the yellow ribbon in Figures \ref{fig10}(c)(f)(i)), its eastern portion is found to extend and sweep northward into the dominant pre-existing negative spot, while its western hook-shaped portion is found to sweep northward across the newly emerged negative spot. Similarly, in the SOT observation (panels (b)(d)(f) of Fig. \ref{fig9}), for the negative polarity ribbon, the eastern portion sweeps northward into the dominant negative sunspot, while its western, upward curved hook-shaped portion is found to sweep northward across the minor, fragmented negative pores which have emerged to the west of the main $\delta$-sunspot. The modeled ribbons based on the footpoints certainly differ in many ways in their shape and extent compared to the observed flare ribbons. 
But they capture some key qualitative features in the observed motions of the flare ribbons in relation to the photospheric magnetic flux concentrations. \section{Discussions\label{sec:conc}} We have presented an MHD model that qualitatively describes the coronal magnetic field evolution of the eruptive flare in AR 10930 on December 13, 2006. The model assumes the emergence of an east-west oriented magnetic flux rope into a pre-existing coronal magnetic field constructed based on the MDI full-disk magnetogram of the photospheric magnetic field at 20:51:01 UT on December 12. As described in Section \ref{sec:model}, a substantial smoothing of the observed photospheric magnetic flux density from the MDI magnetogram is carried out such that the peak field strength on the lower boundary is reduced from $\sim 3000$ G to $\sim 200$ G to avoid the extremely high Alfv\'en speed that would put too severe a limit on the time step of numerical integration. The imposed flux emergence at the lower boundary of an idealized subsurface magnetic torus produces a flux emergence pattern on the lower boundary that is only qualitatively representative of the observed flux emergence pattern (compare Figure \ref{fig1}b and Figure \ref{fig1}d). In the model, the emerging bipolar pair on the lower boundary is more symmetric and more spread-out in spatial extent, and both polarities transport left-handed twist (or inject negative helicity flux) into the corona at the same rate, whereas in the observation, the positive emerging sunspot is coherent and clearly shows a counter-clockwise twisting motion, indicating an injection of negative helicity flux into the corona, while its counterpart to the west is in the form of fragmented pores \citep[e.g.][]{min_chae2009}. However, a quantitative measurement by \citet{parketal2010} using MDI magnetograms also found a significant negative helicity flux associated with these fragmented pores (see Figure 4 in their paper). 
In the simulation, the self-helicity of the emerged portion of the flux rope in the corona at the end of the imposed flux emergence (at $t=2.8$) is $H_{\rm rope} \approx 1.07 \Phi^2$, where $\Phi$ is the normal flux in each polarity of the emerged bipolar region on the lower boundary. This is a measure of the internal twist in the emerged flux rope and it corresponds to field lines twisting about the axis by about 1.07 winds (or $385^{\circ}$ rotation) between the two anchored ends in the emerged flux rope. On the other hand, the total relative magnetic helicity $H_{\rm tot}$ that has been transported into the corona by the imposed flux emergence is found to be $H_{\rm tot} \approx 3.02 \Phi^2$, which is the sum of both the self-helicity of the emerged flux rope $H_{\rm rope}$ as well as the mutual helicity between the emerged flux and the pre-existing coronal magnetic field. The observed amount of rotation of the positive emerging sunspot, ranging from $240^{\circ}$ \citep{zhangetal2007} to $540^{\circ}$ \citep{min_chae2009}, gives an estimate of $( H_{\rm rope}/ \Phi ^2 ) \times 360^{\circ}$ for the emerged flux rope, which is about $385^{\circ}$ in the simulation and is thus within the range of the observed values. After an initial phase where we drive the emergence of the twisted torus at a fairly large (but still significantly sub-Alfv\'enic) speed to quickly build up the pre-eruption field, we slow down the emergence and the coronal magnetic field settles into a quasi-equilibrium phase, during which the coronal flux rope rises quasi-statically as more twist is being transported {\it slowly} into the corona through continued flux emergence. This phase is followed by a dynamic eruption phase where the coronal flux rope accelerates in the dynamic time scale to a steady speed of about 830 km/s. 
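The comparison between the modeled twist and the observed sunspot rotation amounts to the following arithmetic (a sketch of the estimate in the text, not simulation output processing):

```python
# Twist-rotation comparison: field-line rotation (deg) ~ (H_rope/Phi^2)*360.
h_over_phi2 = 1.07                  # |H_rope| / Phi^2 at the end of emergence
rotation_deg = h_over_phi2 * 360.0  # estimated field-line rotation, ~385 deg

observed_deg = (240.0, 540.0)       # observed sunspot rotation range, deg
in_observed_range = observed_deg[0] <= rotation_deg <= observed_deg[1]
```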
Due to the substantial twist (greater than 1 full wind of field line twist) that has been transported into the corona at the onset of the eruption, the erupting flux rope is found to undergo substantial writhing motion. The erupting flux rope underwent a counter-clockwise rotation that exceeded $90^{\circ}$ by the time the front of the flux rope cavity reached $1.4 R_{\odot}$. We also find that the initial trajectory of the erupting flux rope is not radial, but is deflected southward and eastward from the local radial direction due to the ambient coronal magnetic field. Since the initial coronal flux rope is located at $7.1^{\circ}$S and $24^{\circ}$W from the solar disk center, the deflection is sending the erupting flux rope towards the line-of-sight in the east-west direction, but further away from the line-of-sight in the north-south direction, consistent with the observed halo of the CME seen in LASCO C2 and C3 coronagraphs, where the halo's north-south (east-west) asymmetry appears larger (smaller) than would have been expected from a radial eruption of the flux rope from its location on the solar disk. However, due to the relatively restrictive domain width in $\theta$ ($30^{\circ}$) and $\phi$ ($45^{\circ}$) in our current simulation, the side wall boundary in the south begins to significantly constrain the further southward deflection and expansion of the flux rope by the time the top of the flux rope cavity reaches about $1.4 R_{\odot}$. Thus, we are not able to accurately determine the subsequent trajectory change or the continued writhing of the erupting flux rope beyond this point. A larger simulation with a significantly greater domain size in $\theta$ and $\phi$, that still adequately resolves the coronal magnetic field in the source region, will be carried out in a subsequent study to determine the later properties of the flux rope ejecta. 
The restrictive domain size may also play a role in the significantly lower steady speed of $830$ km/s reached by the erupting flux rope in the simulation, compared to the observed value of at least $1780$ km/s for the speed of the CME \citep[e.g.][]{ravindra_howard2010}. It has been shown that the rate of spatial decline of the ambient potential magnetic field with height is both a critical condition for the onset of the torus instability of the coronal flux rope \citep[e.g.][]{bateman1978, kliem_toeroek2006, isen_forbes2007, fan2010, demoulin_aulanier2010} as well as an important factor in determining the acceleration and the final speed of CMEs \citep{toeroek_kliem2007}. Even for a sufficiently twisted coronal flux rope that is unstable to the helical kink instability, the spatial decline rate of the ambient potential field is found to determine whether the non-linear evolution of the kink instability leads to a confined eruption (with the flux rope settling into a new kinked equilibrium) or an ejection of the flux rope \citep{toeroek_kliem2005}. The simulation in this paper has assumed perfectly conducting walls for the side boundaries, where the field lines are parallel to the walls. Thus widening the simulation domain would result in a more rapid expansion and hence a steeper decline of the ambient potential field with height. This would result in a greater acceleration and a faster final speed for the CME based on the results from the previous investigations by \citet{toeroek_kliem2005,toeroek_kliem2007}. It may be difficult to distinguish whether the torus or the kink instability initially triggers the eruption given the complex 3D coronal magnetic field, but the final speed of the CME would be strongly affected by the spatial decline rate of the ambient potential field in either case. 
The substantial smoothing of the lower boundary magnetic field to reduce the peak Alfv\'en speed is also a major reason for the low final speed of the erupting flux rope in the current simulation. Nevertheless, our simulated coronal magnetic field evolution is found to reproduce several key features of the eruptive flare observed by Hinode. The pre-eruption coronal field during the quasi-static phase reproduces the observed overall morphology and connectivity of the coronal magnetic field, including the presence of the pre-eruption X-ray sigmoid seen in the Hinode XRT images. The presence of the pre-eruption sigmoid in our model is caused by the preferential heating of an inverse-S shaped flux bundle in the flux rope due to the formation of an inverse-S shaped current sheet underlying the flux rope. Our simulations suggest that the emerging flux rope needs to be nearly east-west oriented in order to reproduce the observed orientation of the X-ray sigmoid. This is consistent with the suggestion by \citet{min_chae2009} that the counterpart of the emerging, rotating positive sunspot is the minor negative pores to the west of the emerging sunspot (rather than the dominant negative sunspot). After the onset of the eruption, the morphology of the post-flare loops deduced from the simulated field shows a transition from an initial narrow, low-lying sigmoid bundle to a broad, sigmoid-shaped row of loops with cusped apexes, in qualitative agreement with the evolution of the post-flare X-ray brightening observed by XRT on Hinode. The apparent motions of the foot points of the post-flare loops in relation to the lower boundary magnetic flux concentrations are also in qualitative agreement with the evolution of the chromospheric flare ribbons observed by Hinode SOT. 
These agreements suggest that our simulated coronal magnetic field produced by the emergence of an east-west oriented twisted flux rope, with the positive emerging flux ``butting against'' the southern edge of the dominant pre-existing negative sunspot, captures the gross structure of the actual magnetic field evolution associated with the eruptive flare. To improve quantitative agreement, a more accurate determination of the lower boundary electric field \citep{fisheretal2011} that more closely reproduces the observed flux emergence pattern on the lower boundary is needed. \acknowledgements I thank Laural Rachmeler for reviewing the manuscript and for helpful comments. NCAR is sponsored by the National Science Foundation. This work is supported in part by the NASA LWS TR\&T grant NNX09AJ89G to NCAR. The numerical simulations were carried out on the Pleiades supercomputer at the NASA Advanced Supercomputing Division under project GID s0969. Hinode is a Japanese mission developed and launched by ISAS/JAXA, with NAOJ as domestic partner and NASA and STFC (UK) as international partners. It is operated by these agencies in co-operation with ESA and NSC (Norway).
\section{INTRODUCTION} \label{intro} In the airline scheduling process (ASP), airline crew scheduling (CS) is considered one of the most important planning activities, owing to multiple reasons. First, the crew cost is the second largest operating cost (after the fuel cost). Second, its optimization carries a huge potential for cost savings (millions of dollars annually even with marginal improvements). Last, CS has to be performed in the presence of several complex constraints laid down by federations, labor unions, etc., in order to guarantee the safety of crew members. In the last three decades, airline CS has received unprecedented attention from the operations research (OR) community, leading to the development of numerous CS optimization systems. Over past years, the expansion of airlines' flight operations, to match the exponentially increasing air-travel demand, has led to a tremendous increase in the number of flights, aircraft and crew members to be scheduled, rendering the state-of-the-practice obsolete. Hence, it is imperative to improve upon the existing optimization systems by leveraging recent technological advancements, enhanced data-handling capacities and faster computation. \par Airline crew scheduling is a combination of complex combinatorial optimization subproblems (NP-complete and NP-hard problems \cite{1}). It is decomposed into two problems, namely, the \textit{crew pairing} and \textit{crew assignment} problems, which are solved sequentially. The former problem is aimed at generating a set of flight sequences (each called a \textit{crew pairing}) to cover a finite set of flight legs from an airline's timetable at minimum cost, while satisfying several \textit{legality} constraints linked to the federations' safety rules, airline-specific regulations, labor laws, etc. The aim of the latter problem is to assign crew members to these optimal crew pairings. The scope of this paper is limited to the former problem. 
\par In the crew pairing optimization problem (CPOP), crew pairings have to satisfy multiple legality constraints (as mentioned above) in order to be classified as `operational' or `legal'. To solve the CPOP, it is required to develop a \textit{legal crew pairing generation} approach so that only legal crew pairings are fed to the optimization phase. Depending upon the CPOP's scale, legal pairings are generated in two ways: before the optimization phase or during the optimization phase. For each of these approaches, several optimization-based solution methodologies have been proposed in the literature. These could be broadly categorized into two classes: meta-heuristic and mathematical-programming-based solution approaches. Among the latter category, \textit{Column Generation} \cite{3} is the most widely adopted technique, which has proven successful for solving large-scale integer programs. It is an efficient search-space exploration technique which exploits the idea that the majority of variables in a large integer program are non-basic in the optimal solution. Hence, it generates only those pairings which have a high potential of improving the objective function. It is an exact method, but its most successful heuristic implementations for solving the CPOP could be found in \cite{4} \& \cite{5}. \par Among meta-heuristics, the most successful and widely adopted technique is the \textit{Genetic Algorithm} (GA), which is a population-based probabilistic search method, inspired by the theory of evolution \cite{6}. GAs with customized operators are known to be successful in solving a variety of combinatorial optimization problems (\cite{7, deb}). Several GA-based CPOP solution approaches have been proposed in the literature and are broadly summarized in Table~\ref{overview}. 
\begin{table*}[htbp] \caption{An overview of the GA-based CPOP solution approaches from the literature} \scriptsize \begin{center} \begin{tabular}{cccccccc} \toprule \multirow{1.25}{*}{\textbf{Literature}} & \multirow{2.5}{*}{\textbf{Formulation$^+$}} & \multirow{1.25}{*}{\textbf{Airline}} & \multicolumn{4}{c}{\textbf{Flight Data}} & \multirow{2.5}{*}{\textbf{Airlines}} \\ \cmidrule{4-7} \textbf{Instances} & & \textbf{Timetable} & \textbf{Test-cases*} & \textbf{\# Flights**} & \textbf{\# Pairings**} & \textbf{Accessibility} & \\ \midrule \cite{7} & SCP & Did not solve CPOP & 11G & 1,000 & 10,000 & Public & - \\ \cite{8} & SPP & - & 40R & 823 & 43,749 & Private & - \\ \cite{9} & SCP & Daily & 28R & 380 & 21,308 & Private & Multiple Airlines \\ \cite{10} & SCP & Monthly & 1R & 2,100 & 11,981 & Private & Olympic Airways \\ \cite{11} & SCP & Monthly & 1R & 710 & 3,308 & Private & Turkish Airlines \\ \cite{12} & - & - & 4R & 506 & 11,116 & Private & Turkish Airlines \\ \cite{13} & SCP & - & 12R & 714 & 43,091 & Private & Turkish Airlines \\ \bottomrule \end{tabular} \vspace{1mm} \footnotesize{\\ $^+$ SCP stands for Set-Covering Problem formulation and SPP stands for Set-Partitioning Problem formulation. * Generated (\#G) \\ or Real-world (\#R) test-cases, where \# represents the number of test-cases used for validation. ** The provided values are\\ the maximum among all the test-cases used for validation.} \vspace{-1mm} \end{center} \label{overview} \end{table*} Two research gaps can be identified from this review. First, in some of these instances, the results are obtained using a subset of the original search-space, i.e., all possible legal pairings are not used (\cite{10, 11}). Second, the remaining instances have been validated on the flight datasets of smaller airlines (with a handful of pairings), operating in low-demand regions such as Greece, Turkey, etc.
These GA-based solution approaches become obsolete when scaled to the medium-scale flight networks of bigger airlines operating in the US. Hence, it is imperative to develop an efficient GA for optimizing such CPOPs. \par In an attempt to address these limitations, the first contribution of this paper is a customized Genetic Algorithm with enhanced search-space exploration for solving a real-world airline CPOP. This is achieved by enhancing the initialization phase and the genetic operators (crossover and feasibility-repair heuristic) using the CPOP's domain knowledge. With these enhancements, the proposed GA is able to generate crew pairing solutions with varying characteristics, such as fewer deadhead flights, hotel nights, etc., which are amongst the key performance indicators (KPIs), apart from the crew pairing cost, used by airlines for evaluating the performance of their CS. The other contribution of this paper is the comparison of the proposed GA with a column generation based large-scale airline crew pairing optimizer, referred to as \textit{CG-Optimizer}, which has been developed by the authors and validated by GE Aviation. The utility of these contributions is demonstrated on a real-world medium-scale flight dataset (involving 839 flights and 430,873 legal pairings), extracted from the networks of large airlines operating in the US. \section{Airline Crew Pairing Problem} In CPOP, the input data includes a finite set of flights from the airline's timetable, along with the pairings' costing and legality rules. A \textit{crew pairing} is a flight sequence to be flown by a crew member, beginning and ending at the same crew base. Other associated terminology is explained with the help of a crew pairing example, shown in Fig.~\ref{crewpair}.
\begin{figure}[htbp] \centering \framebox{\parbox{0.95\columnwidth}{\includegraphics[width=0.95\columnwidth, keepaspectratio]{pairing_example.PNG}}} \caption{An example of a crew pairing beginning from crew base, \textit{DAL}} \vspace{-1mm} \label{crewpair} \end{figure} In real-time operations, crew members sometimes miss their flight connections due to unforeseen events. As a result, they are transported to their scheduled airports either by road transportation (in the case of same-city airports) or by traveling as passengers on other flights (in the case of distant airports). These flights are called \textit{deadheads} (Dhds). Airlines desire to minimize deadheads in their crew operations (ideally to zero) in order to maximize their profits. \subsection{Legal Crew Pairing Generation} As mentioned in Section~\ref{intro}, it is imperative to develop a \textit{legal crew pairing generation} approach in order to supply legal pairings to the optimization phase. In small- and medium-scale CPOPs, all legal pairings are generated explicitly before the optimization phase. The same approach is adopted in this work, and a duty-network based parallel legal pairing generation algorithm \cite{14} is used for generating all legal pairings explicitly. Interested readers are referred to \cite{14} for an extensive review of the pairing generation literature. \subsection{Crew Pairing Optimization Problem} The goal of the optimization phase is to find a subset of the generated set of all legal pairings that covers the given flights at minimum cost. In the literature, the CPOP is modeled either as a set-partitioning problem (SPP; each flight leg is allowed to be covered only once) or as a set-covering problem (SCP; over-coverage of flight legs, i.e., deadheads, is allowed). In this paper, the SCP formulation is adapted and modified to define the optimization problem for the proposed GA. Its mathematical model is presented in Section~\ref{fitness}.
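Under the set-covering view described above, a solution's feasibility and its deadhead count can be illustrated with a short sketch (Python, matching the paper's implementation language; the data structures and names here are hypothetical, not the authors' code): each pairing is a sequence of flight legs, every leg must be covered at least once, and any over-coverage counts as deadheads.

```python
# Minimal sketch (hypothetical data structures, not the paper's implementation):
# a pairing is a sequence of flight-leg IDs; under set-covering (SCP),
# legs covered more than once across the selected pairings are deadheads.
from collections import Counter

def count_deadheads(selected_pairings, all_flights):
    """Return (is_feasible, n_deadheads) for a candidate SCP solution."""
    coverage = Counter(leg for pairing in selected_pairings for leg in pairing)
    feasible = all(coverage[f] >= 1 for f in all_flights)   # every leg covered
    deadheads = sum(c - 1 for c in coverage.values())       # over-coverage
    return feasible, deadheads

flights = ["F1", "F2", "F3", "F4"]
pairings = [["F1", "F2"], ["F2", "F3", "F4"]]
print(count_deadheads(pairings, flights))  # -> (True, 1): F2 is covered twice
```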
\section{Genetic Algorithm} This section presents a customized GA for solving medium-scale CPOPs of large airlines. For such problems, the number of legal pairings is so huge that it is intractable to consider all of them in the GA's population. Hence, the proposed GA solves the underlying CPOP by initializing the population from smaller pairing sets and improving the population repeatedly by bringing in new pairings from the rest of the search-space with the help of customized genetic operators. The overall procedure of the proposed GA is given in lines 1-10 of Algorithm~\ref{PseudoCode} and its components are explained in the following subsections. First, the GA's population is initialized and afterwards its main loop is performed, in which the selection, reproduction (crossover and mutation), and feasibility-repair operators are applied sequentially. In the presented work, these operators are either enhanced or replicated from \begin{algorithm2e}[htbp] \DontPrintSemicolon \footnotesize \textbf{Procedure} GA:\; Deadhead-Min. Initialization Heuristic ($initial\_pop$)\; Evaluate Fitness ($initial\_pop$)\; \While{Term\_Criterion is not met}{ Selection ($parent\_pop$)\; Crossover ($Parent1,\ Parent2$)\ \tcc{\textit{Crossover1/2}} Mutation ($child\_pop$)\ \tcc{\textit{Mutation1/2}} Feasibility-Repair Heuristic ($child\_pop$)\; Redundant-Pairing Removal ($child\_pop$)\; Evaluate Fitness ($child\_pop$)\; Population Replacement Step ($combined\_pop$)\; } \; \textbf{Procedure} Deadhead-Min. Initialization Heuristic ($initial\_pop$, \textit{AllPairs}):\; \For{each chromosome in $initial\_pop$}{ \For{expressed part}{ Randomly select a zero-deadhead solution from \textit{AllPairs}\; \uIf{|all flights \textbf{are not} covered|}{ Select pairings from \textit{AllPairs} w.r.t.
the number of deadheads they are bringing into the solution\; }} \For{unexpressed part}{ Randomly select pairings from \textit{AllPairs} without replacement\; }} \; \textbf{Procedure} Crossover2 ($Parent1,\ Parent2$):\; \textit{CombinedPairs} = Combined list of pairings in $Parent1$ and $Parent2$\; \For{each \textit{child} chromosome}{ \For{expressed part}{ Randomly select a zero-deadhead solution from \textit{CombinedPairs}\; } \For{unexpressed part}{ Select pairings from \textit{CombinedPairs} w.r.t. their dissimilarity with the expressed part\ \tcc{Dissimilarity is measured as the difference between the number of flights covered by the selected pairing and the expressed part} }} \; \textbf{Procedure} Redundant-Pairing Removal ($child\_pop$):\; \For{each $CP_i$ in each chromosome in $child\_pop$}{ \uIf{|$b_{1i}^e$ = 1|}{ set $b_{1i}^e$ = 0\; \uIf{|all flights are \textbf{not covered} in chromosome$\backslash CP_i$|}{ set $b_{1i}^e$ = 1\; }}} \caption{Pseudocode for the Proposed GA and Its Customized Operators} \label{PseudoCode} \end{algorithm2e} the works presented in \cite{7, 10} \& \cite{15}. For simplicity, the generated list of all pairings is referred to as \textit{AllPairs}. \subsection{Chromosome Representation} As mentioned above, the length of each chromosome cannot be equal to the number of pairings in the \textit{AllPairs} list. Hence, in this work, a chromosome with 2-bit gene encoding is adopted, as shown in Fig.~\ref{chromosome}. In this chromosome structure, the first bit, $b_{1i}$, is a binary variable indicating whether the pairing referenced by the second bit, $b_{2i}$ (selected from \textit{AllPairs}), is part of the solution. Moreover, CPOP being a single-objective optimization problem, diversity has to be maintained through additional means in order to prevent premature convergence.
The chromosome structure used in \cite{15} is adopted in this work; it is made up of two parts: the \textit{expressed} part (pairings that participate in evaluating the quality of the solution) and the \textit{unexpressed} part (pairings which are not part of the solution but are kept for maintaining diversity with respect to the expressed part). In this work, a fixed-length chromosome is used while the lengths of the expressed and unexpressed parts are allowed to vary dynamically, in contrast to the structure given in \cite{15}. \begin{figure}[htbp] \centering \vspace{-1mm} \framebox{\parbox{0.95\columnwidth}{\includegraphics[width=0.95\columnwidth, keepaspectratio]{chromosome.PNG}}} \caption{Chromosome structure} \label{chromosome} \end{figure} \subsection{Deadhead-Minimizing Initialization Heuristic} In most GAs, the initial population is generated by randomized initialization, i.e., random bits are assigned to each gene. In optimization algorithms, exploration is desired upfront and exploitation subsequently. Hence, to support exploration initially and to save some runtime, it is important to generate a diverse as well as reasonably good-quality initial population. In this work, an effective initialization heuristic, referred to as the \textit{Dhd-minimizing Initialization Heuristic}, is proposed, which randomly selects pairings that bring fewer deadheads into the solution. This procedure is given in lines 12-22 of Algorithm~\ref{PseudoCode}. The resulting initial population is composed of reasonably good-quality feasible solutions while still reflecting a great extent of diversity. \subsection{Selection} \par This operator is used for selecting the chromosomes which become the parents for the reproduction of child chromosomes. In the proposed GA, a binary tournament selection operator is adopted, in which two sets of two randomly chosen chromosomes are formed.
From each of these sets, the chromosome with the better fitness value is passed on to the crossover phase. \subsection{Crossover} \label{cross} The crossover phase is the transition phase in which genetic information from the parent chromosomes is passed on to the next generation, i.e., used to reproduce new child chromosomes. In the literature, multiple crossover operators have been proposed, such as one-point crossover, two-point crossover, uniform crossover, fusion crossover \cite{7}, etc. In the presented work, the following crossover operators are studied and compared.\\ \--- \textit{Crossover1:} The fusion crossover, proposed in \cite{7}, has been widely adopted in the CPOP literature and has been found to be most effective. In this crossover, probabilities based on the parents' fitness are used to decide the genes being passed to the child chromosome.\\ \--- \textit{Crossover2:} In order to improve the convergence rate, it is desirable to incorporate greediness into the reproduction operators with the help of domain knowledge. One such example is proposed in \cite{15}. With inspiration from that work, a new crossover operator is proposed in this work for solving airline CPOPs. In this crossover, the expressed part of a child chromosome constitutes a zero-deadhead solution which is made up of randomly selected pairings/columns from the parent chromosomes. The procedure for this crossover is given in lines 24-33 of Algorithm~\ref{PseudoCode}.\\ Both of these crossover operators are modified to generate two child chromosomes from two parent chromosomes by repeating the same procedure for each of them. \subsection{Mutation} \label{mutate} After crossover, the mutation operator is applied to the resulting child chromosomes. The mutation operator is used to prevent premature convergence, i.e., to avoid getting stuck at local optima, by altering certain genes of the child chromosomes with some probability.
In the presented work, two mutation operators are studied and compared: one is a bit-flip mutation operator, referred to as \textit{Mutation1}, and the other is the mutation operator proposed in \cite{10}, referred to as \textit{Mutation2}, which depends on the density of the fittest solution in the population. In Mutation1, if the $i^{th}$ gene is selected for mutation, then $b_{1i}$ is flipped from 0 to 1 or vice-versa. In Mutation2, if the $i^{th}$ gene is selected for mutation, then $b_{1i}$ is mutated from 0 to 1 with a probability equal to the percentage of 1s in the fittest individual, and vice-versa. \subsection{Feasibility-Repair Heuristic} After the crossover and mutation processes, the feasibility of the resulting child chromosomes is not guaranteed, i.e., they may or may not cover all the given flights. Hence, a feasibility-repair heuristic is required to enforce feasibility in the child chromosomes while, at the same time, maintaining their fitness as far as possible during the repair. A repair heuristic proposed in \cite{7} is adapted in this work and is modified to involve a redundant-pairing removal step at the end. In this heuristic, a pairing with the minimum quality index (given in Eq.~\ref{QI}) is selected for each uncovered flight leg. \begin{equation} \label{QI} \displaystyle QI = \dfrac{\text{Cost of } CP_i}{\text{Number of uncovered flights covered by } CP_i} \end{equation} The detailed procedure is given in \cite{7}. A redundant-pairing removal step is added to this heuristic, which tries to find and remove those pairings whose flight legs are all covered by the rest of the solution. This step is explained in lines 35-41 of Algorithm~\ref{PseudoCode}. \subsection{Fitness Evaluation} \label{fitness} The fitness function is the objective function of the problem and is used to evaluate the fitness value of a chromosome.
In CPOP, the main objective is the minimization of the total crew pairing cost while covering all flights at least once. Different airlines utilize different costing rules, making each fitness function unique. In this work, the fitness function is given in Eq.~\ref{obj}, where $P$ and $F$ are the total numbers of pairings and flights to be covered, respectively.\\ \begin{equation} \label{obj} \displaystyle \min \Big\{\sum_{j=1}^{P} c_j x_j + \Big(\sum_{j=1}^{P} \sum_{i=1}^{F} a_{ij} x_j - F\Big) \times DhdPenalty\Big\} \end{equation} \begin{equation} \label{coverage} \displaystyle \text{s.t.} \ \ \sum_{j=1}^{P} a_{ij} x_{j} \geq 1 \ \ \ \ \forall i \in \{1, \ldots, F\} \end{equation} In order to be a feasible solution, the chromosome must satisfy the flight-coverage constraints given in Eq.~\ref{coverage}. In these equations, $c_j$ is the total cost of the $j^{th}$ pairing; $x_j$ is the binary decision variable which represents whether the $j^{th}$ pairing is selected in the solution ($=1$) or not ($=0$); $a_{ij}$ is an auxiliary binary variable which represents whether the $i^{th}$ flight is covered by the $j^{th}$ pairing ($=1$) or not ($=0$); and \textit{DhdPenalty} is the deadhead penalty cost set by airlines. \subsection{Population Replacement} The last step of the GA is the population replacement step, where the surviving population from the parent and child chromosomes is selected to become the parent population for the next GA iteration, termed a \textit{generation}. There are two main population replacement approaches: generational and steady-state. In this work, the generational approach is adopted, in which the elitist population (the best \textit{n} chromosomes out of \textit{n} parent and \textit{n} child chromosomes) is passed to the next generation. \section{Computational Experiments} All the computational experiments in this work are performed with a real-world medium-scale test-case which includes 839 flights and a single crew base, \textit{DAL} (Dallas, US).
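As a minimal illustration of the fitness evaluation in Eq.~\ref{obj} and the coverage check in Eq.~\ref{coverage}, the following Python sketch (with hypothetical names and toy data, not the authors' implementation) mirrors $c_j$, $x_j$, $a_{ij}$ and \textit{DhdPenalty}:

```python
# Sketch of the fitness function and coverage constraint from the text;
# all names and data below are illustrative, not the paper's code.
def fitness(x, costs, a, dhd_penalty):
    """x[j] = 1 if pairing j is selected; a[i][j] = 1 if flight i is in pairing j."""
    pairing_cost = sum(c * xj for c, xj in zip(costs, x))
    total_coverage = sum(a[i][j] * x[j]
                         for i in range(len(a)) for j in range(len(x)))
    deadheads = total_coverage - len(a)  # over-coverage beyond the F flights
    return pairing_cost + deadheads * dhd_penalty

def is_feasible(x, a):
    """Every flight must be covered at least once (the coverage constraint)."""
    return all(sum(a_ij * xj for a_ij, xj in zip(row, x)) >= 1 for row in a)

# two flights, three candidate pairings (illustrative numbers)
a = [[1, 0, 1],      # flight 0 is covered by pairings 0 and 2
     [0, 1, 1]]      # flight 1 is covered by pairings 1 and 2
costs = [100, 120, 90]
x = [1, 0, 1]        # select pairings 0 and 2
print(is_feasible(x, a), fitness(x, costs, a, 50))  # -> True 240
```

Here flight 0 is covered twice, giving one deadhead, so the fitness is $100 + 90 + 1 \times 50 = 240$.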
This test-case is provided by GE Aviation and has been carved out from the networks of US-based big airlines (operating up to 33,000 monthly flights and up to 15 crew bases). It is found that 430,873 legal crew pairings are possible for this test-case, which is enormous in comparison to the number of pairings handled by GA-based approaches in the literature. In this work, all the algorithms are implemented using \textit{PyPy v3}, an alternative implementation of \textit{Python v3.6}, which improves the computational speed to a great extent. All computations are performed on an HP Z640 workstation (2 X Intel$^\circledR$ Xeon$^\circledR$ Processor E5-2630v3 @2.40GHz and 8-Cores/16-Threads, enabled with multi-processing capabilities). \par The parameter settings of the proposed GA, used in these experiments, are given in Table~\ref{paraSet}. \begin{table}[htbp] \vspace{-3mm} \caption{Experimental settings of GA-parameters} \vspace{-3mm} \scriptsize \begin{center} \begin{tabular}{ll} \toprule \textbf{Parameters} & \textbf{Value} \\ \midrule Population Size & $24$\\ Term\_Criterion & $5000\ seconds$ \\ Chromosome Length & $100\ +\ maxLength(initial\_pop)$\\ Crossover Rate & $0.9$\\ Mutation Rate & $3\ *\ (\dfrac{1}{Chromosome Length})$\\ \bottomrule \end{tabular} \end{center} \label{paraSet} \end{table} It is seen that on increasing the population size, the number of GA generations may decrease, but this does not affect the overall runtime because the per-generation time also increases. Due to the different calculation times of the proposed operators, the overall runtime of the GA is selected as the termination criterion instead of the number of generations, so as to carry out a fair comparison among multiple GA-runs. In \cite{16}, $(1/ChromosomeLength)$ is proposed as the lower bound for the optimal mutation rate.
With experiments, it is observed that this lower bound should be increased by a factor (3 in this work) in order to counter premature convergence. In this work, variants of GA operators are proposed which are either developed by the authors or adapted from the literature. To solve the above-mentioned test-case and similar problems, it is imperative to find the most effective combination of these operators. For this, four configurations of the GA are implemented and compared, whose structures are shown in Table~\ref{GAConfigs}. \begin{table}[htbp] \caption{Structure of GA-configurations} \vspace{-3mm} \scriptsize \begin{center} \begin{tabular}{ccccc} \toprule \multirow{2.5}{*}{\textbf{Operators}} & \multicolumn{4}{c}{\textbf{GA Configuration}} \\ \cmidrule{2-5} & \textbf{GA1} & \textbf{GA2} & \textbf{GA3} & \textbf{GA4} \\ \midrule Proposed Initialization Heuristic & & \cellcolor{black!20} & \cellcolor{black!20} & \cellcolor{black!20} \\ Mutation1 & \cellcolor{black!20} & \cellcolor{black!20} & & \\ Mutation2 & & & \cellcolor{black!20} & \cellcolor{black!20} \\ Crossover1 & \cellcolor{black!20} & \cellcolor{black!20} & \cellcolor{black!20} & \\ Crossover2 & & & & \cellcolor{black!20} \\ \bottomrule \vspace{-6mm} \end{tabular} \end{center} \label{GAConfigs} \end{table} For each of these GA configurations, 10 runs with different random seeds (uniformly distributed between 0 and 1) are performed. The experimental results of these runs are summarized in Table~\ref{GAallruns} and the comparative plots are shown in Fig.~\ref{plot1}.
\begin{table}[htbp] \vspace{-3mm} \caption{Experimental results of the GA-runs} \vspace{-3mm} \scriptsize \begin{center} \begin{tabular}{p{7mm}p{4mm}p{10mm}p{6.5mm}p{6.5mm}p{0.15mm}p{7mm}p{4mm}p{4mm}} \toprule \multirow{1.25}{*}{\textbf{Runtime}} & \multirow{2.5}{*}{\textbf{GAs}} & \multicolumn{3}{c}{\textbf{Crew Pairing Cost (USD)}} & & \multicolumn{3}{c}{\textbf{\# Deadheads}} \\ \cmidrule{3-5} \cmidrule{7-9} \multirow{1.25}{*}{\textbf{(sec)}} & & \textbf{$\pmb{\overline x \pm \sigma}$} & \textbf{Best} & \textbf{Worst} & & \textbf{$\pmb{\overline x \pm \sigma}$} & \textbf{Best} & \textbf{Worst} \\ \midrule \multirow{4}{*}{70} & GA1 & 2649823 $\pm$57559 & 2494649 & 2710084 & & 1095$\pm$45 & 977 & 1151 \\ & GA2 & 1417223 $\pm$9380 & 1398427 & 1430115 & & 156$\pm$06 & 149 & 164 \\ \midrule \multirow{8}{*}{5000} & GA1 & 980226 $\pm$23091 & 964857 & 1037504 & & 40$\pm$04 & 35 & 49 \\ & GA2 & 1195229 $\pm$225555 & 957832 & 1430115 & & 98$\pm$61 & 35 & 164 \\ & GA3 & 1192104 $\pm$228745 & 949591 & 1430115 & & 98$\pm$61 & 30 & 164 \\ & GA4 & 993209 $\pm$5337 & 987638 & 1001487 & & 09$\pm$04 & 06 & 21 \\ \bottomrule \label{GAallruns} \end{tabular} \end{center} \end{table} First, the effect of the proposed deadhead-minimizing initialization heuristic is studied. For this, the best solutions among the initial populations of GA1 and GA2 are compared (first two rows of Table~\ref{GAallruns}). It is observed that the characteristics (number of deadheads and total cost) of the best initial solution from the GA2-runs are of reasonably high quality in comparison to those of the GA1-runs. It is to be noted that the initialization runtimes for these GA-configurations are almost equal because the additional time consumed by the proposed heuristic is compensated by the time required to repair the infeasibility of the solutions from random initialization. Moreover, the GA2-runs lead to a better-cost crew pairing solution (best solution among all seeds) than the GA1-runs.
Hence, the proposed initialization heuristic is highly effective in achieving a better initial population in the same runtime. Second, the effects of the mutation operator are studied. For this, the GA-configurations GA2 (using \textit{Mutation1}) and GA3 (using \textit{Mutation2}) are compared, with results given in Table~\ref{GAallruns}. From these results, it is observed that the GA3-runs lead to a better (w.r.t. both cost and deadheads) crew pairing solution than the GA2-runs. However, the difference between them is marginal, rendering the effects of both mutation operators nearly equal. Hence, \textit{Mutation2} is considered in the following experiments. Third, the effects of the crossover operator are studied. For this, the GA-configurations GA3 (using \textit{Crossover1}) and GA4 (using \textit{Crossover2}) are compared. From the plot of GA4 in Fig.~\ref{plot1}, it is evident that \textit{Crossover2}, proposed in this work, is highly effective in reducing the number of deadheads to a large extent, and in a very short runtime. However, the cost of the final crew pairing solution from the GA4-runs is marginally poorer than that from the GA3-runs. On analyzing the crew pairings of the GA4-runs' best solution, it is found that the majority of pairings cover only a small number of flight legs; such pairings are referred to as \textit{short pairings}, and they increase the total number of pairings in the solution. With such a large number of short pairings, the solution becomes too rigid to allow compact, yet large, pairings (covering a large number of flights in an efficient way) to enter the solution; hence, the search stalls at a local optimum. \par As mentioned in Section~\ref{intro}, a large-scale column generation based airline crew pairing optimizer, referred to as \textit{CG-Optimizer}, is used to evaluate the performance of the proposed GA-configurations. Developed by the authors, \textit{CG-Optimizer} is a research output of this project which has been validated by GE Aviation.
Due to commercial restrictions by the funding sponsors, the details of \textit{CG-Optimizer} cannot be revealed. \textit{CG-Optimizer} has been used to solve a large-scale CPOP, targeting a weekly flight schedule (containing 3,202 flights, 15 crew bases, and $>$ a billion legal pairings). The best-known solution of the test-case used in this work (839 flights and 1 crew base) is carved out of the solution of this bigger test-case (3,202 flights and 15 crew bases) and is compared with the best solutions of the proposed GA-configurations in Table~\ref{bestSols}. \begin{figure}[htbp] \centering \framebox{\parbox{0.97\columnwidth}{\includegraphics[width=0.97\columnwidth, keepaspectratio]{Combined_graph_runtime_vs_cost_and_dhd5.PNG}}} \caption{Characteristic plots of the GA-runs} \vspace{-2mm} \label{plot1} \end{figure} \section{CONCLUSIONS AND FUTURE WORK} This paper proposes an efficient GA, with improved initialization and genetic operators, to solve a real-world medium-scale CPOP (839 flights, 1 crew base, and 430,873 pairings) belonging to the networks of larger airlines from the US. In this GA, the deadhead-minimizing initialization heuristic is highly effective in achieving a better initial solution ($\approx 78\%$ in cost, $\approx 555\%$ in deadheads) than randomized initialization in almost the same runtime. On studying the effects of two widely adopted mutation operators, it is seen that \textit{Mutation2} performs marginally better than \textit{Mutation1}. In this GA, a deadhead-minimizing crossover operator, \textit{Crossover2}, is also proposed, which is found to be highly effective in reducing the number of deadheads (by a large extent) in short runtimes. Another contribution of this paper is the comparison of the proposed GA with a column generation based large-scale optimizer (\textit{CG-Optimizer}), developed by the authors to solve a large-scale CPOP (3,202 flights, 15 crew bases, $>$ a billion legal pairings).
For the given medium-scale CPOP, it is seen that the gap between the results of \textit{CG-Optimizer} and all GA-configurations is more than $11\%$, making column generation a superior method for solving medium-scale and large-scale CPOPs. \begin{table}[htbp] \vspace{-2mm} \caption{Best crew pairing solutions of CG-Optimizer and GA-runs} \scriptsize \begin{center} \begin{tabular}{ccccc} \toprule \multirow{2}{*}{\textbf{Algorithms}} & \textbf{Total Cost} & \multirow{2}{*}{\textbf{\# Deadheads}} & \multirow{2}{*}{\textbf{\# Pairings}} & \textbf{\%age Gap} \\ & \textbf{(USD)} & & & \textbf{(Cost)} \\ \midrule \textit{CG-Optimizer} & 850303 & 2 & 142 & 0\\ GA1 & 964858 & 39 & 169 & 13.47\\ GA2 & 957833 & 35 & 172 & 12.65\\ GA3 & 949592 & 30 & 171 & 11.68\\ GA4 & 987639 & 09 & 242 & 16.15\\ \bottomrule \label{bestSols} \end{tabular} \end{center} \end{table} \par In the proposed crossover, \textit{Crossover2}, the greediness towards minimizing deadheads is built into its construct, biasing the GA towards selecting shorter pairings and driving the search towards local optima. Search-space expansion heuristics \cite{17} and variable mutation rates \cite{7} could be adapted for improving the performance of the proposed GA. Moreover, the independent computations in the GA (evaluation, etc.) could be parallelized by using the multiprocessing capabilities of the computational hardware. \addtolength{\textheight}{-12cm} \section*{ACKNOWLEDGMENT} The authors would like to acknowledge the invaluable support of the GE Aviation team members: Saaju Paulose (Senior Manager), Arioli Arumugam (Senior Director - Data \& Analytics), and Alla Rajesh (Senior Staff Data \& Analytics Scientist) for providing the problem definition and real-world test cases, and for sharing domain knowledge during numerous stimulating discussions which helped the authors in successfully completing this work.
\section{Introduction}\label{sec:intro} Highly charged ions represent one of the simplest and most well-understood physical systems, and yet they continue to provide an extremely rich scope of opportunities for fundamental research. They have been extensively used in past years for various high-precision tests of quantum electrodynamics, making it one of the most well-tested theories in physics \cite{Draganic2003, Gumberidze2005, Shabaev2006109, Volotka_PRL_2012, Shabaev_JPhysCh_2015}. Such high accuracy has also been employed for a precise determination of the electron mass \cite{Sturm_nature_2014}, and highly charged ions have been proposed for the determination of the fine-structure constant \cite{PhysRevLett.96.253002, Volotka_PRL_2014, PhysRevLett.116.100801} and even for the search for its hypothetical variation \cite{Andreev_PRL_2005, Berengut2010, Windberger_PRL_2015, Oreshkina_PRAR_2017, bekker2019detection}. Moreover, comparison between experimental and theoretical results can be used to test theories beyond the Standard Model by setting bounds on the parameters of new hypothetical forces~\cite{debierre2019g}. Among the various achievements in this field, the most prominent ones include measurements and calculations of the bound-electron $g$~factor in highly charged ions to an extraordinary level of precision \cite{carbon_gfactor, oxygen_gfactor, silicon_gfactor, argon_gfactor, silicon_li_gfactor, Cakir2020}. For all types of high-precision spectroscopic measurements of highly charged ions \cite{Beiersdorfer1998, TokyoEBIT, Egl2019}, the essential and most fundamental quantities are the atomic energy levels and the corresponding transition energies, which need to be known to a high level of accuracy from the theoretical side. As the experimental precision improves, nuclear-structure effects are also becoming observable and thus have to be calculated with increasing accuracy. The largest correction of this kind is due to the finite nuclear size (FNS) effect.
Analytical expressions for the FNS effect were presented in \cite{Shabaev_1993, GLAZOV2002408, Karshenboim2005, Karshenboim2018}. The FNS correction can also be calculated with higher accuracy numerically by using the Fermi distribution as a model for the nuclear charge density \cite{BEIER200079}. However, even this model does not describe the fine details of nuclear charge distributions that are unique to each nucleus. Hence, in order to perform more precise calculations of the FNS correction, it is necessary to use a more realistic nuclear-structure description and go beyond the simple Fermi model. As for other nuclear-structure corrections, we note that significant improvements in the evaluation of nuclear deformation and nuclear polarization effects have been made in recent years \cite{NEFIODOV1996227, Kozhedub2008, nucl_shape2012, nucl_pol_volotka, nucl_shape2019}. In this paper we present calculations of the FNS correction to atomic energy levels and the bound-electron $g$~factor based on a more detailed description of nuclear charge distributions. The nuclear charge densities are calculated in the framework of the Hartree-Fock method based on the Skyrme-type nuclear interaction with adjustable parameters. We employ the \verb|skyrme_rpa| program for this purpose \cite{COLO2013142}. The obtained data is then used to construct the potential of an extended nucleus and numerically solve the Dirac equation for an electron bound in this potential. The theoretically calculated nuclear charge densities are in good agreement with the experimental ones. These results pave the way for a more accurate description of nuclear-structure effects in atomic systems. The paper is organized as follows. After a brief description of the computational method, we discuss the numerical results and their dependence on the Skyrme parameters.
We compare our results to the FNS corrections obtained by using experimental nuclear charge distributions as well as simpler charge density models, such as the homogeneously charged sphere approximation and the Fermi distribution. Then we estimate the uncertainties of our calculations and also demonstrate suppression of the FNS effect for the specific differences \cite{spec_dif2002,PhysRevLett.96.253002} of $g$~factors. Relativistic system of units ($\hbar = c = 1$) and Heaviside charge units ($\alpha = e^2/4\pi, e<0$) are used throughout the paper. Three-vectors are denoted by bold letters. \section{Computational method}\label{sec:method} \subsection{Skyrme interaction and nuclear charge density}\label{sec:density} In its standard form, the Skyrme interaction between two nucleons with spatial coordinates ${\bf r}_1$ and ${\bf r}_2$ can be expressed as \cite{CHABANAT1998231}: \begin{align} V({\bf r}_1, {\bf r}_2) & = t_0 \left( 1 + \chi_0 P_\upsigma \right) \delta({\bf r}) \notag \\ & + \dfrac{1}{2} t_1 \left( 1 + \chi_1 P_\upsigma \right) \left[{\bf P}^{\dag2} \delta({\bf r}) + \delta({\bf r}) {\bf P}^2 \right] \notag \\ & + t_2 \left( 1 + \chi_2 P_\upsigma \right) {\bf P}^\dag \cdot \delta({\bf r}) {\bf P} \notag \\ & + \dfrac{1}{6} t_3 \left( 1 + \chi_3 P_\upsigma \right) \rho^\lambda ({\bf R}) \delta({\bf r}) \notag \\ & + iW_0 \left( \boldsymbol{\upsigma}_1 + \boldsymbol{\upsigma}_2 \right) \cdot \left[{\bf P}^\dag \times \delta({\bf r}) {\bf P} \right], \end{align} where ${\bf r} = {\bf r}_1 - {\bf r}_2$, ${\bf R} = \dfrac{1}{2} \left( {\bf r}_1 + {\bf r}_2 \right)$, ${\bf P} = \dfrac{1}{2i} \left( \nabla_1 - \nabla_2 \right)$ (${\bf P}^\dag$ acts to the left), $P_\upsigma = \dfrac{1}{2} \left( 1 + \boldsymbol{\upsigma}_1 \cdot \boldsymbol{\upsigma}_2 \right)$, $\upsigma_i$ with $i \in \{ 1,2,3 \}$ are the Pauli spin matrices, and $\rho$ is the total nucleon density. 
Here we note that $t_j$, $\chi_j$ ($j~\in~\{ 0,1,2,3 \}$), $W_0$ and $\lambda$ are adjustable parameters of the Skyrme force~\cite{COLO2013142}. Next, in order to derive the Hartree-Fock (HF) equations, single-particle wave functions $\{ \phi^q_i(x) \}$ are introduced, where $x$ denotes the set of spatial and spin coordinates, and the superscript $q$ is used to distinguish between the neutron ($q = n$) and proton ($q = p$) orbitals. The many-body ground-state wave function is built out of these functions as a Slater determinant, and then by means of the variational principle one can obtain the HF equations of the general form: \begin{equation} \widehat{H}(x,\phi^q_i(x)) \phi^q_i(x) = \epsilon_i \phi^q_i(x), \end{equation} where the Hamiltonian $\widehat{H}$ itself depends on the single-particle wave functions. The explicit form of the equations as well as their detailed derivation can be found in various articles, for example in \cite{Vautherin_PhysRevC.5.626}. The HF equations are solved iteratively until self-consistency to a predefined accuracy is achieved. The obtained orbitals can then be used to construct point nucleon densities, in particular, the proton density: \begin{equation} \rho_p({\bf r}) = \sum_{i,\upsigma} |\phi^p_i({\bf r},\upsigma)|^2. \end{equation} Finally, in order to obtain the nuclear charge distribution, the proton density is convolved with the Gaussian form factor $f_p(r)$ to allow for the finite size of the proton \cite{Vautherin_PhysRevC.5.626}: \begin{align} f_p(r) & = \dfrac{1}{\left( r_0 \sqrt{\pi} \right)^3} e^{-r^2/r^2_0} \ , \ r_0=0.65 \ \mathrm{fm}, \\ & \rho_c({\bf r}) = \int f_p({\bf r}-{\bf r'}) \rho_p({\bf r'}) \, \mathrm{d}^3{\bf r'}. \end{align} We note here that in the following we assume spherical symmetry of nuclear charge distributions.
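The convolution step above can be illustrated with a short numerical sketch (plain Python; the trapezoidal grid parameters are illustrative choices, not taken from the paper). It checks that the Gaussian proton form factor $f_p$ is normalized to unity and that its mean-square radius equals $3r_0^2/2$, which is exactly the amount added to the mean-square radius of the point-proton density by the folding.

```python
import math

R0 = 0.65  # fm, Gaussian form-factor width quoted in the text

def f_p(r):
    """Gaussian proton form factor f_p(r) = exp(-r^2/r0^2) / (r0 sqrt(pi))^3."""
    return math.exp(-(r / R0) ** 2) / (R0 * math.sqrt(math.pi)) ** 3

def radial_moment(rho, n, rmax=20.0, npts=4000):
    """4*pi * integral of rho(r) * r^(2+n) dr, trapezoidal rule (illustrative grid)."""
    h = rmax / npts
    s = 0.0
    for i in range(npts + 1):
        r = i * h
        w = 0.5 if i in (0, npts) else 1.0
        s += w * rho(r) * r ** (2 + n)
    return 4.0 * math.pi * s * h

norm = radial_moment(f_p, 0)   # total charge of the form factor, should be 1
msr  = radial_moment(f_p, 2)   # mean-square radius, should be 1.5 * R0**2

print(norm)   # ~1.0
print(msr)    # ~0.6338 fm^2
```

Since mean-square radii are additive under convolution of normalized densities, $\langle r^2 \rangle_c = \langle r^2 \rangle_p + 3r_0^2/2$ provides a quick consistency check on any numerically folded charge density.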
Other expressions for the nuclear charge density $\rho_c(r)$ are used in this paper for comparison purposes, and they include: \vskip 5pt \noindent 1) the homogeneously charged sphere approximation (``Sphere''): \begin{equation} \label{eq:sphere} \rho_c(r) = \begin{cases} \rho_0^{\mathrm{sphere}} & \mathrm{for} \ 0 \leq r \leq \sqrt{\dfrac{5}{3} \langle r^2 \rangle}, \\ 0 & \mathrm{otherwise}; \end{cases} \end{equation} \noindent 2) the Fermi distribution (``Fermi''): \begin{equation} \label{eq:Fermi} \rho_c(r) = \dfrac{\rho_0^{\mathrm{Fermi}}}{1+e^{(r-c)/a}}, \end{equation} with the radius parameter $c$ and the diffuseness parameter $a = 2.3/(4 \ln 3) \; \mathrm{fm}$ \cite{BEIER200079}; \noindent 3) model-independent analyses of experimental scattering data \cite{DEVRIES1987495}: \noindent a) expansion into a sum of zero-order spherical Bessel functions $j_{0}$ (``Bessel''): \begin{equation} \label{eq:Bessel} \rho_c(r) = \begin{cases} \sum\limits_{\nu} a_{\nu} j_0 \left( \nu \pi r/R \right) & \mathrm{for} \ 0 < r \leq R, \\ 0 & \mathrm{otherwise}, \end{cases} \end{equation} where $R$ is the cutoff radius; \noindent b) expansion into a sum of Gaussians (``Gauss''): \begin{align} \label{eq:Gauss} & \rho_c(r) = \sum_i A_i \left( e^{- \left[ (r-R_i)/ \gamma \right]^2} + e^{- \left[ (r+R_i)/ \gamma \right]^2} \right), \\ A_i & = Q_i \left[ 2 \pi^{3/2} \gamma^3 \left( 1 + 2R^2_i / \gamma^2 \right) \right]^{-1}, \quad \sum_i Q_i = 1, \notag \end{align} where $R_{i}$ and $Q_{i}$ are the positions and the amplitudes of the Gaussians, respectively, and the parameter $\gamma$ is related to the root-mean-square radius $R_{\mathrm{G}}$ of the Gaussians as follows: $R_{\mathrm{G}} = \gamma \sqrt{3/2}$.
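As a minimal illustration of these models, the sketch below (plain Python; the grid sizes and the bisection bracket are illustrative assumptions) builds the ``Sphere'' and ``Fermi'' densities normalized to unit total charge and tunes the Fermi radius parameter $c$ by bisection so that the RMS radius matches the value 5.5012~fm used later for \ce{^{208}_{82}Pb}.

```python
import math

RMS = 5.5012                           # fm, experimental RMS radius of 208Pb (from the text)
A_DIFF = 2.3 / (4.0 * math.log(3.0))   # fm, Fermi diffuseness parameter used in the text

def radial_int(rho, n, rmax=15.0, npts=6000):
    """4*pi * integral of rho(r) * r^(2+n) dr (trapezoidal rule)."""
    h = rmax / npts
    s = sum((0.5 if i in (0, npts) else 1.0) * rho(i * h) * (i * h) ** (2 + n)
            for i in range(npts + 1))
    return 4.0 * math.pi * s * h

def rms_of(rho):
    """RMS radius of a unit-normalized density."""
    return math.sqrt(radial_int(rho, 2))

# 1) homogeneously charged sphere with R = sqrt(5/3) * RMS, normalized analytically
R_sph = math.sqrt(5.0 / 3.0) * RMS
rho0_sphere = 3.0 / (4.0 * math.pi * R_sph ** 3)
sphere = lambda r: rho0_sphere if r <= R_sph else 0.0

# 2) Fermi distribution, normalized numerically; c tuned so <r^2> matches RMS^2
def fermi_density(c):
    raw = lambda r: 1.0 / (1.0 + math.exp((r - c) / A_DIFF))
    n = radial_int(raw, 0)
    return lambda r: raw(r) / n

lo, hi = 4.0, 8.0                      # illustrative bracket for the bisection on c
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if rms_of(fermi_density(mid)) < RMS:
        lo = mid
    else:
        hi = mid
fermi = fermi_density(0.5 * (lo + hi))

print(rms_of(sphere))   # ~5.50 fm
print(rms_of(fermi))    # ~5.5012 fm
```

The one-parameter bisection here mirrors, in a toy setting, the adjustment of a single parameter to reproduce a target RMS radius that is applied to the Skyrme force in Section~\ref{sec:parameters}.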
In this paper, the constants $\rho_0^{\mathrm{sphere}}$ and $\rho_0^{\mathrm{Fermi}}$ as well as the coefficients $a_{\nu}$ and $Q_i$ in Eqs.~\eqref{eq:sphere}~--~\eqref{eq:Gauss} are chosen to fulfil the following normalization condition: $4\pi \int_{0}^{\infty} \rho_c(r) r^{2} \, \mathrm{d}r = 1$. \subsection{Dirac equation}\label{sec:Dirac} Once the nuclear charge density $\rho_c(r)$ is known, one can construct the potential describing the interaction between an electron and the nucleus as follows \cite{reiher2009relativistic}: \begin{align} V(r) = \dfrac{-4\pi \alpha Z}{r} \int_{0}^{r} & \rho_c(r') r'^{2} \, \mathrm{d}r' \notag \\ & - 4\pi \alpha Z \int_{r}^{\infty} \rho_c(r')r' \, \mathrm{d}r', \end{align} where $Z$ is the nuclear charge, and $\alpha$ is the fine-structure constant. This potential then enters the Dirac equation, which determines the energy levels $E$ and the four-component wave functions $\psi ({\bf r})$ of a bound electron \cite{greiner2000relativistic}: \begin{equation} \label{eq:Dirac} \left[ \boldsymbol{\alpha} \cdot {\bf p} + \beta m_{e} + V(r) \right] \psi ({\bf r}) = E \psi ({\bf r}), \end{equation} where $\boldsymbol{\alpha}$ and $\beta$ are the usual Dirac matrices, and $m_{e}$ is the electron mass. For an arbitrary central potential the electron wave function splits into radial and angular parts as: \begin{equation} \label{eq:Psi} \psi_{n \kappa m}({\bf r}) = \dfrac{1}{r} \left( \begin{array}{c} G_{n \kappa}(r)\Omega_{\kappa m}(\theta,\varphi) \\[3pt] iF_{n \kappa}(r)\Omega_{-\kappa m}(\theta,\varphi) \end{array} \right), \end{equation} where $n$ is the principal quantum number, $\kappa$ is the relativistic angular momentum quantum number, and $m$ is the total magnetic quantum number. The spherical spinors $\Omega_{\pm \kappa m}(\theta,\varphi)$ are the same for any central potential and are well known \cite{johnson2007atomic}.
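The two-integral formula for $V(r)$ above can be checked against the known closed form for a homogeneously charged sphere. The sketch below (plain Python; the value of $\alpha$, the sphere radius and the quadrature grid are assumptions made for illustration) evaluates both expressions and compares them inside and outside the nucleus.

```python
import math

ALPHA_Z = 82.0 / 137.035999   # Z*alpha for lead (assumed value of alpha)
R = 7.1                       # fm, sphere radius (illustrative)

rho0 = 3.0 / (4.0 * math.pi * R ** 3)       # unit total charge
rho = lambda r: rho0 if r <= R else 0.0

def V(r, rmax=40.0, npts=8000):
    """V(r) = -4*pi*alpha*Z * [ (1/r) Int_0^r rho r'^2 dr' + Int_r^inf rho r' dr' ]."""
    h = rmax / npts
    inner = outer = 0.0
    for i in range(npts + 1):
        rp = i * h
        w = (0.5 if i in (0, npts) else 1.0) * h
        if rp <= r:
            inner += w * rho(rp) * rp ** 2
        else:
            outer += w * rho(rp) * rp
    return -4.0 * math.pi * ALPHA_Z * (inner / r + outer)

def V_exact(r):
    """Analytic potential of a uniformly charged sphere."""
    if r >= R:
        return -ALPHA_Z / r
    return -ALPHA_Z / (2.0 * R) * (3.0 - (r / R) ** 2)

for r in (0.5, 3.0, R, 15.0):
    print(r, V(r), V_exact(r))   # numeric and analytic values agree closely
```

Outside the charge distribution the potential reduces to the point-nucleus Coulomb form $-Z\alpha/r$, which is what makes the FNS correction a short-range effect sensitive mainly to $s$ and $p_{1/2}$ states.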
Hence, the problem can be reduced to the following set of radial Dirac equations: \begin{align} \dfrac{\mathrm{d}G}{\mathrm{d}r} + \dfrac{\kappa}{r}G(r) - \left[ m_{e}-V(r) \right] F(r) & = EF(r), \notag \\ -\dfrac{\mathrm{d}F}{\mathrm{d}r} + \dfrac{\kappa}{r}F(r) + \left[ m_{e}+V(r) \right] G(r) & = EG(r), \end{align} where the radial functions $G(r)$ and $F(r)$ satisfy the normalization condition: $\int_{0}^{\infty} \left[ G(r)^{2} + F(r)^{2} \right] \, \mathrm{d}r = 1$. The radial wave functions $G(r)$ and $F(r)$ can then be found analytically for the Coulomb potential \cite{greiner2000relativistic} or, in the general case, numerically, for example, by expanding them in terms of B-splines and solving the resulting generalized matrix eigenvalue equations \cite{johnson2007atomic}. In our basis-set numerical calculations of the radial wave functions we used the dual-kinetic-balance approach \cite{DKB}. In order to obtain the FNS correction to atomic energy levels, the numerically calculated values $E_{\rm{ext}}[n\kappa]$ (in the case of an extended nucleus) are compared to the exact analytical solution $E_{\mathrm{point}}[n\kappa]$ for the Coulomb potential $V(r) = -Z \alpha/r$ (i.e., a point-like nucleus): \begin{align} & \Delta E_{\mathrm{FNS}}[n\kappa] = E_{\rm{ext}}[n\kappa] - E_{\mathrm{point}}[n\kappa], \\ E_{\mathrm{point}}[n\kappa] & = m_{e} \left[ 1 + \dfrac{(Z\alpha)^2}{\left( n-|\kappa|+\sqrt{\kappa^2-(Z\alpha)^2} \right)^2} \right]^{-1/2}.
\notag \end{align} \subsection{Bound-electron $\boldsymbol{g}$ factor}\label{sec:gfactor} Most generally, a $g$~factor relates the electron's magnetic moment $\boldsymbol{\mu}$ (in units of the Bohr magneton $\mu_{\rm{B}}=|e|/2m_{e}$) to its angular momentum $\boldsymbol{M}$: \begin{equation} \dfrac{\boldsymbol{\mu}}{\mu_{\rm{B}}} = -g \boldsymbol{M}, \ \mathrm{e.g.} \ \ \dfrac{\boldsymbol{\mu_{l}}}{\mu_{\rm{B}}} = -g_{l} \boldsymbol{l} \ \ \mathrm{and} \ \ \dfrac{\boldsymbol{\mu_{s}}}{\mu_{\rm{B}}} = -g_{s} \boldsymbol{s}, \end{equation} where $\boldsymbol{l}$ is the orbital angular momentum, and $\boldsymbol{s}$ is the spin angular momentum. In the Dirac theory, i.e. without taking into account the radiative corrections, $g_{s}=2$ for a free electron, and $g_{l}$ is known to be exactly 1 \cite{greiner2000quantum}. Thus, the interaction Hamiltonian $\widehat{H}_{\mathrm{int}}$ for an electron in an external homogeneous magnetic field $\boldsymbol{B}=(0,0,B_{z})$ can be expressed as: \begin{equation} \widehat{H}_{\mathrm{int}} = -\boldsymbol{\mu}_{\mathrm{total}} \boldsymbol{\cdot} \boldsymbol{B} = \mu_{\mathrm{B}}(g_{l} \boldsymbol{l} + g_{s} \boldsymbol{s}) \boldsymbol{\cdot} \boldsymbol{B}. \end{equation} The corresponding first-order Zeeman splitting $\Delta E$ can then be written by introducing a new $g$~factor, also called the Land\'{e} $g$~factor: \begin{equation} \label{eq:Zeeman1} \Delta E = \expval{\widehat{H}_{\mathrm{int}}}{n \kappa m} = g \mu_{\rm{B}} B_{z} m. \end{equation} We note that Eq. \eqref{eq:Zeeman1} is written in such a way as to have the same form as for the simpler case where $l=0$, and it can be considered as a definition of the Land\'{e} $g$~factor of a bound electron.
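Before turning to the magnetic interaction, the closed-form point-nucleus energies $E_{\mathrm{point}}[n\kappa]$ quoted at the end of Sec.~\ref{sec:Dirac} can be evaluated directly. The sketch below (plain Python; the value of $\alpha$ is an assumption made for illustration) verifies the $1s_{1/2}$ special case $E = m_e\sqrt{1-(Z\alpha)^2}$ and the point-nucleus degeneracy of the $2s_{1/2}$ and $2p_{1/2}$ levels.

```python
import math

ALPHA = 1.0 / 137.035999   # assumed value of the fine-structure constant

def e_point(n, kappa, Z, m_e=1.0):
    """Point-nucleus Dirac energy E_point[n, kappa] in units of m_e."""
    za = Z * ALPHA
    gamma = math.sqrt(kappa ** 2 - za ** 2)
    return m_e / math.sqrt(1.0 + (za / (n - abs(kappa) + gamma)) ** 2)

Z = 82
e1s = e_point(1, -1, Z)   # 1s_{1/2}
e2s = e_point(2, -1, Z)   # 2s_{1/2}
e2p = e_point(2, +1, Z)   # 2p_{1/2}

# For n=1, kappa=-1 the general formula reduces to m_e * sqrt(1 - (Z*alpha)^2)
print(e1s, math.sqrt(1.0 - (Z * ALPHA) ** 2))
# 2s_{1/2} and 2p_{1/2} share |kappa| = 1, hence are degenerate for a point nucleus
print(e2s, e2p)
```

It is precisely this point-nucleus degeneracy of $2s_{1/2}$ and $2p_{1/2}$ that the FNS correction lifts, since the $s$-state wave function has a much larger overlap with the nuclear interior.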
On the other hand, the electromagnetic four-potential $A^{\mu}$ can be chosen in the form $( 0, \, \boldsymbol{A}(\boldsymbol{r})=[ \boldsymbol{B} \times \boldsymbol{r} ]/2 )$, and an application of the minimal coupling principle to the Dirac equation~\eqref{eq:Dirac} implies: \begin{equation} \widehat{H}_{\mathrm{int}}' = -e \boldsymbol{\alpha} \boldsymbol{\cdot} \boldsymbol{A}(\boldsymbol{r}) = |e| \boldsymbol{\alpha} \boldsymbol{\cdot} \boldsymbol{A}(\boldsymbol{r}). \end{equation} In this way, first-order perturbation theory gives: \begin{align} \label{eq:Zeeman2} \Delta E & = \dfrac{|e|}{2} \expval{\boldsymbol{\alpha} \boldsymbol{\cdot} [ \boldsymbol{B} \times \boldsymbol{r} ]}{n \kappa m} \notag \\ & = \dfrac{|e|}{2} B_{z} \expval{[ \boldsymbol{r} \times \boldsymbol{\alpha} ]_{z}}{n \kappa m}. \end{align} A calculation of the matrix element in Eq. \eqref{eq:Zeeman2} using the wave functions of the form \eqref{eq:Psi} \cite{rose1961relativistic} and then taking into account Eq. \eqref{eq:Zeeman1} yields the following general formula for the $g$~factor: \begin{equation} \label{eqn:gext} g_{\mathrm{ext}} [n \kappa] = \dfrac{2\kappa m_{e}}{j(j+1)} \int_{0}^{\infty} G_{n \kappa}(r)F_{n \kappa}(r)r \, \mathrm{d}r, \end{equation} where $j=|\kappa|-1/2$ is the total angular momentum quantum number. In the case of the Coulomb potential $V(r) = -Z \alpha/r$ an~exact analytical calculation can be performed \cite{zapryagaev1979zeeman}, and the result reads: \begin{equation} \label{eqn:gpoint} g_{\mathrm{point}} [n \kappa] = \dfrac{\kappa}{j(j+1)} \left( \kappa \dfrac{E_{\mathrm{point}} [n\kappa]}{m_{e}} - \dfrac{1}{2} \right). \end{equation} Finally, the FNS correction to the $g$ factor for a state $n \kappa$ is obtained by taking the difference between (\ref{eqn:gext}) and (\ref{eqn:gpoint}): \begin{equation} \Delta g_{\mathrm{FNS}} [n \kappa] = g_{\mathrm{ext}} [n \kappa] - g_{\mathrm{point}} [n \kappa]. 
\end{equation} Other contributions to the $g$~factor are summarized, e.g., in Refs.~\cite{BEIER200079, ShabaevReview2015, Harman_2018}. \section{Results and discussion}\label{sec:results} \subsection{Choice of Skyrme parametrization}\label{sec:parameters} \begin{table*}[t] \caption{Comparison between the parameters $t_1$, $\chi_0$ and $\chi_3$ from the LNS, SLy5 and SKP Skyrme parameter sets as~well~as the corresponding calculated values of the RMS nuclear radius of the \ce{^{208}_{82}Pb} nucleus. The FNS corrections to the ground-state energy $\Delta E_{\mathrm{FNS}}[1s_{1/2}]$ (in units of the electron's rest energy) and $g$~factor $\Delta g_{\mathrm{FNS}}[1s_{1/2}]$ for hydrogen-like lead~\ce{^{208}_{82}Pb^{81+}} are presented in the last two columns. For comparison, the results for the homogeneously charged sphere approximation are also included in the last row.} \label{table:parameters} {\renewcommand{\arraystretch}{1.5} \renewcommand{\tabcolsep}{0.3cm} \begin{tabular}{ccccccc} \hline \hline Parameter set & $t_1$ & $\chi_0$ & $\chi_3$ & $\sqrt{\langle r^2 \rangle}$, fm & $\Delta E_{\mathrm{FNS}}[1s_{1/2}] \times 10^4$ & $\Delta g_{\mathrm{FNS}}[1s_{1/2}] \times 10^4$ \\ \hline LNS & 266.735 & 0.06277 & -0.03413 & 5.3238 & 1.2483 & 4.3014 \\ SLy5 & 483.13 & 0.778 & 1.267 & 5.5072 & 1.3169 & 4.5369 \\ SKP & 320.62 & 0.29215 & 0.18103 & 5.5242 & 1.3234 & 4.5590 \\ \hline Sphere & - & - & - & 5.5012 & 1.3172 & 4.5380 \\ \hline \hline \end{tabular}} \end{table*} \begin{table*}[!hbtp] \caption{Modifications of the $t_0$ Skyrme parameter within the LNS, SLy5 and SKP parametrizations and the corresponding FNS corrections to the ground-state energy $\Delta E_{\mathrm{FNS}}[1s_{1/2}]$ (in units of the electron's rest energy) and $g$~factor $\Delta g_{\mathrm{FNS}}[1s_{1/2}]$ for hydrogen-like lead~\ce{^{208}_{82}Pb^{81+}}.} \label{table:adjust} {\renewcommand{\arraystretch}{1.5} \renewcommand{\tabcolsep}{0.3cm} \begin{tabular}{cr@{}lcc} \hline \hline Parameter set &
\multicolumn{2}{c}{Change in $t_0$} & $\Delta E_{\mathrm{FNS}}[1s_{1/2}] \times 10^4$ & $\Delta g_{\mathrm{FNS}}[1s_{1/2}] \times 10^4$ \\ \hline LNS & -2484.&97 $\rightarrow$ -2454.60 (1.22\%) & 1.3148 & 4.5296 \\ SLy5 & -2484.&88 $\rightarrow$ -2486.12 (0.05\%) & 1.3147 & 4.5291 \\ SKP & -2931.&70 $\rightarrow$ -2935.95 (0.15\%) & 1.3147 & 4.5291 \\ \hline \hline \end{tabular}} \end{table*} First, let us discuss the influence of the choice of a Skyrme parameter set on the computational results. For this purpose, we consider the FNS correction to the ground-state ($1s_{1/2}$) energy and $g$~factor for hydrogen-like lead \ce{^{208}_{82}Pb^{81+}}. In Table \ref{table:parameters} three different widely used parametrizations (known as LNS, SLy5 and SKP \cite{COLO2013142}) are compared. In the first three columns of Table \ref{table:parameters} some selected Skyrme parameters are presented in order to illustrate large differences between these parameter sets. These differences can be seen even more clearly by comparing the values of root-mean-square (RMS) nuclear radius obtained by using each of the parameter sets. As a result, the FNS corrections presented in the last two columns also vary significantly in such a way that the results can turn out to be larger or smaller than the FNS corrections obtained in the homogeneously charged sphere approximation (using the RMS radius value of 5.5012 fm for \ce{^{208}_{82}Pb} \cite{ANGELI201369}). However, it is well known that the magnitude of the FNS correction is highly influenced by the value of RMS nuclear radius \cite{Shabaev_1993, GLAZOV2002408}. Hence, it is natural to adjust Skyrme parameters to reproduce the experimental RMS radius beforehand and only then calculate the FNS correction. We found that the RMS radius is most sensitive with respect to varying the Skyrme parameter $t_0$. 
In Table \ref{table:adjust} the results of such adjustments in $t_0$ (to obtain $\sqrt{\langle r^2 \rangle}=5.5012 \ \mathrm{fm}$) are shown. It can be seen that once the value of the RMS radius is fixed, the calculated magnitudes of the FNS corrections become stable, despite the significant differences between the parameter sets. We tested this observation on a wide range of ions and parametrizations, and we found that the ambiguity in the choice of a Skyrme parameter set was largely suppressed in all cases simply by adjusting the RMS nuclear radius. All the FNS corrections presented in the following discussion were obtained by using the SLy5 parameter set, one of the most widely used parametrizations of the Skyrme force, and the parameter $t_0$ was adjusted to reproduce the experimental values of RMS nuclear radii in each particular case. \subsection{Energy levels and importance of the RMS radius}\label{sec:FS} \begin{figure}[!htbp] \centering \includegraphics[width=8.55cm]{Ca_rho.pdf} \caption{(color online) Comparison between an experimental (``Gauss'') and different model charge distributions for the \ce{^{40}_{20}Ca} nucleus. The names of the distributions are explained in Section \ref{sec:density}.}% \label{fig:Ca_charge} \end{figure} \begin{table*}[!htbp] \caption{Finite nuclear size (FNS) corrections $\Delta E_{\mathrm{FNS}}$ (in units of the electron's rest energy) to the energies of the states $1s_{1/2}$, $2s_{1/2}$ and $2p_{1/2}$ for highly charged hydrogen-like ions \ce{^{40}_{20}Ca^{19+}}, \ce{^{116}_{50}Sn^{49+}} and \ce{^{208}_{82}Pb^{81+}}. Different models of the nuclear charge distributions were used in the calculations. The presented calculation uncertainties are due to the experimental uncertainties in RMS nuclear radii \cite{ANGELI201369}.
The names of the distributions are explained in Section \ref{sec:density}.} \label{table:FS} {\renewcommand{\arraystretch}{1.5} \renewcommand{\tabcolsep}{0.3cm} \begin{tabular}{clccc} \hline \hline \multirow{2}{*}{\ce{^{40}_{20}Ca^{19+}}} & \multirow{2}{*}{} & $\Delta E_{\mathrm{FNS}}[1s_{1/2}]$ & $\Delta E_{\mathrm{FNS}}[2s_{1/2}]$ & $\Delta E_{\mathrm{FNS}}[2p_{1/2}]$ \\ && $\times 10^{8}$ & $\times 10^{9}$ & $\times 10^{11}$ \\ \hline & Sphere & 2.8514 & 3.6319 & 1.4696 \\ & Fermi & 2.8502 & 3.6304 & 1.4692 \\ & Skyrme & 2.8502 & 3.6303 & 1.4690 \\ & Bessel & 2.8057 & 3.5737 & 1.4461 \\ & Gauss & 2.8535 & 3.6345 & 1.4708 \\ & Skyrme (+radius unc.) & \phantom{()}2.850(3) & \phantom{()}3.630(4) & \phantom{()}1.469(2) \\ \hline \multirow{2}{*}{\ce{^{116}_{50}Sn^{49+}}} & \multirow{2}{*}{} & $\Delta E_{\mathrm{FNS}}[1s_{1/2}]$ & $\Delta E_{\mathrm{FNS}}[2s_{1/2}]$ & $\Delta E_{\mathrm{FNS}}[2p_{1/2}]$ \\ && $\times 10^{6}$ & $\times 10^{7}$ & $\times 10^{8}$ \\ \hline & Sphere & 3.7906 & 5.3366 & 1.4456 \\ & Fermi & 3.7859 & 5.3299 & 1.4439 \\ & Skyrme & 3.7860 & 5.3301 & 1.4439 \\ & Gauss & 3.7884 & 5.3334 & 1.4448 \\ & Skyrme (+radius unc.) & \phantom{()}3.786(3) & \phantom{()}5.330(4) & \phantom{()}1.444(1) \\ \hline \multirow{2}{*}{\ce{^{208}_{82}Pb^{81+}}} & \multirow{2}{*}{} & $\Delta E_{\mathrm{FNS}}[1s_{1/2}]$ & $\Delta E_{\mathrm{FNS}}[2s_{1/2}]$ & $\Delta E_{\mathrm{FNS}}[2p_{1/2}]$ \\ && $\times 10^{4}$ & $\times 10^{5}$ & $\times 10^{6}$ \\ \hline & Sphere & 1.3172 & 2.2871 & 1.9590 \\ & Fermi & 1.3147 & 2.2827 & 1.9554 \\ & Skyrme & 1.3147 & 2.2827 & 1.9554 \\ & Bessel & 1.3155 & 2.2842 & 1.9566 \\ & Gauss & 1.3155 & 2.2842 & 1.9566 \\ & Skyrme (+radius unc.) 
& \phantom{(5)}1.3147(4) & \phantom{(5)}2.2827(9) & \phantom{(5)}1.9554(7) \\ \hline \hline \end{tabular}} \end{table*} In Table \ref{table:FS} we present the FNS corrections $\Delta E_{\rm{FNS}}[1s_{1/2}]$, $\Delta E_{\rm{FNS}}[2s_{1/2}]$ and $\Delta E_{\rm{FNS}}[2p_{1/2}]$ calculated by using different nuclear charge distributions for three hydrogen-like ions: \ce{^{40}_{20}Ca^{19+}}, \ce{^{116}_{50}Sn^{49+}} and \ce{^{208}_{82}Pb^{81+}}. The FNS corrections in the ``Bessel'' and ``Gauss'' rows correspond to experimental charge densities. Such densities are obtained by expanding $\rho_c(r)$ into sums of spherical zero-order Bessel functions or Gaussians according to Eqs.~\eqref{eq:Bessel}~--~\eqref{eq:Gauss} and fitting the expansion coefficients (as well as any other parameters) to the measured cross sections. All the values of the fitting parameters used in this paper were taken from Ref.~\cite{DEVRIES1987495}. We note that for the \ce{^{208}_{82}Pb} nucleus two sets of the ``Bessel'' coefficients are known \cite{Fr77b, Eu78}, and for simplicity we present here the results only for the parameters from Ref.~\cite{Eu78}. The parameters of all the theoretical charge distributions were adjusted to yield the following experimental values of RMS nuclear radii: $\sqrt{\langle r^2 \rangle}=3.4776(19), \ 4.6250(19) \ \mathrm{and} \ 5.5012(13) \ \mathrm{fm}$ for \ce{^{40}_{20}Ca^{19+}}, \ce{^{116}_{50}Sn^{49+}} and \ce{^{208}_{82}Pb^{81+}}, respectively \cite{ANGELI201369}. One peculiar feature can be immediately seen from the results presented in Table \ref{table:FS}: the values obtained in the ``Fermi'' and ``Skyrme'' models agree with each other much better than with the ``experimental values''.
The explanation for this observation comes from the fact that the value of the RMS nuclear radius turns out to be a crucial input parameter, and the experimental charge distributions do not reproduce RMS radii to the current level of precision (as used in the ``Sphere'', ``Fermi'' and ``Skyrme'' calculations). This interesting effect can be seen more clearly from Figure~\ref{fig:Ca_charge}, where we compare different nuclear charge distributions employed in the calculations for the \ce{^{40}_{20}Ca^{19+}} ion. It is instructive to note that despite the fact that the Skyrme and experimental charge distributions are in excellent agreement with each other, the difference in the corresponding FNS corrections is even larger than that between the ``Skyrme'' and ``Sphere'' values. This surprising result simply comes from the fact that the experimental ``Gauss'' distribution yields $\sqrt{\langle r^2 \rangle}=3.4797 \ \mathrm{fm}$ instead of 3.4776~fm, and it emphasizes the great influence of the RMS nuclear radius on the magnitude of the FNS effect. The observation described above suggests a straightforward way to estimate the calculation uncertainties for the FNS corrections. Since the RMS nuclear radius turns out to be the main source of uncertainty, one can simply vary the value of the RMS radius within its experimental error bars in the Skyrme model (by varying the $t_0$ parameter) and calculate the corresponding variation in~$\Delta E_{\rm{FNS}}$ or $\Delta g_{\rm{FNS}}$. The calculation uncertainties obtained in such a manner are presented in Tables \ref{table:FS} and \ref{table:specific}.
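This error-propagation procedure can also be approximated analytically: for $s$ states the FNS correction scales to leading order as $R^{2\gamma}$ with $\gamma = \sqrt{1-(Z\alpha)^2}$ \cite{Shabaev_1993}, so the relative uncertainty is roughly $2\gamma\,\delta R/R$. The sketch below (plain Python; the scaling law is an approximation, not the full numerical procedure used in the paper) reproduces the order of magnitude of the quoted uncertainty for \ce{^{208}_{82}Pb^{81+}}.

```python
import math

ALPHA = 1.0 / 137.035999   # assumed value of the fine-structure constant

def radius_uncertainty(delta_e, Z, R, dR):
    """Propagate the RMS-radius error bar through the leading-order
    scaling dE_FNS ~ R^(2*gamma) for s states (an approximation)."""
    gamma = math.sqrt(1.0 - (Z * ALPHA) ** 2)
    return delta_e * 2.0 * gamma * dR / R

# 208Pb^81+, 1s_{1/2}: dE_FNS = 1.3147e-4 m_e and R = 5.5012(13) fm (from the text)
unc = radius_uncertainty(1.3147e-4, 82, 5.5012, 0.0013)
print(unc)   # ~5e-8, i.e. the same order as the quoted uncertainty 1.3147(4)e-4
```

The estimate agrees in magnitude with the uncertainty obtained by the direct variation of $t_0$, which supports the statement that the RMS radius dominates the error budget.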
\subsection{$\boldsymbol{g}$ factor and cancellation of the FNS effect in specific differences}\label{sec:specific} \begin{table*}[!htbp] \caption{Finite nuclear size (FNS) corrections $\Delta g_{\mathrm{FNS}}$ to the $g$~factors in $1s_{1/2}$, $2s_{1/2}$ and $2p_{1/2}$ states for highly charged hydrogen-like ions \ce{^{40}_{20}Ca^{19+}}, \ce{^{116}_{50}Sn^{49+}} and \ce{^{208}_{82}Pb^{81+}}. In the last two columns the magnitudes of the remaining FNS contribution to the specific differences $g'_{s}$ and $g'_{p}$ are presented. Different models of the nuclear charge distributions were used in the calculations. The presented calculation uncertainties are due to the experimental uncertainties in RMS nuclear radii \cite{ANGELI201369}. The names of the distributions are explained in Section \ref{sec:density}.} \label{table:specific} {\renewcommand{\arraystretch}{1.5} \renewcommand{\tabcolsep}{0.25cm} \begin{tabular}{clccccc} \hline \hline \multirow{2}{*}{\ce{^{40}_{20}Ca^{19+}}} & \multirow{2}{*}{} & $\Delta g_{\mathrm{FNS}}[1s_{1/2}]$ & $\Delta g_{\mathrm{FNS}}[2s_{1/2}]$ & $\Delta g_{\mathrm{FNS}}[2p_{1/2}]$ & $\Delta g'_{\mathrm{FNS}}[1s_{1/2}, 2s_{1/2}]$ & $\Delta g'_{\mathrm{FNS}}[1s_{1/2}, 2p_{1/2}]$ \\ && $\times 10^{7}$ & $\times 10^{8}$ & $\times 10^{11}$ & $\times 10^{13}$ & $\times 10^{13}$ \\ \hline & Sphere & 1.1316 & 1.4413 & 5.8293 & -2.0 & \phantom{-}0.5 \\ & Fermi & 1.1311 & 1.4407 & 5.7672 & -1.0 & -5.4 \\ & Skyrme & 1.1311 & 1.4406 & 5.8504 & -5.1 & \phantom{-}5.0 \\ & Bessel & 1.1134 & 1.4182 & 5.7560 & \phantom{-}1.5 & \phantom{-}2.5 \\ & Gauss & 1.1324 & 1.4423 & 5.8395 & -2.0 & \phantom{-}1.2 \\ & Skyrme & \multirow{2}{*}{\phantom{()}1.131(1)} & \multirow{2}{*}{\phantom{()}1.441(1)} & \multirow{2}{*}{\phantom{.}5.85(2)} & \multirow{2}{*}{$-$} & \multirow{2}{*}{$-$} \\ & (+radius unc.) 
&&&&& \\ \hline \multirow{2}{*}{\ce{^{116}_{50}Sn^{49+}}} & \multirow{2}{*}{} & $\Delta g_{\mathrm{FNS}}[1s_{1/2}]$ & $\Delta g_{\mathrm{FNS}}[2s_{1/2}]$ & $\Delta g_{\mathrm{FNS}}[2p_{1/2}]$ & $\Delta g'_{\mathrm{FNS}}[1s_{1/2}, 2s_{1/2}]$ & $\Delta g'_{\mathrm{FNS}}[1s_{1/2}, 2p_{1/2}]$ \\ && $\times 10^{5}$ & $\times 10^{6}$ & $\times 10^{8}$ & $\times 10^{10}$ & $\times 10^{10}$ \\ \hline & Sphere & 1.4426 & 2.0308 & 5.5116 & -7.32 & 3.87 \\ & Fermi & 1.4407 & 2.0282 & 5.5050 & -7.41 & 3.92 \\ & Skyrme & 1.4408 & 2.0282 & 5.5052 & -7.40 & 3.91 \\ & Gauss & 1.4417 & 2.0295 & 5.5086 & -7.41 & 3.92 \\ & Skyrme & \multirow{2}{*}{\phantom{()}1.441(1)} & \multirow{2}{*}{\phantom{()}2.028(2)} & \multirow{2}{*}{\phantom{()}5.505(5)} & \multirow{2}{*}{$-$} & \multirow{2}{*}{$-$} \\ & (+radius unc.) &&&&& \\ \hline \multirow{2}{*}{\ce{^{208}_{82}Pb^{81+}}} & \multirow{2}{*}{} & $\Delta g_{\mathrm{FNS}}[1s_{1/2}]$ & $\Delta g_{\mathrm{FNS}}[2s_{1/2}]$ & $\Delta g_{\mathrm{FNS}}[2p_{1/2}]$ & $\Delta g'_{\mathrm{FNS}}[1s_{1/2}, 2s_{1/2}]$ & $\Delta g'_{\mathrm{FNS}}[1s_{1/2}, 2p_{1/2}]$ \\ && $\times 10^{4}$ & $\times 10^{5}$ & $\times 10^{6}$ & $\times 10^{7}$ & $\times 10^{7}$ \\ \hline & Sphere & 4.5380 & 7.8734 & 6.7814 & -2.271 & 1.138 \\ & Fermi & 4.5292 & 7.8579 & 6.7687 & -2.278 & 1.141 \\ & Skyrme & 4.5291 & 7.8579 & 6.7687 & -2.278 & 1.141 \\ & Bessel & 4.5321 & 7.8630 & 6.7731 & -2.280 & 1.142 \\ & Gauss & 4.5320 & 7.8629 & 6.7730 & -2.280 & 1.142 \\ & Skyrme & \multirow{2}{*}{\phantom{()}4.529(2)} & \multirow{2}{*}{\phantom{()}7.858(3)} & \multirow{2}{*}{\phantom{()}6.769(2)} & \multirow{2}{*}{$-$} & \multirow{2}{*}{$-$} \\ & (+radius unc.) &&&&& \\ \hline \hline \end{tabular}} \end{table*} In general, the same trends as described above for the energy levels hold true also in the case of the FNS corrections to the bound-electron $g$~factor.
In this last section we additionally consider the specific differences of the $g$~factors in $1s_{1/2}$ and $2s_{1/2}$ states, as well as in $1s_{1/2}$ and $2p_{1/2}$ states, for all the charge distributions mentioned above. These quantities were introduced \cite{spec_dif2002,PhysRevLett.96.253002} with the aim of suppressing the FNS effect. Thus, one can expect the specific differences to have more stable values with respect to the choice of nuclear charge distribution. The specific differences are defined as follows: \begin{align} g'_{s} & = g[\mathrm{2}s_{1/2}] - \xi_{s} g[\mathrm{1}s_{1/2}], \quad \xi_{s} = \dfrac{\Delta g_{\mathrm{FNS}}[2s_{1/2}]}{\Delta g_{\mathrm{FNS}}[1s_{1/2}]}, \notag \\ g'_{p} & = g[\mathrm{2}p_{1/2}] - \xi_{p} g[\mathrm{1}s_{1/2}], \quad \xi_{p} = \dfrac{\Delta g_{\mathrm{FNS}}[2p_{1/2}]}{\Delta g_{\mathrm{FNS}}[1s_{1/2}]}. \end{align} By expanding the analytical (second-order perturbation theory) expression for $\Delta g_{\mathrm{FNS}}$ from Ref. \cite{GLAZOV2002408} in powers of~$(Z \alpha)$, we obtain: \begin{align} \xi_{s} & = \dfrac{1}{8} + 0.110081 (Z \alpha)^{2} + 0.0615871 (Z \alpha)^{4} \notag \\ & + 0.0302009 (Z \alpha)^{6} + 0.0148406 (Z \alpha)^{8} + \{ \mathrm{h.o.} \}, \\ \xi_{p} & = \dfrac{3}{128} (Z \alpha)^{2} + 0.0333355 (Z \alpha)^{4} \notag \\ & + 0.0312421 (Z \alpha)^{6} + 0.0257139 (Z \alpha)^{8} + \{ \mathrm{h.o.} \}. \end{align} The calculated values of $\Delta g'_{\mathrm{FNS}} = g'_{\mathrm{ext}} - g'_{\mathrm{point}}$, together with the FNS corrections to the $g$~factors in $1s_{1/2}$, $2s_{1/2}$ and $2p_{1/2}$ states for \ce{^{40}_{20}Ca^{19+}}, \ce{^{116}_{50}Sn^{49+}} and \ce{^{208}_{82}Pb^{81+}}, are shown in Table \ref{table:specific}. It can be seen that for the specific differences $g'_{s}$ and $g'_{p}$ the FNS effect is indeed suppressed by several orders of magnitude.
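The truncated $(Z\alpha)$ expansions above are straightforward to evaluate. The sketch below (plain Python; the $\{\mathrm{h.o.}\}$ terms are dropped, so some accuracy is lost toward high $Z$, and the value of $\alpha$ is assumed) computes $\xi_s$ and $\xi_p$ for the three ions considered here.

```python
import math

ALPHA = 1.0 / 137.035999   # assumed value of the fine-structure constant

def xi_s(Z):
    """Truncated (Z*alpha) expansion of xi_s; higher-order terms are dropped."""
    x = (Z * ALPHA) ** 2
    return 0.125 + 0.110081 * x + 0.0615871 * x**2 \
         + 0.0302009 * x**3 + 0.0148406 * x**4

def xi_p(Z):
    """Truncated (Z*alpha) expansion of xi_p; higher-order terms are dropped."""
    x = (Z * ALPHA) ** 2
    return (3.0 / 128.0) * x + 0.0333355 * x**2 \
         + 0.0312421 * x**3 + 0.0257139 * x**4

for Z in (20, 50, 82):   # Ca, Sn, Pb
    print(Z, xi_s(Z), xi_p(Z))
```

For \ce{^{208}_{82}Pb^{81+}} the truncated series gives $\xi_s \approx 0.174$, close to the numerical ratio $\Delta g_{\mathrm{FNS}}[2s_{1/2}]/\Delta g_{\mathrm{FNS}}[1s_{1/2}] \approx 0.173$ implied by Table~\ref{table:specific}; the residual difference illustrates the truncation error at large $Z\alpha$.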
However, we also note here that instead of using the analytical expression for $\Delta g_{\mathrm{FNS}}$, one could alternatively evaluate $\xi_{s}$ and $\xi_{p}$ numerically, for example, in the framework of the homogeneously charged sphere approximation. In this approach, using the new values of $\xi_{s}$ and $\xi_{p}$ for other nuclear models leads to an even stronger suppression of the FNS effect for the specific differences. For example, in the case of the \ce{^{208}_{82}Pb^{81+}} ion, the corrections $\Delta g'_{\mathrm{FNS}}[1s_{1/2}, 2s_{1/2}]$ and $\Delta g'_{\mathrm{FNS}}[1s_{1/2}, 2p_{1/2}]$ within the Skyrme model become only $-1.1 \times 10^{-9}$ and $5.4 \times 10^{-10}$, respectively, which is two to three orders of magnitude smaller than the corresponding values given in Table \ref{table:specific}. This shows that in the case of heavy ions a direct numerical calculation of $\xi_{s}$ and $\xi_{p}$ (from the best available nuclear model) should be preferred over using analytical formulas. \section{Conclusions and outlook}\label{sec:conclusion} We have demonstrated the use of the Skyrme-Hartree-Fock nuclear-structure method as a tool for calculating the finite nuclear size effect in highly charged ions. We have shown that, despite the fact that various parametrizations of the Skyrme force differ from each other drastically, the ambiguity in the choice of a parameter set can be significantly suppressed by fixing the value of the root-mean-square nuclear radius. For this purpose, we suggest adjusting a single Skyrme parameter that has the largest influence on the value of the RMS radius, namely, the parameter $t_0$. In this way, the ambiguity associated with the choice of a Skyrme parametrization becomes smaller than the ambiguity stemming from uncertainties in the values of nuclear radii. Our results strongly emphasize the importance of the values of RMS nuclear radii in calculations of FNS corrections.
We have demonstrated that in some cases the value of the nuclear radius can be even more important than the shape of the nuclear charge distribution. In fact, the FNS corrections obtained by means of our approach and by simply using the Fermi distribution agree with each other within the uncertainties in the values of nuclear radii. However, it is clear that the Skyrme model provides a more realistic and thus more reliable description of nuclear charge distributions, which will become crucial in the future when the values of nuclear radii are known to a higher level of accuracy. \section*{Acknowledgements} This article comprises parts of the PhD thesis work of Igor Valuev to be submitted to Heidelberg University, Germany.
\section{Introduction} Inflation, the most successful theory to solve the problems of the hot Big Bang model and to explain the seeding of the observed large-scale structures today, plays a crucial role in the development of modern cosmology. The simplest version of single-field inflation \citep{Guth:1980zm, Linde:1981mu, Albrecht:1982wi} predicts that primordial density fluctuations obey Gaussian statistics and the corresponding power spectrum follows a simple power law, which is favoured by the cosmic microwave background (CMB) data released by the WMAP \citep{Peiris:2003ff, Spergel:2006hy, Komatsu:2008hk, Hinshaw:2012aka} and Planck \citep{Planck:2013jfk, Ade:2015lrj, akrami2020} collaborations. However, the physical origin of the inflaton field believed to have driven inflation is not yet fully understood; since the extremely high energies of the early Universe make it an ideal place to witness the consequences of the laws of fundamental physics, there is hope that new physics could be revealed by cosmological observations of the large-scale structures. More sophisticated inflation models leading to primordial non-Gaussianity have been developed over the last decades \citep[see][for some reviews]{Bartolo:2004if, chen2010, Celoria:2018euj}. These models predict unique features deviating from those of the simple single-field inflation model, which can be classified into different types, each of which can be attributed to different physical mechanisms \citep[see][for some reviews]{chen2010, Chluba:2015bqa}.
\citet{Chen:2016vvw} briefly reviews several representative feature models of inflation. For example, the sharp feature model, which oscillates linearly in scale in the primordial power spectrum, can be described by the template of a sinusoidal wiggle on top of the power-law primordial spectrum, which is also a good approximation for the scalar power spectrum in the axion monodromy model \citep{Flauger:2016idt}. This oscillatory feature could be generated by a localised sharp feature in inflationary potentials or internal field space, and the nature of the sharp feature could correspond to distinct mechanisms, such as a step or bump in the single-field inflationary potential \citep[e.g.,][]{Adams:2001vc, Adshead:2011jq, Bartolo:2013exa, Hazra:2014goa}, or the embedding of new physics \citep[][]{Bozza:2003pr}. Other typical feature models include the resonance model, in which the feature oscillates in logarithmic scale \citep[see e.g.][for the mechanisms behind the resonance model]{Wang:2002hf, Bean:2008na, Flauger:2009ab}, and the standard clock model, which combines the forms of the previous two feature models \citep[e.g.,][]{Chen:2014joa, Chen:2014cwa}. As a result, the (non)detection of primordial features can be used to distinguish among different scenarios of inflation. A variety of classes of well-motivated inflationary models, such as the so-called general single-field inflation \citep[e.g.,][]{Chen:2006nt, Senatore:2009gt}, multi-field inflation \citep[see][for a review]{Byrnes:2010em} and so on, are continuously tested with the updated releases of data from the Planck mission \citep{Ade:2013ydc, Ade:2015ava, Akrami:2019izv}, though none of them is preferable to the simplest single-field inflation model so far, which suggests that such features should be fairly weak if they do exist.
Since the primordial features are not only imprinted in the CMB, but some can also leave a signature in the matter power spectrum, future large-scale structure (LSS) surveys with high sensitivity, such as the Euclid \citep{racca2016}, DESI \citep{aghamousa2016} and SPHEREx \citep{Dore:2014cca} surveys, will provide the opportunity to search for primordial features as a complement to CMB data \citep[e.g.,][]{Chen:2016vvw, Ballardini:2016hpi, Palma:2017wxu, LHuillier:2017lgm, Zeng:2018ufm}, or to strengthen the constraints on the strength of such features. However, any feature imprinted in the primordial density or curvature field by inflation is subject to the impact of the cosmic evolution that leads to today. In particular, even if the primordial features once existed in the very early Universe, they would have been significantly weakened or even wiped out on small scales in the late-time Universe due to nonlinear structure formation \citep[][]{Beutler:2019ojk, Ballardini:2019tuc}. Meanwhile, the available information on large scales, where the evolution can be described by linear theory, is limited due to cosmic variance. This can affect the confidence level at which these features can be measured or constrained. A potential method to address this issue is to undo the cosmological evolution in a process usually called reconstruction, which can partially recover the initial density field, unlocking the information that once existed on small scales. A well-known example is the reconstruction of baryonic acoustic oscillation (BAO) features, which sharpens these features in the galaxy correlation function and thereby provides a standard ruler for distance measurements \citep[e.g.,][]{Eisenstein:2006nk, kazin2014, Schmittfull:2015mja, Zhu:2016sjc, Wang:2017jeq, shi2018, sarpa2019, Mao:2020vdp}.
Similarly, reconstructing the primordial power spectrum from the observed galaxy catalogues might be beneficial for probing the primordial features with future LSS surveys, which is what we set out to check here. In this work, as a first step towards assessing the potential benefit of reconstruction, we assume additional simple oscillatory features in the power-law primordial power spectrum. By utilising a suite of large N-body simulations, we study the performance of a nonlinear reconstruction algorithm proposed recently by \citet{shi2018,Birkin:2018nag} in retrieving the lost primordial features from the mock galaxy catalogue. In particular, we carry out parameter fitting to the damped and reconstructed wiggles to assess whether reconstruction can lead to more robust constraints on the feature parameters. To investigate the impact of reconstruction in real galaxy surveys, we also forecast the constraints on the feature parameters for the DESI survey using the Fisher matrix approach, and compare the cases with and without reconstruction. This paper is organised as follows: in Section~\ref{sec:2} we describe the model of primordial features, the simulations used in this work, and the methodology of assessing the performance of the reconstruction method in retrieving the primordial features lost due to structure formation. In Section~\ref{sec:3} we give more details on the approach used to forecast the constraints on the feature parameters for the DESI survey. In Section~\ref{sec:4} we show the results of reconstruction and forecast and discuss their implications. Finally, in Section~\ref{sec:5} we conclude our findings and discuss potential future improvements.
\section{Methodology} \label{sec:2} We start by presenting the primordial power spectrum models with oscillatory features that we adopt in this paper for illustration purposes. We then describe the simulation runs for these models. This is followed by a brief review of the nonlinear reconstruction method which will be used to recover the small-scale oscillation features from evolved dark matter and halo fields. Finally, we describe the analytic model used to quantify the features measured in the power spectrum, before giving the details of the Fisher matrix forecast in the next section. \subsection{Models of featured primordial power spectrum} \label{sec:2.1} We take a power-law primordial power spectrum to be our fiducial no-wiggle model, which is given by \begin{equation}\label{eq:2.1} P_{\rm nw}(k) = A_{s} \bigg( \frac{k}{k_{\ast}} \bigg) ^ {n_{s} - 1}, \end{equation} where $k$ is the comoving wavenumber, $A_s$ and $n_s$ are respectively the scalar amplitude and spectral index, and the pivot scale is $k_{\ast} = 0.05 \ \rm Mpc^{-1}$. Motivated by \citet{Ballardini:2019tuc}, we consider three wiggled models which share the same form of oscillatory features, for which the featured primordial power spectrum is expressed as \begin{equation}\label{eq:2.2} P_{\rm w}(k) = P_{\rm nw}(k) \Big[ 1 + A \cos \big( \omega k ^ m + \phi \big) \Big], \end{equation} where $A$, $\omega$ and $\phi$ are respectively the amplitude, frequency and phase of the oscillation, and $m$ is the power of the comoving wavenumber $k$. Note that even if primordial features exist, they could be more complicated than any phenomenological model currently in use. Since we cannot determine the precise form of the features, we take a narrower aim: we assume that the functional form is known and verify whether reconstruction can improve the accuracy with which the feature parameters are recovered.
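As a minimal sketch, Eqs.~(\ref{eq:2.1})--(\ref{eq:2.2}) can be evaluated as follows; the default parameter values are the fiducial cosmology of Section~\ref{sec:2.2} and the Model 1 entries of Table~\ref{tab1}, and are assumptions for illustration only.

```python
import numpy as np

# Fiducial values assumed from the text: A_s, n_s, and k_pivot (in Mpc^-1).
A_S, N_S, K_PIVOT = 2e-9, 0.965, 0.05

def P_nw(k, A_s=A_S, n_s=N_S, k_star=K_PIVOT):
    """No-wiggle power-law primordial spectrum, Eq. (2.1)."""
    return A_s * (k / k_star) ** (n_s - 1.0)

def P_w(k, A=0.05, omega=15.0 * np.pi, m=1.0, phi=0.0, **kw):
    """Featured spectrum, Eq. (2.2): P_nw * [1 + A cos(omega k^m + phi)].

    Defaults correspond to Model 1 of Table 1 (omega/pi = 15 in Mpc/h)."""
    return P_nw(k, **kw) * (1.0 + A * np.cos(omega * k**m + phi))
```

The relative wiggle pattern of Eq.~(\ref{eq:2.0}) is then simply \texttt{P\_w(k)/P\_nw(k) - 1}.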
The oscillation parameters of the four models are listed in Table~\ref{tab1}. The initial oscillations of the three wiggled models are shown as the red dashed lines in the right panel of Fig.~\ref{fig:1}, where we have presented the difference between $P_{\rm w}$ and $P_{\rm nw}$. Within our range of scales of interest, $k = (0.05-0.5) \ h \rm Mpc^{-1}$, Model 1 has three peaks, while Model 2 and Model 3 have only two. The first peak of Model 2 is chosen to be on a smaller scale than that of Model 1. The two peaks of Model 3 are at the same positions as the first and third peaks of Model 1. By comparing the reconstructed wiggles of the three wiggled models later, we will be able to assess the effect of the reconstruction method on different scales. \subsection{N-body simulations} \label{sec:2.2} In the regime of linear perturbations, the primordial wiggles preserve their shapes and amplitude $P_{\rm w}/P_{\rm nw}$. However, nonlinear large-scale structure evolution will change this behaviour, leading to damping of $P_{\rm w}/P_{\rm nw}$ at late times. This makes it harder to measure the properties of these primordial oscillations from an evolved density field, even more so for a late-time tracer (e.g., galaxy) field. In order to quantify such degrading effects, N-body cosmological simulations can prove to be a useful tool. We have performed four simulation runs covering the no-wiggle model and the three wiggled models. First, we assume a flat universe and adopt the Planck 2018 cosmology, with $h = 0.674$, $\rm \Omega_{m} = 0.3135$, $\Omega_c h^2 = 0.120$, $\Omega_b h^2 = 0.0224$, $\rm \Omega_{\Lambda} = 0.6865$, $n_{s} = 0.965$ and $A_s = 2\times 10^{-9}$ \citep{aghanim2020}. The value of $\sigma_8$ is approximately 0.79, though it varies slightly across the different models.
We then customise the function of the primordial power spectrum in the Einstein-Boltzmann solver code {\sc camb} \citep{lewis2011} to be Eq.~(\ref{eq:2.1}) for the no-wiggle model and Eq.~(\ref{eq:2.2}) for the wiggled models. We calculate the linear theory matter power spectrum at $z=49$ using {this version of the {\sc camb} code}, which is used as the input matter power spectrum for the publicly available code {2{\sc lpt}ic} \citep{crocce2006} to generate the initial conditions used for the N-body simulations. In the left panel of Fig.~\ref{fig:1} we compare the initial matter power spectrum given by {{\sc camb}} and the matter power spectrum measured from the initial conditions generated using {2{\sc lpt}ic; it can be seen that} they are in good agreement for all models within the range of scales of our interest {(the blowing up at small scales is due to the finite particle resolution).} To {more} conveniently describe the oscillatory features for the wiggled models, {as mentioned above,} we define the relative wiggle {pattern} as \begin{equation}\label{eq:2.0} P_{\rm rw}(k) = \frac{P_{\rm w}(k)}{P_{\rm nw}(k)}-1, \end{equation} which are shown in the right panel of Fig.~\ref{fig:1}. {This} clearly shows that the oscillatory features are perfectly created in the initial conditions of the simulations. Next, we run the simulations using the {parallel} N-body code {{\sc ramses}} \citep{teyssier2002} which is based on the adaptive mesh refinement (AMR) technique. Each simulation is performed with $N = 1024^3$ dark matter particles in a box of size $1024 \ h^{-1} \rm Mpc$, {and} we output four snapshots at different redshifts, respectively as $z = 0$, $0.5$, $1$, and $1.5$. For each snapshot, we use the halo finder {{\sc rockstar}} \citep{behroozi2013} to identify the haloes with the definition of the halo mass $M_{200c}$, where $M_{200c}$ is the mass within a sphere whose average density is 200 times the critical density. 
Since low-mass haloes cannot be fully probed due to the limited simulation resolution, we measure the cumulative halo mass functions (cHMFs) from the main haloes with more than $100$ particles to check the validity of the simulations; these show very good agreement with the analytic formulae of \citet{tinker2008}. For each snapshot we establish one dark matter particle catalogue (hereafter DM) and two mock halo catalogues, with number densities of $1 \times 10^{-3} (h^{-1} \rm Mpc)^{-3}$ (hereafter H1) and $5 \times 10^{-4} (h^{-1} \rm Mpc)^{-3}$ (hereafter H2), respectively. Both host haloes and subhaloes are included in the halo catalogues. We achieve the desired number density by applying a mass cutoff, i.e., discarding haloes with masses below the cutoff. Using the power spectrum estimator tool {\sc powmes} \citep{colombi2011}, we measure the nonlinear matter power spectrum from DM and the nonlinear halo power spectrum separately from H1 and H2. Finally, we take the ratio of the power spectrum of each wiggled model to the corresponding power spectrum of the no-wiggle model to obtain the quantity $P_{\rm rw}$ for all cases. \begin{figure*} \includegraphics[width=1.8\columnwidth]{initial_conditions.png} \caption{[Colour Online] The left panel shows the comparison between the initial matter power spectra given by {\sc camb} (red dashed lines) and the matter power spectra measured from the initial conditions of the simulations generated using 2{\sc lpt}ic (black lines); from the bottom up they are respectively the fiducial model, Model 1, Model 2 and Model 3, with each model shifted upwards by a successive factor of $10$ to avoid cluttering the curves.
The right panel shows the {$P_{\rm rw}$ results, cf., Eq.~\eqref{eq:2.0},} obtained from the left panel for {the} three wiggled models, for instance, the bottom curves show the ratio of Model 1 to the fiducial model, followed by the ones for Model 2 and Model 3 upwards; each model is shifted upwards by a constant of $0.15$ successively for the same reason {as above}.} \label{fig:1} \end{figure*} \begin{table} \centering \caption{The oscillation parameters used for the no-wiggle model and three wiggled models. Columns respectively denote (1) the power of the comoving wavenumber; (2) the amplitude, (3) frequency and (4) phase of the oscillation.} \label{tab1} \begin{tabular}{c|c|c|c|c} \hline & $m$ & $A$ & $\omega/\pi$ & $\phi/\pi$\\ & & & $[h^{-1} \rm Mpc]$ & \\ \hline Fiducial & & $0$ & & \\ Model $1$ & $1$ & $0.05$ & $15.00$ & $0$ \\ Model $2$ & $1$ & $0.05$ & $8.57$ & $0$ \\ Model $3$ & $0.631$ & $0.05$ & $7.13$ & $0$ \\ \hline \end{tabular} \end{table} \subsection{Reconstruction} \label{sec:2.3} In order to {partially} recover the primordial features lost in the {structure formation}, we {perform reconstruction of} the initial density field from the late-time density field using {the} nonlinear reconstruction algorithm {described in} \citet{shi2018}. This reconstruction method is based on mass conservation. Without assuming any cosmological model or {having} free parameters except the size of the mesh used to calculate the density field, it uses multigrid Gauss-Seidel relaxation to solve the nonlinear partial differential equation which governs the mapping between the initial Lagrangian and final Eulerian coordinates of particles in evolved density fields. 
Previous tests show that the reconstructed density field is over $\sim 80\%$ correlated with the initial density field for $k \lesssim 0.6 \ h \rm Mpc^{-1}$, if reconstruction is performed on the dark matter density field, which covers the scales of our interest, though the performance becomes poorer when the method is applied to density fields calculated from sparse tracers \citep{Birkin:2018nag,Wang:2019zuq,Liu:2020pvy}. This method is implemented in a modified version of the {\sc ecosmog} code \citep{li2012, li2013}, which itself is based on {\sc ramses}. We reconstruct the initial density field separately from the catalogues DM, H1 and H2 for each snapshot. The halo catalogues, which contain both main and subhaloes, are taken to be the same as mock galaxy catalogues hereafter unless otherwise stated. The procedure for the reconstruction from the halo catalogues is in principle similar to that from the dark matter particle catalogue, apart from two things at the beginning. One is that we prepare the Gadget-format particle data for the {\sc ecosmog} code in two ways. A halo catalogue is directly written into Gadget-format tracer particles due to its small number density. However, the very large number of simulation particles in the dark matter particle catalogues, along with their strongly non-uniform spatial distribution, leads to a large memory footprint when processing the data.
To avoid this problem, we use the publicly available {{\sc dtfe}} code \citep{cautun2011}, based on Delaunay tessellation, to calculate the density field on a regular mesh with $512^{3}$ cells employing the triangular shaped cloud (TSC) mass assignment scheme; then the mesh cells are regarded as {uniformly-distributed} fake particles with different masses, which are {transformed} to Gadget format {that can be directly read by {\sc ecosmog}.} The other particular thing is that we calculate the linear halo bias used for the reconstruction from the halo catalogue. The estimate of the halo bias is based on the relation \begin{equation}\label{eq:2.3} b_{1}(r) = \frac{\xi_{\rm hh}(r)}{\xi_{\rm hm}(r)}, \end{equation} where $\xi_{\rm hh}(r)$ is the auto-correlation function of haloes and $\xi_{\rm hm}(r)$ is the cross-correlation function between the haloes and the dark matter particles. We use the publicly available {{\sc cute}} code \citep{alonso2012} to {measure} $\xi_{\rm hh}(r)$ and $\xi_{\rm hm}(r)$ from a given simulation snapshot, and take the ratio between them to obtain the value of linear halo bias as a function of the distance $r$. Since the linear halo bias is theoretically a constant on large scales, we apply the method of least squares to the values on scales $r \gtrsim 10 \ h^{-1} \rm Mpc$ to {obtain an} estimate {of it. Note that when dealing with observational data we do not necessarily have such an accurate measurement of the linear halo or galaxy bias; however, \citet{Birkin:2018nag} find that the exact value of linear bias is not very important for this reconstruction method to recover the phases of the initial density field.} The following steps of reconstruction are then the same for both dark matter particle catalogue and halo catalogues. 
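The bias estimate of Eq.~(\ref{eq:2.3}) can be sketched as follows; the correlation functions would in practice come from a pair-counting code such as {\sc cute} (see text), and the arrays here merely stand in for its output.

```python
import numpy as np

# Sketch of Eq. (2.3): b1(r) = xi_hh(r) / xi_hm(r), followed by a
# least-squares fit of a constant on scales r >= r_min, where the linear
# bias is expected to be scale-independent.  For a constant model, the
# least-squares solution is simply the mean of the selected points.
def fit_linear_bias(r, xi_hh, xi_hm, r_min=10.0):
    """Return the constant large-scale linear bias fitted on r >= r_min."""
    b1_of_r = np.asarray(xi_hh) / np.asarray(xi_hm)
    mask = np.asarray(r) >= r_min
    return b1_of_r[mask].mean()
```

For an unbiased tracer with $\xi_{\rm hh} = b^2\xi_{\rm mm}$ and $\xi_{\rm hm} = b\,\xi_{\rm mm}$, this recovers $b$ exactly.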
First, {\sc ecosmog} calculates the density field in the Eulerian coordinates using the TSC mass assignment scheme, and solves the mapping between the Eulerian and Lagrangian coordinates, to get the displacement potential as well as the displacement field on a regular mesh with $512^{3}$ cells. We then use a Python code to transfer the output fields from the Eulerian coordinates to the Lagrangian coordinates. After that, because the Lagrangian coordinates are not uniform, we feed the {\sc dtfe} code with the Lagrangian coordinates and displacement field of the mesh cells to calculate the reconstructed density field as the divergence of the displacement field w.r.t. the Lagrangian coordinates. Finally, we measure the reconstructed power spectrum from the reconstructed density field using a post-processing code. \subsection{Parameter fitting to the damped wiggles} \label{sec:2.4} As we discussed above, cosmic structure formation leads to damping of the primordial wiggles. Reconstruction is expected to revert some of this damping, but cannot completely undo it. So we need a model for the wiggles of the reconstructed matter or halo power spectrum. Ideally this should be an analytical model since it can be more easily used in the Fisher analysis later. In this subsection, we describe how this is achieved by using a fitting function. Instead of fitting the absolute matter and halo power spectra, we propose an analytic formula to directly fit the quantity $P_{\rm rw}$ obtained from the simulations and reconstructions, cf.~Eq.~\eqref{eq:2.0}, which combines the oscillatory feature model and a Gaussian damping function, given by \begin{equation}\label{eq:2.4} P_{\rm rw}(k,z) = A \cos \big( \omega k ^ m + \phi \big) \exp \bigg[-\frac{k^{2}\zeta(z)^{2}}{2} \bigg], \end{equation} where $\zeta(z)$ is a damping parameter that depends on the redshift $z$.
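A minimal sketch of fitting the damped-wiggle template of Eq.~(\ref{eq:2.4}) is given below, assuming $\omega$, $\phi$ and $\zeta$ free while $A$ and $m$ are held at their Table~\ref{tab1} values (here: Model 1). Since all data points are assigned equal Gaussian errors, minimising the plain sum of squared residuals with \texttt{scipy.optimize.curve\_fit} is equivalent to the least-squares estimator used in the text; all numbers are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

# A and m held fixed at the Model 1 values of Table 1 (an assumption here).
A_FIX, M_FIX = 0.05, 1.0

def damped_wiggle(k, omega, phi, zeta):
    """P_rw(k) template of Eq. (2.4) with A and m fixed."""
    return A_FIX * np.cos(omega * k**M_FIX + phi) * np.exp(-0.5 * k**2 * zeta**2)

def fit_damped_wiggle(k, p_rw, p0=(15.0 * np.pi, 0.0, 3.0)):
    """Unweighted least-squares best fit for (omega, phi, zeta)."""
    popt, _ = curve_fit(damped_wiggle, k, p_rw, p0=p0)
    return popt
```

Note that $\zeta$ enters only as $\zeta^2$, so its sign is unconstrained; only $|\zeta|$ is meaningful.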
For the fitting of each measured $P_{\rm rw}(k)$, we let $\omega$, $\phi$ and $\zeta$ be the free parameters, because $\omega$ and $\phi$ play an essential role in determining the positions of the peaks, while $\zeta$ quantifies the extent of the damping effect. The parameters $A$ and $m$ are fixed to their theoretical values in Table~\ref{tab1}. We apply the least-squares estimator to obtain the best-fit parameters by minimising \begin{equation}\label{eq:2.5} \chi^2 = \sum_{i=1}^{N} \big[ P_{\rm rw,i}(z)-P_{\rm rw}(k_{\rm i}, z; \omega, \phi, \zeta) \big]^{2}, \end{equation} where $P_{\rm rw,i}(z)$ are the data points of the wiggle spectrum in the $i$th $k$ bin at redshift $z$. Since there is only one simulation realisation for each model, we assume that the uncertainties of all data points $P_{\rm rw,i}(z)$ are the same and follow the same Gaussian distribution. Note that, as the quantity we fit is $P_{\rm rw}=P_{\rm w}/P_{\rm nw}-1$, this is equivalent to fitting $P_{\rm w}$ with $\sqrt{P_{\rm nw}}$ as uncertainty \citep[e.g.,][]{Feldman:1993ky}. We calculate the uncertainties of the best-fit parameters based on the 95\% confidence interval, as a rough estimate of the size of the errors. To minimise the influence of cosmic variance on very large scales, we fit the data within the interval $k = (0.04-0.6) \ h \rm Mpc^{-1}$, which covers our intended range of scales. \section{Forecast for the DESI survey} \label{sec:3} In order to investigate the impact of reconstruction, we will forecast the constraints on the feature parameters for the DESI survey using the Fisher information matrix, and compare with the case without reconstruction. For this purpose, we first model the observed broadband galaxy power spectrum. Then we describe how to calculate the Fisher information matrix, followed by its analytic marginalisation. Finally, we give the specifications of the DESI survey.
\subsection{Modelling the observed galaxy power spectrum} \label{sec:3.1} By combining Eqs.~(\ref{eq:2.0}) and (\ref{eq:2.4}), the featured galaxy power spectrum in real space can be modelled as \begin{equation}\label{eq:4.1} P_{\rm mod}(k,z) = P_{\rm nl}(k,z) \bigg[1 + A \cos \big( \omega k ^ m + \phi \big) \exp \bigg(-\frac{k^{2}\zeta(z)^{2}}{2} \bigg) \bigg], \end{equation} where $P_{\rm nl}(k,z)$ is the nonlinear matter power spectrum without the primordial oscillatory features at $z$, which includes the BAO wiggles and is equivalent to the nonlinear matter power spectrum of the no-wiggle model. However, since there is only one simulation realisation for a single no-wiggle model, which cannot provide a smooth nonlinear matter power spectrum, and since a fast method to compute $P_{\rm mod}$ is more convenient for the Fisher analysis, we use the {\sc halofit} model in the {\sc camb} code to calculate $P_{\rm nl}(k,z)$ instead. The broadband galaxy power spectrum in real space is not a direct observable, because measurements are made in angular and redshift coordinates rather than in 3D comoving coordinates.
In order to relate the observed galaxy power spectrum $P_{\rm obs}(\boldsymbol{k},z)$ to the modelled galaxy power spectrum $P_{\rm mod}(k,z)$, the standard practice is to project the galaxies to their comoving positions assuming some reference cosmology, via the coordinate transformation based on the relations \begin{equation}\label{eq:4.2} k_{\perp}^{\rm ref} = \frac{D_{\rm A}(z)}{D_{\rm A}^{\rm ref}(z)}k_{\perp}, \quad k_{\parallel}^{\rm ref} = \frac{H^{\rm ref}(z)}{H(z)}k_{\parallel}, \end{equation} where $k_{\parallel}$ and $k_{\perp}$ are respectively the line-of-sight and transverse components of the wavevector $\boldsymbol{k}$, i.e., $k^{2} = |\boldsymbol{k}|^2 = k_{\perp}^{2} + k_{\parallel}^{2}$, and the superscript $^{\rm ref}$ denotes the reference cosmology; note that the reference cosmology hereafter is the same one used in the simulations unless otherwise stated. $D_{\rm A}(z) = r(z)/(1+z)$ is the angular diameter distance at $z$ with the comoving distance $r(z)$; under the assumption of a flat universe it is given by \begin{equation}\label{eq:4.3} r(z) = \frac{c}{H_0}\int_{0}^{z} \dd z^\prime \Big[\Omega_{\rm m}(1+z^\prime)^3 +\Omega_{\Lambda}\Big]^{-\frac{1}{2}}, \end{equation} where $\Omega_\Lambda=1-\Omega_{\rm m}$ is the current density parameter of the cosmological constant, and the Hubble parameter $H(z)$ is given by \begin{equation}\label{eq:4.4} H(z) = H_0 \Big[\Omega_{\rm m}(1+z)^3 +\Omega_{\Lambda}\Big]^{\frac{1}{2}}.
\end{equation} Taking into account several main effects, i.e., the redshift-space distortions (RSD) and shot noise, we can model the observed galaxy power spectrum as \begin{equation}\label{eq:4.5} P_{\rm obs}(k,\mu,z) = \Bigg[ \frac{D_{\rm A}^{\rm ref}(z)}{D_{\rm A}(z)} \Bigg]^{2} \frac{H(z)}{H^{\rm ref}(z)} \frac{F_{\rm FoG}(k, \mu, z)}{\sigma_{8}^{2}(z)} P_{\rm mod}(k,z) + N_{\rm gal}(z), \end{equation} where $\sigma_{8}(z)$ is the R.M.S. linear density fluctuation on the scale of $8h^{-1}\rm Mpc$, $N_{\rm gal}(z) = 1 / \overline{n}_{\rm g}(z)$ is the shot noise with $\overline{n}_{\rm g}(z)$ being the galaxy number density, and the Finger-of-God factor $F_{\rm FoG}(k, \mu, z)$ describing the effect of RSD is modelled following \citet{Ballardini:2019tuc} as \begin{equation}\label{eq:4.6} F_{\rm FoG}(k, \mu, z) = \frac{\big[b(z)\sigma_8(z) + f(z)\sigma_8(z)\mu^2\big]^2}{1 + k^2 \mu^2 \sigma_{r,p}^2 / 2} \exp \big(- k^2 \mu^2 \sigma_{r,z}^2 \big), \end{equation} where $b(z)$ is the linear halo bias at $z$, \begin{equation} f(z) = \frac{\dd\ln{D(a)}}{\dd\ln{a}}, \end{equation} is the linear growth rate at $z$, with $D(a)$ and $a$ respectively being the linear growth factor and the scale factor (note that we normalise $D(a)$ so that $D(a=1) = 1$ in this work); $\mu=\cos{\theta}$ with $\theta$ being the angle between the wavevector $\boldsymbol{k}$ and the line of sight, i.e., $\mu = k_{\parallel}/k$; $\sigma_{r,p} = \sigma_{p}/[H(z)a]$ is the distance dispersion corresponding to the physical velocity dispersion $\sigma_{p}$, whose fiducial value is taken to be $290 \rm \ km~s^{-1}$; and the last exponential damping factor accounts for the redshift error $\sigma(z)$ with $\sigma_{r,z} = c \sigma(z)/H(z)$. \subsection{Fisher information matrix} \label{sec:3.2} The Fisher matrix approach provides a method to propagate the uncertainties of the observable to the constraints on the cosmological parameters.
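The background quantities $H(z)$, $r(z)$ and $D_{\rm A}(z)$ entering Eqs.~(\ref{eq:4.2})--(\ref{eq:4.5}), and later the survey volume of Eq.~(\ref{eq:4.9}), can be sketched numerically as follows; the flat Planck 2018 parameters of Section~\ref{sec:2.2} are assumed.

```python
import numpy as np
from scipy.integrate import quad

# Assumed fiducial values from the text: c [km/s], H0 [km/s/Mpc], Omega_m.
C_KMS, H0, OM = 299792.458, 67.4, 0.3135

def hubble(z):
    """H(z) of Eq. (4.4) for flat LCDM, in km/s/Mpc."""
    return H0 * np.sqrt(OM * (1.0 + z) ** 3 + (1.0 - OM))

def comoving_distance(z):
    """r(z) of Eq. (4.3) in Mpc, by numerical quadrature."""
    integrand = lambda zp: 1.0 / np.sqrt(OM * (1.0 + zp) ** 3 + (1.0 - OM))
    return C_KMS / H0 * quad(integrand, 0.0, z)[0]

def angular_diameter_distance(z):
    """D_A(z) = r(z)/(1+z) for a flat universe."""
    return comoving_distance(z) / (1.0 + z)
```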
Our calculation of the Fisher matrix is based on \citet{tegmark1997} and \citet{seo2003}. Assuming that the power spectrum of a given $k$ mode satisfies a Gaussian distribution with a variance equal to the power spectrum itself, and that different bins of $k$ are independent of each other for large surveys, the Fisher matrix for each redshift bin, with bin centre at $z = z_{\rm c}$, can be approximated as \begin{eqnarray}\label{eq:4.7} && F_{ij}(z_{\rm c})=\frac{V_{\rm eff}(z_{\rm c})}{4\pi^{2}} \int_{0}^{1} \dd \mu\nonumber\\ && \times\int_{k_{\rm min}}^{k_{\rm max}} \dd k k^{2} \frac{\partial \ln{P_{\rm obs}(k,\mu,z_{\rm c})}}{\partial \theta_{i}} \frac{\partial \ln{P_{\rm obs}(k,\mu,z_{\rm c})}}{\partial \theta_{j}}, \end{eqnarray} where $k_{\rm min}$ and $k_{\rm max}$ are respectively the minimum and maximum values of $k$ used for the forecast. We set $k_{\rm min}=0.05 \ h\rm Mpc^{-1}$ and adopt two values of $k_{\rm max}$, respectively $0.25 \ h\rm Mpc^{-1}$ and $0.5 \ h\rm Mpc^{-1}$, to compare the constraints for different ranges of scales. The effective volume of the redshift bin, $V_{\rm eff}(z_{\rm c})$, is expressed as \begin{equation}\label{eq:4.8} V_{\rm eff}(z_{\rm c}) = \bigg[1 + \frac{1}{\overline{n}_{\rm g}(z)P_{\rm obs}(k,\mu,z)} \bigg]^{-2} V_{\rm surv}(z_{\rm c}), \end{equation} where $\overline{n}_{\rm g}(z)P_{\rm obs}(k,\mu,z)$ is the signal-to-noise ratio, and the comoving survey volume $V_{\rm surv}(z_{\rm c})$ for the redshift bin width $\Delta z$ is given by \begin{equation}\label{eq:4.9} V_{\rm surv}(z_{\rm c}) = \frac{4\pi}{3}\bigg[r\Big(z_{\rm c}+\frac{\Delta z}{2}\Big)^{3} - r\Big(z_{\rm c}-\frac{\Delta z}{2}\Big)^{3} \bigg] \frac{\Omega_{\rm surv}}{\Omega_{\rm sky}}, \end{equation} where $\Omega_{\rm surv}$ and $\Omega_{\rm sky}$ are respectively the survey area and the area of the full sky.
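A minimal numerical sketch of one element of the Fisher integral of Eq.~(\ref{eq:4.7}) is given below, using symmetric finite differences with a $10\%$ step for the derivatives (as described in the text for the cosmological parameters) and a trapezoidal $(\mu, k)$ grid. The function \texttt{p\_obs(k, mu, theta)} is a stand-in for the full model of Eq.~(\ref{eq:4.5}); the toy model in the usage test is an assumption for illustration.

```python
import numpy as np

# np.trapz was renamed to np.trapezoid in NumPy 2; support both.
trapezoid = getattr(np, "trapezoid", None) or np.trapz

def fisher_element(p_obs, theta, i, j, v_eff,
                   k_min=0.05, k_max=0.25, nk=200, nmu=50, step=0.1):
    """Approximate F_ij of Eq. (4.7) for one redshift bin.

    Derivatives of ln P_obs use a symmetric finite difference with
    Delta theta = step * theta (10% of the fiducial value by default),
    so fiducial parameter values must be nonzero."""
    k = np.linspace(k_min, k_max, nk)
    mu = np.linspace(0.0, 1.0, nmu)
    K, MU = np.meshgrid(k, mu, indexing="ij")

    def dlnP(idx):
        tp, tm = list(theta), list(theta)
        dt = step * theta[idx]
        tp[idx] += dt
        tm[idx] -= dt
        return (np.log(p_obs(K, MU, tp)) - np.log(p_obs(K, MU, tm))) / (2.0 * dt)

    integrand = K**2 * dlnP(i) * dlnP(j)
    inner = trapezoid(integrand, k, axis=0)   # integral over k
    return v_eff / (4.0 * np.pi**2) * trapezoid(inner, mu)
```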
Additionally, $\theta$ is the 8-dimensional parameter vector which consists of five cosmological parameters and three oscillation parameters, \begin{equation}\label{eq:4.10} \omega_{\rm c}=\Omega_{\rm c}h^2, \omega_{\rm b}=\Omega_{\rm b}h^2, h, n_{\rm s}, A_{\rm s}, A, \omega, \phi. \end{equation} The partial derivatives of $P_{\rm obs}(k, \mu, z_{\rm c})$ w.r.t. the cosmological parameters are calculated numerically using the finite difference \begin{equation}\label{eq:4.11} \frac{\partial P_{\rm obs}(k, \mu, z_{\rm c})}{\partial \theta_{i}} = \frac{P_{\rm obs}({\theta^{\rm fid}_i} + \Delta \theta_i) - P_{\rm obs}({\theta^{\rm fid}_i} - \Delta \theta_i)}{2\Delta \theta_i}, \end{equation} where $\Delta \theta_i$ is taken to be $10\%$ of the fiducial value $\theta^{\rm fid}_i$, though we have explicitly checked that the partial derivative is insensitive to the size of $\Delta \theta_i$. By contrast, the partial derivatives w.r.t. the oscillation parameters can be calculated analytically due to the analytic form of the oscillations. The Fisher matrices of all redshift bins are then summed up to give an $8 \times 8$ Fisher matrix, the inverse of which gives the uncertainties of all parameters. As we mainly aim at the constraints on the oscillation parameters, we marginalise over the cosmological parameters using the analytic marginalisation method given by \citet{taylor2010}, which can marginalise over the nuisance parameters while preserving the information about the target parameters. The marginalised Fisher matrix is given by \begin{equation}\label{eq:4.12} F_{\alpha\beta}^{\rm M} = F_{\alpha\beta} - F_{\alpha m}F_{mn}^{-1}F_{n\beta}, \end{equation} where the subscripts $_\alpha$ and $_\beta$ denote the target parameters, while $_m$ and $_n$ denote the nuisance parameters. Finally, we obtain the uncertainties of the oscillation parameters from the covariance matrix, i.e., the inverse of the marginalised Fisher matrix.
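The marginalisation step of Eq.~(\ref{eq:4.12}) is the Schur complement of the nuisance block, and is equivalent to inverting the full Fisher matrix and reading off the target block of the covariance; a short sketch:

```python
import numpy as np

def marginalise_fisher(F, target_idx):
    """Eq. (4.12): F^M = F_tt - F_tn (F_nn)^{-1} F_nt, with "t" the target
    parameters and "n" the remaining (nuisance) parameters."""
    F = np.asarray(F, dtype=float)
    target_idx = np.asarray(target_idx)
    nuis_idx = np.setdiff1d(np.arange(F.shape[0]), target_idx)
    F_tt = F[np.ix_(target_idx, target_idx)]
    F_tn = F[np.ix_(target_idx, nuis_idx)]
    F_nn = F[np.ix_(nuis_idx, nuis_idx)]
    return F_tt - F_tn @ np.linalg.inv(F_nn) @ F_tn.T
```

The equivalence to full inversion serves as a consistency check: $(F^{\rm M})^{-1}$ equals the target block of $F^{-1}$.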
\subsection{Parameters used in the Fisher analysis} \label{sec:3.3} The parameters used in the Fisher analysis, including those associated with the specifications of the DESI survey \citep{aghamousa2016}, are discussed here. We start with the most crucial parameter, the damping parameter $\zeta$ displayed in Table~\ref{tab2}, which depends not only on the redshift but also on the halo number density and, more importantly, on whether reconstruction is applied. We only have values of $\zeta$ for four redshifts, i.e., $z=0$, $0.5$, $1$, $1.5$, and two halo number densities, i.e., $n_{\rm halo}=1\times10^{-3}(h^{-1} \rm Mpc)^{-3}$ and $5\times10^{-4}(h^{-1} \rm Mpc)^{-3}$, but the forecasted number density achievable in the DESI survey varies over the redshift range, so these values of $\zeta$ may not apply to the entire redshift range. As a result, we cut off some high-redshift bins whose number densities are much smaller than $5\times10^{-4}(h^{-1} \rm Mpc)^{-3}$. We use a bilinear interpolation between the redshift and the number density to estimate an appropriate value of $\zeta$ for a given combination of the two. For bins whose number density is larger than $1\times10^{-3}(h^{-1} \rm Mpc)^{-3}$ or smaller than $5\times10^{-4}(h^{-1} \rm Mpc)^{-3}$, we simply adopt the values of $\zeta$ for $n_{\rm halo}=1\times10^{-3}(h^{-1} \rm Mpc)^{-3}$ or $n_{\rm halo}=5\times10^{-4}(h^{-1} \rm Mpc)^{-3}$, respectively. In this work, we use different values of $\zeta$ for the different models, as obtained using the fitting method described in Section \ref{sec:2.4}, and we will comment on this point again later. As we consider both emission line galaxies (ELGs) and luminous red galaxies (LRGs) in the DESI survey, which have different number densities and redshift distributions, different ranges of redshift bins are chosen for ELGs and LRGs in the Fisher analysis.
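The interpolation scheme described above can be sketched as follows, using bilinear interpolation on the $(z, n_{\rm halo})$ grid and clamping number densities outside the tabulated range to the nearest edge. The $\zeta$ values below are placeholders for illustration, not the measured entries of Table~\ref{tab2}.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Grid of snapshot redshifts and the two tabulated number densities.
Z_GRID = np.array([0.0, 0.5, 1.0, 1.5])
N_GRID = np.array([5e-4, 1e-3])  # in (Mpc/h)^-3
# Placeholder zeta(z, n) table; real values would come from Table 2.
ZETA_TAB = np.array([[6.0, 5.5],
                     [5.0, 4.6],
                     [4.2, 3.9],
                     [3.6, 3.4]])

_interp = RegularGridInterpolator((Z_GRID, N_GRID), ZETA_TAB)  # bilinear

def zeta_of(z, n_gal):
    """zeta at (z, n_gal), clamping n_gal to the tabulated density range."""
    n_clamped = np.clip(n_gal, N_GRID[0], N_GRID[-1])
    return float(_interp((z, n_clamped)))
```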
After {throwing away} the redshift bins with very small number densities, we take the range of $z=(0.6-1.3)$ for ELGs and $z=(0.6-0.9)$ for LRGs, and the redshift bin width is by default $\Delta z=0.1$. In addition to the calculation of the effective survey volume, following the DESI survey, fixed values of $\overline{n}_{\rm g}(z)P_{\rm obs}(0.14,0.6,z)$ are used for the signal-to-noise ratio. Two survey areas are considered: the expected survey area of 14,000 $\rm deg^2$, and 9,000 $\rm deg^2$ as the pessimistic case \citep{aghamousa2016}. As for the Finger-of-God factor, the linear halo bias for ELGs and LRGs is simply defined in terms of the growth factor via \citep{aghamousa2016} \begin{equation}\label{eq:4.13} b_{\rm ELG}(z)D(z) = 0.84 \quad\mathrm{and}\quad b_{\rm LRG}(z)D(z) = 1.70. \end{equation} The DESI survey defines the redshift error as $\sigma(z)=0.0005/(1+z)$; in this case, the exponential damping factor is very close to 1 for our intended range of scales, so we neglect it in the calculation. \begin{figure*} \includegraphics[width=2.0\columnwidth]{pk_models.png} \caption{{[Colour Online]} Comparisons among the linear (black solid line), nonlinear (blue dash-dotted line) and reconstructed (red dashed line) {$P_{\rm rw}$}. The linear {$P_{\rm rw}$} is measured from the initial conditions generated using 2LPTic, the nonlinear {$P_{\rm rw}$} is measured from the output snapshots of the simulations, and the reconstructed {$P_{\rm rw}$} is obtained from the {reconstructed density field}. Each row represents one redshift $z$ which is shown on the right side. The three columns denote, respectively, {the results from} the dark matter particle catalogue DM and the halo catalogues H1 and H2.
Every four {rows} from the bottom up respectively belong to Model 1, Model 2 and Model 3.} \label{fig:2} \end{figure*} \section{Results and discussion} \label{sec:4} In this section, we will first compare the linear, nonlinear and reconstructed $P_{\rm rw}$ measured {for all models and redshifts}. Then we {present} the results of the analytic fit to {more quantitatively demonstrate} the improvement by the reconstruction. Finally we show the results of the constraints on the oscillation parameters and give forecasts for the DESI survey. \subsection{Comparisons among wiggle spectra} \label{sec:4.1} In Fig.~\ref{fig:2} we compare the results of the linear, nonlinear and reconstructed {$P_{\rm rw}(k)$} obtained from DM, H1 and H2 at the four redshifts for the three wiggled models. The black solid lines represent the linear {$P_{\rm rw}(k)$} obtained from the initial conditions of the simulations, which are equivalent to the primordial oscillatory features. The blue dash-dotted lines represent the nonlinear {$P_{\rm rw}(k)$} obtained from the output snapshots of the simulations, which are also referred to as the unreconstructed {$P_{\rm rw}(k)$} for {convenience.} It can be seen that the wiggles on small scales are gradually {damped} as the redshift decreases. The red dashed lines represent the reconstructed {$P_{\rm rw}(k)$} obtained from the {reconstructed density field}, which {helps to partially retrieve} the lost wiggles. The {$P_{\rm rw}(k)$ results} shown in the first column are obtained from DM, which exhibit some common characteristics for {all} three wiggled models. By comparing the unreconstructed {results} with the {linear-theory predictions}, it seems that the {scale} at which the wiggles {start to be weakened} becomes larger as {time progresses, though} the specific range {differs} a bit for different models due to the discrepancies in their original shape of the oscillations.
For instance, {deviation from linear theory} at $z=0$ is at $k \gtrsim 0.09 \ h \rm Mpc^{-1}$ for Model 1, $k \gtrsim 0.07 \ h \rm Mpc^{-1}$ for Model 2 and $k \gtrsim 0.08 \ h \rm Mpc^{-1}$ for Model 3. Furthermore, the wiggles on scales $k \gtrsim 0.3 \ h \rm Mpc^{-1}$ are {almost} totally lost at $z=0$, {and} so the recovery of the wiggles on scales $0.3 \lesssim k \lesssim 0.5 \ h \rm Mpc^{-1}$ would be {an} important {objective of} reconstruction. By comparing the reconstructed {$P_{\rm rw}$ results} with the {linear-theory prediction}, it can be seen that, {despite some} imperfection, the reconstruction still to a large extent {achieves this by} {retrieving} the initial oscillations on {the scales of interest}, i.e., $0.05 \lesssim k \lesssim 0.5 \ h \rm Mpc^{-1}$. {The} success of the reconstruction from the dark matter particles {is largely thanks to their high number density, which allows the late-time nonlinear density field to be accurately produced: in this sense, reconstruction from DM can be considered as an idealised case or an upper limit, which is difficult to achieve in real observations. For a rough comparison, we have shown, in the middle and right columns of Fig.~\ref{fig:2},} the $P_{\rm rw}$ results obtained from {the two} halo catalogues, H1 and H2, {which have number densities similar to typical real galaxy catalogues}. {These results are less impressive} than those for the dark matter particles because of the much smaller halo number densities. Also due to the small halo number densities, {these results are noisier,} which in theory can be {made smoother} by having more realisations of simulations{, or equivalently a larger volume}. By comparing the results of H1 and H2 for the same model, we find that there is no {significant difference} in the unreconstructed {$P_{\rm rw}(k)$} at the same redshift {because the number densities of these two halo catalogues only differ by a factor of 2}.
In most cases the reconstructed {$P_{\rm rw}$ results} of H1 seem {slightly} better compared to those of H2, as a result of the {slightly larger} number densities {in H1, though the difference is again insignificant visually;} {we will revisit this point when discussing} the analytic fit in the next subsection. {Comparing the results with and without reconstruction, it is clear that the former does give less damped and sharper oscillation features, confirming that reconstruction can help to partially recover the lost wiggles. This recovery is more substantial at lower redshifts than at higher redshifts, since at higher redshifts there is little damping to start with. At lower redshifts, on the other hand, reconstruction can even recover some of the wiggles at $k\sim0.5\ h \rm Mpc^{-1}$ where there is literally nothing in the unreconstructed case. We expect that this will greatly help in the accurate measurements of wiggle parameters, especially in models with few wiggles at $k\lesssim 0.3\ h \rm Mpc^{-1}$ -- we will discuss this in the parameter fittings next.} \begin{figure*} \includegraphics[width=2.0\columnwidth]{fit_model1.png} \caption{{[Colour Online]} The analytic fit to the unreconstructed and reconstructed {$P_{\rm rw}$} for Model 1. The black solid lines represent the measured {$P_{\rm rw}$} and the red dashed lines represent the fitting curves given by our analytic model{, Eq.~\eqref{eq:2.4}}. The thin lines are for the unreconstructed cases and the thick lines are for the reconstructed cases. The three columns {from left to right} respectively denote the dark matter particle catalogue DM, and the halo catalogues H1 and H2. Every two rows from the bottom up represent the same redshift shown on the right side. 
{In each group of two rows, the upper one is for the unreconstructed, and the lower one for the reconstructed, case.}} \label{fig:3} \end{figure*} \begin{figure*} \includegraphics[width=2.0\columnwidth]{fit_model2.png} \caption{{[Colour Online]} The same as Fig.~\ref{fig:3} but for Model 2.} \label{fig:4} \end{figure*} \begin{figure*} \includegraphics[width=2.0\columnwidth]{fit_model3.png} \caption{{[Colour Online]} The same as Fig.~\ref{fig:3} but for Model 3.} \label{fig:5} \end{figure*} \begin{table*} \centering \caption{The best-fit parameters of $\omega$, $\phi$ and $\zeta$ and their {$95\%$} uncertainties for {the} three wiggled models {studied in this work}. The values of $\omega$ and $\phi$ in the table are respectively in the units of $\pi \ h^{-1}\rm Mpc$ and $\pi$, {and} their corresponding theoretical values are shown below the title of each model {on the top of the table}. DM denotes the dark matter particle catalogue, H1 the halo catalogue with $n_{\rm halo}=1\times10^{-3}(h^{-1} \rm Mpc)^{-3}$ and H2 the halo catalogue with $n_{\rm halo}=5\times10^{-4}(h^{-1} \rm Mpc)^{-3}$. 
Each group of six rows includes the unreconstructed and reconstructed cases for the same redshift.} \label{tab2} \begin{tabular}{|p{0.2cm}|p{0.3cm}|p{0.2cm}|p{1.4cm}|p{1.4cm}|p{1.4cm}|p{1.4cm}|p{1.4cm}|p{1.4cm}|p{1.4cm}|p{1.4cm}|p{1.4cm}|} \hline \multicolumn{3}{|c|}{} & \multicolumn{3}{|c|}{Model 1} & \multicolumn{3}{|c|}{Model 2} & \multicolumn{3}{|c|}{Model 3}\\ \multicolumn{3}{|c|}{} & \multicolumn{3}{|c|}{$\omega=15$, $\phi=0$} & \multicolumn{3}{|c|}{$\omega=8.57$, $\phi=0$} & \multicolumn{3}{|c|}{$\omega=7.13$, $\phi=0$}\\ \hline $z$ & & $\rm paras$ & $\rm DM$ & $\rm H1$ & $\rm H2$ & $\rm DM$ & $\rm H1$ & $\rm H2$ & $\rm DM$ & $\rm H1$ & $\rm H2$\\ \hline \multirow{6}{*}{$0.0$} & \multirow{3}{*}{unrec} & $\omega$ & $14.70\pm0.24$ & $14.81\pm0.36$ & $14.67\pm0.37$ & $7.07\pm0.74$ & $7.84\pm0.34$ & $7.76\pm0.41$ & $6.47\pm0.16$ & $6.74\pm0.25$ & $6.43\pm0.24$ \\ & & $\phi$ & $0.00\pm0.03$ & $0.01\pm0.05$ & $0.00\pm0.04$ & $0.09\pm0.08$ & $0.02\pm0.04$ & $0.04\pm0.05$ & $0.11\pm0.04$ & $0.06\pm0.06$ & $0.14\pm0.06$ \\ & & $\zeta$ & $7.43\pm0.26$ & $6.95\pm0.40$ & $7.27\pm0.41$ & $7.23\pm0.95$ & $6.21\pm0.44$ & $6.90\pm0.53$ & $6.96\pm0.23$ & $6.62\pm0.35$ & $6.68\pm0.36$ \\ & \multirow{3}{*}{rec} & $\omega$ & $15.05\pm0.02$ & $15.04\pm0.21$ & $14.94\pm0.20$ & $8.58\pm0.03$ & $8.65\pm0.09$ & $8.76\pm0.23$ & $7.19\pm0.02$ & $7.32\pm0.09$ & $7.12\pm0.14$ \\ & & $\phi$ & $-0.01\pm0.01$ & $0.00\pm0.05$ & $0.01\pm0.04$ & $0.00\pm0.01$ & $-0.02\pm0.02$ & $-0.04\pm0.04$ & $-0.01\pm0.01$ & $-0.05\pm0.03$ & $0.03\pm0.05$ \\ & & $\zeta$ & $2.06\pm0.03$ & $3.58\pm0.28$ & $4.00\pm0.25$ & $2.05\pm0.05$ & $3.41\pm0.13$ & $4.07\pm0.32$ & $2.11\pm0.04$ & $3.41\pm0.14$ & $3.75\pm0.22$ \\ \hline \multirow{6}{*}{$0.5$} & \multirow{3}{*}{unrec} & $\omega$ & $14.83\pm0.09$ & $14.58\pm0.20$ & $14.64\pm0.26$ & $8.06\pm0.29$ & $8.01\pm0.23$ & $8.08\pm0.22$ & $6.75\pm0.06$ & $6.74\pm0.15$ & $6.51\pm0.17$ \\ & & $\phi$ & $0.00\pm0.01$ & $0.05\pm0.03$ & $0.02\pm0.04$ & $0.02\pm0.04$ & 
$0.03\pm0.03$ & $0.02\pm0.03$ & $0.07\pm0.02$ & $0.06\pm0.05$ & $0.12\pm0.05$ \\ & & $\zeta$ & $5.88\pm0.10$ & $5.72\pm0.23$ & $6.18\pm0.30$ & $5.53\pm0.40$ & $5.27\pm0.32$ & $5.71\pm0.29$ & $5.33\pm0.08$ & $4.96\pm0.23$ & $5.58\pm0.26$ \\ & \multirow{3}{*}{rec} & $\omega$ & $15.04\pm0.02$ & $15.03\pm0.08$ & $14.95\pm0.13$ & $8.58\pm0.02$ & $8.61\pm0.07$ & $8.64\pm0.15$ & $7.19\pm0.02$ & $7.23\pm0.07$ & $7.26\pm0.11$ \\ & & $\phi$ & $-0.01\pm0.01$ & $0.00\pm0.02$ & $0.01\pm0.03$ & $0.00\pm0.01$ & $-0.01\pm0.02$ & $-0.01\pm0.03$ & $-0.01\pm0.01$ & $-0.03\pm0.03$ & $-0.03\pm0.04$ \\ & & $\zeta$ & $1.51\pm0.04$ & $3.12\pm0.10$ & $3.83\pm0.17$ & $1.52\pm0.04$ & $3.17\pm0.10$ & $3.75\pm0.20$ & $1.53\pm0.04$ & $3.04\pm0.11$ & $3.46\pm0.16$ \\ \hline \multirow{6}{*}{$1.0$} & \multirow{3}{*}{unrec} & $\omega$ & $14.90\pm0.04$ & $14.73\pm0.17$ & $14.80\pm0.17$ & $8.29\pm0.13$ & $8.15\pm0.16$ & $8.05\pm0.19$ & $6.88\pm0.02$ & $6.72\pm0.13$ & $6.63\pm0.12$ \\ & & $\phi$ & $0.00\pm0.01$ & $0.02\pm0.03$ & $0.00\pm0.03$ & $0.01\pm0.02$ & $0.01\pm0.03$ & $0.01\pm0.03$ & $0.05\pm0.01$ & $0.08\pm0.04$ & $0.10\pm0.03$ \\ & & $\zeta$ & $4.73\pm0.05$ & $4.90\pm0.21$ & $5.48\pm0.20$ & $4.39\pm0.18$ & $4.38\pm0.22$ & $5.08\pm0.26$ & $4.19\pm0.03$ & $4.30\pm0.20$ & $4.99\pm0.17$ \\ & \multirow{3}{*}{rec} & $\omega$ & $15.01\pm0.01$ & $15.02\pm0.10$ & $15.02\pm0.15$ & $8.58\pm0.01$ & $8.60\pm0.07$ & $8.52\pm0.14$ & $7.19\pm0.01$ & $7.12\pm0.09$ & $7.03\pm0.10$ \\ & & $\phi$ & $0.00\pm0.01$ & $0.00\pm0.02$ & $-0.01\pm0.03$ & $0.00\pm0.01$ & $-0.01\pm0.02$ & $0.01\pm0.03$ & $-0.01\pm0.01$ & $0.02\pm0.03$ & $0.04\pm0.04$ \\ & & $\zeta$ & $1.11\pm0.03$ & $3.23\pm0.14$ & $3.81\pm0.20$ & $1.09\pm0.04$ & $3.09\pm0.11$ & $3.56\pm0.20$ & $1.12\pm0.03$ & $3.04\pm0.13$ & $3.55\pm0.15$ \\ \hline \multirow{6}{*}{$1.5$} & \multirow{3}{*}{unrec} & $\omega$ & $14.94\pm0.02$ & $14.82\pm0.17$ & $14.64\pm0.22$ & $8.43\pm0.07$ & $8.17\pm0.14$ & $7.84\pm0.18$ & $6.95\pm0.01$ & $6.81\pm0.15$ & $6.70\pm0.14$ \\ 
& & $\phi$ & $0.00\pm0.01$ & $0.01\pm0.03$ & $0.03\pm0.03$ & $0.00\pm0.01$ & $0.02\pm0.03$ & $0.07\pm0.03$ & $0.04\pm0.01$ & $0.05\pm0.05$ & $0.08\pm0.04$ \\ & & $\zeta$ & $3.90\pm0.02$ & $4.43\pm0.21$ & $5.17\pm0.26$ & $3.60\pm0.10$ & $4.07\pm0.20$ & $4.92\pm0.25$ & $3.41\pm0.02$ & $3.92\pm0.22$ & $4.73\pm0.21$ \\ & \multirow{3}{*}{rec} & $\omega$ & $15.01\pm0.01$ & $15.07\pm0.09$ & $14.94\pm0.23$ & $8.58\pm0.01$ & $8.49\pm0.10$ & $8.60\pm0.23$ & $7.17\pm0.01$ & $7.19\pm0.11$ & $7.18\pm0.12$ \\ & & $\phi$ & $0.00\pm0.01$ & $-0.01\pm0.02$ & $0.01\pm0.05$ & $0.00\pm0.01$ & $0.02\pm0.02$ & $-0.01\pm0.05$ & $0.00\pm0.01$ & $-0.02\pm0.04$ & $-0.01\pm0.04$ \\ & & $\zeta$ & $0.88\pm0.03$ & $3.24\pm0.12$ & $3.90\pm0.29$ & $0.84\pm0.03$ & $3.08\pm0.15$ & $3.66\pm0.32$ & $0.83\pm0.02$ & $3.19\pm0.16$ & $3.65\pm0.17$ \\ \hline \end{tabular} \end{table*} \subsection{Wiggle parameter fitting} \label{sec:4.2} {Figs.~\ref{fig:3}, \ref{fig:4} and \ref{fig:5} show,} respectively, the results of the analytic fit to {the} unreconstructed and reconstructed {$P_{\rm rw}$ results} for {the} three models {studied in this work}. It can be seen that, in most cases, {the} analytic model {Eq.~\eqref{eq:2.4}, with a Gaussian damping function characterised by the parameter $\zeta(z)$,} fits the {data} very well. The corresponding best-fit parameters of $\omega$, $\phi$ and $\zeta(z)$, {as well as} their uncertainties, are displayed in Table~\ref{tab2}, which aid a quantitative understanding. As mentioned before, we mainly focus on the results of H1 and H2, and thus the results of DM are taken as a reference and not discussed in detail. The three parameters are mainly determined by the remaining peaks in the wiggles. We shall first discuss the results of the damping parameter, followed by the oscillation parameters, and then combine them to clarify the improvement given by reconstruction.
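To illustrate the fitting procedure, the sketch below fits $(\omega, \phi, \zeta)$ by brute-force least squares over a parameter grid. The model form used here, a sinusoid with a Gaussian damping envelope and the amplitude fixed at the fiducial $A=0.05$, is only a stand-in for Eq.~\eqref{eq:2.4}, and the grid search is a simple substitute for the actual optimiser used in this work:

```python
import numpy as np

A_FID = 0.05  # fiducial feature amplitude used in the simulations

def wiggle_model(k, omega, phi, zeta):
    """Assumed damped-wiggle form: omega is in units of pi h^-1 Mpc and
    phi in units of pi, matching the conventions of Table 2; the
    Gaussian envelope exp(-(k zeta)^2 / 2) models the nonlinear damping."""
    return A_FID * np.sin(omega * np.pi * k + phi * np.pi) \
        * np.exp(-0.5 * (k * zeta) ** 2)

def fit_wiggles(k, p_rw, omegas, phis, zetas):
    """Least-squares fit of (omega, phi, zeta) over a parameter grid."""
    best, best_chi2 = None, np.inf
    for om in omegas:
        for ph in phis:
            for ze in zetas:
                chi2 = np.sum((p_rw - wiggle_model(k, om, ph, ze)) ** 2)
                if chi2 < best_chi2:
                    best, best_chi2 = (om, ph, ze), chi2
    return best
```

In practice a gradient-based fitter over the measured wiggle spectrum, with measurement errors weighting the residuals, would replace the grid search; the sketch only shows how the three parameters enter the fit.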
The damping parameter $\zeta$ effectively describes the extent of the nonlinear effects in {structure formation} and {characterises the suppression} of the primordial oscillations. It is {zero in the linear regime, such as} at $z=49$, and gradually increases as the redshift {decreases because the structures become progressively more nonlinear}. Thus reconstruction aims to reduce $\zeta$ and retrieve the primordial oscillations. Table~\ref{tab2} shows that, although the reconstructed values of $\zeta$ are not reduced to zero due to the imperfection of reconstruction, they are evidently smaller than the unreconstructed values in all cases, and the uncertainties of $\zeta$ are also reduced after reconstruction in most low-redshift ($z<1$) cases, which {confirms} that the reconstruction succeeds in retrieving the lost wiggles to a great extent. Specifically, by comparing the cases among different models but the same catalogues and redshifts, the {corresponding} values after reconstruction seem to be {nearly} independent of {the} model, which implies that the improvement on the {recovery} of the wiggles does not depend on the shape of the primordial oscillations\footnote{{This makes sense given that the amplitude of the primordial oscillations is small in this work, and so the wiggles can be considered as small perturbations to the primordial density field. Reconstruction, on the other hand, is sensitive to the overall distribution of matter.}}. This is similar to the unreconstructed $\zeta$, which supports the qualitative inference in the previous subsection that the nonlinear regime is similar at the same redshift for different models, although the unreconstructed values of {$\zeta$ in} Model 1 are commonly a bit larger than those {in} Model 2 and Model 3.
Additionally, for each model, the reconstructed values of {$\zeta$ in} H1 are smaller than those {in} H2 at the same redshift, and the same trend {can be seen} in the unreconstructed values as well, which is attributed to the fact that H1 has twice {the} halo number density of H2. Furthermore, for each catalogue, it appears that the reconstructed $\zeta$ is only reduced with increasing redshift at low redshift ($z<1$); it cannot be further reduced at high redshift ($z>1$). Taking Model 1 as an example, it could also be found in Fig.~\ref{fig:3} that the fitting curves of reconstructed wiggles at high redshift show very little difference compared with the low-redshift cases. By contrast, the unreconstructed $\zeta$ decreases with increasing redshift, thus the difference between the reconstructed and unreconstructed $\zeta$ seems to be large at low redshift {and} small at high redshift. In other words, the improvement given by the reconstruction is effective at low redshift but relatively limited at high redshift. Next, we shall {focus on} whether the improved wiggles after reconstruction can {lead to} more accurate {measurements of} the oscillation parameters $\omega$ and $\phi$. Regarding the oscillation frequency $\omega$, the reconstructed values of $\omega$ are much closer to the theoretical values than the unreconstructed values in all cases, which is especially evident at low redshifts. Except for a few high-redshift cases, the improvement on the uncertainties after reconstruction is evident in most cases as well. Specifically, when comparing amongst different models, it appears that the reconstructed $\omega$ values of Model 1 show {slightly better performance than} those of Model 2 and Model 3, which is because the reconstructed wiggles of Model 1 have four evident peaks within the fitting range of scales at all redshifts, while Model 2 and Model 3 only have two; {clearly} more peaks {can} enhance the accuracy of the fit.
Similarly, the unreconstructed values of Model 1 seem to be closer to the theoretical values than those of Model 2 and Model 3; more details will be discussed later. For each model, the reconstructed values of H1 are slightly better than those of H2 at the same redshift in most low-redshift cases, which {can be explained by} the larger halo number density of H1. By contrast, there is no evident distinction of the unreconstructed values at the same redshift between H1 and H2, which could result from the non-negligible noise in the wiggle spectra. This can be checked by having more realisations of the simulations in the future. Moreover, for each catalogue, the reconstructed values become closer to the theoretical values as the redshift increases {in the} low-redshift {range, but} cannot be further improved at high redshift, which is similar to how the redshift affects the reconstructed $\zeta$ {as we found} above. By contrast, it is shown that the unreconstructed values of DM become more accurate with increasing redshift, but those of H1 and H2 do not show the same trend, which could be caused by the evident noise in the wiggle spectra due to the wide gap between the number densities of dark matter particles and haloes. Therefore, the reconstruction mainly improves the prediction of $\omega$ at low redshift, and the predictions for Model 2 and Model 3 are improved more than those for Model 1. The situation is quite different in {the case of} the oscillation phase $\phi$. The unreconstructed values of Model 1 and Model 2 are determined very well in most cases, so the reconstructed values show only a slight improvement over the {unreconstructed} $\phi$ even for low-redshift cases. However, for Model 3 the unreconstructed values largely deviate from the theoretical value in all cases, and the unreconstructed values of H2 deviate even further than those of H1 at the same redshift.
Therefore the reconstruction {once again} shows its advantage {of allowing more accurate measurement of} $\phi$, especially for H2 at low redshift. When combining the results of {all} three parameters, it seems that the reconstruction is {most} useful at low redshifts, $z<1$, and Model 2 and Model 3 benefit more from {it} than Model 1. Although the remaining peaks of Model 1 are kept very well so that its reconstructed results are better than those of the other two models, the improvement {is} relatively limited for {it}. Thus the improvement depends less on how great the reconstructed wiggles are, {and more} on how poor the primordial wiggles are kept {before reconstruction}; in other words, the reconstruction would be {more} useful if the primordial wiggles are lost {to a greater extent}. As we mentioned before, the wiggles on scales $k \gtrsim 0.3 \ h \rm Mpc^{-1}$ are totally lost at $z=0$; Model 1 has its first two original peaks {outside} this range of scales, so these two peaks are effectively preserved at low redshift. By contrast, Model 3 has one original peak at the same position as the first peak of Model 1, which is effectively preserved, and {its} second peak {is} at the same position as the third peak of Model 1, which is almost lost. Hence the primordial wiggles of Model 3 are {preserved less well} than those of Model 1, and Model 3 would benefit more from the reconstruction. Similarly, Model 2 has two original peaks {in the range $k\lesssim0.5\ h \rm Mpc^{-1}$}: the first is on {a} smaller scale compared with the first peak of the other two models, and so it is not preserved as {well} as the first peak of {the} other two models due to the larger damping effect, while the second peak is totally wiped out. Thus the primordial wiggles of Model 2 are kept even worse than those of Model 3.
However, since the fitting range of scales includes the first trough to the left of the first peak for Model 2 but not for Model 3, {this} partially balances the accuracy of the fit for Model 2. Therefore the improvement by the reconstruction is similar for Model 2 and Model 3, and both benefit from reconstruction {substantially} more than Model 1. \begin{figure*} \includegraphics[width=2.0\columnwidth]{conf_ell_model1.png} \caption{{[Colour Online]} {Forecasts of} constraints on the oscillatory feature parameters for {a DESI-like} survey with {a} survey area of $14,000$ $\rm deg^2$, {for} the primordial oscillations of Model 1. The left side is for LRGs and the right side is for ELGs. The upper panels show the 1D marginalised likelihoods. The middle and lower panels show the marginalised $68\%$ and $95\%$ confidence contours for every two out of three feature parameters. The green and grey colours represent, respectively, the cases for $k_{\rm max}=0.25h \rm Mpc^{-1}$ with and without reconstruction, while {the} blue and red colours represent the cases for $k_{\rm max}=0.5h \rm Mpc^{-1}$ with and without reconstruction. } \label{fig:6} \end{figure*} \begin{figure*} \includegraphics[width=2.0\columnwidth]{conf_ell_model2.png} \caption{{[Colour Online]} The same as Fig.~\ref{fig:6} but for Model 2.} \label{fig:7} \end{figure*} \begin{figure*} \includegraphics[width=2.0\columnwidth]{errorA.png} \caption{{[Colour Online]} Forecasts of the marginalised uncertainties of the oscillation amplitude $A$ as a function of the frequency $\omega$, {for the} three models. The first column is for LRGs and the second column is for ELGs. The bottom panels are for Model 1, followed by Model 2 and Model 3 upwards. The dotted black lines mark the theoretical amplitudes {of the oscillations, $A=0.05$,} used in the forecasts. 
{The meanings of the different colours and line styles are indicated in the legends.} The same colours represent the cases with same $k_{\rm max}$ and same situation of reconstruction but different survey areas; the thick lines are for the survey area of $14,000$ $\rm deg^2$ and the thin lines are for $9,000$ $\rm deg^2$.} \label{fig:8} \end{figure*} \subsection{Constraints on oscillation parameters for DESI-like survey} \label{sec:4.3} Since the results of the constraints on the oscillation parameters are similar between Model 2 and Model 3, we take Model 1 and Model 2 as two examples to {illustrate and} discuss how the reconstruction {could} improve the constraints in {a} real galaxy survey. We shall first talk about some common features exhibited in both models, {and} then clarify the distinctions between them. {Finally}, we also forecast how much the uncertainties of the feature amplitude {can be} improved after reconstruction for three models. Figs.~\ref{fig:6} and \ref{fig:7} show, respectively, the constraints on the oscillation parameters for {a DESI-like} survey with a survey area {of} 14,000 $\rm deg^2$, based on the primordial oscillations of Model 1 and Model 2. The marginalised likelihoods of the oscillation parameters shown in the upper panels of both figures indicate that the unreconstructed cases with $k_{\rm max} = 0.5 \ h \rm Mpc^{-1}$ {(red lines)} give better constraints than the unreconstructed cases with $k_{\rm max} = 0.25 \ h \rm Mpc^{-1}$ {(grey)}, because {in the former case} more $k$ modes {are} included in the Fisher matrix which increase the accuracy of the constraints. Additionally, by comparing the likelihoods of the same $k_{\rm max}$, {we find} that reconstruction leads to {more} robust constraints on the parameters, {because it} successfully recovers {some of} the lost peaks {in} the nonlinear regime and provides more effective $k$ modes. 
Furthermore, stronger constraints are shown for ELGs {(right panels)} compared with LRGs {(left panels)}, since the former has more available redshift bins and larger halo number density for the same redshift bins. The above trends are also shown in the marginalised {2D} confidence contours {in the lower part of the corner plots in Fig.~\ref{fig:6} and \ref{fig:7}}. In particular, it can be seen in most contours that the cases for $k_{\rm max} = 0.5 \ h \rm Mpc^{-1}$ are largely improved by reconstruction as the boundary of the $68\%$ limits shrinks to be around the boundary of the $95\%$ limits after reconstruction. This is because the peaks on scales $k \gtrsim 0.2 \ h \rm Mpc^{-1}$ are heavily damped at low redshift, {and} the recovered wiggles at $k = (0.2-0.5) \ h \rm Mpc^{-1}$ significantly contribute to the constraints. By contrast, since the peaks on scales $k \lesssim 0.2 \ h \rm Mpc^{-1}$ are preserved very well, the {recovery of} wiggles for $k_{\rm max} = 0.25 \ h \rm Mpc^{-1}$ {is} not as important as {in the case} for $k_{\rm max} = 0.5 \ h \rm Mpc^{-1}$. Besides, the $A$-$\omega$ and $A$-$\phi$ contours for $k_{\rm max} = 0.25 \ h \rm Mpc^{-1}$ show {that these} parameters are degenerate with each other, and these degeneracies are {broken and replaced with} stronger constraints {when} $k_{\rm max} = 0.5 \ h \rm Mpc^{-1}$, which include more $k$ modes. However, {in all cases} the $\omega$-$\phi$ contours show that {the two} parameters are strongly degenerate. It is understood from the {$P_{\rm rw}$ results} that the region $k = (0.1-0.3) \ h \rm Mpc^{-1}$ dominates the results of constraints, because on larger scales ($k \lesssim 0.1 \ h \rm Mpc^{-1}$) the uncertainty is large due to {cosmic variance, while} on smaller scales ($k \gtrsim 0.3 \ h \rm Mpc^{-1}$) the wiggles are damped and thus have {less} influence on the constraints.
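For reference, such marginalised 2D contours are drawn from the relevant $2\times2$ block of the parameter covariance matrix (the inverse of the marginalised Fisher matrix). A minimal sketch, assuming a Gaussian posterior and the standard $\Delta\chi^2$ thresholds for two degrees of freedom:

```python
import numpy as np

def confidence_ellipse(cov2, level=0.68):
    """Semi-axes and orientation of the 2D confidence ellipse for a
    Gaussian posterior with 2x2 covariance cov2.  The 68% and 95%
    contours correspond to Delta chi^2 = 2.30 and 6.17 for 2 dof."""
    dchi2 = {0.68: 2.30, 0.95: 6.17}[level]
    evals, evecs = np.linalg.eigh(cov2)      # eigenvalues in ascending order
    a, b = np.sqrt(dchi2 * evals[::-1])      # major, minor semi-axes
    angle = np.arctan2(evecs[1, -1], evecs[0, -1])  # major-axis orientation
    return a, b, angle
```

The strength of a degeneracy then shows up as a large ratio of the major to the minor semi-axis, with the angle giving the degeneracy direction.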
{We have checked explicitly that} combinations of $\omega$ and $\phi$ {along the degeneracy direction shown in Figs.~\ref{fig:6} and \ref{fig:7}} {lead to little change to the $P_{\rm rw}$ curves in the range of} $k = (0.1-0.3) \ h \rm Mpc^{-1}$. {This is a feature of the oscillation model itself, which is why a strong degeneracy is still present even in the cases of reconstruction (albeit significantly less strong than the unreconstructed cases).} When comparing the constraints between Model 1 and Model 2, it appears that the choice of $k_{\rm max}$ has a different influence on the constraints. For both models, the constraints are similar for $k_{\rm max} = 0.5 \ h \rm Mpc^{-1}$ but {significantly} different for $k_{\rm max} = 0.25 \ h \rm Mpc^{-1}$. For Model 2 there is a wide gap {between the marginalised 1D distributions of both the unreconstructed and reconstructed cases,} when increasing $k_{\rm max}$ from $0.25 \ h \rm Mpc^{-1}$ to $0.5 \ h \rm Mpc^{-1}$, which could be attributed to {the fact} that Model 2 has few peaks at $k \lesssim 0.25 \ h \rm Mpc^{-1}$. Thus for the primordial oscillations with relatively {low} frequency, increasing $k_{\rm max}$ would be very beneficial to improve the constraints. Furthermore, the improvement on the constraints by the reconstruction shows that Model 2 benefits more from the reconstruction than Model 1, which is consistent with the analysis in Section~\ref{sec:4.2}. Lastly, {as \citet{Ballardini:2019tuc},} we show the marginalised uncertainties of $A$ as a function of $\omega$ for {the} three models in Fig.~\ref{fig:8} and discuss the implications of {the results}. First, we focus on Model 1 and Model 2 which show almost identical variations of the uncertainties {w.r.t.~$\omega$}, due to their identical form of the oscillations --- {we note that these are the same oscillation model with different choices of $\omega$, and thus we should have expected {\it exactly} identical constraints in Fig.~\ref{fig:8}. 
However, as we have discussed above, the best-fit values of $\zeta$ are slightly different (cf.~Table \ref{tab2}), even though we expect any model dependence of $\zeta$ to be weak (this suggests that more realisations of simulations are needed to measure $\zeta$ as functions of halo number density, redshift, and model (or model parameter), etc., more accurately, which will be left for future work).} As expected, ELGs {place slightly} tighter constraints than LRGs due to their larger number densities {and redshift range}. The sharp peaks appearing at $\omega \simeq 100 \ h^{-1}\rm Mpc$ are due to the degeneracy between the oscillatory features and the BAO {wiggles} over the signal-dominated range of scales. For $\omega \gtrsim 150 \ h^{-1}\rm Mpc$ the uncertainties do not vary with $\omega$, but for smaller $\omega$ the behaviour becomes more complicated and depends on $k_{\rm max}$. For $k_{\rm max} = 0.25 \ h \rm Mpc^{-1}$ we can see {an increase} in the uncertainties at $\omega \lesssim 50 \ h^{-1}\rm Mpc$, while {a} similar {increase} starts to appear at even smaller $\omega$ -- $20 \ h^{-1}\rm Mpc$ -- for $k_{\rm max} = 0.5 \ h \rm Mpc^{-1}$. Thus larger $k_{\rm max}$ has an {extra} advantage of {significantly} reducing the uncertainties for small $\omega${, in addition to giving more stringent constraints (everything else the same) for all $\omega$ overall}. {By comparing} the pairs of {curves with} the same colours, i.e., {the} same cases {($k_{\rm max}$ and reconstructed vs.~unreconstructed)} but different survey areas, {we find that, as expected, a} larger survey area always {gives} better constraints. {Most interestingly, everything else equal, doing reconstruction can significantly reduce the uncertainties of $A$.
As an example, for large $\omega$ values, in the case of $k_{\rm max}=0.5\ h \rm Mpc^{-1}$ and survey area equal to $14,000~{\rm deg}^2$, reconstruction reduces $\sigma(A)$ from $\sim0.003$ to $\sim0.002$, and this improvement is stronger than not performing reconstruction, but instead going from $9,000$ to $14,000$ ${\rm deg}^2$ with $k_{\rm max}$ fixed to $0.25$ or $0.5\ h \rm Mpc^{-1}$, or increasing $k_{\rm max}$ from $0.25$ to $0.5\ h \rm Mpc^{-1}$ keeping the survey area fixed to either $9,000$ or $14,000$ ${\rm deg}^2$. A similarly good improvement can be seen with $k_{\rm max}=0.25\ h \rm Mpc^{-1}$ or survey area equal to $9,000$ ${\rm deg}^2$, when doing reconstruction. In certain cases, e.g., the large-$\omega$ regime of the lower panels of Fig.~\ref{fig:8}, reconstruction with $k_{\rm max}=0.25\ h \rm Mpc^{-1}$ and a survey area equal to $9,000$ ${\rm deg}^2$ (the thin green dashed line) can lead to comparable constraints to not doing reconstruction but with $k_{\rm max}=0.5\ h \rm Mpc^{-1}$ and a survey area equal to $14,000$ ${\rm deg}^2$ (the thick orange dot-dashed line). Given that increasing survey area is usually not possible, and increasing $k_{\rm max}$ is also not straightforward given the effort required to model the nonlinear regime of matter clustering, especially with RSD, performing reconstruction seems to be a cheap way to maximise the exploitation and scientific return of survey data.} \Baojiu{The behaviours of Model 3 are broadly similar to those of Model 1 and Model 2, e.g., both the absolute and the relative heights of the different curves, as well as their shapes are the same as before. There are, however, some notable differences, e.g., the main peaks in $\sigma(A)$ in Model 3 are at slightly different values of $\omega$ from the other models, and the curves are also less smooth. 
As mentioned above, the bump (which has the structure of a double peak) of $\sigma(A)$ for Model 1 and Model 2 is related to the BAO peak in the matter/galaxy correlation function, which is at $\simeq100h^{-1}{\rm Mpc}$. The primordial wiggles for those two models, in configuration space, correspond to a spike at matter or halo separation $r=\omega$. In those two models, when $\omega\gg100h^{-1}{\rm Mpc}$, the BAO and primordial peaks are far apart and thus the former does not affect the accuracy of the measurement for the latter. As $\omega$ approaches $100h^{-1}{\rm Mpc}$ from above, the BAO and primordial peaks start to `interfere', leading to changes of both the amplitude and shape of the latter, making it harder to measure its parameters accurately. We speculate that the dip --- which causes the double-peak structure in $\sigma(A)$ for Model 1 and Model 2 --- is due to the fact that, when the primordial peak does not coincide well with the centre of the (rather wide) BAO peak, its shape can be affected in an asymmetric manner, making the measurement of its parameters even more inaccurate. In contrast, the structure of the primordial wiggles in Model 3 is much more complicated in configuration space, because $m\ne1$ in Eq.~\eqref{eq:2.2}: this can cause the differences in the fine details of $\sigma(A)$ between this and the other models.} \section{Conclusions} \label{sec:5} In this paper, we have investigated the effect of \Baojiu{density} reconstruction on retrieving \Baojiu{hypothetical} oscillatory features in the primordial power spectrum \Baojiu{which are erased} on small scales in the late-time Universe due to the nonlinear cosmological evolution. \Baojiu{We considered} three different oscillatory features \Baojiu{that are added} to a simple power-law primordial power spectrum, \Baojiu{for which} we ran N-body simulations and \Baojiu{identified dark matter halo} catalogues from the output snapshots \Baojiu{at a number of redshifts}. 
We reconstructed the initial density field from the \Baojiu{particle data, and halo} catalogues \Baojiu{with different number densities.} Finally, we compared \Baojiu{the fitted feature parameters from} the unreconstructed and reconstructed \Baojiu{density fields,} to identify the improvement brought by reconstruction. We showed that reconstruction \Baojiu{can be very effective in helping retrieve} the lost wiggles \Baojiu{--- with the finite volume of our simulations, not only does it lead to much less biased best-fit values of the feature parameters, but it also substantially shrinks the measurement uncertainty.} The improvement was especially \Baojiu{strong where the primordial features have been more severely erased to start with, such as at $z<1$.} \Baojiu{In order to forecast} the constraints on the feature parameters for \Baojiu{a typical DESI-like galaxy} survey, \Baojiu{we} modelled the observed broadband galaxy power spectrum based on the \Baojiu{{\sc halofit}} model with the addition of our oscillatory features, then used the analytic marginalised Fisher matrix to calculate the constraints on the oscillation parameters given the specifications of \Baojiu{DESI LRGs and ELGs}. We found that reconstruction led to more robust constraints on the oscillation parameters, \Baojiu{with the equivalent effects of enlarging the survey area (but at a much smaller cost) and/or increasing the $k$ range.} \Baojiu{While reconstruction is commonly used to improve the measurement of the BAO scale, and hence the determination of the expansion rate and properties of dark energy, this work has demonstrated that similar applications are possible in other cases where certain features in matter clustering are present. 
This is particularly true if these features are in the mildly nonlinear regime, $0.1\lesssim{k/(h{\rm Mpc}^{-1})}\lesssim0.5$, since this range of scales is where the nonlinear reconstruction method used here helps most: on even larger scales the benefit of reconstruction is insignificant, while on even smaller scales reconstruction won't help much. Indeed, while a comparison with other, e.g., the standard, reconstruction methods is beyond the scope of this paper, based on experience we would expect those latter methods to also improve the prospects of constraining potential primordial features.} \Baojiu{The methodology exemplified in this paper assumes that we know the functional form of the primordial features {\it a priori} --- this is how we forecasted constraints on the oscillation amplitude $A$. However, the reconstruction step is completely independent of any assumption of a particular primordial feature, and hence any method developed for detecting general features from the matter clustering should apply to and benefit from the reconstructed density field.} \Baojiu{As a first step, the present study is based on various simplifications, and we discuss a couple here which can be improved in the future. The first is related to the damping parameter $\zeta$ post-reconstruction. As we have seen, $\zeta$ controls the improvement over the unreconstructed case. The simulations carried out for this work --- due to their limited resolution, box size, numbers of output snapshots, coverage of models and number of realisations --- did not allow us to more accurately quantify how $\zeta$ depends on the oscillation model (though we suspect that any dependence on parameters $A,\omega$ should be weak as long as $A$ is small), redshift, or the tracer type or number density. It certainly would be great if better simulations become available, allowing further improvements in these aspects. 
} \Baojiu{The second is related to the modelling of redshift-space distortions (RSD), for which we have adopted a simplistic prescription and pushed it well beyond the limit (e.g., $k\simeq0.5~h{\rm Mpc}^{-1}$) at which it is expected to work. This is not an issue for a forecast work, but for constraints using real data it should be treated more carefully. The reconstruction method here has been extended to remove RSD from observed galaxy catalogues \citep{Wang:2019zuq}, though that is unlikely to work reliably at $k$ as large as $\simeq0.5~h{\rm Mpc}^{-1}$. Of course, we can always cut $k_{\rm max}$ to something that we are comfortable with. However, as mentioned above, if we would like to take maximum benefit from reconstruction, it is likely that we need to go substantially beyond $k\simeq0.1~h{\rm Mpc}^{-1}$. This can be achieved, for example, by using emulators of redshift-space galaxy or halo clustering \citep[see, e.g.,][]{Zhai:2018plk,Kobayashi:2020zsw}; actually, as long as the primordial oscillations are weak (as implied by current null detections), one might assume that their presence has little or negligible impact on RSD.} \Baojiu{The ultimate objective, of course, is to apply this method to real observational data from future galaxy surveys such as Euclid and DESI. For this, the above-mentioned improvements, amongst many others, would need to be done properly. These will be left for future works, in which we plan to carry out updated forecasts for these surveys and eventually real constraints.} \section*{Acknowledgements} We thank collaborators within Euclid and DESI for various discussions while this project was going on. YL thanks Robert Smith for his support during this project. BL is supported by the European Research Council through ERC Starting Grant ERC-StG-716532-PUNCA, and the Science Technology Facilities Council (STFC) through ST/T000244/1 and ST/P000541/1. 
HMZ is supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) [funding reference number CITA 490888-16]. This work used the DiRAC@Durham facility managed by the Institute for Computational Cosmology on behalf of the STFC DiRAC HPC Facility (www.dirac.ac.uk). The equipment was funded by BEIS capital funding via STFC capital grants ST/K00042X/1, ST/P002293/1, ST/R002371/1 and ST/S002502/1, Durham University and STFC operations grant ST/R000832/1. DiRAC is part of the National e-Infrastructure. \section*{Data Availability} Simulation data used in this work can be made available upon request to the authors. \input{ms.bbl} \bibliographystyle{mnras} \bsp \label{lastpage} \end{document}
\section{Introduction} \begin{wraptable}{r}{6cm} \begin{tabular}{lrr} Method & Triangles & Clique \\ \toprule GCN & 50.0 & 50.0 \\ GCN+D & 75.7 & 83.2 \\ GCN+D+ID & 80.4 & 83.4 \\ GIN & 74.1 & 97 \\ GIN+D & 75.0 & 99.4 \\ GIN+D+ID & 70.5 & 100.0 \\ GAT & 50.0 & 50.0 \\ GAT+D & 88.5 & 99.9 \\ GAT+D+ID & 94.1 & 100.0 \\ \midrule SVM+WL & 67.2 & 73.1 \\ SVM+Graphlets & 99.6 & 60.3 \\ \midrule FCNN & 55.6 & 54.6 \\ TF & 100.0 & 70.0 \\ TF+AM & 100.0 & 100.0 \\ TF-IS+AM & 86.7 & 100.0 \\ TF-IS+AM4 & 97.5 & 100.0 \\ \end{tabular} \caption{Test accuracy, \% on proposed synthetic datasets. We expect that a method should achieve 100\% accuracy on both datasets.} \label{tab:synch_datasets} \vspace{-0.75cm} \end{wraptable} Many tasks need to handle the graph representation of data in areas such as chemistry \cite{Wale}, social networks \cite{fan2019graph}, and transportation \cite{Zhao2019}. Furthermore, applications are not limited to such inherently graph-structured tasks but also include images \cite{ML-GCN_CVPR_2019} and 3D polygons \cite{Point-GNN}, which can be converted to graph data formats. Because of these broad applications, Graph Deep Learning is an important field in machine learning research. Graph neural networks (GNNs, \cite{scarselli2008graph}) are a common approach to perform machine learning with graphs. Most graph neural networks update the graph node vector embeddings using message passing. Node vector embeddings are usually initialized with data features and local graph features like node degrees. Then, for the $(n+1)$-th stacked layer, the new node state is computed from the node vector representations of the previous layer ($n$). There exist many approaches, and most of them follow this general pattern. Message passing occurs on the graph adjacency matrix and is completely baked into the algorithm. Deep learning for graphs uses data sets from a variety of fields~\cite{Jure2014snapnets,hu2020open}. For example, there are protein molecules, social networks, and web networks. 
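To make the generic update concrete, a single message-passing layer of the kind sketched above can be written in a few lines (an illustrative sketch of our own; the sum aggregation, the ReLU nonlinearity, and the weight matrix $W$ are arbitrary choices rather than any specific published architecture):

```python
import numpy as np

def message_passing_layer(H, A, W):
    """One generic message-passing update: the new state of each node is
    computed from its own previous state plus the aggregated states of its
    neighbours (read off the adjacency matrix A), then transformed by W."""
    messages = A @ H                             # sum neighbour embeddings
    return np.maximum(0.0, (H + messages) @ W)   # ReLU((self + messages) W)
```

Stacking $n$ such layers makes each node's embedding depend on its $n$-hop neighbourhood, which is why adjacency information is ``baked into'' the computation.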
Several tools to investigate and visualize these graph data have been proposed~\cite{Ryan2015graphtool}. Usually, properties of graphs are measured using graph invariants, i.e., order, size, connectivity, etc. Applying GNNs to a task that can be solved by a simple classifier with graph invariants as input features is excessive. This is because there are usually fast and excellent algorithms for calculating graph invariants. Therefore, whether a target graph problem is hard enough to warrant the development of graph deep learning is a very important question, but it has not been investigated. Researchers in traditional graph algorithms have utilized synthetic data with various characteristics to investigate these issues. For example, G-set~\cite{stefanxxxgset} is used in the study of the Max-cut problem. It is an artificially generated data set whose characteristics make the Max-cut problem difficult. We follow this direction. \citet{Dehmamy2019UnderstandGNN} gave theoretical insights into the graph-topology learning power of GNNs. However, to the best of our understanding, there exist counterexample topological datasets that are not easily solvable by GNNs. Based on the structure of the algorithms, it is assumed that deeper representations in GNNs capture adjacency information, including graph topology \cite{Xu2019}. As a counterexample, we propose two seemingly easy synthetic graph classification tasks, focusing solely on graph topology: a triangle detection task and a clique distance task. Modern GNNs perform surprisingly poorly on both tasks (see Table~\ref{tab:synch_datasets}), failing to find these ``bermuda'' triangles in the first proposed task. In addition, we suggest an approach that can solve both synthetic tasks perfectly. \section{Synthetic Tasks} \label{sec:synthetic_tasks} In this section, we describe the proposed synthetic tasks along with a way to generate them. 
The tasks are detecting the presence of a triangle (a loop of length 3) in a graph (\textbf{Triangle}) or detecting whether the distance between two cliques is lower or higher than a certain threshold (\textbf{Clique Distance}). Within the generation process, we employ logistic regression-based filtering which removes spurious features from the data, leaving presumably only the features which should be directly related to solving the task at hand. \subsection{Undermanned Logistic Regression Filtering} \label{sub:undermanned_logistic_filtering} \begin{wrapfigure}{r}{6cm} \includegraphics[width=0.35\textwidth]{figs/clique_dist.pdf} \caption{An example from the Clique Distance dataset. The shortest path between cliques is highlighted in red.} \label{fig:clique_dist} \vspace{-0.30cm} \end{wrapfigure} The objective of synthetic datasets is to check whether a model can process a target phenomenon. Spurious correlations in synthetically created data interfere with this objective and can alternatively be seen as the data containing a sub-task that is not the one we are interested in. We employ \emph{undermanned} logistic regression filtering to combat this effect. We build a logistic regression classifier with a just-not-enough feature template set, so that it is impossible for it to solve the task at hand. We then mostly select data items that cannot be solved with this undermanned classifier. Ideally, such a classifier should have low accuracy even on unfiltered data, but that may not be the case for randomly generated data. The filtering procedure builds on overlapping $n$-fold cross-validation. First, similarly to standard cross-validation, we split the dataset into $n$ folds. We train a classifier on $m < n$ folds and use the remaining $n - m$ for validation. 
By repeating the training-validation process $n$ times, shifting folds each time in a round-robin fashion, we can compute for each data item the number of times the classifier was able to produce the correct result. Intuitively, we are not interested in data for which the classifier was always correct, so we sample the final dataset by biasing it towards the items which had at least one classification error. We use the following feature templates for both of our datasets: 1) Node degree (number of edges), 2) Node degrees of both edge ends, 3) Number of nodes in the graph, 4) Number of edges in the graph. The first two feature templates are used twice: to check both the presence of a feature (e.g., there exists a node with a degree of five) and its count. This feature set should not be enough to detect the presence of a triangle in the data. Still, the logistic regression classifier has an accuracy of $82.5\%$ on the unfiltered data for the clique distance problem and $87\%$ for the triangle problem. We generate 200k graph candidates (100k for each class) and filter them to produce 10k training items and 1k test items. \begin{figure} \centering \includegraphics[width=.35\textwidth]{figs/triangles_pos.pdf} \includegraphics[width=.35\textwidth]{figs/triangles_neg.pdf} \caption{Two data examples from the Triangles dataset. The left one has a triangle highlighted in bold red. The right one does not have any triangles.} \label{fig:triangles} \end{figure} \subsection{Triangles Dataset} In the Triangles dataset, each data item is a graph that contains either exactly one triangle (3-clique) or exactly zero triangles. We generate it by sampling both random graphs (nodes are connected to other random nodes) and $k$ nearest neighbor graphs (nodes are sampled as points on a 2D plane and then connected to their neighbors). Then the edges which belong to triangles are removed from the graph until only one or zero triangles remain. Examples are shown in Figure~\ref{fig:triangles}. 
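The undermanned filtering procedure of Section~\ref{sub:undermanned_logistic_filtering} could be implemented roughly as follows (a hypothetical sketch using scikit-learn; the function name, default fold counts, and the dense feature-matrix interface are our own choices, not the authors' code):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def undermanned_filter_mask(X, y, n_folds=5, m_train=4, seed=0):
    """Overlapping n-fold filtering: train the (deliberately weak) logistic
    regression on m_train folds, validate on the remaining folds, and shift
    the folds round-robin n_folds times.  Returns a boolean mask that is
    True for items misclassified at least once -- the items towards which
    the final dataset sampling is biased."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(X)), n_folds)
    n_correct = np.zeros(len(X), dtype=int)
    for shift in range(n_folds):
        train = np.concatenate([folds[(shift + j) % n_folds] for j in range(m_train)])
        val = np.concatenate([folds[(shift + j) % n_folds] for j in range(m_train, n_folds)])
        clf = LogisticRegression(max_iter=1000).fit(X[train], y[train])
        n_correct[val] += (clf.predict(X[val]) == y[val])
    # each item is validated (n_folds - m_train) times in total
    return n_correct < (n_folds - m_train)
```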
The graphs themselves are rather complicated, usually with multiple cycles of different lengths present in a graph. \subsection{Clique Distance Dataset} The second synthetic dataset checks whether an approach can find high-order patterns in the data. We generate a base BA-graph \cite{Albert:2002:rmp} and then attach two cliques to two distinct nodes of the base graph. The connection parameter $m$ is set to $k - 2$, where $k$ is the clique size. This enforces that the BA graphs can contain cliques only up to size $k - 1$, so there is no noise in the data. We sample the number of nodes uniformly from 5 to 20 and set the clique size to 4. An example of data is shown in Figure~\ref{fig:clique_dist}. Graphs in which the shortest distance between the cliques is below a threshold get a label of 0, otherwise 1. We set the distance threshold to 4. \section{Experiments} \label{sec:experiments} We perform experiments on both synthetic and real-world data. Real-world datasets are standard benchmarks for graph classification tasks, and synthetic datasets highlight specific capabilities of the proposed model. All experiments were performed with 10-fold cross-validation with a fixed random seed, and we report average scores. We use three types of graph neural networks as baselines: the original \textbf{GCN} \cite{scarselli2008graph}, \textbf{GIN} \cite{Xu2019}, and \textbf{GAT} \cite{Velickovic2018}. We use the GNN implementations provided by the BenchmarkingGnns project \cite{dwivedi2020benchmarkgnns}. As a non-neural baseline we use an SVM \cite{svm-paper} with the WL kernel \cite{shervashidze2011weisfeiler}. We label it as \textbf{SVM+WL}. For synthetic datasets, we also try the graphlet sampling kernel \cite{pmlr-v5-shervashidze09a}. We label it as \textbf{SVM+Graphlets}. We use the GraKel \cite{JMLR:v21:18-370} framework for kernel implementations. We use six layers for all GNNs and leave other parameters at the 100k settings of the provided configuration files. 
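For concreteness, the Clique Distance construction described above could be sketched as below (our own simplified stand-in, not the authors' generator: a random connected base graph replaces the BA graph, and all names and defaults are illustrative):

```python
import random
from collections import deque
from itertools import combinations

def clique_distance_example(n_base=12, k=4, threshold=4, seed=0):
    """Sketch: build a random connected base graph (a stand-in for the BA
    graph), attach two k-cliques to distinct base nodes, and label the graph
    0 if the shortest path between the cliques is below `threshold`, else 1."""
    rng = random.Random(seed)
    adj = {v: set() for v in range(n_base)}
    for v in range(1, n_base):                   # random recursive tree => connected
        u = rng.randrange(v)
        adj[u].add(v); adj[v].add(u)
    entries = []
    for anchor in rng.sample(range(n_base), 2):  # attach a fresh k-clique
        new = list(range(len(adj), len(adj) + k))
        for v in new:
            adj[v] = set()
        for u, v in combinations(new, 2):
            adj[u].add(v); adj[v].add(u)
        adj[anchor].add(new[0]); adj[new[0]].add(anchor)
        entries.append(new[0])
    dist = {entries[0]: 0}                       # BFS from one clique entry node
    queue = deque([entries[0]])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    return adj, int(dist[entries[1]] >= threshold)
```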
We use three types of input node features for GNNs. The first one is uniform: each node is initialized with the same feature vector. It is not denoted specifically. The second type adds node degree features encoded as a single scalar. We denote it as \textbf{+D}. The last one also adds node ID features in one-hot encoding, denoted as \textbf{+ID}. We do not use specific node features for SVM kernels. \subsection{Node Index Shuffling + Transformer Approach} \label{sub:our_approach} We also include our proposed method, which uses a Transformer \cite{NIPS2017_7181} model as a state update equation. We use the Transformer in two configurations. The first variant uses the Transformer as is, effectively treating the graph as fully connected. We denote it as \textbf{TF}. The second variant, labeled as \textbf{TF+AM}, limits the self-attention to the graph adjacency matrix. Limiting the Transformer's self-attention to a graph adjacency matrix makes the overall method a hybrid between the message-passing approach and embedding topology into the embeddings. For TF, we use four Transformer layers with four heads and a hidden layer dimension of 32 for self-attention. In the TF+AM setting, we add graph adjacency matrix masking to the first two Transformer layers. In the AM4 setting, we use adjacency matrix masking in all four Transformer layers, effectively ending up with a GNN-like network. We use a concatenation of local node embeddings and the sum of adjacent node embeddings as initial features. We also permute the numbering of individual nodes, while keeping the graph topology intact. We hypothesize that the node shuffling forces the model to capture graph topology directly in the embeddings. This method is inspired by a tensor decomposition approach, and in the Appendix we provide an extension for handling node and edge labels as well. We also report variants without index shuffling (\textbf{-IS}) and a fully-connected neural network baseline (\textbf{FCNN}). 
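The adjacency-masking idea behind TF+AM can be illustrated with a single untrained self-attention layer (a minimal numpy sketch of our own; learned projections and multiple heads are omitted, so this is not the actual Transformer implementation):

```python
import numpy as np

def masked_self_attention(X, A, mask_with_adjacency=True):
    """One self-attention step over node embeddings X (n x d).  With the
    adjacency mask (TF+AM-style), attention logits are kept only where the
    adjacency matrix A has an edge (self-loops preserved), so each node
    attends to its neighbours; without the mask this is the fully connected
    TF case."""
    n, d = X.shape
    scores = X @ X.T / np.sqrt(d)                 # single-head, untrained logits
    if mask_with_adjacency:
        allowed = A.astype(bool) | np.eye(n, dtype=bool)
        scores = np.where(allowed, scores, -np.inf)
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)  # row-wise softmax
    return weights @ X
```

With the mask on, each node attends only to itself and its neighbours, which is what turns the fully connected TF variant into the GNN-like TF+AM variant.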
\subsection{Synthetic Data} \begin{wraptable}{r}{5cm} \begin{tabular}{lrr} Method & NCI1 & MUTAG \\ \toprule GCN & 79.0 & 87.5 \\ GIN & 81.0 & 90.0 \\ GAT & 78.7 & 87.1 \\ SVM+WL & 78.4 & 82.8 \\ \midrule TF & 75.9 & 86.5 \\ TF+AM & 82.6 & 88.7 \\ \end{tabular} \caption{Test F1 score, \% on real-world datasets} \label{tab:real_datasets} \end{wraptable} Firstly, we test all the algorithms on the proposed synthetic data. Table~\ref{tab:synch_datasets} shows the prediction results of the tested algorithms. In the Triangles dataset, GIN seems to capture some features even in the default setting, but both GCN and GAT completely fail to learn anything, producing the same label for all graphs. When node degree features are added, the accuracy of GCN and GAT increases significantly, but node IDs do not bring significant further improvement. SVM with the WL kernel also fails to achieve 100\% accuracy. The WL kernel focuses on labels and is weak for unlabeled tasks. The graphlet kernel, on the other hand, achieves almost perfect accuracy. We believe that its failures come from the stochastic nature of the graphlet sampling. Both variants of the proposed method achieve perfect accuracy. The graphlet kernel-based SVM should in theory be a perfect match for this problem because it can capture triangles directly. Other methods have to perform cycle detection. A method must develop a way of tracking node identities to be able to detect cycles. However, GNNs, even when provided with node identity information, cannot detect triangles perfectly. On the other hand, our proposed method can. We hypothesize that label shuffling can be a strong data augmentation for this task; however, an additional experiment shows that even without label shuffling, the TF+AM setting can achieve almost perfect accuracy ($>$99\%). Additionally, it seems that attention-based methods (TF variants and GAT) perform better at detecting topological structures. 
The Clique Distance dataset shows a picture for GNNs similar to that of the Triangles problem in the default setting, but GIN and GAT can solve the problem with additional features, while GCN still struggles with it. Node identities do not seem to be of much help here either. SVMs perform poorly on this task because their kernels do not have enough representational power. Our method in its basic form struggles with this dataset. Occasionally it can learn good topological embeddings and perform well, but this behavior is not stable. However, adding an adjacency mask even to one Transformer layer changes the situation drastically, and the experimental setting of two adjacency-restricted layers and two full Transformer layers is always able to solve the task. \subsection{Real-World Data} We additionally check whether the proposed method keeps its performance on real-world datasets as well. We select standard benchmark graph datasets, NCI1 and MUTAG \cite{shervashidze2011weisfeiler}; the results are shown in Table~\ref{tab:real_datasets}. We use the parameters for GNNs provided by \cite{dwivedi2020benchmarkgnns}. We also employ loss weighting for unbalanced datasets for all methods. We do not use the graphlet kernel SVM for real-world data. The general picture is the same for all real-world datasets: GCN and plain TF are the lowest, with GIN, GAT, and TF+AM being higher. \section{Conclusion} \label{sec:conclusion} We propose two seemingly simple synthetic datasets which show that GNNs cannot extract graph topology features reliably. We also propose a method that reliably extracts graph topology features, at least on the proposed datasets. Future work would be to investigate how existing GNN methods could be modified to extract topology features, or to find new regimes where GNNs break. \section{Acknowledgments} We thank Shotaro Yano for his help with generating the datasets. We also thank Masafumi Shingu for discussions on our approach for handling topological problems.
\section{Introduction} Let $E$ be an elliptic curve over a totally real field $K$. We say that $E$ is modular if there exists a Hilbert eigenform $f$ over $K$ of parallel weight $2$ such that $L(E,s)=L(f,s)$. The classical Shimura-Taniyama conjecture asserts that all elliptic curves over $\mathbb{Q}$ are modular. This conjecture for semistable elliptic curves, which was the crucial step in proving Fermat's Last Theorem, was proved by Wiles \cite{Wi} and Taylor-Wiles \cite{TW}. Later, the general case of the conjecture was completed by Breuil-Conrad-Diamond-Taylor \cite{BCDT}. The Shimura-Taniyama conjecture has a natural generalization to totally real fields: \begin{Conj} \label{ST} Let $K$ be a totally real number field. Then, any elliptic curve over $K$ is modular. \end{Conj} Recently, a breakthrough on this problem was achieved by Freitas-Le Hung-Siksek. In their paper \cite{FLS}, they prove Conjecture \ref{ST} for any real quadratic field. The aim of this paper is to attack Conjecture \ref{ST} for abelian totally real fields. More precisely, the main theorem is the following: \begin{Thm} \label{mainab} Let $K$ be a totally real number field which is abelian over $\mathbb{Q}$. Suppose that $K$ is unramified at every prime above 3, 5, and 7. Then, any elliptic curve over $K$ is modular. \end{Thm} \begin{Rem} In contrast to the proof of \cite[Theorem 1]{FLS}, we do not use geometric techniques on modular curves. Instead, our proof involves the modularity lifting theorem \cite{SW1} in an essential way, and hence at present we cannot remove the hypothesis that $K$ is abelian. \end{Rem} In the rest of this section, we describe the structure of the proof of Theorem \ref{mainab}.\\ Firstly, we prove the following proposition, which is complementary to \cite[Theorem 7]{FLS}; we consider elliptic curves with additive reduction at a prime dividing $p=5$ or $7$ instead of semi-stable reduction. The proof is given in Section \ref{pfmainadd}. 
\begin{Prop} \label{mainadd} Let $p=5$ or $7$. Let $K$ be a totally real field, $\mathfrak{p}$ a prime of $K$ dividing $p$, and $E$ an elliptic curve over $K$. Assume that $K$ is unramified at $\mathfrak{p}$ and that $E$ has additive reduction at $\mathfrak{p}$ with $\bar{\rho}_{E,p}: \mathrm{Gal}(\bar{K}/K)\rightarrow \mathrm{GL}_2(\mathbb{F}_p)$ (absolutely) irreducible. Then $E$ is modular, unless either of the following exceptional cases hold: \begin{enumerate} \item $p=5$, $v_\mathfrak{p}(j_E)\equiv 1$ mod $3$, and $E$ has additive potential good (supersingular) reduction at $\mathfrak{p}$, or \item $p=7$, $v_\mathfrak{p}(j_E)\equiv 2$ mod $3$, and $E$ has additive potential good (ordinary) reduction at $\mathfrak{p}$. \end{enumerate} \end{Prop} Here, $j_E$ denotes the $j$-invariant of $E$, and $v_\mathfrak{p}$ is the normalized discrete valuation of $K$ at $\mathfrak{p}$. Also, $\bar{\rho}_{E,p}$ denotes the mod $p$ Galois representation defined by the $p$-torsion points of $E$. Note that, for $p\neq 2$, $\bar{\rho}_{E,p}$ is irreducible if and only if $\bar{\rho}_{E,p}$ is absolutely irreducible: This follows from the presence of the complex conjugates in $G_K$. The basic strategy for the proof of Proposition \ref{mainadd} is based on \cite[Theorem 7]{FLS}, but because we treat the cases of additive reduction, we need to look at local mod $p$ Galois representations more carefully. The local computations are carried out in Section \ref{local}. For this, we heavily use the results of Kraus \cite{Kra}.\\ Secondly, combining Proposition \ref{mainadd} and \cite[Theorem 7]{FLS}, we prove the following theorem. The proof is given in Section \ref{pfmain2}. \begin{Thm}\label{main2} Let $K$ be a totally real field in which 7 is unramified. If $E$ is an elliptic curve over $K$ with $\bar{\rho}_{E,7}$ (absolutely) irreducible, then $E$ is modular. 
\end{Thm} Theorem \ref{main2} can be seen as a mod 7 variant of the following theorem due to Thorne: \begin{Thm}\label{Thorne} (\cite[Theorem 7.6]{Th}) Let $K$ be a totally real field with $\sqrt{5}\notin K$. If $E$ is an elliptic curve over $K$ with $\bar{\rho}_{E,5}$ (absolutely) irreducible, then $E$ is modular. \end{Thm} Finally, we apply Theorem \ref{main2} and Theorem \ref{Thorne} to prove Theorem \ref{mainab}; we show that, for an elliptic curve $E$ which is not yet known to be modular, a quadratic twist of $E$ becomes semi-stable at all primes dividing 3, in which case we already know its modularity by \cite{Fr}. The details are given in Section \ref{pfmainab}. \section{Local computations} \label{local} First, we fix the notation of this section: \begin{enumerate} \item $p$ is a prime number. \item $F$ is an absolutely unramified $p$-adic local field. \item $E$ is an elliptic curve over $F$. \item $v$ is the normalized $p$-adic discrete valuation of $F$. \item $\omega_1:I\rightarrow \mu_{p-1}(\bar{F})\rightarrow \mathbb{F}_p^\times$ denotes the fundamental character of level $1$, and $\omega_2, \omega_2':I\rightarrow \mu_{p^2-1}(\bar{F})\rightarrow \mathbb{F}_{p^2}^\times$ denote the fundamental characters of level $2$. Here, $I$ is the inertia subgroup of $G_F$. \end{enumerate} For the proof of Proposition \ref{mainadd}, we consider only the cases when $E$ has additive reduction; that is, $E$ has potential multiplicative reduction, potential good ordinary reduction, or potential good supersingular reduction. In the following subsections, we treat these three cases separately, and we review the results of Kraus in \cite{Kra} without proof. We note that, although Kraus proves his results for elliptic curves over $\mathbb{Q}_p$, the proofs also work without change for those over any absolutely unramified $p$-adic field. 
\subsection*{Potential multiplicative reduction case}\label{pm} \begin{Prop} \label{semist} Let $p\geq 3$ be a prime number, $F$ an unramified extension of $\mathbb{Q}_p$, and $E$ an elliptic curve over $F$ with additive potential multiplicative reduction. Then, the restriction of $\bar{\rho}_{E,p}$ to the inertia subgroup $I$ is of the form \begin{equation} \label{Msemist} \bar{\rho}_{E,p}|_{I}\simeq \begin{pmatrix} \omega_1^{\frac{p+1}{2}} & * \\ 0 & \omega_1^{\frac{p-1}{2}} \\ \end{pmatrix}. \end{equation} \end{Prop} \begin{proof} See \cite[PROPOSITION 10]{Kra}. \end{proof} Since the projective image of \eqref{Msemist} is of the form $\begin{pmatrix} \omega_1 & * \\ 0 &1 \\ \end{pmatrix}$, we obtain the following corollary: \begin{Cor} \label{Csemist} In the setting of Proposition \ref{semist}, the projective image $\mathbb{P}\bar{\rho}_{E,p}(G_F)$ contains a cyclic subgroup of order $p-1$. \end{Cor} \subsection*{Potential ordinary reduction case} \begin{Prop} \label{ord} Let $p\geq 5$ be a prime number, $F$ an unramified extension of $\mathbb{Q}_p$, and $E$ an elliptic curve over $F$ with additive potential ordinary reduction. Denote $\Delta$ for a minimal discriminant of $E$ and $v$ for the normalized discrete valuation of $F$. Set $\alpha=(p-1)v(\Delta)/12$, which is an integer as noted just before 2.3.2 in \cite{Kra}. Then, the restriction of $\bar{\rho}_{E,p}$ to the inertia subgroup $I$ is of the form \begin{equation} \label{Mord} \bar{\rho}_{E,p}|_I\simeq \begin{pmatrix} \omega_1^{1-\alpha} & * \\ 0 & \omega_1^{\alpha} \\ \end{pmatrix}. \end{equation} \end{Prop} \begin{proof} See \cite[PROPOSITION 1]{Kra}. \end{proof} The projective image of \eqref{Mord} is of the form $\begin{pmatrix} \omega_1^{1-2\alpha} & * \\ 0 &1 \\ \end{pmatrix}$, and $\omega_1^{1-2\alpha}$ is a character of order $m:=\frac{p-1}{(p-1,1-2\alpha)}$. Thus, the projective image $\mathbb{P}\bar{\rho}_{E,p}(G_F)$ contains a cyclic subgroup of order $m$. 
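The order $m=\frac{p-1}{(p-1,\,1-2\alpha)}$ just introduced can be cross-checked numerically (an illustrative script of our own, not part of the argument):

```python
from math import gcd

def order_m(p, alpha):
    """Order of the character omega_1^(1-2*alpha), i.e.
    m = (p-1)/gcd(p-1, 1-2*alpha)."""
    return (p - 1) // gcd(p - 1, abs(1 - 2 * alpha))

# p = 5 = 2^2 + 1: 1 - 2*alpha is odd, hence coprime to p - 1 = 4,
# so m = p - 1 = 4 for every integer alpha (cf. Corollary Cord5 below).
assert all(order_m(5, a) == 4 for a in range(-10, 11))

# p = 7 = 3*2 + 1: alpha = v(Delta)/2, so 1 - 2*alpha = 1 - v(Delta);
# m drops to (p-1)/3 = 2 exactly when v(Delta) is 1 mod 3 (cf. Cord7).
for v_delta in range(0, 12, 2):  # v(Delta) is even here
    assert order_m(7, v_delta // 2) == (2 if v_delta % 3 == 1 else 6)
```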
In the following, we compute the order $m$ for certain $p$, which we will take as $5$ or $7$ in Section \ref{pfmainadd}. Suppose first that $p$ is a prime number of the form $p=2^a+1$ for an integer $a\geq 2$. Since $1-2\alpha$ is an odd integer, $1-2\alpha$ is prime to $p-1=2^a$ so that we have $m=p-1$. Thus, we have the following corollary. \begin{Cor} \label{Cord5} Let $p$ be a prime number of the form $p=2^a +1$ with $a\geq 2$ an integer, $F/\mathbb{Q}_p$ an unramified extension, and $E$ an elliptic curve over $F$ with additive potential good ordinary reduction. Then, the projective image $\mathbb{P}\bar{\rho}_{E,p}(G_F)$ contains a cyclic group of order $p-1$. \end{Cor} Suppose next that $p$ is a prime number of the form $p=3\cdot 2^a+1$ with $a\geq 1$ an integer. Since $\alpha=v(\Delta)/2$ is an integer, $1-2\alpha=1-v(\Delta)$ is odd. Thus, we have \[ m = \begin{cases} \frac{p-1}{3} & (v(\Delta)\equiv 1\ \mathrm{mod}\ 3) \\ p-1 & (\mathrm{otherwise}). \end{cases} \] Therefore, we obtain the following corollary: \begin{Cor} \label{Cord7} Let $p$ be a prime number of the form $p=3\cdot 2^a+1$ for an integer $a\geq 1$, $F/\mathbb{Q}_p$ an unramified extension, and $E$ an elliptic curve over $F$ with additive potential good ordinary reduction. Let also $\Delta$ be a minimal discriminant of $E$. Then, $\mathbb{P}\bar{\rho}_{E,p}(G_F)$ contains a cyclic group of order $(p-1)/3$ or $p-1$, depending on whether $v(\Delta)\equiv 1$ mod $3$ or $v(\Delta)\equiv 0,2$ mod $3$, respectively. \end{Cor} \subsection*{Potential supersingular reduction case} As in the previous subsections, we begin with Kraus' result. \begin{Prop} \label{ss} Let $p\geq 5$ be a prime number, $F$ an unramified extension of $\mathbb{Q}_p$, and $E$ an elliptic curve over $F$ with additive potential supersingular reduction. We choose a minimal model \[y^2=x^3+Ax+B\] of $E$. Also, let $\Delta$ denote a minimal discriminant of $E$. 
\begin{itemize} \item[(a)] If $(v(\Delta), v(A), v(B))$ is one of the triples $(2,1,1), (3,1,2), (4,2,2),$ $(8,3,4)$, $(9,3,5)$, or $(10,4,5)$, then $\bar{\rho}_{E,p}$ is wildly ramified. \item[(b)] If $(v(\Delta), v(A), v(B))$ is not any of the above triples, then the restriction of $\bar{\rho}_{E,p}$ to the inertia subgroup $I$ is given by \begin{equation} \label{Mss} \bar{\rho}_{E,p}|_{I}\otimes \mathbb{F}_{p^2}\simeq \begin{pmatrix} \omega_2^\alpha {\omega_2'}^{p-\alpha} & 0 \\ 0 & {\omega_2'}^\alpha {\omega_2}^{p-\alpha} \\ \end{pmatrix}. \end{equation} Here, $\alpha=(p+1)v(\Delta)/12$ is an integer as noted in \cite[PROPOSITION 2]{Kra}. \end{itemize} \end{Prop} \begin{proof} Part (a) is a consequence of LEMME 2 and PROPOSITION 4 in \cite{Kra}. Part (b) follows directly from PROPOSITION 2 and LEMME 2 in \cite{Kra}. \end{proof} From case (a) in the above proposition, we immediately obtain the following corollary: \begin{Cor} \label{ssa} Let the notation be as in Proposition \ref{ss}. If the condition of (a) holds, then the projective image $\mathbb{P}\bar{\rho}_{E,p}(G_F)$ contains a nontrivial $p$-group. \end{Cor} Next, we consider case (b) in Proposition \ref{ss}. The image of \eqref{Mss} in $\mathrm{PGL}_2(\mathbb{F}_{p^2})$ is of the form $\begin{pmatrix} \omega_2^{-(p-1)(2\alpha+1)} & 0 \\ 0 & 1 \\ \end{pmatrix}$. Since the character $\omega_2^{-(p-1)(2\alpha+1)}$ is of order $n:=\frac{p+1}{(p+1,2\alpha+1)}$, the projective image $\mathbb{P}(\bar{\rho}_{E,p}\otimes \mathbb{F}_{p^2})(G_F)$ (and hence $\mathbb{P}(\bar{\rho}_{E,p})(G_F)$) contains a cyclic subgroup of order $n$. In the rest of this subsection, we compute the number $n$ for certain $p$. We will apply these computations to the case $p=5$ or $7$ in Section \ref{pfmainadd}. Suppose first that $p$ is a prime number of the form $p=2^a-1$ with $a\geq 3$ an integer. Since $\alpha$ is an integer, $2\alpha+1$ is prime to $p+1=2^a$ so that $n=p+1$.
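As a quick numerical sanity check (not part of the proofs), the orders $m=\frac{p-1}{(p-1,1-2\alpha)}$ from the potential ordinary case and $n=\frac{p+1}{(p+1,2\alpha+1)}$ from the present case can be tabulated for $p=5$ and $p=7$; the following Python sketch (the helper names \texttt{order\_m} and \texttt{order\_n} are ours, introduced only for this check) iterates over the valuations $v(\Delta)$ that make $\alpha$ integral:

```python
from math import gcd

def order_m(p, v_delta):
    """Order of omega_1^(1-2*alpha) in the potential ordinary case,
    with alpha = (p-1)*v_delta/12 assumed integral (as in Kraus)."""
    assert (p - 1) * v_delta % 12 == 0, "alpha must be an integer"
    alpha = (p - 1) * v_delta // 12
    return (p - 1) // gcd(p - 1, abs(1 - 2 * alpha))

def order_n(p, v_delta):
    """Order of omega_2^{-(p-1)(2*alpha+1)} in the potential supersingular
    case (b), with alpha = (p+1)*v_delta/12 assumed integral (as in Kraus)."""
    assert (p + 1) * v_delta % 12 == 0, "alpha must be an integer"
    alpha = (p + 1) * v_delta // 12
    return (p + 1) // gcd(p + 1, 2 * alpha + 1)

# ordinary case: m = 4 for p = 5 = 2^2 + 1; for p = 7 = 3*2 + 1,
# m = 2 iff v(Delta) = 1 mod 3 and m = 6 otherwise
assert all(order_m(5, v) == 4 for v in (0, 3, 6, 9))
assert all(order_m(7, v) == (2 if v % 3 == 1 else 6) for v in (0, 2, 4, 6, 8, 10))

# supersingular case: n = 8 for p = 7 = 2^3 - 1; for p = 5 = 3*2 - 1,
# n = 2 iff v(Delta) = 2 mod 3 and n = 6 otherwise
assert all(order_n(7, v) == 8 for v in (0, 3, 6, 9))
assert all(order_n(5, v) == (2 if v % 3 == 2 else 6) for v in (0, 2, 4, 6, 8, 10))
```

The checks agree with the case distinctions stated in the corollaries of these subsections.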
Thus, we have proved the following corollary: \begin{Cor} \label{ss7} Let $p$ be a prime number of the form $p=2^a-1$ with $a\geq 3$ an integer, $F/\mathbb{Q}_p$ an unramified extension, and $E$ an elliptic curve over $F$ with additive potential good supersingular reduction. Assume the condition of (b) in Proposition \ref{ss} holds. Then, the projective image $\mathbb{P}\bar{\rho}_{E,p}(G_F)$ contains a cyclic group of order $p+1$. \end{Cor} Suppose next that $p$ is a prime number of the form $p=3\cdot 2^a-1$ with $a\geq 1$ an integer. Since $\alpha = v(\Delta)/2$ is an integer, $2\alpha +1=v(\Delta)+1$ is odd. Thus, we have \[ n = \begin{cases} \frac{p+1}{3} & (v(\Delta)\equiv 2\ \mathrm{mod}\ 3) \\ p+1 & (\mathrm{otherwise}). \end{cases} \] Therefore, we obtain the following corollary: \begin{Cor} \label{ss5} Let $p$ be a prime number of the form $p=3\cdot 2^a-1$ with $a\geq 1$ an integer, $F/\mathbb{Q}_p$ an unramified extension, and $E$ an elliptic curve over $F$ with additive potential good supersingular reduction. Let also $\Delta$ be a minimal discriminant of $E$. Assume the condition of (b) in Proposition \ref{ss} holds. Then, $\mathbb{P}\bar{\rho}_{E,p}(G_F)$ contains a cyclic group of order $(p+1)/3$ or $p+1$, depending on whether $v(\Delta)\equiv 2$ mod $3$ or $v(\Delta)\equiv 0,1$ mod $3$, respectively. \end{Cor} \section{Proof of Proposition \ref{mainadd}} \label{pfmainadd} Before proceeding to the proof of Proposition \ref{mainadd}, we cite two theorems from \cite{FLS}. First, the following theorem is a consequence of a combination of modularity switching arguments and a modularity lifting theorem. As remarked in \cite{FLS}, the modularity lifting theorem used there is a relatively direct consequence of the theorem of Breuil-Diamond, which itself builds on the work of many people. \begin{Thm} \label{mlt} (\cite[Theorem 3 and Theorem 4]{FLS}) Let $E$ be an elliptic curve over a totally real field $K$.
If $p= 5$ or $7$, and if $\bar{\rho}_{E,p}|_{G_{K(\zeta_p)}}$ is absolutely irreducible, then $E$ is modular. \end{Thm} Second, the following result will be useful in our setting for proving that $\bar{\rho}_{E,p}|_{G_{K(\zeta_p)}}$ is absolutely irreducible. \begin{Thm} \label{projim} (\cite[Proposition 9.1]{FLS}) Let $p= 5$ or $7$, and $K$ be a totally real field satisfying $K\cap \mathbb{Q}(\zeta_p)=\mathbb{Q}$. For an elliptic curve $E$ over $K$ such that $\bar{\rho}_{E,p}$ is (absolutely) irreducible but $\bar{\rho}_{E,p}|_{G_{K(\zeta_p)}}$ is absolutely reducible, we have the following: \begin{enumerate} \item If $p=5$, then $\bar{\rho}_{E,5}(G_K)$ is a group of order $16$, and its projective image $\mathbb{P}\bar{\rho}_{E,5}(G_K)$ is isomorphic to $(\mathbb{Z}/2\mathbb{Z})^2$. \item If $p=7$, then $\mathbb{P}\bar{\rho}_{E,7}(G_K)$ is isomorphic to $S_3$ or $D_4$. \end{enumerate} \end{Thm} We are now ready to prove Proposition \ref{mainadd}. Let $p=5$ or $7$, $K$ a totally real field, and $\mathfrak{p}$ a prime ideal of $K$ dividing $p$. Let $E$ be an elliptic curve over $K$. Also, denote by $\Delta$ a minimal discriminant of $E_\mathfrak{p}:=E\otimes_K K_\mathfrak{p}$. We make the following assumptions as in the statement of Proposition \ref{mainadd}: \begin{enumerate} \item $K$ is unramified at $\mathfrak{p}$, \item $\bar{\rho}_{E,p}$ is (absolutely) irreducible, and \item $E$ has additive reduction at $\mathfrak{p}$. \end{enumerate} We split the proof into three cases according to the reduction type of $E$:\\ (i) If $E$ has additive potential multiplicative reduction at $\mathfrak{p}$, then Corollary \ref{Csemist} for $E_\mathfrak{p}$ implies that $\mathbb{P}\bar{\rho}_{E,p}(G_K)$ has a cyclic subgroup of order $p-1$. Thus, Theorem \ref{projim} implies that $\bar{\rho}_{E,p}|_{G_{K(\zeta_p)}}$ cannot be absolutely reducible.
Hence $E$ is modular by Theorem \ref{mlt}.\\ (ii) Suppose next that $E$ has additive potential ordinary reduction at $\mathfrak{p}$.\\ If $p=5$, then Corollary \ref{Cord5} for $E_\mathfrak{p}$ shows that $\mathbb{P}\bar{\rho}_{E,5}(G_K)$ contains a cyclic subgroup of order $4$. Thus, by Theorem \ref{projim}, $\bar{\rho}_{E,5}|_{G_{K(\zeta_5)}}$ is absolutely irreducible, whence $E$ is modular. Also, if $p=7$ and $v(\Delta)\equiv 0,2$ mod $3$, then Corollary \ref{Cord7} shows that $\mathbb{P}\bar{\rho}_{E,7}(G_K)$ has a cyclic subgroup of order $6$. Hence, Theorem \ref{projim} implies that $\bar{\rho}_{E,7}|_{G_{K(\zeta_7)}}$ is absolutely irreducible, in which case $E$ is modular. We consider the remaining case; that is, $p=7$ and $v_\mathfrak{p}(\Delta)\equiv 1$ mod $3$. If $j_E\neq 0$, then this case is equivalent to the case $v_\mathfrak{p}(j_E)\equiv 2$ modulo $3$; in fact, this follows by taking a minimal model $y^2=x^3+Ax+B$ of $E_\mathfrak{p}$ and noting that $j_E=-110592A^3/\Delta$, so that $v_\mathfrak{p}(j_E)=3v(A)-v(\Delta)$. If $j_E =0$, then $E$ is modular since $E$ becomes defined over $\mathbb{Q}$ after suitable (solvable) base change.\\ (iii) Finally, suppose that $E$ has additive potential supersingular reduction at $\mathfrak{p}$. If the condition (a) in Proposition \ref{ss} holds, then Corollary \ref{ssa}, Theorem \ref{projim}, and Theorem \ref{mlt} show that $E$ is modular. Assume the condition (b) in Proposition \ref{ss} holds. Then we have the following two cases: \begin{itemize} \item If $p=5$ and $v(\Delta)\equiv 0,1$ mod $3$, then $\mathbb{P}\bar{\rho}_{E,5}(G_K)$ contains a cyclic subgroup of order 6 by Corollary \ref{ss5}. Hence, Theorem \ref{projim} and Theorem \ref{mlt} show that $E$ is modular. If $j_E\neq 0$, then the remaining case when $p=5$ and $v_\mathfrak{p}(\Delta)\equiv2$ mod $3$ can be rephrased as $v_\mathfrak{p}(j_E)\equiv 1$ modulo $3$. If $j_E=0$, then $E$ is modular as in (ii).
\item If $p=7$, then $E$ is modular by Corollary \ref{ss7} together with Theorem \ref{projim} and Theorem \ref{mlt}. \end{itemize} In summary, combining (i), (ii), and (iii), we see that $E$ is modular unless one of the following conditions holds: \begin{enumerate} \item $p=5$, $v_\mathfrak{p}(j_E)\equiv 1$ mod $3$, and $E$ has additive potential good (supersingular) reduction at $\mathfrak{p}$, or \item $p=7$, $v_\mathfrak{p}(j_E)\equiv 2$ mod $3$, and $E$ has additive potential good (ordinary) reduction at $\mathfrak{p}$. \end{enumerate} This proves Proposition \ref{mainadd}. \section{Proof of Theorem \ref{main2}}\label{pfmain2} Let $K$ and $E$ be as in Theorem \ref{main2}. If $E$ has semi-stable reduction at some prime dividing $7$, then the assertion follows from \cite[Theorem 7]{FLS}. So suppose that $E$ has additive reduction at every prime $\mathfrak{p}|7$. By Proposition \ref{mainadd} and Theorem \ref{mlt}, we have only to consider the case when $E$ has additive potential ordinary reduction at every prime $\mathfrak{p}|7$ and $\bar{\rho}_{E,7}|_{G_{K(\zeta_7)}}$ is absolutely reducible. In this case, we may apply the modularity lifting theorem \cite{SW2} due to Skinner-Wiles in order to prove the modularity of $E$. \begin{Rem} A similar argument does not reprove Theorem \ref{Thorne} even if $K$ is just unramified at $5$; in fact, Proposition \ref{mainadd} implies that an elliptic curve $E$ over $K$ with $\bar{\rho}_{E,5}|_{G_{K(\zeta_5)}}$ absolutely reducible must have additive potential supersingular reduction at every prime $\mathfrak{p}|5$. In such a case, the theorem of Skinner-Wiles \cite{SW2} is not available. \end{Rem} \begin{Rem} In his thesis \cite{Bao}, Le Hung essentially shows the following: if $K$ is a totally real field unramified at 5 and 7, and if $E$ is an elliptic curve over $K$ with both $\bar{\rho}_{E,p}$ ($p=5,7$) irreducible, then $E$ is modular.
This follows from \cite[Proposition 6.1]{Bao} combined with the modularity lifting theorem due to Skinner-Wiles \cite{SW2}. \end{Rem} \begin{Rem} Very recently, S. Kalyanswamy \cite{Ka} announced a proof of a version of Theorem \ref{main2}. He actually proves a new modularity theorem \cite[Theorem 3.4]{Ka} for certain Galois representations, and applies it to elliptic curves in \cite[Theorem 4.4]{Ka}. For clarity, we describe the difference between Theorem \ref{main2} and \cite[Theorem 4.4]{Ka}: Kalyanswamy considers elliptic curves over a totally real field $F$ with $F\cap \mathbb{Q}(\zeta_7)=\mathbb{Q}$, which is weaker than the assumption that $F$ is unramified at 7, while he also imposes an additional condition on the mod 7 Galois representations. Therefore, both Theorem \ref{main2} and \cite[Theorem 4.4]{Ka} have their own advantages. \end{Rem} \section{Proof of Theorem \ref{mainab}} \label{pfmainab} For the proof of Theorem \ref{mainab}, we need another modularity theorem due to Freitas \cite{Fr}. This theorem essentially follows from \cite{SW1}, \cite{SW2}, and Theorem \ref{mlt}. \begin{Thm} \cite[Theorem 6.3]{Fr} \label{semi} Let $K$ be an abelian totally real field where 3 is unramified. Let $E$ be an elliptic curve over $K$ semistable at all primes $\mathfrak{p}|3$. Then, $E$ is modular. \end{Thm} Also, we note a well-known torsion version of the criterion of N\'eron-Ogg-Shafarevich. \begin{Lem} \cite[VII, Exercises 7.9]{Sil} \label{inertia} Let $F$ be a local field, $E$ an elliptic curve over $F$ with potential good reduction, and $m\geq 3$ an integer relatively prime to the residual characteristic of $F$. \begin{itemize} \item[(a)] The inertia group of $F(E[m])/F$ is independent of $m$. \item[(b)] The extension $F(E[m])/F$ is unramified if and only if $E$ has good reduction. \end{itemize} \end{Lem} Now we are ready to prove Theorem \ref{mainab}.
\vspace{0.3cm}\\ \textit{Proof of Theorem \ref{mainab}.\ } Let $K$ be as in Theorem \ref{mainab} and $E$ be an elliptic curve over $K$. We prove that $E$ is modular. By Theorem \ref{main2} and Theorem \ref{Thorne}, we may assume that both $\bar{\rho}_{E,5}$ and $\bar{\rho}_{E,7}$ are reducible; that is, $\bar{\rho}_{E,p}$ for $p=5,7$ factors through a Borel subgroup $B(\mathbb{F}_p)$. Note that $B(\mathbb{F}_5)$ (resp. $B(\mathbb{F}_7)$) is of order $4^2\cdot5$ (resp. $6^2\cdot7$). Let $\mathfrak{p}$ be a prime of $K$ dividing $3$. By Lemma \ref{inertia} (a), the representations of the inertia subgroup $I_\mathfrak{p}$ on $E[5]$ and $E[7]$ factor through the same quotient $I_\mathfrak{p}'$. Since $|I_\mathfrak{p}'|$ divides $\mathrm{gcd}(4^2\cdot5, 6^2\cdot7)=4$, $I_\mathfrak{p}'$ is tame (and so cyclic) of order dividing 4. Subgroups of order 4 in $B(\mathbb{F}_7)$ are not cyclic, and hence $I_\mathfrak{p}'$ is trivial or of order 2. If $E_\mathfrak{p}=E\otimes K_\mathfrak{p}$ has potential good reduction, $E_\mathfrak{p}$ acquires good reduction over at most a quadratic extension of $K_\mathfrak{p}$ by Lemma \ref{inertia} (b). Also, in the case where $E_\mathfrak{p}$ has potential multiplicative reduction, $E_\mathfrak{p}$ becomes isomorphic to a Tate curve after a quadratic extension of $K_\mathfrak{p}$. Summarizing both cases, we see that $E_\mathfrak{p}$ becomes semi-stable after a quadratic base change. This implies that, if $E_\mathfrak{p}$ has bad reduction, a quadratic twist of $E_\mathfrak{p}$ (by a uniformizer of $K_\mathfrak{p}$) is semi-stable. Using the weak approximation theorem, we find $d\in K$ such that the quadratic twist $E^{(d)}$ of $E$ by $d$ is semi-stable at every prime $\mathfrak{p}|3$. Theorem \ref{semi} shows that $E^{(d)}$ is modular. Since the modularity of elliptic curves is invariant under quadratic twists, it follows that $E$ is modular. \hfill $\Box$ \section*{Acknowledgement} The author expresses his deepest gratitude to his advisor Prof.
Takeshi Saito, who gave him many helpful comments that enhanced the arguments in this paper and encouraged him throughout its writing. His thanks also go to Bao Le Hung, who kindly answered the author's question on Skinner-Wiles' theorem.
\section{A Lower Bound for RTLW with short lemmas} \label{sec:smlem} In this section we prove a lower bound showing that learning only short clauses does not help a DLL algorithm for certain hard formulas. The proof system corresponding to DLL algorithms with learning restricted to clauses of length $k$ is, according to \pref{sec:regwrti-dll}, regWRTI with the additional restriction that every used lemma is a clause of length at most $k$. We prove a lower bound for a stronger proof system that allows arbitrary lemmas instead of just input lemmas, drops the regularity restriction, and uses the general weakening rule instead of just w-resolution, i.e., RTLW as defined in \pref{sec:wrtl}. We define RTLW($k$) to be the restriction of RTLW in which every lemma used, i.e., every leaf label that does not occur in the initial formula, is of size at most $k$. The hard example formulas we prove the lower bound for are the well-known Pigeonhole Principle formulas. This principle states that there can be no 1-to-1 mapping from a set of size $n+1$ into a set of size $n$. In propositional logic, the negation of this principle gives rise to an unsatisfiable set of clauses $PHP_n$ in the variables $x_{i,j}$ for $1 \leq i\leq n+1$ and $1 \leq j\leq n$\@. The variable $x_{i,j}$ is intended to state that $i$ is mapped to $j$. The set $PHP_n$ consists of the following clauses: \begin{enumerate}[$\bullet$] \item the \emph{pigeon clause} $P_i = \bigl\{ x_{i,j} \, ;\, 1\leq j \leq n \bigr\}$ for every $1\leq i\leq n+1$. \item the \emph{hole clause} $H_{i,j,k} = \{ \bar{x}_{i,k} , \bar{x}_{j,k} \}$ for every $1\leq i<j \leq n+1$ and $k \leq n$. \end{enumerate} It is well-known that the pigeonhole principle requires exponential size dag-like resolution proofs: Haken \cite{Haken85} shows that every RD refutation of $PHP_n$ is of size $2^{\Omega(n)}$. Note that the number of variables is $O(n^2)$, so that this lower bound is far from maximal. 
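For concreteness, the clause set $PHP_n$ just defined is easy to generate mechanically. The following Python sketch is illustrative only and not part of the proofs; the encoding of a literal as a triple $(i,j,\mathrm{sign})$ is our own choice. It builds the pigeon and hole clauses and checks the clause count $(n+1)+n\binom{n+1}{2}$:

```python
from itertools import combinations

def php_clauses(n):
    """The clause set PHP_n; a literal is a triple (i, j, sign),
    with sign True encoding x_{i,j} and False encoding its negation."""
    clauses = []
    # pigeon clauses P_i: pigeon i is mapped to some hole
    for i in range(1, n + 2):
        clauses.append(frozenset((i, j, True) for j in range(1, n + 1)))
    # hole clauses H_{i,i',k}: pigeons i < i' do not share hole k
    for k in range(1, n + 1):
        for i, i2 in combinations(range(1, n + 2), 2):
            clauses.append(frozenset({(i, k, False), (i2, k, False)}))
    return clauses

# PHP_n consists of (n+1) pigeon clauses and n * binom(n+1, 2) hole clauses
n = 4
assert len(php_clauses(n)) == (n + 1) + n * (n + 1) * n // 2
```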
In fact, Iwama and Miyazaki \cite{iwamiy99} prove a larger lower bound for tree-like refutations. \begin{thm}[Iwama and Miyazaki \cite{iwamiy99}] \label{the:treelb} Every resolution tree refutation of $PHP_n$ is of size at least $(n/4)^{n/4}$. \end{thm} We will show that for $k\leq n/2$, RTLW($k$) refutations of $PHP_n$ require size $2^{\Omega(n\log n)}$, asymptotically the same bound as for resolution trees. On the other hand, it is known \cite{BusPit97} that dag-like resolution proofs need not be much larger than Haken's lower bound: there exist RD refutations of $PHP_n$ of size $2^n\cdot n^2$. These refutations are even regular, and thus can be simulated by regWRTI. Hence $PHP_n$ can be solved in time $2^{O(n)}$ by some variant of \textsc{DLL-L-UP} when learning arbitrarily long clauses, whereas our lower bound shows that any DLL algorithm that learns only clauses of size at most $n/2$ needs time $2^{\Omega(n\log n)}$. In fact, we will prove our lower bound for the weaker \emph{functional} pigeonhole principle $FPHP_n$, which also includes the following clauses: \begin{enumerate}[$\bullet$] \item The functional clause $F_{i,j,k} = \{ \bar{x}_{i,j} , \bar{x}_{i,k} \}$ for every $1\leq i \leq n+1$ and every $1\leq j<k\leq n$. \end{enumerate} While the lower bound of Iwama and Miyazaki is only stated for the clauses $PHP_n$, it is easily verified that their proof works as well when the functional clauses are added to the formula. Our lower bound proof uses the fact that resolution trees with weakening (RTW) are natural, i.e., preserved under restrictions in the following sense: \begin{prop} Let $R$ be an RTW proof of $C$ from $F$ of size $s$, and $\rho$ a restriction. There is an RTW proof $R'$ for $\rest{C}{\rho}$ from $\rest{F}{\rho}$ of size at most $s$. \end{prop} We denote the resolution tree $R'$ by $\rest{R}{\rho}$. Since this proposition is well known, a proof will not be given. Next, we need to bring refutations in RTLW($k$) to a certain normal form.
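The restriction operation $\rest{C}{\rho}$ used above can be sketched operationally; the following Python fragment is a sketch of our own (literals encoded as (variable, sign) pairs, an encoding chosen only for illustration) and is not part of the proofs:

```python
def restrict_clause(clause, rho):
    """Restriction C|rho for a partial assignment rho: var -> bool.
    Returns 1 if some literal is satisfied; otherwise the clause with
    all falsified literals removed (an empty result is the empty clause)."""
    if any(var in rho and rho[var] == sign for (var, sign) in clause):
        return 1
    return frozenset((var, sign) for (var, sign) in clause if var not in rho)

C = frozenset({("a", True), ("b", False)})
assert restrict_clause(C, {"a": True}) == 1                        # satisfied
assert restrict_clause(C, {"a": False}) == {("b", False)}          # literal removed
assert restrict_clause(C, {"a": False, "b": True}) == frozenset()  # empty clause
```

The lower bound argument applies such restrictions (induced by matchings) to whole proof trees, node by node.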
First, we show that it is unnecessary to use clauses as lemmas that are subsumed by axioms in the refuted formula. \begin{lem} \label{lem:subs} If there is an RTLW($k$) refutation of some formula $F$ of size $s$, then there is an RTLW($k$) refutation of $F$ of size at most $2s$ in which no clause $C$ with $C \supseteq D$ for some clause $D$ in $F$ is used as a lemma. \end{lem} \proof If a clause $C$ with $C\supseteq D$ for some $D\in F$ is used as a lemma, replace every leaf labeled $C$ by a weakening inference of $C$ from $D$. \qed Secondly, we need the fact that an RTLW($k$) refutation does not need to use any tautological clauses, i.e., clauses of the form $C \cup \{ x , \bar{x}\}$ for a variable $x$. \begin{lem} \label{lem:taut} If there is an RTLW($k$) refutation of some formula $F$ of size $s$, then there is an RTLW($k$) refutation of $F$ of size at most $s$ that contains no tautological clause. \end{lem} \proof Let $P$ be an RTLW($k$)-refutation of $F$ of size $s$ that contains $t$ occurrences of tautological clauses. We transform $P$ into a refutation $P'$ of size $|P'|\leq s$ such that $P'$ contains fewer than $t$ occurrences of tautological clauses. Finitely many iterations of this process yield the claim. We obtain $P'$ as follows. Since the final clause of~$P$ is not tautological, if $t>0$, there must be a tautological clause $C \cup \{x,\bar{x}\}$ which is resolved with a clause $D\cup \{x\}$ to yield a non-tautological clause $C\cup D\cup \{x\}$. The idea is to cut out the subtree~$T_0$ that derives the clause $C\cup\{x,\bar x\}$, and derive $C\cup D\cup\{x\}$ by a weakening from $D\cup \{ x\}$. This gives a ``proof''~$P_0$ with fewer tautological clauses than~$P$. However, $P_0$~may not be a valid proof, since some of the clauses in~$T_0$ might be used as lemmas in $P_0$. To fix this, we shall extract parts of~$T_0$ and plant them onto~$P_0$ so that all lemmas used are derived.
In order to make this construction precise, we need the notion of trees in which some of the used lemmas are not derived. A \emph{partial RTLW} from~$F$ is defined to be a tree~$T$ which satisfies all the conditions of an RTLW, except that some leaves may be labeled by clauses that occur neither in~$F$ nor earlier in $T$; these are called the \emph{open leaves} of~$T$. We construct $P'$ in stages by defining, for $i\geq 0$, a partial RTLW refutation~$P_i$ of~$F$ and a partial RTLW derivation~$T_i$ of $C\cup\{x,\bar x\}$ from~$F$ with the following properties: \begin{enumerate}[$\bullet$] \item All open leaves in $P_i$ appear in $T_i$. The first open leaf in~$P_i$ is denoted $C_i$. \item All open leaves in $T_i$ appear in $P_i$ before~$C_i$. \item $|P_i| + |T_i| = |P|$. \end{enumerate} $P_0$ and~$T_0$ were defined above and certainly satisfy these properties. Given $P_i$ and~$T_i$, we construct $P_{i+1}$ and~$T_{i+1}$ as follows: We locate the first occurrence of $C_i$ in $T_i$ and let $T^\ast_i$ be the subtree of~$T_i$ rooted at this occurrence. We form $T_{i+1}$ by replacing in~$T_i$ the subtree~$T^\ast_i$ by a leaf labeled~$C_i$. And, we form $P_{i+1}$ by replacing the first open leaf,~$C_i$, in~$P_i$ by the tree~$T^\ast_i$. The invariants are easily seen to be preserved. Obviously, $|P_{i+1}| + |T_{i+1}| = |P_i| + |T_i| = |P|$. The open leaves of~$T^\ast_i$ appear in~$P_i$ before~$C_i$, and therefore, any open leaf in~$P_{i+1}$, and in particular, $C_{i+1}$ if it exists, must occur after the (formerly open leaf) clause~$C_i$. New open leaves in~$T_{i+1}$ are~$C_i$ and possibly some lemmas derived in~$T^\ast_i$, and these all occur in~$P_{i+1}$ before~$C_{i+1}$. Since $P_{i+1}$ contains fewer open leaves than~$P_i$ for every~$i$, there is an~$m$ such that $P_m$ contains no open leaves, and thus is an RTLW refutation. We then discard~$T_m$ and set $P' := P_m$. Each lemma used in~$P'$ was a lemma in~$P$, thus $P'$ is also an RTLW($k$) refutation.
Note that the total number of occurrences of tautological clauses in $P_{i+1}$ and~$T_{i+1}$ combined is the same as in $P_i$ and~$T_i$ combined. This is also equal to the number of tautological clauses in~$P$. Furthermore, $T_m$~must contain at least one tautological clause, namely its root $C\cup\{x,\bar x\}$. It follows that $P^\prime$ has fewer tautological clauses than~$P$. \qed A matching $\rho$ is a set of pairs $\bigl\{ (i_1,j_1) , \ldots , (i_k,j_k) \bigr\} \subset \{1,\ldots,n+1\} \times \{ 1 ,\ldots,n\}$ such that all the $i_\nu$ as well as all the $j_\nu$ are pairwise distinct. The size of $\rho$ is $|\rho| = k$. A matching $\rho$ induces a partial assignment to the variables of $PHP_n$ as follows: \[ \rho(x_{i,j}) = \begin{cases} 1 & \text{if } (i,j) \in \rho \\ 0 & \text{if there is } (i,j') \in \rho \text{ with } j\neq j' \\ & \text{ or } (i',j) \in \rho \text{ with } i\neq i'\\ \text{undefined} & \text{otherwise.} \end{cases} \] We will identify a matching with the assignment it induces. The crucial property of such a matching restriction $\rho$ is that $\rest{FPHP_n}\rho$ is -- up to renaming of variables -- the same as $FPHP_{n-|\rho|}$. The next lemma states that a short clause occurring as a lemma in an RTLW refutation can always be falsified by a small matching restriction. \begin{lem}\label{lem:smallrestr} Let $C$ be a clause of size $k \leq n/2$ such that \begin{enumerate}[$\bullet$] \item $C$ is not tautological, \item $C \not\supseteq H_{i,i',j}$ for any hole clause $H_{i,i',j }$, \item $C \not\supseteq F_{i,j,j'}$ for any functional clause $F_{i,j,j'}$. \end{enumerate} Then there is a matching $\rho$ of size $|\rho| \leq k$ such that $\rest{C}{\rho} = \Box$. \end{lem} \proof First, we let $\rho_1$ consist of all those pairs $(i,j)$ such that the negative literal $\bar{x}_{i,j}$ occurs in $C$. By the second and third assumption, these pairs form a matching.
All the negative literals in $C$ are set to $0$ by $\rho_1$, and by the first assumption, no positive literal in $C$ is set to $1$ by $\rho_1$. Now consider all pigeons $i_1, \ldots , i_r$ mentioned in positive literals in $C$ that are not already set to $0$ by $\rho_1$, i.e., that are not mentioned in any of the negative literals in $C$. Pick distinct $j_1, \ldots , j_r$ from the at least $n/2$ holes not mentioned in $C$, and set $\rho_2 := \bigl\{ (i_1 , j_1) , \ldots , (i_r,j_r) \bigr\}$. This matching sets the remaining positive literals to $0$, thus for $\rho := \rho_1 \cup \rho_2$, we have $\rest{C}{\rho} = \Box$. Clearly the size of~$\rho$ is at most~$k$ since we have picked at most one pair for each literal in~$C$. \qed Finally, we are ready to put all ingredients together to prove our lower bound. \begin{thm} \label{the:wrtlklb} For every $k\leq n/2$, every RTLW($k$)-refutation of $FPHP_n$ is of size $2^{\Omega(n \log n)}$. \end{thm} \proof Let $R$ be an RTLW($k$)-refutation of $FPHP_n$ of size~$s$. By Lemmas \ref{lem:subs} and~\ref{lem:taut}, $R$~can be transformed into~$R'$ of size at most~$2s$ in which no clause is tautological and no clause used as a lemma is subsumed by a clause in $FPHP_n$. Let $C$ be the first clause in~$R'$ which is used as a lemma; $C$~is of size at most~$k$. The subtree~$R_C$ of~$R'$ rooted at~$C$ is a resolution tree for~$C$ from $FPHP_n$. By \pref{lem:smallrestr}, there is a matching restriction~$\rho$ of size $|\rho|\leq k$ such that $\rest{C}{\rho} = \Box$. Then $\rest{R_C}{\rho}$ is a resolution tree with weakening refutation of $\rest{FPHP_n}{\rho}$, which is the same as $FPHP_{n-k}$. By \pref{pro:rtweak}, applications of the weakening rule can be eliminated from $\rest{R_C}\rho$ without increasing the size.
Therefore by \pref{the:treelb}, $R_C$ is of size at least \[ \Bigl(\frac{n-k}{4}\Bigr)^{\frac{n-k}{4}} \geq \Bigl(\frac{n}{8}\Bigr)^{\frac{n}{8}} \] and hence the size of $R$ is at least $$ s \geq \frac12|R_C| \geq 2^{\Omega(n \log n)}.\eqno{\qEd}$$ \section{Preliminaries}\label{sec:prelim} \paragraph{Propositional logic.} Propositional formulas are formed using Boolean connectives $\lnot$, $\land$, and $\lor$. However, this paper works only with formulas in conjunctive normal form, namely formulas that can be expressed as a set of clauses. We write $\overline x$ for the negation of~$x$, and $\overline{\overline{x}}$ denotes~$x$. A {\em literal}~$l$ is defined to be either a variable~$x$ or a negated variable~$\overline x$. A clause~$C$ is a finite set of literals, and is interpreted as being the disjunction of its members. The empty clause is denoted~$\Box$. A {\em unit} clause is a clause containing a single literal. A set~$F$ of clauses is interpreted as the conjunction of its clauses, i.e., a conjunctive normal form formula (CNF). An assignment~$\alpha$ is a (partial) mapping from the set of variables to $\{0,1\}$, where we identify $1$ with {\em True} and $0$ with {\em False}. The assignment~$\alpha$ is implicitly extended to assign values to literals by letting $\alpha(\overline x) = 1-\alpha(x)$, and the domain, $\dom(\alpha)$, of~$\alpha$ is the set of literals assigned values by~$\alpha$. The {\em restriction} of a clause~$C$ under~$\alpha$ is the clause \begin{equation*} \rest{C}{\alpha} = \left\{ \begin{array}{llll} 1 & \text{if there is an } l \in C \text{ with } \alpha(l)=1\\ 0 & \text{if } \alpha(l)=0 \text{ for every } l \in C\\ \set{l \in C}{l \not\in \dom(\alpha)}& \text{otherwise} \end{array} \right.
\end{equation*} The \emph{restriction} of a set $F$ of clauses under~$\alpha$ is \begin{equation*} \rest{F}{\alpha} = \left\{ \begin{array}{llll} 0 & \text{if there is a } C \in F \text{ with } \rest{C}{\alpha}=0\\ 1 & \text{if } \rest{C}{\alpha}=1 \text{ for every } C \in F\\ \set{\rest{C}{\alpha}}{C \in F} \setminus \{1\} & \text{otherwise} \end{array} \right. \end{equation*} If $\rest F \alpha = 1$, then we say $\alpha$ \emph{satisfies}~$F$. An assignment is called {\em total} if it assigns values to all variables. We call two CNFs $F$ and~$F^\prime$ \emph{equivalent} and write $F\equiv F^\prime$ to indicate that $F$ and~$F^\prime$ are satisfied by exactly the same total assignments. Note, however, that $F\equiv F^\prime$ does not always imply that they are satisfied by the same partial assignments. If $\epsilon \in \{0,1\}$ and $x$~is a variable, we define $x^\epsilon$ by letting $x^0$ be~$x$ and $x^1$ be~$\overline{x}$. \paragraph{Resolution.} Suppose that $C_0$ and~$C_1$ are clauses and $x$ is a variable with $x \in C_0$ and $\overline x \in C_1$. Then the {\em resolution rule} can be used to derive the clause $C = (C_0\setminus\penalty10000\{x\})\cup (C_1\setminus \{\overline x\})$. In this case we write $C_0,C_1 \vdash_x C$ or just $C_0,C_1 \vdash C$. A \emph{resolution proof} of a clause~$C$ from a CNF $F$ consists of repeated applications of the resolution rule to derive the clause~$C$ from the clauses of~$F$. If $C = \Box$, then $F$~is unsatisfiable and the proof is called a {\em resolution refutation}. We represent resolution proofs either as graphs or as trees. A {\em resolution dag} (RD) is a dag $G=(V,E)$ with labeled edges and vertices satisfying the following properties. Each node is labeled with a clause and a variable, and, in addition, each edge is labeled with a literal. There must be a single node of out-degree zero, labeled with the conclusion clause. Further, all nodes with in-degree zero are labeled with clauses from the initial set~$F$. 
All other nodes must have in-degree two and are labeled with a variable~$x$ and a clause $C$ such that $C_0,C_1\vdash_x C$ where $C_0$ and~$C_1$ are the labels on the two immediate predecessor nodes and $x\in C_0$ and $\overline x\in C_1$. The edge from $C_0$ to~$C$ is labeled~$\overline x$, and the edge from $C_1$ to~$C$ is labeled~$x$. (The convention that $x\in C_0$ and $\overline x$ is on the edge from~$C_0$ might seem strange, but it allows a more natural formulation of Theorem~\ref{the:regWprops} below.) A resolution dag~$G$ is \emph{$x$-regular} iff every path in~$G$ contains at most one node that is labeled with the variable~$x$. $G$~is \emph{regular} (or a regRD) if $G$ is $x$-regular for every~$x$. We define the {\em size} of a resolution dag~$G=(V,E)$ to be the number $|V|$ of vertices in the dag. $\var(G)$ is the set of variables used as resolution variables in~$G$. Note that if $G$ is a resolution proof rather than a refutation, then $\var(G)$ may not include all the variables that appear in clause labels of~$G$. A {\em resolution tree} (RT) is a resolution dag which is tree-like, i.e., a dag in which every vertex other than the conclusion clause has out-degree one. A regular resolution tree is called a regRT for short. The notion of (p-)simulation is an important tool for comparing the strength of proof systems. If $\mathcal Q$ and~$\mathcal R$ are refutation systems, we say that $\mathcal Q$ {\em simulates}~$\mathcal R$ provided there is a polynomial~$p(n)$ such that, for every $\mathcal R$-refutation of a CNF~$F$ of size~$n$, there is a $\mathcal Q$-refutation of~$F$ of size~$\le p(n)$. If the $\mathcal Q$-refutation can be found by a polynomial time procedure, then this is called a {\em p-simulation}. Two systems that simulate (resp.\ p-simulate) each other are called {\em equivalent} (resp.\ {\em p-equivalent}).
Some basic prior results for simulations of resolution systems include: \begin{thm} \hspace*{1ex} \begin{enumerate}[\em(a)] \setlength{\itemsep}{0pt} \item {\rm \cite{Tseitin68}} Regular tree resolution (regRT) p-simulates tree resolution (RT). \item {\rm \cite{Goerdt1993,Alekhnovich2002}} Regular resolution (regRD) does not simulate resolution (RD). \item {\rm \cite{BEGJ00}} Tree resolution (RT) does not simulate regular resolution (regRD). \end{enumerate} \end{thm} \paragraph{Weakening and w-resolution.} The {\em weakening} rule allows the derivation of any clause $C^\prime \supseteq C$ from a clause~$C$. However, instead of using the weakening rule, we introduce a {\em w-resolution} rule that essentially incorporates weakening into the resolution rule. Given two clauses $C_0$ and~$C_1$, and a variable~$x$, the {\em w-resolution rule} allows one to infer $C = (C_0\setminus\{x\})\cup (C_1\setminus \{\overline x\})$. We denote this condition $C_0, C_1 \vdash^w_x C$. Note that $x\in C_0$ and $\overline x\in C_1$ are not required for the w-resolution inference. We use the notations WRD, regWRD, WRT, and regWRT for the proof systems that correspond to RD, regRD, RT, and regRT (respectively) but with the resolution rule replaced with the w-resolution rule. That is, given a node labeled with $C$, an edge from $C_0$ to $C$ labeled with $\bar x$ and an edge from $C_1$ to $C$ labeled with $x$, we have $C = (C_0\setminus\{x\})\cup (C_1\setminus \{\overline x\})$. Similarly, we use the notations RDW and RTW for the proof systems that correspond to RD and RT, but with the general weakening rule added. In an application of the weakening rule, the edge connecting a clause $C^\prime \supseteq C$ with its single predecessor $C$ does not bear any label. 
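To make the rule concrete, w-resolution simply computes $C=(C_0\setminus\{x\})\cup(C_1\setminus\{\overline x\})$ with no occurrence requirement on $x$. The following Python sketch (with a literal encoding of our own choosing, for illustration only) shows that it coincides with ordinary resolution when $x\in C_0$ and $\overline x\in C_1$:

```python
def w_resolve(c0, c1, x):
    """The w-resolution rule: (C0 minus {x}) union (C1 minus {not-x}).
    Literals are (variable, sign) pairs; x need not occur in C0,
    nor its negation in C1."""
    return (c0 - {(x, True)}) | (c1 - {(x, False)})

c0 = frozenset({("x", True), ("y", False)})
c1 = frozenset({("x", False), ("z", True)})

# with x in C0 and not-x in C1 this is ordinary resolution
assert w_resolve(c0, c1, "x") == {("y", False), ("z", True)}

# the rule still applies when the variable is absent from a premise;
# the premise is then simply absorbed into the conclusion
assert w_resolve(c0, frozenset({("z", True)}), "x") == {("y", False), ("z", True)}
```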
The resolution and weakening rules can certainly p-simulate the w-resolution rule, since a use of the w-resolution rule can be replaced by weakening inferences that derive $C_0\cup\{x\}$ from $C_0$ and $C_1\cup\{\overline x\}$ from $C_1$, and then a resolution inference that derives~$C$. The converse is not true, since w-resolution cannot completely simulate weakening; this is because w-resolution cannot introduce completely new variables that do not occur in the input clauses. By the well-known subsumption principle, however, weakening cannot increase the strength of resolution, and the same reasoning applies to w-resolution; namely, we have: \begin{prop}\label{pro:subsumeweak} Let $R$ be a WRD proof of~$C$ from~$F$ of size~$n$. Then there is an RD proof~$S$ of~$C^\prime$ from~$F$ of size $\le n$ for some $C^\prime\subseteq C$. Furthermore, if $R$ is regular, so is~$S$, and if $R$ is a tree, so is~$S$. \end{prop} \proof The proof of the proposition is straightforward. Writing $R$ as a sequence $C_0, C_1, \ldots, C_n = C$, define clauses $C_i^\prime \subseteq C_i$ by induction on~$i$ so that the new clauses form the desired proof~$S$. For $C_i\in F$, let $C^\prime_i =C_i$. Otherwise $C_i$~is inferred by w-resolution from $C_j$ and~$C_k$ w.r.t.\ a variable~$x$. If $x\in C_j^\prime$ and $\overline x \in C_k^\prime$, let $C_i^\prime$ be the resolvent of $C_j^\prime$ and~$C_k^\prime$ as obtained by the usual resolution rule; if not, then let $C^\prime_i$ be $C^\prime_j$ if $x\notin C^\prime_j$, or~$C^\prime_k$ if $\overline x \notin C^\prime_k$. It is easy to check that each $C_i^\prime \subseteq C_i$ and that, after removing duplicate clauses, the clauses~$C^\prime_j$ form a valid resolution proof~$S$. If $R$~is regular, then so is~$S$, and if $R$~is a tree so is~$S$. \qed Essentially the same proof shows the same property for the system with the full weakening rule: \begin{prop}\label{pro:rtweak} Let $R$ be an RDW proof of~$C$ from~$F$ of size~$s$. 
Then there is an RD proof~$S$ of~$C^\prime$ from~$F$ of size $\le s$ for some $C^\prime\subseteq C$. Furthermore, if $R$ is regular, so is~$S$, and if $R$ is a tree, so is~$S$. \end{prop} There are several reasons why we prefer to work with w-resolution, rather than with the weakening rule. First, we find it to be an elegant way to combine weakening with resolution. Second, it works well for using resolution trees (with input lemmas, see the next section) to simulate DLL search algorithms. Third, since weakening and resolution together are stronger than w-resolution, w-resolution is a more refined restriction on resolution. Fourth, for regular resolution, using w-resolution instead of general weakening can be a quite restrictive condition, since any w-resolution inference $C_0, C_1 \wres_x C$ ``uses up'' the variable~$x$, making it unavailable for other resolution inferences on the same path, even if the variable does not occur at all in $C_0$ and~$C_1$. The last two reasons mean that w-resolution can be rather weak; this strengthens our results below (Theorems \ref{the:regWRTIforLearnables} and~\ref{the:rWRTIsimDLL}) about the existence of regular proofs that use w-resolution. The following simple theorem gives some useful properties for regular w-resolution. \begin{thm}\label{the:regWprops} Let $G$ be a regular w-resolution refutation. Let $C$ be a clause in~$G$. \begin{enumerate}[\em(a)] \item Suppose that $C$~is derived from $C_0$ and~$C_1$ with the edge from~$C_0$ (resp.~$C_1$) to~$C$ labeled with~$\overline x$ (resp.~$x$). Then $\overline x\notin C_0$, and $x\notin C_1$. \item Let $\alpha$ be an assignment such that for every literal~$l$ labeling an edge on the path from~$C$ to the final clause, $\alpha(l) = True$. Then $\rest C \alpha = 0$. \end{enumerate} \end{thm} \proof The proof of part a.~is based on the observation that if $\overline x \in C_0$, then also $\overline x \in C$. 
However, by the regularity of the resolution refutation, every clause on the path from~$C$ to the final clause~$\Box$ must contain~$\overline x$. But clearly $\overline x\notin\Box$. Part b.~is a well-known fact for regular resolution proofs. It holds for similar reasons for regular w-resolution proofs: the proof proceeds by induction on clauses in the proof, starting at the final clause~$\Box$ and moving up towards the leaves. Part~a.\ makes the induction step trivial. \qed \paragraph{Directed acyclic graphs.} We define some basic concepts that will be useful for analyzing both resolution proofs and conflict graphs (which are defined below in \pref{sec:dll-up}). Let $G=(V,E)$ be a dag. The set of leaves (nodes in~$V$ of in-degree~0) of~$G$ is denoted $\leafs{G}$. The {\em depth} of a node~$u$ in~$V$ is defined to equal the maximum number of edges on any path from a leaf of~$G$ to the node~$u$. Hence leaves have depth~$0$. The subgraph rooted at~$u$ in~$G$ is denoted $\graph u G$; its nodes are the nodes~$v$ for which there is a path from $v$ to~$u$ in~$G$, and its edges are the induced edges of~$G$. \section{DLL algorithms with clause learning} \label{sec:dll-up} \subsection{The basic DLL algorithm} \label{sec:basic_dll} The DLL proof search algorithm is named after the authors Davis, Logemann and Loveland of the paper where it was introduced~\cite{DavisLogemann1962}. Since they built on the work of Davis and Putnam~\cite{DavisPutnam1960}, the algorithm is sometimes called the DPLL algorithm. There are several variations on the DLL algorithm, but the basic algorithm is shown in \pref{fig:dll}. The input is a set~$F$ of clauses and a partial assignment~$\alpha$. The assignment~$\alpha$ is a set of ordered pairs $(x,\epsilon)$, where $\epsilon\in\{0,1\}$, indicating that $\alpha(x)=\epsilon$. The DLL algorithm is implemented as a recursive procedure and returns either \texttt{UNSAT} if $F$ is unsatisfiable or otherwise a satisfying assignment for~$F$. 
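As a concrete illustration of the schema, the recursion of \pref{fig:dll} can be transcribed into runnable Python. This is a hedged sketch: the encoding of clauses as frozensets of integer literals and of assignments as dictionaries is our own, and the free choice on line~5 of the schema is resolved here by always branching on the smallest unset variable, setting it true first.

```python
# A direct transcription of the DLL schema (sketch).
# Clauses are frozensets of nonzero integers (-v = negated variable v);
# an assignment is a dict {variable: 0 or 1}. The branching choice, left
# open by the schema, is fixed here as: smallest unset variable, value 1.

def restrict(clause, alpha):
    """Return 1 if alpha satisfies the clause, else the residual clause."""
    out = set()
    for lit in clause:
        v, want = abs(lit), (1 if lit > 0 else 0)
        if v in alpha:
            if alpha[v] == want:
                return 1          # clause satisfied
        else:
            out.add(lit)
    return frozenset(out)         # frozenset() is the falsified clause

def dll(F, alpha):
    residuals = [restrict(c, alpha) for c in F]
    if any(r == frozenset() for r in residuals):   # F|alpha = 0
        return "UNSAT"
    if all(r == 1 for r in residuals):             # F|alpha = 1
        return dict(alpha)
    free = {abs(l) for r in residuals if r != 1 for l in r}
    x, eps = min(free), 1                          # the schema's free choice
    beta = dll(F, {**alpha, x: eps})
    if beta != "UNSAT":
        return beta
    return dll(F, {**alpha, x: 1 - eps})

# (x or y) and (not x or y) and (not y) is unsatisfiable:
F = [frozenset({1, 2}), frozenset({-1, 2}), frozenset({-2})]
assert dll(F, {}) == "UNSAT"
# Dropping the last clause makes it satisfiable:
sat = dll(F[:2], {})
assert sat != "UNSAT" and sat[2] == 1
```

As in the schema, the recursion tree of an \texttt{UNSAT} run corresponds to a regular resolution tree in the sense of the propositions that follow.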
\begin{figure}[htbp] \begin{center} \begin{minipage}{1.0\linewidth} \tt \small \begin{tabbing} 123\=123455\=12345\=12345\=12345\=12345\=12345\=12345 \kill \>{\sc DLL}($F,\alpha$)\\ \>1\>if $\rest{F}{\alpha} = 0$ then\\ \>2\>\>return UNSAT\\ \>3\>if $\rest{F}{\alpha} = 1$ then\\ \>4\>\>return $\alpha$ \\ \>5\>choose $x \in \var(\rest{F}{\alpha})$ and $\epsilon\in\{0,1\}$ \\ \>6\>$\beta \leftarrow${\sc DLL}($F,\alpha \cup \{(x,\epsilon)\}$)\\ \>7\>if $\beta \neq$ UNSAT then\\ \>8\>\>return $\beta$\\ \>9\>else\\ \>10\>\>return {\sc DLL}($F,\alpha \cup \{(x,1-\epsilon)\}$) \end{tabbing} \end{minipage} \caption{The basic DLL algorithm.} \label{fig:dll} \end{center} \end{figure} Note that the DLL algorithm is not fully specified, since line~5 does not specify how to choose the branching variable~$x$ and its value~$\epsilon$. Rather, one can think of the algorithm either as being nondeterministic or as being an algorithm schema. We prefer to think of the algorithm as an algorithm schema, so that it incorporates a variety of possible algorithms. Indeed, there has been extensive research into how to choose the branching variable and its value \cite{Freeman1995,Nadel2002}. There is a well-known close connection between regular resolution and DLL algorithms. In particular, a run of DLL can be viewed as a regular resolution tree, and vice-versa. This can be formalized by the following two propositions. \begin{prop} \label{pro:DLL_RT2} Let $F$ be an unsatisfiable set of clauses and $\alpha$~an assignment. If there is an execution of \hbox{\rm{\sc DLL}($F,\alpha$)} that returns \texttt{UNSAT} and performs $s$ recursive calls, then there exists a clause $C$ with $\rest{C}{\alpha} = 0$ such that $C$~has a regular resolution tree~$T$ from~$F$ with $|T| \leq s+1$ and $\var(T) \cap \dom(\alpha) = \varnothing$. \end{prop} The converse simulation of \pref{pro:DLL_RT2} holds, too, that is, a regular resolution tree can be transformed directly into a run of \textsc{DLL}. 
\begin{prop} \label{pro:DLL_RT1} Let $F$ be an unsatisfiable set of clauses. Suppose that $C$~has a regular resolution proof tree~$T$ of size~$s$ from~$F$. Let $\alpha$ be an assignment with $\rest{C}{\alpha} = 0$ and $\var(T) \cap \dom(\alpha) = \varnothing$. Then there is an execution of \hbox{\rm{\sc DLL}($F,\alpha$)} that returns \texttt{UNSAT} after at most $s-1$ recursive calls. \end{prop} The two propositions are based on the following correspondence between resolution trees and a DLL search tree: first, a leaf clause in a resolution tree corresponds to a clause falsified by~$\alpha$ (so that $\rest F \alpha = 0$), and second, a resolution inference with respect to a variable~$x$ corresponds to the use of $x$~as a branching variable in the DLL algorithm. Together, the two propositions give the following well-known exact correspondence between regular resolution trees and DLL search. \begin{thm}\label{the:DLL_RT} If $F$ is unsatisfiable, then there is an execution of \hbox{\rm {\sc DLL}($F,\varnothing$)} that executes with $< s$ recursive calls if and only if there exists a regular refutation tree for~$F$ of size $\le s$. \end{thm} \subsection{Learning by unit propagation} Two of the most successful enhancements of DLL that are used by most modern SAT solvers are unit propagation and clause learning. \emph{Unit clause propagation} (also called Boolean constraint propagation) was already part of the original DLL algorithm and is based on the following observation: If $\alpha$ is a partial assignment for a set of clauses~$F$ and if there is a clause $C\in F$ with $\rest C \alpha = \{ l \}$ a unit clause, then any $\beta\supset \alpha$ that satisfies~$F$ must assign $l$ the value {\em True}. There are a couple of methods that the DLL algorithm can use to implement unit propagation. 
One method is to just use unit propagation to guide the choice of a branching variable by modifying line~5 so that, if there is a unit clause in~$\rest F \alpha$, then $x$ and~$\epsilon$ are chosen to make the literal true. More commonly, though, DLL algorithms incorporate unit propagation as a separate phase during which the assignment~$\alpha$ is iteratively extended to make any unit clause true until there are no unit clauses remaining. As the unit propagation is performed, the DLL algorithm keeps track of which variables were set by unit propagation and which clause was used as the basis for the unit propagation. This information is then useful for clause learning. \emph{Clause learning} in DLL algorithms was first introduced by Silva and Sakallah~\cite{SilvaSakallah1996} and means that new clauses are effectively added to~$F$. A learned clause~$D$ must be implied by~$F$, so that adding $D$ to~$F$ does not change the space of satisfying assignments. In theory, there are many potential methods for clause learning; however, in practice, the only useful method for learning clauses is based on unit propagation as in the original proposal \cite{SilvaSakallah1996}. In fact, all deterministic state-of-the-art SAT solvers for structured (non-random) instances of SAT are based on clause learning via unit propagation. This includes solvers such as Chaff~\cite{MoskewiczMalik2001}, Zchaff~\cite{MahajanFu2004} and MiniSAT~\cite{EenBiere2005}. These DLL algorithms apply clause learning when the set~$F$ is falsified by the current assignment~$\alpha$. Intuitively, they analyze the {\em reason} some clause~$C$ in~$F$ is falsified and use this reason to infer a clause~$D$ from~$F$ to be learned. There are two ways in which a DLL algorithm assigns values to variables, namely, by unit propagation and by setting a branching variable. However, if unit propagation is fully carried out, then the first time a clause is falsified is during unit propagation. 
In particular, this happens when there are two unit clauses $\rest{C_1}\alpha = \{x \}$ and $\rest{C_2}\alpha = \{ \overline x \}$ requiring a variable~$x$ to be set both {\em True} and {\em False}. This is called a {\em conflict}. The reason for a conflict is analyzed by building a conflict graph. Generally, this is done by maintaining a \emph{unit propagation graph} that tracks, for each variable which has been assigned a value, the reason that implies the setting of the variable. The two possible reasons are that either (a)~the variable was set by unit propagation when a particular clause~$C$ became a unit clause, in which case $C$~is the reason, or (b)~the variable was set arbitrarily as a branching variable. The unit propagation graph~$G$ has literals as its nodes. The leaves of~$G$ are literals that were set true as branching variables, and the internal nodes are literals that were set true by unit propagation. If a literal~$l$ is an internal node in~$G$, then it was set true by unit propagation applied to some clause~$C$. In this case, for each literal~${l^\prime}\not= l$ in~$C$, $\overline{l^\prime}$~is a node in~$G$ and there is an edge from $\overline{ l^\prime }$ to~$l$. If the unit propagation graph contains a conflict, it is called a \emph{conflict graph}. More formally, a conflict graph is defined as follows. \begin{defi} A {\em conflict graph}~$G$ for a set~$F$ of clauses under the assignment~$\alpha$ is a dag $G=(V\cup\{\Box\},E)$ where $V$ is a set of literals and where the following hold: \begin{enumerate}[(a)] \setlength{\itemsep}{1pt} \item For each $l\in V$, either (i)~$l$ has in-degree~0 and $\alpha(l)=1$, or (ii)~there is a clause~$C\in F$ such that $C = \{l\} \cup \{ l^\prime : (\overline {l^\prime},l)\in E\}$. For a fixed conflict graph~$G$, we denote this clause as~$C_l$. \item There is a unique variable~$x$ such that $V\supseteq\{x,\overline x\}$. \item The node~$\Box$ has only the two incoming edges $(x,\Box)$ and $(\overline x,\Box)$. 
\item The node $\Box$ is the only node with out-degree zero. \end{enumerate} \end{defi} Let $\leafs{G}$ denote the nodes in~$G$ of in-degree zero. Then, letting $\alpha_G = \{ (x,\epsilon) : x^\epsilon \in \leafs G \}$, the conflict graph~$G$ shows that every vertex~$l$ must be made true by any satisfying assignment for~$F$ that extends~$\alpha$. Since for some~$x$, both $x$ and $\overline x$ are nodes of~$G$, this implies $\alpha$ cannot be extended to a satisfying assignment for~$F$. Therefore, the clause $D = \{ \overline l : l\in\leafs G \}$ is implied by~$F$, and $D$~can be taken as a learned clause. We call this clause~$D$ the {\em conflict clause} of~$G$ and denote it $\Cc G$. There is a second type of clause that can be learned from the conflict graph~$G$ in addition to the conflict clause~$\Cc G$. Namely, let $l\not = \Box$ be any non-leaf node in~$G$. Further, let $\leafs { \graph l G }$ be the set of leaves~$l^\prime$ of~$G$ such that there is a path from~$l^\prime$ to~$l$. Then, the clauses in~$F$ imply that if all the leaves~$l^\prime \in \leafs{\graph l G}$ are assigned true, then $l$~is assigned true. Thus, the clause $D = \{ l \} \cup \{ \overline {l^\prime} : l^\prime \in \leafs{\graph l G} \}$ is implied by~$F$ and can be taken as a learned clause. This clause~$D$ is called the {\em induced clause} of~$\graph l G$ and is denoted $\ic l G$. In the degenerate case where $\graph l G$ consists of only the single literal~$l$, this would make $\ic l G$ equal to $\{ l, \overline l \}$; rather than permit this as a clause, we instead say that the induced clause does not exist. In practice, both conflict clauses $\Cc G$ and induced clauses~$\ic l G$ are used by SAT solvers. It appears that most SAT solvers learn the \emph{first-UIP} clauses~\cite{SilvaSakallah1996}, which equal $\Cc{G}$ and $\ic l {G^\prime}$ for appropriately formulated~$G$ and~$G^\prime$. 
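Both kinds of learnable clauses can be computed mechanically from a conflict graph. The following Python sketch is our own illustration (the encoding of literals as nonzero integers and of the graph as a predecessor map is an assumption, not part of the text); it computes $\Cc G$ as the negations of the leaves of~$G$, and $\ic l G$ from the leaves of the subgraph rooted at~$l$:

```python
# Sketch: computing the conflict clause Cc(G) and an induced clause ic_l(G)
# from a conflict graph. G maps each literal (nonzero int, -v = negation)
# to the list of its immediate predecessors; leaves map to []. The name
# "box" stands for the conflict node. The encoding is our own convention.

def leaves_below(G, node):
    """Leaves l' of G such that there is a path from l' to the given node."""
    seen, stack, out = set(), [node], set()
    while stack:
        u = stack.pop()
        if u in seen:
            continue
        seen.add(u)
        preds = G[u]
        if not preds:
            out.add(u)
        stack.extend(preds)
    return out

def conflict_clause(G):
    """Cc(G) = { complement of l : l a leaf of G }."""
    return frozenset(-l for l in leaves_below(G, "box"))

def induced_clause(G, l):
    """ic_l(G) = {l} union { complement of l' : l' a leaf below l }."""
    assert G[l], "l must be a non-leaf node"
    return frozenset({l}) | frozenset(-p for p in leaves_below(G, l))

# Tiny example: branching sets a and b true; clause {-a,-b,c} propagates c,
# clause {-c,d} propagates d, and clauses {-a,-d,e}, {-b,-e} then derive
# both e and -e, producing the conflict.
G = {1: [], 2: [],            # leaves a, b (branching literals)
     3: [1, 2],               # c, with C_c = {-a, -b, c}
     4: [3],                  # d, with C_d = {-c, d}
     5: [1, 4],               # e, with C_e = {-a, -d, e}
     -5: [2],                 # -e, with C_{-e} = {-b, -e}
     "box": [5, -5]}
assert conflict_clause(G) == frozenset({-1, -2})
assert induced_clause(G, 4) == frozenset({4, -1, -2})
```

In the example, the conflict clause $\{\overline a, \overline b\}$ forbids the branching assignment that led to the conflict, while the induced clause for~$d$ records that $a$ and~$b$ force~$d$.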
Other conflict clauses that can be learned include \emph{all-UIP} clauses~\cite{ZhangMadigan2001}, \emph{rel-sat} clauses~\cite{BayardoSchrag:CSPlookback}, \emph{decision} clauses~\cite{ZhangMadigan2001}, and \emph{first cut} clauses~\cite{BeameKautzSabharwal2004}. All of these are conflict clauses $\Cc{G}$ for appropriate~$G$. Less commonly, multiple clauses are learned, including clauses based on the cuts advocated by the mentioned works \cite{SilvaSakallah1996,ZhangMadigan2001}, which are a type of induced clauses. In order to prove the correspondence in \pref{sec:regwrti-dll} between DLL with clause learning and regWRTI proofs, we must put some restrictions on the kinds of clauses that can be (simultaneously) learned. In essence, the point is that for DLL with clause learning to simulate regWRTI proofs it is necessary to learn multiple clauses at once in order to learn all the clauses in a regular input subproof. But on the other hand, for regWRTI to simulate DLL with clause learning, regWRTI must be able to include regular input proofs that derive all the learned clauses so as to have them available for subsequent use as input lemmas. Thus, we define a notion of ``compatible clauses'' which is a set of clauses that can be simultaneously learned. For this, we define the notion of a series-parallel decomposition of a conflict graph~$G$. \begin{defi} A graph~$H=(W,E^\prime)$ is a {\em subconflict graph} of the conflict graph~$G=(V,E)$ provided that $H$~is a conflict graph with $W\subseteq V$ and $E^\prime\subseteq E$, and that each non-leaf vertex of~$H$ (that is, each vertex in $W\setminus \leafs H$) has the same in-degree in~$H$ as in~$G$. $H$~is a {\em proper} subconflict graph of~$G$ provided there is no path in~$G$ from any non-leaf vertex of~$H$ to a vertex in~$\leafs H$. \end{defi} Note that if $l$~is a non-leaf vertex in the subconflict graph~$H$ of~$G$, then the clause $C_l$ is the same whether it is defined with respect to~$H$ or with respect to~$G$. 
\begin{defi} Let $G$~be a conflict graph. A {\em decomposition} of~$G$ is a sequence $H_0\subset H_1\subset \cdots\subset H_k$, $k\ge 1$, of distinct proper subconflict graphs of~$G$ such that $H_k=G$ and $H_0$~is the dag on the three nodes~$\Box$ and its two predecessors $x$ and~$\overline x$. \end{defi} A decomposition of~$G$ will be used to describe sets of clauses that can be simultaneously learned. For this, we put a structure on the decomposition that describes the exact types of clauses that can be learned: \begin{defi} A {\em series-parallel decomposition}~$\mathcal H$ of~$G$ consists of a decomposition $H_0,\ldots,H_k$ plus, for each $0\le i<k$, a sequence $H_i=H_{i,0}\subset H_{i,1}\subset \cdots \subset H_{i,m_i}=H_{i+1}$ of proper subconflict graphs of~$G$. Note that the sequence \[ H_0=H_{0,0}, H_{0,1}, H_{0,2},\ldots, H_{0,m_0}=H_1=H_{1,0}, H_{1,1}, \ldots, H_{k-1,m_{k-1}} = H_k \] is itself a decomposition of~$G$. However, we prefer to view it as a two-level decomposition. A {\em series} decomposition is a series-parallel decomposition with trivial parallel part, i.e., with $k=1$. A {\em parallel} decomposition is a series-parallel decomposition in which $m_i=1$ for all~$i$. Note that we always have $H_i\not= H_{i+1}$ and $H_{i,j}\not=H_{i,j+1}$. \end{defi} Figure~\ref{seriesparallelFig} illustrates a series-parallel decomposition. \begin{defi} For $\mathcal H$ a series-parallel decomposition, the set of {\em learnable clauses}, $\Cc {\mathcal H}$, for~$\mathcal H$ consists of the following induced clauses and conflict clauses: \begin{enumerate}[$\bullet$] \item For each $1\le j \le m_0$, the conflict clause $\Cc {H_{0,j}}$, and \item For each $0<i<k$ and $0<j\le{m_i}$ and each $l \in \leafs{H_i} \setminus \leafs{H_{i,j}}$, the induced clause $\ic l {H_{i,j}}$. 
\end{enumerate} \end{defi} \begin{figure}[t] \psset{unit=0.04cm} \begin{center} \begin{pspicture}(-50,0)(175,200) \cnodeput(0,0){box}{\makebox(0, 6.6){$\Box$}} \cnodeput(20,20){abar}{\makebox(0, 6.6){$\overline a$}} \cnodeput(-20,20){a}{\makebox(0, 6.6){$a$}} \ncline{a}{box} \ncline{abar}{box} \cnodeput(20,40){c}{\makebox(0, 6.6){$c$}} \cnodeput(-20,50){b}{\makebox(0, 6.6){$b$}} \cnodeput(20,60){d}{\makebox(0, 6.6){$d$}} \cnodeput(0,80){e}{\makebox(0, 6.6){$e$}} \ncline{c}{abar} \ncline{b}{a} \ncline{b}{abar} \ncline{d}{c} \ncline{e}{b} \ncline{e}{d} \cnodeput(-20,105){f}{\makebox(0, 6.6){$f$}} \cnodeput(20,105){g}{\makebox(0, 6.6){$g$}} \cnodeput(-25,130){h}{\makebox(0, 6.6){$h$}} \cnodeput(15,130){i}{\makebox(0, 6.6){$i$}} \ncline{f}{e} \ncline{g}{e} \ncline{h}{f} \ncline{i}{f} \ncline{i}{g} \cnodeput(-20,160){j}{\makebox(0, 6.6){$j$}} \cnodeput(20,160){k}{\makebox(0, 6.6){$k$}} \cnodeput(-5,180){ell}{\makebox(0, 6.6){$\ell$}} \cnodeput(25,180){m}{\makebox(0, 6.6){$m$}} \ncline{j}{h} \ncline{j}{i} \ncline{k}{i} \ncline{ell}{j} \ncline{ell}{k} \ncline{m}{k} \psline[linewidth=1.5pt](-30,30)(30,30) \psline[linewidth=1.5pt](-30,92)(30,92) \psline[linewidth=1.5pt](-30,147)(30,147) \psline[linewidth=1.5pt](-30,195)(30,195) \pscurve[linewidth=1.5pt,linestyle=dotted](-30,62)(-25,62)(20,50)(30,50) \pscurve[linewidth=1.5pt,linestyle=dotted](-30,67)(-25,67)(25,72)(30,72) \psline[linewidth=1.5pt,linestyle=dotted](-30,115)(30,115) \pscurve[linewidth=1.5pt,linestyle=dotted]% (-30,120)(-25,120)(10,138)(20,140)(30,140) \rput[l](33,30){$H_0 = H_{0,0}$} \rput[l](33,50){$H_{0,1}$} \rput[l](33,72){$H_{0,2}$} \rput[l](33,92){$H_1=H_{0,3}=H_{1,0}$} \rput[l](33,115){$H_{1,1}$} \rput[l](33,137){$H_{1,2}$} \rput[l](33,147){$H_2=H_{1,3}=H_{2,0}$} \rput[l](33,194){$H_3=H_{2,1}$} \rput[c](150,200){\underline{Learnable clauses}} \rput[c](150,188){$\{ \overline\ell, h \}$ } \rput[c](150,175){$\{ \overline\ell, \overline m, i \}$ } \rput[c](150,150){$\{ \overline h, \overline i, e \}$ } 
\rput[c](150,138){$\{ \overline f, \overline i, e \}$ } \rput[c](150,115){$\{ \overline f, \overline g, e \}$ } \rput[c](150,92){$\{ \overline e \}$ } \rput[c](150,72){$\{ \overline b, \overline d \}$ } \rput[c](150,50){$\{ \overline b, \overline c \}$ } \end{pspicture} \end{center} \caption{A series-parallel decomposition. Solid lines define the sets~$H_i$ of the parallel part of the decomposition, and dotted lines define the sets $H_{i,j}$ in the series part. Each line (solid or dotted) defines the set of nodes that lie below the line. The learnable clauses associated with each set are shown in the right column. } \label{seriesparallelFig} \end{figure} It should be noted that the definition of the parallel decomposition incorporates the notion of ``cut'' used by Silva and Sakallah~\cite{SilvaSakallah1996}. The DLL algorithm shown in \pref{fig:dll-l-up} chooses a single series-parallel decomposition~$\mathcal H$ and learns some subset of the learnable clauses in~$\Cc {\mathcal H}$. It is clear that this generalizes all of the clause learning algorithms mentioned above. The algorithm schema \textsc{DLL-L-UP} that is given in \pref{fig:dll-l-up} is a modification of the schema \textsc{DLL}. In addition to returning a satisfying assignment or \texttt{UNSAT}, it returns a modified formula that might include learned clauses. If $F$ is a set of clauses and $\alpha$~is an assignment then \textsc{DLL-L-UP}($F,\,\alpha$) returns $(F',\alpha')$ such that $F^\prime \supseteq F$ and $F^\prime$ is equivalent to~$F$ and such that $\alpha^\prime$~either is \texttt{UNSAT} or is a satisfying assignment for~$F$.\footnote{ Our definition of \textsc{DLL-L-UP} is slightly different from the version of the algorithm as originally defined in Hoffmann's thesis \cite{Hoffmann2007}. The first main difference is that we use series-parallel decompositions rather than the compatible set of subconflict graphs of Hoffmann~\cite{Hoffmann2007}. 
The second difference is that our algorithm does not build the implication graph incrementally by the use of explicit unit propagation; instead, it builds the implication graph once a conflict has been found.} \begin{figure}[htbp] \begin{center} \begin{minipage}{1.0\linewidth} \tt \small \begin{tabbing} 123\=123455\=12345\=12345\=12345\=12345\=12345\=12345\=12345\=12345\=12345\=12345 \kill \>{\sc DLL-L-UP}($F,\alpha$)\\ \>1\>if $\rest{F}{\alpha} = 1$ then return ($F,\alpha$) \\ \>2\>if there is a conflict graph for~$F$ under~$\alpha$ then \\ \>3\>\>choose a conflict graph~$G$ for~$F$ under~$\alpha$ \\ \>4\>\>\>and a series-parallel decomposition~$\mathcal H$ of~$G$ \\ \>5\>\>choose a subset $S$ of $\Cc{{\mathcal H}}$ ~~ -- the learned clauses \\ \>6\>\>return ($F\cup S$, UNSAT) \\ \>7\>choose $x \in \var(\rest{F}{\alpha})$ and $\epsilon \in \{0,1\}$\\ \>8\>($G,\beta$)$\leftarrow${\sc DLL-L-UP}($F,\alpha \cup \{(x,\epsilon)\}$)\\ \>9\>if $\beta \neq $ UNSAT then \\ \>10\>\>return ($G,\beta$)\\ \>11\>return {\sc DLL-L-UP}($G,\alpha \cup \{(x,1-\epsilon)\})$ \end{tabbing} \end{minipage} \caption{DLL with Clause Learning.} \label{fig:dll-l-up} \end{center} \end{figure} The \textsc{DLL-L-UP} algorithm as shown in \pref{fig:dll-l-up} does not explicitly include unit propagation. Rather, the use of unit propagation is hidden in the test on line~2 of whether unit propagation can be used to find a conflict graph. In practice, of course, most algorithms set variables by unit propagation as soon as possible and update the implication graph each time a new unit variable is set. The algorithm as formulated in \pref{fig:dll-l-up} is more general, and thus covers more possible implementations of \textsc{DLL-L-UP}, including algorithms that may change the implication graph retroactively or may pick among several conflict graphs depending on the details of how $F$~can be falsified. There is at least one implemented clause learning algorithm that does this \cite{FMM:SAT04zChaff}. 
As shown in \pref{fig:dll-l-up}, if $\rest F \alpha$ is false, then the algorithm must return \texttt{UNSAT} (lines 2-6). Sometimes, however, we use instead a ``non-greedy'' version of \textsc{DLL-L-UP}. For the non-greedy version it is optional for the algorithm to immediately return \texttt{UNSAT} once $F$ has a conflict graph. Thus the non-greedy \textsc{DLL-L-UP} algorithm can set a branching variable (lines 7-11) even if $F$~has already been falsified and even if there are unit clauses present. This non-greedy version of \textsc{DLL-L-UP} will be used in the next section to simulate regWRTI proofs. The constructions of Section~\ref{sec:regwrti-dll} also imply that \textsc{DLL-L-UP} is p-equivalent to the restriction of \textsc{DLL-L-UP} in which only series decompositions are allowed. That is to say, \textsc{DLL-L-UP} with only series decompositions can simulate any run of \textsc{DLL-L-UP} with at most polynomially many more recursive calls. \section{w-resolution trees with lemmas}\label{sec:wrtl} This section first gives an alternate characterization of resolution dags by using \emph{resolution trees with lemmas}. We then refine the notion of lemmas to allow only \emph{input lemmas}. For non-regular derivations, resolution trees with lemmas and resolution trees with input lemmas are both proved below to be p-equivalent to resolution. However, for regular proofs, the notions are apparently different. (In fact we give an exponential separation between regular resolution and regular w-resolution trees with input lemmas.) Later in the paper we will give a tight correspondence between resolution trees with input lemmas and DLL search algorithms. The intuition for the definition of a resolution tree with lemmas is to allow any clause proved earlier in the resolution tree to be reused as a leaf clause. 
More formally, assume we are given a resolution proof tree~$T$, and further assume~$T$ is {\em ordered} in that each internal node has a left child and a right child. We define $ <_T $ to be the post-ordering of~$T$, namely, the linear ordering of the nodes of~$T$ such that if $u$~is a node in~$T$ and $v$~is in the subtree rooted at $u$'s left child, and $w$~is in the subtree rooted at $u$'s right child, then $v <_T w <_T u$. For $F$ a set of clauses, a {\em resolution tree with lemmas} (RTL) proof from~$F$ is an ordered binary tree such that (1)~each leaf node~$v$ is labeled with either a member of~$F$ or with a clause that labels some node $u <_T v$, and (2)~each internal node~$v$ is labeled with a variable~$x$ and a clause~$C$, such that~$C$ is inferred by resolution w.r.t.~$x$ from the clauses labeling the two children of~$v$, and (3)~the unique out-degree zero node is labeled with the conclusion clause~$D$. If $D=\Box$, then the RTL proof is a refutation. {\em w-resolution trees with lemmas} (WRTL) are defined just like RTL's, but allowing w-resolution in place of resolution, and \emph{resolution trees with lemmas and weakening} (RTLW) are defined in the same way, but allowing the weakening rule in addition to resolution. An RTL or WRTL proof is {\em regular} provided that no path in the proof tree contains more than one (w-)resolution using a given variable~$x$. Note that paths follow the tree edges only; any maximal path starts at a leaf node (possibly a lemma) and ends at the conclusion. It is not hard to see that resolution trees with lemmas (RTL) and resolution dags (RD) p-simulate each other. Namely, an RD can be converted into an RTL by doing a depth-first, leftmost traversal of the RD. In addition, it is clear that regular RTL's p-simulate regular RD's. 
The converse is open, and it is false for regular WRTL, as we prove in \pref{sec:regwrti-dll}: intuitively, the problem is that when one converts an RTL proof into an RD, new path connections are created when leaf clauses are replaced with edges back to the node where the lemma was derived. We next define resolution tree with input lemmas (RTI) proofs. These are a restricted version of resolution trees with lemmas, where the lemmas are required to have been derived earlier in the proof by \emph{input proofs}. Input proofs have also been called \emph{trivial proofs} by Beame et al.~\cite{BeameKautzSabharwal2004}, and they are useful for characterizing the clause learning permissible for DLL algorithms. \begin{defi} An {\em input resolution tree} is a resolution tree such that every internal node has at least one child that is a leaf. Let $v$~be a node in a tree~$T$ and let $T_v$ be the subtree of~$T$ with root~$v$. The node~$v$ is called an {\em input-derived node} if $T_v$~is an input resolution tree. \end{defi} Often the node~$v$ and its label~$C$ are identified. In this case, $C$~is called an {\em input-derived clause}. In RTI proofs, input-derived clauses may be reused as lemmas. Thus, in an RTI proof, an input-derived clause is derived by an input proof whose leaves either are initial clauses or are clauses that were already input-derived. \begin{defi} A {\em resolution tree with input lemmas} (RTI) proof~$T$ is an RTL proof with the extra condition that every lemma in~$T$ must appear earlier in~$T$ as an input-derived clause. That is to say, every leaf node~$u$ in~$T$ is labeled either with an initial clause from~$F$ or with a clause that labels some input-derived node $v <_T u$. 
\end{defi} The notions of w-resolution trees with input lemmas (WRTI), regular resolution trees with input lemmas (regRTI), and regular w-resolution trees with input lemmas (regWRTI) are defined similarly.% \footnote{A small, but important point is that w-resolution inferences are not allowed in input proofs, even for input proofs that are part of WRTI proofs. We have chosen the definition of input proofs so as to make the results in \pref{sec:regwrti-dll} hold that show the equivalence between regWRTI proofs and DLL-L-UP search algorithms. Although similar results could be obtained if the definition of input proof were changed to allow w-resolution inferences, it would require also using a modified, and less natural, version of clause learning.} It is clear that resolution dags (RD) and resolution trees with lemmas (RTL) p-simulate resolution trees with input lemmas (RTI). Somewhat surprisingly, the next theorem shows that the converse p-simulation holds as well. \begin{thm}\label{the:RD->RTI} Let $G$ be a resolution dag of size~$s$ for the clause~$C$ from the set~$F$ of clauses. Let $d$ be the depth of~$C$ in~$G$. Then there is an RTI proof~$T$ for~$C$ from~$F$ of size $< 2sd$. If $G$ is regular then $T$ is also regular. \end{thm} \proof The dag proof~$G$ can be unfolded into a proof tree~$T^\prime$, possibly exponentially bigger. The proof idea is to prune clauses away from~$T^\prime$, leaving an RTI proof~$T$ of the desired size. Without loss of generality, no clause appears more than once in~$G$; hence, for a given clause~$C$ in the tree~$T^\prime$, every occurrence of~$C$ in~$T^\prime$ is derived by the same subproof~$T^\prime_C$. Let $d_C$ be the depth of~$C$ in the proof, i.e., the height of the tree~$T^\prime_C$. Clauses at leaves have depth~$0$. We give the proof tree~$T^\prime$ an arbitrary left-to-right order, so that it makes sense to talk about the $i$-th occurrence of a clause~$C$ in~$T^\prime$. 
We define the $j$-th occurrence of a clause~$C$ in~$T^\prime$ to be \emph{leafable}, provided $j > d_C$. The intuition is that the leafable clauses will have been derived as input-derived clauses earlier in~$T$, and thus any leafable clause may be used as a lemma in~$T$. To form~$T$ from~$T^\prime$, remove from~$T^\prime$ any clause~$D$ if it has a successor that is leafable, so that every leafable occurrence of a clause either does not appear in~$T$ or appears in~$T$ as a leaf. To prove that $T$~is a valid RTI proof, it suffices to prove, by induction on~$i$, that if $C$~has depth $d_C=i>0$, then the $i$-th occurrence of~$C$ is input-derived in~$T$. Note that the two children $C_0$ and~$C_1$ of~$C$ must have depth $<d_C$. Since every occurrence of~$C$ is derived from the same two clauses, these occurrences of $C_0$ and~$C_1$ must be at least their $i$-th occurrences. Therefore, by the induction hypothesis, the children $C_0$ and~$C_1$ are leafable and appear in~$T$ as leaves. Thus, since it is derived by a single inference from two leaves, the $i$-th occurrence of~$C$ is input-derived. It follows that $T$ is a valid RTI proof. If the proof~$G$ was regular, clearly $T$~is regular too. To prove the size bound for~$T$, note that $G$ has at most $s-1$ internal nodes. Each one occurs at most $d$ times as an internal node in~$T$, so $T$ has at most $d(s-1)$ internal nodes. Thus, $T$~has at most $2d\cdot (s-1) +1 < 2sd$ nodes in all. \qed The following two theorems summarize the relationships between our various proof systems. We write ${\mathcal R}\equiv{\mathcal Q}$ to denote that $\mathcal R$ and~$\mathcal Q$ are p-equivalent, and ${\mathcal Q}\le {\mathcal R}$ to denote that $\mathcal R$ p-simulates $\mathcal Q$. The notation ${\mathcal Q} < {\mathcal R}$ means that $\mathcal R$ p-simulates~$\mathcal Q$ but $\mathcal Q$ does not simulate~$\mathcal R$.
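The structural conditions in the definitions of input proofs and input-derived nodes above are easy to state programmatically. The following is a toy sketch (the nested-tuple encoding and function names are ours, purely for illustration); it checks only the shape condition that every internal node has a leaf child, and does not verify the resolution inferences themselves.

```python
# A resolution tree is modelled as nested tuples: a leaf is a frozenset of
# integer literals (the clause), and an internal node is a triple
# (clause, left_subtree, right_subtree).  Hypothetical encoding.

def is_leaf(t):
    return isinstance(t, frozenset)

def is_input_tree(t):
    """True iff every internal node has at least one child that is a leaf."""
    if is_leaf(t):
        return True
    _, left, right = t
    if not (is_leaf(left) or is_leaf(right)):
        return False
    return is_input_tree(left) and is_input_tree(right)

def input_derived_clauses(t):
    """Clauses labelling input-derived nodes, i.e. nodes v whose subtree
    T_v is an input resolution tree (leaves qualify vacuously)."""
    out = set()
    def walk(sub):
        if is_leaf(sub):
            out.add(sub)
            return
        clause, left, right = sub
        walk(left)
        walk(right)
        if is_input_tree(sub):
            out.add(clause)
    walk(t)
    return out
```

In an RTI proof, exactly the clauses reported by `input_derived_clauses` would be available for reuse as lemmas later in the tree.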
\begin{thm}\label{the:all_equiv} $\text{RD} \equiv \text{WRD} \equiv \text{RTI} \equiv \text{WRTI} \equiv \text{RTL} \equiv \text{WRTL}$ \end{thm} \proof The p-equivalences $\text{RD} \equiv \text{WRD}$ and $\text{RTI} \equiv \text{WRTI}$ and $\text{RTL} \equiv \text{WRTL}$ are shown by (the proof of) \pref{pro:subsumeweak}. The simulations $\text{RTI} \le \text{RTL} \equiv \text{RD}$ are straightforward. Finally, $\text{RD} \le \text{RTI}$ is shown by Theorem~\ref{the:RD->RTI}. \qed For regular resolution, we have the following theorem. \begin{thm}\label{the:hierarchy} $\text{regRD} \equiv \text{regWRD} \leq \text{regRTI} \leq \text{regRTL} \leq \text{regWRTL} \leq \text{RD}$ and $\text{regRTI} \leq \text{regWRTI} \leq \text{regWRTL}$. \end{thm} \proof $\text{regRD} \equiv \text{regWRD}$ and $\text{regWRTL} \leq \text{RD}$ follow from the definitions and the proof of \pref{pro:subsumeweak}. The p-simulations $\text{regRTI} \leq \text{regRTL} \leq \text{regWRTL}$ and $\text{regRTI} \leq \text{regWRTI} \leq \text{regWRTL}$ follow from the definitions. The p-simulation $\text{regRD} \leq \text{regRTI}$ is shown by Theorem~\ref{the:RD->RTI}. \qed Below, we prove, as \pref{the:regRDnosimregWRTI}, that $\text{regRD} < \text{regWRTI}$. This is the only separation in the hierarchy that is known. In particular, it is open whether $\text{regRD} < \text{regRTI}$, $\text{regRTI} < \text{regRTL}$, $\text{regRTL} < \text{regWRTL}$, $\text{regWRTL} < \text{RD}$ or $\text{regWRTI} < \text{regWRTL}$ hold. It is also open whether regWRTI and regRTL are comparable. \section{Introduction}\label{sec:intro} Although the satisfiability problem for propositional logic (SAT) is NP-complete, there exist SAT solvers that can decide SAT on present-day computers for many formulas that are relevant in practice \cite{SilvaSakallah1996, MoskewiczMalik2001, MahajanFu2004, BerreSimon2003, BerreSimon2004, BerreSimon2005}.
The fastest SAT solvers for structured problems are based on the basic backtracking procedures known as DLL algorithms \cite{DavisLogemann1962}, extended with additional techniques such as clause learning. DLL algorithms can be seen as a kind of proof search procedure since the execution of a DLL algorithm on an unsatisfiable CNF formula yields a tree-like resolution refutation of that formula. Conversely, given a tree-like resolution refutation, an execution of a DLL algorithm on the refuted formula can be constructed whose runtime is roughly the size of the refutation. By this exact correspondence, upper and lower bounds on the size of tree-like resolution proofs transfer to bounds on the runtime of DLL algorithms. This paper generalizes this exact correspondence to extensions of DLL by clause learning. To this end, we define natural, rule-based resolution proof systems and then prove that they correspond to DLL algorithms that use various forms of clause learning. The motivation for this is that the correspondence between a clause learning DLL algorithm and a proof system helps explain the power of the algorithm by giving a description of the space of proofs which is searched by it. In addition, upper and lower bounds on proof complexity can be transferred to upper and lower bounds on the possible runtimes of large classes of DLL algorithms with clause learning. We introduce, in {\pref{sec:wrtl}}, tree-like resolution refinements using the notions of a resolution tree with lemmas (RTL) and a resolution tree with input lemmas (RTI). An RTL is a tree-like resolution proof in which every clause needs only to be derived once and can be copied to be used as a leaf in the tree (i.e., a lemma) if it is used several times. As the reader might guess, RTL is polynomially equivalent to general resolution. 
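The correspondence between DLL executions and tree-like resolution can be made concrete with a bare-bones backtracking procedure. The sketch below is our own illustration, not the paper's formal algorithm: it omits unit propagation and clause learning, and the clause encoding is an assumption. On an unsatisfiable input, its recursion tree mirrors a tree-like refutation, with the two recursive calls at a node corresponding to a resolution step on the branching variable.

```python
def dll(clauses, assignment=()):
    """Bare-bones DLL backtracking.  Clauses are collections of integer
    literals: k means variable k is true, -k means it is false."""
    assigned = set(assignment)
    simplified = []
    for c in clauses:
        if any(l in assigned for l in c):      # clause already satisfied
            continue
        rest = frozenset(l for l in c if -l not in assigned)
        if not rest:                           # clause falsified: conflict leaf
            return False
        simplified.append(rest)
    if not simplified:                         # every clause satisfied
        return True
    x = abs(min(simplified[0]))                # pick a branching variable
    # The two branches correspond to resolving on x at this node.
    return (dll(simplified, assignment + (x,)) or
            dll(simplified, assignment + (-x,)))
```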
Since DLL algorithms use learning based on unit propagation, and since unit propagation is equivalent to input resolution (sometimes called ``trivial resolution'' \cite{BeameKautzSabharwal2004}), it is useful to restrict the lemmas that are used in an RTL to those that appear as the roots of input subproofs. This gives rise to proof systems based on resolution trees with input lemmas (RTI). Somewhat surprisingly, we show that RTI can also simulate general resolution. A resolution proof is called {\em regular} if no variable is used as a resolution variable twice along any path in the tree. Regular proofs occur naturally in the present context, since a backtracking algorithm would never query the same variable twice on one branch of its execution. It is known that regular resolution is weaker than general resolution~\cite{Goerdt1993,Alekhnovich2002}, but it is unknown whether regular resolution can simulate regular RTL or regular RTI. This is because, in regular RTL/RTI proofs, variables that are used for resolution to derive a clause can be reused on paths where this clause appears as a lemma. For resolution and regular resolution, the use of a weakening rule does not increase the power of the proof system (by the subsumption principle). However, for RTI and regular RTL proofs, the weakening rule may increase the strength of the proof system (this is an open question, in fact), since eliminating uses of weak inferences may require pruning away parts of the proof that contain lemmas needed later in the proof. Accordingly, \pref{sec:wrtl} also defines proof systems regWRTL and regWRTI that consist of regular RTL and regular RTI (respectively), but with a modified form of resolution, called ``w-resolution'', that incorporates a restricted form of the weakening rule. In {\pref{sec:dll-up}} we propose a general framework for DLL algorithms with clause learning, called \textsc{DLL-L-UP}.
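Unit propagation, on which the learning discussed above is based, can be sketched in a few lines (our illustration; the literal encoding is an assumption). Each propagation round corresponds to one step of input (trivial) resolution against the current unit clause, which is the reason learned clauses admit input derivations.

```python
def unit_propagate(clauses):
    """Sketch of unit propagation.  Clauses are collections of integer
    literals (k = variable k true, -k = false).  Returns
    (propagated_literals, reduced_clauses); reduced_clauses contains
    frozenset() exactly when a conflict has been derived."""
    clauses = [frozenset(c) for c in clauses]
    assigned = []
    while True:
        unit = next((c for c in clauses if len(c) == 1), None)
        if unit is None or any(len(c) == 0 for c in clauses):
            return assigned, clauses
        (l,) = unit                      # the unit literal is forced
        assigned.append(l)
        # Satisfied clauses vanish; -l is removed from the rest.  The
        # removal step mirrors an input-resolution step with the unit clause.
        clauses = [frozenset(x for x in c if x != -l)
                   for c in clauses if l not in c]
```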
The schema \textsc{DLL-L-UP} is an attempt to give a short and abstract definition of modern SAT solvers; it incorporates all common learning strategies, including all the specific strategies discussed by Beame et al.~\cite{BeameKautzSabharwal2004}. {\pref{sec:regwrti-dll}} proves that, for any of these learning strategies, a proof search tree can be transformed into a regular WRTI proof with only a polynomial increase in size. Conversely, any regular WRTI proof can be simulated by a ``non-greedy'' DLL search tree with clause learning, where ``non-greedy'' means that the algorithm can continue decision branching even after unit propagation could yield a contradiction. In {\pref{sec:dll-learn}} we give another generalization of DLL with clause learning called {\textsc{DLL-Learn}}. The algorithm {\textsc{DLL-Learn}}{} can simulate the clause learning algorithm \textsc{DLL-L-UP}. More precisely, we prove that {\textsc{DLL-Learn}}{} p-simulates, and is p-simulated by, regular WRTL. The {\textsc{DLL-Learn}}{} algorithm is very similar to the ``pool resolution'' algorithm introduced by Van Gelder~\cite{VanGelder2005} but differs from pool resolution by using the ``w-resolution'' inference in place of the ``degenerate'' inference used by Van Gelder (the terminology ``degenerate'' is used by Hertel et al.~\cite{BHPvG:clauselearn}). Van Gelder has shown that pool resolution can simulate not only regular resolution, but also any resolution refutation which has a regular depth-first search tree. The latter proof system is the same as the proof system regRTL in our framework; therefore, the same holds for {\textsc{DLL-Learn}}{}. It is unknown whether {\textsc{DLL-Learn}}{} or \textsc{DLL-L-UP} can p-simulate pool resolution or vice versa. Sections \ref{sec:dll-up}--\ref{sec:dll-learn} prove the equivalence of clause learning algorithms with the two proof systems regWRTI and regWRTL.
The genuinely novel system here is regWRTI: it has the advantage of using input lemmas in a manner that closely matches the range of clause learning strategies available to practical DLL algorithms. In particular, the regWRTI proof system's use of input lemmas corresponds directly to the clause learning strategies of Silva and Sakallah \cite{SilvaSakallah1996}, including first-UIP, relsat, and other cut-based learned clauses, as well as learning multiple clauses at a time. Van Gelder~\cite{VanGelder2005} shows that pool resolution can also simulate these kinds of clause learning (at least, for learning single clauses), but the correspondence is much more natural for the system regWRTI than for either pool resolution or {\textsc{DLL-Learn}}{}. It is known that DLL algorithms with clause learning and restarts can simulate full (non-regular, dag-like) resolution by learning every derived clause, and doing a restart each time a clause is learned~\cite{BeameKautzSabharwal2004}. Our proof systems, regWRTI and {\textsc{DLL-Learn}}{}, do not handle restarts; instead, they can be viewed as capturing what can happen between restarts. Another approach to simulating full resolution is via the use of ``proof trace extensions'' introduced by Beame et al.~\cite{BeameKautzSabharwal2004}. Proof trace extensions allow resolution to be simulated by clause learning DLL algorithms, and a related construction is used by Hertel et al.~\cite{BHPvG:clauselearn} to show that pool resolution can ``effectively'' p-simulate full resolution. These constructions require introducing new variables and clauses in a way that does not affect satisfiability, but allow a clause learning DLL algorithm or pool resolution to establish non-satisfiability.
However, the constructions by Beame et al.~\cite{BeameKautzSabharwal2004} and the initially circulated preprint of Hertel et al.~\cite{BHPvG:clauselearn} had the drawback that the number of extra introduced variables depends on the size of the (unknown) resolution refutation. \pref{sec:varexp} introduces an improved form of proof trace extensions called ``variable extensions''. Theorem~\ref{the:pte_trick} shows that variable extensions can be used to give a p-simulation of full resolution by regWRTI (at the cost of changing the formula that is being refuted). Variable extensions are simpler and more powerful than proof trace extensions. Their main advantage is that a variable extension depends only on the number of variables, not on the size of the (unknown) resolution proof. The results of \pref{sec:varexp} were first published in the second author's diploma thesis \cite{Hoffmann2007}; the subsequently published version of the article of Hertel et al.~\cite{BHPvG:clauselearn} gives a similarly improved construction (for pool resolution) that does not depend on the size of the resolution proof and, in addition, does not use degenerate resolution inferences. One consequence of Theorem~\ref{the:pte_trick} is that regWRTI can effectively p-simulate full resolution. This improves on the results of Hertel et al.~\cite{BHPvG:clauselearn} since regWRTI is not known to be as strong as pool resolution. It remains open whether regWRTI or pool resolution can p-simulate general resolution without variable extensions. \pref{sec:smlem} proves a lower bound that shows that for certain hard formulas, the pigeonhole principle $PHP_n$, learning only small clauses does not help a DLL-algorithm. We show that resolution trees with lemmas require size exponential in $n\log n$ to refute $PHP_n$ when the size of clauses used as lemmas is restricted to be less than $n/2$. 
This bound is asymptotically the same as the lower bound shown for tree-like resolution refutations of $PHP_n$ \cite{iwamiy99}. On the other hand, there are regular resolution refutations of $PHP_n$ of size exponential in~$n$~\cite{BusPit97}, and our results show that these can be simulated by \textsc{DLL-L-UP}. Hence the ability of learning large clauses can give a DLL-algorithm a superpolynomial speedup over one that learns only short clauses. \section{Equivalence of regWRTI and DLL-L-UP}\label{sec:regwrti-dll} \subsection{regWRTI simulates DLL-L-UP} We shall prove that regular WRTI proofs are equivalent to non-greedy \hbox{\textsc{DLL-L-UP}} searches. We start by showing that every \textsc{DLL-L-UP} search can be converted into a regWRTI proof. As a first step, we prove that, for a given series-parallel decomposition~$\mathcal H$ of a conflict graph, there is a single regWRTI proof~$T$ such that every learnable clause of~$\mathcal H$ appears as an input-derived clause in~$T$. Furthermore, $T$~is polynomial size; in fact, $T$ has size at most quadratic in the number of distinct variables that appear in the conflict graph. This theorem generalizes earlier, well-known results of Chang \cite{Chang1970} and Beame et al.~\cite{BeameKautzSabharwal2004} that any individual learned clause can be derived by input resolution (or, more specifically, that unit resolution is equivalent to input resolution). The theorem states a similar fact about proving an entire set of learnable clauses simultaneously. \begin{thm} \label{the:regWRTIforLearnables} Let $G$~be a conflict graph of size~$n$ for~$F$ under the assignment~$\alpha$. Let $\mathcal H$ be a series-parallel decomposition for~$G$. Then there is a regWRTI proof~$T$ of size~$\le n^2$ such that every learnable clause of~$\mathcal H$ is an input-derived clause in~$T$. The final clause of~$T$ is equal to $\Cc{G}$. 
Furthermore, $T$~uses as resolution variables only variables that are used as nodes (possibly negated) in $G\setminus \leafs{G}$. \end{thm} First we prove a lemma. Let the subconflict graphs $H_0\subset H_1\subset \cdots \subset H_k$ and $H_{0,0}\subset H_{0,1} \subset \cdots \subset H_{k-1,m_{k-1}}$ be as in the definition of series-parallel decomposition. \begin{lem} \label{lem:lemmaA} \hspace*{1em} \begin{enumerate}[\em(a)] \item There is an input proof~$T_0$ from~$F$ which contains every conflict clause $\CC {H_{0,j}}$, for $j=1,\ldots,m_0$. Every resolution variable in~$T_0$ is a non-leaf node (possibly negated) in~$H_1$. \item Suppose that $1\le i<k$ and $u$~is a literal in $\leafs{H_i}$. Then there is an input proof~$T^u_i$ which contains every (existing) induced clause $\ic{u}{H_{i,j}}$ for $j=1,\ldots,m_i$. Every resolution variable in~$T^u_i$ is a non-leaf node (possibly negated) in the subgraph $(H_{i+1})_u$ of~$H_{i+1}$ rooted at~$u$. \end{enumerate} \end{lem} \proof We prove part~a.\ of the lemma and then indicate the minor modifications needed to prove part~b. The construction of~$T_0$ proceeds by induction on~$j$ to build proofs $T_{0,j}$; at the end, $T_0$~is set equal to~$T_{0,m_0}$. Each proof $T_{0,j}$ ends with the clause $\CC{H_{0,j}}$ and contains the earlier proof~$T_{0,j-1}$ as a subproof. In addition, the only variables used as resolution variables in~$T_{0,j}$ are variables that are non-leaf nodes (possibly negated) in~$H_{0,j}$. To prove the base case $j=1$, we must show that $\CC{H_{0,1}}$ has an input proof~$T_{0,1}$. Let the two immediate predecessors of~$\Box$ in~$G$ be the literals $x$ and~$\overline x$. Define a clause~$C$ as follows. If $x$ is not a leaf in~$H_{0,1}$, then we let $C = C_x$; recall that $C_x$~is the clause that contains the literal~$x$ and the negations of literals that are immediate predecessors of~$x$ in the conflict graph.
Otherwise, since $H_{0,1}\not= H_0$, $\overline x$ is not a leaf in~$H_{0,1}$, and we let $C=C_{\overline x}$. By inspection, $C$~has the property that it contains only negations of literals that are in~$H_{0,1}$. For $l\in C$, define the $\{0,1\}$-depth of~$l$ as the maximum length of a path to~$\overline l$ from a leaf of~$H_{0,1}$. If all literals in~$C$ have $\{0,1\}$-depth equal to zero, then $C = \CC{H_{0,1}}$, and $C$~certainly has an input proof from~$F$ (in fact, since $C=C_x$ or $C=C_{\overline x}$, we must have $C\in F$). Suppose, on the other hand, that $C$ is a subset of the nodes of~$H_{0,1}$ with some literals of non-zero $\{0,1\}$-depth. Choose a literal~$l$ in~$C$ of maximum $\{0,1\}$-depth~$d$ and resolve $C$ with the clause $C_{\overline {l}}\in F$ to obtain a new clause~$C^\prime$. Since $C_{\overline {l}}\in F$, the resolution step introducing~$C^\prime$ preserves the property of having an input proof from~$F$. Furthermore, the new literals in~$C^\prime\setminus C$ have $\{0,1\}$-depth strictly less than~$d$. Redefine $C$~to be the newly constructed clause~$C^\prime$. If this new $C$ is a subset of~$\CC{H_{0,1}}$, we are done constructing~$C$. Otherwise, some literal in~$C$ has non-zero $\{0,1\}$-depth. In this latter case, we repeat the above construction to obtain a new~$C$, and continue iterating this process until we obtain~$C\subseteq \CC{H_{0,1}}$. When the above construction is finished, $C$~is constructed as a clause with a regular input proof~$T_{0,1}$ from~$F$ (the regularity follows by the fact that variables introduced in~$C^\prime$ have $\{0,1\}$-depth less than that of the resolved-upon variable). Furthermore, $C\subseteq \CC{H_{0,1}}$. In fact, $C = \CC{H_{0,1}}$ must hold, because there is a path, in~$H_{0,1}$, from each leaf of~$H_{0,1}$ to~$\Box$. That completes the proof of the $j=1$ base case.
For the induction step, with $j>1$, the induction hypothesis is that we have constructed an input proof~$T_{0,j}$ such that $T_{0,j}$ contains all the clauses $\CC{H_{0,p}}$ for $1\le p \le j$ and such that the final clause in~$T_{0,j}$ is the clause $\CC{H_{0,j}}$. We are seeking to extend this input proof to an input proof $T_{0,j+1}$ that ends with the clause $\CC{H_{0,j+1}}$. The construction of~$T_{0,j+1}$ proceeds exactly like the construction above of~$T_{0,1}$, but now we start with the clause $C = \CC{H_{0,j}}$ (instead of $C=C_x$ or~$C_{\overline x}$), and we update~$C$ by choosing the literal~${l}\in C$ of maximum $\{0,j+1\}$-depth and resolving with~$C_{\overline {l}}$ to derive the next~$C$. The rest of the construction of~$T_{0,j+1}$ is similar to the previous argument. For the regularity of the proof, it is essential that $H_{0,j}$ is a proper subconflict graph of $H_{0,j+1}$. By inspection, any literal~$l$ used for resolution in the new part of~$T_{0,j+1}$ is a non-leaf node in~$H_{0,j+1}$ and has a path from~$l$ to some leaf node of~$H_{0,j}$. Since $H_{0,j}$ is proper, it follows that $l$~is not an inner node of~$H_{0,j}$ and thus is not used as a resolution literal in~$T_{0,j}$. Thus $T_{0,j+1}$ is regular. This completes the proof of part~a. The proof for part~b.\ is very similar to the proof for part~a. Fixing $i>0$, let $u$~be any literal in $\leafs{H_{i,0}}$. We need to prove that, for $1\le j\le m_i$, there is an input proof~$T_{i,j}^u$ from~$F$ such that (a)~$T_{i,j}^u$~contains every existing induced clause $\ic u {H_{i,k}}$ for $1\le k<j$, and (b)~$T_{i,j}^u$ ends with the induced clause $\ic u {H_{i,j}}$, and (c)~the resolution variables used in~$T^u_{i,j}$ are all non-leaf nodes (possibly negated) of $V_{(H_{i,j})_u}$. The proof is by induction on~$j$. One starts with the clause $C = C_u$.
The main step of the construction of $T^u_{i,j+1}$ from~$T^u_{i,j}$ is to find the literal $v\not=u$ in~$C$ of maximum $\{i,j\}$-depth, and resolve $C$ with~$C_{\overline v}$ to obtain the next~$C$. This process proceeds iteratively exactly like the construction used for part~a. This completes the proof of \pref{lem:lemmaA}. \qed We can now prove \pref{the:regWRTIforLearnables}. \pref{lem:lemmaA} constructed separate regular input resolution proofs $T_{0,m_0}=T_0$ and~$T_{i,m_i}^u=T_i^u$ that included all the learnable clauses of~$\mathcal H$. To complete the proof of \pref{the:regWRTIforLearnables}, we combine all these proofs into one single regWRTI proof. For this, we construct proofs $T^*_i$ of the clause $\CC {H_i}$. $T^*_1$~is just~$T_{0}$. The proof~$T^*_{i+1}$ is constructed from~$T^*_i$ by successively resolving the final clause of~$T^*_i$ with the final clauses of the proofs~$T^u_i$, using each $u \in \leafs {H_i}\setminus \leafs{H_{i+1}}$ as a resolution variable, taking the~$u$'s in order of increasing $\{i,m_i\}$-depth to preserve regularity. Letting $T = T^*_k$, it is clear that $T^*_k$~contains all the clauses from~$\Cc{\mathcal H}$, and, by construction, $T^*_k$~is regular. To bound the size of~$T$, note that any regular input proof~$S$ has size $2r+1$ where $r$ is the number of distinct variables used as resolution variables in~$S$. Since $T$ is regular, and is formed by combining the regular input proofs $T_0$, $T^u_i$ in a linear fashion, the total size of~$T$ is less than $n + \sum_{k=0}^{n-1} 2k + 1 = n^2+1$. This completes the proof of \pref{the:regWRTIforLearnables}. \hfill $\Box$ Note that, since the final clause of~$T$ contains only literals from $\leafs G$, $T$~does not use any variable that occurs in its final clause as a resolution variable. \medskip We can now prove the first main result of this section, namely, that regWRTI proofs polynomially simulate \textsc{DLL-L-UP} search trees.
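The splicing of proofs in the argument above reduces to repeated resolution of final clauses. A minimal sketch follows (the encoding and helper names are ours, for illustration only; it checks nothing about regularity or variable ordering).

```python
# Clauses are frozensets of integer literals; -x denotes the negation of x.

def resolve(C, D, x):
    """Resolve clause C (containing -x) with clause D (containing x)
    on the variable x."""
    assert -x in C and x in D
    return (C - {-x}) | (D - {x})

def chain(start, steps):
    """Successively resolve `start` against each (clause, variable) pair,
    mirroring how T*_{i+1} extends T*_i with the final clauses of the
    proofs T^u_i."""
    C = frozenset(start)
    for D, x in steps:
        C = resolve(C, frozenset(D), x)
    return C
```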
\begin{thm}\label{the:rWRTIsimDLL} Suppose that $F$~is an unsatisfiable set of clauses and that there is an execution of a (possibly non-greedy) \textsc{DLL-L-UP} search algorithm on input~$F$ that outputs \texttt{UNSAT} with $s$~recursive calls. Then there is a regWRTI refutation of~$F$ of size at most $s\cdot n^2$ where $n = |\var(F)|$. \end{thm} \proof Let $S$ be the search tree associated with the \textsc{DLL-L-UP} algorithm's execution. We order~$S$ so that the \textsc{DLL-L-UP} algorithm effectively traverses~$S$ in a depth-first, left-to-right order. We transform~$S$ into a regWRTI proof tree~$T$ as follows. The tree~$T$ contains a copy of~$S$, but adds subproofs at the leaves of~$S$ (these subproofs will be derivations of learned clauses). For each internal node in~$S$, if the corresponding branching variable was~$x$ and was first set to the value~$x^\epsilon$, then the corresponding node in~$T$ is labeled with $x$ as the resolution variable, and its left incoming edge is labeled with~$x^{\epsilon}$ and its right incoming edge is labeled with~$x^{1-\epsilon}$. For each node~$u$ in~$S$, let $\alpha_u$~be the assignment at that node that is held by the \textsc{DLL-L-UP} algorithm upon reaching that node. By construction, $\alpha_u$~is equivalently defined as the assignment that has $\alpha_u(l) = 1$ for each literal~$l$ that labels an edge on the path (in~$T$) between~$u$ and the root of~$T$. For a node~$u$ that is a leaf of~$S$, the \textsc{DLL-L-UP} algorithm chooses a conflict graph~$G_u$ with a series-parallel decomposition~${\mathcal H}_u$ such that every leaf node~$l$ of~$G_u$ is a literal set to true by~$\alpha_u$. Also, let~$F_u$ be the set~$F$ of original clauses augmented with all clauses learned by the \textsc{DLL-L-UP} algorithm before reaching node~$u$. By \pref{the:regWRTIforLearnables}, there is a proof~$T_u$ from the clauses~$F_u$ such that every learnable clause of~${\mathcal H}_u$ appears in $T_u$ as an input-derived clause.
Hence, of course, every clause learned at~$u$ by the \textsc{DLL-L-UP} algorithm appears in~$T_u$ as an input-derived clause. The leaf node~$u$ of~$S$ is then replaced by the proof~$T_u$ in~$T$. Note that by \pref{the:regWRTIforLearnables} and the definition of conflict graphs, the final clause~$C_u$ of~$T_u$ is a clause that contains only literals falsified by~$\alpha_u$. So far, we have defined the clauses~$C_u$ that label nodes~$u$ in~$T$ only for leaf nodes~$u$. For internal nodes~$u$, we define $C_u$~inductively by letting $v$ and~$w$ be the immediate predecessors of~$u$ in~$T$ and defining $C_u$~to be the clause obtained by (w-)resolution from the clauses $C_v$ and~$C_w$ with respect to the branching variable~$x$ that was picked at node~$u$ by the \textsc{DLL-L-UP} algorithm. Clearly, using induction from the leaves of~$S$, the clause~$C_u$ contains only variables that are falsified by the assignment~$\alpha_u$. This makes $T$ a regWRTI proof. Let $r$~be the root node of~$S$. Since $\alpha_r$~is the empty assignment, the clause~$C_r$ must equal the empty clause~$\Box$. Thus $T$~is a regWRTI refutation of~$F$ and \pref{the:rWRTIsimDLL} is proved. \qed Since DLL clause learning based on first cuts has been shown to give exponentially shorter proofs than regular resolution~\cite{BeameKautzSabharwal2004}, and since \pref{the:rWRTIsimDLL} states that regWRTI can simulate DLL search algorithms (including ones that learn first cut clauses), we have proved that regRD does not simulate regWRTI: \begin{thm}\label{the:regRDnosimregWRTI} $\text{regRD} < \text{regWRTI}$. \end{thm} Hoffmann \cite{Hoffmann2007} gave a direct proof of \pref{the:regRDnosimregWRTI} based on the variable extensions described below in \pref{sec:varexp}. \subsection{DLL-L-UP simulates regWRTI} We next show that the non-greedy \textsc{DLL-L-UP} search procedure can simulate any regWRTI proof~$T$. 
The intuition is that we split~$T$ into two parts: the \emph{input parts} are the subtrees of~$T$ that contain only input-derived clauses. The \emph{interior part} of~$T$ is the rest of~$T$. The interior part will be simulated by a \textsc{DLL-L-UP} search procedure that traverses the tree~$T$ and, at each node, chooses the resolution variable as the branching variable and sets the branching variable according to the label on the left incoming edge. In this way, the tree~$T$ is traversed in a depth-first, left-to-right order. The input parts of~$T$ are not traversed, however. Once an input-derived clause is reached, the \textsc{DLL-L-UP} search learns all the clauses in that input subproof and backtracks, returning \texttt{UNSAT}. The heart of the procedure is how a conflict graph and corresponding series-parallel decomposition can be picked so as to make all the clauses in a given input subproof learnable. This is the content of the next lemma. \begin{lem}\label{lem:lemmaB} Let $T$~be a regular input proof of~$C$ from a set of clauses~$F$. Suppose that $\alpha$ falsifies~$C$, that is, $\rest C \alpha = 0$. Further suppose no variable in~$C$ is used as a resolution variable in~$T$. Then there is a conflict graph~$G$ for~$F$ under~$\alpha$ and a series decomposition~$\mathcal H$ for~$G$ such that the set of learnable clauses of~${\mathcal H}$ is equal to the set of input-derived clauses of~$T$. \end{lem} Recall that a series decomposition just means a series-parallel decomposition with a trivial parallel part, i.e., $k=1$ in the definition of series-parallel decompositions. \proof Without loss of generality, $F$~is just the set of initial clauses of~$T$. Let the input proof~$T$ contain clauses $C_{m+1}=C, C_{m}, \ldots,C_1, D_{m},\ldots,D_1$ as illustrated in \pref{fig:regRTI} with $m=4$. Each $C_{i+1}$~is inferred from $C_{i}$ and~$D_{i}$ by resolution on~$l_{i}$, where $\overline{l_i}\in C_i$ and $l_i \in D_i$.
For each~$i$, we have $D_i = \{l_i\} \cup D^\prime_i$, where $ D^\prime_i\subseteq C_{i+1}$. Likewise, $C_i = \{\overline{l_i}\} \cup C^\prime_i$, where $ C^\prime_i\subseteq C_{i+1}$. \begin{figure} \begin{center} \psset{unit=1cm} \begin{pspicture}(-6,-0.2)(3,4) \pscircle*(-4,4){0.07} \pscircle*(-2,4){0.07} \pscircle*(-3,3){0.07} \pscircle*(-1,3){0.07} \pscircle*(-2,2){0.07} \pscircle*(0,2){0.07} \pscircle*(-1,1){0.07} \pscircle*(1,1){0.07} \pscircle*(0,0){0.07} \psline(-4,4)(0,0) \psline(-2,4)(-3,3) \psline(-1,3)(-2,2) \psline(0,2)(-1,1) \psline(1,1)(0,0) \uput[-45](0.5,0.5){$\overline{l_4}$} \uput[-45](-0.5,1.5){$\overline{l_3}$} \uput[-45](-1.5,2.5){$\overline{l_2}$} \uput[-45](-2.5,3.5){$\overline{l_1}$} \uput[-135](-0.5,0.5){$l_4$} \uput[-135](-1.5,1.5){$l_3$} \uput[-135](-2.5,2.5){$l_2$} \uput[-135](-3.5,3.5){$l_1$} \uput[45](1,1){$D_4$} \uput[45](0,2){$D_3$} \uput[45](-1,3){$D_2$} \uput[45](-2,4){$D_1$} \uput[225](-1,1){$C_4$} \uput[225](-2,2){$C_3$} \uput[225](-3,3){$C_2$} \uput[135](-4,4){$C_1$} \uput[-90](0,0){$C_{5}=C$} \end{pspicture} \end{center} \caption{A regular input proof of~$C$. Edges are labeled $l_i$ or $\overline{l_i}$. The $C_i$'s and $D_i$'s are clauses.} \label{fig:regRTI} \end{figure} As illustrated in Figure~\ref{fig:conflictDecomp}, we construct conflict graphs $H_{0,0} = \{\Box,l_1,\overline{l_1}\} \subset H_{0,1} \subset \cdots \subset H_{0,m} =G$ which form a series decomposition of~$G$. $H_{0,i}$~will be a conflict graph from the set of clauses $\{C_1,D_1,\ldots,D_i\}$ under~$\alpha_i$ where $\alpha_i$~is the assignment that falsifies all the literals in~$C_{i+1}$. Indeed, the leaves of~$H_{0,i}$ are precisely the negations of literals in~$C_{i+1}$. For $i>0$, the non-leaf nodes of $H_{0,i}$ are $\overline{l_1}$ and $l_1,\ldots,l_i$. The predecessors of~$\overline{l_1}$ are defined to be the literals~$u$ with $\overline{u} \in C_1^\prime$, that is $C_{\overline{l_1}} = C_1$. 
Likewise, the predecessors of~$l_i$ are the literals~$u$ with $\overline{u} \in D_i^\prime$ so that $C_{l_i} = D_i$. To start with, we define $H_{0,0}$ to equal $\{\Box, l_1, \overline l_1\}$. Let $H_{0,i}$ be already constructed. Then we have $\overline{l}_{i+1} \in C_{i+1}$ since $C_{i+2}$ is inferred by resolution on~$l_{i+1}$ from~$C_{i+1}$. It follows that $\alpha_i(l_{i+1}) = 1$ and that $l_{i+1}$~is a leaf in~$H_{0,i}$. We obtain~$H_{0,i+1}$ from~$H_{0,i}$ by adding the predecessors of~$l_{i+1}$ (i.e., the literals~$u$ with $\overline{u} \in D_{i+1}^\prime$) to~$H_{0,i}$. The leaves of~$H_{0,i+1}$ are now exactly the negations of the literals in the clause~$C_{i+2}^\prime$. Finally, the graph $H_{0,m} = G$ and the series decomposition $\mathcal{H}$ defined by the graphs $H_{0,i}$ are as desired. This completes the proof of \pref{lem:lemmaB}. \qed \begin{figure} \begin{center} \psset{yunit=1.2cm} \psset{xunit=0.8cm} \begin{pspicture}(-6,-0.2)(7,8) \rput(0,0){$\Box$} \pnode(0,0){BOX} \pscircle(0,0){0.4cm} \rput(0,6){$l_4$} \pnode(0,6){L4} \pscircle(0,6){0.4cm} \rput(1.5,4.5){$l_3$} \pnode(1.5,4.5){L3} \pscircle(1.5,4.5){0.4cm} \rput(3.0,3.0){$l_2$} \pnode(3.0,3.0){L2} \pscircle(3.0,3.0){0.4cm} \rput(4.5,1.5){$l_1$} \pnode(4.5,1.5){L1} \pscircle(4.5,1.5){0.4cm} \rput(-4.5,1.5){$\overline{l_1}$} \pnode(-4.5,1.5){L1neg} \pscircle(-4.5,1.5){0.4cm} \psset{nodesep=0.4cm} \ncline{->}{L4}{L3} \ncline{->}{L3}{L2} \ncline{->}{L2}{L1} \ncline{->}{L1}{BOX} \ncline{->}{L4}{L1neg} \ncline{->}{L3}{L1neg} \ncline{->}{L2}{L1neg} \ncline{->}{L1neg}{BOX} \rput(5.5,3.0){$D^{\prime\prime}_1$} \rput(4.2,4.7){$D^{\prime\prime}_2$} \rput(2.7,6.2){$D^{\prime\prime}_3$} \rput(0.0,7.5){$D^{\prime\prime}_4$} \rput(-5.2,3.2){$C^{\prime\prime}_1$} \pnode(5.5,3.0){D1} \pnode(4.2,4.7){D2} \pnode(2.7,6.2){D3} \pnode(0.0,7.5){D4} \pnode(-5.2,3.2){C1} \ncline[doubleline=true]{->}{D1}{L1} \ncline[doubleline=true]{->}{D2}{L2} \ncline[doubleline=true]{->}{D3}{L3}
\ncline[doubleline=true]{->}{D4}{L4} \ncline[doubleline=true]{->}{C1}{L1neg} \psset{arcangle=-25} \ncarc{->}{L4}{L2} \ncarc{->}{L3}{L1} \psset{arcangle=-35} \ncarc{->}{L4}{L1} \psset{linestyle=dotted,linewidth=1.5pt} \psline(-6.5,2.25)(6.5,2.25) \psline(-6.1,3.75)(6.1,3.75) \psline(-5.7,5.25)(5.7,5.25) \psline(-5.3,6.75)(5.3,6.75) \psline(-4.9,8.25)(4.9,8.25) \uput[0](6.5,2.25){$H_{0,0}$} \uput[0](6.1,3.75){$H_{0,1}$} \uput[0](5.7,5.25){$H_{0,2}$} \uput[0](5.3,6.75){$H_{0,3}$} \uput[0](4.9,8.25){$H_{0,4}$} \end{pspicture} \end{center} \caption{A conflict graph and a series decomposition. The solid lines and arcs indicate edges that may or may not be present. The notations $C^{\prime\prime}_1$ and $D^{\prime\prime}_i$ indicate zero or more literals, and the double lines indicate an edge from each literal in the set. The dotted lines indicate cuts, and thereby the sets $H_{0,i}$ in the series decomposition. Namely, the set~$H_{0,i}$ contains the nodes below the corresponding dotted line.} \label{fig:conflictDecomp} \end{figure} We can now finish the proof that \textsc{DLL-L-UP} simulates regWRTI. \begin{thm}\label{the:DLLsimrWRTI} Suppose that $F$ has a regWRTI proof of size~$s$. Then there is an execution of the non-greedy {\rm \textsc{DLL-L-UP}} algorithm with the input \hbox{\rm ($F,\varnothing$)} that makes $<s$~recursive calls. \end{thm} \proof Let $T$~be a regWRTI refutation of~$F$. The \textsc{DLL-L-UP} algorithm works by traversing the proof tree~$T$ in a depth-first, left-to-right order. At each non-input-derived node~$u$ of~$T$, labeled with a clause~$C$, the resolution variable for that clause is chosen as the branching variable~$x$, and the variable~$x$ is assigned the value 1 or~0, corresponding to the label on the edges coming into~$u$. By part~b.\ of \pref{the:regWprops}, the clause~$C$ is falsified by the assignment~$\alpha$.
At each input-derived node of~$T$, the \textsc{DLL-L-UP} algorithm learns the clauses in the input subproof above~$u$ by using the conflict graph and series decomposition given by \pref{lem:lemmaB}. Since the \textsc{DLL-L-UP} search cannot find a satisfying assignment, it must terminate after traversing the (non-input) nodes in the regWRTI refutation tree. The number of recursive calls will equal twice the number of non-input-derived nodes of~$T$, which is less than~$s$. \qed \section{Generalized DLL with clause learning}\label{sec:dll-learn} \subsection{The algorithm DLL-Learn} This section presents a new formulation of DLL with learning called \textsc{DLL-Learn}. This algorithm differs from \textsc{DLL-L-UP} in two important ways. First, unit propagation is no longer used explicitly (although it can be simulated). Second, the \textsc{DLL-Learn} algorithm uses more information that arises during the DLL search process, namely, it can infer clauses by resolution at each node in the search tree. This makes it possible for \textsc{DLL-Learn} to simulate regular resolution trees with full lemmas; more specifically, \textsc{DLL-Learn} is equivalent to regWRTL. The {\textsc{DLL-Learn}}{} algorithm is very similar to the pool resolution system introduced by Van Gelder~\cite{VanGelder2005}. Furthermore, our Theorem~\ref{the:DLL-Learn=regwRTL} is similar to results obtained by Van Gelder for pool resolution. Our constructions differ mostly in that we use w-resolution in place of the degenerate resolution inference of Van Gelder~\cite{VanGelder2005}. Loosely speaking, Van Gelder's degenerate resolution inference is a method of allowing resolution to operate on any two clauses without any weakening. Conversely, our w-resolution is a method for allowing resolution to operate on any two clauses, but with the maximum reasonable amount of weakening. The idea of \textsc{DLL-Learn} is to extend DLL so that it can learn a new clause~$C$ at each node in the search tree. 
As usual, the new clause will satisfy $F \equiv F\cup\{C\}$. At leaves, \textsc{DLL-Learn} does not learn a new clause, but marks a preexisting falsified clause as ``new''. At internal nodes, after branching on a variable~$x$ and making two recursive calls, the \textsc{DLL-Learn} algorithm can use w-resolution to infer a new clause~$C$ from the two identified new clauses $C_0$ and~$C_1$ returned by the recursive calls. Since $x$ need not occur in $\var(C_0)$ or~$\var(C_1)$, $C$~is obtained by a w-resolution instead of resolution. The \textsc{DLL-Learn} algorithm shown in \pref{fig:dll-learn} uses non-greedy detection of contradictions. Namely, the ``{\tt optionally do}'' on line~2 of \pref{fig:dll-learn} allows the algorithm to continue to branch on variables even if the formula is already unsatisfied. This feature is needed for a direct proof of \pref{the:DLL-Learn=regwRTL}. In addition, it could be helpful in an implementation of the algorithm: Think of a call of \textsc{DLL-Learn}$(F,\alpha)$ such that $\rest{F}{\alpha} = 0$ and suppose that all of the falsified clauses $C \in F$ are very large and thus undesirable to learn. It might, for example, be the case that $\rest{F}{\alpha}$ contains two conflicting unit clauses $\rest{C_0}{\alpha}=\{x\}$ and $\rest{C_1}{\alpha}= \{\neg x\}$, where $C_0$ and~$C_1$ are small. In that case, it could be better to branch on the variable~$x$ and to learn the resolvent of $C_0$ and~$C_1$. There is one situation where it is not optional to execute lines 3-4; namely, if $\alpha$~is a total assignment, i.e., has assigned values to all variables, then the algorithm must do lines 3-4. Note that it is possible to remove $C_0$ and~$C_1$ from $F$ in line~13 if they were previously learned.
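The pseudocode of \pref{fig:dll-learn} translates almost line by line into a runnable sketch. The following Python fragment is our own illustration, not the authors' implementation: clauses are frozensets of signed integers (a negative integer denoting a negated variable), and we implement the greedy variant in which a conflict is reported as soon as $\rest{F}{\alpha} = 0$, i.e.\ the ``optionally do'' of line~2 is always taken.

```python
def restrict(C, alpha):
    """Clause C under assignment alpha: True if satisfied,
    otherwise the frozenset of remaining (unassigned) literals."""
    rest = set()
    for l in C:
        v = alpha.get(abs(l))
        if v is None:
            rest.add(l)
        elif (l > 0) == bool(v):
            return True
    return frozenset(rest)

def lit(x, val):
    """The literal x^val: x itself if val == 1, its negation otherwise."""
    return x if val == 1 else -x

def dll_learn(F, alpha):
    """Greedy DLL-Learn sketch.  Returns (F', beta, tagged): beta is a
    satisfying assignment or 'UNSAT'; tagged is the clause tagged as
    'new' (lines 3 and 13 of the pseudocode) in the UNSAT case."""
    open_clauses = []
    for C in F:
        r = restrict(C, alpha)
        if r is True:
            continue
        if not r:                      # conflict: F restricted by alpha is 0
            return F, 'UNSAT', C       # tag a falsified clause (line 3)
        open_clauses.append(r)
    if not open_clauses:               # F restricted by alpha is 1
        return F, dict(alpha), None
    x = min(abs(l) for C in open_clauses for l in C)   # line 5, a fixed choice
    eps = 1
    G, beta, C0 = dll_learn(F, {**alpha, x: eps})
    if beta != 'UNSAT':
        return G, beta, None
    H, gamma, C1 = dll_learn(G, {**alpha, x: 1 - eps})
    if gamma != 'UNSAT':
        return H, gamma, None
    # line 11: w-resolve the two new clauses on x (x need not occur in them)
    C = (C0 - {lit(x, 1 - eps)}) | (C1 - {lit(x, eps)})
    return H | {C}, 'UNSAT', C         # lines 12-14: learn C and tag it
```

On the unsatisfiable set $\{\{x_1\}, \{\neg x_1, x_2\}, \{\neg x_2\}\}$ this sketch learns the unit clause $\{\neg x_1\}$ in the first branch and finally w-resolves it with $\{x_1\}$ to obtain the empty clause.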
Additionally, in an implementation of \textsc{DLL-Learn} it could be helpful to tag~$C_i$ as the new clause in~$H$ in line~13 if $C_i \subseteq C$ for an $i\in\{0,1\}$ instead of learning~$C$ --- this would be essentially equivalent to using Van Gelder's degenerate resolution instead of w-resolution. \begin{figure}[htbp] \begin{center} \begin{minipage}{1.0\linewidth} \tt \small \begin{tabbing} 123\=123455\=12345\=12345\=12345\=12345\=12345\=12345\=12345\=12345\=12345\=12345 \kill \>{\sc DLL-Learn}($F,\alpha$)\\ \>1\>if $\rest{F}{\alpha} = 1$ then return ($F,\alpha$) \\ \>2\>if $\rest{F}{\alpha} = 0$ then optionally do \>\>\>\>\>\>\>\>\> \\ \>3\>\>tag a $C \in F$ with $\rest{C}{\alpha}=0$ as the new clause\\ \>4\>\>return ($F,$\hspace{0.2em}UNSAT)\\ \>5\>choose $x \in \var(F) \setminus \dom(\alpha)$ and a value $\epsilon \in \{0,1\}$\\ \>6\>($G,\beta$)$\leftarrow${\sc DLL-Learn}($F,\alpha \cup \{(x,\epsilon)\})$\\ \>7\>if $\beta \neq $ UNSAT then return ($G,\beta$)\\ \>8\>($H,\gamma$)$\leftarrow${\sc DLL-Learn}($G,\alpha \cup \{(x,1-\epsilon)\})$\\ \>9\>if $\gamma \neq$ UNSAT then return ($H,\gamma$)\\ \>10\>select the new $C_0 \in G$ and the new $C_1 \in H$\\ \>11\>$C \leftarrow (C_0 - \{x^{1-\epsilon}\}) \cup (C_1 - \{x^\epsilon\})$\\ \>12\>$H \leftarrow H \cup \{C\}$ \>\>\>\>\>\>\>\>\> -- {\sl learn a clause}\\ \>13\>tag $C$ as the new clause in~$H$. \\ \>14\>return ($H,$\hspace{0.2em}UNSAT) \end{tabbing} \end{minipage} \caption{DLL with a generalized learning.} \label{fig:dll-learn} \end{center} \end{figure} It is easy to verify that, at any point in the \textsc{DLL-Learn} algorithm, when a clause~$C$ is tagged as new, then $\rest C \alpha = 0$. There is a straightforward, and direct, translation between executions of the \textsc{DLL-Learn} search algorithm on input $(F,\varnothing)$ and regWRTL proofs of~$F$. An execution of \textsc{DLL-Learn}($F,\varnothing$) can be viewed as traversing a tree in depth-first, left-to-right order. 
If there are $s-1$ recursive calls to \textsc{DLL-Learn}, the tree has $s$~nodes. Each node of the search tree is labeled with the clause tagged in the corresponding call to \textsc{DLL-Learn}. Thus, leaves of the tree are labeled with clauses that either are from~$F$ or were learned earlier in the tree. The clause on an internal node of the tree is inferred from the clauses on the two children using w-resolution with respect to the branching variable. Finally, the clause~$C$ labeling the root node, where $\alpha = \varnothing$, must be the empty clause, since $\alpha$~must falsify~$C$. In this way the search algorithm describes precisely a regWRTL proof tree. Conversely, any regWRTL refutation of~$F$ corresponds exactly to an execution of \textsc{DLL-Learn}($F,\varnothing$). This translation between \textsc{DLL-Learn} and regWRTL proof trees gives the following theorem. \begin{thm}\label{the:DLL-Learn=regwRTL} Let $F$~be a set of clauses. There exists a regWRTL refutation of~$F$ of size~$s$ if and only if there is an execution of \textsc{DLL-Learn}$(F,\varnothing)$ that performs exactly $s-1$ recursive calls. \end{thm} It follows as a corollary of Theorems \ref{the:hierarchy} and~\ref{the:DLL-Learn=regwRTL} that \textsc{DLL-Learn} can polynomially simulate \textsc{DLL-L-UP}. \section{Variable Extensions}\label{sec:varexp} This section introduces the notion of a \emph{variable extension} of a CNF formula. A variable extension augments a set~$F$ of clauses with additional clauses such that the modified formula $\ve F$~is satisfiable if and only if $F$ is satisfiable. Variable extensions will be used to prove that regWRTI proofs can simulate resolution dags, in the sense that if there is an RD refutation of~$F$, then there is a polynomial size regWRTI refutation of~$\ve F$. Hence, \textsc{DLL-Learn} and the non-greedy version of \textsc{DLL-L-UP} can simulate full (non-regular) resolution in the same sense.
Our definition of variable extensions is inspired by the proof trace extensions of Beame et al.~\cite{BeameKautzSabharwal2004} that were used to separate DLL with clause learning from regular resolution dags. A similar construction was used by Hertel et~al.~\cite{BHPvG:clauselearn} to show that pool resolution can simulate full resolution. Our results strengthen and extend the prior results by applying directly to regWRTI proofs. More importantly, in contrast to proof trace extensions, variable extensions do not depend on the size of a (possibly unknown) resolution proof but only on the number of variables in the formula. \begin{defi} Let $F$ be a set of clauses and $|\var(F)|=n$. The set of \emph{extension variables} of~$F$ is $\ev{F} = \{q,p_1, \ldots, p_n\}$, where $q$ and~$p_i$ are new variables. The \emph{variable extension} of~$F$ is the set of clauses \[ \ve{F} ~=~ F \cup \big\{ \{q, \bar l\}:l \in C \in F\big\} \cup \big\{\{p_1,p_2, \ldots, p_n \}\big\}. \] \end{defi} Obviously $\ve F$ is satisfiable if and only if~$F$ is. Furthermore, $|\ve F| = O(|F|)$. Suppose that $G$ is a resolution dag (RD) proof from~$F$. We can reexpress~$G$ as a sequence of (derived) clauses $C_1,C_2,\ldots, C_t$ which has the following properties: (a)~$C_t$~is the final clause of~$G$, and (b)~each $C_i$ is inferred by resolution from two clauses $D$ and~$E$, where each of $D$ and~$E$ either is in~$F$ or appears earlier in the sequence as $C_j$ with $j<i$. Basically, the sequence is an ordinary resolution derivation, but with the clauses from~$F$ omitted. \begin{lem}\label{lem:pte_trick_helper} Suppose that $D,E\vdash_x C$. Then, there is an input resolution proof tree~$T_C$ of the clause~$\{q\}$ from $\ve F \cup \{D,E\}$ such that $C$~appears in~$T_C$ and such that $|T_C| = 2\cdot |C|+3$. \end{lem} \proof The proof~$T_C$ starts by resolving $D$ and~$E$ to yield~$C$. It then resolves successively with the clauses $\{q,\overline l\}$, for $l\in C$, to derive~$\{q\}$.
\qed \begin{thm}\label{the:pte_trick} Let $F$ be a set of clauses, $n = |\var(F)|$, and let $C$~be a clause. Suppose that $G$ is a resolution dag proof of~$C$ from~$F$ of size~$s$. Then, there is a regWRTI proof~$T$ of~$C$ from~$\ve F$ of size $\le 2s\cdot(d+2)+1$ where $d = \max \{ |D| : D\in G \}\le n$. \end{thm} \proof Let $C_1,\ldots, C_t$ be a sequence of the derived clauses in~$G$ as above. Without loss of generality, $t< 2^n$ since $F$~also has a regular resolution tree refutation, and this has depth at most~$n$, and thus has $<2^n$ internal nodes. Let $T^\prime$~be a binary tree with $t$~leaves and of height~$h = \lceil \log_2 t \rceil \le n$. For each node~$u$ in~$T^\prime$, let $l(u)$~be the level of~$u$ in~$T^\prime$, namely, the number of edges between $u$ and the root. Label $u$ with the variable~$p_{l(u)}$. Also, label every node~$u$ in~$T^\prime$ with the clause~$\{q\}$. $T^\prime$ will form the middle part of a regWRTI proof: each clause $\{q\}$ at level $i$ is inferred by w-resolution from its two children clauses (also equal to~$\{q\}$) with respect to the variable~$p_i$. Now, we expand $T^\prime$ into a regWRTI proof tree~$T^{\prime\prime}$. For this, for $1\le i\le t$, we replace the $i$-th leaf of~$T^\prime$ with a new subproof~$T_{C_i}$ defined as follows. Letting $C_i$ be as above, let $D_i$ and~$E_i$ be the two clauses from which $C_i$~is inferred in~$G$. Then replace the $i$-th leaf of~$T^\prime$ by the input proof~$T_{C_i}$ from \pref{lem:pte_trick_helper} which contains~$C_i$ and ends with the clause~$\{q\}$. Note that each of $D_i$ and~$E_i$ either is in~$F$ or appeared as an input clause in a proof, $T_{D_i}$ or~$T_{E_i}$, inserted at an earlier leaf of~$T^\prime$. Therefore $T^{\prime\prime}$ is a valid regWRTI proof of~$\{q\}$ from~$\ve F$. Since there are at most $s-1$ internal nodes in~$T^\prime$ and each $T_{C_i}$ has size $\le 2d+3$, $T^{\prime\prime}$ has size at most $(s-1) + s\cdot(2d+3)$.
Finally, we form a regWRTI proof of~$C$ by modifying~$T^{\prime\prime}$ by adding a new root labeled with the clause~$C$ and the resolution variable~$q$. Let the left child of this new root be the root of~$T^{\prime\prime}$, and let the right child be a new node labeled also with~$C$. (This is permissible since $C$~is input-derived in~$T^{\prime\prime}$.) Label the left edge coming to the new root with the literal~$\overline q$, and the right edge with the literal~$q$. This makes $C$ inferred from $\{q\}$ and~$C$ by w-resolution with respect to~$q$. $T$~is a valid regWRTI proof of size at most $s+1+s\cdot(2d+3) = 2s\cdot(d+2)+1$. \qed Since \textsc{DLL-L-UP} and \textsc{DLL-Learn} simulate regWRTI, \pref{the:pte_trick} implies that these two systems p-simulate full resolution by the use of variable extensions: \begin{cor} Suppose that $F$ has a resolution dag refutation of size~$s$. Then both \textsc{DLL-L-UP} and \textsc{DLL-Learn}, when given $\ve F$ as input, have executions that return \texttt{UNSAT} after at most $p(s)$ recursive calls, for some polynomial~$p$. \end{cor} We now consider some issues about ``naturalness'' of proofs based on resolution with lemmas. Beame et al.~\cite{BeameKautzSabharwal2004} defined a refutation system to be natural provided that, whenever $F$ has a refutation of size~$s$, then $\rest F \alpha$ has a refutation of size at most~$s$. We need a somewhat relaxed version of this notion: \begin{defi} Let $\mathcal R$ be a refutation system for sets of clauses. The system~$\mathcal R$ is {\em p-natural} provided there is a polynomial~$p(s)$ such that, whenever a set~$F$ has an $\mathcal R$-refutation of size~$s$, and $\alpha$~is a restriction, then $\rest F \alpha$ has an $\mathcal R$-refutation of size $\le p(s)$. \end{defi} The next proposition is well-known. \begin{prop} Resolution dags (RD) and regular resolution dags (regRD) are natural proof systems.
\end{prop} As a corollary to Theorem~\ref{the:pte_trick} we obtain the following theorem. \begin{thm}\label{the:equivNatural} \hspace*{1em} \begin{enumerate}[\em(a)] \item regWRTI is equivalent to RD if and only if regWRTI is p-natural. \item regWRTL is equivalent to RD if and only if regWRTL is p-natural. \end{enumerate} \end{thm} \proof Suppose that $\text{regWRTI}\equiv \text{RD}$. Then, since RD is natural, we have immediately that regWRTI is p-natural. Conversely, suppose that regWRTI is p-natural. By \pref{the:hierarchy}, RD p-\penalty10000simulates regWRTI. So it suffices to prove that regWRTI p-simulates RD. Let $F$~have an RD refutation of size~$s$. By \pref{the:pte_trick}, $\ve F$ has a regWRTI proof of size~$2s(s+2)+1$. Let $\alpha$~be the assignment that assigns the value~$1$ to each of the extension variables $q$ and $p_1,\ldots,p_n$. Since $\rest {\ve F} \alpha$ is~$F$ and since regWRTI is p-natural, $F$~has a regWRTI proof of size at most $p(2s(s+2)+1)$. This proves that regWRTI p-simulates RD, and completes the proof of part~(a). The proof of part~(b) is similar. \qed Theorem~\ref{the:equivNatural} is stated for the equivalence of systems with RD. It could also be stated for {\em p-equivalence}, but then one needs an ``effective'' version of p-natural, where the $\mathcal R$-refutation of~$\rest F \alpha$ is computable in polynomial time from $\alpha$ and an $\mathcal R$-refutation of~$F$.
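The variable extension $\ve F$ used throughout this section is entirely mechanical to compute. As a small side illustration (our own, not from the paper), with clauses as frozensets of signed integers, variables of $F$ numbered $1,\ldots,n$, and the fresh variables chosen as $q = n+1$ and $p_i = n+1+i$:

```python
def variable_extension(F):
    """ve(F): F, plus a clause {q, ~l} for every literal l of every
    clause of F, plus the long clause {p_1, ..., p_n}.  Variables of F
    are assumed to be 1..n; q is n+1 and p_i is n+1+i (our numbering)."""
    n = max(abs(l) for C in F for l in C)
    q = n + 1
    long_clause = frozenset(range(n + 2, 2 * n + 2))   # p_1, ..., p_n
    side = {frozenset({q, -l}) for C in F for l in C}
    return set(F) | side | {long_clause}
```

For $F = \{\{x_1,\neg x_2\},\{x_2\}\}$ this yields the six clauses $F \cup \{\{q,\neg x_1\}, \{q,x_2\}, \{q,\neg x_2\}\} \cup \{\{p_1,p_2\}\}$, matching the definition and the bound $|\ve F| = O(|F|)$.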
\section{Introduction} Modal logic has good algorithmic and model-theoretic properties. It is well-known that formulas of propositional modal logics under Kripke semantics can be naturally encoded in first-order logic, using the so-called {\em standard translation}. But since first-order logic is not so well-behaved, in particular the (finite) satisfiability problems are undecidable, it was natural to ask what the right image of the standard translation is and ``Why is modal logic so robustly decidable?'' (the last question asked literally by Vardi in \cite{Var97}). In order to briefly review some of the answers given, let us have a short look at the standard translation. One assigns to every propositional atom $A$ a unary relation $A(x)$, which is understood as ``$A$ is true in world $x$,'' and each reachability relation $R$ corresponds to a binary relation $R(x,y)$. This assignment is extended inductively to arbitrary modal formulas: for every modal formula $\phi$, one inductively defines a first-order formula $tr(\phi,x)$ which expresses that ``$\phi$ is true in world $x$'', where the boxes and diamonds are handled by explicit first-order quantification over $R$-accessible points (cf.~\cite{HandbookML}). For example, the modal formula $P \wedge \Diamond ( Q \vee \Box \neg P)$ translates into the following formula: \begin{equation}\label{eq:eg1} Px \wedge \exists y (Rxy \wedge (Qy \vee \forall z (Ryz \rightarrow \neg Pz))) \end{equation} One can observe that the formulas obtained under the standard translation follow some patterns: (i) variables appear in some fixed order and no rescoping of variables occurs, (ii) quantifiers are relativized by atomic formulas, (iii) negation is applied only to subformulas with a single free variable. These patterns motivated the studies of corresponding fragments of first-order logic defined by appropriately restricting the syntax.
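The inductive definition of $tr(\phi,x)$ is easy to make executable. The following Python sketch is our own illustration: modal formulas are nested tuples (an encoding of our choosing), the output is a plain ASCII rendering of first-order syntax (\texttt{E}/\texttt{A} for $\exists$/$\forall$, a tilde for $\neg$), and a fresh variable is drawn from a fixed list at each modal step.

```python
VARS = "xyzuvw"  # pool of first-order variables; index i picks VARS[i]

def tr(phi, i=0):
    """Standard translation tr(phi, x_i) of a modal formula to a string.
    Modal formulas are tuples: ('atom', 'P'), ('not', f), ('and', f, g),
    ('or', f, g), ('dia', f), ('box', f)."""
    x = VARS[i]
    op = phi[0]
    if op == 'atom':
        return f"{phi[1]}{x}"
    if op == 'not':
        return f"~{tr(phi[1], i)}"
    if op in ('and', 'or'):
        conn = ' & ' if op == 'and' else ' | '
        return f"({tr(phi[1], i)}{conn}{tr(phi[2], i)})"
    y = VARS[i + 1]                    # quantify over an R-successor
    if op == 'dia':
        return f"E{y}(R{x}{y} & {tr(phi[1], i + 1)})"
    return f"A{y}(R{x}{y} -> {tr(phi[1], i + 1)})"   # 'box'
```

Applied to $P \wedge \Diamond(Q \vee \Box\neg P)$, the sketch reproduces the shape of formula~\eqref{eq:eg1} with variables $x,y,z$.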
Moreover, as observed by Gabbay \cite{gabbay1981}, by properly reusing variables one can restrict the number of variables needed for the standard translation to {\em two}. For example, in the previous formula we could replace the variable $z$ by $x$, obtaining: \begin{equation}\label{eq:eg2} {Px \wedge \exists y (Rxy \wedge (Qy \vee \forall x (Ryx \rightarrow \neg Px)))}. \end{equation} This observation is crucial, as already the three-variable fragment of first-order logic is undecidable, even for relational signatures featuring only unary and binary predicates \cite{KMW62}. In the next section we introduce the fragments of first-order logic defined by the above-mentioned restrictions more formally and we briefly characterize their fundamental properties in terms of the {\em finite} and {\em tree model properties} and in terms of decidability in finite and unrestricted models. Throughout the paper we refer to these languages as the {\em base languages}. In Section \ref{sec:extensions} we review the main results concerning satisfiability and finite satisfiability of some popular extensions of the base fragments. In Section \ref{sec:finsat} we sketch a few approaches to proving finite satisfiability for those fragments that do not enjoy the finite model property. \section{Base languages} We define the base languages assuming relational signatures not containing any constants or function symbols. \begin{definition} {\em The two-variable fragment}: By the $k$-variable fragment of a logic $\cal L$, denoted ${\cal L}^k$, we mean the set of formulas of $\cal L$ featuring at most $k$ distinct variables. In particular \mbox{$\mbox{\rm FO}^k$}{} denotes the set of all first-order formulas with at most $k$ variables. The fragment \mbox{$\mbox{\rm FO}^3$}{} is already undecidable \cite{KMW62}; therefore, we are most interested in the two-variable fragment, \mbox{$\mbox{\rm FO}^2$}.
\end{definition} \begin{definition}{\em The fluted fragment} \cite{purdy:quine76b}: Let $\bar{x}_\omega= x_1, x_2, \ldots$ be a fixed sequence of variables. We define the sets of formulas $\mbox{$\mbox{\rm FL}$}^{[k]}$ (for $k \geq 0$) by structural induction as follows: (i) any atom $\alpha(x_\ell, \ldots, x_k)$, where $x_\ell, \dots, x_k$ is a contiguous subsequence of $\bar{x}_\omega$, is in $\mbox{$\mbox{\rm FL}$}^{[k]}$; (ii) $\mbox{$\mbox{\rm FL}$}^{[k]}$ is closed under Boolean combinations; (iii) if $\phi$ is in $\mbox{$\mbox{\rm FL}$}^{[k+1]}$, then $\exists x_{k+1} \phi$ and $\forall x_{k+1} \phi$ are in $\mbox{$\mbox{\rm FL}$}^{[k]}$. The set of \textit{fluted formulas} is defined as \smash{$\mbox{$\mbox{\rm FL}$} = \bigcup_{k\geq 0} \mbox{$\mbox{\rm FL}$}^{[k]}$}. A \textit{fluted sentence} is a fluted formula with no free variables, i.e.~an element of $\mbox{$\mbox{\rm FL}$}^{[0]}$. Thus, when forming Boolean combinations in the fluted fragment, all the combined formulas must have as their free variables some suffix of some prefix $x_1, \dots, x_k$ of $\bar{x}_\omega$; and when quantifying, only the last variable in this sequence may be bound. This is illustrated by the fluted formula in~\eqref{eq:eg1}. \end{definition} \begin{definition}{\em The guarded fragment} \cite{ABN98}, \mbox{$\mbox{\rm GF}$}{}, is defined as the least set of formulas such that: (i) every atomic formula belongs to \mbox{$\mbox{\rm GF}$}{}; (ii) \mbox{$\mbox{\rm GF}$}{} is closed under logical connectives $\neg, \vee, \wedge, \rightarrow$; and (iii) quantifiers are appropriately relativised by atoms.
More specifically, in \mbox{$\mbox{\rm GF}$}{}, condition (iii) is understood as follows: if $\phi$ is a formula of \mbox{$\mbox{\rm GF}$}{}, $\alpha$ is an atomic formula featuring all the free variables of $\phi$, and $\bar{x}$ is any sequence of variables in $\alpha$, then the formulas ${\forall} {\bar{x}}(\alpha \rightarrow \phi)$ and ${\exists} {\bar{x}}(\alpha \wedge \phi )$ belong to \mbox{$\mbox{\rm GF}$}{}. In this context, the atom $\alpha$ is called a {\em guard}. The equality symbol, when present in the signature, is also allowed in guards. \end{definition} \begin{definition} {\em The unary negation fragment} \cite{StC13}, \mbox{$\mbox{\rm UNF}$}{}, consists of formulas in which the use of negation is restricted to subformulas with at most one free variable. More precisely, \mbox{$\mbox{\rm UNF}$}{} is defined as the least set of formulas such that: (i) every atomic formula of the form $R(\bar{x})$ or $x=y$ belongs to \mbox{$\mbox{\rm UNF}$}{}; (ii) \mbox{$\mbox{\rm UNF}$}{} is closed under the logical connectives $\vee$, $\wedge$ and under existential quantification; (iii) if $\phi(x)$ is a formula of \mbox{$\mbox{\rm UNF}$}{} featuring no free variables besides (possibly) $x$, then $\neg \phi(x)$ belongs to \mbox{$\mbox{\rm UNF}$}. \end{definition} The base languages are incomparable in terms of expressive power. In particular, the formula $x\neq y$ is in \mbox{$\mbox{\rm FO}^2$}{} but not in \mbox{$\mbox{\rm UNF}$}. Formula (1) lies in the intersection of \mbox{$\mbox{\rm FO}^3$}{} and \mbox{$\mbox{\rm FL}$}{}, while (2) is not fluted. Both formulas are guarded and in \mbox{$\mbox{\rm UNF}$}{} (the universal quantifier is used as a shortcut in a standard way).
The property: \begin{align}\label{eq:eg3} & \mbox{ \begin{minipage}{10cm} \begin{tabbing} No lecturer introduces any professor to every student\\ $\forall x_1 ($\=$\mbox{lecturer}(x_1) \rightarrow$ $\neg \exists x_2 ($\=$\mbox{prof}(x_2) \wedge$ $\forall x_3 (\mbox{student}(x_3) \rightarrow \mbox{intro}(x_1,x_2,x_3))))$ \end{tabbing} \end{minipage} } \end{align} belongs to \mbox{$\mbox{\rm FL}$}$^3$ but is neither two-variable, nor guarded, nor in \mbox{$\mbox{\rm UNF}$}. The property: \begin{align}\label{eq:eg4} & \mbox{ \begin{minipage}{10cm} \begin{tabbing} Some node lies on a cycle of length 4\\ $\exists x_1 \exists x_2(Ex_1x_2 \wedge \exists x_3 (Ex_2x_3 \wedge \exists x_4 (Ex_3x_4 \wedge \exists x_5 (Ex_4x_5 \wedge x_1=x_5))))$ \end{tabbing} \end{minipage} } \end{align} is in \mbox{\rm FO}$^5$ and in \mbox{$\mbox{\rm UNF}$}, but is neither fluted (the variables in the subformula $x_1=x_5$ do not match the fixed ordering $x_1,\ldots,x_5$) nor guarded (none of the atoms in the subformula $Ex_4x_5 \wedge x_1=x_5$ can be treated as a guard of the quantifier $\exists x_5$). In the sequel we are concerned with two versions of the classical decision problem. For a given logic $\cal L$, $\ensuremath{\textit{Sat}}(\cal L)$ is the problem to decide, given a formula $\phi$ of $\cal L$, if $\phi$ is satisfiable. Similarly, $\ensuremath{\textit{FinSat}}(\cal L)$ is the problem to decide, given a formula $\phi$ of $\cal L$, if $\phi$ is {\em finitely} satisfiable, i.e.~if it has a finite model. For first-order logic both problems are undecidable \cite{Tur37,Tra50,Tra63} and recursively inseparable \cite{Tra53}. For our base languages the problems are decidable thanks to the {\em finite model property} that we explain below. \subsection{Finite Model Property and Tree Model Property} We say that a logic $\cal L$ has the {\em finite model property} (FMP), if every satisfiable formula of $\cal L$ has a finite model.
If $\cal L$ has the finite model property, then the problems \ensuremath{\textit{Sat}}($\cal L$) and \ensuremath{\textit{FinSat}}($\cal L$) coincide. Moreover, if $\cal L$ is a subset of first-order logic having the finite model property, then \ensuremath{\textit{Sat}}($\cal L$) (=\ensuremath{\textit{FinSat}}($\cal L$)) is decidable. In many cases, the finite model property of some logic comes with a bound on the size of minimal models, from which a direct upper bound for the computational complexity of the corresponding satisfiability problem can be derived. (For a given formula it suffices to generate all possible structures within the given size bound and check if any of them satisfies the formula.) As already mentioned, all four of our base languages have the FMP, and hence are decidable. Concerning the bounds on the size of minimal models, \mbox{$\mbox{\rm FO}^2$}{} has the {\em exponential} model property, and this was the property used in \cite{GKV97} to obtain the tight upper bound on the complexity of the satisfiability problem. An algebraic proof of the finite model property for \mbox{$\mbox{\rm GF}$}{} can be found in \cite{AndrekaHN99}. In \cite{Gra99} FMP for \mbox{$\mbox{\rm GF}$}{} was shown via the extension property for partial automorphisms of Hrushovski, Herwig and Lascar. In the case of unbounded arities, the size of the minimal models that can be easily obtained from this construction is triply exponential in the size of the formula, and not optimal for deciding finite satisfiability. FMP for \mbox{$\mbox{\rm UNF}$}{} was shown by a reduction to the analogous result for modal logic (which has a very simple proof using filtration~\cite{FischerL79}). FMP was used to show decidability of the fluted fragment \cite{Pur96}; the complexity bounds for the bounded variable fragments are not yet tight and they correspond to the best known bounds on the size of minimal models.
Namely, the following is known \cite{P-HST16}: \bit \item \ensuremath{\textit{Sat}}(\mbox{$\mbox{\rm FL}$}) is non-elementary; \item \ensuremath{\textit{Sat}}(\mbox{$\mbox{\rm FL}$}$^{2k}$) is $k$-\textsc{NExpTime}-hard and \ensuremath{\textit{Sat}}(\mbox{$\mbox{\rm FL}$}$^{k}$) is in $k$-\textsc{NExpTime}, for all $k\geq 1$.\footnote{There is some very recent work in progress towards showing that \ensuremath{\textit{Sat}}(\mbox{$\mbox{\rm FL}$}$^{2k}$) is $k$-\textsc{NExpTime}-complete.} \eit Tight complexity bounds for the satisfiability problem (= finite satisfiability problem) for the remaining base languages are known: \bit \item \ensuremath{\textit{Sat}}(\mbox{$\mbox{\rm FO}^2$}) is \textsc{NExpTime}-complete \cite{GKV97}. \item \ensuremath{\textit{Sat}}(\mbox{$\mbox{\rm GF}$}) is 2\textsc{-ExpTime}-complete; \ensuremath{\textit{Sat}}(\mbox{$\mbox{\rm GF}$}$^k$) is \textsc{ExpTime}-complete for all $k\geq 2$ \cite{Gra99}. \item \ensuremath{\textit{Sat}}(\mbox{$\mbox{\rm UNF}$}) is 2\textsc{-ExpTime}-complete; the same holds for \ensuremath{\textit{Sat}}(\mbox{$\mbox{\rm UNF}$}$^k$) for all $k\geq 3$ \cite{StC13}. \eit The optimal complexity bounds for \ensuremath{\textit{Sat}}(\mbox{$\mbox{\rm GF}$}) have been obtained by a generalization of the tree model property, already known as an important tool from modal logic. We say that $\cal L$ has the {\em (generalised) tree model property}, TMP, iff every satisfiable $\phi\in \cal L$ has a tree (tree-like) model. The fragments \mbox{$\mbox{\rm FO}^2$}{} and \mbox{$\mbox{\rm FL}$}{} do not enjoy the tree model property, as they allow one to write formulas of the form $\forall x \forall y Rxy$, enforcing all elements of a model to be connected. Gr\"adel showed \cite{Gra99} that every formula of \mbox{$\mbox{\rm GF}$}{} with $k$ variables is satisfiable only if it has a model of bounded degree such that the Gaifman graph of this model has tree width at most $k+1$.
A similar property holds for \mbox{$\mbox{\rm UNF}$}~\cite{StC13}. Tree-like models allow the use of powerful tools. For example, in the $\mu$-calculus, we can interpret them in the monadic second order theory of the infinite tree and use Rabin's theorem (this reduction gives decidability but not good complexity) \cite{Rab69}. The proof of Rabin's theorem uses tree automata, and by constructing tree automata directly, one usually gets good algorithms. However, tree-like models are usually infinite, so TMP is not suitable for deciding the finite satisfiability problem. But it might help to improve the complexity bounds, when FMP can be shown independently. In the next sections we will concentrate on logics that do not enjoy FMP and where other techniques to decide finite satisfiability are applied. Before moving on, we want to remark on an important pre-processing phase used in the decision procedures for the (finite) satisfiability problems. \subsection{Normal forms} When designing algorithms for (finite) satisfiability in our base languages it is useful to restrict attention to formulas in certain {\em normal forms}. The precise notion depends on the logic, but in all cases the normal form formulas are obtained by iteratively substituting subformulas of the form $\exists y \psi$, for quantifier-free $\psi$, by atoms $R(\bar{y})$, where $R$ is a fresh predicate letter, $\bar{y}$ denotes the free variables of $\exists y \psi$, and adding appropriate definitions for $R$. Below we recall the corresponding lemmas for \mbox{$\mbox{\rm FO}^2$}{}, \mbox{$\mbox{\rm GF}$}{} and $\FLk{}$.
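The substitution step just described can be made concrete. The sketch below is our own illustration, on a home-grown tuple encoding of formulas: one bottom-up pass replaces every quantified subformula by a fresh atom over its free variables and records a defining biconditional (to be read universally closed). The actual normal-form constructions in the lemmas that follow are finer, tracking polarity and the exact shape of the added definitions.

```python
from itertools import count

def free_vars(phi):
    """Free variables of a formula in our tuple encoding:
    ('atom', name, vars), ('not', f), ('and'|'or', f, g),
    ('exists'|'forall', v, f)."""
    op = phi[0]
    if op == 'atom':
        return set(phi[2])
    if op == 'not':
        return free_vars(phi[1])
    if op in ('and', 'or'):
        return free_vars(phi[1]) | free_vars(phi[2])
    return free_vars(phi[2]) - {phi[1]}        # quantifier binds phi[1]

def rename_quantified(phi, defs, fresh):
    """One bottom-up pass: replace each quantified subformula by a fresh
    atom over its free variables and record a defining biconditional."""
    op = phi[0]
    if op == 'atom':
        return phi
    if op == 'not':
        return ('not', rename_quantified(phi[1], defs, fresh))
    if op in ('and', 'or'):
        return (op, rename_quantified(phi[1], defs, fresh),
                    rename_quantified(phi[2], defs, fresh))
    body = rename_quantified(phi[2], defs, fresh)  # body now quantifier-free
    atom = ('atom', f"R{next(fresh)}", tuple(sorted(free_vars(phi))))
    defs.append(('iff', atom, (op, phi[1], body)))
    return atom
```

For $\exists y\,(Rxy \wedge Qy)$ the pass returns the fresh atom $R_0(x)$ together with the single definition $R_0(x) \leftrightarrow \exists y\,(Rxy \wedge Qy)$.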
\begin{lemma}[\cite{GKV97}]\label{lem:FOnf} For every \mbox{$\mbox{\rm FO}^2$}{}-sentence $\phi$ one can construct in polynomial time an \mbox{$\mbox{\rm FO}^2$}{}-sentence $\phi'$ of the form: $$\phi':=\forall x \forall y \alpha \wedge \bigwedge_{i\in I} \forall x \exists y \beta_i,$$ where $\alpha$ and $\beta_i$ are quantifier-free such that $\phi' \models \phi$ and every model of $\phi$ can be expanded to a model of $\phi'$; moreover, if $n$ is the length of $\phi$, then $\phi'$ contains at most $n$ predicate symbols and has length $O(n \log n)$. \end{lemma} \begin{lemma}[\cite{Gra99}]\label{lem:GFnf} For every \mbox{$\mbox{\rm GF}$}{}-sentence $\phi$ one can construct in polynomial time a \mbox{$\mbox{\rm GF}$}{}-sentence $\phi'$ of the form: $$\phi':=\bigwedge_{j} \forall \bar{x} (\alpha_j(\bar{x})\rightarrow \vartheta_j(\bar{x})) \wedge \bigwedge_{i} \forall \bar{x} (\beta_i(\bar{x}) \rightarrow \exists \bar{y} (\gamma_i(\bar{x},\bar{y}) \wedge \psi_i(\bar{x},\bar{y})))$$ such that $\phi' \models \phi$ and every model of $\phi$ can be expanded to a model of $\phi'$. Here the $\alpha_j$, $\beta_i$, $\gamma_i$ are guards and the $\vartheta_j$, $\psi_i$ are quantifier-free; the length of $\phi'$ is linear in the length of $\phi$. \end{lemma} \begin{lemma}[\cite{P-HST16}]\label{lem:FL-normalform} Let $\phi$ be a $\FLk{m}$-sentence over a signature $\sigma$.
We can compute, in exponential time, a disjunction $\psi=\bigvee_{k} \psi_k$ over a signature $\sigma'$, where each $\psi_k$ is an $\FLk{m}$-sentence of the form $$\psi_k:=\bigwedge_{j} \forall \bar{x} (\alpha_j(\bar{x}) \rightarrow \forall {x'} \beta_j(\bar{x},x'))\wedge \bigwedge_{i} \forall \bar{x} (\gamma_i(\bar{x}) \rightarrow \exists {x'} \delta_i(\bar{x},x'))$$ such that $\psi \models \phi$ and every model of $\phi$ can be expanded to a model of $\psi$; moreover, if $n$ is the length of $\phi$, then each $\psi_k$ has length $O(n \log n)$, and $\sigma'$ consists of $\sigma$ together with some additional predicates of arity at most $m-1$. Here, in each conjunct $\bar{x}$ is a contiguous sequence $x_1\ldots x_l$ for some $l$ ($1\leq l<m$), $x'=x_{l+1}$, and all of the formulas $\alpha_j, \gamma_i \in \FLk{[l]}$ and $\beta_j,\delta_i\in \FLk{[l+1]}$ are quantifier-free. \end{lemma} Lemma \ref{lem:FL-normalform} seems weaker than Lemmas \ref{lem:FOnf} and \ref{lem:GFnf}, but when aiming at a complexity bound at or above \textsc{ExpTime}, it also allows one to restrict attention to formulas in normal form (one can consider the disjuncts of $\psi$ one by one and check if any of them is satisfiable). We also remark that every $\FLk{}$-formula over a signature $\sigma$ consisting of predicate symbols of arity at most $k$, when transformed to the fluted normal form, gives a formula in $\FLk{k}$. Thus, the fluted formulas obtained by the standard translation from modal logic after normalization belong to $\FLk{2}$. \subsection{Historical Remarks} The observation that modal logic can be seen as a fragment of the two-variable first-order logic was made in 1981 by Gabbay \cite{gabbay1981}. At that time it was known that \mbox{$\mbox{\rm FO}^2$}{} has the doubly exponential model property and is decidable in 2\textsc{-NExpTime}{}, as shown in 1975 by Mortimer \cite{Mor75}.
The {\em exponential} model property and, hence, tight complexity bounds for \mbox{$\mbox{\rm FO}^2$}{} were shown by Gr\"adel et al.~in 1997 \cite{GKV97}. In the same year Gr\"adel, Otto and Rosen published another article \cite{GOR99}, where they performed a test for robust decidability of \mbox{$\mbox{\rm FO}^2$}{}, studying its extensions obtained by adding operators corresponding to those used in modal logics. This test failed: most of the extensions turned out to be undecidable formalisms, and therefore \mbox{$\mbox{\rm FO}^2$}{} was not accepted as the {\em right} image of the standard translation of modal logic (more in the next section). One year later, in this context, Andr{\'e}ka, van Benthem and N\'{e}meti put forward the guarded fragment \cite{ABN98}. \mbox{$\mbox{\rm GF}$}{} does have the hoped-for nice properties, and has been widely accepted as a {\em better} proposal. This fragment inspired researchers over the past two decades and brought results having applications in other areas like description logics and database theory. \mbox{$\mbox{\rm UNF}$}{} is a young fragment, introduced by Segoufin and ten Cate in 2013 \cite{StC13} as an orthogonal (to \mbox{$\mbox{\rm GF}$}{}) generalisation of modal logic, that enjoys the same nice properties. An important additional property of \mbox{$\mbox{\rm UNF}$}{} is that it contains unions of conjunctive queries\footnote{A conjunctive query is an existentially quantified conjunction of atoms.}, a class very important in the field of databases. Hence, it is not surprising that \mbox{$\mbox{\rm UNF}$}{} and \mbox{$\mbox{\rm GF}$}{} have already been generalised to the {\em guarded negation fragment}, which retains the good properties of both \mbox{$\mbox{\rm UNF}$}{} and \mbox{$\mbox{\rm GF}$}{} \cite{BtCS15}.
The origins of the fluted fragment can be traced to a paper given by Quine to the 1968 {\em International Congress of Philosophy}~\cite{purdy:quine69}, in which the author defined what he called the {\em homogeneous $m$-adic formulas}. In these formulas, all predicates have the same arity $m$, and all atomic formulas have the same argument sequence $x_1, \dots, x_m$. The restriction that all predicates have the same arity is abandoned in~\cite{purdy:quine76b}, published in 1976. The history of discovering the decidability and complexity of \mbox{$\mbox{\rm FL}$}{} is complicated, and may be a reason why \mbox{$\mbox{\rm FL}$}{} has been curiously neglected in the context of our discussion. In particular, an earlier claim that \mbox{$\mbox{\rm FL}$}{} has the exponential model property \cite{purdy:purdy02} has only recently been disproved: $\mbox{$\mbox{\rm FL}$}$ does have the finite model property, and its satisfiability (= finite satisfiability) problem is decidable, but {\em not} elementary \cite{P-HST16}. \section{Extensions}\label{sec:extensions} Modal logic is a very weak formalism in terms of expressive power; however, numerous extensions of ML have been designed to overcome these limitations, leading to formalisms that still have good algorithmic properties. Such extensions can be defined by extending the language with new operators or by restricting the classes of frames by adding new axioms. In this section we look at the impact of similar extensions on our base languages. \subsection{Additional operators} We first survey the impact of adding transitive closure operators, (monadic) fixed-points or counting quantifiers that appear, respectively, in propositional dynamic logic and in temporal logics, in the $\mu$-calculus and in graded modal logics. Any of these additional operators implies the loss of FMP, which can be shown by writing {\em infinity axioms}, i.e.~satisfiable formulas that have only infinite models.
As an example, consider the $\mbox{$\mbox{\rm FO}^2$}$-formula with the transitive closure operator, TC, applied to a binary predicate symbol: \begin{equation}\label{eq:TC-infinite} \forall x \neg Rxx \wedge \forall x\exists y Rxy \wedge \forall x\forall y (\mbox{TC}(Rxy)\leftrightarrow Rxy). \end{equation} This formula is satisfiable and any model of the formula embeds a copy of the natural order relation. We can enforce essentially the same property using fixed-points: \begin{equation}\label{eq:FP-infinite} \forall x \exists y Rxy \wedge \forall x\forall y \left(Rxy\rightarrow [\mbox{lfp}_{W,x} (Ryx\rightarrow Wy)]x\right). \end{equation} Here the least fixed point is the set of points that have only finitely many $R$-predecessors. A modification of the above examples in the extension of \mbox{$\mbox{\rm FO}^2$}{} with {\em counting quantifiers,} $\mbox{C$^2$}$, can be written as follows: \begin{equation}\label{eq:fluted-infinity} \exists x \forall y \neg Ryx \wedge \forall x \exists y Rxy \wedge \forall x\exists ^{\leq 1} y Ryx. \end{equation} Gr\"adel et al.\ studied several extensions of \mbox{$\mbox{\rm FO}^2$}{}, in particular the extensions obtained by adding the transitive closure operator and (restricted) monadic fixed points. In \cite{GOR99} they showed that the extensions of \mbox{$\mbox{\rm FO}^2$}{} by either transitive closure (in fact, even by transitivity, cf.~next subsection) or fixed points lead to undecidability for both the satisfiability and the finite satisfiability problems. Decidability of the satisfiability problem for \mbox{$\mbox{\rm FO}^2$}{} with counting quantifiers came as something of a surprise and was shown independently in 1997 in \cite{GOR97} and \cite{PST97}. It was also shown that the size of a minimal finite model of a \mbox{C$^2$}{}-formula $\phi$ is at least doubly exponential in $|\phi|$ even when the counting quantifiers are only of the form $\exists^{=1}$.
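Formula (\ref{eq:fluted-infinity}) is indeed an infinity axiom: in a finite structure, seriality of $R$ forces at least as many edges as points, while the at-most-one-predecessor condition allows at most that many, so every point would need a predecessor, contradicting the first conjunct. The following brute-force sketch (our illustration, not part of the cited works) confirms the absence of models with at most four elements:

```python
from itertools import product

def fluted_infinity_has_model(n):
    """Brute-force search for a model of formula (eq:fluted-infinity),
    i.e.  exists x forall y ~Ryx  /\  forall x exists y Rxy
          /\  forall x (at most one y with Ryx),
    over the domain {0, ..., n-1}."""
    dom = range(n)
    pairs = [(a, b) for a in dom for b in dom]
    # Enumerate all binary relations R on the domain.
    for bits in product([False, True], repeat=n * n):
        R = {p for p, b in zip(pairs, bits) if b}
        no_pred = any(all((y, x) not in R for y in dom) for x in dom)
        serial = all(any((x, y) in R for y in dom) for x in dom)
        few_preds = all(sum((y, x) in R for y in dom) <= 1 for x in dom)
        if no_pred and serial and few_preds:
            return True
    return False
```

An analogous check works for formulas (\ref{eq:TC-infinite}) and (\ref{eq:FP-infinite}) once the extra operators are unfolded over a fixed finite domain.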
\textsc{NExpTime}-completeness of both \ensuremath{\textit{Sat}}{(\mbox{C$^2$})} and \ensuremath{\textit{FinSat}}{(\mbox{C$^2$})} was later proved by Pratt-Hartmann in \cite{PH05}. The situation with the guarded fragment was different: \mbox{$\mbox{\rm GF}$}{} extended with monadic fixed points is decidable and of the same complexity as the base \mbox{$\mbox{\rm GF}$}: see Gr\"adel and Walukiewicz \cite{GW99} for the satisfiability problem, and B\'ar\'any and Boja\'nczyk \cite{BaranyB12} for the finite satisfiability problem (note that \cite{GW99} was published in 1999 and \cite{BaranyB12} 13 years later). Similar properties hold for the unary negation fragment. Despite the loss of FMP, decidability and complexity are retained when the fragment is extended by (monadic) fixed point operators \cite{StC13}. To the best of our knowledge, extensions of \mbox{$\mbox{\rm UNF}$}{} by adding transitive closure or counting have not yet been studied. As for the other two extensions of \mbox{$\mbox{\rm GF}$}{}, adding either counting or transitive closure leads to undecidability. Special attention has therefore turned towards the two-variable guarded fragment, where counting quantifiers can be added at no additional cost. Also a decidable extension with restricted transitive closure has been identified in \cite{Michaliszyn09} (finite satisfiability remains open); however, the complexity jumps by one exponential in comparison with \mbox{$\mbox{\rm GF}^2$}{}. These results are summarized in Table~\ref{tab:extensions}.
\begin{table} \centering \begin{tikzpicture}[x=3.2cm,y=1.2cm] \draw (0,0) grid [step=1] (4,5); \draw[style=thick] (0,4) -- (4,4); \node at (1.5,4.5) {Transitive Closure}; \node at (2.5,4.5) {Fixed Points}; \node at (3.5,4.5) {Counting}; \node at (0.5,3.5) {\mbox{$\mbox{\rm FO}^2$}{}}; \node at (0.5,2.5) {\mbox{$\mbox{\rm GF}^2$}{}}; \node at (0.5,1.5) {\mbox{$\mbox{\rm GF}$}{}}; \node at (0.5,0.5) {\mbox{$\mbox{\rm UNF}$}{}}; \node at (1.5,3.5) {undecidable \cite{GOR99}}; \draw[dashed,gray] (1,2.5) -- (2,2.5); \node at (2.0,3.0) [below left,inner sep=1pt] {2\textsc{-ExpTime}{} \cite{Michaliszyn09}$^{*)}$}; \node at (1.5,2.0) [above,inner sep=5pt] {\ensuremath{\textit{FinSat}}{}:\quad?}; \node at (1.5,1.5) {undecidable \cite{GOR99}}; \node at (1.5,0.5) {?}; \node at (2.5,3.5) {undecidable \cite{GOR99}}; \node at (2.5,2.7) {\textsc{ExpTime}{}}; \node at (2.5,2.0) [above,inner sep=2pt] {\ensuremath{\textit{FinSat}}{}:\cite{BaranyB12} \ensuremath{\textit{Sat}}{}:\cite{GW99}}; \node at (2.5,1.7) {2\textsc{-ExpTime}{}}; \node at (2.5,1.0) [above,inner sep=2pt] {\ensuremath{\textit{FinSat}}{}:\cite{BaranyB12} \ensuremath{\textit{Sat}}{}:\cite{GW99}}; \node at (2.5,0.5) {2\textsc{-ExpTime}{} \cite{StC13}}; \node at (3.5,3.5) {\textsc{NExpTime}{} \cite{PH05}}; \node at (3.5,2.5) {\textsc{ExpTime}{} \cite{PH07}}; \node at (3.5,1.5) {undecidable \cite{Gra99}}; \node at (3.5,0.5) {?}; \draw (0,5) -- (1,4); \node at (1.0,5.0) [below left,inner sep=5pt] {Extension}; \node at (0.0,4.0) [above right,inner sep=5pt] {Logic}; \end{tikzpicture} \caption{Overview of principal extensions of the base languages. The complexity bounds are tight. Key to cells: if not indicated otherwise, the values apply to both \ensuremath{\textit{Sat}}{} and \ensuremath{\textit{FinSat}}{} of the corresponding extension.
$^{*)}$ only \ensuremath{\textit{Sat}}{} and subject to certain syntactic restrictions.}\label{tab:extensions} \end{table} In Table \ref{tab:extensions} we do not list $\FLk{}$ as extensions of this kind have not yet been properly studied. In~\cite{purdy:purdy99} the author considers what he calls {\em extended fluted logic}, in which, in addition to the usual predicate functors, we have equality, the ability to exchange arguments in binary atomic formulas and {\em functions} (the requirement that certain specified predicates be interpreted as the graph of a function---a property easily expressed using counting quantifiers). This extension evidently contains infinity axioms, e.g.~formulas equivalent to formula (\ref{eq:fluted-infinity}), hence the claim of \cite{purdy:purdy99} that this extension has FMP is false. It remains open whether $\FLk{}$ with counting, but without the other functors mentioned above, enjoys the finite model property and whether it is decidable. \subsection{Restricted classes of structures} In modal correspondence theory various conditions on the accessibility relations allow one to restrict the class of Kripke structures considered, e.g.~to transitive structures for the modal logic K4, transitive and reflexive structures for S4, or equivalence structures for the modal logic S5, and still obtain well-behaved fragments. In temporal logics, classes of structures with orderings modelling the flow of time are also very natural. The central condition here is {\em transitivity}. The transitivity axiom is a simple universal first-order formula: \begin{equation}\label{eq:trans-axiom} \forall x \forall y \forall z (Rxy\wedge Ryz \rightarrow Rxz) \end{equation} however, it is expressible in none of our base languages, because it contains three variables, has no guard, and the atom $Rxz$ is not fluted.
Moreover, adding transitivity axioms allows one to write sentences that have only infinite models (e.g.~replacing the last conjunct in the formula (\ref{eq:TC-infinite}) by the transitivity axiom (\ref{eq:trans-axiom})). The question therefore arises whether transitivity (or related properties like orderings or equivalence relations) could be added at reasonable computational cost. We have already seen that it cannot be done in general. In the past years various extensions of \mbox{$\mbox{\rm FO}^2$}{} and \mbox{$\mbox{\rm GF}^2$}{} were investigated in which certain distinguished binary relation symbols are declared to denote transitive relations, equivalence relations, or linear orderings. It turns out that the decidability of these fragments usually depends on the {\em number} of the distinguished relation symbols available. For three linear orders, both satisfiability and finite satisfiability are undecidable~\cite{Kie2011,Otto01}. Similarly for three equivalence relations~\cite{Kie05}. Turning to transitive relations, both satisfiability and finite satisfiability of \mbox{$\mbox{\rm FO}^2$}{} become undecidable in the presence of two transitive relations (or even in the presence of one transitive relation and one equivalence relation \cite{KT09}). The complexity bounds for such decidable extensions of \mbox{$\mbox{\rm FO}^2$}{} and \mbox{$\mbox{\rm GF}^2$}{} are in many cases identical, a notable exception being the case of two equivalences, which, for \mbox{$\mbox{\rm GF}^2$}{} yields a 2\textsc{-ExpTime}-complete logic~\cite{Kie05}, and for \mbox{$\mbox{\rm FO}^2$}{}---a 2\textsc{-NExpTime}-complete logic~\cite{KMP-HT14}. Table \ref{tabela} summarizes the above results.
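The infinity axiom obtained above by replacing the last conjunct of (\ref{eq:TC-infinite}) with the transitivity axiom asserts that $R$ is irreflexive, serial and transitive, which forces an infinite model (in a finite structure seriality yields a cycle, and transitivity plus a cycle yields $Rxx$). This can be checked by brute force on small domains (an illustrative sketch we add, not from the cited literature):

```python
from itertools import product

def irreflexive_serial_transitive_model(n):
    """Search for a finite model of
       forall x ~Rxx  /\  forall x exists y Rxy  /\  transitivity
    over the domain {0, ..., n-1}."""
    dom = range(n)
    pairs = [(a, b) for a in dom for b in dom]
    for bits in product([False, True], repeat=n * n):
        R = {p for p, b in zip(pairs, bits) if b}
        if any((x, x) in R for x in dom):
            continue  # violates irreflexivity
        if not all(any((x, y) in R for y in dom) for x in dom):
            continue  # violates seriality
        if all((x, z) in R for (x, y) in R for (y2, z) in R if y == y2):
            return True  # R is also transitive
    return False
```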
We do not list there extensions of \mbox{$\mbox{\rm GF}^2$}{} with linear orders, as linear orders actually destroy the guardedness of a logic: any pair of elements is guarded by a linear order, and the results from \mbox{$\mbox{\rm FO}^2$}{} with linear orders can be applied to \mbox{$\mbox{\rm GF}^2$}. \begin{table}[th] \begin{center} \small\hspace{0mm} \begin{tabular}{|c||c|c|c|c|}\hline & & \multicolumn{3}{|c|}{} \\ {\large{Logic} } & {{Special symbols}} & \multicolumn{3}{c|}{{Number of special symbols in the signature}} \\ \cline{3-5} & & 1 & 2 & 3 or more \\ \hline \hline & & & & \\ \Large{\bf \mbox{$\mbox{\rm GF}^2$}{}} & Transitivity & 2\textsc{-ExpTime} & undecidable & undecidable \\ & & \ensuremath{\textit{Sat}}{}: \cite{Kie05} \ensuremath{\textit{FinSat}}{}: \cite{KT07,KT-FinSatGFTG} & \cite{Kie05,Kaz06} & \cite{GMV99} \\ \cline{2-5} \cline{2-5} FMP & & & & \\ \textsc{ExpTime} & Equivalence &FMP, {\textsc{NExpTime}} & {2\textsc{-ExpTime}} & undecidable\\ \cite{Gra99} & &\cite{KO05}& \cite{KP-HT15} & \cite{KO05}\\ \hline\hline & & & &\\ {\Large{{ \mbox{$\mbox{\rm FO}^2$}{}}}} & Transitivity & {{{ in 2\textsc{-NExpTime}}}} \cite{ST13}$^{*)}$ & undecidable & undecidable \\ & & \ensuremath{\textit{FinSat}}{}: ? & \cite{Kie05,Kaz06} & \cite{GOR99} \\ \cline{2-5} FMP \cite{Mor75} & & & & \\ \textsc{NExpTime} & Linear order & \textsc{NExpTime} & \ensuremath{\textit{Sat}}{}: ? & undecidable\\ \cite{GKV97} & & \cite{Otto01} & {\sc ExpSpace}$^{**)}$ \cite{SZ12} & \cite{Otto01,Kie2011} \\ \cline{2-5} & & & & \\ & Equivalence & {FMP}, \textsc{NExpTime} & {2\textsc{-NExpTime}} & undecidable \\ & & {\cite{KO05}} & \cite{KMP-HT14} & \cite{KO05}\\ \cline{2-5} \hline \end{tabular} \end{center} \caption{{Overview of two variable logics over restricted classes of structures. Unless indicated otherwise, the complexity bounds are tight. 
Key to symbols: $^{*)}$ only general satisfiability and for a restricted variant; $^{**)}$ only finite satisfiability and subject to certain restrictions on signatures.}}\label{tabela} \end{table} For \mbox{$\mbox{\rm GF}^2$}{}, it also makes sense to study variants in which the distinguished predicates may appear only in guards \cite{GMV99}. In this case, \mbox{$\mbox{\rm GF}^2$}{} with {\em any} number of equivalences appearing only as guards remains \textsc{NExpTime}-complete \cite{Kie05}, while \mbox{$\mbox{\rm GF}^2$}{} with {\em any} number of transitive relations appearing only as guards is 2\textsc{-ExpTime}-complete \cite{ST01,Kie03} (tight complexity bounds for the finite satisfiability problem are established in \cite{KT-FinSatGFTG}). The properties of \mbox{$\mbox{\rm UNF}$}{} and $\FLk{}$ over restricted classes of structures have not yet been investigated. Obviously, decidability results for extensions of $\mbox{$\mbox{\rm FO}^2$}{}$ imply decidability of the same extensions of $\FLk{2}$ or \mbox{$\mbox{\rm UNF}$}$^2$. Also it is not difficult to see that the undecidability result for \mbox{$\mbox{\rm FO}^2$}{} with three equivalence relations can be adapted to the fluted case; hence the satisfiability and the finite satisfiability problems for $\FLk{2}$ with at least three equivalence relations are undecidable. Other cases need more detailed inspection, additional research, perhaps also novel techniques. It is clear that classes of structures defined by stipulating that some binary predicates satisfy some universal first-order formula by no means exhaust the relevant possibilities. One may as well consider e.g.~well-founded structures, trees or forests; notions not expressible in first-order logic which arise naturally in a wide range of contexts. Also the world of guarded logics is much richer than shown above.
E.g.~more liberal guardedness conditions (loosely- or clique-guarded, packed fragment) and guarded fragments of other logics have been studied (guarded second order logic, Datalog LITE). \section{Deciding FinSat}\label{sec:finsat} Before we review some techniques used for solving the finite satisfiability problem for logics without FMP, we first notice a few potential difficulties. Let $\phi$ be the following formula, where $P_0, \ldots, P_{n-1}$ are unary predicates and $R$ is a binary predicate: \begin{equation}\label{eq:trans-finite} \phi= \exists x P_0x \wedge \bigwedge_{0\leq i <n} \forall x (P_ix \rightarrow \exists y (Rxy \wedge P_{i+1}y))\wedge \bigwedge_{0\leq i<j<n} \forall x \neg (P_ix \wedge P_jx) \end{equation} where the index $i+1$ is computed modulo $n$. The formula $\phi$ has a simple infinite model that is an $R$-chain of elements on which the unary predicates alternate. In order to get a finite model the $R$-chain must close into cycles. By using combinations of the unary predicates to encode a binary number at a given point of a model, one can easily enforce those cycles to have exponential length w.r.t.~the length of the formula. If additionally $R$ is declared transitive, these cycles induce $R$-cliques. Note that $\phi$ is a formula in all our base languages. The smallest finite models might also be relatively large w.r.t.~the length of the formula used to define them (and also in comparison to the optimal complexity bounds of the algorithms deciding finite satisfiability). Recall e.g.~the example from \cite{GOR97}, where a family of finitely satisfiable \mbox{C$^2$}{}-formulas $\{\phi_n\}_{n \in \N}$ over a signature with one binary and $n$ unary predicate symbols is given, such that every finite model of $\phi_n$ contains an isomorphic copy of a full binary tree of height $2^n$ and $\phi_n$ has length $O(n\log n)$. Hence every finite model of $\phi_n$ has size at least $2^{2^n}$.
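The way finite models of formula (\ref{eq:trans-finite}) close the $R$-chain into a cycle can be made concrete with a small sketch (our illustration, reading the index $i+1$ modulo $n$): the $n$-cycle in which point $i$ satisfies $P_i$ and $R$ is the successor relation modulo $n$ is a finite model of $\phi$.

```python
def is_cycle_model(n):
    """Check that the n-cycle (point i satisfies P_i, R is successor mod n)
    satisfies each conjunct of formula (eq:trans-finite), with the
    index i+1 read modulo n."""
    dom = range(n)
    P = lambda i, x: x == i % n          # P_i holds exactly at point i
    R = lambda x, y: y == (x + 1) % n    # successor modulo n
    exists_p0 = any(P(0, x) for x in dom)
    chain = all(any(R(x, y) and P(i + 1, y) for y in dom)
                for i in range(n) for x in dom if P(i, x))
    disjoint = all(not (P(i, x) and P(j, x))
                   for i in range(n) for j in range(i + 1, n) for x in dom)
    return exists_p0 and chain and disjoint
```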
We remark at this point that both \ensuremath{\textit{Sat}}{(\mbox{C$^2$}{})} and \ensuremath{\textit{FinSat}}{(\mbox{C$^2$}{})} are \textsc{NExpTime}-complete. This suggests that when designing efficient algorithms for the finite satisfiability problem one cannot rely on properties of unrestricted models or on direct constructions of models of minimal size. In this context let us also mention two titles of papers from the DL community praising unrestricted reasoning versus finite reasoning: 'Nominals, inverses, counting, and conjunctive queries or: Why infinity is your friend!' \cite{RudolphG10} and 'The curse of finiteness: Undecidability of database-inspired reasoning problems in very expressive description logics' \cite{Rudolph16}. \subsection{More or less natural reductions} Perhaps the most natural approach to establishing (un)decidability or tight complexity bounds for some logic is to reduce its formulas to formulas of another logic. This classical approach can be illustrated by the extension of \mbox{$\mbox{\rm UNF}$}{} by fixed points, UNFP. In fact in \cite{StC13} an exponential reduction from UNFP to the modal $\mu$-calculus is presented that additionally preserves finiteness of the models. This immediately gives $2\textsc{-ExpTime}$-upper bounds for the complexity of both the satisfiability and the finite satisfiability problems. The same reduction allows one to deduce also FMP of \mbox{$\mbox{\rm UNF}$}{} (under this reduction a formula from \mbox{$\mbox{\rm UNF}$}{} translates to a modal formula without fixed points) and TMP of UNFP. Another natural idea for solving the finite satisfiability problem for a logic that has a decidable satisfiability problem might be to reduce the former problem to the latter.
This concept has an additional advantage, as for unrestricted reasoning in a family of simple logics there are many applicable algorithms (such as e.g.~tableau algorithms that rely on TMP, or resolution calculi), which often perform well in practical implementations. This approach has been investigated by Rosati in \cite{Rosati08} for a relatively inexpressive logic called DL-Lite$_{\cal F}$ that already lacks FMP. The idea has been later extended by Garcia et al.~to the logic Horn-{\cal{ALCQI}} \cite{GarciaLS14}. It requires additional research to find out if this concept can be further extended to non-Horn logics. \subsection{Finitary unravellings and locally acyclic structures} When we are concerned with a decidable logic that does not have FMP but has TMP, a natural idea is to study finitary unravellings, obtained by 'bending' some edges in the tree-like models to keep the structure finite but at the same time similar to a tree, i.e.~acyclic. Here, when saying that a structure is {\em acyclic} we mean that its {\em hypergraph} is. We have already observed that it is not always possible for a given formula $\phi$ to get a model of $\phi$ that is at the same time finite and acyclic, cf.~the formula in (\ref{eq:trans-finite}). To address the above idea a notion of {\em $k$-acyclic} structures, where $k$ is some parameter, is introduced. Informally, in a $k$-acyclic structure $\str{A}$ there are no cycles of length at most $k$; more precisely, every induced sub-hypergraph of the hypergraph of $\str{A}$ on up to $k$ vertices is acyclic. The aim then is roughly to show that if a formula $\phi$ has a finite model then $\phi$ has a $k$-acyclic model, where $k$ depends only on $\phi$. Having such a property in hand, one can restrict attention to locally acyclic structures, which are usually easier to handle.
This approach has been introduced by Otto \cite{Otto04} for \mbox{$\mbox{\rm GF}$}{} over restricted signatures and later extended in \cite{BGO13} to full \mbox{$\mbox{\rm GF}$}{}, showing that every finite structure is \mbox{$\mbox{\rm GF}$}-bisimilar to a finite structure whose hypergraph is locally acyclic. As an application of the general result a new proof of the (small) finite model property for \mbox{$\mbox{\rm GF}$}{} with optimal bounds on the size of minimal models is obtained. We remark that the above results underlie the correctness of the reduction outlined in the previous subsection from UNFP to the $\mu$-calculus in the finite case. They are also one of the main ingredients of the decidability proof for the finite satisfiability problem for the extension of \mbox{$\mbox{\rm GF}$}{} with fixed points \cite{BaranyB12}. \subsection{Deciding (Fin)Sat by reduction to linear or integer programming} Here we briefly describe a less direct approach that has been applied successfully to extensions of \mbox{$\mbox{\rm FO}^2$}{} to obtain optimal complexity bounds when the logic allows one to formulate sentences whose smallest finite models are relatively large w.r.t.~the optimal complexity bounds for (finite) satisfiability. The idea, in brief, is to identify (finitely many types of) building blocks of a potential model together with conditions on how they connect, and to describe them in a {\em succinct way}. It turns out that these conditions can often be described by a set of (in)equalities.
In such cases the approach has an additional advantage, namely it allows one to solve simultaneously both $\ensuremath{\textit{Sat}}({\cal L})$ and $\ensuremath{\textit{FinSat}}({\cal L})$: in case of $\ensuremath{\textit{FinSat}}({\cal L})$ given $\phi\in {\cal L}$ we look for solutions of the corresponding equation system over $\N$, in case of $\ensuremath{\textit{Sat}}({\cal L})$ we look for solutions over so-called extended integers, $\N \cup \{\aleph_0\}$.\footnote{E.g.~the equation $x+1=x$ has no solution over $\N$, but has a solution over the extended integers, namely $x=\aleph_0$. If such an equation appears positively in the conditions describing models of a formula $\phi$, we deduce that $\phi$ has no finite models.} Moreover, this approach does not depend on TMP and gives hope for practical implementations using existing linear/integer programming solvers. This approach has been applied to establish optimal upper complexity bounds for an expressive description logic with (restricted) counting quantifiers in \cite{LST05} (\textsc{ExpTime}), for \mbox{C$^2$}{} in \cite{PH05} (\textsc{NExpTime}), and for the guarded fragment of \mbox{C$^2$}{} in \cite{PH07} (\textsc{ExpTime}). The (N)\textsc{ExpTime}-upper bounds should be contrasted with the remark that in these logics the size of minimal models is doubly exponential in the size of the formula. The linear/integer programming approach has been made more transparent in \cite{KMP-HT14} when dealing with the extension of \mbox{$\mbox{\rm FO}^2$}{} with two equivalence relations, \mbox{$\mbox{\rm FO}^2$}+$\{E_1,E_2\}$. Suppose $E_1$ and $E_2$ are the equivalence symbols in the signature.
The strategy employed in \cite{KMP-HT14} starts with the observation that the {\em intersections} (i.e.~equivalence classes of the coarsest common refinement $E_1 \cap E_2$ of the equivalence relations) arising in any model of a formula $\phi$ could, without loss of generality, be assumed to have cardinality exponentially bounded as a function of the size of $\phi$. In any such model, every $E_1$-class, and also every $E_2$-class, is the union of some set of such 'small intersections'; and any given $E_1$-class and $E_2$-class are either disjoint, or have exactly one common intersection. This decomposition into equivalence classes allowed one to picture such a model as an edge-coloured, bipartite graph: the $E_1$-classes are the left-hand vertices; the $E_2$-classes are the right-hand vertices; and two vertices are joined by an edge just in case they share an intersection, with the colour of that edge being the isomorphism type of the intersection concerned. Evidently, the formula $\phi$ imposes constraints on the types of intersections that may arise, and on how intersections may be organized into $E_1$- and $E_2$-classes; and it was shown in~\cite{KMP-HT14} how these constraints translate to conditions on the induced bipartite graph of equivalence classes. In this way, the original (finite) satisfiability problem for \mbox{$\mbox{\rm FO}^2$}+$\{E_1,E_2\}$ was nondeterministically reduced to the problem of determining the existence of a (finite) edge-coloured bipartite graph satisfying certain conditions on the local configurations it realizes. The latter problem was called BGESC (for 'bipartite graph existence with skew constraints and ceilings'). By showing BGESC and its finite version to be \NPTime-complete, an optimal 2\textsc{-NExpTime}-upper bound for both the satisfiability and the finite satisfiability problems for \mbox{$\mbox{\rm FO}^2$}+$\{E_1,E_2\}$ was obtained.
Membership in $\NPTime$ for both BGESC and the finite BGESC problems was shown by a nondeterministic polynomial reduction to an integer-programming problem. In \cite{KMP-HT14} and later in \cite{KP-HT15} two simpler variants of the BGESC problem were introduced, called, respectively, BGE and BGE$^*$. They were shown to be in $\textsc{PTime}$ via polynomial reductions to linear programming problems (for the finite versions) and reductions to the satisfiability problem for propositional Horn clauses (for the unrestricted versions). Reductions to the (finite) BGE$^*$ problem were used in \cite{KP-HT15} to show the optimal $2\textsc{-ExpTime}$-upper bound for the satisfiability and finite satisfiability problems for the guarded fragment of \mbox{$\mbox{\rm FO}^2$}+$\{E_1,E_2\}$. The above approach has already been successfully applied to get optimal upper complexity bounds for extensions of \mbox{$\mbox{\rm FO}^2$}{} where the operation of {\em equivalence closure} can be applied to one or more binary predicates \cite{KMP-HT14,KP-HT15}. Such operators can be used to express non-first-order notions such as {\em reachability} or {\em connectedness} in undirected graphs---notions often encountered in practice. It remains open whether the linear/integer programming approach might be helpful in designing optimal decision procedures for logics with more than two variables, and in particular when the signatures feature predicates of higher arity. \subsection{Remarks} In this section we discussed several logics for which it required more care to prove decidability of the finite satisfiability problem than to prove decidability of the satisfiability problem. This is by no means a general trend. In particular, there are logics such that \ensuremath{\textit{Sat}}{($\cal{L}$)} is undecidable and \ensuremath{\textit{FinSat}}{($\cal{L}$)} is decidable, or vice versa (see e.g.~\cite{MichOW12} for a family of examples from the elementary modal logics).
We have also mentioned fragments for which \ensuremath{\textit{FinSat}}{($\cal L$)} has been shown decidable while the status of \ensuremath{\textit{Sat}}{($\cal L$)} remains open; these include some extensions of \mbox{$\mbox{\rm FO}^2$}{} with order relations (cf.~\cite{SZ12,BDM11} for a detailed picture). In this area one can also find fragments for which the complexity of the finite satisfiability problem jumps to the level of {\em vector addition systems}, which are $\textsc{ExpSpace}$-hard and known to be decidable, but for which no elementary upper bound has been found so far (cf.~\cite{Kosaraju82}). An example of this phenomenon is the extension of \mbox{C$^2$}{} with one linear order and one successor of a linear order augmented with an additional binary relation studied in \cite{CW15}. \section{Conclusion} The picture concerning decidability of the (finite) satisfiability problems for extensions of fragments of first-order logic defined as the natural image of the standard translation of modal logic is multidimensional and colourful. Current research in this area, apart from studying the open questions already mentioned, involves the investigation of logics obtained by combining several operators from the already well-understood fragments, identifying smaller fragments with better algorithmic properties, and optimization of known algorithms towards practical implementation. Finite model reasoning is crucial to both the theory and practice of computation. It is still not well understood when addressing the problem of query answering---the central reasoning problem of database theory. We believe that this problem will gain a lot of attention in the near future and will intensively use results from the areas outlined in this talk.
\section{Introduction} Motion of a test particle around a black hole is a very old topic of investigation, used to probe the behavior of the gravitational field around the black hole (see for example [1-3] or any other book on general relativity). Such investigations started long ago: within a year of the publication of the general theory of relativity by Einstein (1915) [4-7], Schwarzschild [8] gave a vacuum solution to the Einstein field equations. The solution describes the geometry of the vacuum space-time outside a spherical massive body and is known today as the Schwarzschild black hole solution. At present all textbooks on general relativity contain an exhaustive study of the motion of a test particle around the Schwarzschild black hole (see for example [1-3]). In the present work, the motion of a test particle is investigated around a general static non-rotating black hole. A general formula for the bending of light is derived and is tested for the Schwarzschild black hole. The gravitational field outside the black hole is approximated by a pseudo-Newtonian (PN) gravitational potential and is compared with the corresponding effective potential for test particle motion. Finally, all results are verified for the Reissner-Nordstr\"{o}m black hole solution.\\ The paper is organized as follows: Section II deals with the motion of a massive test particle around a general spherically symmetric non-rotating black hole. The effective potential, energy, and condition for a circular orbit are also determined for the black hole, and some comments are presented point by point. In Section III the trajectory of a photon and the bending of light are studied. The pseudo-Newtonian gravitational potential is determined and compared with the effective potential in Section IV. As an example, all results are deduced for the Reissner-Nordstr\"{o}m black hole in Section V. The paper ends with a short conclusion in Section VI.
\\ \section{Motion Of a Test Particle around A General Black Hole} The line element of a static spherically symmetric space-time which describes a black hole can be written as \begin{equation} ds^{2}=-f(r)dt^{2}+\frac{dr^{2}}{f(r)}+r^{2}d\Omega_{2}^{2} \end{equation} where $f(r)$ is at least a $C^{2}$ function and should satisfy the following conditions so that the line element (eq. 1) describes a black hole solution: i) $f(r)$ must have a zero at some positive $r$ (say $r_{h}$) so that the time dilation is infinite at $r_{h}$; ii) the Kretschmann scalar ($\alpha=R^{ijkl}R_{ijkl}$) should be finite at $r=r_{h}$ but diverge at $r=0$, i.e. the space-time described by equation (1) has a curvature singularity only at $r=0$. Here $$d\Omega_{2}^{2}=d\theta^{2}+\sin^{2}\theta d\phi^{2}$$ is the metric on the unit two-sphere. Suppose we consider the motion of a test particle of rest mass $m$ around the black hole. The corresponding Lagrangian will be \begin{equation} 2L=-f(r)\left(\frac{dt}{d\lambda}\right)^{2}+f(r)^{-1}\left(\frac{dr}{d\lambda}\right)^{2}+ r^{2}\left(\frac{d\theta}{d\lambda}\right)^{2}+r^{2}\sin^{2}\theta\left(\frac{d\phi}{d\lambda}\right)^{2} \end{equation} where $\lambda$ is any affine parameter.\\ As the Lagrangian has two cyclic co-ordinates $t$ and $\phi$, the corresponding momenta must be conserved. This leads to the two constants of motion \begin{equation} E=-\frac{p_{0}}{m} \end{equation} and \begin{equation} L=\frac{p_{\phi}}{m} \end{equation} As the space-time is spherically symmetric, the motion is always confined to a plane, which for convenience is chosen to be the equatorial plane $\theta=\frac{\pi}{2}$.
The explicit forms of the momentum components are $$p^{~0}=E\frac{m}{f(r)}$$ $$p^{~r}=m\frac{dr}{d\lambda}$$ \begin{equation} p^{~\theta}=0 \end{equation} $$ p^{~\phi}=\frac{mL}{r^{2}}$$ Using the above expressions for the momentum components in the normalization condition of the four-momentum \begin{equation} p^{\mu}p_{\mu}=-m^{2} \end{equation} we obtain \begin{equation} \left(\frac{dr}{d\lambda}\right)^{2}=E^{2}-V^{2}(r) \end{equation} where \begin{equation} V^{2}(r)=f(r)\left(1+\frac{L^{2}}{r^{2}}\right) \end{equation} is called the (square of the) effective potential.\\ Now differentiating both sides of equation (7) we have \begin{equation} \frac{d^{2}r}{d\lambda^{2}}=-\frac{1}{2}\frac{dV^{2}(r)}{dr} \end{equation} Also from (5) the momentum in the $\phi$ direction gives \begin{equation}\frac{d\phi}{d\lambda}=\frac{L}{r^{2}} \end{equation} So eliminating the affine parameter between $(7)$ and $(10)$, the differential equation of the trajectory of the particle in the equatorial plane is given by \begin{equation} \left(\frac{dr}{d\phi}\right)^{2}=\frac{r^{4}}{L^{2}}\left[E^{2}-f(r)\left(1+\frac{L^{2}}{r^{2}}\right)\right] \end{equation} which can be written as $$\left(\frac{dr}{d\phi}\right)^{2}=\frac{r^{4}}{L^{2}}\psi(r)$$ where \begin{equation} \psi(r)=E^{2}-f(r)\left(1+\frac{L^{2}}{r^{2}}\right) \end{equation} We can make the following conclusions on the trajectory of the particle:\\ $\bullet~~~$ The energy of the particle should not be less than the potential $V(r)$, i.e. for a given $E$ the trajectory is restricted to those radii for which $V$ is smaller than $E$.\\ $\bullet~~~$ If $\psi(r)>0$ for all values of $r$ then the particle comes from infinity and moves directly to the origin.
This is called a terminating escape orbit.\\ $\bullet~~~$ If $\psi(r)$ has one positive zero then the particle either starts from a finite distance and moves directly to the origin (known as a terminating bound orbit) or it may move on an escape orbit with a finite impact parameter $\frac{L}{E}$.\\ $\bullet~~~$ If $\psi(r)$ has two positive zeros then we have two possible cases: I. if $\psi(r)>0$ between the two zeros then the trajectory is a periodic bound orbit, like a planetary orbit, or II. if $\psi(r)<0$ between these two zeros then the trajectory is either an escape orbit or a terminating bound orbit.\\ $\bullet~~~$ The points where $\psi(r)=0$ are known as turning points of the trajectory, i.e. the values of $r$ which satisfy \begin{equation} E^{2}=f(r)\left(1+\frac{L^{2}}{r^{2}}\right) \end{equation} are turning points, and Eq.(13) determines the potential curves.\\ $\bullet~~~$ For a circular orbit $\left(r=\mathrm{constant}\right)$ we have from (9) $\frac{dV^{2}(r)}{dr}=0$, i.e. circular orbits are possible at those radial coordinates which correspond to a maximum (unstable) or minimum (stable) of the potential. Thus for a circular orbit we must have \begin{equation} \frac{f'(r)}{f(r)}=\frac{2L^{2}}{r(r^{2}+L^{2})} \end{equation} \section{Trajectory of a photon: Bending of Light} To determine the photon trajectory we proceed as before. Here the normalization condition gives \begin{equation} \left(\frac{dr}{d\lambda}\right)^{2}=E^{2}-f(r)\frac{L^{2}}{r^{2}} \end{equation} i.e. $V_{L}(r)^{2}=f(r)\frac{L^{2}}{r^{2}}$ is the (square of the) effective potential. So differentiating both sides we get \begin{equation} \frac{d^{2}r}{d\lambda^{2}}=-\frac{1}{2}\frac{dV_{L}^{2}(r)}{dr} \end{equation} Thus the differential equation of the path of a light ray is given by \begin{equation} \frac{d\phi}{dr}=\pm\frac{1}{r^{2}\sqrt{\left[\frac{1}{b^{2}}-\frac{1}{r^{2}}f(r)\right]}} \end{equation} where $b=\frac{L}{E}$.
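Returning to the classification of massive-particle orbits above, the turning points can be located numerically by scanning $\psi(r)$ for sign changes. The sketch below is our own illustration, with the Schwarzschild choice $f(r)=1-2M/r$ and arbitrarily chosen values $M=1$, $L=4$, $E=0.97$; it brackets three turning points, and $\psi>0$ between the outer two, i.e. that pair encloses a periodic bound (planetary-type) orbit:

```python
# Illustrative sketch (not from the paper): locate turning points psi(r) = 0
# for the Schwarzschild case f(r) = 1 - 2M/r with sample values of M, L, E.
M, L, E = 1.0, 4.0, 0.97

def f(r):
    return 1.0 - 2.0 * M / r

def psi(r):
    # eq. (12): psi(r) = E^2 - f(r) (1 + L^2 / r^2)
    return E**2 - f(r) * (1.0 + L**2 / r**2)

# scan outside the horizon r = 2M for sign changes of psi
rs = [2.001 + 0.001 * k for k in range(200000)]     # r in (2, ~202)
turning_points = []
for r1, r2 in zip(rs, rs[1:]):
    if psi(r1) * psi(r2) < 0.0:                     # bracketed zero, eq. (13)
        turning_points.append(0.5 * (r1 + r2))

# three turning points; psi > 0 between the outer two (e.g. at r = 12),
# so that pair brackets a periodic bound orbit
print(turning_points)
```

For these values the circular-orbit radii of eq. (14) are $r=4$ (unstable) and $r=12$ (stable), consistent with the bound region found by the scan.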
Now for a photon circular orbit we have $$\frac{dV^{2}_{L}}{dr}=0$$ i.e. the radius of the circular orbit satisfies $$rf'(r)=2f(r)$$ One may note that the radius of the photon circular orbit is independent of the angular momentum of the photon. In particular, for a Schwarzschild black hole the radius of the photon circular orbit is $3M$. For an ingoing photon, choosing $u=\frac{1}{r}$, we have \begin{equation} \frac{d\phi}{du}=\frac{1}{\sqrt{\left[\frac{1}{b^{2}}-u^{2}F(u)\right]}} \end{equation} where $F(u)=f(\frac{1}{u})$.\\ Note that if $F(u)$ is a constant (chosen to be unity) then $(18)$ has the solution \begin{equation} r\sin(\phi-\phi_{0})=b \end{equation} a straight line. This is expected, as $f(r)=\mathrm{constant}$ means the space-time is Minkowskian (having no gravitational effect) and the photon trajectory will be a straight line. Further, at large distance (small $u$) the gravitational field due to the black hole is negligible, so we may expand $F(u)$ in a power series in $u$, i.e. \begin{equation} F(u)=1+c_{1}u+c_{2}u^{2}+\cdots \end{equation} So keeping terms up to first order in $u$ we have from $(18)$ \begin{equation} \frac{d\phi}{du}=\frac{1}{\sqrt{\left[\frac{1}{b^{2}}-u^{2}-c_{1}u^{3}\right]}} \end{equation} Let $y=u\left(1+\frac{c_{1}u}{2}\right)$; then $u=y\left(1-\frac{c_{1}y}{2}\right)+O\left(u^{2}\right)$. The above differential equation becomes \begin{equation} \frac{d\phi}{dy}=\frac{1-c_{1}y} {\sqrt{\frac{1}{b^{2}}-y^{2}}}+O\left(u^{2}\right) \end{equation} So on integration, \begin{equation} \phi=\phi_{0}-\frac{c_{1}}{b}+\sin^{-1}{(by)}+c_{1}\sqrt{\frac{1}{b^{2}}-y^{2}} \end{equation} If we choose $\phi_{0}$ as the initial incoming direction of light, i.e.
$\phi\longrightarrow\phi_{0}$ as $y\longrightarrow0$, and since in this approximation $y=\frac{1}{b}$ corresponds to the smallest $r$ that the photon can reach, then \begin{equation} \phi_{y=\frac{1}{b}}=\phi_{0}-\frac{c_{1}}{b}+\frac{\pi}{2} \end{equation} So the angle swept as the photon travels from infinity to the point of closest approach is $\frac{\pi}{2}-\frac{c_{1}}{b}$, and by symmetry the total swept angle is $\pi-\frac{2c_{1}}{b}$. Therefore, subtracting the straight-line value $\pi$, the net amount of deflection is \begin{equation} \Delta \phi=-\frac{2c_{1}}{b} \end{equation} For the Schwarzschild solution we have $c_{1}=-2M$ and $\Delta\phi=\frac{4M}{b}$. Further, if it so happens that $c_{1}=0$, then we choose $F(u)\simeq1+c_{2}u^{2}$. Using the transformation $$y=u\left(1+\frac{c_{2}u^{2}}{2}\right)$$ the net amount of deflection is given by \begin{equation} \Delta \phi=-\frac{3 \pi c_{2}}{4b^{2}} \end{equation} \section{Pseudo-Newtonian gravitational and effective potentials} Based on a general heuristic method ([9]), the PN gravitational potential can be defined as \begin{equation} \psi=\int\frac{l_{c}^{2}}{r^{3}}dr \end{equation} where $r$ is the usual radial co-ordinate and $l_{c}$ is the general relativistic specific angular momentum, i.e. $l_{c}(=\frac{L_{c}}{E_{c}})$ is the ratio of the conserved angular momentum and energy per particle mass, related to the circular geodesic in the equatorial plane.\\ In Newtonian theory, the gravitational potential $\psi_{n}$ is given by $$\psi_{n}=\int\frac{l_{cn}^{2}}{r^{3}}dr$$ with $l_{cn}$ the Newtonian angular momentum per mass of a particle moving in a circular orbit. The motivation for choosing the PN potential (27) is to match the Newtonian angular momentum per particle mass on a circular orbit with the general relativistic one.
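As an aside, the weak-field deflection formula $\Delta\phi=-2c_{1}/b$ of the previous section can be checked against a direct numerical integration of the exact path equation (18). The sketch below is our own, for the Schwarzschild case $F(u)=1-2Mu$ with the assumed weak-field setting $M/b=10^{-3}$; it recovers $\Delta\phi\simeq 4M/b$:

```python
import math

# Numerical check (our own) of Delta phi = -2 c1 / b for the Schwarzschild
# case F(u) = 1 - 2Mu, i.e. c1 = -2M, so Delta phi = 4M/b.
M = 1.0          # black-hole mass (geometric units)
b = 1000.0       # impact parameter, b >> M so the weak-field series applies

def g(u):
    """g(u) = 1/b^2 - u^2 F(u) with F(u) = 1 - 2Mu (Schwarzschild)."""
    return 1.0 / b**2 - u**2 * (1.0 - 2.0 * M * u)

# turning point u_t: smallest positive root of g (close to 1/b), by bisection
lo, hi = 0.5 / b, 1.5 / b
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if g(mid) > 0.0:
        lo = mid
    else:
        hi = mid
u_t = 0.5 * (lo + hi)

# phi swept from u = 0 to u = u_t is the integral of du / sqrt(g(u)).
# Substituting u = u_t sin(theta) makes the integrand smooth, and the
# midpoint rule avoids evaluating the 0/0 endpoint at theta = pi/2.
N = 20000
h = (math.pi / 2) / N
phi = 0.0
for k in range(N):
    theta = (k + 0.5) * h
    u = u_t * math.sin(theta)
    phi += u_t * math.cos(theta) / math.sqrt(g(u)) * h

deflection = 2.0 * phi - math.pi   # total bending of the ray, eq. (25)
print(deflection, 4.0 * M / b)     # ~0.004 for M/b = 1e-3
```

The residual difference from $4M/b$ is of order $(M/b)^{2}$, as expected from the neglected higher-order terms.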
In the present study, the general relativistic conserved angular momentum and energy per particle mass for a circular orbit are given by (from equations (13) and (14)) \begin{equation} L_{c}=\left[\frac{r^{3}f'(r)}{2f(r)-rf'(r)}\right]^{\frac{1}{2}} \end{equation} and \begin{equation} E_{c}=\frac{\sqrt{2}f(r)}{\sqrt{\left[2f(r)-rf'(r)\right]}} \end{equation} i.e. \begin{equation} l_{c}=\frac{1}{f(r)}\sqrt{\frac{r^{3}f'(r)}{2}} \end{equation} Hence from (27) the PN gravitational potential is \begin{equation} \psi=c-\frac{1}{2f(r)} \end{equation} where the constant of integration $c$ is determined from the known result for the Schwarzschild black hole as follows (note that $c$ has no physical meaning):\\ For the Schwarzschild black hole, the well-known Paczy\'{n}ski-Wiita gravitational potential ([10]) is \begin{equation} \psi_{PW}=-\frac{M}{r-2M} \end{equation} Substituting in eq. (31) we get $c=\frac{1}{2}$, and hence the PN gravitational potential for a general spherically symmetric black hole described by eq. (1) is \begin{equation} \psi=\frac{1}{2}\left[1-\frac{1}{f(r)}\right] \end{equation} As the gravitational potential should vanish at the static radius $(r_{s})$, from (33) we have $f(r_{s})=1$. Hence from equations (28), (29) the circular orbits of the test particle exist for radii $(r_{c})$ in the range ([11]) $$r_{a}<r_{c}<r_{s}$$ where $r_{a}$ satisfies $$rf'(r)=2f(r)$$ i.e. $r_{a}$ is the photon circular orbit. Thus all circular orbits of the test particle are bounded from below by the photon circular orbit and extend up to the static radius.\\ Further, from eq. (33) we see that the PN potential diverges at the event horizon (i.e. $f(r)=0$), reaches its maximum value at $r=r_{m}$ (where $f'(r_{m})=0$), and then decreases for $r>r_{m}$, i.e. the gravitational field corresponding to the PN potential becomes repulsive for $r>r_{m}$. Also, if the metric (1) is asymptotically flat (i.e.
$f(r)\longrightarrow1$ as $r\longrightarrow\infty$), then $\psi\longrightarrow0$ asymptotically ([11],[12]). Moreover, for central gravitational fields, if we assume that the motion of the test particle is confined to the equatorial plane, then Keplerian motion along the radial direction gives \begin{equation} \frac{1}{2}\left(\frac{dr}{dt}\right)^{2}=e-v_{eff} \end{equation} where $e$ stands for the total PN energy per particle mass and $v_{eff}$ stands for the PN effective potential per particle mass, having the explicit form ([13]) \begin{equation} v_{eff}=\psi+\frac{l^{2}}{2r^{2}} \end{equation} where $\psi$ is the PN gravitational potential given by eq. (33) and $l$ is the PN angular momentum per particle mass. Thus circular Keplerian orbits are characterized by the extrema of $v_{eff}$ (i.e. $\frac{dv_{eff}}{dr}=0$) and we have \begin{equation} l_{c}^{2}=\frac{r^{3}f'(r)}{2\left[f(r)\right]^{2}} \end{equation} which can be written as (using eqs. (28) and (29)) \begin{equation} l_{c}=\frac{L_{c}}{E_{c}} \end{equation} The corresponding expression for the energy is \begin{equation} e_{c}=\frac{1}{2}\left[1+\frac{rf'(r)-2f(r)}{2\left(f(r)\right)^{2}}\right] \end{equation} Using $E_{c}$ it can be written as \begin{equation} e_{c}=\frac{1}{2}\left[1-\frac{1}{E_{c}^{2}}\right] \end{equation} It is to be noted that the angular momentum per particle mass is the same in general relativity and in the PN effective potential theory.\\ We shall now examine the stability of the circular orbits studied above. As the stability criterion is determined by the extrema of the effective potential $v_{eff}$, we have $$\frac{\partial l_{c}^{2}}{\partial r}>0$$ for a stable circular orbit and $$\frac{\partial l_{c}^{2}}{\partial r}<0$$ for an unstable circular orbit. Owing to the identical form of $l_{c}$, GR and the PN potential theory share the same stability criterion.
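As a quick consistency check of eqs. (32), (33), and (36): for the Schwarzschild choice $f(r)=1-2M/r$, the PN potential should coincide with the Paczy\'{n}ski-Wiita potential, and eq. (36) should reduce to the known result $l_{c}^{2}=Mr^{3}/(r-2M)^{2}$. A small numerical sketch (our own) confirms both:

```python
# Consistency sketch (our own): for f(r) = 1 - 2M/r, eq. (33) should equal
# the Paczynski-Wiita potential of eq. (32), and eq. (36) should give
# l_c^2 = M r^3 / (r - 2M)^2.
M = 1.0

def f(r):
    return 1.0 - 2.0 * M / r

def fprime(r):
    return 2.0 * M / r**2

def psi_pn(r):                    # eq. (33)
    return 0.5 * (1.0 - 1.0 / f(r))

def psi_pw(r):                    # eq. (32)
    return -M / (r - 2.0 * M)

def lc2_pn(r):                    # eq. (36)
    return r**3 * fprime(r) / (2.0 * f(r)**2)

for r in [3.0, 6.0, 10.0, 100.0]:
    print(r, psi_pn(r), psi_pw(r), lc2_pn(r), M * r**3 / (r - 2.0 * M)**2)
```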
Further, the inner and outer marginally stable circular orbits correspond to extrema of $l_{c}^{2}$, and we have \begin{equation} 2r\left[f'(r)\right]^{2}=f(r)\left[3f'(r)+rf''(r)\right] \end{equation} Hence for stability one should have \begin{equation} 2r \left[f'(r) \right]^{2}<f(r) \left[3f'(r)+rf''(r) \right] \end{equation} Finally, from the above expressions (eqs. (36) and (38)) for $l_{c}^{2}$ and $e_{c}$ we have the following observations:\\ $\bullet~~~~~$ At the event horizon ($f(r)=0$) both $l_{c}^{2}$ and $e_{c}$ diverge, while $E_{c}$ and $L_{c}$ exist if $[2f(r)-rf'(r)]$ is positive definite. For example, in the Schwarzschild-de Sitter space-time ([11]) they are finite, while for the Reissner-Nordstr\"{o}m black hole (see next section) they do not exist.\\ $\bullet~~~~~$ At the static radius ($f(r)=1$) both $l_{c}^{2}$ and $e_{c}$ vanish, while $E_{c}$ and $L_{c}$ depend on the choice of $f(r)$.\\ $\bullet~~~~$ At the photon circular orbit ($rf'(r)=2f(r)$) both $L_{c}$ and $E_{c}$ diverge, while $e_{c}=\frac{1}{2}$ and $l_{c}$ is finite there.\\ Thus we conclude that in both the PN and the relativistic approach the circular orbits are bounded from above by the static radius, while they are bounded from below by the event horizon in the PN approach and by the radius of the photon orbit in the relativistic approach. This is due to the fact that there is no photon circular orbit in the PN approach ([11]).\\ \section{An Example: Reissner-Nordstr\"{o}m Black Hole} For the Reissner-Nordstr\"{o}m (R-N) black hole solution we have \begin{equation} f(r)=1-\frac{2M}{r}+\frac{q^{2}}{r^{2}} \end{equation} where $M$ is the mass and $q$ the charge of the black hole. So for the horizons we have $f(r)=0$, i.e. \begin{equation} r_{\pm}=M\pm\sqrt{M^{2}-q^{2}} \end{equation} A black hole solution exists for $$M^{2}>q^{2}$$ and we have the event horizon at $$r_{h}=r_{+}$$ while $$r_{c}=r_{-}$$ corresponds to the black hole Cauchy horizon. When $$M^{2}=q^{2}$$ the two horizons coincide, which is the case of an extremal black hole.
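These special radii are straightforward to verify numerically. The sketch below (our own, with the arbitrary choice $M=1$, $q=0.8$) checks that $f(r_{\pm})=0$ and that the photon circular orbit condition $rf'(r)=2f(r)$ holds at $r=\frac{1}{2}[3M+\sqrt{9M^{2}-8q^{2}}]$, the outer photon orbit:

```python
import math

# Numerical sketch (our own) for the R-N metric f(r) = 1 - 2M/r + q^2/r^2
# with the arbitrary choice M = 1, q = 0.8: verify the horizons r_pm and
# the photon circular orbit condition r f'(r) = 2 f(r).
M, q = 1.0, 0.8

def f(r):
    return 1.0 - 2.0 * M / r + q**2 / r**2

def fprime(r):
    return 2.0 * M / r**2 - 2.0 * q**2 / r**3

r_plus = M + math.sqrt(M**2 - q**2)     # event horizon   (= 1.6 here)
r_minus = M - math.sqrt(M**2 - q**2)    # Cauchy horizon  (= 0.4 here)
r_ph = 0.5 * (3.0 * M + math.sqrt(9.0 * M**2 - 8.0 * q**2))  # outer photon orbit

print(r_plus, r_minus, r_ph)
print(f(r_plus), f(r_minus), r_ph * fprime(r_ph) - 2.0 * f(r_ph))  # all ~0
```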
The static radius is given by ($f(r)=1$) \begin{equation} r_{s}=\frac{q^{2}}{2M} \end{equation} The radius at which $f'(r)=0$ is $$r_{m}=\frac{q^{2}}{M}$$ One may note that $$r_{s}<r_{m}<r_{+}$$ so both the static radius and $r_{m}$ lie inside the event horizon. Hence they have no physical significance for the R-N black hole. Also, the radii of the photon circular orbits are given by \begin{equation} r_{pc}=\frac{1}{2}\left[3M\pm\sqrt{9M^{2}-8q^{2}}\right] \end{equation} \textbf{Relativistic Theory}:\\ For a circular orbit the conserved angular momentum and energy per particle mass are \begin{equation} L_{c}=\frac{r\sqrt{Mr-q^{2}}}{\sqrt{r^{2}-3Mr+2q^{2}}} \end{equation} The variation of $L_{c}$ with respect to $q$ and $r$ for $M=1$ is shown in figure 1 [variables in the figures are in any standard units]. \begin{figure} \includegraphics[height=3in, width=3in]{figsumanta1.eps}~~~ \vspace{1mm} Figure 1 \vspace{6mm}Figure 1: The figure shows the variation of the relativistic conserved angular momentum $L_{c}$ with respect to the variation of $q$ and $r$ for the choice $M=1$.\hspace{1cm} \vspace{6mm} \end{figure} \begin{equation} E_{c}=\frac{r\left[1-\frac{2M}{r}+\frac{q^{2}}{r^{2}}\right]}{\sqrt{r^{2}-3Mr+2q^{2}}} \end{equation} \begin{equation} l_{c}=\frac{L_{c}}{E_{c}}=\frac{\sqrt{Mr-q^{2}}}{1-\frac{2M}{r}+\frac{q^{2}}{r^{2}}} \end{equation} with effective potential \begin{equation} V_{eff}=\sqrt{\left[\left(1-\frac{2M}{r}+\frac{q^{2}}{r^{2}}\right)\left(1+\frac{L^{2}}{r^{2}}\right)\right]} \end{equation} \textbf{Pseudo-Newtonian Theory}:\\ The PN gravitational and effective potentials are \begin{equation} \psi=\frac{q^{2}-2Mr}{2\left[r^{2}-2Mr+q^{2}\right]} \end{equation} The graph of $\psi$ against both $q$ and $r$ with $M=1$ is presented in figure 2. Also, in figure 3, we show the dependence of $\psi$ on $q$ by drawing the graphs of $\psi$ for three different values of $q$.
\begin{figure} \includegraphics[height=3in, width=3in]{figsumanta2.eps}~~~ \vspace{6mm} Figure 2 \vspace{6mm}Figure 2: Here the variation of the PN gravitational potential $\psi$ with both $q$ and $r$ for $M=1$ is shown.\hspace{1cm} \vspace{6mm} \end{figure} \begin{figure} \includegraphics[height=3in, width=3in]{figsumanta3.eps}~~~ \vspace{6mm} Figure 3 \vspace{6mm}Figure 3: Here the PN gravitational potential $\psi$ is drawn for three different values of $q$: the upper curve for $q=0$, the middle one for $q=0.8$, and the lower one for $q=1$. \hspace{1cm} \vspace{6mm} \end{figure} \begin{equation} v_{eff}=\frac{q^{2}-2Mr}{2\left[r^{2}-2Mr+q^{2}\right]}+\frac{l^{2}}{2r^{2}} \end{equation} Also we have \begin{equation} l_{c}=\frac{\sqrt{Mr-q^{2}}}{1-\frac{2M}{r}+\frac{q^{2}}{r^{2}}} \end{equation} The graphical presentation of $l_{c}$ against both $q$ and $r$ with $M=1$ is shown in figure 4. A comparative study of $L_{c}$ (given by eq. (46)) and $l_{c}$ (given by eq. (52)) is also presented graphically in figures 5(a) and 5(b) for two different values of $q$.
\begin{figure} \includegraphics[height=3in, width=3in]{figsumanta4.eps}~~~ \vspace{6mm} Figure 4 \vspace{6mm}Figure 4: The figure shows the graphical representation of the PN angular momentum per particle mass $l_{c}$ against both $q$ and $r$ with $M=1$. \hspace{1cm} \vspace{6mm} \end{figure} \begin{figure} \includegraphics[height=3in, width=3in]{figsumanta5a.eps}~~~ \includegraphics[height=3in, width=3in]{figsumanta5b.eps}~~~ \vspace{6mm} Figure 5(a) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~Figure 5(b) \vspace{6mm}Figures 5(a) and 5(b) show a comparative study of the variation of the relativistic angular momentum $L_{c}$ (given by eq. (46)) and the PN angular momentum per particle mass $l_{c}$ (given by eq. (52)) for $q=0.8$ and $0.6$ respectively, where the upper curve gives $L_{c}$ and the lower one $l_{c}$. \hspace{1cm} \vspace{6mm} \end{figure} \begin{equation} e_{c}=\frac{q^{4}-Mr^{3}-4Mrq^{2}+4M^{2}r^{2}}{2\left[r^{2}-2Mr+q^{2}\right]^{2}} \end{equation} The marginally stable circular orbits are given by the positive roots of the equation \begin{equation} Mr^{3}-6M^{2}r^{2}+9Mq^{2}r-4q^{4}=0 \end{equation} If the cubic equation has three positive real roots ($r_{m1}<r_{m2}<r_{m3}$), then $r=r_{m3}$ and $r=r_{m1}$ are respectively the radii of the outer and inner marginally stable circular orbits. Finally, for a stable circular orbit we have \begin{equation} Mr^{3}-6M^{2}r^{2}+9Mq^{2}r-4q^{4}>0 \end{equation} \section{Conclusion} In this work we give a general formulation of the trajectory of a test particle (or a photon) around any spherically symmetric black hole in four-dimensional space-time. We also classify the trajectories by studying the possible positive zeros of the function $$\psi(r)=E^{2}-f(r)\left(1+\frac{L^{2}}{r^{2}}\right)$$ So once $f(r)$ is given for a black hole, we can immediately determine the trajectories of a test particle or a photon around it.
In the PN approach the physical parameters for circular orbits are evaluated and compared with the corresponding quantities in the relativistic treatment. The stability condition for the circular orbits is determined, and the bounds on the marginally stable circular orbits are compared in both formalisms. As an example we have applied our results to the R-N black hole solution.\\ It is to be noted that the above analysis of the trajectory is not restricted to Einstein gravity; it can also be applied to any black hole solution in a modified gravity theory. Further, our analysis can be extended to any higher dimension. For example, the metric ansatz for an $n$-dimensional black hole is written in the form \begin{equation} ds^{2}=-f(r)dt^{2}+\frac{dr^{2}}{f(r)}+r^{2}d\Omega_{n-2}^{2} \end{equation} where $d\Omega_{n-2}^{2}$ is the metric on the unit $(n-2)$-sphere and is given by $$d\Omega_{1}^{2}=d\phi^{2},$$ $$d\Omega_{i+1}^{2}=d\theta_{i}^{2}+\sin^{2}\theta_{i}d\Omega_{i}^{2}, ~i\geq1 $$ Then, due to the spherical symmetry of the space-time, the motion of a test particle can be restricted to the equatorial plane defined by $\theta_{i}=\frac{\pi}{2}, i\geq1$. As before, the energy $E$ and the angular momentum $L$ are two conserved quantities, and the differential equation for the path of the test particle becomes \begin{equation} \left(\frac{dr}{d\phi}\right)^{2}=\frac{r^{4}}{L^{2}}\left[E^{2}-f(r)\left(\epsilon+\frac{L^{2}}{r^{2}}\right)\right] \end{equation} where $\epsilon=0$ or $1$ for a photon or a massive particle, respectively. Finally, one may note that throughout our calculations we have not used any specific gravity theory. So if we have a black hole solution given by equation (1), not only in Einstein gravity but also in any other gravity theory, the above analysis of the trajectory of a test particle is valid.
Moreover, from equations (56) and (57), we may conclude that the above analysis of the particle trajectory can be extended to any dimension of space-time.\\ For future work, an extension of the above approach to non-spherical systems (particularly axisymmetric ones) would be interesting.\\ {\bf Acknowledgement:}\\ The first author is thankful to Dr. Prabir Kr. Mukherjee, Department of Physics, Presidency College, Kolkata, for valuable help in preparing the manuscript. The first author is also thankful to DST, Govt. of India for awarding a KVPY fellowship. The authors greatly acknowledge the warm hospitality at IUCAA, Pune, where a part of the work was done.\\\\ {\bf References:}\\ $[1]$ Narlikar, J.V., {\it Lectures on General Relativity and Cosmology} (The Macmillan Company of India) (1978)\\ $[2]$ Schutz, B., {\it A First Course in General Relativity} (Cambridge University Press) (1995)\\ $[3]$ d'Inverno, R., {\it Introducing Einstein's General Theory of Relativity} (Clarendon Press, Oxford) (2003) \\ $[4]$ Einstein, A., {\it Preuss. Akad. Wiss. Berlin, Sitzber.}, \textbf{778}, \textbf{799}, \textbf{831} and \textbf{844} (1915)\\ $[5]$ Einstein, A., {\it Preuss. Akad. Wiss. Berlin, Sitzber.}, \textbf{142} (1917) \\ $[6]$ Einstein, A., {\it The Meaning of Relativity} (Methuen, London) (1951) \\ $[7]$ Einstein, A., {\it Relativity: the special and general theory} (Methuen, London) (1920)\\ $[8]$ Schwarzschild, K., {\it Sitzber. Deut. Akad. Wiss. Berlin, Kl. Math.-Phys. Tech.} \textbf{189} (1916) \\ $[9]$ Mukhopadhyay, B., {\it Astrophysical Journal} {\bf 581} (2002) 427.\\ $[10]$ Paczy\'{n}ski, B. and Wiita, P., {\it Astron. Astrophys.} \textbf{88} (1980) 23.\\ $[11]$ Stuchl\'{i}k, Z. and Kov\'{a}\v{r}, J., {\it Int. J. Mod. Phys. D} \textbf{17} (2008) 2089.\\ $[12]$ Stuchl\'{i}k, Z., Slan\'{y} and Kov\'{a}\v{r}, J., {\it Class. Quantum Grav.} \textbf{26} (2009) 215013.\\ $[13]$ Misner, C.W., Thorne, K.S. and Wheeler, J.A., {\it Gravitation} (Freeman, San Francisco, 1973).\\ \end{document}
\section{Introduction} Meteor clusters are the occurrence of many (typically more than three) meteors detected in a restricted portion of the sky within a few seconds \citep[typically less than 5 s; for a review see][and references therein]{Koten2017}. The existence of meteor clusters has been suspected for decades, but only a handful (namely six) of observations have been reported \citep{Koten2017}. In particular, the Leonid storms occurring between 1998 and 2002 raised the question as to whether such observations happened simply by chance, given the high number of meteors recorded during each event. Evidence of their genuine existence has been reported that disfavors their observation being merely the result of statistical fluctuations \citep{Watanabe2002,Toth2004a}. Given the ever-growing number of meteor cameras surveying the sky around the globe every night, such events are expected to be reported more frequently. However, even with more than a thousand meteor detection cameras running today \citep[see][for a review of all the networks]{Koten2019}, meteor cluster observations are still very rare events. The exact origin of meteor clusters is poorly known. A probable process is thermal stressing of very fragile comet dust \citep{Watanabe2003}, a hypothesis recently confirmed by \cite{Capek2022} for the case of the 2016 September $\epsilon$-Perseid (SPE) cluster. Meteoroid disruption drives the level of meteor showers, which itself depends on the structure of the parent comet. \cite{Jenniskens2008} reported a lack of fluffy meteoroids in an old Leonid trail, which these authors suggested is possibly explained by meteoroid self-disruption in interplanetary space (although other hypotheses might explain this observation).
In order to explain the present quasi-steady state of the level of sporadic meteors and of the amount of zodiacal dust, models must take into account the meteoroid life expectancy as well as the replenishment mechanisms \citep{Wiegert2009,Levasseur2020}. Such mechanisms include the gravitational perturbation of long-period comets, the structure and population of the Oort cloud, the role of the giant planets (especially Jupiter) in removing or accreting small bodies in the inner Solar System, and so on. Therefore, the frequency of meteoroid self-fragmentation in space has implications for our current understanding of the Solar System. Since \cite{Koten2017}, only one meteor cluster observation has been reported \citep{HawaiiCluster2021}, although an extensive search was recently performed among the Geminids \citep{Koten2021}. One open question concerns the frequency of spontaneous meteoroid breakups in interplanetary space, which lead to meteor clusters, and how these breakups influence the life expectancy of meteoroids. Here, we report another unambiguous detection of a meteor cluster, of 34 fragments detected within 7.5 s, during the 2022 $\tau$-Herculid outburst caused by the 1995 trail ejected from the Jupiter-family comet 73P/Schwassmann-Wachmann 3 (hereafter 73P). The detection was realized using a novel computer-vision application that was able to detect 100\,\% of the meteors that a human eye can see in the video. The results enable a discussion of the origin, frequency, and implications of such events. \section{Observations}\label{sec:obs} \begin{figure*}[!htbp] \centering \includegraphics[width=0.9\textwidth,keepaspectratio]{fig_cluster_num} \caption{Composite closeup view of the detection of the 34-fragment $\tau$-Herculid meteor cluster from an airborne observation campaign.
The center of the field of view points toward the little dipper (\textit{Ursa Minor}) constellation.} \label{fig:composite} \end{figure*} \newcommand{\MR}[1]{\multirow{2}{*}{#1}} \begin{table* \centering \caption{$\tau$-Herculid meteor cluster characterization: beginning time in UT, duration ($\mathcal{D}$) in seconds, and apparent magnitude ($\mathcal{M}$). Both ground-truth and automatic detection results (see Sec.~\ref{sec:detection}) are reported.} {\small \begin{tabular}{rrrrrr|rrrrrr} \toprule & \multicolumn{2}{c}{Ground truth} & \multicolumn{2}{c}{Automatic detection} & & & \multicolumn{2}{c}{Ground truth} & \multicolumn{2}{c}{Automatic detection} & \\ \cmidrule(lr){2-3} \cmidrule(lr){4-5} \cmidrule(lr){8-9} \cmidrule(lr){10-11} {\#} & {Beginning} & {$\mathcal{D}$} & {Beginning} & {$\mathcal{D}$} & {$\mathcal{M}$} & {\#} & {Beginning} & {$\mathcal{D}$} & {Beginning} & {$\mathcal{D}$} & {$\mathcal{M}$} \\ \midrule 1 & {06:48:55.959} & 0.30 & {06:48:55.959} & 0.30 & -0.19 & 19 & {06:48:59.009} & 0.20 & {06:48:59.059} & 0.15 & 0.62 \\ 2 & {06:48:56.359} & 0.50 & {06:48:56.409} & 0.35 & -1.50 & 20 & {06:48:59.209} & 0.10 & {06:48:59.209} & 0.10 & 1.79 \\ 3 & {06:48:56.359} & 0.80 & {06:48:56.359} & 0.80 & -2.06 & 21 & {06:48:59.359} & 0.25 & {06:48:59.409} & 0.20 & -0.21 \\ 4 & {06:48:56.909} & 0.10 & {06:48:56.909} & 0.10 & 0.87 & 22 & {06:48:59.559} & 0.35 & {06:48:59.559} & 0.30 & -0.56 \\ 5 & {06:48:57.209} & 0.10 & {06:48:57.209} & 0.10 & 1.19 & 23 & {06:48:59.759} & 0.35 & {06:48:59.759} & 0.10 & 0.73 \\ 6 & {06:48:57.259} & 0.20 & {06:48:57.309} & 0.10 & 0.92 & 24 & {06:48:59.809} & 0.15 & {06:48:59.809} & 0.10 & 1.33 \\ 7 & {06:48:57.509} & 0.45 & {06:48:57.509} & 0.40 & -1.07 & 25 & {06:48:59.809} & 0.50 & {06:48:59.809} & 0.50 & -0.65 \\ 8 & {06:48:57.559} & 0.15 & {06:48:57.599} & 0.15 & 0.86 & 26 & {06:48:59.859} & 0.25 & {06:48:59.859} & 0.20 & 0.41 \\ 9 & {06:48:57.559} & 0.45 & {06:48:57.599} & 0.25 & -1.40 & 27 & {06:49:00.009} & 0.35 & 
{06:49:00.009} & 0.30 & -1.72 \\ 10 & {06:48:57.609} & 0.15 & {06:48:57.659} & 0.10 & 0.27 & 28 & {06:49:00.559} & 0.15 & {06:49:00.559} & 0.15 & 0.70 \\ 11 & {06:48:57.709} & 0.45 & {06:48:57.809} & 0.25 & -0.88 & 29 & {06:49:00.609} & 0.30 & {06:49:00.709} & 0.10 & -1.09 \\ 12 & {06:48:57.809} & 0.20 & {06:48:57.809} & 0.15 & 0.92 & \MR{30} & \MR{{06:49:00.709}} & \MR{0.25} & {06:49:00.759} & 0.10 & \MR{0.26} \\ 13 & {06:48:57.859} & 0.50 & {06:48:57.859} & 0.50 & -0.68 & & & & {06:49:00.809} & 0.15 & \\ 14 & {06:48:58.159} & 0.15 & {06:48:58.159} & 0.15 & 1.44 & 31 & {06:49:00.809} & 0.35 & {06:49:00.859} & 0.25 & -0.31 \\ 15 & {06:48:58.559} & 0.55 & {06:48:58.659} & 0.45 & -1.32 & 32 & {06:49:02.009} & 0.30 & {06:49:02.009} & 0.30 & 0.27 \\ 16 & {06:48:58.609} & 0.20 & {06:48:58.659} & 0.10 & -0.31 & 33 & {06:49:02.059} & 0.25 & {06:49:02.059} & 0.20 & -0.82 \\ 17 & {06:48:58.709} & 0.30 & {06:48:58.709} & 0.25 & -0.01 & 34 & {06:49:03.309} & 0.15 & {06:49:03.309} & 0.15 & 0.56 \\ 18 & {06:48:58.859} & 0.15 & {06:48:58.859} & 0.15 & 0.85 & & & & & & \\ \bottomrule \end{tabular} } \label{tab:charac} \end{table*} \subsection{Campaign and instrument}\label{sec:instr} \citet{YeVaubaillon2022} predicted that the 2022 $\tau$-Herculids meteor shower would be visible on 31 May 2022, and indeed it was successfully observed during an airborne observation campaign led by the University of Southern Queensland (F.Z., D.B.) and supported by Rocket Technologies International (S.G.). On board the aircraft, a suite of low-light scientific cameras were installed at several windows, and the sky was monitored continuously. A detailed description of the whole campaign is beyond the scope of this paper, but will be published in a dedicated paper. In addition to the imaging systems, spectroscopic systems were mounted in parallel. Windows on both sides of the aircraft were equipped with cameras, and the flight path was chosen according to the predictions \citep{YeVaubaillon2022}. 
In this paper, we present results from data collected by a Basler acA1920-155um camera equipped with a Basler 6 mm f/1.4 lens. The gain was set to its maximum value (36), and 20 images per second were taken during the 4 hours of the flight. In order to compensate for the roll motion of the airplane (a Phenom 300), a G6-Max camera stabilizer was used. The camera was controlled with a Raspberry Pi 4 mini-computer running the ``RMS'' acquisition and meteor detection software \citep{Vida.et.al2016,Vida2021}. In addition, an AMOS-Spec-HR camera (Comenius Univ.) was mounted at another plane window. The hardware was a DMK 33UX252 (resolution of $2048 \times 1536$ px, set to 14 fps) equipped with a 6 mm f/1.4 lens, providing a FOV of $60 \times 45 \deg$. \subsection{Detection of the meteor cluster}\label{sec:descri} With the described settings, we detected 165 $\tau$-Herculid meteors and five sporadic meteors. At the time of the shower outburst maximum (around 05:00 UT), we detected about one meteor per minute. While the level of the shower was decreasing, starting at 06:48:56 UT, the Basler camera detected 34 meteors, all coming from the $\tau$-Herculid radiant. The AMOS camera, being slightly more sensitive, allowed the detection of 38 meteors within 11.3 s. Figure~\ref{fig:composite} shows a composite image of the meteor cluster (as detected by the Basler camera). This cluster observation was not reported by any of the ground-based meteor networks. \section{Meteor cluster characterization}\label{sec:cara} The whole meteor cluster characterization was performed with the data from the Basler camera and is detailed in Table~\ref{tab:charac}. Further characterization of the whole shower is ongoing. The total duration of the event is 11.3 s, but the Basler camera detected it for only 7.5 s. The maximum angular distance between all the meteors is $\sim 50 \; \deg$.
The average airplane position during the cluster was lat=34.20 $\deg$, lon=-101.88 $\deg$, alt=14201 m, and the camera was pointing toward the left-hand side of the aircraft. The entry velocity was computed using the algorithm developed by \cite{Neslusan1998}. Given the low entry velocity of the $\tau$-Herculids (12.2 km.s$^{-1}$), it is reasonable to assume an average meteor altitude of $90$ km. Individual meteor azimuths and elevations (above the horizon) were measured. The relative apparent angular distance between fragments is in the range $[0.25;50.2]\;\deg$, the measured elevation is within $[41.2;70.6]\;\deg$, and the physical distance between two fragments is within $[0.4;90.5]$ km. Taking the total duration of the event into account, the maximum possible physical distance between all the fragments is $D_m=[227;244]$ km. Following the methodology of \cite{Koten2017}, we find that, assuming a Poissonian distribution of meteors (this assumption is discussed in Sect.~\ref{sec:discuss:freq}), the probability of such clustering by chance is $\sim 5.5 \times 10^{-22}$ at best. As a result, we consider the chance observation of this number of meteors during such a short time period to be highly improbable, and conclude that the disintegration of a parent $\tau$-Herculid meteoroid took place in interplanetary space. Assuming a zero ejection velocity for all the fragments, the maximum time between the parent meteoroid disintegration in interplanetary space and the Earth atmosphere entry strongly depends on the considered meteoroid size. We converted the apparent magnitudes into absolute magnitudes (assuming a meteor altitude of $90$ km), and then converted the latter into equivalent photometric masses \citep[using][]{Hughes1995} and radii. The radii range from $7.5$ mm to $22.4$ mm (assuming a density of 2500 kg.m$^{-3}$). The total mass of the initial meteoroid is estimated to be 1.16 kg.
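The chance-alignment probability quoted above is the tail of a Poisson distribution, following the methodology of \cite{Koten2017}. A minimal sketch of such a computation is given below; the background rate and time window are illustrative placeholders, not the exact inputs behind the $5.5 \times 10^{-22}$ figure.

```python
from math import exp, factorial

def poisson_tail(n, mu, terms=60):
    """P(N >= n) for a Poisson variable with mean mu, summed directly:
    computing 1 - P(N < n) would underflow for tails this small."""
    return sum(mu**k * exp(-mu) / factorial(k) for k in range(n, n + terms))

# Illustrative numbers: ~1 meteor/min background rate, 34 meteors within 7.5 s.
rate = 1.0 / 60.0            # meteors per second (placeholder value)
p = poisson_tail(34, rate * 7.5)
```

The direct tail summation matters: in double precision, $1 - P(N < 34)$ evaluates to exactly zero for a mean this small, whereas the summed tail remains representable.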
The maximum age of the cluster is computed using the smallest size, as this is the most sensitive to the solar radiation pressure, and is found to lie in the range $[13.3;13.8]$ days. Figure~\ref{fig:sfd} shows the absolute magnitude distribution of the cluster. The population index is $r=2.01$, corresponding to a differential size distribution index of $s=3.09$. This feature is discussed in Sect.~\ref{sec:discuss:prop}. \begin{figure}[!htbp] \centering \includegraphics[width=0.49\textwidth,keepaspectratio]{plot_cumulative_magnitude2} \caption{Cumulative absolute magnitude distribution of the cluster fragments.} \label{fig:sfd} \end{figure} \section{Computer-vision detection}\label{sec:detection} \begin{figure*}[!htbp] \centering \includegraphics[width=1\textwidth,keepaspectratio]{fig_detection_chain} \caption{Computer-vision detection detailed chain. Plain gray boxes correspond to input data, plain white boxes are the processing, and the italicized brown texts are the processing outputs.} \label{fig:motion_detection} \end{figure*} \begin{figure*}[!htbp] \centering \includegraphics[width=0.9\textwidth,keepaspectratio]{plot_meteors_duration_histo} \caption{Overlapping duration of the ground truth and the automatic detection for the meteor cluster. The automatic detection bars are placed relative to the beginning time of each meteor.} \label{plot:meteors_duration} \end{figure*} The real-time open-source software detection chain named \emph{Fast Meteor Detection Toolbox} (FMDT) was applied in order to detect the meteors in the imaging data\footnote{FMDT repository: \url{https://github.com/alsoc/fmdt}}. FMDT is derived from software designed to detect meteors on board the ISS or a CubeSat~\citep{Millet2022_Meteorix_COSPAR,Millet2022_Meteorix_WGN,Petreto2018_SIMD_GPU_OF_DASIP}. FMDT is foreseen to be applied to airborne camera systems, for example on atmospheric balloons or aircraft. It is robust to camera movements thanks to a motion-compensation algorithm.
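The motion compensation mentioned above amounts to estimating a global rigid transform between consecutive frames from matched point positions. A minimal sketch of such an estimation via the Kabsch algorithm follows; it illustrates the principle only and is not FMDT's actual implementation (function names are ours).

```python
import numpy as np

def rigid_registration(src, dst):
    """Least-squares rigid transform (rotation R, translation t) mapping
    the 2D point set `src` onto `dst`, via the Kabsch algorithm."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)            # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t

def compensation_errors(src, dst, R, t):
    """Per-point residual (pixels) after applying the estimated motion."""
    pred = (R @ np.asarray(src, float).T).T + t
    return np.linalg.norm(pred - np.asarray(dst, float), axis=1)
```

Star positions, which dominate the matches, drive the estimate; the residuals of this compensation are the per-component errors used by the classification criterion described below.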
Figure~\ref{fig:motion_detection} presents the whole FMDT detection chain. For each pair of images, an intensity hysteresis threshold, a connected component labeling, and an analysis algorithm~\citep{Lemaitre2020_SIMD_CCL_WPMVP,Lacassagne2009_LSL_ICIP} are applied to obtain a list of connected components (CCs) with their bounding boxes and surface $S$. This step also provides the first raw moments, used to compute the centroid $(x_G,y_G)=(S_x/S,S_y/S)$ of each blob of pixels. A morphological threshold is then set on the surface $S$ to reject CCs that are too small or too large. A $k$-nearest neighbor matching process is then applied to extract pairs of CCs from images $I_{t+0}$ and $I_{t+1}$, with $t$ the image number in the video sequence. These matches are used to perform a first global motion estimation (rigid registration). This motion estimation is used to classify the CCs into two classes, namely nonmoving stars and moving meteors, according to the following criterion: $|e_k-\bar{e_t}| > \sigma_t$, where $e_k$ is the compensation error of CC number $k$, $\bar{e_t}$ is the average compensation error over all CCs of image $I_t$, and $\sigma_t$ is the standard deviation of this error. A second motion estimation is carried out with only the nonmoving star CCs in order to obtain a more accurate motion estimation and a more robust classification. Finally, piece-wise tracking is carried out by extending the ($t+0,t+1$) matching with the ($t+1,t+2$) matching to reduce the number of false-positive detections. For the present video data, the geometric mean of the compensation error over the whole sequence is 0.91 pixels for the first estimation and 0.18 pixels for the second one. The apparent speed varies from 3 up to 10 pixels/frame. For the considered video sequence, FMDT was able to detect and track 100\% of the meteors visible to the naked eye, with only four false positives. The proposed solution was compared with a {manual} detection (where an expert watched and labeled the entire video).
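The star/meteor classification criterion above, $|e_k-\bar{e_t}| > \sigma_t$, can be sketched as follows (a toy illustration with hypothetical error values, not FMDT code):

```python
import numpy as np

def classify_moving(errors):
    """Flag connected components whose motion-compensation error deviates
    from the per-image mean by more than one standard deviation; these are
    the moving (meteor candidate) CCs, the rest are treated as static stars."""
    e = np.asarray(errors, dtype=float)
    return np.abs(e - e.mean()) > e.std()

# Hypothetical per-CC compensation errors in pixels: only the last CC moves.
moving = classify_moving([0.10, 0.12, 0.11, 0.09, 2.50])
```

As described above, a second pass then re-estimates the global motion from the static CCs only, which tightens $\bar{e_t}$ and $\sigma_t$ and makes the classification more robust.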
This {manual} detection constitutes the ``ground truth'' and initially identified 28 meteors, with meteors 4, 8, 16, 18, 20, and 29 being missed. The ground truth was then enhanced thanks to the automatic detection chain. This demonstrates the need for an automated system for meteor detection. Figure~\ref{plot:meteors_duration} shows the overlap between the meteors detected automatically and those of the ground truth. We define the tracking rate $\mathcal{T}_r$ as the ratio of the cumulative duration of the automatically detected meteors to the cumulative duration of the ground-truth meteors: \begin{equation*} \mathcal{T}_r = \left( \sum_{m=1}^{34} \mathcal{D}_m^\text{auto-detect} \right) / \left( \sum_{m=1}^{34} \mathcal{D}_m^\text{ground truth} \right), \end{equation*} where $\mathcal{D}_m$ is the duration of the considered meteor $m$. In the observed video sequence, $\mathcal{T}_r = 80.4 \%$. A video of the sequence with meteor tracking is available online\footnote{Meteor cluster sequence with highlighted detection: \\ \url{https://lip6.fr/adrien.cassagne/data/tauh/tracks.mp4}}. For most of the meteors, the automatic detection is very close to reality. Moreover, the minimum time required for a {manual} detection is close to the full duration of the video sequence, while the proposed application runs in real time and is compatible with the CubeSat power consumption constraint. FMDT is also able to leverage multi-core processor architectures through a task graph description and the use of the AFF3CT multi-threaded runtime library~\citep{Cassagne2019a,Cassagne2021, Cassagne2022b}. AFF3CT was designed for digital communication systems but is well adapted to real-time image processing. \section{Discussion}\label{sec:discuss} \subsection{Meteoroid properties}\label{sec:discuss:prop} The measured cluster differential size distribution index $s=3.1$ is slightly lower than expected from a collision cascade \citep[3.5; see e.g.,][]{OBrienGreenberg2005}.
We examine how the derived $s$ value compares to other measurements. Reanalyzing the 2016 SPE cluster, whose parent body is unknown, \cite{Capek2022} find a shallow $s=1.85$. Similarly, for decameter-size fragments ejected by 73P, \cite{Reach2009} found two relatively low size distribution indices of $s=1.84$ and $2.56$ for the smallest and the largest fragments, respectively. However, given the suspected rocket effect involved in such an event, the physical process is presumably different from what is at play in a meteor cluster. Measurements for comet 67P/Churyumov-Gerasimenko provide a wide range of values, depending on the comet heliocentric distance and meteoroid size range \citep[see][and references therein for a review]{Guttler2019}. For sizes comparable to the meteoroids responsible for visual meteors, extremely low values of $s=1.8$ were derived when the comet was at a high heliocentric distance \citep{Merouane2016}. Higher values of $s>3.5$ were found after perihelion \citep{Fulle2004,Moreno2017,Fulle2010}. Last but not least, an extremely high value of $s=6.4$ was found by \cite{Vida2021} for a meteoroid fragmenting in the atmosphere. Interestingly, these extreme values of $s$ are derived for two drastically different meteoroid environments. The lowest $s$ values are found when the meteoroid breaks up in interplanetary space, and very high values are found when this takes place in the Earth's atmosphere. Whether this difference reflects the way meteoroids interact with a gaseous environment is unclear, and investigating this matter would require additional work. The value reported here is high compared to that of \cite{Koten2017}, but is still at the low end of all reported values. \cite{Guttler2019} provide a review of the literature for 67P, and recall that meteoroids do not fragment in the coma. It is worth pointing out that the size distribution changes as a function of heliocentric distance.
If this is true for all comets, and as the meteoroids of a given trail are ejected at different heliocentric distances, a meteoroid trail is therefore composed of a wide variety of meteoroid subtrails, each described by its own size distribution. In addition, within a trail, the meteoroids are mixed because of the relatively wide range of sizes and ejection velocity vectors. The portion of a meteoroid subtrail sampled by the Earth during a meteor shower is therefore a mixture of all these size distributions. The size distribution index $s$ has a tremendous influence on the level of a shower \citep[number of meteors per unit of time, ][]{Vaubaillon2005a,Vaubaillon2005b}. Future work is needed to quantify the influence of modeling a variable size distribution on the prediction of meteor showers, and to determine how this might reconcile past post-predictions with observations (e.g., for the 2006 Leonids). \subsection{Meteoroid cluster frequency}\label{sec:discuss:freq} Meteoroids are known to fragment in the atmosphere with very high probability \citep[$90\%$; see][]{Subasinghe2016}. A cluster observation, in contrast, results from an interplanetary fragmentation event (see above). However, this is very rarely observed: only seven clusters have been reported in the past $\sim 40$ years of meteor observations. With the new data from the $\tau$-Herculids campaign, we find that the probability of such a cluster observation occurring by chance is $P \sim 5.5 \times 10^{-22}$ (see Sect.~\ref{sec:cara}). \cite{Sampson2007} points out that the assumption of a Poisson distribution might not be appropriate, and generally underestimates $P$. However, even if an extreme error of a factor of $10^9$ is assumed in the case presented here, this still leads to $P\sim 10^{-13}$, showing the extreme rarity of the phenomenon.
Computing a meteor cluster observation frequency would require considering the limiting magnitude as a function of time, the software detection efficiency, and the camera running efficiency, among others, for each meteor-detection camera. As such a thorough study is beyond the scope of this paper, we only attempt to provide an order-of-magnitude estimate. In the past 20 years, $\sim 2 \times 10^6$ meteors were observed by the IMO during a total effective observation time of $\sim 8 \times 10^6$ hours \citep{Molau2021}. The EDMOND database currently\footnote{https://www.meteornews.net/edmond/edmond/edmond-database/, accessed on 28 July 2022} counts $\sim 4.6 \times 10^6$ meteors gathered between 2000 and 2016 \citep[][]{Kornos2014}. The SonotaCo and GMN networks have recorded totals of $\sim 3.5 \times 10^5$ and $\sim 2.2 \times 10^5$ meteors, respectively, over the past 14 years \citep{SonotaCo2021,Vida2021}. During this time, only a handful of clusters were reported \citep{Koten2017}. From our experience, meteor-detection software working on video data (RMS, UFOCapture, and FreeTure) is able to detect more than one meteor in a given frame. This is enough to conclude that meteor clusters occur with a frequency of less than one in a million observed meteors. All reported clusters happened during a meteor shower \citep[][and this work]{Watanabe2003,Koten2017}. Their orbits cover all possible cometary orbit types: the $\tau$-Herculids for the Jupiter-family comet (JFC) type, the Leonids for the Halley type (HT), and the September Perseids for the long-period (LP) type. The time between the disruption in interplanetary space and the entry into the atmosphere was estimated to be only a few days. This represents less than $0.3$\% of the orbital period of a JFC meteoroid.
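The order-of-magnitude estimate above can be reproduced directly from the quoted network totals. This is a back-of-envelope sketch: overlaps between networks and observational biases are deliberately ignored.

```python
# Meteor totals quoted in the text (IMO, EDMOND, SonotaCo, GMN).
totals = {"IMO": 2.0e6, "EDMOND": 4.6e6, "SonotaCo": 3.5e5, "GMN": 2.2e5}
clusters_reported = 7  # clusters reported over ~40 years (see text)

meteors_observed = sum(totals.values())           # ~7.2e6 meteors
cluster_frequency = clusters_reported / meteors_observed
# cluster_frequency is below 1e-6: less than one cluster per million meteors
```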
A short heliocentric distance presumably increases the chances of self-fragmentation of meteoroids, given the higher micro-meteoroid space density, the higher temperature and thermal stresses, and the generally higher influence of radiation on their rotation state. \subsection{Origin of meteoroid self-fragmentation}\label{sec:discuss:orig} The cluster presented here became visible nearly 2 h after the expected maximum of the $\tau$-Herculids shower outburst caused by the trail ejected from comet 73P in 1995. No encounter with any other trail was expected at this time. The age of the parent meteoroids cannot be pinpointed, but given the lifetime expectancy of Jupiter-family streams, it is likely to be a few hundred years at most \citep{Vaubaillon2019}. Out of the eight meteor cluster detections reported to date (including this study), only that of \cite{PiersHawkes1993} was not related to a known meteor shower. The extreme fragility of some cometary meteoroids \citep{Hornung2016} might explain this feature. The physical processes most often invoked to explain meteoroid fragmentation in interplanetary space are thermal stresses, collisions, rotational outbursts, and outgassing of volatile material. The processes involved in the natural release of meteoroids from an active asteroid were described by \cite{Jewitt2015}. \cite{Capek2022} found that thermal stress was most probably responsible for the 2016 SPE meteor cluster. \subsection{Future application of the developed algorithm}\label{sec:discuss:algo} In addition to the cluster detection, we have presented a first application of the new processing chain for meteor detection named FMDT. This toolbox is derived from the CubeSat project Meteorix, dedicated to the detection of meteors and space debris from space \citep{Rambaux19, Rambaux21_JIMO}. This detection chain allows the real-time identification of meteors and enables the autonomous selection of scientific data to be sent back to Earth from on board a CubeSat.
The full chain also contains an optical flow algorithm for accurate motion estimation. The agreement in detections between the ``RMS'' software \citep{Vida.et.al2016,Vida2021} and the new approach proposed by our team \citep{Millet2022_Meteorix_COSPAR,Millet2022_Meteorix_WGN,Petreto2018_SIMD_GPU_OF_DASIP} allows us to test and validate the implemented approach and to increase its Technology Readiness Level to 5. Such a tool might be used for future detection of meteors from orbiting spacecraft \citep[using e.g., the SPOSH camera;][]{Bouquet2014,Oberst2011}, or more generally from mobile observation platforms \citep{Vaubaillon2021}. \section{Conclusion} We describe and fully characterize the eighth meteor cluster ever reported. Based on our analysis of the observation data, we conclude that meteor clusters are observed less often than once per million observed meteors. The derived differential size distribution index $s=3.1$ is relatively shallow. This index varies with heliocentric distance for regular comet outgassing. Future meteor-shower-prediction models might take this phenomenon into account for better accuracy. We developed an open-source computer-vision-based toolbox, namely the Fast Meteor Detection Toolbox (FMDT), to detect and track meteors. In spite of the acquisition camera instability caused by the aircraft, it was able to detect 100\% of the meteors that are detectable in the video with the naked eye, even faint ones of high magnitude. \begin{acknowledgements} MoMet is supported by the ``Programme National de Planetologie'' and IMCCE / Observatoire de Paris / PSL. The airborne campaign was organized and funded through RTI and supported by the University of Southern Queensland. Maxime Millet's PhD grant is funded by the Region Ile-de-France. Dr. Fabian Zander is funded by the Australian Research Council through DECRA number DE200101674. Dr. Juraj Toth was supported by ESA contract No.
4000128930 /19/NL/SC, the Slovak Research and Development Agency grant APVV-16-0148, the Slovak Grant Agency for Science grant VEGA 1/0218/22. \end{acknowledgements} \bibliographystyle{aa}
\section{Introduction} Let $V$ be a vertex operator (super)algebra, and for a fixed positive integer $k$, consider the tensor product vertex operator (super)algebra $V^{\otimes k}$ (see \cite{FLM3}, \cite{FHL}). Any element $g$ of the symmetric group $S_k$ acts on $V^{\otimes k}$ as a vertex operator (super)algebra automorphism, and thus it is appropriate to consider $g$-twisted $V^{\otimes k}$-modules. In \cite{BDM}, the first author, along with Dong and Mason, constructed and classified the $g$-twisted $V^{\otimes k}$-modules for $V$ a vertex operator algebra. In \cite{B-superpermutation-odd}, the first author constructed and classified the $(1 \;2 \; \cdots \; k)$-twisted $V^{\otimes k}$-modules for $V$ a vertex operator superalgebra and $k$ odd. In addition, in \cite{B-superpermutation-odd} and \cite{BV-fermion}, we showed that the construction and classification for the case of an even order permutation is fundamentally different in the super case than that for odd order permutations, and conjectured that parity-twisted $V$-modules, rather than untwisted $V$-modules, play a central role. In the present paper, as our main result, we give an explicit construction and classification of $(1 \; 2\; \cdots \; k)$-twisted $V^{\otimes k}$-modules for $k$ even and $V$ any vertex operator superalgebra. In particular, we show that for $k$ even, the category of weak $(1 \; 2\; \cdots \; k)$-twisted $V^{\otimes k}$-modules is isomorphic to the category of weak {\it parity-twisted} $V$-modules. Here the parity map $\sigma$ on any $\mathbb{Z}_2$-graded vector space is the identity on the even subspace and $-1$ on the odd subspace.
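In symbols, for homogeneous $v \in V$ of parity $|v| \in \{0, 1\}$ (i.e., $v \in V_{\bar{0}}$ or $v \in V_{\bar{1}}$, respectively), the parity map is given by
\begin{equation*}
\sigma(v) \, = \, (-1)^{|v|} v, \qquad \sigma^2 = \mathrm{id}_V,
\end{equation*}
so that $\sigma$ is a vertex operator superalgebra automorphism of $V$ of order two.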
This result is in contrast to the results of \cite{BDM} for vertex operator algebras and the results of \cite{B-superpermutation-odd} for vertex operator superalgebras when $k$ is odd; in these cases it was shown that the category of weak $(1 \; 2\; \cdots \; k)$-twisted $V^{\otimes k}$-modules is isomorphic to the category of weak {\it untwisted} $V$-modules. Thus the class of examples we construct and classify in this paper (i.e., the case when $k$ is even and $V$ is a vertex operator superalgebra) is of fundamental importance in understanding the role of the parity map and parity-twisted modules in the theory of vertex operator superalgebras. The results of this paper give formulas for the graded dimensions of $(1 \; 2 \; \cdots \; k)$-twisted $V^{\otimes k}$-modules in terms of the graded dimensions of parity-twisted $V$-modules, and vice versa, as in Corollary \ref{graded-dimension-corollary}. Next we give some background on the theory of twisted modules in general, followed by some further comments on the implications of the results of this paper for supersymmetric theories and for lattice vertex operator superalgebras. Twisted vertex operators were discovered and used in \cite{LW}. Twisted modules for vertex operator algebras arose in the work of I. Frenkel, J. Lepowsky and A. Meurman \cite{FLM1}, \cite{FLM2}, \cite{FLM3} for the case of a lattice vertex operator algebra and certain lifts of the lattice isometry $-1$, in the course of the construction of the moonshine module vertex operator algebra (see also \cite{Bo}). This structure came to be understood as an ``orbifold model'' in the sense of conformal field theory and string theory.
Twisted modules are the mathematical counterpart of ``twisted sectors'', which are the basic building blocks of orbifold models in conformal field theory and string theory (see \cite{DHVW1}, \cite{DHVW2}, \cite{DFMS}, \cite{DVVV}, \cite{DGM}, as well as \cite{KS}, \cite{FKS}, \cite{Ba1}, \cite{Ba2}, \cite{BHS}, \cite{dBHO}, \cite{HO}, \cite{GHHO}, \cite{Ba3} and \cite{HH}). Orbifold theory plays an important role in conformal field theory and in supersymmetric generalizations, and is also a way of constructing a new vertex operator (super)algebra from a given one. Formal calculus arising {}from twisted vertex operators associated to an even lattice was systematically developed in \cite{Le1}, \cite{FLM2}, \cite{FLM3} and \cite{Le2}, and the twisted Jacobi identity was formulated and shown to hold for these operators (see also \cite{DL2}). These results led to the introduction of the notion of $g$-twisted $V$-module \cite{FFR}, \cite{D}, for $V$ a vertex operator algebra and $g$ an automorphism of $V$. This notion records the properties of twisted operators obtained in \cite{Le1}, \cite{FLM1}, \cite{FLM2}, \cite{FLM3} and \cite{Le2}, and provides an axiomatic definition of the notion of twisted sectors for conformal field theory. In general, given a vertex operator algebra $V$ and an automorphism $g$ of $V$, it is an open problem how to construct a $g$-twisted $V$-module. A theory of twisted operators for integral lattice vertex operator superalgebras and finite automorphisms that are lifts (in a certain way) of a lattice isometry was studied in \cite{DL2} and \cite{Xu}, and the general theory of twisted modules for vertex operator superalgebras was developed by Li in \cite{Li2}. Certain specific examples of permutation-twisted sectors in superconformal field theory have been studied from a physical point of view in, for instance, \cite{FKS}, \cite{BHS}, \cite{Maio-Schellekens1}, \cite{Maio-Schellekens2}.
Here we would like to point out the implications of the current work for lattice vertex operator superalgebras. If $V_K$ is a vertex operator superalgebra associated to a positive-definite integral lattice $K$, then $V^{\otimes k}_K$ is the vertex operator superalgebra $V_L$ associated to the lattice $L$, where $L$ is the orthogonal direct sum of $k$ copies of $K$. In the case when the lattice $K$ is even, $V_K$ is a vertex operator algebra, and a $k$-cycle permutation of $V^{\otimes k}_K = V_L$ is a lift of a lattice isometry of $L$ of order $k$. Thus one can use either the construction of \cite{BDM} or the construction of Lepowsky, \cite{Le1}, \cite{DL2}, to develop a theory of $(1 \; 2 \; \cdots \; k)$-twisted $V^{\otimes k}_K$-modules. This overlap of constructions was studied by the first author, along with Huang and Lepowsky, in \cite{BHL}. One of the interests in this overlap is that it holds potential for understanding the geometric underpinnings of twisted theory. In particular, it can be used to study the relationship between the space-time geometric setting of the orbifolding used to construct the twisted sectors following the work first initiated by Lepowsky versus the worldsheet geometric setting of the orbifolding used to construct the twisted sectors following \cite{BDM}. This overlap of constructions of $(1 \; 2 \; \cdots \; k)$-twisted $V^{\otimes k}_K$-modules for lattice vertex operator algebras also holds in the setting of lattice vertex operator superalgebras if and only if $k$ is odd. That is, if $K$ is integral instead of even as a lattice, and the positive integer $k$ is odd, the $k$-cycle permutation isometry acting on $L$ by permuting the $k$ copies of $K$ lifts to the $k$-cycle automorphism of the vertex operator superalgebra $V^{\otimes k}_K = V_L$.
Then one can use either the construction of \cite{B-superpermutation-odd} or the construction of Lepowsky extended to the super setting as in \cite{DL2}, \cite{Xu} (see also \cite{BV-fermion}) to develop a theory of $(1 \; 2 \; \cdots \; k)$-twisted $V^{\otimes k}$-modules. However, for the case of a lattice vertex operator superalgebra, when $k$ is even, one {\it can not} use the theory of \cite{DL2}, \cite{Xu} to construct the $(1 \; 2 \; \cdots \; k)$-twisted $V^{\otimes k}_K$-modules. This is because in order to carry out the program in \cite{DL2}, \cite{Xu}, one must double the order of the lattice isometry in the $k$ even case. But then the lift of the lattice permutation isometry $(1 \; 2 \; \cdots \; k)$ to an automorphism of the vertex operator superalgebra $V_L$ results in an automorphism of order $2k$; it does not yield, under this lifting, the $k$-cycle permutation automorphism of order $k$ acting on $V_K^{\otimes k}$. The details of why this is true are given in \cite{BV-fermion}, in particular, in Remark 4.1. Thus for $k$ even, the construction in the present paper is the {\it only} known construction and classification of these $(1 \; 2 \; \cdots \; k)$-twisted modules for lattice vertex operator superalgebras. In fact, the construction and classification we give here can be used to help shed light on the open problem of how to construct and classify twisted modules for lattice vertex operator superalgebras in general, as well as for other vertex operator superalgebras and a general automorphism. Another important application of the results of this paper comes from considering vertex operator superalgebras which are also supersymmetric. This is the setting of two-dimensional, holomorphic, superconformal field theory, where the vertex operator superalgebras which describe genus-zero particle interactions have additional supersymmetric structure.
A supersymmetric vertex operator superalgebra is a vertex operator superalgebra that, in addition to being a representation of the Virasoro algebra, is also a representation of the $N=n$ Neveu-Schwarz algebra (a Lie superalgebra extension of the Virasoro algebra), where $n$ is the degree of supersymmetry. See, e.g., \cite{Barron-announce}--\cite{Barron-n2twisted}. In the case when $V$ is an $N = n$ supersymmetric vertex operator superalgebra, for $n = 1,2$, the parity-twisted $V$-modules are naturally representations of the $N = n$ Ramond algebra. The $N=n$ Ramond algebra is another extension of the Virasoro algebra to a Lie superalgebra related to the $N=n$ Neveu-Schwarz algebra. In physics terms, the (untwisted) modules for the supersymmetric vertex operator superalgebras are called the ``Neveu-Schwarz sectors'' and the parity-twisted modules are called the ``Ramond sectors''. Then the permutation-twisted modules (i.e., the permutation-twisted sectors) for supersymmetric vertex operator superalgebras are the basic building blocks for permutation orbifold superconformal field theory. The current work, along with the work of the first author in \cite{B-superpermutation-odd}, implies that all permutation-twisted sectors arising in a permutation orbifold superconformal field theory are built up as tensor products of Neveu-Schwarz sectors (coming from the odd cycles) and Ramond sectors (coming from the even cycles). The case of $(1 \; 2)$-twisted $(V \otimes V)$-modules is an especially important class of twisted theories for supersymmetric vertex operator superalgebras since it is often the case, cf. \cite{Barron-n2twisted}, that an N=2 supersymmetric vertex operator superalgebra has the form $V\otimes V$ for $V$ an N=1 supersymmetric vertex operator superalgebra, and the transposition $(1 \;2)$ is then a ``mirror map''.
This is the setting for ``mirror-twisted sectors'', which give rise to representations of the ``topological N=2 superconformal algebra'' (also called the ``twisted N=2 superconformal algebras''), yet another super extension of the Virasoro algebra. See, for example, \cite{Barron-varna} and \cite{Barron-n2twisted}. The present work shows that there is an intimate connection between these mirror-twisted modules and parity-twisted modules, i.e., between mirror-twisted sectors and Ramond sectors. In particular, for N=2 supersymmetric vertex operator superalgebras of the form $V \otimes V$ as studied in \cite{Barron-n2twisted}, and for the mirror map $\kappa$ realized as the permutation $(1 \; 2)$ as given in \cite{Barron-n2twisted}, our main result in this paper implies that the category of weak $\kappa$-twisted $(V\otimes V)$-modules is isomorphic to the category of weak parity-twisted $V$-modules. In other words, for this mirror map $\kappa$, the category of mirror-twisted sectors for $V \otimes V$ is isomorphic to the category of N=1 Ramond twisted sectors of $V$. The details of this application of the results of this paper are given by the first author in \cite{B-varna2013}. One of the motivations for trying to use parity-twisted modules to construct permutation-twisted modules for even order cycles comes from the examples studied by the authors previously in \cite{BV-fermion} of $(1 \; 2 \; \cdots \; k)$-twisted $V^{\otimes k}$-modules for $k$ even and $V$ the one free fermion vertex operator superalgebra, following the work of Dong and Zhao in \cite{DZ2}. This construction for the special case of free fermions studied in \cite{BV-fermion} is completely different from that which we develop in the present paper or that which the first author developed in \cite{B-superpermutation-odd}.
But the shape of the classification in \cite{BV-fermion} and the graded dimensions calculated therein led us to conjecture that the permutation-twisted modules for even order permutations could be achieved in general using parity-twisted modules. In this paper we prove this conjecture by directly constructing a $(1 \; 2 \; \cdots \; k)$-twisted $V^{\otimes k}$-module structure given a parity-twisted $V$-module. Note that above we allude to the possibility of building up permutation-twisted $V^{\otimes k}$-modules for general permutations from the cyclically-twisted modules as constructed in this paper for even cycles and in \cite{B-superpermutation-odd} for odd cycles. However, in the setting of vertex operator superalgebras, this patching together of $g_1 g_2$-twisted $(V_1 \otimes V_2)$-modules from $g_j$-twisted $V_j$-modules, for $j = 1,2$, has subtleties that complicate the situation in comparison to the nonsuper case, as handled, for instance, in \cite{BDM} for $g$ written as the product of disjoint cycles. We hope to address these issues in future work. In addition, in this work we point out a clarification first made in \cite{BV-fermion} about the definition of $g$-twisted $V$-module for $V$ a vertex operator superalgebra. In particular, we point out in Remark \ref{parity-stability-remark} below that the notion of ``parity-unstable $g$-twisted $V$-module'' as used in, for instance, \cite{DZ}, \cite{DZ2}, \cite{DH}, arises from a notion of $g$-twisted $V$-module that is not the natural categorical definition. In Remark \ref{parity-stability-remark}, we recall our result from \cite{BV-fermion}, showing that these so-called ``parity-unstable $g$-twisted $V$-modules'' always come in pairs that together form a ``parity-stable $g$-twisted $V$-module''.
Thus it is more appropriate to take the definition of $g$-twisted $V$-module to be a ``parity-stable $g$-twisted $V$-module" in the language of these other works, and then ``parity-unstable $g$-twisted $V$-modules" are simply parity-unstable {\it invariant subspaces} of a (properly defined) $g$-twisted $V$-module. This is the point of view we take in this paper. This fact concerning the nature of parity-unstable invariant subspaces of parity-stable $g$-twisted $V$-modules, proved in \cite{BV-fermion}, can be used to clarify and simplify many aspects of past works, such as \cite{DZ}, \cite{DZ2}, \cite{DH}. This paper is organized as follows. In Section \ref{definitions-section}, we recall the definition of vertex operator superalgebra, various notions of twisted modules, and some of their properties. In Section \ref{Delta-section}, we define the operator $\Delta_k(z)$ on a vertex operator superalgebra $V$ following \cite{B-superpermutation-odd}, where the first author generalized the analogous operator defined in \cite{BDM} to the setting of vertex operator superalgebras. We then recall several important properties of $\Delta_k(z)$ proved in \cite{B-superpermutation-odd} that are needed in subsequent sections. This $\Delta_k(z)$ is the main operator from which our twisted vertex operators will be built, in analogy to the nonsuper setting of \cite{BDM} and the odd-order super setting of \cite{B-superpermutation-odd}, but now in conjunction with parity-twisted vertex operators acting on a parity-twisted $V$-module rather than, as in \cite{BDM} and \cite{B-superpermutation-odd}, operating in conjunction with vertex operators acting on an untwisted $V$-module. In Section \ref{tensor-product-setting-section}, we develop the setting for $(1 \; 2 \; \cdots \; k)$-twisted $V^{\otimes k}$-modules and study the vertex operators for a parity-twisted $V$-module modified by the orbifolding $x \rightarrow x^{1/k}$ and composing with the operator $\Delta_k(x)$. 
In particular we derive the supercommutator formula for these operators showing that these operators satisfy the twisted Jacobi identity for odd vectors in $V$ if and only if $k$ is an even integer. In Section \ref{tensor-product-twisted-construction-section}, we use the operators to define a weak $g = (1 \; 2 \; \cdots \; k)$-twisted $V^{\otimes k}$-module structure on any weak parity-twisted $V$-module in the case when $k$ is even. As a result we construct a functor $T_g^k$ {}from the category of weak parity-twisted $V$-modules to the category of weak $g$-twisted $V^{\otimes k}$-modules such that $T_g^k$ maps weak admissible (resp., ordinary) parity-twisted $V$-modules into weak admissible (resp., ordinary) $g$-twisted $V^{\otimes k}$-modules. In addition, $T_g^k$ preserves irreducible objects. In Section \ref{classification-section}, we define a weak parity-twisted $V$-module structure on any weak $g$-twisted $V^{\otimes k}$-module, for $V$ a vertex operator superalgebra and $g = (1 \; 2 \; \cdots k)$ for $k$ even. In so doing, we construct a functor $U_g^k$ {}from the category of weak $g$-twisted $V^{\otimes k}$-modules to the category of weak parity-twisted $V$-modules such that $T_g^k \circ U_g^k$ and $U_g^k \circ T_g^k$ are the identity functors on their respective categories. We then use this construction and classification of $(1 \; 2\; \cdots \; k)$-twisted $V^{\otimes k}$-modules in terms of parity-twisted $V$-modules to show in Corollary \ref{graded-dimension-corollary} how the graded dimensions of $(1 \; 2\; \cdots \; k)$-twisted $V^{\otimes k}$-modules are given by the graded dimensions of parity-twisted $V$-modules under the change of variables $q \mapsto q^{1/k}$. We have intentionally organized this paper to parallel the odd order case developed in \cite{B-superpermutation-odd} as well as the nonsuper case developed in \cite{BDM} so as to highlight the similarities and differences between these settings. 
\section{Vertex operator superalgebras, twisted modules and some of their properties}\label{definitions-section} In this section we recall some of the formal calculus we will need, and we recall the notions of vertex superalgebra and of vertex operator superalgebra, following the notational conventions of \cite{LL}. We also recall some properties of such structures. Then we present the notion of $g$-twisted module for a vertex operator superalgebra and an automorphism $g$ following \cite{B-superpermutation-odd} and \cite{BV-fermion}. We discuss some categorical aspects of this definition. Then we briefly discuss the parity map and parity-twisted modules for a vertex operator superalgebra. \subsection{Formal calculus}\label{formal-calculus-section} Let $x, x_0, x_1, x_2,$ etc., denote commuting independent formal variables. Let $\delta (x) = \sum_{n \in \mathbb Z} x^n$. We will use the binomial expansion convention, namely, that any expression such as $(x_1 - x_2)^n$ for $n \in \mathbb C$ is to be expanded as a formal power series in nonnegative integral powers of the second variable, in this case $x_2$. For $r \in \mathbb{C}$ we have \begin{equation}\label{delta-function1} x_2^{-1}\left(\frac{x_1-x_0} {x_2}\right)^{r} \delta\left(\frac{x_1-x_0} {x_2}\right) = x_1^{-1}\left(\frac{x_2+x_0} {x_1}\right)^{-r}\delta \left(\frac{x_2+x_0} {x_1}\right) , \end{equation} and it is easy to see that for $k$ a positive integer, \begin{equation}\label{delta-function2} \sum_{p=0}^{k-1}\left(\frac{x_1-x_{0}}{x_2}\right)^{p/k} x_2^{-1}\delta\left(\frac{x_1-x_0}{x_2}\right)= x_2^{-1}\delta\Biggl(\frac{(x_1-x_0)^{1/k}}{x_2^{1/k}}\Biggr). \end{equation} Therefore, we have the $\delta$-function identity \begin{equation}\label{delta-function3} x_2^{-1} \delta \Biggl( \frac{(x_1 - x_0)^{1/k}}{x_2^{1/k}} \Biggr) = x_1^{-1} \delta \Biggl( \frac{(x_2 + x_0)^{1/k}}{x_1^{1/k}} \Biggr). 
\end{equation} We also have the three-term $\delta$-function identity \begin{equation}\label{three-term-delta} x_{0}^{-1}\delta\left(\frac{x_1-x_2}{x_{0}}\right)-x_{0}^{-1} \delta\left(\frac{x_2-x_1}{-x_{0}}\right)=x_2^{-1}\delta\left(\frac{x_1-x_0}{x_2}\right). \end{equation} Let $R$ be a ring, and let $O$ be an invertible linear operator on $R[x, x^{-1}]$. We define another linear operator $O^{\Lo}$ by \[O^{\Lo} \cdot x^n = O^n x^n \] for any $n \in \mathbb Z$. For example, since the formal variable $z^{1/k}$ can be thought of as an invertible linear multiplication operator from $\mathbb C [x, x^{-1}]$ to $\mathbb{C}[z^{1/k},z^{-1/k}] [x,x^{-1}]$, we have the corresponding operator $z^{(1/k) \Lo}$ {}from $\mathbb C [x,x^{-1}]$ to $\mathbb{C}[z^{1/k},z^{-1/k}] [x,x^{-1}]$. Note that $z^{(1/k) \Lo}$ can be extended to a linear operator on $\mathbb C [[x,x^{-1}]]$ in the obvious way. \subsection{Vertex superalgebras, vertex operator superalgebras, and some of their properties} A {\it vertex superalgebra} is a vector space which is $\mathbb Z_2$-graded (by {\it sign} or {\it parity}) \begin{equation} V= V^{(0)} \oplus V^{(1)} \end{equation} equipped with a linear map \begin{eqnarray} V &\longrightarrow& (\mbox{End}\,V)[[x,x^{-1}]]\\ v &\mapsto& Y(v,x)= \sum_{n\in\mathbb Z}v_nx^{-n-1} \nonumber \end{eqnarray} such that $v_n \in (\mathrm{End} \; V)^{(j)}$ for $v \in V^{(j)}$, $j \in \mathbb Z_2$, and equipped with a distinguished vector ${\bf 1} \in V^{(0)}$, (the {\it vacuum vector}), satisfying the following conditions for $u, v \in V$: \begin{eqnarray} u_nv & \! = \! & 0\ \ \ \ \ \mbox{for $n$ sufficiently large};\\ Y({\bf 1},x) & \! = \! & Id_V;\\ Y(v,x){\bf 1} & \! \in \! 
& V[[x]]\ \ \ \mbox{and}\ \ \ \lim_{x\to 0}Y(v,x){\bf 1}\ = \ v; \end{eqnarray} and for $u,v \in V$ of homogeneous sign, the {\it Jacobi identity} holds \begin{equation*} x^{-1}_0\delta\left(\frac{x_1-x_2}{x_0}\right) Y(u,x_1)Y(v,x_2) - (-1)^{|u||v|} x^{-1}_0\delta\left(\frac{x_2-x_1}{-x_0}\right) Y(v,x_2)Y(u,x_1) \end{equation*} \begin{equation} = x_2^{-1}\delta \left(\frac{x_1-x_0}{x_2}\right) Y(Y(u,x_0)v,x_2) \end{equation} where $|v| = j$ if $v \in V^{(j)}$ for $j \in \mathbb Z_2$. This completes the definition. We denote the vertex superalgebra just defined by $(V,Y,{\bf 1})$, or briefly, by $V$. Note that as a consequence of the definition, we have that there exists a distinguished endomorphism $T \in (\mathrm{End} \; V)^{(0)}$ defined by \[ T(v) = v_{-2} {\bf 1} \ \ \ \ \mbox{for $v \in V$} \] such that \[ [T, Y(v,x)] = Y(T(v), x) \ = \ \frac{d}{dx} Y(v,x),\] (cf. \cite{LL}, \cite{Barron-alternate}, \cite{Barron-n2axiomatic}). A {\it vertex operator superalgebra} is a vertex superalgebra with a distinguished vector $\omega\in V_2$ (the {\it conformal element}) satisfying the following conditions: \begin{equation} [L(m),L(n)]=(m-n)L(m+n)+\frac{1}{12}(m^3-m)\delta_{m+n,0}c \end{equation} for $m, n\in \mathbb Z,$ where \begin{equation} L(n)=\omega_{n+1}\ \ \ \mbox{for $n\in \mathbb Z$, \ \ \ \ i.e.},\ Y(\omega,x)=\sum_{n\in\mathbb Z}L(n)x^{-n-2} \end{equation} and $c \in \mathbb{C}$ (the {\it central charge} of $V$); \begin{equation} T = L(-1) \ \ \ \ \mbox{i.e.}, \ \frac{d}{dx}Y(v,x)=Y(L(-1)v,x) \ \mbox{for $v \in V$}; \end{equation} $V$ is $\frac{1}{2}\mathbb Z$-graded (by {\it weight}) \begin{equation} V=\coprod_{n\in \frac{1}{2} \mathbb Z}V_n \end{equation} such that \begin{eqnarray} L(0)v & \! = \! & nv \ = \ (\mbox{wt}\,v)v \ \ \ \mbox{for $n \in \frac{1}{2} \mathbb Z$ and $v\in V_n$}; \\ {\rm dim} \, V_n & \! < \! & \infty ;\\ V_n & \! = \! 
& 0 \ \ \ \ \mbox{for $n$ sufficiently negative}; \end{eqnarray} and $V^{(j)} = \coprod_{n\in \mathbb Z + \frac{j}{2}}V_n$ for $j \in \mathbb Z_2$. This completes the definition. We denote the vertex operator superalgebra just defined by $(V,Y,{\bf 1},\omega)$, or briefly, by $V$. \begin{rem}{\em For the purposes of this paper we do not assume any supersymmetric properties of a vertex operator superalgebra. That is we do not assume that $V$ is necessarily a representation for any super extension of the Virasoro algebra. However one of the main motivations for constructing and classifying permutation-twisted modules for tensor product vertex operator superalgebras is the application to constructing mirror-twisted sectors for N=2 supersymmetric vertex operator superalgebras as discussed in \cite{Barron-varna}, \cite{Barron-n2twisted}, and as presented as an application to the results of this paper in \cite{B-varna2013}. } \end{rem} \begin{rem}\label{VOSAs-tensor-remark} {\em Note that if $(V, Y, {\bf 1})$ and $(V', Y', {\bf 1}')$ are two vertex superalgebras, then $(V \otimes V', \; Y \otimes Y', \; {\bf 1} \otimes {\bf 1}')$ is a vertex superalgebra where \begin{equation}\label{define-tensor-product} (Y \otimes Y') (u \otimes u', x) (v \otimes v') = (-1)^{|u'||v|} Y(u,x)v \otimes Y'(u',x)v'. \end{equation} If in addition, $V$ and $V'$ are vertex operator superalgebras with conformal vectors $\omega$ and $\omega'$ respectively, then $V\otimes V'$ is a vertex operator superalgebra with conformal vector $\omega \otimes {\bf 1}' + {\bf 1} \otimes \omega'$.} \end{rem} \begin{rem}\label{parity-grading-on-V} {\em As a consequence of the definition of vertex operator superalgebra, independent of the requirement that as a vertex superalgebra we should have $v_n \in (\mathrm{End} \, V)^{(|v|)}$, we have that $\mathrm{wt} (v_n u ) = \mathrm{wt} u + \mathrm{wt} v - n -1$, for $u,v \in V$ and $n \in \mathbb Z$. 
This implies that $v_n \in (\mathrm{End} \, V)^{(j)}$ if and only if $v \in V^{(j)}$ for $j \in \mathbb{Z}_2$, without us having to assume this as an axiom. } \end{rem} \subsection{The notion of twisted module}\label{twisted-module-definition-section} Let $(V, Y, {\bf 1})$ and $(V', Y', {\bf 1}')$ be vertex superalgebras. A {\it homomorphism of vertex superalgebras} is a linear map $g: V \longrightarrow V'$ of $\mathbb Z_2$-graded vector spaces such that $g({\bf 1}) = {\bf 1}'$ and \begin{equation}\label{automorphism} g Y(v,x) =Y'(gv,x)g \end{equation} for $v\in V.$ Note that this implies that $g \circ T = T'\circ g$. If in addition, $V$ and $V'$ are vertex operator superalgebras with conformal elements $\omega$ and $\omega'$, respectively, then a {\it homomorphism of vertex operator superalgebras} is a homomorphism of vertex superalgebras $g$ such that $g(\omega) = \omega'$. In particular $g V_n\subset V'_n$ for $n\in \frac{1}{2} \mathbb{Z}$. An {\it automorphism} of a vertex (operator) superalgebra $V$ is a bijective vertex (operator) superalgebra homomorphism from $V$ to $V$. If $g$ is an automorphism of a vertex (operator) superalgebra $V$ such that $g$ has finite order, then $V$ is a direct sum of the eigenspaces $V^j$ of $g$, \begin{equation} V=\coprod_{j\in \mathbb Z /k \mathbb Z }V^j, \end{equation} where $k \in \mathbb{Z}_+$ and $g^k = 1$, and \begin{equation} V^j=\{v\in V \; | \; g v= \eta^j v\}, \end{equation} for $\eta$ a fixed primitive $k$-th root of unity. We denote the projection of $v \in V$ onto the $j$-th eigenspace, $V^j$, by $v_{(j)}$. Let $(V, Y, \mathbf{1})$ be a vertex superalgebra and $g$ an automorphism of $V$ of period $k$. 
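The projections $v_{(j)}$ onto the eigenspaces $V^j$ can be computed by the standard character-averaging formula $v_{(j)} = \frac{1}{k}\sum_{l=0}^{k-1}\eta^{-jl}\, g^l v$; this formula is standard and is not stated explicitly above. A minimal numerical sketch, modeling $g$ as the cyclic shift on $\mathbb{C}^k$ (an assumption made purely for illustration; the averaging formula itself is representation-independent):

```python
import cmath

# For g of finite order k, the projection of v onto the eigenspace V^j
# (eigenvalue eta^j) is the standard average
#     v_(j) = (1/k) * sum_{l=0}^{k-1} eta^{-j l} g^l v.
# Here g is modeled as the cyclic shift on C^k, which has order exactly k;
# this concrete choice is an illustrative assumption only.

k = 4
eta = cmath.exp(2j * cmath.pi / k)  # fixed primitive k-th root of unity

def g_pow(l, v):
    """Apply the cyclic shift l times: (g^l v)[i] = v[(i + l) mod k]."""
    return [v[(i + l) % k] for i in range(k)]

def project(j, v):
    """Project v onto the eta^j-eigenspace of g via character averaging."""
    acc = [0j] * k
    for l in range(k):
        w = g_pow(l, v)
        coef = eta ** (-j * l) / k
        acc = [a + coef * x for a, x in zip(acc, w)]
    return acc

v = [1.0, 2.0, -1.0, 0.5]
parts = [project(j, v) for j in range(k)]

# The projections sum back to v, and each part lies in the correct eigenspace.
recon = [sum(p[i] for p in parts) for i in range(k)]
assert all(abs(recon[i] - v[i]) < 1e-12 for i in range(k))
for j, p in enumerate(parts):
    gp = g_pow(1, p)
    assert all(abs(gp[i] - (eta ** j) * p[i]) < 1e-12 for i in range(k))
```

The two assertions verify precisely the direct-sum decomposition $V=\coprod_{j} V^j$ and the eigenvalue condition $g v_{(j)} = \eta^j v_{(j)}$ in this toy model.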
A {\em $g$-twisted $V$-module} is a $\mathbb{Z}_2$-graded vector space $M = M^{(0)} \oplus M^{(1)}$ equipped with a linear map \begin{eqnarray} V &\rightarrow& ({\rm End} \, M)[[x^{1/k},x^{-1/k}]] \\ v &\mapsto& Y_g(v,x)=\sum_{n\in \frac{1}{k} \mathbb{Z} }v^g_n x^{-n-1}, \nonumber \end{eqnarray} with $v_n^g \in (\mathrm{End} \; M)^{(|v|)}$, such that for $u,v\in V$ and $w\in M$ the following hold: \begin{equation} v^g_nw=0 \mbox{ if $n$ is sufficiently large}; \end{equation} \begin{equation} Y_g({\bf 1},x)= Id_M; \end{equation} the twisted Jacobi identity holds: for $u, v\in V$ of homogeneous sign \[x^{-1}_0\delta\left(\frac{x_1-x_2}{x_0}\right) Y_g(u,x_1)Y_g(v,x_2)- (-1)^{|u||v|} x^{-1}_0\delta\left(\frac{x_2-x_1}{-x_0}\right) Y_g(v,x_2)Y_g(u,x_1) \] \begin{equation}\label{twisted-Jacobi} = \frac{x_2^{-1}}{k} \sum_{ j \in \mathbb{Z}/k\mathbb{Z}} \delta\left( \eta^j \frac{(x_1-x_0)^{1/k}}{x_2^{1/k}}\right) Y_g(Y(g^ju,x_0)v,x_2). \end{equation} This completes the definition of $g$-twisted $V$-module for a vertex superalgebra. We denote the $g$-twisted $V$-module just defined by $(M, Y_g)$ or just by $M$ for short. We note that the generalized twisted Jacobi identity (\ref{twisted-Jacobi}) is equivalent to \[x^{-1}_0\delta\left(\frac{x_1-x_2}{x_0}\right) Y_g(u,x_1)Y_g(v,x_2)- (-1)^{|u||v|} x^{-1}_0\delta\left(\frac{x_2-x_1}{-x_0}\right) Y_g(v,x_2)Y_g(u,x_1) \] \begin{equation}\label{twisted-Jacobi-eigenspace} = x_2^{-1} \left(\frac{x_1-x_0}{x_2}\right)^{-r/k} \delta\left( \frac{x_1-x_0}{x_2}\right) Y_g(Y(u,x_0)v,x_2) \end{equation} for $u \in V^r$, $r = 0, \dots, k-1$. In addition, this implies that for $v\in V^r$, \begin{equation}\label{Y-for-eigenvector} Y_g(v,x)=\sum_{n\in r/k + \mathbb Z }v^g_n x^{-n-1}, \end{equation} and that \begin{equation}\label{limit-axiom} Y_g(gv,x) = \lim_{x^{1/k} \to \eta^{-1} x^{1/k}} Y_g(v,x) \end{equation} for $v \in V$. Note that if $g = 1$, then a $g$-twisted $V$-module is a $V$-module for the vertex superalgebra $V$. 
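The passage between (\ref{twisted-Jacobi}) and (\ref{twisted-Jacobi-eigenspace}), as well as the mode-support statement (\ref{Y-for-eigenvector}), rests on the elementary character sum $\frac{1}{k}\sum_{j=0}^{k-1}\eta^{jn} = 1$ if $k \mid n$ and $0$ otherwise, which is how the averaging over $\eta^j$ acts on each power $x^{n/k}$. A quick numerical check of this identity (a sketch; the choice $k=6$ is arbitrary):

```python
import cmath

# The averaging over eta^j in the twisted Jacobi identity acts on each
# fractional power through the elementary character sum
#     (1/k) * sum_{j=0}^{k-1} eta^{j n}  =  1 if k | n, else 0,
# which is what restricts the mode support of Y_g(v, x) for v in an
# eigenspace of g to a single congruence class of exponents mod Z.

def averaged(n, k):
    eta = cmath.exp(2j * cmath.pi / k)
    return sum(eta ** (j * n) for j in range(k)) / k

k = 6
for n in range(-2 * k, 2 * k + 1):
    val = averaged(n, k)
    expected = 1.0 if n % k == 0 else 0.0
    assert abs(val - expected) < 1e-12
```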
If $(V, Y, {\bf 1}, \omega)$ is a vertex operator superalgebra and $g$ is a vertex operator superalgebra automorphism of $V$, then since $\omega \in V^{0}$, we have that $Y_g(\omega,x)$ has component operators which satisfy the Virasoro algebra relations and $Y_g(L(-1)u,x)=\frac{d}{dx}Y_g(u,x)$. In this case, a $g$-twisted $V$-module as defined above, viewed as a vertex superalgebra module, is called a {\em weak $g$-twisted $V$-module} for the vertex operator superalgebra $V$. A {\em weak admissible} $g$-twisted $V$-module is a weak $g$-twisted $V$-module $M$ which carries a $\frac{1}{2k}\mathbb N$-grading \begin{equation}\label{m3.12} M=\coprod_{n\in\frac{1}{2k}\mathbb N}M(n) \end{equation} such that $v^g_mM(n)\subseteq M(n+{\rm wt} \; v-m-1)$ for homogeneous $v\in V$, $n \in \frac{1}{2k} \mathbb N$, and $m \in \frac{1}{k} \mathbb Z$. We may assume that $M(0)\ne 0$ if $M\ne 0$. If $g = 1$, then a weak admissible $g$-twisted $V$-module is called a weak admissible $V$-module. \begin{rem}\label{even-grading-remark} {\em Note that if $k$, the order of $g$, is even, then the grading of a weak admissible $g$-twisted $V$-module can be assumed to be a $\frac{1}{k}\mathbb{N}$-grading. } \end{rem} An (ordinary) $g$-twisted $V$-module is a weak $g$-twisted $V$-module $M$ graded by $\mathbb C$, with the grading induced by the spectrum of $L(0)$. That is, we have \begin{equation}\label{g3.14} M=\coprod_{\lambda \in{\mathbb C}}M_{\lambda} \end{equation} where $M_{\lambda}=\{w\in M|L(0)^gw=\lambda w\}$, for $L(0)^g = \omega_1^g$. Moreover, we require that $\dim M_{\lambda}$ is finite and $M_{n/2k +\lambda}=0$ for fixed $\lambda$ and for all sufficiently small integers $n$. If $g = 1$, then a $g$-twisted $V$-module is a $V$-module. A {\it homomorphism of weak $g$-twisted $V$-modules}, $(M, Y_g)$ and $(M', Y_g')$, is a linear map $f: M \longrightarrow M'$ satisfying \begin{equation} f(Y_g(v, x)w) = Y'_g(v, x) f(w) \end{equation} for $v \in V$ and $w \in M$. 
If in addition, $M$ and $M'$ are weak admissible $g$-twisted $V$-modules, then a {\it homomorphism of weak admissible $g$-twisted $V$-modules} is a homomorphism of weak $g$-twisted $V$-modules such that $f(M(n)) \subseteq M'(n)$. And if $M$ and $M'$ are ordinary $g$-twisted $V$-modules, then a {\it homomorphism of $g$-twisted $V$-modules} is a homomorphism of weak $g$-twisted $V$-modules such that $f(M_\lambda) \subseteq M'_\lambda$. We note here that an example of an automorphism of a vertex operator superalgebra is the {\it parity map} \begin{eqnarray}\label{define-parity} \sigma : V &\longrightarrow& V\\ v & \mapsto & (-1)^{|v|} v . \nonumber \end{eqnarray} \begin{rem}\label{parity-stability-remark} {\em In many works on vertex superalgebras, e.g. \cite{Li2}, \cite{DZ}, \cite{DZ2}, \cite{DH}, \cite{Barron-varna}, \cite{Barron-n2twisted}, the condition that $v_n^g \in (\mathrm{End} \; M)^{(|v|)}$ for $v \in V$ is not given as one of the axioms of a $g$-twisted $V$-module $M$ for a vertex superalgebra $V$. That is, it is not assumed that the $\mathbb{Z}_2$-grading of $V$ is compatible with the $\mathbb{Z}_2$-grading of $M$ via the action of $V$ as super-endomorphisms acting on $M$. Then in, for instance, \cite{DZ}, \cite{DZ2}, \cite{DH}, the notion of ``parity-stable $g$-twisted $V$-module" is introduced for those modules that respect the $\mathbb{Z}_2$-grading of $V$, and modules that do not have this property are called ``parity-unstable". Thus a ``parity-unstable $g$-twisted $V$-module" is a vector space $M$ that satisfies all the axioms of our notion of $g$-twisted $V$-module except for $v_n^g \in (\mathrm{End} \; M)^{(|v|)}$. That is, there exists no $\mathbb{Z}_2$-grading on $M$ such that the operators $v_n^g$ act as even or odd endomorphisms on $M$ according to the sign (or parity) of $v$. 
However, in \cite{BV-fermion}, we prove that any so-called ``parity-unstable $g$-twisted $V$-module" can always be realized as a subspace of a $g$-twisted $V$-module in the sense of the notion of $g$-twisted $V$-module we give above. In particular, in \cite{BV-fermion} we proved the following (reworded to fit our current setting): \vspace{-.2in} \begin{quote} \begin{thm}\label{parity-stability-theorem}(\cite{BV-fermion}) Let $V$ be a vertex superalgebra and $g$ an automorphism. Suppose $(M, Y_M)$ is a ``parity-unstable $g$-twisted $V$-module" (in the sense of \cite{DZ}). Then $(M, Y_M \circ \sigma_V)$ is a ``parity-unstable $g$-twisted $V$-module" which is not isomorphic to $(M, Y_M)$. Moreover $(M, Y_M) \oplus (M, Y_M \circ \sigma_V)$ is a ``parity-stable $g$-twisted $V$-module" (in the sense of \cite{DZ}), i.e., a $g$-twisted $V$-module in terms of the definition given above in this paper. \end{thm} \end{quote} Requiring weak twisted modules to be ``parity stable" as part of the definition gives the more canonical notion of twisted module from a categorical point of view, for instance, for the purpose of defining a $(V_1 \otimes V_2)$-module structure on $M_1 \otimes M_2$ for $M_j$ a $V_j$-module, for $j = 1,2$. (See, e.g., (\ref{define-tensor-product}).) In particular, the notion of a $g$-twisted $V$-module corresponding to a {\it representation} of $V$ as a vertex superalgebra only holds for ``parity-stable $g$-twisted $V$-modules", in that the vertex operators acting on a $g$-twisted $V$-module have coefficients in $\mathrm{End} \, M$ such that the operators $v_m^g$ have a $\mathbb{Z}_2$-graded structure compatible with that of $V$. For instance, the operators $v_0^g$, for $v \in V$, give a representation of the Lie superalgebra generated by $v_0$ in $\mathrm{End} \, V$ if and only if $M$ is ``parity stable". 
This corresponds to $V$ acting (via the modes of the vertex operators) as endomorphisms on $M$ in the category of vector spaces (i.e., via even or odd endomorphisms) rather than in the category of $\mathbb{Z}_2$-graded vector spaces (i.e., as grade-preserving and thus strictly even endomorphisms). However, it is interesting to note that, as is shown in \cite{BV-fermion}, for a lift of a lattice isometry, the ``twisted modules" for a lattice vertex operator superalgebra constructed following \cite{DL2}, \cite{Xu}, sometimes naturally give rise to pairs of parity-unstable invariant subspaces in the language of the current paper, i.e., to pairs of ``parity-unstable $g$-twisted modules", that then must be taken as a direct sum to realize the actual $g$-twisted module that is constructed. } \end{rem} \subsection{Parity-twisted $V$-modules} A crucial example in the study of $g$-twisted $V$-modules for $V$ a vertex superalgebra is that of parity-twisted $V$-modules. (Not to be confused with the notion discussed above of ``parity-stable" or ``parity-unstable" modules.) Above in (\ref{define-parity}), we define the parity automorphism, denoted $\sigma$, for any vertex superalgebra. Thus we have the notion of a {\it parity-twisted $V$-module}, also denoted by {\it $\sigma$-twisted $V$-module}. \begin{rem}\label{parity-grading-remark} {\em Note that it follows from the definitions that any weak admissible $\sigma$-twisted module for a vertex operator superalgebra is $\mathbb{N}$-graded. } \end{rem} If $V$, in addition to being a vertex operator superalgebra, is N=1 or N=2 supersymmetric, i.e., is also a representation of the N=1 or N=2 Neveu-Schwarz algebra super extension of the Virasoro algebra, then a $\sigma$-twisted $V$-module is naturally a representation of the N=1 or N=2 Ramond algebra, respectively, cf. \cite{Barron-varna}, \cite{Barron-n2twisted}, \cite{B-varna2013} and references therein. 
\section{The operator $\Delta_k (x)$}\label{Delta-section} \setcounter{equation}{0} In this section, we recall the operator $\Delta_k(x)$ on a vertex operator superalgebra $V$ for a fixed positive integer $k$ as first defined in \cite{BDM} and then extended to vertex operator superalgebras in \cite{B-superpermutation-odd}. In Section \ref{tensor-product-twisted-construction-section}, we will use $\Delta_k(x)$ for $k$ even to construct a $(1 \; 2 \; \cdots \; k)$-twisted $V^{\otimes k}$-module {}from a parity-twisted $V$-module. Let $\ZZ$ denote the positive integers. Let $x$, $z$, and $z_0$ be formal variables commuting with each other. Consider the polynomial \[\frac{1}{k} (1 + x)^k - \frac{1}{k} \in x \mathbb C [x] . \] Following \cite{B-superpermutation-odd}, for $k \in \ZZ$, we define $a_j \in \mathbb C$ for $j \in \ZZ$, by \begin{equation}\label{define-a} \exp \Biggl( - \sum_{j \in \ZZ} a_j \Lx \Biggr) \cdot x = \frac{1}{k} (1 + x)^k - \frac{1}{k} . \end{equation} For example, $a_1=(1-k)/2$ and $a_2=(k^2-1)/12.$ Let \begin{eqnarray*} f(x) &=& z^{1/k} \exp \Biggl(- \sum_{j \in \ZZ} a_j \Lx \Biggr) \cdot x \\ &=& \exp \Biggl(- \sum_{j \in \ZZ} a_j \Lx \Biggr) \cdot z^{(1/k) \Lo} \cdot x\\ &=& \frac{z^{1/k}}{k} (1 + x)^k - \frac{z^{1/k}}{k} \; \; \in z^{1/k}x\mathbb{C}[x]. \end{eqnarray*} Then the compositional inverse of $f(x)$ in $x\mathbb{C}[z^{-1/k}, z^{1/k}][[x]]$ is given by \begin{eqnarray*} f^{-1} (x) &=& z^{- (1/k) \Lo} \exp \Biggl( \sum_{j \in \ZZ} a_j \Lx \Biggr) \cdot x \\ &=& z^{- 1/k} \exp \Biggl( \sum_{j \in \ZZ} a_j z^{- j/k} \Lx \Biggr) \cdot x \\ &=& (1 + k z^{- 1/k} x)^{1/k} - 1 \end{eqnarray*} where the last line is considered as a formal power series in $z^{-1/k}x \mathbb{C}[z^{-1/k}][[x]]$, i.e., we are expanding about $x = 0$ taking $1^{1/k} = 1$. Let $V= (V, Y, {\bf 1}, \omega)$ be a vertex operator superalgebra. 
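The stated values $a_1 = (1-k)/2$ and $a_2 = (k^2-1)/12$ can be checked by composing truncated power series in (\ref{define-a}). A small sketch (the truncation order and the sample values of $k$ are arbitrary choices; the coefficients $a_j$ for $j \geq 3$ first enter at order $x^4$, so they may be dropped here):

```python
from fractions import Fraction
from math import comb

N = 3  # truncation order; a_j for j >= 3 first contribute at order x^4

def apply_A(p, a):
    """Apply A = -(a_1 x^2 d/dx + a_2 x^3 d/dx) to a polynomial p,
    where p[n-1] is the coefficient of x^n, truncating beyond x^N."""
    q = [Fraction(0)] * N
    for n, c in enumerate(p, start=1):
        for j, aj in enumerate(a, start=1):
            m = n + j  # the operator x^{j+1} d/dx raises degree by j
            if m <= N:
                q[m - 1] -= aj * n * c
    return q

def exp_A_on_x(a):
    """Truncated exp(A) applied to the series x."""
    term = [Fraction(1)] + [Fraction(0)] * (N - 1)  # the series "x"
    total = term[:]
    for i in range(1, N + 1):
        term = [c / i for c in apply_A(term, a)]  # builds A^i x / i!
        total = [s + c for s, c in zip(total, term)]
    return total

for k in (2, 3, 4, 5):
    a1 = Fraction(1 - k, 2)
    a2 = Fraction(k * k - 1, 12)
    lhs = exp_A_on_x([a1, a2])
    # (1/k)((1+x)^k - 1) has x^n coefficient C(k, n)/k
    rhs = [Fraction(comb(k, n), k) for n in range(1, N + 1)]
    assert lhs == rhs, (k, lhs, rhs)
```

Matching coefficients by hand gives the same result: the $x^2$ coefficient forces $-a_1 = (k-1)/2$, and the $x^3$ coefficient forces $a_1^2 - a_2 = (k-1)(k-2)/6$, i.e., $a_2 = (k^2-1)/12$.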
In $({\rm End} \;V)[[z^{1/2k}, z^{-1/2k}]]$, define \begin{equation}\label{Delta-for-a-module} \Delta_k (z) = \exp \Biggl( \sum_{j \in \ZZ} a_j z^{- \frac{j}{k}} L(j) \Biggr) (k^\frac{1}{2})^{-2L(0)} \left(z^{\frac{1}{2k}\left( k-1\right)}\right)^{- 2L(0)} . \end{equation} In \cite{B-superpermutation-odd}, we proved the following proposition and lemma. \begin{prop}\label{psun1} (\cite{B-superpermutation-odd}) Let $V$ be a vertex operator superalgebra. We have the following identity in $(({\rm End} \;V)[[z^{1/2k}, z^{-1/2k}]]) [[z_0, z_0^{-1}]]$ \[\Delta_k (z) Y(u, z_0) \Delta_k (z)^{-1} = Y(\Delta_k (z + z_0)u, \left( z + z_0 \right)^{1/k} - z^{1/k} ) ,\] for all $u \in V$. \end{prop} \begin{lem}\label{c2.5} (\cite{B-superpermutation-odd}) For $V$ a vertex operator superalgebra, in $({\rm End} \; V)[[z^{1/2k}, z^{-1/2k}]]$, we have \begin{eqnarray} \Delta_k (z) L(-1) - \frac{1}{k} z^{1/k - 1} L(-1) \Delta_k (z) &=& \frac{\partial}{\partial z}\Delta_k (z) , \label{identity in voa}\\ \Delta_k (z)^{-1} L(-1) - k z^{- 1/k + 1} L(-1) \Delta_k (z)^{-1} &=& k z^{- 1/k + 1} \frac{\partial}{\partial z} \Delta_k (z)^{-1} . \label{second identity in voa} \end{eqnarray} \end{lem} \section{The setting of $(1 \; 2 \; \cdots \; k)$-twisted $V^{\otimes k}$-modules and the operators $Y_\sigma(\Delta_k(x)u, x^{1/k})$ for a $\sigma$-twisted $V$-module $(M_\sigma, Y_\sigma)$}\label{tensor-product-setting-section} \setcounter{equation}{0} Now we turn our attention to tensor product vertex operator superalgebras. Let $V=(V,Y,{\bf 1},\omega)$ be a vertex operator superalgebra, and let $k$ be a fixed positive integer. Then by Remark \ref{VOSAs-tensor-remark}, $V^{\otimes k}$ is also a vertex operator superalgebra, and the permutation group $S_k$ acts naturally on $V^{\otimes k}$ as signed automorphisms. 
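As a sanity check on the sign bookkeeping in the signed action just mentioned (the explicit formula, stated below in the text, sends $v_1 \otimes \cdots \otimes v_k$ to $(-1)^{|v_1|(|v_2| + \cdots + |v_k|)}\, v_2 \otimes \cdots \otimes v_k \otimes v_1$), one can verify on parity patterns alone that the $k$-cycle genuinely has order $k$, i.e., that no residual sign survives after $k$ iterations. A minimal sketch (tracking only the parities $|v_i| \in \{0,1\}$, which is all the sign depends on):

```python
from itertools import product

# The k-cycle acts on basis tensors, tracked here only by the parity
# |v_i| in {0,1} of each tensor factor, via
#   (v_1 x ... x v_k) . g = (-1)^{|v_1|(|v_2|+...+|v_k|)} v_2 x ... x v_k x v_1.
# We check that iterating k times returns every parity pattern to itself
# with overall sign +1, so the signed action really has order k.

def cycle(sign, parities):
    p = parities
    s = (-1) ** (p[0] * (sum(p) - p[0]))
    return sign * s, p[1:] + p[:1]

for k in (2, 3, 4, 5, 6):
    for pat in product((0, 1), repeat=k):
        sign, p = 1, tuple(pat)
        for _ in range(k):
            sign, p = cycle(sign, p)
        assert (sign, p) == (1, tuple(pat))
```

The check succeeds because the accumulated exponent is $\sum_i |v_i|(S - |v_i|) \equiv S(S-1) \equiv 0 \pmod 2$, where $S = \sum_i |v_i|$, so the total sign is always $+1$.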
That is, $(j \; j+1) \cdot (v_1 \otimes v_2 \otimes \cdots \otimes v_k) = (-1)^{|v_j||v_{j+1}|} (v_1 \otimes v_2 \otimes \cdots \otimes v_{j-1} \otimes v_{j+1} \otimes v_j \otimes v_{j+2} \otimes \cdots \otimes v_k)$, and we take this to be a right action so that, for instance, \begin{eqnarray} \qquad (1 \; 2 \cdots k) : V \otimes V \otimes \cdots \otimes V \! \! \! \! &\longrightarrow & \! \! \! \! V \otimes V \otimes \cdots \otimes V\\ v_1 \otimes v_2 \otimes \cdots \otimes v_k \! \! \! \! \! \! & \mapsto & \! \! \! \! \! \! (-1)^{|v_1|(|v_2| + \cdots + |v_k|)} v_2 \otimes v_3 \otimes \cdots \otimes v_k \otimes v_1. \nonumber \end{eqnarray} (Note that in \cite{BDM}, this action was given as a left action. For convenience, we make the change here to a right action as in \cite{BHL} and \cite{B-superpermutation-odd}.) Let $g=(1 \; 2 \; \cdots \; k)$. In the next section, we will construct a functor $T_g^k$ {}from the category of weak $\sigma$-twisted $V$-modules to the category of weak $g$-twisted modules for $V^{\otimes k}$ for the case when $k$ is even. This construction will be based on the operators $Y_\sigma(\Delta_k(x)u, x^{1/k})$ for a parity-twisted $V$-module $(M_\sigma, Y_\sigma)$. Thus in this section, we establish several properties of these operators. For $v\in V$, and $k$ any positive integer, denote by $v^j \in V^{\otimes k}$, for $j = 1,\dots, k$, the vector whose $j$-th tensor factor is $v$ and whose other tensor factors are ${\bf 1}$. Then $gv^j=v^{j-1}$ for $j=1,\dots,k$ where $v^0$ is understood to be $v^k$. Suppose that $W$ is a weak $g$-twisted $V^{\otimes k}$-module, and let $\eta$ be a fixed primitive $k$-th root of unity. We first make some general observations for this setting following \cite{BDM} and \cite{B-superpermutation-odd}. First, it follows {}from the definition of twisted module (cf. 
(\ref{limit-axiom})) that the $g$-twisted vertex operators on $W$ satisfy \begin{equation} Y_g(v^{j+1},x) = Y_g(g^{-j} v^1,x) = \lim_{x^{1/k}\to \eta^{j}x^{1/k}} Y_g(v^1,x). \end{equation} Since $V^{\otimes k}$ is generated by $v^j$ for $v\in V$ and $j=1,\dots,k$, the twisted vertex operators $Y_g(v^1,x)$ for $v\in V$ determine all the twisted vertex operators $Y_g(u,x)$ on $W$ for any $u \in V^{\otimes k}$. This observation is very important in our construction of twisted modules. Secondly, if $u,v\in V$ are of homogeneous sign, then by (\ref{twisted-Jacobi}) the twisted Jacobi identity for $Y_g(u^1,x_1)$ and $Y_g(v^1,x_2)$ is \begin{multline}\label{k1} x^{-1}_0\delta\left(\frac{x_1-x_2}{x_0}\right) Y_g(u^1,x_1)Y_g(v^1,x_2)\\ -(-1)^{ |u||v|} x^{-1}_0\delta\left(\frac{x_2-x_1}{-x_0}\right) Y_g(v^1,x_2)Y_g(u^1,x_1)\\ =\frac{1}{k}x_2^{-1}\sum_{j=0}^{k-1}\delta\Biggl(\eta^j\frac{(x_1-x_0)^{1/k}}{x_2^{1/k}}\Biggr)Y_g(Y(g^ju^1,x_0)v^1,x_2). \end{multline} Since $g^{-j}u^1=u^{j+1}$, we see that $Y(g^{-j}u^1,x_0)v^1$ only involves nonnegative integer powers of $x_0$ unless $j=0\ ({\rm mod} \; k).$ Thus we have the supercommutator \begin{multline}\label{k2} [Y_g(u^1,x_1),Y_g(v^1,x_2)] \\ = \; {\rm Res}_{x_0}\frac{1}{k}x_2^{-1}\delta \Biggl(\frac{(x_1-x_0)^{1/k}}{x_2^{1/k}}\Biggr)Y_g(Y(u^1,x_0)v^1,x_2) . \end{multline} This shows that the component operators of $Y_g(u^1,x)$ for $u\in V$ on $W$ form a Lie superalgebra. For $u\in V$ and $\Delta_k(x)$ given by (\ref{Delta-for-a-module}), and $(M_\sigma, Y_\sigma)$ a parity-twisted $V$-module, define \begin{equation} \bar{Y}_\sigma (u,x)=Y_\sigma (\Delta_k(x)u,x^{1/k}) . 
\end{equation} For example, taking $u=\omega$, and recalling that $a_2= (k^2-1)/12$, we have \begin{eqnarray}\label{sun1} \bar{Y}_\sigma(\omega,x) &=&Y_\sigma \left(\frac{x^{2(1/k - 1)}}{k^2}\Bigl(\omega + a_2 \frac{c}{2}x^{-2/k} \Bigr),x^{1/k}\right) \\ &=& \frac{x^{2(1/k - 1)}}{k^2}Y_\sigma (\omega,x^{1/k}) +\frac{(k^2-1)c}{24k^2}x^{-2} \nonumber \end{eqnarray} where $c$ is the central charge of $V$. \begin{rem}\label{wrong-space-remark}{\em Since $Y_\sigma (v, x) \in x^{|v|/2} (\mathrm{End}\, M_\sigma) [[x, x^{-1}]]$, and for $k$ even \begin{equation} \Delta_k(x)u \in \left\{ \begin{array}{ll} V^{(0)} [[x^{1/k}, x^{-1/k}]] & \mbox{if $u$ is even}\\ \\ x^{1/2k} V^{(1)} [[x^{1/k}, x^{-1/k}]] & \mbox{ if $u$ is odd} \end{array} \right. , \end{equation} we have that if $k$ is even \begin{equation} \bar{Y}_\sigma (u,x) = Y_\sigma (\Delta_k(x)u,x^{1/k}) \in (\mathrm{End}\, M_\sigma) [[x^{1/k}, x^{-1/k}]] . \end{equation} When we put a weak $g$-twisted $V^{\otimes k}$-module structure on $M_\sigma$, this operator $\bar{Y}_\sigma (u,x)$ will be the twisted vertex operator acting on $M_\sigma$ associated to $u^1$, where we will assume $k$ is even. Note, however, that if $k$ is odd, then for $u$ odd in $V$, we have $Y_\sigma (\Delta_k(x)u,x^{1/k}) \in (\mathrm{End}\, M_\sigma) [[x^{1/2k}, x^{-1/2k}]]$. This reflects the fact that the $k$ odd case, as constructed and classified in \cite{B-superpermutation-odd}, is fundamentally different from the $k$ even case. } \end{rem} Next we study the properties of the operators $\bar{Y}_\sigma(u,x)$, following and generalizing \cite{BDM} and \cite{B-superpermutation-odd}. 
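Before doing so, we record the short computation behind (\ref{sun1}). It is a sketch using only the standard facts $L(0)\omega = 2\omega$, $L(1)\omega = 0$, $L(2)\omega = \frac{c}{2}{\bf 1}$, $L(j)\omega = 0$ for $j \geq 3$, and $L(i){\bf 1} = 0$ for $i \geq -1$:

```latex
% A sketch of the computation giving (\ref{sun1}), applying the factors
% of \Delta_k(x) to \omega from right to left.
\begin{eqnarray*}
\Delta_k (x) \omega
&=& \exp \Biggl( \sum_{j \in \ZZ} a_j x^{- j/k} L(j) \Biggr)
    k^{-2L(0)} \left(x^{\frac{1}{2k}(k-1)}\right)^{-2L(0)} \omega \\
&=& \frac{x^{2(1/k - 1)}}{k^2}
    \exp \Biggl( \sum_{j \in \ZZ} a_j x^{- j/k} L(j) \Biggr) \omega \\
&=& \frac{x^{2(1/k - 1)}}{k^2}
    \Bigl( \omega + a_2 \frac{c}{2} x^{-2/k} {\bf 1} \Bigr),
\end{eqnarray*}
```

since $L(0)\omega = 2\omega$ gives the prefactor, $L(1)\omega = 0$ and $L(j)\omega = 0$ for $j \geq 3$ kill all other first-order terms of the exponential, and $L(i){\bf 1} = 0$ for $i \geq -1$ kills all iterated terms. Applying $Y_\sigma(\cdot\,, x^{1/k})$, using $Y_\sigma({\bf 1},x) = Id_{M_\sigma}$ and $a_2 = (k^2-1)/12$, then yields the coefficient $\frac{(k^2-1)c}{24k^2}x^{-2}$ in (\ref{sun1}).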
\begin{lem}\label{l3.1} For $u\in V$ \[\bar{Y}_\sigma(L(-1)u,x)=\frac{d}{dx}\bar{Y}_\sigma(u,x).\] \end{lem} \begin{proof} By Lemma \ref{c2.5} and the $L^\sigma(-1)$-derivative property for $(M_\sigma, Y_\sigma)$, we have \begin{eqnarray*} \bar{Y}_\sigma(L(-1)u,x) &=& Y_\sigma(\Delta_k(x)L(-1)u,x^{1/k})\\ &=& Y_\sigma(\frac{d}{dx}\Delta_k(x)u,x^{1/k}) + k^{-1} x^{1/k - 1}Y_\sigma(L(-1)\Delta_k(x)u,x^{1/k})\\ &=& Y_\sigma(\frac{d}{dx}\Delta_k(x)u,x^{1/k}) + \left. k^{-1} x^{1/k - 1}\frac{d}{dy} Y_\sigma(\Delta_k(x)u,y) \right|_{y=x^{1/k}}\\ &=& Y_\sigma(\frac{d}{dx}\Delta_k(x)u,x^{1/k}) + \left.\frac{d}{dy}Y_\sigma(\Delta_k(x)u,y^{1/k}) \right|_{y=x}\\ &=& \frac{d}{dx}Y_\sigma(\Delta_k(x)u,x^{1/k})\\ &=& \frac{d}{dx}\bar{Y}_\sigma(u,x) \end{eqnarray*} as desired. \end{proof} \begin{lem}\label{l3.2} For $u,v\in V$ of homogeneous sign, we have the supercommutator \begin{multline}\label{first-supercommutator} [\bar{Y}_\sigma(u,x_1),\bar{Y}_\sigma(v,x_2)] \\ = \left\{ \begin{array}{ll} {\rm Res}_{x_0}\frac{x_2^{-1}}{k} \delta\Biggl(\frac{(x_1-x_0)^{1/k}}{x_2^{1/k}}\Biggr)\bar{Y}_\sigma(Y(u,x_0)v,x_2) & \mbox{if $k$ is even}\\ {\rm Res}_{x_0} \frac{x_2^{-1}}{k} \delta\Biggl(\frac{(x_1-x_0)^{1/k}}{x_2^{1/k}}\Biggr)\bar{Y}_\sigma(Y(u,x_0)v,x_2) \left( \frac{x_1 - x_0}{x_2} \right)^{ \frac{|u|}{2k}} & \mbox{if $k$ is odd} \end{array} . \right. \end{multline} \end{lem} \begin{proof} The supercommutator formula for the weak $\sigma$-twisted $V$-module $M_\sigma$ is given by \begin{multline}\label{commutator for Lemma 3.3} [Y_\sigma(u,x_1),Y_\sigma(v,x_2)] \\ = {\rm Res}_{x}x_2^{-1}\left(\frac{x_1-x}{x_2} \right)^{-|u|/2} \delta\left(\frac{x_1-x}{x_2} \right)Y_\sigma(Y(u,x)v,x_2) \end{multline} which is a consequence of the twisted Jacobi identity on $M_\sigma$, for $u$ of homogeneous sign (parity) in $V$. 
Replacing $Y_\sigma(u,x_1)$ and $Y_\sigma (v,x_2)$ by $Y_\sigma (\Delta_k(x_1)u,x_1^{1/k})$ and $Y_\sigma(\Delta_k(x_2)v,x_2^{1/k})$, respectively, in the supercommutator formula, we have the supercommutator \begin{multline}\label{substitution equation} [\bar{Y}_\sigma(u,x_1),\bar{Y}_\sigma(v,x_2)] \\ = {\rm Res}_{x}x_2^{-1/k} \Biggl( \frac{x_1^{1/k}-x}{x_2^{1/k}}\Biggr)^{-|u|/2} \delta\Biggl( \frac{x_1^{1/k}-x}{x_2^{1/k}}\Biggr) Y_\sigma (Y(\Delta_k(x_1)u,x)\Delta_k(x_2)v,x_2^{1/k}) . \end{multline} We want to make the change of variable $x = x_1^{1/k}-(x_1-x_0)^{1/k}$ where by $x_1^{1/k}-(x_1-x_0)^{1/k}$ we mean the power series expansion in positive powers of $x_0$. For $n \in \mathbb{Z}$, it was shown in \cite{BDM} that \begin{equation} \left. (x_1^{1/k} - x)^n \right|_{x = x_1^{1/k}-(x_1-x_0)^{1/k}} = (x_1 - x_0)^{n/k}. \end{equation} Thus substituting $x = x_1^{1/k}-(x_1-x_0)^{1/k}$ into \[ x_2^{-1/k} \Biggl( \frac{x_1^{1/k}-x}{x_2^{1/k}}\Biggr)^{-|u|/2} \delta\Biggl( \frac{x_1^{1/k}-x}{x_2^{1/k}}\Biggr) Y_\sigma (Y(\Delta_k(x_1)u,x)\Delta_k(x_2)v,x_2^{1/k}) ,\] we have a well-defined power series given by \begin{multline*} x_2^{-1/k} \Biggl(\frac{(x_1-x_0)^{1/k}}{x_2^{1/k}}\Biggr)^{-|u|/2} \delta\Biggl(\frac{(x_1-x_0)^{1/k}}{x_2^{1/k}}\Biggr)\\ Y_\sigma (Y(\Delta_k(x_1)u, x_1^{1/k}-(x_1-x_0)^{1/k})\Delta_k(x_2)v,x_2^{1/k}) . \end{multline*} Let $f(z_1,z_2,x)$ be a complex analytic function in $z_1, z_2$, and $x$, and let $h(z_1,z_2,z_0)$ be a complex analytic function in $z_1, z_2$, and $z_0$. Then if $f(z_1,z_2,h(z_1,z_2,z_0))$ is well defined, and thinking of $z_1$ and $z_2$ as fixed ( i.e., considering $f(z_1,z_2,h(z_1,z_2,z_0))$ as a Laurent series in $z_0$) by the residue theorem of complex analysis, we have \begin{equation}\label{residue change of variables} {\rm Res}_x f(z_1,z_2,x)={\rm Res}_{z_0} \left( \frac{\partial}{\partial z_0} h(z_1,z_2,z_0) \right)f(z_1,z_2,h(z_1,z_2,z_0)) . 
\end{equation} This of course remains true for $f$ and $h$ formal power series in their respective variables. Thus making the change of variable $x= h(x_1,x_2,x_0) = x_1^{1/k}-(x_1-x_0)^{1/k}$, using (\ref{substitution equation}), (\ref{residue change of variables}), and the $\delta$-function identity (\ref{delta-function3}), we obtain \begin{eqnarray*} \lefteqn{ [\bar{Y}_\sigma(u,x_1),\bar{Y}_\sigma(v,x_2)] = }\\ &=& {\rm Res}_{x_0}\frac{1}{k}x_2^{-1/k} (x_1-x_0)^{1/k-1} \Biggl(\frac{(x_1-x_0)^{1/k}}{x_2^{1/k}}\Biggr)^{-|u|/2} \delta\Biggl( \frac{(x_1-x_0)^{1/k}}{x_2^{1/k}}\Biggr) \\ & & \quad Y_\sigma (Y(\Delta_k(x_1)u,x_1^{1/k}-(x_1-x_0)^{1/k}) \Delta_k(x_2)v,x_2^{1/k})\\ &=& {\rm Res}_{x_0}\frac{1}{k} x_2^{-1} \Biggl(\frac{x_1-x_0}{x_2}\Biggr)^{-|u|/2k} \delta\Biggl( \frac{(x_1-x_0)^{1/k}}{x_2^{1/k}}\Biggr) \\ & & \quad Y_\sigma (Y(\Delta_k(x_1)u,x_1^{1/k}-(x_1-x_0)^{1/k})\Delta_k(x_2)v,x_2^{1/k})\\ &=& {\rm Res}_{x_0}\frac{1}{k}x_1^{-1} \Biggl(\frac{x_2+x_0}{x_1}\Biggr)^{|u|/2k} \delta\Biggl( \frac{(x_2+x_0)^{1/k}}{x_1^{1/k}}\Biggr) \\ & & \quad Y_\sigma (Y(\Delta_k(x_1)u,x_1^{1/k}-(x_1-x_0)^{1/k})\Delta_k(x_2)v,x_2^{1/k}). \end{eqnarray*} Now we observe that \begin{multline} Y_\sigma (Y(\Delta_k(x_1)u,x_1^{1/k}-(x_1-x_0)^{1/k})\Delta_k(x_2)v,x_2^{1/k}) \\ \in \left\{ \begin{array}{ll} x_1^{|u|/2k} (\mathrm{End} \, M_\sigma) [[x_0]] [[x_1^{1/k}, x_1^{-1/k}]][[x_2^{1/2k}, x_2^{-1/2k}]] & \mbox{if $k$ is even}\\ \\ (\mathrm{End} \, M_\sigma) [[x_0]] [[x_1^{1/k}, x_1^{-1/k}]][[x_2^{1/2k}, x_2^{-1/2k}]] & \mbox{if $k$ is odd} \end{array} \right. . 
\end{multline} Thus letting $p(k) = 0$ if $k$ is even and $p(k) = 1$ if $k$ is odd, using the $\delta$-function substitution property (see e.g., \cite{LL}) and Proposition \ref{psun1}, we obtain \begin{eqnarray*} \lefteqn{ [\bar{Y}_\sigma(u,x_1),\bar{Y}_\sigma(v,x_2)] = }\\ &=& {\rm Res}_{x_0}\frac{1}{k}x_1^{-1} \Biggl(\frac{x_2+x_0}{x_1}\Biggr)^{p(k) |u|/2k} \delta\Biggl( \frac{(x_2+x_0)^{1/k}}{x_1^{1/k}}\Biggr) \\ & & \quad Y_\sigma (Y(\Delta_k(x_2+x_0)u,(x_2+x_0)^{1/k}-x_2^{1/k}) \Delta_k(x_2)v,x_2^{1/k})\\ &=& {\rm Res}_{x_0}\frac{1}{k}x_2^{-1} \Biggl(\frac{x_1-x_0}{x_2}\Biggr)^{-p(k) |u|/2k} \delta\Biggl( \frac{(x_1-x_0)^{1/k}}{x_2^{1/k}}\Biggr) \\ & & \quad Y_\sigma (Y(\Delta_k(x_2+x_0)u,(x_2+x_0)^{1/k}-x_2^{1/k}) \Delta_k(x_2)v,x_2^{1/k})\\ &=& {\rm Res}_{x_0}\frac{1}{k}x_2^{-1} \Biggl(\frac{x_1-x_0}{x_2}\Biggr)^{-p(k) |u|/2k} \delta\Biggl(\frac{(x_1-x_0)^ {1/k}}{x_2^{1/k}}\Biggr)\\ & & \quad Y_\sigma (\Delta_k(x_2)Y(u,x_0)v,x_2^{1/k}) \\ &=& {\rm Res}_{x_0}\frac{1}{k}x_2^{-1} \Biggl(\frac{x_1-x_0}{x_2}\Biggr)^{-p(k) |u|/2k} \delta\Biggl(\frac{(x_1-x_0)^ {1/k}}{x_2^{1/k}}\Biggr)\bar{Y}_\sigma(Y(u,x_0)v,x_2), \end{eqnarray*} giving (\ref{first-supercommutator}). \end{proof} \section{The construction of a weak $(1 \; 2 \; \cdots \; k$)-twisted $V^{\otimes k}$-module structure on a weak $\sigma$-twisted $V$-module $(M_\sigma, Y_\sigma)$ for $k$ even}\label{tensor-product-twisted-construction-section} \setcounter{equation}{0} Let $M_\sigma =(M_\sigma,Y_\sigma)$ be a weak $\sigma$-twisted $V$-module. Now we begin our construction of a weak $g$-twisted $V^{\otimes k}$-module structure on $M_\sigma$ when $k$ is an even positive integer and $g = (1 \; 2\; \cdots \; k)$. 
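The restriction to $k$ even can be seen concretely in (\ref{first-supercommutator}): for $k = 2$, the supercommutator reads
\begin{equation*}
[\bar{Y}_\sigma(u,x_1),\bar{Y}_\sigma(v,x_2)] \; = \; {\rm Res}_{x_0}\frac{x_2^{-1}}{2} \delta\Biggl(\frac{(x_1-x_0)^{1/2}}{x_2^{1/2}}\Biggr)\bar{Y}_\sigma(Y(u,x_0)v,x_2) ,
\end{equation*}
with no extra power of $\frac{x_1-x_0}{x_2}$ appearing, whereas for $k = 3$ and $u$ odd the extra factor $\left(\frac{x_1-x_0}{x_2}\right)^{1/6}$ survives: it cannot be absorbed into the $\delta$-function, since only powers of $\frac{x_1-x_0}{x_2}$ lying in $\frac{1}{3}\mathbb{Z}$ can be so absorbed. This is the obstruction referred to in Remark \ref{generalized-remark} below.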
Having established the properties of $\Delta_k(x)$ in the super case as in Section \ref{Delta-section} and proved the supercommutator (\ref{first-supercommutator}), our construction of a weak $g$-twisted $V^{\otimes k}$-module structure on $M_\sigma$ in the case when $k$ is even follows the same spirit as the construction in \cite{BDM} and \cite{B-superpermutation-odd}, but now with careful modifications to account for the fact that we are working with the $\sigma$-twisted vertex operators on the weak $\sigma$-twisted module $M_\sigma$ rather than just untwisted vertex operators on an untwisted weak module as in \cite{BDM} and \cite{B-superpermutation-odd}. That is, for $k$ even, we construct these weak $g$-twisted $V^{\otimes k}$-modules by first defining $g$-twisted vertex operators on a weak parity-twisted $V$-module $M_\sigma$ for a set of generators which are mutually local (see \cite{Li2}). These $g$-twisted vertex operators generate a local system which is a vertex superalgebra. We then construct a homomorphism of vertex superalgebras {}from $V^{\otimes k}$ to this local system which thus gives a weak $g$-twisted $V^{\otimes k}$-module structure on $M_\sigma$. For $u\in V$, and $j = 0, \dots, k-1$, set \begin{equation} \label{define-g-twist} Y_g(u^1,x) = \bar{Y}_\sigma(u,x)\quad \mbox{and} \quad Y_g(u^{j+1},x) = \lim_{x^{1/k}\to \eta^{j} x^{1/k}} Y_g(u^1,x) . \end{equation} \begin{rem}\label{generalized-remark}{\em From the supercommutator (\ref{first-supercommutator}) for $\bar{Y}_\sigma$, we see that defining $g$-twisted operators as above for the case when $k$ is odd cannot result in a twisted module structure on $M_\sigma$ due to the appearance of the extra term involving $(x_2^{-1} (x_1 - x_0))^{|u|/2k}$. This parallels the obstruction discussed in \cite{B-superpermutation-odd} that arises in trying to put a $g$-twisted $V^{\otimes k}$-module structure on an untwisted weak $V$-module if $k$ is even. 
That is, we get the same obstruction term in that case---see \cite{B-superpermutation-odd}, Lemma 4.3 and Remarks 4.1 and 5.1. } \end{rem} Note that $Y_g(u^j,x)=\sum_{p=0}^{k-1}Y_g^p(u^j,x)$ where $Y_g^p(u^j,x)=\sum_{n\in \frac{p}{k} + \mathbb Z}u^j_nx^{-n-1}$. \begin{lem}\label{l3.3} Let $u,v\in V$ of homogeneous sign. Then we have the supercommutator \begin{multline}\label{3.1} [Y_g(u^j,x_1),Y_g(v^m,x_2)] \\ = \; {\rm Res}_{x_0}\frac{1}{k}x_2^{-1}\delta \Biggl(\frac{\eta^{j-m}(x_1-x_0)^{1/k}}{x_2^{1/k}}\Biggr)Y_g((Y(u,x_0)v)^m,x_2) \end{multline} where $(Y(u,x_0)v)^m=\sum_{n\in\mathbb Z}(u_nv)^m x_0^{-n-1},$ and \begin{multline}\label{3.2} [Y_g^p(u^j,x_1),Y_g(v^m,x_2)] \\ = {\rm Res}_{x_0}\frac{1}{k}x_2^{-1}\eta^{(m-j)p}\left(\frac{x_1-x_0} {x_2}\right)^{-p/k}\delta\left(\frac{x_1-x_0} {x_2}\right)Y_g((Y(u,x_0)v)^m,x_2). \end{multline} \end{lem} \begin{proof} By Lemma \ref{l3.2}, equation (\ref{3.1}) holds if $j=m =1$ and $k$ is even. Then using (\ref{define-g-twist}), we obtain equation (\ref{3.1}) for any $j,m = 1,...,k$. Equation (\ref{3.2}) is a direct consequence of (\ref{3.1}). \end{proof} By Lemma \ref{l3.3} for $u,v\in V$ of homogeneous sign, there exists a positive integer $N$ such that \begin{equation}\label{a3.3} (x_1-x_2)^N [Y_g(u^j,x_1), Y_g(v^m,x_2)]=0. \end{equation} Taking the limit $x^{1/k} \longrightarrow \eta^{j-1} x^{1/k}$ in Lemma \ref{l3.1}, for $j=1,\dots,k$, we have \begin{equation} Y_g(L(-1)u^j,x)=\frac{d}{dx}Y_g(u^j,x). \end{equation} Thus the operators $Y_g(u^j,x)$ for $u \in V$, and for $j=1,\dots,k$, are mutually local and generate a local system $A$ of weak twisted vertex operators on $(M_\sigma , L(-1))$ in the sense of \cite{Li2}. Let $\rho$ be a map {}from $A$ to $A$ such that $\rho Y_g(u^j,x)=Y_g(u^{j-1},x)$ for $u\in V$ and $j=1,...,k$. By Theorem 3.14 of \cite{Li2}\footnote{There is a typo in the statement of Theorem 3.14 in \cite{Li2}. The $V$ in the theorem should be $A$. 
That is, the main result of the theorem is that the local system $A$ of the theorem has the structure of a vertex superalgebra.}, the local system $A$ generates a vertex superalgebra we denote by $(A,Y_A)$, and $\rho$ extends to an automorphism of $A$ of order $k$ such that $M_\sigma$ is a natural weak generalized $\rho$-twisted $A$-module in the sense that $Y_A(\alpha(x),x_1)=\alpha(x_1)$ for $\alpha(x)\in A$ are $\rho$-twisted vertex operators on $M_\sigma$. \begin{rem} {\em $\rho$ is given by \begin{equation} \rho a(x) = \lim_{x^{1/k} \to \eta^{-1}x^{1/k}} a(x) \end{equation} for $a(x)\in A$; see \cite{Li2}.} \end{rem} Let $A^j=\{c(x)\in A| \rho c(x)=\eta^j c(x)\}$ and $a(x)\in A^j$ of homogeneous sign in $A$. For any integer $n$ and $b(x)\in A$ of homogeneous sign, the operator $a(x)_{n}b(x)$ is an element of $A$ given by \begin{eqnarray}\label{3.3} a(x)_{n}b(x)={\rm Res}_{x_{1}}{\rm Res}_{x_{0}}\left(\frac{x_{1}-x_{0}}{x}\right)^{j/k}x_{0}^{n}\cdot X \end{eqnarray} where \[ X=x_{0}^{-1}\delta\left(\frac{x_{1}-x}{x_{0}}\right)a(x_{1}) b(x)- (-1)^{|a||b|} x_{0}^{-1} \delta\left(\frac{x-x_{1}}{-x_{0}}\right)b(x)a(x_{1}).\] Or, equivalently, $a(x)_{n}b(x)$ is defined by: \begin{eqnarray}\label{3.4} \sum_{n\in \mathbb{Z}}\left(a(x)_{n}b(x)\right)x_{0}^{-n-1} ={\rm Res}_{x_{1}}\left(\frac{x_{1}-x_{0}}{x}\right)^{j/k} \cdot X. \end{eqnarray} Thus following \cite{Li2}, for $a(z)\in A^j$, we define $Y_A(a(z),x)$ by setting $Y_A(a(z), x_0)b(z)$ equal to (\ref{3.4}). \begin{lem}\label{l3.5} For $u, v \in V$ of homogeneous sign, we have the supercommutator \[ [Y_A(Y_g(u^j,x),x_1), Y_A(Y_g(v^m,x),x_2)]=0\] for $j,m = 1,\dots, k$, with $j \neq m$. \end{lem} \begin{proof} The proof is analogous to the proof of Lemma 3.6 in \cite{BDM} where we use the vertex superalgebra structure of $A$ rather than just the vertex algebra structure and we use the supercommutators of Lemma \ref{l3.3}. 
\end{proof} Let $Y_g(u^i,z)_n$ for $n \in \mathbb{Z}$ denote the coefficient of $x^{-n-1}$ in the vertex operator $Y_A(Y_g(u^i,z), x)$ for $u \in V$. That is \[Y_A(Y_g(u^i,z), x) = \sum_{n \in \mathbb Z} Y_g(u^i,z)_n \; x^{-n-1} \in (\mathrm{End} \; A) [[x, x^{-1}]].\] \begin{lem}\label{l3.6} For $u_1,...,u_k\in V$, we have \begin{multline*} Y_A(Y_g(u^1_1,z)_{-1}\cdots Y_g(u_{k-1}^{k-1},z)_{-1}Y_g(u_k^k,z),x) \\ = Y_A(Y_g(u^1_1,z),x)\cdots Y_A(Y_g(u_{k-1}^{k-1},z),x)Y_A(Y_g(u_k^k,z),x) . \end{multline*} \end{lem} \begin{proof} The proof is analogous to the proof of Lemma 3.7 of \cite{BDM}, and Lemma 5.5 of \cite{B-superpermutation-odd}, but modified to the current setting. From the Jacobi identity on $A$, and Lemma \ref{l3.5}, we have, for $1\leq i<j\leq k$, \begin{eqnarray*} \lefteqn{Y_A(Y_g(u^i, z)_{-1} Y_g(v^j,z), x)}\\ &=& \mathrm{Res}_{x_1} \mathrm{Res}_{x_0} x_0^{-1} \left( x^{-1}_0\delta\left(\frac{x_1-x}{x_0}\right) Y_A(Y_g(u^i,z),x_1)Y_A(Y_g(v^j,z),x) \right. \\ & & \quad \left. - (-1)^{|u||v|} x^{-1}_0\delta\left(\frac{x-x_1}{-x_0}\right) Y_A(Y_g(v^j,z),x)Y_A(Y_g(u^i,z),x_1)\right)\\ &=& \mathrm{Res}_{x_1} \Bigl( (x_1-x)^{-1} Y_A(Y_g(u^i,z),x_1)Y_A(Y_g(v^j,z),x) \\ & & \quad + (-1)^{|u||v|} (x-x_1)^{-1} Y_A(Y_g(v^j,z),x)Y_A(Y_g(u^i,z),x_1)\Bigr)\\ &=& \sum_{n <0} Y_g(u^i,z)_n x^{-n-1} Y_A(Y_g(v^j,z),x) \\ & & \quad + (-1)^{|u||v|} Y_A(Y_g(v^j,z),x) \sum_{n \geq 0} Y_g(u^i,z)_n x^{-n-1} \\ &=& \sum_{n \in \mathbb Z} Y_g(u^i,z)_n x^{-n-1} Y_A(Y_g(v^j,z),x) \\ &=& Y_A(Y_g(u^i, z), x) Y_A(Y_g(v^j,z), x). \end{eqnarray*} The result follows by induction. \end{proof} Define the map $f : V^{\otimes k} \longrightarrow A$ by \begin{eqnarray*} f: V^{\otimes k} &\longrightarrow& A\\ u_1\otimes\cdots \otimes u_k = (u_1^1)_{-1}\cdots (u_{k-1}^{k-1})_{-1}u^k_k \! \! \! &\mapsto& \! \! \! Y_g(u_1^1,z)_{-1}\cdots Y_g(u_{k-1}^{k-1},z)_{-1}Y_g(u_k^k,z) \end{eqnarray*} for $u_1,...,u_k\in V$. 
Then $f(u^j)=Y_g(u^j,z).$ \begin{lem}\label{l3.7} $f$ is a homomorphism of vertex superalgebras. \end{lem} \begin{proof} We follow the spirit of the proof of \cite{B-superpermutation-odd} and \cite{BDM}, but must be careful when we need properties of the $\sigma$-twisted vertex operators $Y_\sigma$ on $M_\sigma$ rather than the less complicated case of needing only vertex operators on a weak $V$-module as in \cite{B-superpermutation-odd} and \cite{BDM}. We need to show that \[fY(u_1\otimes\cdots \otimes u_k,x)=Y_A( Y_g(u_1^1,z)_{-1}\cdots Y_g(u_{k-1}^{k-1},z)_{-1}Y_g(u_k^k,z),x)f\] for $u_i\in V.$ Take $v_i\in V$ for $i=1,...,k.$ Then \begin{eqnarray*} \lefteqn{fY(u_1\otimes\cdots \otimes u_k,x)(v_1\otimes \cdots\otimes v_k) }\\ &=& \! \! (-1)^s f(Y(u_1,x)v_1\otimes\cdots Y(u_k,x)v_k)\\ &=& \! \! (-1)^s Y_g(Y(u_1^1,x)v_1^1,z)_{-1}\cdots Y_g(Y(u_{k-1}^{k-1},x)v_{k-1}^{k-1},z)_{-1} Y_g(Y(u_k^k,x)v_k^k,z) \end{eqnarray*} for $s = \sum_{j=1}^{k-1} |v_j| \sum_{i = j + 1}^k |u_i|$. By Lemma \ref{l3.6}, we have \begin{multline*} Y_A( Y_g(u_1^1,z)_{-1}\cdots Y_g(u_{k-1}^{k-1},z)_{-1}Y_g(u_k^k,z),x) f(v_1\otimes \cdots \otimes v_k) \\ = Y_A(Y_g(u^1_1,z),x)\cdots Y_A(Y_g(u_{k-1}^{k-1},z),x)Y_A(Y_g(u_k^k,z),x) Y_g(v_1^1,z)_{-1}\\ \cdots Y_g(v_{k-1}^{k-1},z)_{-1}Y_g(v_k^k,z). \end{multline*} By Lemma \ref{l3.5}, it is enough to show that $$ Y_g(Y(u^j,x)v^j,z)=Y_A(Y_g(u^j,z),x)Y_g(v^j,z)$$ for $u,v\in V$ and $j=1,...,k.$ In fact, in view of the relation between $Y(u^1,z)$ and $Y(u^j,z)$ for $u\in V,$ we only need to prove the case $j=1.$ By Proposition \ref{psun1}, \begin{eqnarray*} Y_g(Y(u^1,x_0)v^1,x_2) &=& Y_\sigma(\Delta_k(x_2)Y(u,x_0)v,x_2^{1/k})\\ &=& Y_\sigma(Y(\Delta_k(x_2+x_0)u,(x_2+x_0)^{1/k}-x_2^{1/k})\Delta_k(x_2)v,x_2^{1/k}). 
\end{eqnarray*} On the other hand, \[Y_A(Y_g(u^1,x_2),x_0)Y_g(v^1,x_2)=\sum_{p=0}^{k-1}{\rm Res}_{x_1} \left(\frac{x_1-x_{0}}{x_2}\right)^{p/k}X\] where \begin{multline} X=x_{0}^{-1}\delta\left(\frac{x_1-x_2}{x_{0}}\right)Y_g(u^1,x_1) Y_g(v^1,x_2)\\ -(-1)^{|u||v|}x_{0}^{-1}\delta\left(\frac{x_2-x_1}{-x_{0}}\right) Y_g(v^1,x_2)Y_g(u^1,x_1). \end{multline} By equation (\ref{a3.3}), there exists a positive integer $N$ such that \[(x_1-x_2)^NY_g(u^1,x_1)Y_g(v^1,x_2)= (-1)^{|u||v|} (x_1-x_2)^NY_g(v^1,x_2)Y_g(u^1,x_1).\] Thus, using the three-term $\delta$-function identity (\ref{three-term-delta}), we have \begin{eqnarray*} X &=& x_{0}^{-1}\delta\left(\frac{x_1-x_2}{x_{0}}\right)Y_g(u^1,x_1) Y_g(v^1,x_2)\\ & &\quad - (-1)^{|u||v|} x_{0}^{-1} \delta\left(\frac{x_2-x_1}{-x_{0}}\right) x_0^{-N}(x_1-x_2)^NY_g(v^1,x_2)Y_g(u^1,x_1)\\ &=& x_{0}^{-1}\delta\left(\frac{x_1-x_2}{x_{0}}\right)x_0^{-N} \left((x_1-x_2)^NY_g(u^1,x_1)Y_g(v^1,x_2)\right)\\ & & \quad - (-1)^{|u||v|} x_{0}^{-1} \delta\left(\frac{x_2-x_1}{-x_{0}}\right) x_0^{-N}\left((x_1-x_2)^NY_g(u^1,x_1)Y_g(v^1,x_2)\right)\\ &=& x_2^{-1}x_0^{-N}\delta\left(\frac{x_1-x_0}{x_2}\right) \left((x_1-x_2)^NY_g(u^1,x_1)Y_g(v^1,x_2)\right) . \end{eqnarray*} Therefore using the $\delta$-function relation (\ref{delta-function2}), we have \begin{multline*} Y_A(Y_g(u^1,x_2),x_0)Y_g(v^1,x_2) \\ = {\rm Res}_{x_1} x_0^{-N} x_2^{-1} \delta\Biggl(\frac{(x_1-x_0)^{1/k}}{x_2^{1/k}}\Biggr) \left((x_1-x_2)^NY_g(u^1,x_1)Y_g(v^1,x_2)\right). 
\end{multline*} Let $x$ be a new formal variable which commutes with $x_0,x_1,x_2.$ Then using the $\delta$-function identities of Section \ref{formal-calculus-section} and the definition of $Y_g$ given by (\ref{define-g-twist}), we have \begin{eqnarray*} \lefteqn{x_2^{-1/k}\delta\Biggl(\frac{x_1^{1/k}-x}{x_2^{1/k}} \Biggr)\left((x_1-x_2)^N Y_g(u^1,x_1)Y_g(v^1,x_2)\right) }\\ &=& x^{-1}\delta\Biggl(\frac{x_1^{1/k}-x_2^{1/k}}{x}\Biggr) \left((x_1-x_2)^N Y_g(u^1,x_1)Y_g(v^1,x_2)\right)\\ & & \quad - \; x^{-1} \delta\Biggl(\frac{-x_2^{1/k}+x_1^{1/k}}{x} \Biggr)\left((x_1-x_2)^N Y_g(u^1,x_1)Y_g(v^1,x_2)\right)\\ &=& (x_1-x_2)^N x^{-1}\delta\Biggl(\frac{x_1^{1/k}-x_2^{1/k}}{x}\Biggr) Y_\sigma (\Delta_k(x_1)u,x_1^{1/k})Y_\sigma (\Delta_k(x_2)v,x_2^{1/k})\\ & &\quad -\; (x_1-x_2)^N x^{-1} \delta\Biggl(\frac{-x_2^{1/k} + x_1^{1/k}}{x}\Biggr) Y_\sigma (\Delta_k(x_2)v,x_2^{1/k})Y_\sigma (\Delta_k(x_1)u,x_1^{1/k})\\ &=& (x_1-x_2)^N x_2^{-1/k}\delta\Biggl(\frac{x_1^{1/k}-x}{x_2^{1/k}}\Biggr) Y_\sigma (Y(\Delta_k(x_1)u,x)\Delta_k(x_2)v,x_2^{1/k}). \end{eqnarray*} Note that the first term in the above formula is well defined when $x$ is replaced by $x_1^{1/k}-(x_1-x_0)^{1/k}$, and therefore the last term is also well defined under this substitution. Thus since $k$ is even \begin{eqnarray*} \lefteqn{ x_0^{-N}x_2^{-1/k}\delta\Biggl( \frac{(x_1-x_0)^{1/k}}{x_2^{1/k}}\Biggr)\left((x_1-x_2)^N Y_g(u^1,x_1)Y_g(v^1,x_2)\right) }\\ &=& x_2^{-1/k}\delta\Biggl(\frac{(x_1-x_0)^{1/k}}{x_2^{1/k}}\Biggr) Y_\sigma (Y(\Delta_k(x_1)u,x_1^{1/k}-(x_1-x_0)^{1/k})\Delta_k(x_2)v,x_2^{1/k})\\ &=& x_2^{-1/k}\delta\Biggl(\frac{(x_1-x_0)^{1/k}}{x_2^{1/k}}\Biggr)\\ & & \quad Y_\sigma (Y(\Delta_k(x_2+x_0)u,(x_2+x_0)^{1/k}-x_2^{1/k})\Delta_k(x_2)v,x_2^{1/k}). 
\end{eqnarray*} Finally using Proposition \ref{psun1}, we have \begin{eqnarray*} \lefteqn{Y_A(Y_g(u^1,x_2),x_0)Y_g(v^1,x_2) }\\ &=&{\rm Res}_{x_1}x_2^{-1}\delta\Biggl(\frac{(x_1-x_0)^{1/k}}{x_2^{1/k}}\Biggr)\\ & & \quad Y_\sigma (Y(\Delta_k(x_2+x_0)u,(x_2+x_0)^{1/k}-x_2^{1/k}) \Delta_k(x_2)v,x_2^{1/k})\\ &=& Y_\sigma (Y(\Delta_k(x_2+x_0)u,(x_2+x_0)^{1/k}-x_2^{1/k})\Delta_k(x_2)v,x_2^{1/k})\\ &=& Y_g(Y(u^1,x_0)v^1,x_2), \end{eqnarray*} as desired. \end{proof} Let $(M_\sigma,Y_\sigma)$ be a weak $\sigma$-twisted $V$-module, $k$ a positive even integer, and $g = (1 \; 2 \; \cdots \; k)$. Define $T_g^k(M_\sigma,Y_\sigma) = (T_g^k(M_\sigma), Y_g) = (M_\sigma, Y_g)$. That is $T_g^k(M_\sigma, Y_\sigma)$ is $M_\sigma$ as the underlying vector space and the vertex operator $Y_g$ is given by (\ref{define-g-twist}). Now we state our first main theorem of the paper. \begin{thm}\label{main1} $(T_g^k(M_\sigma),Y_g)$ is a weak $g$-twisted $V^{\otimes k}$-module such that $T_g^k(M_\sigma)=M_\sigma$, and $Y_g$, defined by (\ref{define-g-twist}), is the linear map {}from $V^{\otimes k}$ to $({\rm End} \; T_g^k(M_\sigma))[[x^{1/k}, \\ x^{-1/k}]]$ defining the twisted module structure. Moreover, (1) $(M_\sigma, Y_\sigma)$ is an irreducible weak $\sigma$-twisted $V$-module if and only if $(T_g^k(M_\sigma), Y_g)$ is an irreducible weak $g$-twisted $V^{\otimes k}$-module. (2) $M_\sigma$ is a weak admissible $\sigma$-twisted $V$-module if and only if $T_g^k(M_\sigma)$ is a weak admissible $g$-twisted $V^{\otimes k}$-module. (3) $M_\sigma$ is an ordinary $\sigma$-twisted $V$-module if and only if $T_g^k(M_\sigma)$ is an ordinary $g$-twisted $V^{\otimes k}$-module. \end{thm} \begin{proof} It is immediate {}from Lemma \ref{l3.7} that $T_g^k(M_\sigma)=M_\sigma$ is a weak $g$-twisted $V^{\otimes k}$-module with $Y_g(u^1,x)=\bar Y_\sigma(u,x)$. 
Note that with \begin{equation}\label{Delta-inverse} \Delta_k (x)^{-1} = ( x^{1/2k})^{-( 1 - k) 2L(0)} (k^{1/2})^{2L(0)} \exp \Biggl( -\sum_{j \in \ZZ} a_j x^{- j/k} L(j) \Biggr), \end{equation} we have \begin{equation}\label{for-grading} Y_g((\Delta_k(x)^{-1}u)^1,x)=\bar Y_\sigma (\Delta_k(x)^{-1}u,x)=Y_\sigma(u,x^{1/k}) , \end{equation} and all twisted vertex operators $Y_g(v,x)$ for $v\in V^{\otimes k}$ can be generated {}from $Y_g(u^1,x)$ for $u\in V.$ It is clear now that $M_\sigma$ is an irreducible weak $\sigma$-twisted $V$-module if and only if $T_g^k(M_\sigma)$ is an irreducible weak $g$-twisted $V^{\otimes k}$-module, proving statement (1). For statement (2), we first assume that $M_\sigma$ is a weak admissible $\sigma$-twisted $V$-module. Then from the definition of a weak admissible $\sigma$-module and Remark \ref{parity-grading-remark}, we have $M_\sigma = \coprod_{n\in \mathbb{N}} M_\sigma(n)$ such that for $m \in \frac{1}{2} \mathbb{Z}$, the component operator $u_m^\sigma$ of $Y_\sigma(u, z)$, satisfies $u_m^\sigma M_\sigma(n)\subset M_\sigma({\rm wt} \; u-m-1+n)$ if $u\in V$ is of homogeneous weight. Define a $\frac{1}{k}\mathbb N$-gradation on $T_g^k(M_\sigma)$ such that $T_g^k(M_\sigma)(n/k) = M_\sigma(n)$ for $n\in \mathbb N$. Recall that $Y_g(v,x) = \sum_{m \in \frac{1}{k} \mathbb{Z}} v^g_m x^{-m-1}$ for $v\in V^{\otimes k}$. We need to show that $v^g_mT_g^k(M_\sigma)(n)\subset T_g^k(M_\sigma)({\rm wt} \; v-m-1+n)$ for $m\in\frac{1}{k} \mathbb Z$, and $n \in \mathbb N$. Since all twisted vertex operators $Y_g(v,x)$ for $v\in V^{\otimes k}$ can be generated {}from $Y_g(u^1,x)$ for $u\in V$, it is enough to show $(u^1)^g_m T_g^k(M_\sigma)(n) \subset T_g^k(M_\sigma)({\rm wt} \; u-m-1+n)$. Let $u\in V_p$ for $p\in\frac{1}{2} \mathbb Z$. 
Then \[\Delta_k(x)u=\sum_{j=0}^{\infty}u(j)x^{p/k - p - j/k}\] where $u(j)\in V_{p-j}.$ Thus \begin{eqnarray*} Y_g(u^1,x) &= &Y_\sigma (\Delta_k(x)u,x^{1/k})\ = \ \sum_{j=0}^{\infty}Y_\sigma(u(j),x^{1/k}) x^{p/k - p - j/k} \\ &=& \sum_{j=0}^{\infty}\sum_{l \in \frac{1}{2} \mathbb{Z}} u(j)^\sigma_l x^{(-l-1)/k} x^{p/k - p - j/k} \nonumber. \end{eqnarray*} Therefore for $m \in \frac{1}{k} \mathbb{Z}$, we have \[(u^1)^g_m=\sum_{j=0}^{\infty}u(j)_{(1-k)p-j-1+km+k}.\] Since the weight of $u(j)_{(1-k)p-j-1+km+k}$ is $k(p-m-1)$, we see that for $n \in \frac{1}{k} \mathbb N$, we have $(u^1)^g_mT_g^k(M_\sigma)(n)=(u^1)^g_mM_\sigma(kn) \subset M_\sigma (k(p-m-1+n)) =T_g^k(M_\sigma)(p-m-1+n)$, showing that $T_g^k(M_\sigma)$ is a weak admissible $g$-twisted $V^{\otimes k}$-module. Conversely, we assume that $T_g^k(M_\sigma)$ is a weak admissible $g$-twisted $V^{\otimes k}$-module, i.e., we have $T_g^k(M_\sigma) = \coprod_{n\in \frac{1}{k}\mathbb{N}} T_g^k(M_\sigma)(n)$ such that for $m \in \frac{1}{k} \mathbb{Z}$, the component operator $u^g_m$ of $Y_g(u, x)$ satisfies $u^g_m T_g^k(M_\sigma)(n)\subset T_g^k(M_\sigma)({\rm wt} \; u-m-1+n)$ if $u\in V^{\otimes k}$ is of homogeneous weight. Define an $\mathbb N$-gradation on $M_\sigma$ such that $M_\sigma (n) = T_g^k(M_\sigma)(n/k)$ for $n\in \mathbb N.$ Note that by again letting $u\in V_p$ for $p\in\frac{1}{2} \mathbb Z$, then \[\Delta_k(x)^{-1} u=\sum_{j=0}^{\infty}u[j]x^{ p -p/k - j}\] where $u[j]\in V_{p-j}.$ Thus Equation (\ref{for-grading}) implies \begin{eqnarray*} Y_\sigma (u, x) &=& Y_g((\Delta_k(x^k)^{-1}u)^1,x^k) \ = \ \sum_{j=0}^\infty Y_g(u[j]^1, x^k) x^{pk - p -jk} \\ &=& \sum_{j=0}^\infty \sum_{l \in \frac{1}{k} \mathbb{Z}} (u[j]^1)^g_l x^{-kl-k} x^{pk - p -jk} \end{eqnarray*} and thus for $m \in \frac{1}{2} \mathbb{Z}$ \[u_m^\sigma = \sum_{j=0}^\infty (u[j]^1)^g_{\frac{1}{k}((k-1)p -jk -k + m + 1) }.\] The weight of $(u[j]^1)^g_{\frac{1}{k}((k-1)p -jk -k + m + 1) }$ is $\frac{1}{k}(p-m-1)$. 
Therefore for the weak $\sigma$-twisted $V$-module $M_\sigma$, we have for $n \in \mathbb N$, that $u^\sigma_m M_\sigma (n) = u^\sigma_m T_g^k(M_\sigma) (n/k) \subset T_g^k(M_\sigma) ( \frac{1}{k} (p - m - 1 + n)) = M_\sigma(p-m-1 + n)$, finishing the proof of (2). In order to prove (3) we write $Y_g(\bar\omega,x) = \sum_{n\in\mathbb Z} L^g(n)x^{-n-2}$ where $\bar\omega=\sum_{j=1}^k\omega^j$. We have \[Y_g(\bar\omega,x)=\sum_{j=0}^{k-1} \ \lim_{x^{1/k}\mapsto \eta^{-j}x^{1/k}} Y_g(\omega^1,x).\] It follows {}from (\ref{sun1}) that \begin{equation}\label{L(0)-conversion} L^g(0)=\frac{1}{k}L^\sigma (0)+\frac{(k^2-1)c}{24k}. \end{equation} This immediately implies (3). \end{proof} Let $V$ be an arbitrary vertex operator superalgebra and $g$ an automorphism of $V$ of finite order. We denote the categories of weak, weak admissible and ordinary generalized $g$-twisted $V$-modules by $\mathcal{ C}^g_w(V),$ $\mathcal{ C}^g_a(V)$ and $\mathcal{ C}^g(V)$, respectively. Now again consider the vertex operator superalgebra $V^{\otimes k}$ and the $k$-cycle $g = (1 \; 2 \; \cdots \; k)$. Define \begin{eqnarray*} T_g^k: \mathcal{ C}^\sigma_w(V) &\longrightarrow& \mathcal{ C}^g_w(V^{\otimes k})\\ (M_\sigma,Y_\sigma) &\mapsto& (T_g^k(M_\sigma),Y_g) = (M_\sigma,Y_g)\\ f &\mapsto& T_g^k(f) = f \end{eqnarray*} for $(M_\sigma,Y_\sigma)$ an object and $f$ a morphism in $\mathcal{ C}^\sigma_w(V)$. The following corollary to Theorem \ref{main1} follows immediately. 
\begin{cor}\label{c3.10} If $k$ is even, then $T_g^k$ is a functor {}from the category $\mathcal{ C}^\sigma_w(V)$ of weak parity-twisted $V$-modules to the category $\mathcal{ C}^g_w(V^{\otimes k})$ of weak $g = (1 \; 2 \; \cdots \; k)$-twisted $V^{\otimes k}$-modules, such that: (1) $T_g^k$ preserves irreducible objects; (2) The restrictions of $T_g^k$ to $\mathcal{ C}^\sigma_a(V)$ and $\mathcal{ C}^\sigma(V)$ are functors {}from $\mathcal{ C}^\sigma_a(V)$ and $\mathcal{ C}^\sigma(V)$ to $\mathcal{C}^g_a(V^{\otimes k})$ and $\mathcal{ C}^g(V^{\otimes k})$, respectively. \end{cor} In the next section we will construct a functor $U_g^k$, in the case when $k$ is even, {}from the category $\mathcal{ C}^g_w(V^{\otimes k})$ to the category $\mathcal{ C}^\sigma_w(V)$ such that $U_g^k \circ T_g^k = id_{\mathcal{ C}^\sigma_w(V)}$ and $T_g^k \circ U_g^k = id_{\mathcal{ C}^g_w(V^{\otimes k})}$. \section{Constructing a weak $\sigma$-twisted $V$-module structure on a weak $g = (1 \; 2 \; \cdots \; k)$-twisted $V^{\otimes k}$-module for $k$ even}\label{classification-section} \setcounter{equation}{0} For $k \in \ZZ$ and $g = (1\; 2\; \cdots \; k)$, let $M_g=(M_g,Y_g)$ be a weak $g$-twisted $V^{\otimes k}$-module. Motivated by the construction of weak $g$-twisted $V^{\otimes k}$-modules {}from weak $\sigma$-twisted $V$-modules in Section \ref{tensor-product-twisted-construction-section}, we consider \begin{equation}\label{define U} Y_g((\Delta_k(x^k)^{-1}u)^1,x^k) \end{equation} for $u\in V$ where $\Delta_k (x)^{-1}$ is given by (\ref{Delta-inverse}). Note that (\ref{define U}) is multivalued since $Y_g((\Delta_k(x)^{-1}u)^1,x) \in (\mathrm{End} \, M_g) [[x^{1/2k}, x^{-1/2k}]]$. Thus we define \begin{equation}\label{define-sigma-twist} Y_\sigma(u,x) = Y_g((\Delta_k(x^k)^{-1}u)^1,x^k) \end{equation} to be the unique formal Laurent series in $(\mathrm{End} \, M_g) [[x^{1/2}, x^{-1/2}]]$ given by taking $(x^{k})^{1/2k} = x^{1/2}$. 
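For example, for $k = 2$, (\ref{Delta-inverse}) specializes to
\begin{equation*}
\Delta_2 (x)^{-1} \; = \; x^{L(0)/2} \, 2^{L(0)} \exp \Biggl( -\sum_{j \in \ZZ} a_j x^{- j/2} L(j) \Biggr) ,
\end{equation*}
and (\ref{define-sigma-twist}) becomes $Y_\sigma(u,x) = Y_g((\Delta_2(x^2)^{-1}u)^1,x^2)$. Here the factor $((x^2)^{1/4})^{2L(0)}$ appearing in $\Delta_2(x^2)^{-1}$ is single-valued only after the branch $(x^2)^{1/4} = x^{1/2}$ is fixed, under which it becomes $x^{L(0)}$; applied to an odd vector $u$ of half-integer weight it then contributes precisely the half-odd-integer powers of $x$ expected of a parity-twisted vertex operator.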
Our goal in this section is to construct a functor $U_g^k : \mathcal{ C}_w^g(V^{\otimes k}) \rightarrow \mathcal{ C}^\sigma_w(V)$ with $U_g^k(M_g,Y_g) = (U_g^k(M_g),Y_\sigma) = (M_g,Y_\sigma)$ for the case when $k$ is even. If we instead define $Y_\sigma$ by taking $(x^{k})^{1/2k} = \eta^j x^{1/2}$ for $\eta$ a fixed primitive $k$-th root of unity and $j=1,\dots,k-1$, then $(M_g,Y_\sigma)$ will not be a weak $\sigma$-twisted $V$-module. Further note that this implies that if we allow $x=z$ to be a complex number and if we define $z^{1/k}$ using the principal branch of the logarithm, then much of our work in this section is valid if and only if $-\pi/k < \mathrm{arg} \; z < \pi/k$. \begin{lem}\label{l4.1} For $u\in V,$ we have \begin{eqnarray*} Y_\sigma(L(-1)u,x) &=& \left(\frac{d}{dx}((x^k)^{1/k})\right)\frac{d}{dx}Y_\sigma(u,x)\\ &=& \frac{d}{dx} Y_\sigma(u,x) \end{eqnarray*} on $M_g$. Thus the $L(-1)$-derivative property holds for $Y_\sigma$ on $M_g$. \end{lem} \begin{proof} The proof is similar to that of Lemma \ref{l3.1}. By Lemma \ref{c2.5} we have \[\Delta_k(x)^{-1}L(-1)-kx^{-1/k + 1}L(-1)\Delta_k(x)^{-1} = kx^{-1/k + 1}\frac{d}{dx}\Delta_k(x)^{-1}.\] Making the change of variable $x\to x^k$ gives \[\Delta_k(x^k)^{-1} L(-1) - k(x^k)^{-1/k} x^k L(-1) \Delta_k(x^k)^{-1} = (x^k)^{-1/k}x\frac{d}{dx}\Delta_k(x^k)^{-1}.\] Thus if $(x^k)^{1/k} = \eta^j x$, we have \begin{eqnarray*} \lefteqn{\frac{d}{dx}Y_g((\Delta_k(x^k)^{-1}u)^1,x^k)} \\ &=& Y_g((\frac{d}{dx}\Delta_k(x^k)^{-1}u)^1,x^k)+\left.\frac{d}{dz} Y_g((\Delta_k(x^k)^{-1}u)^1,z^k)\right|_{z=x}\\ &=& Y_g((\frac{d}{dx}\Delta_k(x^k)^{-1}u)^1,x^k)+ kx^{k-1}Y_g(L(-1)(\Delta_k(x^k)^{-1}u)^1,x^k)\\ &=& \eta^j Y_g((\Delta_k(x^k)^{-1}L(-1)u)^1,x^k). \end{eqnarray*} Since by definition $Y_\sigma(u,x) = Y_g((\Delta_k(x^k)^{-1}u)^1,x^k)$ with $(x^k)^{1/k} = x$, the result follows. \end{proof} \begin{lem}\label{l4.2} Let $u,v\in V$. 
Then on $M_g$, we have the supercommutator \begin{multline} [Y_\sigma (u,x_1),Y_\sigma (v,x_2)] \\ = {\rm Res}_{x_0} x_2^{-1}\delta\left(\frac{x_1-x_0}{x_2}\right) \left(\frac{x_1-x_0}{x_2}\right)^{\frac{1}{2}(1-k)|u|} Y_\sigma (Y(u, x_0)v,x_2), \end{multline} i.e., \begin{multline} [Y_\sigma (u,x_1), Y_\sigma (v,x_2) ] \\ = \left\{ \begin{array}{ll} {\rm Res}_{x_0} x_2^{-1}\delta\left(\frac{x_1-x_0}{x_2}\right) \left(\frac{x_1-x_0}{x_2}\right)^{\frac{|u|}{2}} Y_\sigma (Y(u, x_0)v,x_2) & \mbox{if $k$ is even}\\ {\rm Res}_{x_0} x_2^{-1}\delta\left(\frac{x_1-x_0}{x_2}\right) Y_\sigma (Y(u, x_0)v,x_2) & \mbox{if $k$ is odd} \end{array} \right. . \end{multline} \end{lem} \begin{proof} The proof is similar to the proof of Lemma \ref{l3.2} and is analogous to the proof of Lemma 4.2 in \cite{BDM}, but with the significant change that we go from $g$-twisted operators to $\sigma$-twisted operators, rather than from $g$-twisted operators to untwisted operators. From the twisted Jacobi identity on $(M_g, Y_g)$, we have \begin{equation}\label{commutator for Lemma 4.2} [Y_g(u^1,x_1),Y_g(v^1,x_2)] \; = \; {\rm Res}_{x_0}\frac{1}{k}x_2^{-1} \delta\Biggl(\frac{(x_1-x_0)^{1/k}}{x_2^{1/k}}\Biggr)Y_g( Y(u^1,x_0)v^1,x_2). \end{equation} Therefore, \begin{eqnarray*} \lefteqn{[Y_\sigma(u,x_1),Y_\sigma(v,x_2)]} \\ &=& [Y_g((\Delta_k(x^k_1)^{-1}u)^1,x_1^k),Y_g((\Delta_k(x_2^k)^{-1}v)^1,x_2^k)] \\ &=& {\rm Res}_{x}\frac{1}{k} x_2^{-k} \delta \left( \frac{(x_1^k - x)^{1/k}}{x_2} \right) Y_g( Y( (\Delta_k(x_1^k)^{-1}u)^1,x) (\Delta_k(x_2^k)^{-1}v)^1,x_2^k). \end{eqnarray*} We want to make the change of variable $x=x_1^k-(x_1-x_0)^{k}$ where we choose $x_0$ such that $((x_1 - x_0)^k)^{1/k} = x_1 - x_0$. 
Then noting that $(x_1^k - x)^{n/k} |_{x = x_1^k-(x_1-x_0)^{k}} = (x_1 - x_0)^n$ for all $n \in \mathbb{Z}$, and using (\ref{residue change of variables}), we have \begin{eqnarray*} \lefteqn{[Y_\sigma(u,x_1),Y_\sigma(v,x_2)]} \\ &=& {\rm Res}_{x_0} x_2^{-k} (x_1-x_0)^{k-1} \delta \left( \frac{ x_1-x_0}{x_2} \right) Y_g( Y((\Delta_k(x_1^k)^{-1}u)^1,x_1^k - (x_1-x_0)^{k})\\ & & \quad (\Delta_k(x_2^k)^{-1}v)^1,x_2^k)\\ &=& {\rm Res}_{x_0} x_2^{-1}\delta\left(\frac{x_1-x_0}{x_2}\right) Y_g( Y( (\Delta_k(x_1^k)^{-1}u)^1, x_1^k - (x_1-x_0)^{k})\\ & & \quad (\Delta_k(x_2^k)^{-1}v)^1,x_2^k)\\ &=& {\rm Res}_{x_0} x_1^{-1}\delta\left(\frac{x_2+x_0}{x_1}\right) Y_g( Y( (\Delta_k(x_1^k)^{-1}u)^1, x_1^k - (x_1-x_0)^{k})\\ & & \quad (\Delta_k(x_2^k)^{-1}v)^1,x_2^k)\\ &=& {\rm Res}_{x_0} x_1^{-1}\delta\left(\frac{x_2+x_0}{x_1}\right) \left(\frac{x_2+x_0}{x_1}\right)^{\frac{1}{2}(1-k)|u|} Y_g((Y(\Delta_k((x_2+x_0)^k)^{-1}u, \\ & & \quad (x_2+x_0)^k-x_2^{k}) \Delta_k(x_2^k)^{-1}v)^1,x_2^k)\\ &=& {\rm Res}_{x_0} x_2^{-1} \delta \left( \frac{x_1-x_0}{x_2}\right) \left(\frac{x_1-x_0}{x_2}\right)^{\frac{1}{2}(k-1)|u|} Y_g((Y(\Delta_k((x_2+x_0)^k)^{-1}u, \\ & & \quad (x_2+x_0)^k-x_2^{k}) \Delta_k(x_2^k)^{-1}v)^1,x_2^k). \end{eqnarray*} Thus the proof is reduced to proving \begin{eqnarray*} Y(\Delta_k((x_2+x_0)^k)^{-1}u,(x_2+x_0)^k - x_2^{k})\Delta_k(x_2^k)^{-1} = \Delta_k(x_2^k)^{-1} Y \left(u, x_0 \right), \end{eqnarray*} i.e., proving \begin{equation}\label{3.10} \Delta_k(x_2^k)Y(\Delta_k((x_2+x_0)^k)^{-1}u,(x_2+x_0)^k- x_2^{k})\Delta_k(x_2^k)^{-1} = Y \left(u, x_0 \right). \end{equation} In Proposition \ref{psun1}, substituting $u$, $z$ and $z_0$ with $ \Delta_k((x_2+x_0)^k)^{-1}u,$ $x_2^k$ and $(x_2+x_0)^k - x_2^{k}$, respectively, gives equation (\ref{3.10}). \end{proof} Let $(M_g,Y_g)$ be a weak $g$-twisted $V^{\otimes k}$-module, for $k$ a positive even integer, and $g = (1 \; 2 \; \cdots \; k)$. Define $U_g^k(M_g,Y_g) = (U_g^k(M_g), Y_\sigma) = (M_g, Y_\sigma)$. 
That is $U_g^k(M_g, Y_g)$ is $M_g$ as the underlying vector space and the $\sigma$-twisted vertex operator $Y_\sigma$ is given by (\ref{define-sigma-twist}). \begin{thm}\label{t4.l} Given a weak $g$-twisted $V$-module $(M_g,Y_g)$, with the notations as above, $U_g^k(M_g,Y_g) = (U_g^k(M_g),Y_\sigma) = (M_g,Y_\sigma)$ is a weak $\sigma$-twisted $V$-module. \end{thm} \begin{proof} Since the $L(-1)$-derivation property has been proved for $Y_\sigma$ in Lemma \ref{l4.1}, we only need to prove the twisted Jacobi identity. In this setting, the twisted Jacobi identity is equivalent to the supercommutator formula given by Lemma \ref{l4.2} for the case when $k$ is even, and the associator formula. The associator formula states that for $u,v\in V$ and $w \in U_g^k(M_g)$, there exists a positive integer $n$ such that \[(x_0+x_2)^{|u|/2 + n}Y_\sigma(u,x_0+x_2)Y_\sigma(v,x_2)w=(x_2+x_0)^{|u|/2 + n}Y_\sigma(Y(u,x_0)v,x_2)w .\] Here we are using the fact that the eigenspaces for the parity automorphism $\sigma$ are given by $V^0 = V^{(0)}$ and $V^1 = V^{(1)}$. Write $u^1=\sum_{j=0}^{k-1}u^1_{(j)}$ where $gu^1_{(j)} = \eta^ju^1_{(j)}$. Then {}from the twisted Jacobi identity, we have the following associator: There exists a positive integer $m$ such that for $n\geq m,$ \[(x_0+x_2)^{j/k+n}Y_g(u^1_{(j)},x_0+x_2)Y_g(v^1,x_2)w= (x_2+x_0)^{j/ k+n}Y_g(Y(u^1_{(j)},x_0)v^1,x_2)w\] for $j=0,...,k-1$. Replacing $x_2$ by $x_2^k$ and $x_0$ by $(x_0+x_2)^k-x_2^k$ gives \begin{multline*} (x_0+x_2)^{j+kn}Y_g(u^1_{(j)},(x_0+x_2)^k)Y_g(v^1,x_2^k)w\\ = (x_2+x_0)^{j+kn}Y_g(Y(u^1_{(j)},(x_2+x_0)^k-x_2^k)v^1,x_2^k)w. \end{multline*} Note that if $a\in V^{\otimes k}$ such that $ga=\eta^ja$, then $Y_g(a,x)=\sum_{l\in j/k+\mathbb Z}a_nx^{-l-1}$. Thus there exists a positive integer $m_j$ such that if $n_j\geq m_j$, then \begin{multline*} (x_0+x_2)^{n_j}Y_g(u^1_{(j)},(x_0+x_2)^k)Y_g(v^1,x_2^k)w\\ =(x_2+x_0)^{n_j} Y_g(Y(u^1_{(j)},(x_2+x_0)^k-x_2^k)v^1,x_2^k)w \end{multline*} for $j=0,...,k-1$. 
As a result we see that there exists a positive integer $m$ such that if $n\geq m$, then \begin{multline*} (x_0+x_2)^{n}Y_g(u^1,(x_0+x_2)^k)Y_g(v^1,x_2^k)w\\ = (x_2+x_0)^{n}Y_g(Y(u^1,(x_2+x_0)^k-x_2^k)v^1,x_2^k)w. \end{multline*} Note that $\Delta_k(x)^{-1} u \in (x^{\frac{1}{2k}})^{(k-1)2 \mathrm{wt} \, u} V[x^{-1}]$. Thus for $k$ even, we have that $\Delta_k(x^k)^{-1} u \in x^{-|u|/2} V[x, x^{-1}]$. Therefore we can write $\Delta_k((x_0+x_2)^k)^{-1}u = (x_0 + x_2)^{-|u|/2}\sum_{j \in \mathbb{N}} u_j(x_0+x_2)^{s_j}$ for some $u_j\in V$ and integers $s_j \in \mathbb{Z}$, and note that this is a finite sum. Similarly we have a finite sum $\Delta_k(x_2^k)^{-1}v=x_2^{-|v|/2} \sum_{j\in \mathbb{N}}v_j x_2^{t_j}$ for some $v_j \in V$ and $t_j \in \mathbb{Z}$. Thus there exists a positive integer $m$ such that if $n\geq m$, then \begin{multline*} (x_0+x_2)^{n+s_i}Y_g(u_i^1,(x_0+x_2)^k)Y_g(v_j^1,x_2^k)w\\ =(x_2+x_0)^{n+s_i}Y_g(Y(u_i^1,(x_2+x_0)^k-x_2^k)v_j^1,x_2^k)w \end{multline*} for all $i,j\in \mathbb{N}$. Finally, using equation (\ref{3.10}), we have for $n\geq m,$ \begin{eqnarray*} \lefteqn{(x_0+x_2)^{|u|/2 + n} Y_\sigma(u,x_0+x_2)Y_\sigma(v,x_2)w} \\ &=& (x_0+x_2)^{|u|/2 + n} Y_g((\Delta_k((x_0+x_2)^k)^{-1}u)^1, (x_0+x_2)^k)\\ & & \quad Y_g((\Delta_k(x_2^k)^{-1}v)^1,x_2^k) w\\ &=& \sum_{i,j \geq 0}(x_0+x_2)^{n+s_i}x_2^{-|v|/2 + t_j}Y_g(u_i^1,(x_0+x_2)^k) Y_g(v_j^1,x_2^k)w\\ &=& \sum_{i,j \geq 0}(x_2+x_0)^{n+s_i}x_2^{-|v|/2 + t_j}Y_g(Y(u_i^1, (x_2+x_0)^k-x_2^k)v_j^1,x_2^k)w\\ &=& (x_2+x_0)^{ |u|/2 + n} Y_g(Y(\Delta_k((x_2+x_0)^k)^{-1}u)^{1},(x_2+x_0)^k-x_2^k)\\ & & \quad (\Delta_k(x_2^k)^{-1}v)^1,x_2^k)w\\ &=& (x_2+x_0)^{|u|/2 + n} Y_g((\Delta_k(x_2^k)^{-1}Y(u,x_0)v)^1,x_2^k)w\\ &=& (x_2+x_0)^{|u|/2 + n} Y_\sigma(Y(u,x_0)v,x_2)w \end{eqnarray*} completing the proof. 
\end{proof} \begin{thm}\label{t4.ll} For $k$ an even positive integer, and $g = (1 \; 2\; \cdots \; k)$, the map $U_g^k$ is a functor {}from the category $\mathcal{ C}_w^g(V^{\otimes k})$ of weak $g$-twisted $V^{\otimes k}$-modules to the category $\mathcal{ C}_w^\sigma(V)$ of weak $\sigma$-twisted $V$-modules such that $T_g^k \circ U_g^k = id_{\mathcal{ C}_w^g(V^{\otimes k})}$ and $U_g^k \circ T_g^k = id_{\mathcal{ C}_w^\sigma(V)}$. In particular, the categories $\mathcal{ C}_w^g(V^{\otimes k})$ and $\mathcal{ C}_w^\sigma(V)$ are isomorphic. Moreover, (1) The restrictions of $T_g^k$ and $U_g^k$ to the category of admissible $\sigma$-twisted $V$-modules $\mathcal{ C}_a^\sigma(V)$ and to the category of admissible $g$-twisted $V^{\otimes k}$-modules $\mathcal{ C}_a^g(V^{\otimes k})$, respectively, give category isomorphisms. In particular, $V^{\otimes k}$ is $g$-rational if and only if $V$ is $\sigma$-rational. (2) The restrictions of $T_g^k$ and $U_g^k$ to the category of ordinary $\sigma$-twisted $V$-modules $\mathcal{ C}^\sigma(V)$ and to the category of ordinary $g$-twisted $V^{\otimes k}$-modules $\mathcal{ C}^g(V^{\otimes k})$, respectively, give category isomorphisms. \end{thm} \begin{proof} It is trivial to verify $T_g^k \circ U_g^k = id_{\mathcal{ C}_w^g(V^{\otimes k})}$ and $U_g^k \circ T_g^k = id_{\mathcal{ C}_w^\sigma(V)}$ {}from the definitions of the functors $T_g^k$ and $U_g^k$. Parts (1) and (2) follow {}from Theorem \ref{main1}. \end{proof} Using the functor $T_g^k$ giving the isomorphism between the categories $\mathcal{C}^\sigma(V)$ and $\mathcal{C}^g(V^{\otimes k})$ as well as the actual construction of $g$-twisted $V^{\otimes k}$-modules from $\sigma$-twisted $V$-modules, we have a correspondence between graded traces of modules in $\mathcal{C}^\sigma (V)$ and modules in $\mathcal{C}^g(V^{\otimes k})$. In particular, from (\ref{L(0)-conversion}), we have the following corollary. 
\begin{cor}\label{graded-dimension-corollary} Let $g = (1 \; 2 \; \cdots \; k)$ for $k$ even. Then $(M_\sigma, Y_\sigma)$ is an ordinary $\sigma$-twisted $V$-module with graded dimension \[ \mathrm{dim}_q M_\sigma = tr_{M_\sigma} q^{-c/24 + L^\sigma (0)} = q^{-c/24} \sum_{\lambda \in \mathbb{C}} \mathrm{dim} (M_\lambda) q^\lambda \] if and only if $(T_g^k(M_\sigma), Y_g)$ is an ordinary $(1 \; 2 \; \cdots \; k)$-twisted $V^{\otimes k}$-module with graded dimension \begin{eqnarray*}\mathrm{dim}_q T_g^k(M_\sigma) &=& tr_{T_g^k(M_\sigma)} q^{-kc/24 + L^g(0)} = tr_{T_g^k(M_\sigma)} q^{-kc/24 +\frac{1}{k} L^\sigma (0) + (k^2 - 1)c/24k} \\ &=& \mathrm{dim}_{q^{1/k}} M_\sigma . \end{eqnarray*} \end{cor}
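To make the last equality in the corollary explicit, the exponent bookkeeping reduces to an elementary computation, using only the conversion $L^g(0) = \tfrac{1}{k}L^\sigma(0) + \tfrac{(k^2-1)c}{24k}$ quoted from (\ref{L(0)-conversion}):

```latex
-\frac{kc}{24} + \frac{1}{k}L^\sigma(0) + \frac{(k^2-1)c}{24k}
  \;=\; \frac{-k^2 c + (k^2-1)c}{24k} + \frac{1}{k}L^\sigma(0)
  \;=\; \frac{1}{k}\Bigl(-\frac{c}{24} + L^\sigma(0)\Bigr),
```

so that $q^{-kc/24 + L^g(0)} = \bigl(q^{1/k}\bigr)^{-c/24 + L^\sigma(0)}$, which is exactly the substitution $q \mapsto q^{1/k}$ relating the two graded dimensions.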
\section{Introduction} \label{sec:intro} Let $\Sigma$ be a compact connected oriented surface with one boundary component, and let $g\geq 0$ be the genus of $\Sigma$. A \emph{homology cobordism} of $\Sigma$ is a pair $(M,m)$ where $M$ is a compact connected oriented $3$-manifold and $m: \partial\left(\Sigma \times [-1,1] \right) \to \partial M$ is an orientation-preserving homeomorphism such that the inclusions $m_\pm: \Sigma \to M$ defined by $x \mapsto m(x,\pm 1)$ induce isomorphisms $H_*(\Sigma;\mathbb{Z}) \to H_*(M;\mathbb{Z})$. Thus the $3$-manifold $M$ is a cobordism (with corners) between $\partial_+ M := m_+(\Sigma)$ and $\partial_- M := m_-(\Sigma)$. It is convenient to denote the cobordism $(M,m)$ simply by $M$, the convention being that the boundary parametrization is always denoted by the lower-case letter $m$. In particular, we shall denote the \emph{trivial cobordism} $(\Sigma \times [-1,1], \operatorname{Id})$ simply by $\Sigma \times [-1,1]$. The set of homeomorphism classes of homology cobordisms of $\Sigma$ is denoted by $$ \mathcal{C} := \mathcal{C}(\Sigma), $$ where two homology cobordisms $M,M'$ are considered \emph{homeomorphic} if there is an orientation-preserving homeomorphism $f:M \to M'$ such that $f|_{\partial M} \circ m = m'$. The \emph{composition} of two cobordisms $M$ and $M'$ is defined by ``stacking'' $M'$ on the top of $M$, i.e.\ we define \begin{displaymath} M \circ M' := M \cup_{m_+ \circ (m'_-)^{-1}} M' \end{displaymath} with $\partial(M \circ M')$ parametrized in the obvious way. So there is a monoid structure on $\mathcal{C}$. The \emph{mapping class group} of $\Sigma$ is the group of isotopy classes of self-homeomorphisms of $\Sigma$ that leave the boundary pointwise invariant. We shall denote it by $$ \mcg := \mcg(\Sigma). 
$$ The mapping cylinder construction $\mathbf{c}: \mcg \to \mathcal{C}$ is defined in the usual way by \begin{equation} \label{eq:mapping_cylinder} \mathbf{c}(s) := \big(\Sigma \times [-1,1], (\operatorname{Id} \times (-1)) \cup (\partial \Sigma \times \operatorname{Id}) \cup (s \times 1)\big). \end{equation} Since the homomorphism $\mathbf{c}$ is injective, we shall sometimes consider the group $\mcg$ as a submonoid of $\mathcal{C}$ and remove $\mathbf{c}$ from our notation. A base point $\star$ being fixed on $\partial \Sigma$, the mapping class group acts on the fundamental group $\pi := \pi_1(\Sigma,\star)$. The resulting homomorphism $ \rho: \mcg \to \operatorname{Aut}(\pi) $ is known as the \emph{Dehn--Nielsen representation} and is injective. For each $k\geq 0$, this representation induces a group homomorphism \begin{equation} \label{eq:rho_k_mcg} \rho_k: \mcg \longrightarrow \operatorname{Aut}(\pi/\Gamma_{k+1} \pi) \end{equation} where $\pi = \Gamma_1 \pi \supset \Gamma_2 \pi \supset \Gamma_3 \pi \supset \cdots$ denotes the lower central series of $\pi$. The \emph{Johnson filtration} of the mapping class group is the decreasing sequence of subgroups $$ \mcg = \mcg[0] \supset \mcg[1] \supset \mcg[2] \supset \mcg[3] \supset \cdots $$ where $\mcg[k]$ denotes the kernel of $\rho_k$ for all $k\geq 1$. The group $\pi$ being residually nilpotent, the Johnson filtration has trivial intersection. By virtue of Stallings' theorem \cite{Stallings}, each homomorphism $\rho_k$ can be extended to the monoid $\mathcal{C}$, so that we have a filtration $$ \mathcal{C} = \mathcal{C}[0] \supset \mathcal{C}[1] \supset \mathcal{C}[2] \supset \mathcal{C}[3] \supset \cdots $$ of $\mathcal{C}$ by submonoids \cite{GL_tree}. But the intersection of this filtration is far from being trivial since, for $g=0$, we have $\mathcal{C}[k] = \mathcal{C}$ for all $k\geq 0$. The first term $\mcg[1]$ of the Johnson filtration is known as the \emph{Torelli group} of the surface $\Sigma$. 
This is the subgroup of $\mcg$ acting trivially on the first homology group $H:= H_1(\Sigma;\mathbb{Z})$. We shall denote it here by $$ \Torelli := \Torelli(\Sigma). $$ The study of this group, from a topological point of view, started with works of Birman \cite{Birman_Siegel} and was followed by Johnson in the eighties. The reader is referred to his survey \cite{Johnson_survey} for an overview of his work. The Torelli group is residually nilpotent for the following reason: the commutator of two subgroups $\mcg[k]$ and $\mcg[l]$ of the Johnson filtration is contained in $\mcg[k+l]$ for all $k,l\geq 1$, so that the lower central series of $\Torelli$ $$ \Torelli = \Gamma_1 \Torelli \supset \Gamma_2 \Torelli \supset \Gamma_3 \Torelli \supset \cdots $$ is finer than the Johnson filtration (i.e.\ we have $\Gamma_k \Torelli \subset \mcg[k]$). The graded Lie ring $$ \operatorname{Gr}^\Gamma \Torelli := \bigoplus_{i\geq 1} \frac{\Gamma_i \Torelli}{\Gamma_{i+1} \Torelli} $$ has been computed explicitly in degree $i=1$ by Johnson \cite{Johnson_abelianization} and in degree $i=2$ with rational coefficients by Morita \cite{Morita_Casson1, Morita_Casson2}. Computations in degree $i=2$ with integral coefficients have also been done by Yokomizo \cite{yokomizo}. Johnson's computation of the abelianized Torelli group $\Torelli/\Gamma_2 \Torelli$ involves the action of $\Torelli$ on $\pi/\Gamma_3 \pi$ as well as the Rochlin invariant of homology $3$-spheres in the form of some homomorphisms introduced by Birman and Craggs \cite{BC}. Morita's computation of $(\Gamma_2\Torelli/\Gamma_3 \Torelli)\otimes \mathbb{Q}$ involves refinements of the latter invariants, namely the action of $\Torelli$ on $\pi/\Gamma_4 \pi$ and the Casson invariant. Besides, Hain found a presentation of the graded Lie algebra $\operatorname{Gr}^\Gamma\! \Torelli \otimes \mathbb{Q}$ using mixed Hodge theory \cite{Hain}. 
Thus $3$-dimensional invariants play an important role in Johnson and Morita's works on the Torelli group. In the same perspective, let us consider the monoid $$ \mathcal{IC} := \mathcal{IC}(\Sigma) $$ of \emph{homology cylinders} over $\Sigma$, i.e.\ homology cobordisms $M$ such that $(m_-)_*^{-1}\circ (m_+)_*$ is the identity of $H$. The Torelli group $\Torelli$ embeds into the monoid $\mathcal{IC}$ by the homomorphism $\mathbf{c}$ and, for $g=0$, the monoid $\mathcal{IC}$ can be identified with the monoid of homology $3$-spheres. The monoid of homology cylinders has been introduced in great generality by Goussarov \cite{Goussarov} and Habiro \cite{Habiro} in connection with finite-type invariants of $3$-manifolds. Their approach to the monoid $\mathcal{IC}$ and, at the same time, to the group $\Torelli$ has been the subject of several recent works: see the survey \cite{HM_survey}. As a substitute for the lower central series for the monoid $\mathcal{IC}$, Goussarov and Habiro consider the filtration $$ \mathcal{IC} = Y_1 \mathcal{IC} \supset Y_2 \mathcal{IC} \supset Y_3 \mathcal{IC} \supset \cdots $$ where the submonoid $Y_i \mathcal{IC}$ consists of the homology cylinders that are $Y_i$-equivalent to $\Sigma \times [-1,1]$. Here two compact oriented $3$-manifolds $M$ and $M'$ (with parametrized boundary if any) are said to be \emph{$Y_k$-equivalent} if $M'$ can be obtained from $M$ by ``twisting'' an arbitrary embedded surface $E$ in the interior of $M$ with an element of the $k$-th term of the lower central series of the Torelli group $\Torelli(E)$ of $E$. (Here the surface $E$ has an arbitrary position in $M$, but it is assumed to be compact connected oriented with one boundary component.) The identity $\mathcal{IC} = Y_1 \mathcal{IC}$ is not trivial \cite{Habiro,Habegger}. This means in genus $g=0$ that any homology $3$-sphere is $Y_1$-equivalent to $S^3$ which, according to Hilden \cite{Hilden}, was first observed by Birman.
(More generally, the $Y_1$-equivalence is characterized for closed oriented $3$-manifolds by Matveev in \cite{Matveev}.) The ``clasper calculus'' developed by Goussarov and Habiro \cite{Goussarov_graphs,Habiro,GGP} offers the appropriate tools to study the $Y_k$-equivalence relation, since this relation is generated by ``clasper surgery'' along graphs with $k$ nodes. With these methods, Goussarov \cite{Goussarov,Goussarov_graphs} and Habiro \cite{Habiro} proved among other things that each quotient monoid $Y_i \mathcal{IC}/ Y_{i+1}$ is an abelian group and that $$ \operatorname{Gr}^Y \mathcal{IC} := \bigoplus_{i\geq 1} \frac{Y_i \mathcal{IC}}{Y_{i+1}} $$ has a natural structure of graded Lie ring. This has been computed explicitly in degree $i=1$ by Habiro in \cite{Habiro} and the authors in \cite{MM}, where the $Y_2$-equivalence on $\mathcal{IC}$ is shown to be classified by the action on $\pi/\Gamma_3 \pi$ and the Birman--Craggs homomorphism. Thus the determination of $\mathcal{IC}/Y_2$ goes parallel to Johnson's computation of $\Torelli/\Gamma_2 \Torelli$ and the two groups happen to be isomorphic via $\mathbf{c}$ (for $g\geq 3$). Besides, diagrammatic descriptions of the graded Lie algebra $\operatorname{Gr}^Y \mathcal{IC} \otimes \mathbb{Q}$ are obtained in \cite{HM_SJD} using clasper calculus and the LMO homomorphism $$ Z: \mathcal{IC} \longrightarrow \jacobi(H_\mathbb{Q}) $$ which takes values in a graded algebra $\jacobi(H_\mathbb{Q})$ of diagrams ``colored'' by the symplectic vector space $H_\mathbb{Q} := H_1(\Sigma;\mathbb{Q})$. This invariant of homology cylinders is derived from a functorial extension \cite{CHM} of the LMO invariant \cite{LMO}, so that it is universal among rational-valued finite-type invariants.\\ In this paper, we shall classify the $Y_3$-equivalence relation on $\mathcal{IC}$. 
In addition to the action $\rho_3(M) \in \operatorname{Aut}(\pi/\Gamma_4\pi)$ and to the Casson invariant $\lambda(M) \in \mathbb{Z}$ -- which have been both used by Morita for the computation of $(\Gamma_2 \Torelli/ \Gamma_3\Torelli)\otimes \mathbb{Q}$ -- we need the Alexander polynomial of homology cylinders $M$ relative to their ``bottom'' boundary $\partial_- M$. More precisely, we define the \emph{relative Alexander polynomial} of $M\in \mathcal{IC}$ as the order of the relative homology group of $(M,\partial_-M)$ with coefficients twisted by $m_{\pm,*}^{-1}:H_1(M;\mathbb{Z}) \to H$: $$ \Delta(M,\partial_-M) := \operatorname{ord} H_1(M, \partial_-M; \mathbb{Z}[H]) \ \in \mathbb{Z}[H]/\pm H. $$ The multiplicative indeterminacy in $\pm H$ can be fixed by considering a relative version of the Reidemeister--Turaev torsion introduced by Benedetti and Petronio \cite{BP,FJR}. For this refinement of the relative Alexander polynomial, it is necessary to fix an Euler structure on $(M,\partial_-M)$, i.e.\ a homotopy class of vector fields on $M$ with prescribed behaviour on the boundary. We shall see that homology cylinders $M$ have a preferred Euler structure $\xi_0$, so that the class $\Delta(M,\partial_-M)$ has a preferred representative $$ \tau(M,\partial_-M;\xi_0) \in \mathbb{Z}[H]. $$ This invariant of homology cylinders features the same finiteness properties as the Reidemeister--Turaev torsion of closed oriented $3$-manifolds \cite{Massuyeau_torsion}. More precisely, if we denote by $I$ the augmentation ideal of $\mathbb{Z}[H]$, then the reduction of $\tau(M,\partial_-M;\xi_0) $ modulo $I^{k+1}$ is for every $k \geq 0$ a finite-type invariant of degree $k$ in the sense of Goussarov and Habiro. In particular, a ``quadratic part'' $$ \alpha(M) \in I^2/I^3 \simeq S^2H $$ can be extracted from $\tau(M,\partial_-M;\xi_0)$ and is a finite-type invariant of degree $2$. Then our characterization of the $Y_3$-equivalence for homology cylinders takes the following form. 
\smallskip\smallskip \noindent {\bf Theorem~A.} {\it Let $M$ and $M'$ be two homology cylinders over $\Sigma$. The following assertions are equivalent: \begin{enumerate} \item[(a)] $M$ and $M'$ are $Y_3$-equivalent; \item[(b)] $M$ and $M'$ are not distinguished by any Goussarov--Habiro finite-type invariants of degree at most $2$; \item[(c)] $M$ and $M'$ share the same invariants $\rho_3, \alpha$ and $\lambda$; \item[(d)] The LMO homomorphism $Z$ agrees on $M$ and $M'$ up to degree $2$. \end{enumerate} } \smallskip \noindent In genus $g=0$, the theorem asserts that two homology $3$-spheres are $Y_3$-equivalent if and only if they have the same Casson invariant, which is due to Habiro \cite{Habiro}. The equivalence between conditions (a), (b) and (d) is based on the universality of the LMO homomorphism among $\mathbb{Q}$-valued finite-type invariants, its good behaviour under clasper surgery and the torsion-freeness of a certain space of diagrams. Next the equivalence of condition (c) with the other three follows by determining precisely how the invariants $\rho_3, \alpha$ and $\lambda$ are diagrammatically encoded in the LMO homomorphism. We emphasize that the Birman--Craggs homomorphism, which is needed to classify the $Y_2$-equivalence, does not appear explicitly in the above statement for the $Y_3$-equivalence: indeed it is determined by the triplet $(\rho_3,\alpha,\lambda)$ as we shall see in detail. Furthermore, we shall use the diagrammatic techniques of clasper calculus to compute the group $\mathcal{IC}/Y_3$ and show some of its properties. We shall also be interested in the \emph{$J_k$-equivalence} relation among homology cylinders. This relation is defined for every $k\geq 1$ in a way similar to the $Y_k$-equivalence but using the Johnson filtration of the Torelli group instead of its lower central series.
But the converse is not true for $k\geq 2$ as illustrated by the following statements. \smallskip\smallskip \noindent {\bf Theorem~B.} {\it Two homology cylinders $M$ and $M'$ over $\Sigma$ are $J_2$-equivalent if and only if we have $\rho_2(M)=\rho_2(M')$. } \smallskip\smallskip \noindent In genus $g=0$, the theorem asserts that any homology $3$-sphere is $J_2$-equivalent to $S^3$. This is already noticed by Morita in \cite{Morita_Casson1} and follows from Casson's observation that any homology $3$-sphere is obtained from $S^3$ by a finite sequence of surgeries along $(\pm1)$-framed knots \cite{GM}. (A generalization of this result to closed oriented $3$-manifolds is proved by Cochran, Gerges and Orr in \cite{CGO}.) Although it does not seem to have been observed before, Theorem~B easily follows from the computation of $\mathcal{IC}/Y_2$ done in \cite{MM}. \smallskip\smallskip \noindent {\bf Theorem~C.} {\it Two homology cylinders $M$ and $M'$ over $\Sigma$ are $J_3$-equivalent if and only if we have $\rho_3(M)=\rho_3(M')$ and $\alpha(M)=\alpha(M')$. } \smallskip\smallskip \noindent In genus $g=0$, we obtain that any homology $3$-sphere is $J_3$-equivalent to $S^3$, which is due to Pitsch \cite{Pitsch}. The proof of Theorem~C makes use of Theorem~A. Let us now come back to the Casson invariant of homology cylinders which appears in Theorem~A. In contrast to $\rho_3$ and $\alpha$, the invariant $\lambda$ is not canonical. Indeed it depends on the choice of a Heegaard embedding $j: \Sigma \hookrightarrow S^3$ and $\lambda(M) := \lambda_j(M)$ is defined as the Casson invariant of the homology $3$-sphere obtained by inserting $M$ in place of a regular neighborhood of $j(\Sigma)\subset S^3$. In the case of the Torelli group, Morita considered the restriction of $\lambda$ to the \emph{Johnson subgroup} $$ \Johnson := \Johnson(\Sigma) $$ which coincides with the second term $\mcg[2]$ of the Johnson filtration. 
He proved that $\lambda|_{\Johnson}$ is a group homomorphism which can be written as the sum of two homomorphisms $\Johnson \to \mathbb{Q}$: one of them is not canonical and is determined by $\rho_3$, while the other one is (up to a $1/24$ factor) a canonical homomorphism $d:\Johnson \to 8\mathbb{Z}$ and is called the \emph{core of the Casson invariant}. In the case of homology cylinders, the second term $\mathcal{C}[2]$ of the Johnson filtration of $\mathcal{C}$ is denoted by $$ \mathcal{KC} := \mathcal{KC}(\Sigma). $$ \smallskip\smallskip \noindent {\bf Theorem~D.} {\it Assume that $g\geq 3$. Then there is a unique extension of the core of the Casson invariant to the monoid $\mathcal{KC}$\\[-0.5cm] $$ \xymatrix{ \Johnson \ar[r]^-d \ar[d]_-\mathbf{c} & 8\mathbb{Z} \\ \mathcal{KC} \ar@{-->}[ru]_-d & } $$ that is invariant under $Y_3$-equivalence and under the action of the mapping class group by conjugation. Moreover, the monoid homomorphism $d:\mathcal{KC} \to 8\mathbb{Z} $ can be written explicitly in terms of $\rho_3$, $\alpha$ and $\lambda$. } \smallskip\smallskip \noindent When $M\in \mathcal{KC}$ belongs to the Johnson subgroup $\Johnson$, we have $\alpha(M)=0$ and our formula for $d(M)$ coincides with Morita's formula \cite{Morita_Casson1,Morita_Casson2}. We shall also see that the assumption $g\geq 3$ in Theorem~D can be removed by stabilization of the surface $\Sigma$. In another paper \cite{Morita_Casson3}, Morita gave a topological interpretation of $d(h)$ for $h\in \Johnson$ as the signature defect of the mapping torus of $h$ equipped with a certain $2$-framing. It would be very interesting to generalize this intrinsic description of $d: \Johnson \to 8\mathbb{Z}$ to the monoid $\mathcal{KC}$. See \cite{DP} in this connection.\\[-0.5cm] The paper is organized as follows: {\small \tableofcontents} \newpage In the rest of the paper, we shall use the following conventions. We denote by $I$ the interval $[-1,1]$. 
An abelian group $G$, or its action on a set, is written additively, except when it is seen as a subgroup of the group of units of $\mathbb{Z}[G]$. Besides, (co)homology groups are taken with $\mathbb{Z}$ coefficients if no coefficients are specified. \begin{acknowledgement} The authors would like to thank Kazuo Habiro for stimulating discussions about the diagrammatic description of the group $\mathcal{IC}/Y_3$, and for his comments on the first version of this manuscript. They are also grateful to Takuya Sakasai and the referee for their remarks. The first author was partially supported by the French ANR research project ANR-08-JCJC-0114-01. The second author is supported by the French ANR research project ANR-11-JS01-00201. \end{acknowledgement} \section{Preliminaries on the equivalence relations} \label{sec:preliminaries} In this section, we give the precise definitions of the $Y_k$-equivalence and the $J_k$-equivalence relations on $3$-manifolds, and we recall their relationship with the Goussarov--Habiro theory of finite-type invariants. \subsection{Definition of the equivalence relations} \label{subsec:definition_relations} Let $R$ be a closed oriented surface, which may be empty or disconnected. We consider compact connected oriented $3$-manifolds $M$ whose boundary is \emph{parametrized} by $R$, i.e.\ $M$ comes with an orientation-preserving homeomorphism $R \to \partial M$ which is denoted by the lower-case letter $m$. Two such manifolds with parametrized boundary $M$ and $M'$ are considered \emph{homeomorphic} if there is an orientation-preserving homeomorphism $f:M\to M'$ such that $f \circ m =m'$. We denote by $\mathcal{V}(R)$ the set of homeomorphism classes of compact connected oriented $3$-manifolds with boundary parametrized by $R$. One way to modify an $M \in \mathcal{V}(R)$ is to choose a compact oriented connected surface $S \subset \operatorname{int}(M)$ with one boundary component, and an element $s\in \Torelli(S)$ of the Torelli group of $S$. 
We then define $$ M_s := \big(M \setminus \operatorname{int}(S\times [-1,1])\big) \cup \mathbf{c}(s) $$ where $S\times [-1,1]$ denotes a regular neighborhood of $S$ in $M$ and $\mathbf{c}(s)$ is the mapping cylinder of $s$ defined by (\ref{eq:mapping_cylinder}). The boundary parametrization of $M_s$ is defined from $m$ in the obvious way. The move $M \leadsto M_s$ in $\mathcal{V}(R)$ is called a \emph{Torelli surgery}. Let $k\geq 1$ be an integer, and consider two compact connected oriented $3$-manifolds $M$ and $M'$ with boundary parametrized by $R$. We say that $M$ is \emph{$Y_k$-equivalent} to $M'$ if there is a Torelli surgery $M \leadsto M_s$ such that $M_s=M' \in \mathcal{V}(R)$ and the gluing homeomorphism $s$ belongs to the $k$-th term $\Gamma_k \Torelli(S)$ of the lower central series of $\Torelli(S)$. (Recall that the \emph{lower central series} of a group $G$ is defined inductively by $\Gamma_1 G:=G$ and $\Gamma_{k+1} G :=[G,\Gamma_{k}G]$ for all $k\geq 1$.) It is easily checked that the $Y_k$-equivalence is an equivalence relation on the set $\mathcal{V}(R)$. The \emph{$J_k$-equivalence} relation on $\mathcal{V}(R)$ is defined in a similar way using the $k$-th term of the Johnson filtration instead of the lower central series.
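The inductive definition just recalled, $\Gamma_1 G = G$ and $\Gamma_{k+1}G = [G,\Gamma_k G]$, can be experimented with on small finite groups. Below is a minimal sketch using SymPy's permutation groups; the dihedral group of the square is only a toy stand-in (it is nilpotent, so its lower central series reaches the trivial group) and has nothing to do with the Torelli group itself:

```python
from sympy.combinatorics.named_groups import DihedralGroup

# Dihedral group of order 8 (symmetries of the square); it is
# nilpotent of class 2, so Gamma_1 > Gamma_2 > Gamma_3 = {1}.
G = DihedralGroup(4)

# SymPy computes the terms Gamma_{k+1} = [G, Gamma_k] until the
# series stabilizes.
series = G.lower_central_series()
orders = [H.order() for H in series]
print(orders)  # the orders descend, ending at 1 since G is nilpotent
```

By contrast, the Torelli group is residually nilpotent (as recalled in the introduction): its lower central series has trivial intersection without terminating after finitely many steps.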
Thus we have defined an infinity of equivalence relations, which are organized as follows: $$ \begin{array}{cccccccccccc} Y_1 & \Longleftarrow & Y_2 & \Longleftarrow & Y_3 & \Longleftarrow & \cdots & Y_k & \Longleftarrow & Y_{k+1} & \Longleftarrow & \cdots \\ \parallel & & \Downarrow & & \Downarrow && & \Downarrow & & \Downarrow && \\ J_1 & \Longleftarrow & J_2 & \Longleftarrow & J_3 & \Longleftarrow & \cdots & J_k & \Longleftarrow & J_{k+1} & \Longleftarrow & \cdots \end{array} $$ The weakest of these relations, namely the $Y_1$-equivalence, is already non-trivial since a Torelli surgery $M \leadsto M_s$ comes with a canonical isomorphism \begin{equation} \label{eq:iso_homology} \xymatrix{ H_1(M) \ar@{-->}[rr]^-{ \Phi_s}_-\simeq & & H_1(M_s),\\ &H_1 \big(M \setminus \operatorname{int}(S\times [-1,1])\big) \ar@{->>}[lu]^-{\operatorname{incl}_*} \ar@{->>}[ru]_-{\operatorname{incl}_*} & } \end{equation} whose existence is easily deduced from the Mayer--Vietoris theorem. In this paper, we shall restrict our study to the class of homology cylinders over $\Sigma$, as defined in the introduction, i.e.\ to the subset $$ \mathcal{IC}(\Sigma) \subset \mathcal{V}\big(\partial(\Sigma\times [-1,1])\big). $$ The $Y_k$-equivalence and the $J_k$-equivalence relations are preserved by \emph{stabilization} of the surface $\Sigma$. More precisely, assume that $\Sigma^s$ is the boundary connected sum of $\Sigma$ with another compact connected oriented surface $\Sigma'$ having one boundary component as shown in Figure \ref{fig:stabilization}. Then, there is a canonical injection \begin{equation} \label{eq:stabilization} \mathcal{IC}(\Sigma) \longrightarrow \mathcal{IC}(\Sigma^s), \ M \longmapsto M^s \end{equation} obtained by gluing to any homology cylinder $M$ over $\Sigma$ the trivial cylinder $\Sigma'\times I$ along the square $(\Sigma\cap \Sigma')\times I$. 
Then the implications $M \stackrel{Y_k}{\sim} N \Rightarrow M^s \stackrel{Y_k}{\sim} N^s$ and $M \stackrel{J_k}{\sim} N \Rightarrow M^s \stackrel{J_k}{\sim} N^s$ obviously hold true for any $k\geq 1$. \begin{figure}[h] \begin{center} {\labellist \small \hair 0pt \pinlabel {$\Sigma^s$} [t] at 572 -4 \pinlabel {$\Sigma$} [lb] at 40 50 \pinlabel {$\Sigma'$} [lb] at 610 50 \endlabellist} \includegraphics[scale=0.3]{stabilization} \end{center} \caption{A stabilization $\Sigma^s$ of the surface $\Sigma$.} \label{fig:stabilization} \end{figure} The $Y_k$-equivalence and the $J_k$-equivalence relations can also be defined in terms of Heegaard splittings. Let us formulate this in the case of homology cylinders. A homology cylinder $M$ over $\Sigma$ has a ``bottom surface'' $\partial_- M = m_-(\Sigma)$ and a ``top surface'' $\partial_+ M = m_+(\Sigma)$. Some collar neighborhoods of them are suggestively denoted by $\partial_- M \times [-1,0]$ and $\partial_+ M \times [0,1]$. A \emph{Heegaard splitting} of $M$ of \emph{genus} $r$ is a decomposition $$ M = M_- \cup M_+, $$ where $M_-$ is obtained from $\partial_- M \times [-1,0]$ by adding $r$ $1$-handles along $\partial_- M \times \{0\}$, $M_+$ is obtained from $\partial_+ M \times [0,1]$ by adding $r$ $1$-handles along $\partial_+ M \times \{0\}$, and $M_-\cap M_+ = \partial M_- \cap \partial M_+$ is called the \emph{Heegaard surface}. Note that the Heegaard surface is a $2$-sided compact connected surface of genus $g+r$ with one boundary component which is properly embedded in $M$; we give it the orientation inherited from $M_-$. Any homology cylinder $M$ has a Heegaard splitting, since the cobordism $M$ can be obtained from the trivial cobordism $\Sigma \times[-1,1]$ by attaching simultaneously some $1$-handles along the ``top surface'' $\Sigma \times \{1\}$ and, next, some $2$-handles along the new ``top surface''. This fact follows from elementary Morse theory and is true for any $3$-dimensional cobordism \cite{Milnor}. 
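As a quick consistency check on the genus of the Heegaard surface, note that each $1$-handle attached along the surface removes two disks and glues in an annulus, lowering the Euler characteristic by $2$; a sketch of the count:

```latex
\chi(S) \;=\; \chi(\Sigma) - 2r \;=\; (1-2g) - 2r \;=\; 1 - 2(g+r),
```

and since $S$ is compact connected orientable with one boundary component, this gives genus $g+r$ as stated above.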
\begin{lemma} Two homology cylinders $M,M'$ over $\Sigma$ are $Y_k$-equivalent (respectively $J_k$-equivalent) if and only if there is a Heegaard splitting $M=M_- \cup M_+$ with Heegaard surface $S$ and an $s \in \Gamma_{k}\Torelli(S)$ (respectively an $s \in \mcg(S)[k]$) such that $M' = M_- \cup_s M_+$. \end{lemma} \begin{proof} Assume that $M'=M_e \in \mathcal{IC}$ where $M_e$ is the result of a Torelli surgery along a compact connected oriented surface $E \subset \operatorname{int}(M)$ with one boundary component. We consider a regular neighborhood $E \times [-1,1]$ of $E$ in $M$ that does not meet the collar neighborhood $\partial_-M \times [-1,0]$, and where $E\times\{0\}$ is the surface $E$ itself. Next we connect $E \times [-1,0]$ to $\partial_-M \times [-1,0]$ by a solid tube $T$: more precisely, $T$ meets $E \times [-1,0]$ along a disk of $E \times \{-1\}$ and it meets $\partial_-M \times [-1,0]$ along a disk of $\partial_-M \times \{0\}$. Thus the union $$ L_- := \left(\partial_-M \times [-1,0]\right) \cup T \cup \left(E \times [-1,0]\right) $$ is obtained from $\partial_-M \times [-1,0]$ by attaching some $1$-handles (twice the genus of $E$). Let $L_+$ be the closure in $M$ of $M\setminus L_-$, which we regard as a cobordism with corners from $R:=L_-\cap L_+$ to $\partial_+ M$. This cobordism has a handle decomposition \begin{equation}\label{eq:L+} L_+=\big(\left(R\times [0,1]\right) \cup \hbox{$1$-handles}\big)\cup \hbox{$2$-handles}. \end{equation} Note that the surface $R \subset M$ contains $E$ and, after an isotopy, we can assume that the $1$-handles in (\ref{eq:L+}) are attached to $R\times \{1\}$ outside the surface $E\times \{1\}$. 
Now, we can ``stretch'' each of these $1$-handles towards $R \times \{0\}$, where by ``stretching'' a $1$-handle $[-1,1]\times D^2$ we mean replacing it with $[-1-\varepsilon,1+\varepsilon]\times D^2$ for some positive $\varepsilon$, so that the ``stretched'' $1$-handles are now attached to $R\times \{0\}$ outside the surface $E\times \{0\}$. Furthermore, we can ``contract'' the resulting $1$-handles so that they are all disjoint from $\partial_+M$. Here, by ``contracting'' a $1$-handle $D^1\times D^2$ we mean replacing it with $D^1\times (\varepsilon D^2)$ for some $\varepsilon\in\, ]0,1[$. We now define $M_-$ to be the union of $L_-$ with these ``stretched'' and ``contracted'' $1$-handles, and we define $M_+$ as the exterior in $L_+$ of those $1$-handles. Thus we have found a Heegaard splitting $M_-\cup M_+$ of $M$, whose Heegaard surface $S:= M_-\cap M_+$ contains $E$ as a subsurface. The conclusion easily follows. \end{proof} \subsection{Description by clasper surgery} \label{subsec:clasper} Generators for the $Y_k$-equivalence relations are known, which makes these easier to study than the $J_k$-equivalence relations. Indeed the $Y_k$-equivalence is generated by surgery along graph claspers of degree $k$. This viewpoint, which we shall briefly recall, has been developed by Goussarov \cite{Goussarov,Goussarov_graphs} and Habiro \cite{Habiro}: the $Y_k$-equivalence relation is the same as the ``$(k-1)$-equivalence'' in \cite{Goussarov} or the ``$A_k$-equivalence'' in \cite{Habiro}. Let $M$ be a compact oriented $3$-manifold. In the terminology of \cite{Habiro}, a {\em graph clasper} in $M$ is a compact, connected surface $G$ embedded in $\operatorname{int}(M)$, which comes decomposed into {\em leaves}, {\em nodes} and {\em edges}. Leaves are annuli and nodes are discs. Edges are $1$-handles connecting leaves and nodes, so that each edge has two ``ends'' (the attaching locus of the $1$-handle).
Each leaf should have exactly one end of an edge attached to it, while each node should have exactly three ends of edges attached to it. See Figure \ref{fig:graph_clasper} for an example of a graph clasper. \begin{figure}[h] \begin{center} {\labellist \small \hair 0pt \pinlabel {an edge} [l] at 94 21 \pinlabel {a node} [l] at 190 333 \pinlabel {a leaf} [l] at 608 165 \pinlabel {$=$} at 809 90 \endlabellist} \includegraphics[scale=0.3]{graph_clasper} \end{center} \caption{An example of a graph clasper with $3$ nodes, $3$ leaves and $6$ edges. (And how it is drawn with the blackboard framing convention.)} \label{fig:graph_clasper} \end{figure} \noindent Given a graph clasper $G\subset M$, one can forget the leaves of $G$ and collapse the rest to a one-dimensional graph. This finite graph, which has vertices of valency $1$ or $3$, is called the \emph{shape} of $G$. One can then focus on graph claspers of a specific shape. For example, a \emph{$Y$-graph} is a graph clasper with shape $\textsf{Y}$, and a graph clasper is said to be \emph{looped} if its shape contains a loop (as is the case in Figure \ref{fig:graph_clasper}). The {\em degree} of a graph clasper $G$ is the number of nodes contained in $G$. Graph claspers of degree $0$ are called \emph{basic claspers} in \cite{Habiro} and consist of only one edge and two leaves: $$ \includegraphics[scale=0.3]{basic_clasper} $$ Surgery along a graph clasper $G\subset M$ is defined as follows. We first replace each node with three leaves linking like the Borromean rings in the following way: \begin{center} {\labellist \small \hair 0pt \pinlabel {$\longrightarrow$} at 409 143 \endlabellist} \includegraphics[scale=0.3]{to_Borromean} \end{center} Thus, we obtain a disjoint union of basic claspers.
Next, we replace each basic clasper with a $2$-component framed link as follows: \begin{center} {\labellist \small \hair 0pt \pinlabel {$\longrightarrow$} at 460 50 \endlabellist} \includegraphics[scale=0.3]{to_framed_link} \end{center} Then, \emph{surgery} along the graph clasper $G$ is defined to be the surgery along the framed link thus obtained in $M$. The resulting $3$-manifold is denoted by $M_G$. \begin{proposition}[Habiro \cite{Habiro}] \label{prop:Y_clasper} For any integer $k\geq 1$, the $Y_k$-equivalence relation is generated by surgeries along graph claspers of degree $k$. \end{proposition} \noindent See the appendix of \cite{Massuyeau_DSP} for a proof. A ``calculus of claspers'' is developed in \cite{Goussarov_graphs,Habiro,GGP}, in the sense that some specific ``moves'' between graph claspers are shown to produce homeomorphic $3$-manifolds by surgery. This calculus can be regarded as a topological analogue of the commutator calculus in groups \cite{Habiro}. Thanks to Proposition \ref{prop:Y_clasper}, this calculus has proved to be a very efficient tool for the study of the $Y_k$-equivalence relations and, here again, we shall use it in a crucial way. For the reader's convenience, we have collected all the technical results on claspers that we shall need in Appendix \ref{sec:calc_clasper}. In particular, we need a number of relations satisfied by graph claspers $G$ with \emph{special} leaves, i.e.\ leaves which bound disks disjoint from $G$ and which are $(-1)$-framed. \subsection{Relationship with finite-type invariants} The $Y_k$-equivalence relations are closely connected to finite-type invariants. Here, we are referring to the Goussarov--Habiro theory of finite-type invariants for compact oriented $3$-manifolds \cite{Goussarov,GGP,Habiro}, which essentially generalizes Ohtsuki's theory \cite{Ohtsuki} for homology $3$-spheres but differs from the Cochran--Melvin theory \cite{CM}.
We fix as in \S \ref{subsec:definition_relations} a closed oriented surface $R$. We also consider a $Y_1$-equivalence class $\mathcal{Y} \subset \mathcal{V}(R)$. An invariant $f: \mathcal{Y} \to A$ of manifolds in this class with values in an abelian group $A$ is said to be a \emph{finite-type invariant} of \emph{degree} at most $d$ if we have $$ \sum_{P \subset \{0,\dots,d\}}(-1)^{|P|} \cdot f(M_{P}) = 0 \ \in A $$\\[-0.2cm] for any $M \in \mathcal{Y}$, for any pairwise-disjoint surfaces $S_0 \sqcup \cdots \sqcup S_{d} \subset \operatorname{int}(M)$ and for any $s_0 \in \Torelli(S_0),\dots, s_d \in \Torelli(S_d)$, where $M_{P}\in \mathcal{V}(R)$ is obtained from $M$ by simultaneous Torelli surgeries along the surfaces $S_p$ for which $p\in P$. In other words, $f$ behaves like a ``polynomial'' map of degree at most $d$ with respect to Torelli surgeries. The original definition by Goussarov \cite{Goussarov} and Habiro \cite{Habiro} involves clasper surgery instead of Torelli surgery, but it follows from Proposition \ref{prop:Y_clasper} that the two definitions are equivalent. \begin{lemma}[Goussarov \cite{Goussarov}, Habiro \cite{Habiro}] \label{lem:Y_FTI} Let $M,M' \in \mathcal{Y}$. If $M$ and $M'$ are $Y_{d+1}$-equivalent, then we have $f(M)=f(M')$ for any finite-type invariant $f: \mathcal{Y} \to A$ of degree at most $d$. \end{lemma} \begin{proof} Let $S \subset \operatorname{int}(M)$ be the surface along which the Torelli surgery $M \leadsto M_s$ yields $M'$ for some $s \in \Gamma_{d+1} \Torelli(S)$. Let $\mathbb{Z}\! \cdot\! \mathcal{Y}$ be the abelian group freely generated by the set $\mathcal{Y}$, to which $f$ extends by linearity. We denote by $\mathbb{Z}[\Torelli(S)]$ the group ring of $\Torelli(S)$ with augmentation ideal $I$. The map $\Torelli(S) \to \mathcal{Y}$ defined by $r \mapsto M_r$ extends by linearity to a map $\zeta: \mathbb{Z}[\Torelli(S)] \to \mathbb{Z}\! \cdot\! \mathcal{Y}$. 
It follows from the definition of a finite-type invariant that $$ f\circ \zeta\left(I^{d+1}\right) =0. $$ By assumption, $s-1 \in \mathbb{Z}[\Torelli(S)]$ belongs to $I^{d+1}$, so that we have $f(M_s-M)=0\in A$. \end{proof} According to \cite{Habiro,Habegger}, any homology cylinder over $\Sigma$ is $Y_1$-equivalent to the trivial cylinder, so that the above applies to the class $\mathcal{Y}:=\mathcal{IC}(\Sigma)$. Goussarov and Habiro have conjectured the converse of Lemma \ref{lem:Y_FTI} to be true for homology cylinders, and they proved this converse in genus $g=0$ \cite{Goussarov_graphs,Habiro}. The conjecture is also known to be true in degree $d=2$ (as will follow from Theorem A too), and in some weaker forms \cite{Massuyeau_DSP}. \section{Some classical invariants of homology cylinders}\label{sec:invariants} In this section, we define the topological invariants of homology cylinders that are needed to characterize the $Y_k$-equivalence and the $J_k$-equivalence relations for $k=2$ and $k=3$. We also describe some of the relationships among these invariants, and their variation under surgery along a graph clasper. \subsection{Johnson homomorphisms} \label{subsec:Johnson} The Johnson homomorphisms have been introduced and studied by Johnson \cite{Johnson_first_homomorphism,Johnson_survey} and Morita \cite{Morita_abelian} on the Torelli group, and by Garoufalidis--Levine \cite{GL_tree} and Habegger \cite{Habegger} on the monoid of homology cylinders. First of all, we recall how the Johnson filtration is defined. Using the same notation as in the introduction, we set $\pi:=\pi_1(\Sigma,\star)$ for the fundamental group of $\Sigma$ (with base point $\star$ on the boundary), and we denote by $ \pi = \Gamma_1 \pi \supset \Gamma_2 \pi \supset \Gamma_3 \pi \supset \cdots $ the lower central series of $\pi$. Let $(M,m)$ be a homology cobordism of $\Sigma$.
According to Stallings \cite{Stallings}, the map $m_{\pm}$ induces an isomorphism $m_{\pm,*}$ at the level of the $k$-th nilpotent quotient $\pi_1(\cdot)/\Gamma_{k+1} \pi_1(\cdot)$ of the fundamental group, so that the composition ${m_{-,*}}^{-1}\circ m_{+,*}$ defines an element of $\operatorname{Aut}\left(\pi/\Gamma_{k+1}\pi \right)$. (Here, the base point of $M$ is $m(\star,0)$ and is connected to $m_\pm(\star)=m(\star,\pm1)$ through the segment $m(\{\star\} \times I)\subset \partial M$.) So for each $k\ge 0$ we get a monoid homomorphism $$ \rho_k: \mathcal{C} \longrightarrow \operatorname{Aut}\left(\pi/\Gamma_{k+1}\pi \right), \ M \longmapsto {m_{-,*}}^{-1}\circ m_{+,*} $$ which is the group homomorphism (\ref{eq:rho_k_mcg}) on the mapping class group. Thus, $\rho_k$ should be thought of as the ``$k$-th nilpotent approximation'' of the Dehn--Nielsen representation. The descending sequence of submonoids $$ \mathcal{C} =\mathcal{C}[0] \supset \mathcal{C}[1] \supset \mathcal{C}[2]\supset \cdots, $$ where $\mathcal{C}[k]$ is the kernel of $\rho_k$, is called the \emph{Johnson filtration} of $\mathcal{C}$. We are particularly interested in the monoids $\mathcal{C}[1]=\mathcal{IC}$ and $\mathcal{C}[2]=\mathcal{KC}$. The \emph{$k$-th Johnson homomorphism} $\tau_k$ is then defined in the following way. An element $f\in \operatorname{Hom}\left(H,\Gamma_{k+1}\pi/\Gamma_{k+2}\pi \right)$ defines an automorphism of $\pi/\Gamma_{k+2} \pi$ by sending any $\{x\} \in \pi/\Gamma_{k+2} \pi$ to $f(\{x\})\cdot \{x^{-1}\}$. 
Thus we obtain an exact sequence of groups $$ 1 \rightarrow \operatorname{Hom}\left(H,\Gamma_{k+1}\pi/\Gamma_{k+2}\pi\right) \rightarrow \operatorname{Aut}\left(\pi/\Gamma_{k+2}\pi\right) \rightarrow \operatorname{Aut}\left(\pi/\Gamma_{k+1}\pi\right) $$ and the restriction of $\rho_{k+1}$ to the submonoid $\mathcal{C}[k]$ yields a monoid homomorphism $$ \tau_k: \mathcal{C}[k] \longrightarrow \operatorname{Hom}\left(H,\Gamma_{k+1}\pi/\Gamma_{k+2}\pi\right) \simeq H^* \otimes \Gamma_{k+1}\pi/\Gamma_{k+2}\pi \simeq H \otimes \Gamma_{k+1}\pi/\Gamma_{k+2}\pi. $$ Here, the group $H$ is identified with $H^*=\operatorname{Hom}(H,\mathbb{Z})$ by the map $h \mapsto \omega(h,\cdot)$ using the intersection form of $\Sigma$ $$ \omega: H \times H \longrightarrow \mathbb{Z}. $$ One usually restricts the target of the $k$-th Johnson homomorphism in the following way. We denote by $\mathfrak{L}(H)$ the graded Lie ring freely generated by $H$ in degree $1$. There is a canonical isomorphism between $\mathfrak{L}(H)$ and the graded Lie ring $\operatorname{Gr}^\Gamma \pi= \bigoplus_{k\geq1} \Gamma_k\pi/\Gamma_{k+1}\pi$ associated to the lower central series of $\pi$ \cite{Bourbaki}. Therefore, $\tau_k$ can be seen with values in $H\otimes \mathfrak{L}_{k+1}(H)$. It turns out that $\tau_k$ takes values in the kernel of the Lie bracket map $$ \operatorname{D}_{k}(H) := \operatorname{Ker}\left([\cdot,\cdot]: H \otimes \mathfrak{L}_{k+1}(H) \longrightarrow \mathfrak{L}_{k+2}(H)\right), $$ see \cite{Morita_abelian,GL_tree}. 
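For instance, the Jacobi identity in $\mathfrak{L}(H)$ immediately produces elements of $\operatorname{D}_{1}(H)$: for any $a,b,c \in H$,
$$
[\cdot,\cdot]\big( a\otimes [b,c]+ c\otimes [a,b] + b \otimes [c,a] \big) = [a,[b,c]]+ [c,[a,b]] + [b,[c,a]] = 0 \ \in \mathfrak{L}_{3}(H),
$$
so that the element $a\otimes [b,c]+ c\otimes [a,b] + b \otimes [c,a]$ belongs to $\operatorname{D}_{1}(H)$.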
Here are a few properties of the $k$-th Johnson homomorphism $\tau_k$: \begin{itemize} \item $\tau_k: \mathcal{C}[k]\rightarrow \operatorname{D}_{k}(H)$ is surjective \cite{GL_tree,Habegger}; \item $\tau_k$ is $Y_{k+1}$-invariant (since the map $\rho_{k+1}$ is invariant under $Y_{k+1}$-equivalence as follows, for example, from Lemma \ref{lem:rho_k} below); \item $\tau_k$ is invariant under stabilization in the sense that, if $\Sigma$ is stabilized to a surface $\Sigma^s$ as shown in Figure \ref{fig:stabilization} so that $H$ embeds into $H^s := H_1(\Sigma^s)$, then the following diagram is commutative: $$ \xymatrix{ \mathcal{C}(\Sigma)[k]\ar[d]_-{\tau_k} \ar@{->}[r] & \mathcal{C}(\Sigma^s)[k] \ar[d]^-{\tau_k} \\ \operatorname{D}_{k}(H) \ar@{->}[r] & \operatorname{D}_{k}(H^s). } $$ \end{itemize} We now specialize to the cases $k=1$ and $k=2$ which will be enough for our purposes. Then the free abelian group $\operatorname{D}_{k}(H)$ has the following description. For $k=1$, there is an isomorphism \begin{equation}\label{eq:D_3} \Lambda^3 H \stackrel{\simeq}{\longrightarrow} \operatorname{D}_{1}(H) \end{equation} given by $a \wedge b \wedge c \mapsto a\otimes [b,c]+ c\otimes [a,b] + b \otimes [c,a]$, see \cite{Johnson_first_homomorphism}. For $k=2$, the description of $\operatorname{D}_{k}(H)$ is more delicate and involves $\left(\Lambda^2 H \otimes \Lambda^2H\right)^{\mathfrak{S}_2}$, i.e.\ the symmetric part of the second tensor power of $\Lambda^2 H$. This free abelian group contains an isomorphic image of the degree $2$ part $S^2 \Lambda^2 H$ of the symmetric algebra over $\Lambda^2 H$, which we regard as a quotient of the tensor algebra over $\Lambda^2H$. Indeed, the map $S^2\Lambda^2 H \to \left(\Lambda^2 H \otimes \Lambda^2H\right)^{\mathfrak{S}_2}$ sending $x\cdot y$ to $(x \leftrightarrow y) := x \otimes y + y \otimes x$ is injective, and we denote its image by $\Lambda^2 H \leftrightarrow \Lambda^2H$. 
Note that we have an isomorphism $$ \frac{\Lambda^2 H}{2\cdot\Lambda^2 H } \stackrel{\simeq}{\longrightarrow} \frac{\left(\Lambda^2 H \otimes \Lambda^2H\right)^{\mathfrak{S}_2}}{\Lambda^2 H \leftrightarrow \Lambda^2H}, \quad \{a \wedge b\}\longmapsto \{ (a\wedge b) \otimes (a\wedge b) \}, $$ hence a short exact sequence of abelian groups: \begin{equation} \label{eq:sym_to_sym} \xymatrix{ 0 \ar[r] & S^2 \Lambda^2H \ar[r]^-{\leftrightarrow} & \left(\Lambda^2 H \otimes \Lambda^2H\right)^{\mathfrak{S}_2} \ar[r] & \frac{\Lambda^2 H}{2\cdot\Lambda^2 H } \ar[r] & 0 } \end{equation} where the map on the right side sends tensors of the form $(a\wedge b) \otimes (a\wedge b)$ to $\{a \wedge b\}$. Finally, note that $\Lambda^4H$ can be embedded in $\left(\Lambda^2 H \leftrightarrow \Lambda^2H\right) \subset \left(\Lambda^2 H \otimes \Lambda^2H\right)^{\mathfrak{S}_2}$ by sending any $4$-vector $a\wedge b\wedge c\wedge d$ to the sum $$ (a\wedge b)\leftrightarrow (c\wedge d) - (a\wedge c)\leftrightarrow (b\wedge d) + (a\wedge d)\leftrightarrow (b\wedge c). $$ \begin{proposition}[Morita, Levine]\label{prop:Morita-Levine} There is a unique isomorphism \begin{equation}\label{eq:D_2} \frac{\left(\Lambda^2 H \otimes \Lambda^2H\right)^{\mathfrak{S}_2}}{\Lambda^4H} \stackrel{\simeq}{\longrightarrow} \operatorname{D}_{2}(H) \end{equation} that is defined by \begin{equation}\label{eq:D_2_formula} \big((a\wedge b)\leftrightarrow (c\wedge d)\big) \mapsto a \otimes [b,[c,d]] + b \otimes [[c,d],a]+ c \otimes [d,[a,b]] + d \otimes [[a,b],c]. \end{equation} \end{proposition} \begin{proof}[Sketch of the proof] The map (\ref{eq:D_2}) is defined by Morita in \cite{Morita_Casson1,Morita_Casson2}, where the abelian group ${\left(\Lambda^2 H \otimes \Lambda^2H\right)^{\mathfrak{S}_2}}/{\Lambda^4H}$ is denoted by $\overline{T}$. There, Morita states that the map is injective, and the bijectivity of (\ref{eq:D_2}) is essentially proved by Levine in \cite{Levine_quasi-Lie}. 
To show that the map (\ref{eq:D_2}) is uniquely defined, we consider the map $$ \eta: (\Lambda^2 H \leftrightarrow \Lambda^2H) \longrightarrow \operatorname{D}_{2}(H) $$ defined by formula (\ref{eq:D_2_formula}). Taking rational coefficients, we get a homomorphism $\eta \otimes \mathbb{Q}$ from $\left(\Lambda^2 H \otimes \Lambda^2H\right)^{\mathfrak{S}_2} \otimes \mathbb{Q}$ $= (\Lambda^2 H \leftrightarrow \Lambda^2H)\otimes \mathbb{Q}$ to $\operatorname{D}_{2}(H)\otimes \mathbb{Q}$. For all $a,b\in H$, \begin{eqnarray*} (\eta \otimes \mathbb{Q})\big((a\wedge b)\otimes (a\wedge b)\big) &= &\frac{1}{2} (\eta \otimes \mathbb{Q})\big((a\wedge b)\leftrightarrow (a\wedge b)\big)\\ &=& \frac{1}{2} \left(2 \cdot a \otimes [b,[a,b]] + 2 \cdot b \otimes [[a,b],a]\right) \end{eqnarray*} belongs to $\operatorname{D}_2(H) \subset \operatorname{D}_2(H)\otimes \mathbb{Q}$. Thus, the restriction of $\eta \otimes \mathbb{Q}$ to $\left(\Lambda^2 H \otimes \Lambda^2H\right)^{\mathfrak{S}_2}$ takes values in $\operatorname{D}_2(H)$, and a simple computation shows that it vanishes on the image of $\Lambda^4 H$. This discussion shows that the homomorphism (\ref{eq:D_2}) is well-defined and is determined by the formula (\ref{eq:D_2_formula}). Following Levine \cite{Levine_addendum,Levine_quasi-Lie}, we consider the \emph{quasi-Lie ring} $\mathfrak{L}'(H)$ freely generated by $H$ and, similarly to $\operatorname{D}_k(H)$, we define $$ \operatorname{D}'_{k}(H) := \operatorname{Ker}\left([\cdot,\cdot]: H \otimes \mathfrak{L}_{k+1}'(H) \longrightarrow \mathfrak{L}_{k+2}'(H)\right). $$ The natural group homomorphism $\mathfrak{L}'(H) \to \mathfrak{L}(H)$ induces a group homomorphism $\operatorname{D}'(H) \to \operatorname{D}(H)$ which, in degree $2$, happens to be injective but not surjective \cite{Levine_addendum}: \begin{equation} \label{eq:D_to_D} \xymatrix{ 0 \ar[r] & \operatorname{D}'_2(H) \ar[r] & \operatorname{D}_2(H) \ar[r]^-L & (\Lambda^2 H)\otimes \mathbb{Z}_2 \ar[r] & 0. 
} \end{equation} Here the map $L$ is defined by an application of the ``snake lemma''. Levine also considers the map $\eta': \frac{S^2 \Lambda^2H}{\Lambda^4H} \longrightarrow D'_2(H)$ defined by $$ \left\{ (a\wedge b) \cdot (c\wedge d)\right\} \stackrel{\eta'}{\longmapsto} a \otimes [b,[c,d]] + b \otimes [[c,d],a]+ c \otimes [d,[a,b]] + d \otimes [[a,b],c]. $$ (This map $\eta'$ is actually the degree $2$ case of a more general construction, which transforms tree Jacobi diagrams to elements of $\operatorname{D}'(H)$: see \S \ref{sec:diagrams} in this connection.) From the definition of Levine's map $L$, we see that the following diagram is commutative: $$ \xymatrix{ 0 \ar[r] & \frac{S^2 \Lambda^2H}{\Lambda^4H} \ar[r]^-{\leftrightarrow} \ar[d]^-{\eta'} & \frac{\left(\Lambda^2 H \otimes \Lambda^2H\right)^{\mathfrak{S}_2}}{\Lambda^4H} \ar[r] \ar[d]^-{\eta\otimes \mathbb{Q}} & \frac{\Lambda^2 H}{2\cdot\Lambda^2 H } \ar[r] \ar[d]^-\simeq & 0\\ 0 \ar[r] & \operatorname{D}'_2(H) \ar[r] & \operatorname{D}_2(H) \ar[r]^-L & (\Lambda^2 H)\otimes \mathbb{Z}_2 \ar[r] & 0 } $$ The map $\eta'$ is bijective in degree $2$ \cite{Levine_quasi-Lie}. We conclude that (\ref{eq:D_2}) is an isomorphism. \end{proof} In the sequel, the identifications (\ref{eq:D_3}) and (\ref{eq:D_2}) will be implicit. A formula for the variation of $\tau_1$ under surgery along a $Y$-graph is given in \cite{MM}. Strictly similar arguments give the following formula for $\tau_2$ and graph claspers of degree $2$. 
\begin{lemma}\label{lem:tau2} Let $H$ be a degree $2$ graph clasper in a homology cylinder $M$ with $4$ leaves $f_1, \dots, f_4$ which are oriented as shown below:\\[-0.4cm] $$ \labellist \small\hair 2pt \pinlabel {$f_4$} [l] at 70 9 \pinlabel {$f_3$} [l] at 69 43 \pinlabel {$f_2$} [r] at 0 41 \pinlabel {$f_1$} [r] at 0 9 \endlabellist \includegraphics[scale=1.3]{H}\\[-0.3cm] $$ Then we have $$ \tau_2\left( M_H\right)-\tau_2(M) = \left\{ (h_1\wedge h_2) \leftrightarrow (h_3 \wedge h_4) \right\} $$ where $h_1,\dots,h_4\in H$ denote the homology classes of $f_1,\dots,f_4$ respectively. \end{lemma} \noindent By clasper calculus, we can always assume that a degree $2$ graph clasper has four leaves. In particular, this lemma implies that surgery along a looped graph clasper $L$ of degree $2$ does not modify $\tau_2$. Combining this with Lemma \ref{lem:doubling} also gives the following. \begin{lemma}\label{lem:tau2_2} Let $Y$ be a $Y$-graph in a homology cylinder $M$ with one special leaf and two oriented leaves $f,f'$ as shown below:\\[-0.4cm] $$ \labellist \small\hair 2pt \pinlabel {$f$} [r] at 0 71 \pinlabel {$f'$} [l] at 78 72 \endlabellist \includegraphics[scale=0.8]{Y_special}\\[-0.3cm] $$ Then we have $$ \tau_2\left(M_Y\right) -\tau_2(M) = \left\{(h \wedge h') \otimes (h\wedge h') \right\} $$ where $h,h'\in H$ denote the homology classes of $f,f'$ respectively. \end{lemma} \subsection{The Alexander polynomial and the Reidemeister--Turaev torsion}\label{subsec:Alexander} There is a relative version of the Alexander polynomial for homology cylinders \cite{Sakasai}. The \emph{relative Alexander polynomial} of an $M\in \mathcal{IC}$ is the order of the relative homology group $H_1(M, \partial_-M; \mathbb{Z}[H])$ whose coefficients are twisted by the ring homomorphism $$ \xymatrix{ \mathbb{Z}[\pi_1(M)] \ar@{->>}[r] & \mathbb{Z}[H_1(M)] \ar[rr]^-{(m_{\pm,*})^{-1}}_-\simeq & & \mathbb{Z}[H]. 
} $$ This order is defined up to multiplication by a unit of the ring $\mathbb{Z}[H]$, i.e.\ an element of the form $\pm h$ for some $h\in H$. We denote it by $$ \Delta(M,\partial_-M) := \operatorname{ord} H_1(M, \partial_-M; \mathbb{Z}[H]) \ \in \mathbb{Z}[H]/\pm H. $$ \begin{lemma} For all $M \in \mathcal{IC}$, we have $\Delta(M,\partial_-M) \neq 0$. \end{lemma} \begin{proof} We have the following general fact, essentially proved in \cite[Proposition 2.1]{KLW}.\\[-0.3cm] \begin{quote} \textbf{Fact.} \emph{Let $(X,Y)$ be a connected CW pair such that the inclusion $Y\subset X$ induces an isomorphism in homology. Then, for every injective ring homomorphism $\varphi: \mathbb{Z}[H_1(X)] \to \mathbb{F}$ with values in a commutative field $\mathbb{F}$, the homology group $H_*(X,Y;\mathbb{F})$ with coefficients twisted by $\varphi$ is trivial.}\\[-0.3cm] \end{quote} \noindent Thus, an application of the universal coefficient theorem gives $$ H_1(M, \partial_-M; \mathbb{Z}[H]) \otimes_{\mathbb{Z}[H]} Q(\mathbb{Z}[H]) \simeq H_1\big(M, \partial_-M; Q(\mathbb{Z}[H])\big) =0 $$ where $Q(\mathbb{Z}[H])$ denotes the fraction field of the domain $\mathbb{Z}[H]$. We deduce that the $\mathbb{Z}[H]$-module $H_1(M, \partial_-M; \mathbb{Z}[H])$ is torsion. \end{proof} \noindent It has been shown by Milnor for link complements \cite{Milnor_duality} and by Turaev for closed manifolds \cite{Turaev_Alexander} that the Alexander polynomial can be interpreted in dimension $3$ as a kind of Reidemeister torsion. The reader is referred to Turaev's book \cite{Turaev_book} for an introduction to the theory of Reidemeister torsions. The same interpretation holds for the relative Alexander polynomial of homology cylinders. \begin{proposition}\label{prop:Alexander_torsion} Let $M\in \mathcal{IC}$ and let $(X,Y)$ be a cell decomposition of $(M,\partial_-M)$.
We denote by $\mu: \mathbb{Z}[\pi_1(X)] \to Q(\mathbb{Z}[H])$ the ring map induced by the isomorphism $(m_{\pm,*})^{-1}: H_1(X)\simeq H_1(M) \to H$ and we denote by $\tau^\mu(X,Y)\in Q(\mathbb{Z}[H])/\pm H$ the relative Reidemeister torsion with abelian coefficients given by $\mu$. Then we have $$ \tau^\mu (X,Y) = \Delta(M,\partial_-M) \ \in \mathbb{Z}[H]/\pm H. $$ \end{proposition} \noindent This follows from \cite[Lemma 3.6]{FJR} for instance. The main argument is that $M$ collapses relative to $\partial_- M$ onto a cell complex having only $1$-cells and $2$-cells in an equal number. (This fact follows from the existence of a Heegaard splitting for $M$, as discussed in \S \ref{subsec:definition_relations}.) Thus, the computations of $\tau^\mu (X,Y)$ and $\Delta(M,\partial_-M)$ reduce to a single determinant. Thanks to Proposition \ref{prop:Alexander_torsion}, one can use Turaev's refinement of the Reidemeister torsion \cite{Turaev_Euler} to fix the ambiguity in $\pm H$ in the definition of the relative Alexander polynomial. Let $M\in \mathcal{IC}$ and let $(X,Y)$ be a cell decomposition of $(M,\partial_-M)$. An \emph{Euler chain} in $X$ relative to $Y$ is a singular $1$-chain $c$ in $X$ with boundary $$ \partial c = \sum_{\sigma} (-1)^{\dim(\sigma)} \cdot (\hbox{center of } \sigma) $$ where the sum is indexed by the cells $\sigma$ of $X \setminus Y$. Such chains exist since the relative Euler characteristic of the pair $(X,Y)$ is zero. Two Euler chains $c$ and $c'$ are \emph{homologous} if the $1$-cycle $c-c'$ is null-homologous. An \emph{Euler structure} on $X$ \emph{relative} to $Y$ is a homology class of Euler chains. The set $$ \operatorname{Eul}_{\operatorname{c}}(X,Y) $$ of Euler structures on $X$ relative to $Y$ is an $H_1(X)$-affine space.
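To unpack the definition in the simplest situation, suppose that $X \setminus Y$ consists of a single $1$-cell $\sigma^1$ and a single $2$-cell $\sigma^2$. Then the condition on the boundary of $c$ reads
$$
\partial c = - (\hbox{center of } \sigma^1) + (\hbox{center of } \sigma^2),
$$
so that an Euler chain is simply a path in $X$ from the center of $\sigma^1$ to the center of $\sigma^2$, and two such paths define the same Euler structure if and only if the $1$-cycle obtained by concatenating one with the reverse of the other is null-homologous in $X$.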
Turaev associates in \cite{Turaev_Euler} to each $\theta \in \operatorname{Eul}_{\operatorname{c}}(X,Y)$ a representative $$ \tau^\mu(X,Y;\theta) \in Q(\mathbb{Z}[H]) $$ of the relative Reidemeister torsion $\tau^\mu(X,Y)$ in such a way that \begin{equation} \label{eq:torsion_affine} \forall h \in H \simeq H_1(X), \quad \tau^\mu\left(X,Y;\theta + \overrightarrow{h}\right) = h \cdot \tau^\mu(X,Y;\theta). \end{equation} We call $\tau^\mu(X,Y;\theta)$ the \emph{Reidemeister--Turaev torsion} (or, in short, \emph{RT torsion}) of the CW pair $(X,Y)$ equipped with $\theta$. The ambiguity in $H$ is fixed by lifting an Euler chain in the class $\theta$ to the maximal abelian cover of $X$, which gives a preferred lift for each cell of $X\setminus Y$. The sign ambiguity is fixed thanks to a correcting multiplicative term: in general, one has to choose an orientation of the $\mathbb{R}$-vector space $H_*(X,Y;\mathbb{R})$ but, in the situation of homology cylinders, this space is trivial. Observe also that, in this situation, $\tau^\mu(X,Y;\theta)$ belongs to $\mathbb{Z}[H]$ by Proposition \ref{prop:Alexander_torsion}. The Euler structures that are defined in the previous paragraph are called \emph{combinatorial} since they are defined for pairs of CW complexes $(X,Y)$. There is also a \emph{geometric} version of Euler structures which are defined in \cite{Turaev_Euler} for pairs of smooth manifolds $(U,V)$: the submanifold $V$ is then assumed to be a union of connected components of $\partial U$. Turaev's correspondence between the two notions of Euler structures involves smooth triangulations. If the manifold $U$ is three-dimensional, Benedetti and Petronio define in \cite{BP} a relative version of the Reidemeister--Turaev torsion for quite general submanifolds $V$ of $\partial U$. This invariant was rediscovered by Friedl, Juh\'asz and Rasmussen in the context of sutured Heegaard--Floer homology \cite{FJR}. 
The correspondence between combinatorial Euler structures and geometric Euler structures is proved in \cite{BP} using the theory of branched standard spines and in \cite{FJR} using Morse theory. In the sequel, we adopt the latter viewpoint which is better suited to our purposes. The constructions of \cite{FJR} apply to any homology cylinder $M$ over $\Sigma$ in the following way. In order to consider homology cylinders in the smooth category, we smooth the corners of $(\Sigma \times I)$ and we denote the smooth trivial cylinder by $(\Sigma \times I)^{\operatorname{sc}}$: $$ \labellist \small\hair 2pt \pinlabel {$\Sigma \times I$} at 77 31 \pinlabel {$\leadsto$} at 212 31 \pinlabel {$\left(\Sigma \times I\right)^{\operatorname{sc}}$} at 335 31 \endlabellist \includegraphics[scale=0.5]{smooth_corners} $$ The inclusion $(\Sigma \times I)^{\operatorname{sc}} \subset (\Sigma \times I)$ identifies the tangent bundle $T(\Sigma \times I)^{\operatorname{sc}}$ with the restriction of $T \Sigma \times TI$ to $(\Sigma \times I)^{\operatorname{sc}}$. We give the $3$-manifold $M$ a smooth structure and we assume that its boundary is parametrized by a diffeomorphism $m:\partial\left(\Sigma \times I\right)^{\operatorname{sc}} \to \partial M$. This diffeomorphism induces an isomorphism of vector bundles \begin{equation} \label{eq:m*} \mathbb{R}\oplus m_*: T\left(\Sigma \times I\right)^{\operatorname{sc}}|_{\partial \left(\Sigma \times I\right)^{\operatorname{sc}}} \simeq \mathbb{R} \oplus T\partial \left(\Sigma \times I\right)^{\operatorname{sc}} \longrightarrow \mathbb{R} \oplus T\partial M \simeq TM|_{\partial M} \end{equation} between the tangent bundles restricted to the boundaries. We denote by $t$ the coordinate along $I$ in $(\Sigma\times I)$ and by $\frac{\partial}{\partial t}$ the corresponding vector field, which is a non-singular section of $T \Sigma \times TI$. 
The image by (\ref{eq:m*}) of its restriction to $(\Sigma \times I)^{\operatorname{sc}}$ defines on $\partial M$ a non-singular vector field $v_0$ of $M$ which points outside $M$ on $\partial_+M$, points inside $M$ on $\partial_-M$ and is tangent to $\partial M$ along the circle $m(\partial \Sigma \times \{0\})$: see Figure \ref{fig:v_0}. An \emph{Euler structure} on $M$ \emph{relative} to $\partial_-M$ is an equivalence class of non-singular vector fields $v$ on $M$ such that $v|_{\partial M}=v_0$. Here two such vector fields $v,v'$ are considered \emph{equivalent} if there is an open $3$-ball $B\subset \operatorname{int}(M)$ such that $v|_{M\setminus B}$ and $v'|_{M\setminus B}$ are homotopic relative to $\partial M$. Obstruction theory tells us that the set of Euler structures on $M$ relative to $\partial_- M$ $$ \operatorname{Eul}_{\operatorname{g}}(M,\partial_-M) $$ is an $H_1(M)$-affine space since we have $H_1(M)\simeq H^2(M,\partial M)$ by Poincar\'e duality. Let $(X,Y)$ be a cell decomposition of $(M,\partial_- M)$ arising from a handle decomposition of $M$ relative to $\partial_- M$. Then, an $H_1(M)$-equivariant correspondence between combinatorial and geometric Euler structures \begin{equation} \label{eq:comb_to_geom} \xymatrix{ \operatorname{Eul}_{\operatorname{c}}(X,Y) \ar[r]^-\simeq & \operatorname{Eul}_{\operatorname{g}}(M,\partial_-M) } \end{equation} is defined in \cite[\S 3]{FJR} by desingularizing a gradient-like vector field of a Morse function that induces the given handle decomposition. This bijection is similar to the formulation in the closed case of Turaev's correspondence \cite{Turaev_Euler} in terms of Morse theory \cite{HL,Massuyeau_torsion}.
Therefore, the \emph{relative RT torsion} of $M$ equipped with $\xi\in \operatorname{Eul}_{\operatorname{g}}(M,\partial_-M)$ is defined as $$ \tau(M,\partial_-M;\xi) := \tau^\mu(X,Y;\theta) \in Q(\mathbb{Z}[H]) $$ where $\theta\in \operatorname{Eul}_{\operatorname{c}}(X,Y)$ corresponds to $\xi$ by the correspondence (\ref{eq:comb_to_geom}). \begin{figure}[h] \begin{center} \labellist \small\hair 2pt \pinlabel {$M$} at 109 35 \pinlabel {$\partial_+M$} [r] at 0 59 \pinlabel {$\partial_-M$} [r] at 0 2 \pinlabel {$m\left((\partial \Sigma \times I)^{\operatorname{sc}}\right)$} [l] at 208 30 \pinlabel {$v_0$} [bl] at 168 75 \endlabellist \includegraphics[scale=0.8]{v0} \end{center} \caption{The non-singular vector field $v_0$ of $M$ defined on $\partial M$.} \label{fig:v_0} \end{figure} We shall need further constructions for relative Euler structures, which are already available in the literature for closed oriented $3$-manifolds. Thus an efficient way to extend them to the case of homology cylinders is to ``close'' any $M \in \mathcal{IC}$ as follows. First of all, we add a $2$-handle $D$ to the surface $\Sigma$ to obtain a closed connected oriented surface $\clocase{\Sigma}$ of genus $g$. Next we glue a $2$-handle $D \times I$ along $M$ to obtain a cobordism $\clocase{M}$ between two copies of $\clocase{\Sigma}$. Finally we obtain a closed connected oriented $3$-manifold $M^\sharp$ by identifying the bottom surface of $\clocase{M}$ with its top surface using the homeomorphism $$ \xymatrix{ \partial_- \clocase{M} & & \ar[ll]_-{m_-\cup \operatorname{Id}_D}^-\cong \clocase{\Sigma} \ar[rr]^-{m_+\cup \operatorname{Id}_D}_-\cong & & \partial_+ \clocase{M}. 
} $$ Observe that we have $M \subset M^\sharp$ and there is an isomorphism $$ H_1(M) \oplus \mathbb{Z} \overset{\simeq}{\longrightarrow} H_1(M^\sharp), \ (h,x)\longmapsto \operatorname{incl}_*(h) + x\cdot \left[0\times S^1\right], $$ where the circle $0\times S^1$ denotes the co-core $0\times I$ of the $2$-handle $D\times I$ with the two ends identified. A non-singular vector field $v$ on $M$ which coincides with $v_0$ on $\partial M$ can be extended to a non-singular vector field $v^\sharp$ on $M^\sharp$ by gluing it to the vector field $\frac{\partial}{\partial t}$ on $D\times I$. Let $\operatorname{Eul}_{\operatorname{g}}(M^\sharp)$ be the space of geometric Euler structures on the closed $3$-manifold $M^\sharp$, i.e.\ the space of non-singular vector fields on $M^\sharp$ up to homotopy on $M^\sharp$ deprived of an open $3$-ball \cite{Turaev_Euler}. Then a \emph{closure} map for Euler structures \begin{equation} \label{eq:Euler_closure} \xymatrix{ \operatorname{Eul}_{\operatorname{g}}(M,\partial_-M) \ar[r] & \operatorname{Eul}_{\operatorname{g}}(M^\sharp), \quad \xi \ar@{|->}[r] & \xi^\sharp } \end{equation} is defined by $\xi^\sharp:= [v^\sharp]$ if $\xi=[v]$. This map is affine over $\operatorname{incl}_*:H_1(M) \to H_1(M^\sharp)$. Recall that a ``Chern class'' map $c:\operatorname{Eul}_{\operatorname{g}}(M^\sharp) \to H_1(M^\sharp)$ is defined in \cite{Turaev_Euler} for the closed oriented $3$-manifold $M^\sharp$. The Chern class $c([u]) \in H^2(M^\sharp) \simeq H_1(M^\sharp)$ of a $[u]\in \operatorname{Eul}_{\operatorname{g}}(M^\sharp)$ is the obstruction to find a non-singular vector field on $M^\sharp$ linearly independent with $u$. In other words, it is defined as the Euler class (or, equivalently, first Chern class) of a $2$-dimensional oriented vector bundle: $$ c([u]) =e\left(TM^\sharp/\langle u\rangle\right) \ \in H^2(M^\sharp). 
$$ We define the \emph{relative Chern class} map for the homology cylinder $M$ by the diagram $$ \xymatrix{ \operatorname{Eul}_{\operatorname{g}}(M,\partial_- M) \ar@{.>}[d]_c \ar[r]^-{(\ref{eq:Euler_closure})} & \operatorname{Eul}_{\operatorname{g}}(M^\sharp) \ar[d]^{c}\\ H_1(M) & \ar@{->>}[l]^-p H_1(M)\oplus \mathbb{Z} \simeq H_1(M^\sharp) \quad \quad \quad \quad \quad } $$ where the map $p$ denotes the natural projection. The relative Chern class $c(\xi)\in H_1(M) \simeq H^2(M,\partial M)$ of a $\xi \in \operatorname{Eul}_{\operatorname{g}}(M,\partial_- M)$ can be described without making reference to $M^\sharp$ as follows. Let $w$ be a non-singular vector field on the surface $\Sigma$, and let $w'$ be the image by the isomorphism (\ref{eq:m*}) of the vector field $w\times I$ on $(\Sigma\times I)^{\operatorname{sc}} \subset \Sigma \times I$: thus, $w'$ is a non-singular vector field of $M$ defined on $\partial M$ and linearly independent with $v_0$. Then, for any non-singular vector field $v$ on $M$ representing $\xi$, $c(\xi)$ is the obstruction to extend $w'$ to a non-singular vector field on $M$ linearly independent with $v$: \begin{equation} \label{eq:Chern_as_obstruction} c(\xi) =e(TM/\langle v\rangle, w') \ \in H^2(M,\partial M). \end{equation} \begin{example} \label{ex:Chillingworth} For the mapping cylinder $\mathbf{c}(h)$ of an element $h$ of the Torelli group $\Torelli$, the relative Chern class map also has the following description. Recall that the \emph{Chillingworth homomorphism} \cite{Chillingworth,Johnson_abelianization} $$ t: \Torelli\longrightarrow H^1(\Sigma) $$ maps $h\in \Torelli$ to the obstruction $t(h)$ to find a homotopy between a non-singular vector field $w$ on $\Sigma$ and its image under $h^{-1}$. Under the isomorphisms $H^1(\Sigma)\simeq H_1(\Sigma,\partial \Sigma)\simeq H_1(\Sigma)$, the Chillingworth homomorphism takes values in $H$. 
Let $\left[\frac{\partial}{\partial t}\right]\in \operatorname{Eul}_{\operatorname{g}}(\mathbf{c}(h),\partial_-\mathbf{c}(h))$ be the Euler structure represented by the ``upward'' vector field. Then we deduce from (\ref{eq:Chern_as_obstruction}) that $c\left(\left[\frac{\partial}{\partial t}\right]\right)$ is equal to $- t(h)\in H_1(\Sigma\times I)\simeq H.$ \end{example} We now give some basic properties of the relative Chern class map. \begin{lemma}\label{lem:c} Let $M\in \mathcal{IC}$. The relative Chern class map $c: \operatorname{Eul}_{\operatorname{g}}(M,\partial_- M) \to H_1(M)$ is affine over the multiplication by $2$ map $H_1(M) \to H_1(M)$, and its image is $2H_1(M)$. \end{lemma} \begin{proof} In the closed case, the Chern class map $\operatorname{Eul}_{\operatorname{g}}(M^\sharp) \to H_1(M^\sharp)$ is known to be affine over the multiplication by $2$ \cite{Turaev_Euler}. Moreover, the closure map (\ref{eq:Euler_closure}) is affine over $\operatorname{incl}_*: H_1(M) \to H_1(M^\sharp)$. We deduce that, for any $\xi \in \operatorname{Eul}_{\operatorname{g}}(M,\partial_-M)$ and $h\in H_1(M)$, $$ c\left(\xi + \overrightarrow{h}\right) = pc\left(\left(\xi + \overrightarrow{h}\right)^\sharp\right) = pc(\xi^\sharp) + \overrightarrow{2 p \operatorname{incl}_*(h)} = c(\xi) + 2 \overrightarrow{h} $$ which proves the first statement. To prove the second statement, it is enough to show that for any $\xi \in \operatorname{Eul}_{\operatorname{g}}(M,\partial_-M)$ the mod $2$ reduction of $c(\xi)$ is trivial. Let $w$ be a non-singular vector field on $\Sigma$, and let $w'$ be the image of $w\times I$ by (\ref{eq:m*}). We deduce from (\ref{eq:Chern_as_obstruction}) that the mod $2$ reduction of $c(\xi)$ is the (primary) obstruction to extend the parallelization $(v_0,w',v_0\wedge w')$ of $M$ on $\partial M$ to the whole of $M$. 
Since $TM|_{\partial M}$ is isomorphic to $\mathbb{R} \oplus T\partial M$ (using the normal vector field), $(v_0,w',v_0\wedge w')$ defines a spin structure $\sigma$ on $\partial M$, and the latter obstruction is a relative second Stiefel--Whitney class: $$ c(\xi) \! \! \! \mod 2 = w_2(M,\sigma) \in H^2(M,\partial M;\mathbb{Z}_2). $$ Let $(\rho_1,\dots,\rho_{2g})$ be a system of simple oriented closed curves on the surface $\Sigma$ which generates $H$. Since $M$ is a homology cylinder over $\Sigma$, we can find for every $i=1,\dots,2g$ a compact connected oriented surface $R_i$ properly embedded in $M$ such that $\partial R_i= m_+(\rho_i)- m_-(\rho_i)$. Then the quantity $$ \left \langle w_2(M,\sigma), [R_i] \right\rangle \in \mathbb{Z}_2 $$ is the obstruction to extend the spin structure $\sigma|_{\partial R_i}$ to $R_i$ and, so, it vanishes since the latter restricts to the same spin structure of $\rho_i$ on each connected component of $\partial R_i$. (Here we are using the fact that the $1$-dimensional spin cobordism group is $\mathbb{Z}_2$.) Since $[R_1],\dots, [R_{2g}]$ generate $H_2(M,\partial M)$, we conclude that $w_2(M,\sigma)$ is trivial. \end{proof} The following is justified by Lemma \ref{lem:c}. \begin{definition} Let $M\in \mathcal{IC}$. The \emph{preferred} relative Euler structure of $M$ is the unique $\xi_0\in \operatorname{Eul}_{\operatorname{g}}(M,\partial_-M)$ satisfying $c(\xi_0)=0$. \end{definition} \noindent Thus the polynomial $$ \tau(M,\partial_-M;\xi_0) \in \mathbb{Z}[H] \subset Q(\mathbb{Z}[H]) $$ is a topological invariant of homology cylinders which, by Proposition \ref{prop:Alexander_torsion}, can be regarded as a \emph{normalized} version of $\Delta(M,\partial_-M)\in \mathbb{Z}[H]/\!\pm H$. \begin{example} If $M$ is the mapping cylinder of an $h \in \Torelli$, we deduce from Lemma \ref{lem:c} and Example \ref{ex:Chillingworth} that $\xi_0$ is $\left[\frac{\partial}{\partial t}\right] + \overrightarrow{t(h)/2}$. 
So formula (\ref{eq:torsion_affine}) gives \begin{equation} \label{eq:torsion_mapping_cylinder} \tau\big(\mathbf{c}(h),\partial_-\mathbf{c}(h);\xi_0\big)= t(h)^{\nicefrac{1}{2}} \ \in H \subset \mathbb{Z}[H]. \end{equation} (Recall that $H$ inside $\mathbb{Z}[H]$ is denoted multiplicatively.) This example shows that $\tau(M,\partial_-M;\xi_0)$ \emph{does} depend on the boundary parametrization $m$ of $M$ although the class $\Delta(M,\partial_-M)$ only depends on $m$ through the isomorphism $m_{\pm,*}: H \to H_1(M)$. \end{example} We shall now prove several properties for the relative RT torsion of homology cylinders. A first property is its invariance under stabilization. Indeed, if $\Sigma^s$ is a stabilization of the surface $\Sigma$ as shown in Figure \ref{fig:stabilization} so that $H$ embeds into $H^s := H_1(\Sigma^s)$, then the following diagram is commutative: \begin{equation} \label{eq:stabilization_RT} \xymatrix{ \mathcal{IC}(\Sigma) \ar[r] \ar[d]_-{\tau(\, \cdot\, ,\partial_-\, \cdot\, ;\xi_0)} & \mathcal{IC}(\Sigma^s) \ar[d]^-{\tau(\, \cdot\, ,\partial_-\, \cdot\, ;\xi_0)}\\ \mathbb{Z}[H] \ar[r] & \mathbb{Z}[H^s]. } \end{equation} Next, the relative RT torsion is a limit of infinitely many finite-type invariants. To show this, we need beforehand to understand how Torelli surgeries ``transport'' Euler structures. \begin{lemma}\label{lem:Euler_surgery} Let $M\in \mathcal{IC}$, let $S \subset \operatorname{int}(M)$ be a compact connected oriented surface with one boundary component and let $s\in \Torelli(S)$. 
Then the Torelli surgery $M \leadsto M_s$ induces a canonical bijection $\Omega_s: \operatorname{Eul}_{\operatorname{g}}(M,\partial_- M)\rightarrow \operatorname{Eul}_{\operatorname{g}}(M_s,\partial_- M_s)$, which is affine over the isomorphism $\Phi_s$ defined at (\ref{eq:iso_homology}) and which fits into the following commutative diagram: $$ \xymatrix{ \operatorname{Eul}_{\operatorname{g}}(M,\partial_-M) \ar[d]_c \ar[r]^{\Omega_s}_-\simeq & \operatorname{Eul}_{\operatorname{g}}(M_s,\partial_- M_s) \ar[d]^{c}\\ H_1(M) \ar[r]^{\simeq}_-{\Phi_s} & H_1(M_s). } $$ \end{lemma} \begin{proof} We know from \cite{Massuyeau_torsion}, which deals with the closed case, that the Torelli surgery $M^\sharp \leadsto {M_s}^\sharp$ induces a canonical bijection $$ \Omega_s: \operatorname{Eul}_{\operatorname{g}}(M^\sharp) \stackrel{\simeq}{\longrightarrow} \operatorname{Eul}_{\operatorname{g}}\left({M_s}^\sharp\right). $$ (The notion of ``Torelli surgery'' is defined in \cite{Massuyeau_torsion} along the boundary of an embedded handlebody instead of an embedded surface $S$ with one boundary component. But the two notions are equivalent since a regular neighborhood of $S$ is a handlebody.) The Euler structure $\Omega_s(\rho)$ associated to a $\rho \in \operatorname{Eul}_{\operatorname{g}}(M^\sharp)$ can be described as follows. Let $u\in \rho$ be a non-singular vector field on $M^\sharp$ which is normal to $S$ and is positive with respect to $S$. Then there is a regular neighborhood $S \times [-1,1]$ of $S$ in $\operatorname{int}(M) \subset M^\sharp$ such that $u$ is the ``upward'' vector field $\frac{\partial}{\partial t}$ on it. Provided the smooth structure on ${M_s}^\sharp$ is appropriately chosen with respect to that of $M^\sharp$, there is a unique vector field $u_s$ on ${M_s}^\sharp$ which coincides with $u$ on $M^\sharp\setminus \operatorname{int}(S \times [-1,1])$ and with $\frac{\partial}{\partial t}$ on $\mathbf{c}(s)$. 
Then, \cite[Lemma 3.5]{Massuyeau_torsion} tells us that \begin{equation} \label{eq:concrete_description} \Omega_s(\rho) = [u_s] - \overrightarrow{\operatorname{incl}_*(t(s)/2)} \end{equation} where $t: \Torelli(S) \to H^1(S) \simeq H_1(S,\partial S)\simeq H_1(S)$ is the Chillingworth homomorphism. In particular, $\Omega_s(\rho)$ is obtained by modifying the vector field $u_s$ by some ``Reeb turbulentization'' \cite{Turaev_Euler} which is supported in a neighborhood of $S$. This fact implies that, for all $\xi \in \operatorname{Eul}_{\operatorname{g}}(M,\partial_-M)$, $\Omega_s(\xi^\sharp)$ belongs to the image of $\operatorname{Eul}_{\operatorname{g}}(M_s,\partial_-M_s)$ under the closure map (\ref{eq:Euler_closure}). So we can define the map $\Omega_s$ in the case of homology cylinders by the following commutative diagram: $$ \xymatrix{ \operatorname{Eul}_{\operatorname{g}}(M,\partial_-M) \ar@{>->}[r]\ar@{-->}[d]_{\exists! \Omega_s}\ & \operatorname{Eul}_{\operatorname{g}}(M^\sharp) \ar[d]_-\simeq^-{\Omega_s}\\ \operatorname{Eul}_{\operatorname{g}}(M_s,\partial_-M_s) \ar@{>->}[r] & \operatorname{Eul}_{\operatorname{g}}({M_s}^\sharp) } $$ In the closed case, the map $\Omega_s$ is affine over the Mayer--Vietoris isomorphism $\Phi_s$ and it is compatible with the Chern class map \cite{Massuyeau_torsion}. We easily deduce from the definitions that the map $\Omega_s$ has the same properties in the case of homology cylinders. \end{proof} In the sequel we denote by $I$ the augmentation ideal of $\mathbb{Z}[H]$. \begin{theorem}\label{thm:torsion_finiteness} Let $d\ge 1$. Then the $d$-th $I$-adic reduction of the relative Reidemeister--Turaev torsion of homology cylinders $$ \tau(M,\partial_-M;\xi_0) \in \mathbb{Z}[H]/I^d $$ is a finite-type invariant of degree at most $d-1$. \end{theorem} \begin{proof}[Sketch of the proof] The analogous statement for closed oriented $3$-manifolds $N$ has been proved in \cite{Massuyeau_torsion} using Heegaard splittings. 
To understand how the RT torsion changes $\tau(N,\rho) \leadsto \tau(N_s,\Omega_s(\rho))$ when a Torelli surgery $N \leadsto N_s$ is performed, one needs two technical ingredients: \begin{itemize} \item[(i)] a description following \cite{HL} of Turaev's correspondence between combinatorial and geometric Euler structures in terms of Morse theory \cite[\S 2.3]{Massuyeau_torsion}, \item[(ii)] an explicit formula following \cite{Turaev_spinc} which computes the RT torsion of $N$ from a Heegaard splitting by means of Fox's free derivatives \cite[\S 4.1]{Massuyeau_torsion}. \end{itemize} This proof can be adapted in a straightforward way to the case of homology cylinders using the notion of Heegaard splitting defined in \S \ref{subsec:definition_relations} and some technical results from \cite{FJR}. To be more specific, the analogue of (i) can be found in \cite[\S 3.5]{FJR} and the analogue of (ii) is done in \cite[\S 4]{FJR}. The final observation is that a Torelli surgery $M \leadsto M_s$ between homology cylinders preserves the preferred Euler structure $\xi_0$, as follows from Lemma \ref{lem:Euler_surgery}. \end{proof} We shall now identify the ``leading term'' of the relative RT torsion with respect to the $I$-adic filtration of $\mathbb{Z}[H]$. According to Johnson \cite{Johnson_abelianization}, the Chillingworth homomorphism can be recovered from the first Johnson homomorphism by the formula $t(h)= \operatorname{cont} \circ\, \tau_1 (h)$, for all $h\in \Torelli$, where ``$\operatorname{cont}$'' is the contraction homomorphism $\Lambda^3H \to H$ defined by $$ \operatorname{cont} (a\wedge b\wedge c):=2\cdot \left(\omega(a,b)c+\omega(b,c)a+\omega(c,a)b\right). $$ Thus the map $t$ can be extended to a monoid homomorphism $$ t: \mathcal{IC} \longrightarrow H $$ simply by setting $t:= \operatorname{cont} \circ\, \tau_1$. \begin{lemma} \label{lem:I2} For all $M\in \mathcal{IC}$, we have $\tau(M,\partial_-M;\xi_0) = t(M)^{\nicefrac{1}{2}} \mod I^2$. 
\end{lemma} \begin{proof} The relative RT torsion is invariant by stabilization in the sense of (\ref{eq:stabilization_RT}). Therefore we can assume that $g\geq 3$, and $M$ is then $Y_2$-equivalent to a mapping cylinder $\mathbf{c}(h)$ for some $h\in \Torelli$ \cite{MM}. By Theorem \ref{thm:torsion_finiteness} and Lemma \ref{lem:Y_FTI}, we have $$ \tau(M,\partial_-M;\xi_0) = \tau(\mathbf{c}(h),\partial_-\mathbf{c}(h);\xi_0) \mod I^2. $$ We conclude thanks to (\ref{eq:torsion_mapping_cylinder}) and the fact that $\tau_1$ is invariant under $Y_2$-equivalence. \end{proof} By Lemma \ref{lem:I2}, we can associate to any $M \in \mathcal{IC}$ the symmetric tensor \begin{equation}\label{def:a} \alpha(M):= \left\{ \tau(M,\partial_-M;\xi_0) - t(M)^{\nicefrac{1}{2}}\right\} \ \in \frac{I^2}{I^3}\simeq S^2H. \end{equation} (Here $S^2H$ is identified with $I^2/I^3$ in the usual way, namely $h\cdot h' \mapsto \left\{(h-1)(h'-1)\right\}$.) We shall refer to $\alpha(M)$ as the \emph{quadratic part} of the relative RT torsion. It has the following properties. \begin{proposition}\label{prop:alpha_properties} The map $\alpha: \mathcal{IC} \to S^2H$ is an additive finite-type invariant of degree $2$, which satisfies \begin{equation}\label{eq:alexcyl} \forall f \in \Torelli, \quad \alpha(\mathbf{c}(f))=0. \end{equation} Next, if $G$ is a looped graph clasper of degree $2$ in a homology cylinder $M$ whose leaves $f$ and $f'$ are oriented as follows $$ \labellist \small\hair 2pt \pinlabel {$f$} [r] at 0 16 \pinlabel {$f'$} [l] at 155 16 \endlabellist \includegraphics[scale=0.8]{Phi_graph} $$ and have homology classes $h\in H$ and $h'\in H$ respectively, then we have $$ \alpha(M_G) - \alpha(M) = -2 \cdot h h'. 
$$ Finally, if $Y$ is a $Y$-graph in a homology cylinder $M$ with two special leaves and one arbitrary leaf $f$ which is oriented in an arbitrary way $$ \labellist \small\hair 2pt \pinlabel {$f$} [r] at 22 20 \endlabellist \includegraphics[scale=0.8]{Y_special2}\\[-0.3cm] $$ and has homology class $h\in H$, then we have $$ \alpha(M_Y)-\alpha(M) = h^2. $$ \end{proposition} In order to prove this proposition, we shall need two other properties of the relative RT torsion of homology cylinders. \begin{lemma}\label{lem:multiplicativity} For all $M,M' \in \mathcal{IC}$, we have $$ \tau\big(M\circ M', \partial_- (M\circ M'); \xi_0\big) = \tau(M,\partial_-M;\xi_0) \cdot \tau(M',\partial_-M';\xi_0) \in \mathbb{Z}[H]. $$ \end{lemma} \begin{lemma}\label{lem:loop} Let $G$ be a looped graph clasper of degree $d\geq 1$ in a homology cylinder $M$, whose leaves $f_1,\dots,f_d$ and loop $\ell$ of edges are oriented as shown below:\\[0.1cm] $$ \labellist \small\hair 2pt \pinlabel {$\varepsilon_1$} at 42.5 37 \pinlabel {$\varepsilon_2$} at 73 37 \pinlabel {$\varepsilon_d$} at 138.5 37 \pinlabel {$f_1$} [r] at 16 13 \pinlabel {$f_2$} [r] at 50 13 \pinlabel {$f_{d}$} [r] at 115 13 \pinlabel {$\ell$} [b] at 2 75 \pinlabel {where} [l] at 196 49 \pinlabel {$0$} at 288 65 \pinlabel {$1$} at 288 33 \pinlabel {$=$} at 308 64 \pinlabel {$=$} at 308 32 \pinlabel {\scriptsize\!\!\! (a half-twist)} [l] at 318 21 \endlabellist \!\!\! \includegraphics[scale=1.1]{looped_graph-new} $$ We denote by $h_1,\dots,h_d\in H$ and $b\in H$ the homology classes of $f_1,\dots,f_d$ and $\ell$, respectively, and we set $\varepsilon:= \varepsilon_1 + \cdots + \varepsilon_d\in \mathbb{Z}_2$. Then we have $$ \tau(M_G,\partial_-M_G;\xi_0) = P_{d,\varepsilon}(b^{-1},h_1,\dots,h_d) \cdot P_{d,\varepsilon}(b,h_1^{-1},\dots,h_d^{-1}) \cdot \tau(M,\partial_-M;\xi_0) $$ where we denote $\displaystyle{P_{d,\varepsilon}(Y,X_1,\dots,X_d) := Y + (-1)^{\varepsilon+1} \prod_{i=1}^d (1-X_i)} \in\mathbb{Z}[Y,X_1,\dots,X_d]$. 
\end{lemma} The proofs are as follows. \begin{proof}[Proof of Lemma \ref{lem:multiplicativity}] The identity in $Q(\mathbb{Z}[H])/\pm H$ easily follows from the multiplicativity of the Reidemeister torsion of acyclic chain complexes with respect to direct sums \cite{Turaev_book}. Thus there are some unique $\varepsilon \in \{+1,-1\}$ and $h \in H$ such that $$ \tau\big(M\circ M', \partial_- (M\circ M'); \xi_0\big) =\varepsilon h \cdot \tau(M,\partial_-M;\xi_0) \cdot \tau(M',\partial_-M';\xi_0). $$ Setting $x(M) := \tau(M,\partial_-M;\xi_0)-t(M)^{\nicefrac{1}{2}}$, which belongs to $I^2$ by Lemma \ref{lem:I2}, this identity reads \begin{equation} \label{eq:multiplicativity} t(M\circ M')^{\nicefrac{1}{2}} + x(M\circ M')= \varepsilon h \cdot \left(t(M)^{\nicefrac{1}{2}}+x(M) \right) \cdot \left(t(M')^{\nicefrac{1}{2}}+x(M') \right). \end{equation} By reducing (\ref{eq:multiplicativity}) modulo $I$, we obtain $\varepsilon =1$. By reducing (\ref{eq:multiplicativity}) modulo $I^2$, we get $$ \left\{t(M\circ M')^{\nicefrac{1}{2}} - 1 \right\}= \left\{ h\cdot t(M)^{\nicefrac{1}{2}} \cdot t(M')^{\nicefrac{1}{2}} - 1 \right\} \ \in I/I^2 \simeq H $$ which implies $h=1\in H$ (written multiplicatively). \end{proof} \begin{proof}[Proof of Lemma \ref{lem:loop}] Analogues of this formula have already been proved in two contexts: for the Alexander polynomial of knots in \cite{GL_loop} and for the RT torsion of closed $3$-manifolds with Euler structure in \cite{Massuyeau_these}. (It should be observed that, in both situations, the orientation conventions for claspers differ from ours.) 
Since in the situation of homology cylinders, the Reidemeister torsion can be interpreted as an Alexander polynomial (Proposition \ref{prop:Alexander_torsion}), the proof given by Garoufalidis and Levine in \cite{GL_loop} can be adapted in a straightforward way to obtain that \begin{equation} \label{eq:looped_graph} \tau(M_G,\partial_-M_G;\xi_0) = \eta \cdot k \cdot P_{d,\varepsilon}(b^{-1},h_1,\dots,h_d) \cdot P_{d,\varepsilon}(b,h_1^{-1},\dots,h_d^{-1}) \cdot \tau(M,\partial_-M;\xi_0) \end{equation} where $k\in H$ and $\eta=\pm1$ are unknown. (We leave the details of the computations to the interested reader.) To fix the indeterminacy in $\pm H$, it is enough to reduce (\ref{eq:looped_graph}) modulo $I^2$. We deduce from Lemma \ref{lem:I2} that $$ t(M_G)^{\nicefrac{1}{2}} = \eta k \cdot t(M)^{\nicefrac{1}{2}} \in \mathbb{Z}[H]/I^2. $$ We conclude that $\eta=+1$ and $k=1\in H$ (written multiplicatively). \end{proof} \begin{proof}[Proof of Proposition \ref{prop:alpha_properties}] Assertion (\ref{eq:alexcyl}) follows from (\ref{eq:torsion_mapping_cylinder}). To show the additivity of $\alpha$, consider some $M,M'\in \mathcal{IC}$. We abbreviate $$ x := \tau(M,\partial_-M;\xi_0)-t(M)^{\nicefrac{1}{2}} \quad \hbox{and} \quad x' := \tau(M',\partial_-M';\xi_0)-t(M')^{\nicefrac{1}{2}}. $$ By Lemma \ref{lem:multiplicativity}, $\tau\big(M\circ M', \partial_- (M\circ M'); \xi_0\big)$ is equal to $$ \underbrace{t(M)^{\nicefrac{1}{2}}\cdot t(M')^{\nicefrac{1}{2}}}_{= t(M\circ M')^{\nicefrac{1}{2}}} + (x + x') + \underbrace{\left(t(M)^{\nicefrac{1}{2}}-1\right)x'+ \left(t(M')^{\nicefrac{1}{2}}-1\right)x+xx'}_{\in I^3} $$ and we deduce that $\alpha(M\circ M') =\alpha(M) + \alpha(M')$. Thanks to Theorem \ref{thm:torsion_finiteness}, showing that $\alpha$ is a finite-type invariant of degree $\leq 2$ is equivalent to proving that $\iota \circ t$ has the same property, where $\iota: H \to \mathbb{Z}[H]/I^3$ is the canonical map. 
Let $M\in \mathcal{IC}$ and let $S_0\sqcup S_1 \sqcup S_2$ be three pairwise-disjoint surfaces in $\operatorname{int}(M)$, together with some elements $s_0\in \Torelli(S_0), s_1\in \Torelli(S_1), s_2\in \Torelli(S_2)$. For $i=0,1,2$ we also set $t_i := t\left(M_{s_i}\right) -t(M) \in H$ (written additively). For each $P\subset \{0,1,2\}$, let $M_P$ be the result of the Torelli surgeries along the surfaces $S_p$ for which $p \in P$. Since $\tau_1$ is a finite-type invariant of degree $1$, $t\left(M_P\right)-t(M)$ is the sum of the $t_p$ for which $p\in P$. Therefore the alternating sum in $\mathbb{Z}[H]$ $$ \sum_{P\subset \{0,1,2\}} \!\! (-1)^{|P|} t\left(M_P\right) = t(M) \!\! \sum_{P\subset \{0,1,2\}} \!\! (-1)^{|P|} t\left(M_P\right)t(M)^{-1} = t(M) \!\! \sum_{P\subset \{0,1,2\}} \!\! (-1)^{|P|} \prod_{p\in P} t_p $$ is equal to $t(M)\cdot(1-t_0)(1-t_1)(1-t_2) \in I^3$. This shows that $\iota \circ t$, and consequently $\alpha$, are finite-type invariants of degree at most $2$. We now prove the surgery formulas. In the case of the looped graph clasper $G$ of degree $2$, we deduce from Lemma \ref{lem:loop} that \begin{eqnarray*} &&\tau(M_G,\partial_-M_G;\xi_0) \\ &=&\left(1 - (1-h) (1-h') \cdot b -b^{-1} \cdot (1-h^{-1}) (1-h'^{-1}) \right) \cdot \tau(M,\partial_-M;\xi_0) \mod I^3\\ &=& \tau(M,\partial_-M;\xi_0) - (1-h) (1-h') \cdot b -b^{-1} \cdot (1-h^{-1}) (1-h'^{-1}) \mod I^3\\ &=& \tau(M,\partial_-M;\xi_0) - (h-1) (h'-1) - (1-h^{-1}) (1-h'^{-1}) \mod I^3. \end{eqnarray*} Since we have $t(M)=t(M_G)$, we conclude that $\alpha(M_G)=\alpha(M)- 2 hh'$. (It also follows that the degree of the finite-type invariant $\alpha$ is precisely $2$.) The case of the $Y$-graph with two special leaves is deduced from Corollary \ref{cor:2spe} and the fact that $\alpha$ is invariant under $Y_3$-equivalence (by Lemma \ref{lem:Y_FTI}). 
\end{proof} \subsection{The Casson invariant}\label{subsec:casson} \label{subsec:Casson} One can produce from the Casson invariant of homology $3$-spheres an invariant of homology cylinders over $\Sigma$. For this it is necessary to \emph{choose} an embedding of the surface $\Sigma$ in $S^3$ as follows. Let $F_g\subset S^3$ be the surface obtained from the genus $g$ Heegaard surface of $S^3$ by removing a small open disk. We fix an orientation on $F_g$: the handlebody of the genus $g$ Heegaard splitting that induces this orientation on $F_g$ is called \emph{lower} while the other one is called \emph{upper}. An embedding $j:\Sigma \to S^3$ is called a \emph{Heegaard embedding} if its image is $F_g$ and if $j: \Sigma \to F_g$ is orientation-preserving. Let $M$ be a homology cylinder over $\Sigma$. Denote by $S^3(M,j)$ the homology $3$-sphere obtained by ``cutting'' $S^3$ along $j(\Sigma)=F_g$ and by ``inserting'' $M$ at this place: more precisely we define \begin{equation}\label{eq:twisting_S3} S^3(M,j) := \left(S^3 \setminus (j(\Sigma) \times [-1,1])\right) \cup_{j' \circ m^{-1}} M \end{equation} where $j(\Sigma) \times [-1,1]$ denotes a closed regular neighborhood of $j(\Sigma)$ in $S^3$ and $j'$ is the restriction to the boundary of the homeomorphism $j \times \operatorname{Id}: \Sigma \times [-1,1] \to j(\Sigma) \times [-1,1]$. Evaluating the Casson invariant $\lambda$ on this homology $3$-sphere yields a map $$ \lambda_j: \mathcal{IC} \longrightarrow \mathbb{Z}, \ M \longmapsto \lambda\left(S^3(M,j)\right) $$ which strongly depends on the embedding $j$. We sometimes abbreviate $\lambda := \lambda_j$ and the dependence on $j$ is discussed in \S \ref{sec:core_Casson}. It has been proved by Ohtsuki that the Casson invariant of homology $3$-spheres is a finite-type invariant \cite{Ohtsuki}. 
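As a quick sanity check (our observation, immediate from the definition (\ref{eq:twisting_S3})): if the trivial cylinder $\Sigma \times I$ is inserted, the cut-open $S^3$ is reglued exactly as before, so that

```latex
S^3(\Sigma \times I, j) \;\cong\; S^3
\qquad \hbox{and hence} \qquad
\lambda_j(\Sigma \times I) \;=\; \lambda(S^3) \;=\; 0 ,
```

for every Heegaard embedding $j$.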
More generally, the ``sum formulas'' of Morita \cite{Morita_Casson2} or Lescop \cite{Lescop} for the Casson invariant imply that $\lambda_j: \mathcal{IC} \rightarrow \mathbb{Z}$ is a finite-type invariant of degree 2. The same formulas show that $\lambda_j: \mathcal{IC} \rightarrow \mathbb{Z}$ is \emph{not} additive: for any $M,M'\in \mathcal{IC}$, we actually have \begin{equation} \label{eq:sum_formula} \lambda_j(M\circ M')= \lambda_j(M) + \lambda_j(M') +2 \cdot \tau_1(M) \star_j \tau_1(M') \end{equation} where $\star_j: \Lambda^3 H \times \Lambda^3 H \to \mathbb{Z}$ is a certain non-trivial bilinear pairing whose definition depends on $j$ \cite{Morita_Casson2,Lescop,CHM}. Finally, let us observe that the function $\lambda_j$ is preserved by stabilization. More precisely, if the surface $\Sigma$ is stabilized to a surface $\Sigma^s$ of genus $g^s$ as shown in Figure \ref{fig:stabilization} and if the Heegaard embedding $j^s: \Sigma^s \to F_{g^s}$ extends $j: \Sigma \to F_g$, then we have $\lambda_{j^s}(M^s) = \lambda_j(M)$ for all $M \in \mathcal{IC}$. \subsection{The Birman--Craggs homomorphism} \label{subsec:Birman-Craggs} The Birman--Craggs homomorphism is a representation of the Torelli group derived from the Rochlin invariant of spin closed $3$-manifolds \cite{BC,Johnson_BC}. This representation extends in a direct way to the monoid of homology cylinders \cite{Levine_enlargement,MM}. To briefly recall its definition, we need the set $\operatorname{Spin}(\Sigma)$ of spin structures on $\Sigma$, which is an affine space over the $\mathbb{Z}_2$-vector space $H^1(\Sigma;\mathbb{Z}_2)$. 
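For concreteness (a standard count, not spelled out in the text): since $\operatorname{Spin}(\Sigma)$ is a non-empty affine space over $H^1(\Sigma;\mathbb{Z}_2)\simeq (\mathbb{Z}_2)^{2g}$, the surface $\Sigma$ carries exactly

```latex
\left| \operatorname{Spin}(\Sigma) \right|
\;=\; \left| H^1(\Sigma;\mathbb{Z}_2) \right|
\;=\; 2^{2g}
```

spin structures; in particular the spaces of boolean functions on $\operatorname{Spin}(\Sigma)$ considered below are finite-dimensional over $\mathbb{Z}_2$.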
As shown in \cite{Johnson_quadratic}, the space of spin structures on $\Sigma$ can be identified with the space $$ \left\{H\otimes \mathbb{Z}_2 \stackrel{q}{\longrightarrow} \mathbb{Z}_2: \forall x,y \in H\otimes \mathbb{Z}_2, \ q(x+y) - q(x) - q(y) = \omega(x, y)\ \operatorname{mod}\ 2 \right\} $$ of \emph{quadratic forms} whose polar form is the intersection pairing mod $2$, and this identification will be tacit in the sequel. We shall denote by $$ B:= \operatorname{Map}(\operatorname{Spin}(\Sigma), \mathbb{Z}_2) $$ the space of boolean functions on $\operatorname{Spin}(\Sigma)$. For every $n\geq 0$, let $B_{\leq n}$ denote the subspace of $B$ consisting of polynomial functions of degree at most $n$, i.e.\ sums of products of $n$ affine functions. In particular, $B_{\leq 1}$ is the space of affine functions and includes the following: \begin{equation} \label{eq:affine_functions} \left\{\begin{array}{rcl} \operatorname{Spin}(\Sigma) & \overset{\overline{1}}{\longrightarrow} & \mathbb{Z}_2\\ q & \longmapsto & 1 \end{array}\right. \quad \quad \quad \left\{\begin{array}{rcl} \operatorname{Spin}(\Sigma) & \overset{\overline{h}}{\longrightarrow} & \mathbb{Z}_2\\ q & \longmapsto & q(h) \end{array}\right. \quad \hbox{where $h\in H$.} \end{equation} The \emph{$n$-th derivative} of a boolean function $f: \operatorname{Spin}(\Sigma) \to \mathbb{Z}_2$ at $\sigma \in \operatorname{Spin}(\Sigma)$ is the map $\operatorname{d}_\sigma^{n}f: H^1(\Sigma;\mathbb{Z}_2)^n \to \mathbb{Z}_2$ defined by \begin{equation} \label{eq:derivative} \operatorname{d}_\sigma^{n}f(y_1,\dots,y_n) := \sum_{P \subset \{1,\dots,n\}}(-1)^{|P|}\cdot f\left(\sigma + \overrightarrow{y_P}\right) \end{equation} where $y_P$ is the sum of the $y_p$'s for which $p\in P$. As a general fact, a map $f$ is polynomial of degree $\leq n$ if and only if $\operatorname{d}_\sigma^{n+1}\! f$ vanishes at some point $\sigma$ and, in this case, $\operatorname{d}_\sigma^{n}f$ does not depend on $\sigma$ and is multilinear. 
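These discrete derivatives are easy to verify on a computer. The sketch below is our illustration only: the genus, the base quadratic form $q_0$ and the two homology classes are arbitrary choices, not data from the text. It models $\operatorname{Spin}(\Sigma)$ by quadratic forms $q_\sigma(x)=q_0(x)+\omega(\sigma,x)$ on $(\mathbb{Z}_2)^{2g}$, in the spirit of the identification above, implements formula (\ref{eq:derivative}), and checks on the degree-$2$ function $q\mapsto q(h)\,q(h')$ that the third derivative vanishes and that the second derivative is independent of the base point $\sigma$.

```python
# Finite check of the discrete derivative (eq:derivative) for boolean
# functions on Spin(Sigma).  Illustrative model: spin structures are the
# quadratic forms q_s(x) = q0(x) + omega(s, x) on (Z_2)^{2g}; the genus g,
# the base form q0 and the classes h, hp are arbitrary choices.
from itertools import combinations, product

g = 2                                   # illustrative genus
dim = 2 * g                             # dim H_1(Sigma; Z_2) = 2g
vectors = list(product((0, 1), repeat=dim))

def omega(x, y):
    # standard symplectic pairing mod 2 on (Z_2)^{2g}
    return sum(x[2 * i] * y[2 * i + 1] + x[2 * i + 1] * y[2 * i]
               for i in range(g)) % 2

def add(x, y):
    # addition in (Z_2)^{2g}
    return tuple((a + b) % 2 for a, b in zip(x, y))

def q0(x):
    # a base quadratic form whose polar form is omega
    return sum(x[2 * i] * x[2 * i + 1] for i in range(g)) % 2

def q(s, x):
    # the "spin structure" s, seen as a quadratic form on (Z_2)^{2g}
    return (q0(x) + omega(s, x)) % 2

def derivative(f, sigma, ys):
    # n-th derivative of f at sigma, formula (eq:derivative);
    # over Z_2 the signs (-1)^{|P|} are irrelevant
    n = len(ys)
    total = 0
    for r in range(n + 1):
        for P in combinations(range(n), r):
            point = sigma
            for p in P:
                point = add(point, ys[p])
            total += f(point)
    return total % 2

h, hp = (1, 0, 0, 0), (0, 1, 0, 0)      # dual classes: omega(h, hp) = 1
f2 = lambda s: q(s, h) * q(s, hp)       # boolean function of degree <= 2
```

In this model the constant second derivative of $f_2$ illustrates the isomorphism (\ref{eq:iso_Boolean}): it is the alternating form corresponding to $h\wedge h'\in \Lambda^2 H_{(2)}$.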
Since the ground field is here $\mathbb{Z}_2$, the signs do not count in (\ref{eq:derivative}) and the $n$-th derivative $\operatorname{d}^{n}_\sigma f$ of a function $f$ vanishes when two arguments are repeated. Thus, we have a canonical isomorphism \begin{equation}\label{eq:iso_Boolean} \operatorname{d}^{n}:B_{\leq n} /B_{\leq n-1} \stackrel{\simeq}{\longrightarrow} \operatorname{Hom}\left(\Lambda^n H^1(\Sigma;\mathbb{Z}_2), \mathbb{Z}_2 \right) \simeq \Lambda^n H_{(2)} \end{equation} where $H_{(2)}:= H_1(\Sigma;\mathbb{Z}_2) = H \otimes \mathbb{Z}_2$ is identified with $\operatorname{Hom}(H^1(\Sigma;\mathbb{Z}_2),\mathbb{Z}_2)$. Now, let $j:\Sigma \hookrightarrow S^3$ be any embedding. Pulling back the (unique) spin structure $\sigma_0$ of $S^3$ by $j$ gives an element $j^\ast\sigma_0\in \operatorname{Spin}(\Sigma)$, and any element of $\operatorname{Spin}(\Sigma)$ can be realized in this way. For a homology cylinder $M$ over $\Sigma$, denote by $S^3(M,j)$ the homology $3$-sphere defined at (\ref{eq:twisting_S3}). Evaluating Rochlin's $\mu$ invariant on this homology sphere yields a monoid homomorphism $ \mathcal{IC} \to \mathbb{Z}_{2}$ which only depends on $j^*\sigma_0\in \operatorname{Spin}(\Sigma)$. By making the spin structure vary on $\Sigma$, one gets the \emph{Birman--Craggs homomorphism} $$ \beta: \mathcal{IC} \longrightarrow B_{\leq 3}, \ M\longmapsto \left( j^\ast\sigma_0\mapsto \mu\left(S^3(M,j)\right)\right). $$ The map $\beta$ is an additive finite-type invariant of degree $1$, which is preserved by stabilization of the surface $\Sigma$. Let us recall how $\beta$ changes under surgery along a $Y$-graph. \begin{lemma}\label{lem:Y_beta} Let $Y$ be a $Y$-graph in a homology cylinder $M$, whose leaves are ordered and oriented in an arbitrary way. We denote by $h_1,h_2,h_3\in H$ their homology classes and by $f_1,f_2,f_3 \in \mathbb{Z}$ their framing numbers in $M$ (as defined in Appendix \ref{subsec:framing_numbers}). 
Then, using the notation (\ref{eq:affine_functions}), we have $$ \beta\left(M_Y\right) - \beta(M) = \prod_{i=1}^3 \left(\overline{h_i}+ f_i \cdot \overline{1} \right) \ \in B_{\leq 3}. $$ \end{lemma} \begin{proof} This formula is essentially contained in \cite{MM}. Let $q\in \operatorname{Spin}(\Sigma)$ (which we think of as a quadratic form $H_{(2)} \to \mathbb{Z}_2$ with polar form $\omega\otimes \mathbb{Z}_2$) and let $\sigma$ be the unique spin structure on $M$ whose pull-back by $m_+:\Sigma \to M$ (or, equivalently, by $m_-$) gives $q$. We denote by $FM$ the bundle of oriented frames of $M$ with fiber $\operatorname{GL}_+(3;\mathbb{R})$, and we think of $\sigma$ as an element of $H^1(FM;\mathbb{Z}_2)$ which is not zero on the fiber. We deduce from \cite[Lemma 3.14]{MM} that $$ \langle \beta\left(M_Y\right) - \beta(M), q \rangle = \prod_{i=1}^3 \langle \sigma, t_{L_i} \rangle \ \in \mathbb{Z}_2 $$ where $L_1, L_2,L_3$ denote the leaves of $Y$ and, for any framed oriented knot $K$ in $M$, $t_K \in H_1(FM)$ denotes the homology class of the oriented curve in $FM$ obtained by lifting $K$ with an extra $(+1)$-twist to $FM$. Thus it is enough to check the following. \begin{claim} For any oriented framed knot $K$ in $M$, we have $q([K]) = \langle \sigma, t_K \rangle + \operatorname{Fr}(K)$ where $[K]\in H$ denotes the homology class of $K$ and $\operatorname{Fr}(K) \in \mathbb{Z}$ its framing number. \end{claim} \noindent We set $q'(K) := \langle \sigma, t_K \rangle + \operatorname{Fr}(K) \in \mathbb{Z}_2$ and we first check that $q'(K)$ only depends on the homology class of $K$. For this, let $K_1$ and $K_2$ be two oriented framed knots such that $[K_1]=[K_2] \in H$. Then we can find a compact oriented surface $S \subset M$ such that $\partial S = K_1 \sqcup (-K_2)$ and $K_1$ is $0$-framed with respect to this surface $S$. 
Let $n$ be the framing of $K_2$ with respect to $S$, and let $K_2'$ be the oriented framed knot obtained from $K_2$ by an extra $(-n)$-twist so that $K'_2$ is $0$-framed with respect to $S$. It follows from \cite[Lemma 2.7]{MM} that $t_{K_1}=t_{K_2'}$ and, using Lemma \ref{lem:framed_connect}, we obtain that $\operatorname{Fr}(K_1)=\operatorname{Fr}(K_2')$. We deduce that $$ \langle \sigma, t_{K_1} \rangle + \operatorname{Fr}(K_1) = \langle \sigma, t_{K_2'} \rangle + \operatorname{Fr}(K_2') = \langle \sigma, t_{K_2} \rangle +n + \operatorname{Fr}(K'_2) = \langle \sigma, t_{K_2} \rangle + \operatorname{Fr}(K_2) \in \mathbb{Z}_2. $$ Thus, we get a map $q':H \to \mathbb{Z}_2$. Moreover this map is quadratic with polar form $\omega\otimes \mathbb{Z}_2$ since, for any oriented framed knots $K$ and $L$ in $M$, we have \begin{eqnarray*} q'([K]+[L]) &= &q'([K \sharp L]) \\ &= &\langle \sigma, t_{K} + t_{L} \rangle + \operatorname{Fr}(K) + \operatorname{Fr}(L) + 2 \operatorname{Lk}(K,L)\\ & =& q'([K])+ q'([L]) + \omega([K],[L]). \end{eqnarray*} (Here we have used \cite[Lemma 2.7]{MM} and Appendix B again.) In particular, it follows that $q'$ factorizes to a quadratic form $q':H_{(2)} \to \mathbb{Z}_2$. To conclude that $q=q'$, it is enough to check that $q([\alpha])=q'([\alpha])$ for any oriented simple closed curve $\alpha$ on $\Sigma$. Let $\alpha_+$ be the oriented framed knot obtained by pushing the curve $m_+(\alpha)$, framed along $m_+(\Sigma)$, in the interior of $M$. The way a spin structure on $\Sigma$ is identified with a quadratic form in \cite{Johnson_quadratic} implies that $q([\alpha])= \langle \sigma, t_{\alpha_+}\rangle$ and, since we have $\operatorname{Fr}(\alpha_+)=0$ in this case, we conclude that $q([\alpha])=q'([\alpha])$. 
\end{proof} An important property of $\beta$ is that, for any $M \in \mathcal{IC}$, the third derivative of the cubic function $\beta(M)$ is the mod $2$ reduction of $\tau_1(M)$: \begin{equation} \label{eq:d^3beta} \xymatrix{ \mathcal{IC} \ar[r]^-\beta \ar[d]_-{\tau_1} & B_{\leq 3} \ar[r]^-{\operatorname{d}^3} & \Lambda^3 H_{(2)} \ar@{=}[d]\\ \Lambda^3 H \ar@{->>}[rr]_-{\mod 2 } && \Lambda^3 H_{(2)} } \end{equation} This relation is due to Johnson \cite{Johnson_abelianization} (in the case of the Torelli group), and it can be proved by comparing how $\beta$ and $\tau_1$ change under surgery along a $Y$-graph \cite{MM}. The following lemmas give the next derivatives of $\beta$. \begin{lemma}\label{lem:d^2beta} The following diagram is commutative: $$ \xymatrix{ \mathcal{KC} \ar[d]_{\tau_2} \ar[r]^{\beta} & B_{\leq 2} \ar[r]^-{\operatorname{d}^2} & \Lambda^2H_{(2)} \\ \frac{\left(\Lambda^2 H \otimes \Lambda^2 H\right)^{\mathfrak{S}_2}}{\Lambda^4H} \ar@{->>}[rr]_-{L} && \frac{\Lambda^2H}{2 \cdot \Lambda^2H} \ar[u]_-\simeq } $$ Here $\mathcal{KC} = \mathcal{C}[2]$ denotes the second term of the Johnson filtration and $L$ is the homomorphism appearing in the short exact sequence (\ref{eq:sym_to_sym}). \end{lemma} \begin{proof} It is a consequence of (\ref{eq:d^3beta}) that the restriction of $\beta$ to $\mathcal{KC}$ takes its values in $B_{\leq 2}$. Now let $M\in \mathcal{KC}$. Using Lemma \ref{lem:Y_beta}, we can find some $Y$-graphs $G_1, \dots, G_m$ with one special leaf in $(\Sigma \times I)$ such that $$ \beta(M) = \sum_{i=1}^m \beta \left( (\Sigma\times I)_{G_i} \right) \ \in B_{\leq 2}. $$ Since the $Y_2$-equivalence is classified by the pair $(\tau_1,\beta)$ \cite{MM}, we deduce that $$ M\stackrel{Y_2}{\sim} \prod_{i=1}^m (\Sigma\times I)_{G_i}.
$$ Therefore, by clasper calculus, we can find graph claspers $H_1,\dots, H_n$ of degree $2$ in $(\Sigma\times I)$ such that $$ M\stackrel{Y_3}{\sim} \prod_{j=1}^n (\Sigma \times I)_{H_j} \circ \prod_{i=1}^m (\Sigma\times I)_{G_i}. $$ Since $\tau_2$ is invariant under $Y_3$-equivalence, we have \begin{equation} \tau_2(M) = \sum_{j=1}^n \tau_2\left((\Sigma\times I)_{H_j} \right) + \sum_{i=1}^m \tau_2\left((\Sigma\times I)_{G_i} \right). \end{equation} We deduce from Lemma \ref{lem:tau2} and Lemma \ref{lem:tau2_2} that $$ L\tau_2(M) = 0 + \sum_{i=1}^m l_i \wedge r_i \ \in \Lambda^2 H_{(2)} $$ where $l_i,r_i \in H_{(2)}$ denote the mod $2$ homology classes of the non-special leaves of $G_i$. We conclude thanks to Lemma \ref{lem:Y_beta}. \end{proof} \begin{rem} The fact that the second Johnson homomorphism $\tau_2$ is related to the Birman--Craggs homomorphism $\beta$ has already been observed by Yokomizo for the Johnson subgroup of the Torelli group \cite{yokomizo}. \end{rem} \begin{lemma}\label{lem:d^1beta} The following diagram is commutative: $$ \xymatrix{ \mathcal{C}[3] \ar[d]_{\alpha} \ar[r]^{\beta} & B_{\leq 1} \ar[r]^{\operatorname{d}^1} & H_{(2)} \ar@{>->}[d]^-s\\ S^2 H \ar@{->>}[rr]_-{\mod 2} & &S^2H_{(2)} } $$ Here $s$ is the square map defined by $x \mapsto x^2$. \end{lemma} \begin{proof} It is a consequence of Lemma \ref{lem:d^2beta} that the restriction of $\beta$ to $\mathcal{C}[3]$ takes its values in $B_{\leq 1}$. Now let $M\in \mathcal{C}[3]$. Using the same arguments as in the proof of Lemma \ref{lem:d^2beta}, we can find some $Y$-graphs $G_1,\dots ,G_m$ in $(\Sigma \times I)$ with \emph{two} special leaves and graph claspers $H_1,\dots,H_n$ of degree $2$ in $(\Sigma \times I)$ such that $$ M\stackrel{Y_3}{\sim} \prod_{j=1}^n (\Sigma \times I)_{H_j} \circ \prod_{i=1}^m (\Sigma\times I)_{G_i}.
$$ Since we have $\tau_2(M)=0$ by assumption and $\tau_2\left((\Sigma\times I)_{G_i}\right)=0$ by Lemma \ref{lem:tau2_2}, we deduce that $\sum_j \tau_2\left((\Sigma \times I)_{H_j}\right)=0$. Then a more delicate use of clasper calculus shows that we can assume each graph clasper $H_j$ to be looped. (This will be proved in Lemma \ref{lem:tau2_loop} below.) We deduce from Proposition \ref{prop:alpha_properties} that $$ \alpha(M) = \sum_{j=1}^n\underbrace{\alpha\left((\Sigma \times I)_{H_j} \right)}_{\in 2\cdot S^2H} + \sum_{i=1}^m \underbrace{\alpha\left((\Sigma\times I)_{G_i}\right)}_{= g_i^2} $$ where $g_i \in H$ is the homology class of the non-special leaf of $G_i$ (which is oriented in an arbitrary way). Therefore, we have $$ \alpha(M)\!\! \mod 2 = \sum_{i=1}^m g_i^2 \in S^2 H_{(2)}. $$ Again, we conclude thanks to Lemma \ref{lem:Y_beta}. \end{proof} \begin{remark} Since $\alpha$ is trivial on the Torelli group, we deduce from the previous lemmas that the function $\beta(f):\operatorname{Spin}(\Sigma)\to \mathbb{Z}_2$ is constant for any $f\in \mcg[3]$. This phenomenon has already been observed by Johnson in \cite[p.178]{Johnson_survey}. \end{remark} The next statement is deduced from the previous lemmas, and it will be used in the proof of Theorem A. \begin{lemma}\label{lem:beta} If two homology cylinders $M$ and $M'$ over $\Sigma$ have the same invariants $\rho_3$, $\alpha$ and $\lambda_j$ (for some Heegaard embedding $j$ of $\Sigma$ in $S^3$), then the Birman--Craggs homomorphism $\beta$ does not distinguish $M$ from $M'$. \end{lemma} \begin{proof} Since the quotient monoid $\mathcal{IC}/Y_3$ is a group according to Goussarov and Habiro \cite{Goussarov,Habiro}, there exists an $N \in \mathcal{IC}$ such that $M\circ N$ is $Y_3$-equivalent to $M'$. The fact that $\rho_3(M)=\rho_3(M')$ implies that $\rho_3(N)=1$, so that $N$ belongs to $\mathcal{C}[3]$. By (\ref{eq:d^3beta}) and Lemma \ref{lem:d^2beta} we obtain that $\beta(N)\in B_{\leq 1}$. 
Next, the fact that $\alpha(M)=\alpha(M')$ implies that $\alpha(N)=0$, and we deduce from Lemma \ref{lem:d^1beta} that the Boolean function $\beta(N): \operatorname{Spin}(\Sigma) \to \mathbb{Z}_2$ is constant. Finally, since $M$ and $M'$ have the same Casson invariant $\lambda_j$, we deduce from formula (\ref{eq:sum_formula}) that $\lambda_j(N)=0$. Therefore, we have $$ \beta(N)(j^*\sigma_0) = \mu\left(S^3(N,j)\right) = \lambda\left(S^3(N,j)\right) \! \! \!\! \mod 2 = \lambda_j(N) \! \! \!\! \mod 2 = 0 $$ which shows that $\beta(M')-\beta(M) = \beta(N)$ is the trivial map. \end{proof} \section{Some diagrammatic invariants of homology cylinders} \label{sec:diagrams} In this section, we briefly review the LMO homomorphism (which is a diagrammatic representation of the monoid $\mathcal{IC}$, introduced in \cite{CHM,HM_SJD}) and its connection with clasper surgery. We recall how the invariants $\tau_1$, $\tau_2$ and $\lambda$ introduced in \S \ref{sec:invariants} can be extracted from the LMO homomorphism \cite{CHM}, and we give a similar result for the quadratic part $\alpha$ of the relative RT torsion. \subsection{Jacobi diagrams} \label{subsec:Jacobi} We start by defining the diagrammatic spaces that we shall need. A \emph{Jacobi diagram} is a finite graph whose vertices have valency $1$ or $3$: univalent vertices are called \emph{external} vertices, while trivalent vertices are called \emph{internal} vertices and are assumed to be oriented. (An \emph{orientation} of a vertex is a cyclic ordering of its incident half-edges.) The \emph{internal degree} (or simply \emph{degree}) of a Jacobi diagram is its number of internal vertices, and the \emph{loop degree} is its first Betti number. We call a connected Jacobi diagram of internal degree $0$ a {\it strut}, and we call a connected Jacobi diagram of internal degree $1$ and loop degree $1$ a {\it lasso}. A Jacobi diagram is \emph{colored} by a set $S$ if a map from the set of its external vertices to $S$ is specified.
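For instance, since internal vertices are trivalent and external vertices are univalent, a connected Jacobi diagram with $v_i$ internal vertices and $v_e$ external vertices has $(3v_i+v_e)/2$ edges, so that its loop degree equals
$$
\frac{3v_i+v_e}{2} - (v_i+v_e) + 1 = \frac{v_i-v_e}{2}+1.
$$
In particular, a lasso ($v_i=1$, $v_e=1$) has loop degree $1$, while a connected diagram of internal degree $1$ with three external vertices has loop degree $0$.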
As is customary \cite{BN}, in figures we use dashed lines to depict Jacobi diagrams, and we take the cyclic ordering at a trivalent vertex to be given by the counter-clockwise orientation: see Figure \ref{fig:diagrams_examples} for some examples. \begin{figure}[h] \begin{center} \includegraphics[scale=0.45]{diagrams_examples} \end{center} \caption{Some examples of Jacobi diagrams: the strut, the lasso, the Y graph, the H graph, the $\Phi$ graph and the $\Theta$ graph.} \label{fig:diagrams_examples} \end{figure} As in the previous sections, we denote $H:=H_1(\Sigma)$ and $H_\mathbb{Q} := H\otimes \mathbb{Q}$. We consider the following abelian group: $$ \jacobi(H) := \frac{\mathbb{Z}\cdot \left\{ \begin{array}{c} \hbox{Jacobi diagrams without strut component}\\ \hbox{and with external vertices colored by } H \end{array} \right\}} {\hbox{AS, IHX, loop, multilinearity}}. $$ The ``AS'' and ``IHX'' relations are diagrammatic analogues of the antisymmetry and Jacobi identities in Lie algebras:\\ \begin{center} \labellist \small \hair 2pt \pinlabel {AS} [t] at 102 -5 \pinlabel {IHX} [t] at 543 -5 \pinlabel {$= \ -$} at 102 46 \pinlabel {$-$} at 484 46 \pinlabel {$+$} at 606 46 \pinlabel {$=0$} at 721 46 \endlabellist \centering \includegraphics[scale=0.4]{AS_IHX} \end{center} \vspace{0.5cm} \noindent The ``loop'' relation says that a Jacobi diagram with a looped edge (e.g$.$ a lasso) is trivial; it follows from the IHX relation in internal degree $\geq 2$. The ``multilinearity'' relation states that a Jacobi diagram $D$ having one external vertex $v$ colored by $n_1\cdot h_1 + n_2\cdot h_2$ (with $n_1,n_2 \in \mathbb{Z}$ and $h_1,h_2 \in H$) is equivalent to the linear combination $n_1\cdot D_1 + n_2 \cdot D_2$ where $D_i$ is the Jacobi diagram $D$ with the vertex $v$ colored by $h_i$. The abelian group $\jacobi(H)$ is graded by the internal degree: $$ \jacobi(H) = \bigoplus_{i=0}^\infty \jacobi_i(H), $$ the degree $0$ part being spanned by the empty diagram $\varnothing$.
We denote by $\jacobi^c(H)$ the subgroup of $\jacobi(H)$ spanned by connected Jacobi diagrams. Its internal degree $i$ part $\jacobi^c_{i}(H)$ is in turn graded by the loop degree: $$ \jacobi^c_{i}(H)=\bigoplus_{l=0}^{d_i} \jacobi^c_{i,l}(H), $$ where $d_i=(i+2)/2$ if $i$ is even, and $d_i=(i-1)/2$ if $i$ is odd. The projection onto the loop-degree $l$ part of $\jacobi^c_{i}(H)$ is denoted by $$ p_{i,l}:\jacobi^c_{i}(H)\longrightarrow \jacobi^c_{i,l}(H). $$ There is also a version of $\jacobi(H)$ with rational coefficients: this $\mathbb{Q}$-vector space is denoted by $\jacobi(H_\mathbb{Q})$ and is canonically isomorphic to $\jacobi(H)\otimes \mathbb{Q}$. The space $\jacobi(H)$ is well suited for computations which, for instance, involve the representation theory of the symplectic group $\operatorname{Sp}(H)$. However, from a topological point of view, the following variant of $\jacobi(H)$, which has been introduced by Habiro in \cite{Habiro}, is more convenient to use with clasper calculus: see \S \ref{subsec:surgery_map} in this connection. $$ \jacobi^{<}(H) := \frac{\mathbb{Z}\cdot \left\{ \begin{array}{c} \hbox{Jacobi diagrams without strut component and}\\ \hbox{with external vertices colored by } H \hbox{ and totally ordered} \end{array} \right\}} {\hbox{AS, IHX, loop, multilinearity, STU-like}}. 
$$ The AS, IHX, loop and multilinearity relations are as before, while the ``STU-like'' relation is defined as follows: $$ \begin{array}{c} \labellist \small \hair 2pt \pinlabel {$=\quad \quad \omega(x,y)\cdot$} at 395 50 \pinlabel {$x$} [t] at 2 0 \pinlabel {$y$} [t] at 58 0 \pinlabel {$y$} [t] at 203 0 \pinlabel {$x$} [t] at 256 0 \pinlabel {$-$} at 130 45 \pinlabel {$<$} at 30 4 \pinlabel {$<$} at 229 4 \pinlabel {$\cdots<$} [r] at 0 4 \pinlabel {$\cdots<$} [r] at 198 4 \pinlabel {$< \cdots$} [l] at 61 4 \pinlabel {$< \cdots$} [l] at 261 4 \endlabellist \centering \includegraphics[scale=0.6]{STU} \end{array} $$ (Recall that $\omega:H \times H \to \mathbb{Z}$ denotes the intersection pairing.) Again, there is a rational version of $\jacobi^{<}(H)$, which is denoted by $\jacobi^{<}(H_\mathbb{Q})$ and which is canonically isomorphic to $\jacobi^{<}(H)\otimes \mathbb{Q}$. With rational coefficients, there is an isomorphism $$ \chi: \jacobi(H_\mathbb{Q}) \stackrel{\simeq}{\longrightarrow} \jacobi^{<}(H_\mathbb{Q}) $$ defined, for any Jacobi diagram $D \in \jacobi(H_\mathbb{Q})$ with $e$ external vertices, by $$ \chi(D) := \frac{1}{e!} \cdot \left(\hbox{sum of all ways of ordering the $e$ external vertices of $D$} \right). $$ (See \cite[Proposition 3.1]{HM_SJD}.) Besides, there is another version of the abelian group $\jacobi^{<}(H)$, which is denoted by $\jacobi^{<}(-H)$ and is defined as $\jacobi^{<}(H)$, except that one uses the symplectic form $-\omega$ in the STU-like relation instead of $\omega$. These two spaces are canonically isomorphic, via the map $$ s: \jacobi^{<}(-H) \stackrel{\simeq}{\longrightarrow} \jacobi^{<}(H) $$ defined by $s(D) := (-1)^{w(D)}D$ for any Jacobi diagram $D$, where $w(D)$ denotes the Euler characteristic of $D$ modulo $2$. Finally, we shall need a third space of Jacobi diagrams.
We denote by $L^\pm$ the abelian group freely generated by the set $$ \{1^+,\dots,g^+\} \cup \{1^-,\dots,g^-\}, $$ which consists of two copies of the finite set $\{1,\dots,g\}$, labeled by ``$+$'' and ``$-$'' respectively. Then, we consider the abelian group $$ \jacobi(L^\pm) := \frac{\mathbb{Z}\cdot \left\{ \begin{array}{c} \hbox{Jacobi diagrams without strut component}\\ \hbox{and with external vertices colored by } L^\pm \end{array} \right\}} {\hbox{AS, IHX, loop, multilinearity}} $$ and its version $\jacobi(L^\pm_\mathbb{Q})$ with rational coefficients. We also denote by $\jacobi^c(L^\pm)$ the subgroup of $\jacobi(L^\pm)$ spanned by connected Jacobi diagrams. There is a projection of $\jacobi^c_i(L^\pm)$ onto its loop-degree $l$ part which we denote by $$ p_{i,l}:\jacobi^c_{i}(L^\pm)\longrightarrow \jacobi^c_{i,l}(L^\pm). $$ Of course, if we choose a system of meridians and parallels $(\alpha_i,\beta_i)_{i=1}^g$ on the surface $\Sigma$ as depicted in Figure \ref{fig:basis}, then we have an ``obvious'' isomorphism between $\jacobi(L^\pm)$ and $\jacobi(H)$ which is induced by the group isomorphism \begin{equation}\label{eq:L+-_H} L^\pm \stackrel{\simeq}{\longrightarrow} H,\quad \ i^- \longmapsto [\alpha_i], \ j^+ \longmapsto [\beta_j]. \end{equation} Observe that the subgroup generated by $\{1^-,\dots,g^-\}$ (respectively $\{1^+,\dots,g^+\}$) then corresponds to the Lagrangian subgroup $\langle \alpha_1,\dots, \alpha_g \rangle$ of $H$ (respectively $\langle \beta_1,\dots, \beta_g \rangle$). 
But there is also a ``non-obvious'' isomorphism between $\jacobi(L^\pm_\mathbb{Q})$ and $\jacobi(H_\mathbb{Q})$, namely the map $\kappa$ defined by the following composition: \begin{equation} \label{eq:kappa} \xymatrix{ \jacobi(L^\pm_\mathbb{Q}) \ar[r]^-\varphi_-\simeq \ar@{-->}@/_1.8pc/[rrr]_-\kappa & \jacobi^{<}(-H_\mathbb{Q}) \ar[r]^-s_-\simeq & \jacobi^<(H_\mathbb{Q}) \ar[r]^-{\chi^{-1}}_-\simeq & \jacobi(H_\mathbb{Q}) } \end{equation} Here the isomorphism $\varphi$ is defined by declaring that ``each $i^-$-colored vertex should be lower than any $i^+$-colored vertex'' and by changing the colors of external vertices according to (\ref{eq:L+-_H}). (See \cite[Lemma 8.4]{CHM}.) Note that $\kappa$ is explicitly given by the formula \begin{equation} \label{eq:kappa_formula} \kappa(D) = (-1)^{w(D)}\cdot \left( \! \! \! \left. \begin{array}{c} \hbox{sum of all ways of $(\times 1/2)$-gluing \emph{some} $i^-$-colored}\\ \hbox{vertices of $D$ with \emph{some} of its $i^+$-colored vertices} \end{array} \! \right| \! \begin{array}{l} j^+ \mapsto [\beta_j] \\ j^- \mapsto [\alpha_j] \end{array} \! \! \! \right) \end{equation} for any Jacobi diagram $D$, where $w(D)$ denotes the Euler characteristic of $D$ modulo $2$, and where a ``$(\times 1/2)$-gluing'' means a gluing together with a multiplication by $1/2$. The reasons to be interested in this more sophisticated isomorphism $\kappa$ will be apparent in the next subsection. \begin{figure} \begin{center} {\labellist \small \hair 0pt \pinlabel {$\alpha_1$} [l] at 310 105 \pinlabel {$\alpha_g$} [l] at 568 65 \pinlabel {$\beta_1$} [b] at 216 80 \pinlabel {$\beta_g$} [b] at 472 69 \endlabellist} \includegraphics[scale=0.52]{surface} \end{center} \caption{The surface $\Sigma$ and a system of meridians and parallels $(\alpha,\beta)$.} \label{fig:basis} \end{figure} We conclude this review of diagrammatic spaces by the following technical fact. 
\begin{lemma}\label{lem:no_torsion} The abelian groups $\jacobi_2(H)$ and $\jacobi^{<}_2(H)$ are torsion-free. \end{lemma} \begin{proof} The isomorphism $\varphi$, whose definition is recalled in the previous paragraph, exists with integral coefficients. (Indeed, the proof given in \cite[Lemma 8.4]{CHM} works with coefficients in $\mathbb{Z}$ as well.) Therefore, the abelian group $\jacobi^{<}(H)$ is isomorphic to $\jacobi(L^\pm) \simeq \jacobi(H)$, so that it is enough to prove the lemma for $\jacobi_2(H)$. The abelian group $\jacobi_2(H)$ is the direct sum of $\jacobi_2^c(H)$ and $S^2\jacobi_1^c(H) \simeq S^2 \Lambda^3H$. To see that $\jacobi_2^c(H)$ has no torsion, we use the loop degree: $$ \jacobi^c_2(H) = \jacobi^c_{2,0}(H)\oplus \jacobi^c_{2,1}(H)\oplus \jacobi^c_{2,2}(H). $$ By the multilinearity relation, we have $$ \jacobi^c_{2,1}(H)=\left\langle \left. \phin{h}{h'} \right\vert h,h'\in H \right\rangle\simeq S^2H \quad \textrm{and}\quad \jacobi^c_{2,2}(H)= \left\langle \hspace{-0.15cm} \figtotext{12}{12}{theta} \hspace{-0.15cm} \right\rangle\simeq \mathbb{Z}. $$ Thus it remains to prove that the group $\jacobi^c_{2,0}(H)$ is torsion-free. Note that we have an isomorphism $\jacobi^c_{2,0}(H) \simeq \frac{S^2 \Lambda^2 H}{\Lambda^4 H}$ defined by $\hn{a}{b}{c}{d} \longmapsto \left((a\wedge b) \leftrightarrow (c\wedge d)\right)$. Now, recall from the proof of Proposition \ref{prop:Morita-Levine} the homomorphism $$ \jacobi^c_{2,0}(H) \simeq \frac{S^2 \Lambda^2 H}{\Lambda^4 H} \stackrel{\eta'}{\longrightarrow} \operatorname{D}'_2(H) $$ defined by $$ \hn{a}{b}{c}{d} \longmapsto a \otimes [b,[c,d]] + b \otimes [[c,d],a]+ c \otimes [d,[a,b]] + d \otimes [[a,b],c]. $$ As shown by Levine \cite{Levine_addendum}, this map is an isomorphism. Since $\operatorname{D}'_2(H)$ is a subgroup of $\operatorname{D}_2(H)$ which has no torsion, we deduce that $\jacobi^c_{2,0}(H)$ is torsion-free. 
\end{proof} \subsection{The surgery map}\label{subsec:surgery_map} For each integer $k\geq 2$, Habiro has defined in \cite{Habiro} a surjective homomorphism $$ \psi_k: \jacobi^{<,c}_{k}(H)\longrightarrow \frac{Y_k\mathcal{IC}}{Y_{k+1}}, $$ which sends each connected Jacobi diagram $D\in \jacobi^{<,c}_k(H)$ to the homology cylinder obtained from $(\Sigma \times I)$ by surgery along a graph clasper $C(D)$ of degree $k$ with the same shape as $D$. This ``topological realization'' $C(D)$ of the diagram $D$ is defined in the following way: \begin{enumerate} \item Thicken $D$ to a compact oriented surface using the vertex-orientation of $D$, so that vertices are thickened to disks, and edges to bands. For each disk coming in this way from an external vertex, cut a smaller disk in the interior, so as to produce an oriented compact surface $S(D)$, decomposed into disks, bands and annuli. The orientation of $S(D)$ induces an orientation on the core of each annulus as shown on Figure \ref{fig:leaf_orientation}: \begin{figure}[h!] \labellist \small\hair 2pt \pinlabel {{\tiny $+$}} [ c] at 395 113 \endlabellist \includegraphics[scale=0.3]{leaf_orientation} \caption{How to orient a leaf.} \label{fig:leaf_orientation} \end{figure} \item Embed $S(D)$ into the interior of $(\Sigma \times I)$ in such a way that each annulus of $S(D)$ represents in $H\simeq H_1(\Sigma \times I)$ the color of the corresponding external vertex of $D$. The embedded annuli should be in disjoint ``horizontal layers'' of $(\Sigma \times I)$, and their ``vertical height'' along $I$ should respect the total ordering of the external vertices of $D$. The result is a graph clasper $C(D)$ in $(\Sigma \times I)$. \end{enumerate} \noindent Actually, we will only need the surgery map $\psi_k$ in degree $k=2$. The degree $1$ case is special. 
We have $\jacobi^{<,c}_1(H)=\jacobi^{<}_1(H)\simeq \jacobi_1(H)$, and the surgery map $\psi_1$ is still well-defined on that group (and it is surjective) \emph{provided} the torsion of the abelian group $\mathcal{IC}/Y_2$ is ignored: $$ \psi_1: \jacobi_{1}(H)\longrightarrow \frac{\mathcal{IC}/Y_2}{\operatorname{Tors}(\mathcal{IC}/Y_2)}. $$ (The group $\mathcal{IC}/Y_2$ contains $2$-torsion \cite{Habiro}: its explicit surgery description involves spin structures of $\Sigma$ \cite{MM}.) The following lemma is needed in the proof of Lemma \ref{lem:d^1beta}. We shall prove it by means of the surgery map $\psi_2$. \begin{lemma}\label{lem:tau2_loop} Let $N \in \mathcal{IC}$ be such that $N \stackrel{Y_2}{\sim} \Sigma \times I$ and $\tau_2(N)=0$. Then there exists a disjoint union $L$ of looped graph claspers of degree $2$ and graph claspers of degree $3$ in $(\Sigma \times I)$ such that surgery along $L$ yields $N$. \end{lemma} \begin{proof} By surjectivity of $\psi_2$, we can find $y \in \jacobi^{<,c}_2(H)$ such that $\psi_2(y) = \{N\} \in Y_2\mathcal{IC}/Y_3$. We set $x:=\varphi^{-1} s^{-1}(y) \in \jacobi^{c}_2(L^\pm)$, where the isomorphisms $s$ and $\varphi$ are recalled in \S \ref{subsec:Jacobi}. Lemma \ref{lem:tau2} implies that the following diagram is commutative: $$ \xymatrix{ \jacobi^{<,c}_2(H) \ar[rr]^-{\psi_2} & & \frac{Y_2 \mathcal{IC}}{Y_3} \ar@{->>}[d]^-{\tau_2}\\ \jacobi_2^c(L^\pm) \ar[u]^-{s \varphi}_-\simeq \ar@{->>}[r]_-{-p_{2,0}} & \jacobi_{2,0}^c(L^\pm) \ar[r]_-\simeq & \frac{S^2 \Lambda^2 H}{\Lambda^4H} } $$ (Here, the bottom isomorphism is defined by $\hn{a}{b}{c}{d} \longmapsto \left((a\wedge b) \leftrightarrow (c\wedge d)\right)$ where $a,b,c,d \in L^\pm$ are considered as elements of $H$ by (\ref{eq:L+-_H}).) By assumption, we have $\tau_2 \psi_2(y)=0$ and we deduce that $p_{2,0}(x)=0$, i.e.\ $x$ only consists of looped Jacobi diagrams. The same deduction applies to $y=s \varphi(x)$ and the conclusion follows. 
\end{proof} \subsection{The LMO homomorphism}\label{subsec:LMO} We briefly review the LMO homomorphism, its construction and main properties. For this purpose, we fix a system of meridians and parallels $(\alpha_i,\beta_i)_{i=1}^g$ on the surface $\Sigma$ as shown in Figure \ref{fig:basis}. Then one can turn any homology cylinder $M$ over $\Sigma$ into a homology $3$-ball $B$ by gluing, for each $i\in \{1,\dots,g\}$, a $2$-handle to the surface $\partial_-M$ along the curve $m_-(\alpha_i)$, and a $2$-handle to the surface $\partial_+M$ along the curve $m_+(\beta_i)$. The cores of these $2$-handles define a $(2g)$-component framed tangle $\gamma$ in the homology $3$-ball $B$. By taking the Kontsevich--LMO invariant of the pair $(B,\gamma)$ and after an appropriate normalization, one can thus associate to $M\in \mathcal{IC}$ an element $\widetilde{Z}^Y(M)$ of (the degree completion of) $\jacobi(L^\pm_\mathbb{Q})$. See \cite{CHM} where the target is denoted by $\jacobi^Y\left(\set{g}^+ \cup \set{g}^-\right)$. The important point is that the colors $1^-,\dots,g^-$ refer to the curves $\alpha_1,\dots,\alpha_g$ while $1^+,\dots,g^+$ refer to $\beta_1,\dots,\beta_g$, so that the definition of $\widetilde{Z}^Y$ depends on the choice of $(\alpha,\beta)$. (The definition is also dependent on the choice of an associator for the Kontsevich integral.) The space $\jacobi(L^\pm_\mathbb{Q})$, equipped with the multiplication \begin{equation}\label{eq:star_product} D \star E := \left(\! \! \begin{array}{c} \hbox{sum of all ways of gluing \emph{some} of the $i^+$-colored vertices of $D$}\\ \hbox{to \emph{some} of the $i^-$-colored vertices of $E$, for all $i=1,\dots,g$} \end{array}\! \! \right) \end{equation} of $L^\pm_\mathbb{Q}$-colored Jacobi diagrams $D$ and $E$, is an associative $\mathbb{Q}$-algebra. The aforementioned normalization is done in such a way that $$ \widetilde{Z}^Y: \mathcal{IC} \longrightarrow \jacobi(L^\pm_\mathbb{Q}) $$ is a monoid homomorphism. 
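To illustrate the multiplication (\ref{eq:star_product}), consider the simplest non-trivial case: if $D$ has a unique external vertex, colored by $1^+$, and $E$ has a unique external vertex, colored by $1^-$, then
$$
D \star E = D \sqcup E + \left(\hbox{the diagram obtained by joining the two external vertices by an edge}\right),
$$
the first term corresponding to the empty gluing. In the opposite order no gluing is possible, so that $E \star D = E \sqcup D$; in particular, the product $\star$ is not commutative.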
The push-out of the multiplication $\star$ by the isomorphism $\kappa$ defined at (\ref{eq:kappa}) is still denoted by $\star$. For any $H_\mathbb{Q}$-colored Jacobi diagrams $D$ and $E$, the multiplication $\star$ on $\jacobi(H_\mathbb{Q})$ is explicitly given by the formula $$ D \star E := \sum_{\substack{V' \subset V,\ W' \subset W\\ \beta\ :\ V' \stackrel{\simeq}{\longrightarrow} W'}}\ \frac{1}{2^{|V'|}} \cdot \prod_{v\in V'} \omega\big(\hbox{color}(v),\hbox{color}(\beta(v))\big)\ \cdot (D \cup_\beta E) $$ where $V$ and $W$ denote the sets of external vertices of $D$ and $E$ respectively, and where the sum is taken over all ways $\beta$ of identifying a part $V'$ of $V$ with a part $W'$ of $W$. Then the \emph{LMO homomorphism} is defined in \cite{HM_SJD} as the composition $$ \xymatrix{ \mathcal{IC}\ar@{-->}@/_1.2pc/[rr]_-Z \ar[r]^-{\widetilde{Z}^Y} &\jacobi(L^\pm_\mathbb{Q}) \ar[r]^-\kappa_-\simeq &\jacobi(H_\mathbb{Q}). } $$ The LMO homomorphism is universal among rational-valued finite-type invariants of homology cylinders \cite{CHM,HM_SJD}. In terms of the surgery map introduced in \S \ref{subsec:surgery_map}, this universal property of $Z$ amounts to the following commutative diagram: \begin{equation} \label{eq:univLMO} \xymatrix{ \jacobi^{<,c}(H_\mathbb{Q}) \ar@{->>}[rr]^-{\psi\otimes \mathbb{Q}}\ar[rrd]_-{\chi^{-1}}^-\simeq && \left(\operatorname{Gr}^Y \mathcal{IC}\right)\otimes \mathbb{Q} \ar[d]^-{\operatorname{Gr} Z} \\ && \jacobi^{c}(H_\mathbb{Q}). } \end{equation} It follows that the maps $\psi \otimes \mathbb{Q}$ and $\operatorname{Gr} Z$ are isomorphisms. The LMO homomorphism is compatible with stabilizations of the surface $\Sigma$. More precisely, assume that the surface $\Sigma$ has been stabilized to a surface $\Sigma^s$ of genus $g^s$ (as shown on Figure \ref{fig:stabilization}) and that the system of meridians and parallels $(\alpha_i,\beta_i)_{i=1}^{g^s}$ chosen on $\Sigma^s$ extends that of $\Sigma$. 
We denote $H^s:=H_1(\Sigma^s)$ which contains a copy of $H$. Then, the following diagram is commutative: $$ \xymatrix{ \mathcal{IC}(\Sigma) \ar[r] \ar[d]_-Z & \mathcal{IC}(\Sigma^s) \ar[d]^-Z \\ \jacobi(H_\mathbb{Q}) \ar[r] & \jacobi(H_\mathbb{Q}^s) } $$ \subsection{Johnson, Alexander and Casson from LMO}\label{subsec:invariantsLMO} The degree $1$ part of the LMO homomorphism is equivalent to the first Johnson homomorphism \cite{CHM}. More precisely, we have the commutative diagram\\[-0.5cm] \begin{equation} \label{eq:tau1_LMO} \xymatrix{ \mathcal{IC}/Y_2 \ar[d]_{\tau_1} \ar[r]^{Z_1} & \jacobi^{c}_1(H_{\mathbb{Q}}) \ar[d]^-\simeq \\ \Lambda^3 H \ar@{>->}[r] & \Lambda^3 H_\mathbb{Q} } \end{equation} where the isomorphism on the right side is given by $\yn{a}{c}{b} \longmapsto a \wedge b \wedge c$. Similarly, the second Johnson homomorphism corresponds to the ``tree-reduction'' of the degree $2$ part of the LMO homomorphism. \begin{lemma}[See \cite{CHM}] \label{lem:tau2_LMO} There is a commutative diagram \[ \xymatrix{ \mathcal{KC}/Y_{3} \ar[d]_{\tau_2} \ar[r]^{Z_2} & \jacobi^{c}_2(H_{\mathbb{Q}}) \ar@{->>}[d]^{p_{2,0}}\\ \frac{\left(\Lambda^2 H \otimes \Lambda^2H\right)^{\mathfrak{S}_2}}{\Lambda^4H} \ar@{>->}[r] & \jacobi^c_{2,0}(H_\mathbb{Q}) } \] where the bottom monomorphism is given by $\left((a\wedge b) \leftrightarrow (c\wedge d)\right) \mapsto \hn{a}{b}{c}{d}$. \end{lemma} Next, the quadratic part of the relative RT torsion can be extracted from the degree $2$ part of the LMO homomorphism in the following manner. \begin{lemma}\label{lem:alpha_LMO} There is a commutative diagram \[ \xymatrix{ \mathcal{KC}/Y_{3} \ar[d]_{-\frac{\alpha}{2}} \ar[r]^{Z_2} & \jacobi^{c}_2(H_{\mathbb{Q}}) \ar@{->>}[d]^{p_{2,1}}\\ \frac{1}{2}S^2H \ar@{>->}[r] & \jacobi^c_{2,1}(H_ \mathbb{Q}), } \] where the monomorphism on the bottom is defined by $a\cdot b \mapsto \phin{a}{b}$. 
\end{lemma} \begin{proof} The quotient group $\frac{\mathcal{KC}/Y_{3}}{Y_2\mathcal{IC}/Y_{3}} \simeq \mathcal{KC}/Y_{2}$ consists only of elements of order at most $2$, as follows from \cite{MM}. So, for any element $\{M\}$ of $\mathcal{KC}/Y_{3}$, the element $\{M\circ M\}=\{M\}^2$ lies in $Y_2\mathcal{IC}/Y_{3}$. We have $\alpha(M^2)=2\alpha(M)$ (by Proposition \ref{prop:alpha_properties}) and $Z_2(M^2)=2Z_2(M)$ (since we have $Z(M^2)= Z(M) \star Z(M)$ and $Z_1(M)=0$). Therefore it suffices to establish the commutativity of the diagram on the subgroup $Y_2\mathcal{IC}/Y_{3}$ of $\mathcal{KC}/Y_{3}$. Since the map $\psi_2:\jacobi^{<,c}_2(H) \to Y_2\mathcal{IC}/Y_{3}$ is surjective, it is enough to check that $\alpha \psi_2(D)= -2 p_{2,1} Z_2 \psi_2(D)$ for any generator $D$ of $\jacobi^{<,c}_{2}(H)$. The generators can be of three types, namely H graphs, $\Phi$ graphs or $\Theta$ graphs. Let us first compute the composition $\alpha \circ \psi_2$ on these three types of generators. By Proposition \ref{prop:alpha_properties}, we have \begin{equation} \label{eq:phi} \alpha \psi_2\left(\phio{h}{h'}\right)= -2h h'\in S^2H. \end{equation} The same proposition also implies that \begin{equation} \label{eq:theta} \alpha \psi_2\left(\hspace{-0.15cm} \figtotext{12}{12}{theta} \hspace{-0.15cm}\right)=0\in S^2H. \end{equation} \begin{claim} For any $h,i,j,k \in H$, we have \begin{equation}\label{eq:H} \alpha \psi_2\big(\ho{h}{i}{j}{k}\big)= \omega(h,j) \cdot ik- \omega(h,k) \cdot ij- \omega(i,j) \cdot hk+ \omega(i,k) \cdot hj\in S^2H. \end{equation} \end{claim} \noindent In order to prove this claim, we use the decomposition of $\jacobi^{c}_{2}(H_\mathbb{Q})$ into irreducible $\operatorname{Sp}(H_\mathbb{Q})$-modules.
This can be found in \cite[\S 5]{HM_SJD}, whose notation we follow: \begin{eqnarray*} \jacobi^{c}_{2}(H_\mathbb{Q}) & = & \jacobi^c_{2,0}(H_\mathbb{Q})\oplus \jacobi^c_{2,1}(H_\mathbb{Q})\oplus \jacobi^c_{2,2}(H_\mathbb{Q})\\ & = & \left(\Gamma_0\oplus \Gamma_{\omega_2}\oplus \Gamma_{2\omega_2}\right)\oplus \left(\Gamma_{2\omega_1}\right)\oplus \left( \Gamma_0\right). \end{eqnarray*} (Note that this formula was obtained in \cite{HM_SJD} for $g\ge 3$ only. But, since both invariants $\alpha$ and $Z$ are well-behaved with respect to stabilization, we can assume without loss of generality that the surface $\Sigma$ has arbitrarily high genus $g$.) Since the composition \[ \xymatrix{ \jacobi^{c}_{2}(H_\mathbb{Q})\ar[r]_{\simeq}^{\chi} & \jacobi^{<,c}_{2}(H_\mathbb{Q})\ar[r]_{\simeq}^{\psi_2} & \frac{Y_2\mathcal{IC}}{Y_{3}}\otimes \mathbb{Q} \ar@{->}[r]^-\alpha & S^2 H_\mathbb{Q}\simeq \Gamma_{2\omega_1} } \] is $\operatorname{Sp}(H_\mathbb{Q})$-equivariant, we deduce that it vanishes on the subspace $\jacobi^c_{2,0}(H_\mathbb{Q}) \oplus \jacobi^c_{2,2}(H_\mathbb{Q})$. Thus, using the explicit formula for $\chi^{-1}$ given in \cite[Proposition 3.1]{HM_SJD}, we obtain that \begin{eqnarray*} &&\alpha \psi_2\big(\ho{h}{i}{j}{k}\big)\\ & = & \alpha \psi_2 \chi\left( \chi^{-1} \big(\ho{h}{i}{j}{k}\big)\right) \\ & = & \alpha \psi_2 \chi\left( p_{2,1} \chi^{-1} \big(\ho{h}{i}{j}{k}\big)\right) \\ & = & \frac{1}{2} \alpha \psi_2 \chi \left( -\omega(h,j) \phin{i}{k}+\omega(h,k) \phin{i}{j}+\omega(i,j) \phin{h}{k}-\omega(i,k) \phin{h}{j} \right). \end{eqnarray*} Besides, we deduce from (\ref{eq:phi}) that $$ \alpha \psi_2 \chi \left(\phin{h}{h'}\right) = \alpha \psi_2\left(\frac{1}{2}\phio{h}{h'}+\frac{1}{2}\phio{h'}{h}\right)= -2 h h', $$ and the claim follows. Now, in order to complete the proof of the lemma, we use (\ref{eq:univLMO}) to deduce that the composition $p_{2,1} Z_2 \psi_2: \jacobi^{<,c}_2(H) \to \jacobi_{2,1}^c(H_\mathbb{Q})$ coincides with $p_{2,1} \chi^{-1}$.
Therefore, the formula for $\chi^{-1}$ in \cite[Proposition 3.1]{HM_SJD} gives $$ p_{2,1} Z_2 \psi_2\left( \hspace{-0.15cm} \figtotext{12}{12}{theta} \hspace{-0.15cm} \right) = 0, \quad p_{2,1} Z_2 \psi_2\big( \phio{h}{h'} \big) = \phin{h}{h'}\vspace{-0.2cm} $$ and $$ p_{2,1} Z_2 \psi_2\big(\ho{h}{i}{j}{k}\big) = \frac{1}{2}\big( -\omega(h,j) \phin{i}{k}+\omega(h,k) \phin{i}{j}+\omega(i,j) \phin{h}{k}-\omega(i,k) \phin{h}{j}\big). $$ We conclude thanks to (\ref{eq:phi}), (\ref{eq:theta}) and (\ref{eq:H}). \end{proof} The relationship between the LMO homomorphism $Z$ and the Casson invariant $\lambda_j$ is more subtle. First of all, the Heegaard embedding $j:\Sigma \to S^3$ needed for the definition of $\lambda_j$ (see \S \ref{subsec:Casson}) is chosen compatibly with the system of meridians and parallels $(\alpha,\beta)$ on which the construction of $Z$ depends (see \S \ref{subsec:LMO}). More precisely, we assume that the curves $j(\alpha_i)\subset F_g$ bound disks in the lower handlebody of the Heegaard splitting of $S^3$, while the curves $j(\beta_i) \subset F_g$ bound disks in the upper handlebody. The following is observed in \cite{CHM} from the relation between the Casson invariant of homology $3$-spheres and the LMO invariant \cite{LMO}: \begin{equation}\label{eq:cassonLMO} \xymatrix{ \mathcal{KC}/Y_{3} \ar[d]_{\lambda_j} \ar[r]^{\widetilde{Z}^Y_2} & \jacobi^{c}_2(L^\pm_\mathbb{Q}) \ar@{->>}[d]^{p_{2,2}}\\ \mathbb{Z} \ar[r]_(.35){\cdot\frac{1}{2}\hspace{-0.15cm} \figtotext{12}{12}{theta} \hspace{-0.15cm}} &\jacobi^{c}_{2,2}(H_\mathbb{Q}) } \end{equation} Since $\widetilde{Z}^Y_2$ is obtained from $Z_2$ by post-composing with the map $\kappa^{-1}=\varphi^{-1} s \chi$, we see that the $\Theta$ part of $\widetilde{Z}^Y_2$ is equal to minus the $\Theta$ part of $Z$ plus something derived from its H part and its $\Phi$ part. In particular, we deduce the following from Lemma \ref{lem:tau2_LMO} and Lemma \ref{lem:alpha_LMO}.
\begin{lemma}\label{lem:Casson_LMO} Let $M\in \mathcal{KC}$ be such that $\tau_2(M)=0$ and $\alpha(M)=0$. Then, we have $$ p_{2,2} Z_2(M) = -\frac{\lambda_j(M)}{2} \cdot \hspace{-0.15cm} \figtotext{12}{12}{theta} \hspace{-0.15cm}. $$ \end{lemma} \noindent The connection between $Z_2$ and $\lambda_j$ is further investigated in \S \ref{sec:core_Casson}, where we extend the \emph{core} of the Casson invariant to homology cylinders. \section{Characterization of the $Y_3$-equivalence for homology cylinders}\label{sec:Y3} In this section we prove the characterization of the $Y_3$-equivalence relation on $\mathcal{IC}$ (Theorem~A), we give a diagrammatic description of the group $\mathcal{IC}/Y_3$ and we deduce certain properties of this group. \subsection{Proof of Theorem~A} We now show that, given two homology cylinders $M$ and $M'$ over $\Sigma$, the following assertions are equivalent: \begin{enumerate} \item[(a)] $M$ and $M'$ are $Y_3$-equivalent; \item[(b)] $M$ and $M'$ are not distinguished by any Goussarov--Habiro finite-type invariants of degree at most $2$; \item[(c)] $M$ and $M'$ share the same invariants $\rho_3, \alpha$ and $\lambda$; \item[(d)] The LMO homomorphism $Z$ agrees on $M$ and $M'$ up to degree $2$. \end{enumerate} The implication (a)$\Rightarrow$(b) follows from Lemma \ref{lem:Y_FTI}. The implication (b)$\Rightarrow$(d) is guaranteed by the fact that the degree $i$ part of the LMO homomorphism is a finite-type invariant of degree $i$ \cite{CHM}. Thus it remains to prove that (a), (c) and (d) are equivalent. The implication (a)$\Rightarrow$(c) says that $\rho_3$, $\lambda$ and $\alpha$ are invariant under $Y_3$-equivalence. We have seen in \S \ref{subsec:Alexander} and \S \ref{subsec:Casson} that $\alpha$ and $\lambda$ are finite-type invariants of degree $2$: we deduce from Lemma \ref{lem:Y_FTI} that $\alpha$ and $\lambda$ are $Y_3$-invariant. 
The map $\rho_3: \mathcal{IC} \to \operatorname{Aut}(\pi/\Gamma_4 \pi)$ is $J_3$-invariant (by Lemma \ref{lem:rho_k} below) so that it is also $Y_3$-invariant. We now prove (c)$\Rightarrow$(d). Two homology cylinders $M$ and $M'$ satisfying (c) cannot be distinguished by the first Johnson homomorphism, nor by the Birman--Craggs homomorphism according to Lemma \ref{lem:beta}. We deduce from the characterization of the $Y_2$-equivalence given in \cite{MM} that $M\stackrel{Y_2}{\sim} M'$. The monoid $\mathcal{IC}/Y_3$ being a group \cite{Goussarov,Habiro}, one can find a $D\in Y_2\mathcal{IC}$ such that \begin{equation} \label{eq:D} M\stackrel{Y_3}{\sim} D\circ M'. \end{equation} Since $Z$ is a monoid homomorphism, we deduce that $Z_{\leq 2} (M)$ is the degree $\leq 2$ truncation of $Z_{\leq 2}(D) \star Z_{\leq 2}(M')$, so it suffices to show that $Z_{\leq 2}(D)= \varnothing$ (the empty diagram). But this is equivalent to $Z_2(D)=0$ given that $D \stackrel{Y_2}{\sim} \Sigma \times I$. The decomposition (\ref{eq:D}) and the assumption (c) have three consequences: \begin{enumerate} \item $\rho_3(D)=1$, which implies that $\tau_2(D)=0$, \item $\alpha(D)=0$, \item and $\lambda(D)=0$ by formula (\ref{eq:sum_formula}). \end{enumerate} Thanks to Lemma \ref{lem:tau2_LMO}, Lemma \ref{lem:alpha_LMO} and Lemma \ref{lem:Casson_LMO}, we deduce that $Z_2(D)=0$ as desired. We now prove (d)$\Rightarrow$(a). Let $M,M' \in \mathcal{IC}$ be such that $Z_{\leq 2}(M)=Z_{\leq 2}(M')$. Again, since $\mathcal{IC}/Y_3$ is a group, one can find a $D\in \mathcal{IC}$ such that we have the decomposition (\ref{eq:D}). We deduce from (\ref{eq:D}) and the assumption (d) that $Z_1(D)=0$ which, according to (\ref{eq:tau1_LMO}), is equivalent to $\tau_1(D)=0$. Next, we deduce from (\ref{eq:D}) and the assumption (d) that $Z_2(D)=0$ which, by Lemma \ref{lem:tau2_LMO}, Lemma \ref{lem:alpha_LMO} and Lemma \ref{lem:Casson_LMO}, implies that $\tau_2(D)=0$, $\alpha(D)=0$ and $\lambda(D)=0$.
So we have $\beta(D)=0$ by Lemma \ref{lem:beta} and, since the $Y_2$-equivalence is classified by $(\tau_1,\beta)$, we obtain that $D \stackrel{Y_2}{\sim} \Sigma \times I$. The universal property of the LMO homomorphism (\ref{eq:univLMO}) gives, in degree $2$, the following commutative diagram\\[-0.5cm] $$ \xymatrix{ \jacobi^{<,c}_2(H) \ar@{->>}[r]^{\psi_2} \ar[d]_{- \otimes \mathbb{Q}} & Y_2\mathcal{IC}/Y_{3} \ar[d]^{Z_{2}}\\ \jacobi^{<,c}_{2}(H_\mathbb{Q}) & \ar[l]^{\chi}_-\simeq \jacobi^{c}_{2}(H_\mathbb{Q}). } $$ The left vertical arrow is injective by Lemma \ref{lem:no_torsion}, which proves that $Z_{2}$ is injective. Hence $Z_{2}(D)=0$ implies that $D\stackrel{Y_3}{\sim} (\Sigma\times I)$ which, combined with the decomposition (\ref{eq:D}), shows that $M$ and $M'$ satisfy (a). This concludes the proof of Theorem A and shows the following. \begin{corollary}\label{cor:varphi2} The abelian group $Y_2\mathcal{IC}/Y_{3}$ is torsion-free, and the surgery map $$ \psi_2: \jacobi^{<,c}_2(H) \rightarrow Y_2\mathcal{IC}/Y_{3} $$ is an isomorphism. \end{corollary} \subsection{Diagrammatic description of $\mathcal{KC}/Y_{3}$} \label{subsec:description1} In this section we aim to give a diagrammatic description of the group $\mathcal{IC}/Y_{3}$ using the previous results. One way to do that would be to start from the central extension \begin{equation} \label{eq:not_the_best} \xymatrix{0 \ar[r] & Y_2\mathcal{IC}/Y_3 \ar[r] & \mathcal{IC}/Y_3 \ar[r] & \mathcal{IC}/Y_2 \ar[r] & 1} \end{equation} and to use the diagrammatic descriptions of $\mathcal{IC}/Y_2$ and $Y_2\mathcal{IC}/Y_3$ which are given by \cite{MM} and Corollary \ref{cor:varphi2} respectively. However this method is not the most convenient one since $\mathcal{IC}/Y_2$ is \emph{not} torsion-free. Instead we shall proceed in the following way.
\begin{lemma}\label{lem:Habiro_sec} We have a central extension of groups \begin{equation} \label{eq:Habiro_sec} \xymatrix{0 \ar[r] & \mathcal{KC}/Y_3 \ar[r] & \mathcal{IC}/Y_3 \ar[r]^{\tau_1} & \Lambda^3 H \ar[r] & 1} \end{equation} where $\mathcal{KC} = \mathcal{C}[2]$ denotes the second term of the Johnson filtration of $\mathcal{C}$. \end{lemma} \begin{proof} As observed in the proof of Lemma \ref{lem:d^2beta}, it follows from \cite{MM} that $\mathcal{KC}/Y_3$ is generated by elements of the form $(\Sigma\times I)_G$, where $G$ is either a degree $2$ graph clasper or a $Y$-graph with (at least) one special leaf. In the first case, $(\Sigma\times I)_G$ is a central element in $\mathcal{IC}/Y_3$ by Lemma \ref{lem:crossingchange}. The same holds in the second case by Lemma \ref{lem:crossingchange} and Lemma \ref{lem:special}. \end{proof} The central extension (\ref{eq:Habiro_sec}) is, up to equivalence, uniquely determined by its characteristic class in $H^2(\Lambda^3H;\mathcal{KC}/Y_3)$. Since $\Lambda^3H$ \emph{is} torsion-free, we have by the universal coefficient theorem that \begin{equation}\label{eq:iso2cocycle} H^2(\Lambda^3H;\mathcal{KC}/Y_3)\simeq \textrm{Hom}(H_2(\Lambda^3H),\mathcal{KC}/Y_3)\simeq \textrm{Hom}(\Lambda^2\Lambda^3H,\mathcal{KC}/Y_3). \end{equation} Thus, in order to describe (the isomorphism type of) the group $\mathcal{IC}/Y_3$, we shall proceed in two steps: first, we shall give in this subsection a diagrammatic description of the group $\mathcal{KC}/Y_3$ and, second, we shall give in the next subsection a diagrammatic description of the characteristic class of (\ref{eq:Habiro_sec}) in $\textrm{Hom}(\Lambda^2\Lambda^3H,\mathcal{KC}/Y_3)$. Our diagrammatic description of $\mathcal{KC}/Y_{3}$ is derived from a group homomorphism $$ \psi_{[2]}: \jacobi^{<,c}_2(H)\oplus \mathbb{Z}\!\cdot\!(H\times H) \oplus \mathbb{Z}\!\cdot\! H\oplus \mathbb{Z}\longrightarrow \mathcal{KC}/Y_{3}, $$ where $\mathbb{Z}\!\cdot\!(H\times H)$ and $\mathbb{Z}\!
\cdot\! H$ denote the free abelian groups generated by the sets $H\times H$ and $H$ respectively. We define $\psi_{[2]}$ in the following way: \begin{itemize} \item For all $D\in \jacobi^{<,c}_2(H)$, we set $\psi_{[2]}(D):=\psi_2(D)$ where $\psi_2$ is Habiro's map as defined in \S \ref{subsec:surgery_map}. \item For all $(h,h')\in H\times H$, we set $\psi_{[2]}(h,h')$ to be the $Y_3$-equivalence class of $(\Sigma \times I)_{Y_{h,h'}}$ where $Y_{h,h'}$ is a $Y$-graph obtained as follows. Consider the oriented surface $S$ consisting of a disk, connected by three bands to three annuli, whose cores are oriented as in Figure \ref{fig:leaf_orientation}. Embed $S$ into the interior of $(\Sigma \times I)$ so as to obtain a $Y$-graph with one special leaf and two other leaves satisfying the following: they should have framing number zero, one should represent $h\in H$ and lie in $\Sigma \times [-1,0]$ while the other one should represent $h'\in H$ and lie in $\Sigma \times [0,1]$. \item For all $h\in H$, we set $\psi_{[2]}(h)$ to be the $Y_3$-equivalence class of $(\Sigma \times I)_{Y_h}$ where $Y_h$ is a $Y$-graph obtained as follows. We take the same surface $S$ as before but we embed it into the interior of $(\Sigma \times I)$ so as to obtain a $Y$-graph with two special leaves, and one leaf which is required to represent $h\in H$ and to have framing number zero. \item We set $\psi_{[2]}(1)$ to be the $Y_3$-equivalence class of $(\Sigma \times I)_{Y_s}$ where $Y_{s}$ is a $Y$-graph with three special leaves. \end{itemize} \noindent (We refer the reader to Appendix \ref{subsec:framing_numbers} for the definition of framing numbers in $\Sigma\times I$.) \begin{lemma} The map $\psi_{[2]}$ is a well-defined homomorphism, and it is surjective. \end{lemma} \begin{proof} By \S \ref{subsec:surgery_map}, the fact that $\psi_{[2]}$ is well-defined needs only to be checked on the summand $ \mathbb{Z}\!\cdot\!(H\times H)\oplus \mathbb{Z}\!\cdot\! H\oplus \mathbb{Z}$. 
The independence of the choice of the disk and bands follows from Lemma \ref{lem:as_special} and Lemma \ref{lem:slide_special}, respectively. We now check the independence of the choice of the leaves. Let $K$ and $K'$ be two possible choices for an oriented leaf of a $Y$-graph, representing the same element in $H$. Then $K$ and $K'$ cobound an embedded oriented surface $F$ of genus $g(F)$ in $(\Sigma\times I)$. Furthermore, since the framing numbers of $K$ and $K'$ in $(\Sigma \times I)$ are equal, we can choose such a surface $F$ so that $K$ and $K'$ are $0$-framed with respect to $F$. (This follows, for instance, from Lemma \ref{lem:framed_connect}.) By Lemma \ref{lem:slide_special}, we can assume that $F$ does not intersect any edge of the $Y$-graph. We can also freely assume that $F$ does not intersect any other leaf of the $Y$-graph, since they are either special leaves, or lying in a different ``horizontal layer'' of $(\Sigma\times I)$. So, if we have $g(F)=0$, then $K$ and $K'$ are isotopic as framed knots and we are done. Otherwise, we can decompose $K$ as a framed connected sum of $K'$ and $g(F)$ framed knots, each bounding a genus $1$ surface disjoint from the $Y$-graph and being $0$-framed with respect to it. The result then follows from Corollary \ref{cor:surf}. The surjectivity of $\psi_{[2]}$ is proved by refining the argument used at the beginning of the proof of Lemma \ref{lem:d^2beta}. Let $M\in \mathcal{KC}$. Using the notation (\ref{eq:affine_functions}), the quadratic function $\beta(M)\in B_{\leq 2}$ can be decomposed as $$ \beta(M) = \varepsilon \cdot \overline{1} + \sum_{i=1}^m \overline{g_i} + \sum_{j=1}^n \overline{h_j} \cdot \overline{h'_j} $$ where $\varepsilon \in \{0,1\}$, $g_1,\dots, g_m \in H$ and $h_1,h'_1,\dots, h_n,h'_n \in H$ for some positive integers $m,n$. Let $Y_s$, $Y_g$ (for $g\in H$) and $Y_{h,h'}$ (for $h,h'\in H$) be the $Y$-graphs with special leaves described in the definition of $\psi_{[2]}$.
Then, using Lemma \ref{lem:Y_beta}, we see that $$ \beta(M) = \varepsilon\cdot \beta\left((\Sigma \times I)_{Y_s}\right) + \sum_{i=1}^m \beta\left((\Sigma \times I)_{Y_{g_i}} \right) + \sum_{j=1}^n \beta\left((\Sigma \times I)_{Y_{h_j,h'_j}} \right). $$ Since the $Y_2$-equivalence is classified by the couple $(\tau_1,\beta)$, we deduce that $$ M \overset{Y_2}{\sim} {(\Sigma \times I)_{Y_s}}^\varepsilon \circ \prod_{i=1}^m (\Sigma \times I)_{Y_{g_i}} \circ \prod_{j=1}^n (\Sigma \times I)_{Y_{h_j,h'_j}}. $$ Therefore, by clasper calculus, there exists a $D \in \mathcal{IC}$ such that $D\overset{Y_2}{\sim} (\Sigma \times I)$ and $$ M \overset{Y_3}{\sim} D \circ {(\Sigma \times I)_{Y_s}}^\varepsilon \circ \prod_{i=1}^m (\Sigma \times I)_{Y_{g_i}} \circ \prod_{j=1}^n (\Sigma \times I)_{Y_{h_j,h'_j}}. $$ We conclude using the fact that the restriction of $\psi_{[2]}$ to the summand $\jacobi^{<,c}_2(H)$, namely $\psi_2$, is surjective onto the group $Y_2\mathcal{IC}/Y_3$. \end{proof} Next we set $$ \jacobi_{[2]}^{<,c}(H) := \frac{\jacobi^{<,c}_2(H)\oplus \mathbb{Z}\!\cdot\!(H\times H)\oplus \mathbb{Z}\!\cdot\!H \oplus \mathbb{Z}}{(G_0,G_1,G_2,G_3, D_1,D_2,D_3)}, $$ where the relations $(G_0,G_1,G_2,G_3)$ and $(D_1,D_2,D_3)$ are defined as follows: \begin{enumerate} \item[($G_0$)]\quad $(h,h) - (h)$ \ for all $h\in H$, \item[($G_1$)]\quad $2\cdot (h,k) +\hob{h}{h}{k}{k} + \phio{h}{k}$ \ for all $h,k\in H$, \item[($G_2$)]\quad $2\cdot (h) + \phio{h}{h}$ \ for all $h\in H$, \item[($G_3$)]\quad $2\cdot 1+\hspace{-0.15cm} \figtotext{12}{12}{theta} \hspace{-0.15cm}$, \item[($D_1$)]\quad $(h+h',k) - (h,k) - (h',k)+ \omega(h,h')\cdot (k) + \hob{h}{h'}{k}{k}$ \ for all $h,h',k\in H$, \item[($D_2$)]\quad $(h,k+k') - (h,k) - (h,k') + \omega(k,k')\cdot(h) + \hob{h}{h}{k}{k'}$ \ for all $h,k,k'\in H$, \item[($D_3$)]\quad $(h+h') - (h) - (h') + \omega(h,h') \cdot 1 + \phio{h}{h'}$ \ for all $h,h' \in H$.
\end{enumerate} Here the generator of the summand $\mathbb{Z}$ is denoted by $1$, the generators of the summand $ \mathbb{Z}\!\cdot\! H$ are denoted by $(h)$ with $h\in H$, and the generators of the summand $ \mathbb{Z}\! \cdot\!(H\times H)$ are denoted by $(h,k)$ with $h,k\in H$. Observe that, thanks to the relation ($G_0$), we could get rid of the summand $ \mathbb{Z}\!\cdot\! H$. Besides, ($G_2$) is a consequence of ($G_0$) and $(G_1)$. Here is yet another relation in $\jacobi_{[2]}^{<,c}(H)$: \begin{lemma} For all $h,h'\in H$, we have \begin{equation}\label{eq:symmetry_defect} (h,h') - (h',h) - \omega(h,h') \cdot \phio{h'}{h} - \frac{\omega(h,h')(\omega(h,h')-1)}{2}\cdot \hspace{-0.15cm} \figtotext{12}{12}{theta} \hspace{-0.15cm} = 0 \in \jacobi_{[2]}^{<,c}(H). \end{equation} \end{lemma} \begin{proof} We set $k := h +h'$. Using $(G_0)$, $(D_1)$ and $(D_2)$, we get \begin{eqnarray*} (k) \quad = \quad (k,k) &= & (h,k) + (h',k) - \omega(h,h')\cdot (k) - \hob{h}{h'}{k}{k}\\ &=& \left((h,h') + (h,h) - \omega(h',h)\cdot(h) - \hob{h}{h}{h'}{h}\right) \\ && + \left( (h',h') + (h',h) - \omega(h',h)\cdot(h') - \hob{h'}{h'}{h'}{h} \right)\\ && - \omega(h,h')\cdot (k) - \hob{h}{h'}{k}{k}. \end{eqnarray*} It follows from the IHX and STU-like relations that, for all $a,b,c\in H$, $$ \hob{a}{b}{b}{c} = 0 \ \in \jacobi_{[2]}^{<,c}(H). $$ We deduce that \begin{eqnarray*} &&(1+\omega(h,h'))\cdot (k) - (1+\omega(h,h')) \cdot (h) - (1+\omega(h,h')) \cdot (h') \\ &=& (h,h') + (h',h) - \hob{h}{h}{h'}{h} - \hob{h}{h'}{k}{k}\\ &=& (h,h') + (h',h) + \hob{h}{h}{h'}{h'} \\ &=& (h',h)-(h,h') - \phio{h}{h'}, \end{eqnarray*} where the last equality follows from relation $(G_1)$. 
Using now $(D_3)$, we get $$ (h',h) - (h,h') - \phio{h}{h'} = -(1+\omega(h,h')) \omega(h,h') \cdot 1 - (1+\omega(h,h') ) \cdot \phio{h}{h'}, $$ and, using $(G_3)$, we obtain $$ (h',h) - (h,h') = \frac{(1+\omega(h,h')) \omega(h,h')}{2} \cdot \hspace{-0.15cm} \figtotext{12}{12}{theta} \hspace{-0.15cm} - \omega(h,h')\cdot \phio{h}{h'}. $$ The STU-like relation allows us to conclude. \end{proof} \begin{theorem}\label{thm:iso_KC/Y3} The map $\psi_{[2]}$ factorizes to an isomorphism $$ \psi_{[2]}: \jacobi_{[2]}^{<,c}(H) \overset{\simeq}{\longrightarrow} \mathcal{KC}/Y_{3} $$ and the group $\jacobi_{[2]}^{<,c}(H)$ is a free abelian group with the same rank as $\jacobi^{<,c}_2(H)$. \end{theorem} Before proving this theorem, we shall draw two of its consequences. First, after one has chosen a basis of the free abelian group $H$, one can derive from this diagrammatic description a presentation of the abelian group $\mathcal{KC}/Y_{3}$. Second, Theorem \ref{thm:iso_KC/Y3} implies the following. \begin{corollary} We have the following commutative diagram of abelian groups, whose rows are short exact sequences: $$ \xymatrix{ 0 \ar[r] & Y_2 \mathcal{IC}/Y_3 \ar[r] & \mathcal{KC}/Y_3 \ar[r]^\beta & B_{\leq 2} \ar[r] & 0\\ 0 \ar[r] & \jacobi^{<,c}_2(H) \ar[r] \ar[u]^-{\psi_2}_-\simeq & \jacobi_{[2]}^{<,c}(H) \ar[u]^-{\psi_{[2]}}_-\simeq \ar[r]^b & B_{\leq 2} \ar@{=}[u] \ar[r] & 0. } $$ Here $B_{\leq 2}$ is the space of polynomial functions $\operatorname{Spin}(\Sigma)\to \mathbb{Z}_2$ of degree $\leq 2$ and, using the notation (\ref{eq:affine_functions}), we define the homomorphism $b$ as follows: $b$ is trivial on $\jacobi^{<,c}_2(H)$, $b$ sends $1\in \mathbb{Z}$ to the constant function $\overline{1}$, $(h)\in \mathbb{Z}\!\cdot\! H$ to the affine function $\overline{h}$, and $(h,k)\in \mathbb{Z}\!\cdot\!(H\times H)$ to the quadratic function $ \overline{h} \cdot \overline{k}$. 
\end{corollary} \begin{proof} The fact that $b$ is well-defined is easily checked using the following formula: $$ \forall h,k \in H, \quad \overline{h+k} = \overline{h} + \overline{k} + \omega(h,k) \cdot \overline{1} \ \in B_{\leq 1}. $$ The commutativity of the diagram follows from the definition of $\psi_{[2]}$ and from Lemma \ref{lem:Y_beta}. The top sequence is exact according to \cite{MM}. Since the vertical maps are isomorphisms (by Corollary \ref{cor:varphi2} and Theorem \ref{thm:iso_KC/Y3}), the bottom sequence is exact too. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:iso_KC/Y3}] We start by proving that $\psi_{[2]}$ vanishes on $(G_0),(G_1), (G_3)$, and we recall that $(G_2)$ is a consequence of $(G_0)$ and $(G_1)$. First of all, let us prove that \begin{equation} \label{eq:G0} \forall h\in H, \ \psi_{[2]}(h) = \psi_{[2]}(h,h). \end{equation} Let $Y_h$ be a $Y$-graph as described in the definition of $\psi_{[2]}$: its ``non-special'' leaf is denoted by $L$. By Lemma \ref{lem:slide}, the $Y$-graph $Y_h$ is equivalent to a $Y$-graph $Y_h'$ with one special leaf and two other leaves given by $L$ and its parallel $L^\parallel$. Now, using the ``framing number zero'' assumption on $L$, we can (up to $Y_3$-equivalence) put $L$ and $L^\parallel$ in two disjoint ``horizontal layers''. More precisely, since we have $\operatorname{Lk}_+(L,L^\parallel) = \operatorname{Fr}(L)=0$, one can find a framed oriented knot $K$ in a neighborhood of the top surface $\Sigma\times \{+1\}$ and a compact oriented surface $S$ disjoint from $L$ such that $\partial S = L^\parallel \sqcup (-K)$, and both knots are $0$-framed with respect to $S$. By Lemma \ref{lem:slide_special}, we can assume that the edges of $Y_h'$ do not intersect $S$. Next, using Corollary \ref{cor:surf}, we can find a $Y$-graph $Y_h''$ with one special leaf, one leaf given by $L$ and another leaf given by $K$ such that $(\Sigma \times I)_{Y_{h}''} \overset{Y_3}{\sim} (\Sigma \times I)_{Y_h'}$. 
Since we have $\operatorname{Fr}(K)=\operatorname{Fr}(L^\parallel)=\operatorname{Fr}(L)=0$, the graph $Y_h''$ can play the role of $Y_{h,h}$ in the definition of $\psi_{[2]}$. We conclude that $$ \psi_{[2]}(h)= \left\{ (\Sigma \times I)_{Y_{h}} \right\} = \left\{ (\Sigma \times I)_{Y_{h}''}\right\} = \psi_{[2]}(h,h). $$ Next, the relation \begin{equation}\label{eq:G3} 2\psi_{[2]}(1)=-\psi_{[2]}(\hspace{-0.15cm} \figtotext{12}{12}{theta} \hspace{-0.15cm}) \end{equation} is an immediate consequence of Corollary \ref{cor:2spe}(2) and Lemma \ref{lem:twist_as}. To check the relation \begin{equation}\label{eq:G1} \forall h,k\in H, \ 2\psi_{[2]}(h,k)= - \psi_{[2]}\big(\hob{h}{h}{k}{k}\big) - \psi_{[2]}\big(\phio{h}{k}\big), \end{equation} we consider a $Y$-graph $Y_{h,k}$ as described in the definition of $\psi_{[2]}$. Then Lemma \ref{lem:doubling} tells us that, up to $Y_3$-equivalence, we can replace two copies of $Y_{h,k}$ by a $\Phi$-graph and an H-graph, which has two pairs of parallel leaves. Then, using the ``framing number zero'' assumption on the non-special leaves of $Y_{h,k}$ and an argument similar to the one above, the two leaves of each pair can be put in two disjoint ``horizontal layers''. Relation (\ref{eq:G1}) follows. We now show that $\psi_{[2]}$ vanishes on $(D_3)$. More precisely, we show that Corollary \ref{cor:split_2spe} implies that, for all $h,h'\in H$, \begin{equation}\label{eq:D3} \psi_{[2]}(h+h') = \psi_{[2]}(h) + \psi_{[2]}(h') - \omega(h,h')\cdot \psi_{[2]}(1) - \psi_{[2]}(\phio{h}{h'}). \end{equation} Let $K_h$ (respectively $K_{h'}$) be an oriented framed knot in $\Sigma\times [-1,0]$ (respectively in $\Sigma\times [0,1]$) with framing number zero and representing $h\in H$ (respectively $h'\in H$). Let $K_h\sharp K_{h'}$ denote a framed connected sum of $K_h$ and $K_{h'}$.
By Corollary \ref{cor:split_2spe} and Lemma \ref{lem:slide}, we have $$ (\Sigma\times I)_{Y} \stackrel{Y_3}{\sim} \psi_{[2]}(h) + \psi_{[2]}(h')- \psi_{[2]}(\phio{h}{h'}), $$ where $Y$ denotes a $Y$-graph with two special leaves, the third leaf being a copy of $K_h\sharp K_{h'}$. On the other hand, by Lemma \ref{lem:linking_numbers} and Lemma \ref{lem:framed_connect}, the framing number of $K_h\sharp K_{h'}$ is equal to $-\omega(h,h')$. Hence, we can reduce the framing number of $K_h\sharp K_{h'}$ to zero by adding $|\omega(h,h')|$ isolated $(+1)$-twists or $(-1)$-twists, depending on whether $\omega(h,h')$ is positive or negative, respectively. Suppose that $\omega(h,h')\leq 0$. Then by Corollary \ref{cor:split_2spe} and Lemma \ref{lem:special}, we have $$ (\Sigma\times I)_{Y} + (-\omega(h,h'))\cdot (\Sigma\times I)_{Y_{s}} \stackrel{Y_3}{\sim} \psi_{[2]}(h+h') $$ where $Y_{s}$ is a $Y$-graph with three special leaves, and equation (\ref{eq:D3}) follows. The case $\omega(h,h')\geq 0$ is shown similarly. By exactly the same arguments, one can use Lemma \ref{lem:split_special} to check that \begin{equation}\label{eq:D1} \psi_{[2]}(h+h',k) = \psi_{[2]}(h,k) + \psi_{[2]}(h',k) - \omega(h,h')\cdot \psi_{[2]}(k) - \psi_{[2]}(\hob{h}{h'}{k}{k}) \end{equation} for all $h,h',k \in H$. Here again, we need the ``framing number zero'' assumption for the ``non-special'' leaves in the definition of $\psi_{[2]}$. Similarly, we have \begin{equation}\label{eq:D2} \psi_{[2]}(h,k+k') = \psi_{[2]}(h,k) + \psi_{[2]}(h,k') - \omega(k,k')\cdot \psi_{[2]}(h) - \psi_{[2]}(\hob{h}{h}{k}{k'}) \end{equation} for all $h,k,k'\in H$. We thus have that $\psi_{[2]}$ vanishes on $(D_1)$ and $(D_2)$. So far, we have shown that $\psi_{[2]}$ factorizes to a surjective map $ \psi_{[2]}: \jacobi_{[2]}^{<,c}(H) \to \mathcal{KC}/Y_{3}$.
To prove that it is actually an isomorphism, we consider the subgroup $``\frac{1}{2}" \jacobi^{<,c}_2(H)$ of $\jacobi^{<,c}_2(H)\otimes \mathbb{Q}$ generated by $\jacobi^{<,c}_2(H)$ and the following elements: $$ \frac{1}{2}\hspace{-0.15cm} \figtotext{12}{12}{theta} \hspace{-0.15cm} \quad \hbox{and} \quad \frac{1}{2}\big(\hob{h}{h}{k}{k} + \phio{h}{k}\big) \ \hbox{(for all $h,k\in H$)}. $$ Thus we have the inclusions $$ \jacobi^{<,c}_2(H) \subset ``\frac{1}{2}" \jacobi^{<,c}_2(H) \subset \frac{1}{2} \jacobi^{<,c}_2(H) \subset \jacobi^{<,c}_2(H)\otimes \mathbb{Q}. $$ We also consider the homomorphism of abelian groups $$ \digamma: \jacobi_{[2]}^{<,c}(H) \longrightarrow ``\frac{1}{2}" \jacobi^{<,c}_2(H) , $$ which is the identity on $\jacobi^{<,c}_2(H)$ and is defined as follows on $ \mathbb{Z}\!\cdot\!(H\times H)\oplus \mathbb{Z}\!\cdot\! H \oplus \mathbb{Z}$: $$ \digamma(h,k) := -\frac{1}{2}\big(\hob{h}{h}{k}{k} + \phio{h}{k}\big), \ \digamma(h) := - \frac{1}{2} \phio{h}{h}, \ \digamma(1) := -\frac{1}{2}\hspace{-0.15cm} \figtotext{12}{12}{theta} \hspace{-0.15cm}. $$ A straightforward computation, based on the multilinearity and STU-like relations in $\jacobi^{<,c}_2(H)\otimes \mathbb{Q}$, shows that $\digamma$ is well-defined. We shall prove the following. \begin{claim}\label{claim:iso} The map $\digamma$ is an isomorphism and $``\frac{1}{2}" \jacobi^{<,c}_2(H)$ is a lattice of $\jacobi^{<,c}_2(H)\otimes \mathbb{Q}$. \end{claim} \noindent This will conclude the proof of the theorem since, by the universal property of the LMO homomorphism (\ref{eq:univLMO}), we have the following commutative diagram: $$ \xymatrix{ \jacobi_{[2]}^{<,c}(H) \ar@{->>}[d]_(0.5){\psi_{[2]}} \ar[r]^(0.5){\digamma} & ``\frac{1}{2}" \jacobi^{<,c}_2(H) \ar@{^{(}->}[d]\\ \mathcal{KC}/Y_{3} \ar[r]_(0.45){\chi\circ Z_2} & \jacobi^{<,c}_{2}(H)\otimes \mathbb{Q}. } $$ We now prove Claim \ref{claim:iso}. 
For that purpose, we shall work with the abelian group $\jacobi(L^\pm)$ defined in \S \ref{subsec:Jacobi}, which is isomorphic to $\jacobi^<(H)$ via the composition $$ \jacobi(L^\pm) \mathop{\longrightarrow}_\simeq^\varphi \jacobi^<(-H) \mathop{\longrightarrow}_\simeq^s \jacobi^<(H). $$ In particular, recall that $L^\pm$ denotes the abelian group freely generated by the set $\{1^+,\dots,g^+\}\cup\{1^-,\dots,g^-\}$. There is a natural order $\preceq$ on this set, which declares that $i^-\preceq j^-\preceq i^+\preceq j^+$ if $i\leq j$. The loop degree gives the following decomposition: $$ \jacobi^{c}_2(L^\pm)= \underbrace{\jacobi^{c}_{2,0}(L^\pm)}_{A:=} \oplus \underbrace{\jacobi^{c}_{2,1}(L^\pm)}_{B:=} \oplus \underbrace{\jacobi^{c}_{2,2}(L^\pm)}_{C:=}. $$ The abelian group $B$ being freely generated by the elements $$ \phin{x}{y} \ \hbox{(for all $x,y\in \{1^+,\dots,g^+\}\cup\{1^-,\dots,g^-\}$ such that $x\preceq y$)}, $$ we also have the decomposition $\jacobi^{c}_2(L^\pm)= A \oplus B' \oplus C$ where $B'$ is the subgroup of $\jacobi^{c}_{2}(L^\pm)$ generated by the elements $$ b(x,y) := -\phin{x}{y} + \hn{x}{y}{y}{x} \ \hbox{(for all $x,y$ such that $x\preceq y$)}. $$ We then consider the subgroup $$ ``\frac{1}{2}" \jacobi^{c}_2(L^\pm) := A \oplus \frac{1}{2} B' \oplus \frac{1}{2}C $$ of $(A\oplus B' \oplus C)\otimes \mathbb{Q}= \jacobi^{c}_2(L^\pm)\otimes \mathbb{Q}$. This is a lattice of $\jacobi^{c}_2(L^\pm)\otimes \mathbb{Q}$ satisfying $$ \jacobi^{c}_2(L^\pm) \subset ``\frac{1}{2}" \jacobi^{c}_2(L^\pm) \subset \frac{1}{2} \jacobi^{c}_2(L^\pm) \subset \jacobi^{c}_2(L^\pm)\otimes \mathbb{Q}. 
$$ We also consider the group homomorphism $$ \Upsilon:``\frac{1}{2}" \jacobi^{c}_2(L^\pm) \longrightarrow \jacobi_{[2]}^{<,c}(H) $$ that coincides with $s\circ \varphi$ on $A$ and is defined on the basis of $\frac{1}{2} B' \oplus \frac{1}{2}C$ as follows: $$ \Upsilon\left(\frac{1}{2}b(x,y)\right) := \{(x,y)\} \ \hbox{(for all $x,y$ with $x\preceq y$)} \quad \hbox{and} \quad \Upsilon\left(\frac{1}{2} \hspace{-0.15cm} \figtotext{12}{12}{theta} \hspace{-0.15cm} \right) := \{1\}. $$ Here an element $x$ of $\{1^+,\dots,g^+\}\cup \{1^-,\dots,g^-\}$ is regarded as an element of $H$ by (\ref{eq:L+-_H}). By construction of $\digamma$ and $\Upsilon$ we have the following commutative diagram: $$ \xymatrix{ & ``\frac{1}{2}" \jacobi^{c}_2(L^\pm) \ar[ld]_-\Upsilon \ar@{^{(}->}[r] \ar[d]^-{s\circ \varphi}& \jacobi^{c}_2(L^\pm)\otimes \mathbb{Q} \ar[d]^-{s\circ \varphi}_-\simeq \\ \jacobi_{[2]}^{<,c}(H) \ar@{->>}[r]_-\digamma & ``\frac{1}{2}" \jacobi^{<,c}_2(H) \ar@{^{(}->}[r] &\jacobi^{<,c}_2(H)\otimes \mathbb{Q} } $$ Therefore Claim \ref{claim:iso} will follow from the surjectivity of $\Upsilon$. In order to prove that $\Upsilon$ is surjective, observe that we have $\Upsilon(x)=\{s\circ\varphi(x)\}$ for all $x\in \jacobi_2^c(L^\pm)\subset ``\frac{1}{2}"\jacobi_2^c(L^\pm)$, so that any element of $\jacobi_{[2]}^{<,c}(H)$ coming from the summand $\jacobi_2^{<,c}(H)$ belongs to the image of $\Upsilon$. It is also clear that $\{1\}$ is in the image of $\Upsilon$. Thus, we just have to check that $(h)$ belongs to $\operatorname{Im}(\Upsilon)$ for all $h\in H$, and that $(h,k)$ belongs to $\operatorname{Im}(\Upsilon)$ for all $h,k\in H$. Relation $(D_3)$ implies that $$ \forall h_1,h_2\in H, \ (h_1+h_2) \equiv (h_1) +(h_2) \mod \operatorname{Im}(\Upsilon) $$ so that $(h)$ can be decomposed as a sum of some $(x)$'s (with $x\in \{1^+,\dots,g^+,1^-,\dots,g^-\}$). Since we have $(x)=\Upsilon\left(\frac{1}{2}b(x,x)\right)$, this shows that $(h)\in \operatorname{Im}(\Upsilon)$. 
Next, relations $(D_1)$ and $(D_2)$ imply that $$ \forall h_1,h_2\in H, \ (h_1+h_2,k) \equiv (h_1,k) + (h_2,k) \mod \operatorname{Im}(\Upsilon), $$ $$ \forall k_1,k_2\in H, \ (h,k_1+k_2) \equiv (h,k_1) + (h,k_2) \mod \operatorname{Im}(\Upsilon), $$ so that $(h,k)$ can be written as a sum of some $(x,y)$'s (where $x,y\in \{1^+,\dots,g^+,1^-,\dots,g^-\}$). Since we have $(x,y)=\Upsilon\left(\frac{1}{2}b(x,y)\right)$ if $x\preceq y$ and since (\ref{eq:symmetry_defect}) implies that $(x,y) \equiv (y,x) \mod \operatorname{Im}(\Upsilon)$ in general, all this shows that $(h,k)\in \operatorname{Im}(\Upsilon)$. \end{proof} \subsection{Diagrammatic description of $\mathcal{IC}/Y_{3}$}\label{subsec:description2} We can now give the diagrammatic description of the isomorphism type of the group $\mathcal{IC}/Y_3$. \begin{theorem}\label{thm:2cocycle} The characteristic class of the central extension \begin{equation}\label{eq:Habiro_sec_bis} \xymatrix{ 0 \ar[r] & \mathcal{KC}/Y_3 \ar[r] & \mathcal{IC}/Y_3 \ar[r]^{\tau_1} & \Lambda^3 H \ar[r] & 1} \end{equation} seen as an element of $$ H^2\left(\Lambda^3H; \mathcal{KC}/Y_3\right) \simeq \operatorname{Hom}\left(\Lambda^2 \Lambda^3H,\mathcal{KC}/Y_3\right) \overset{\psi_{[2]}}{\simeq} \operatorname{Hom}\left(\Lambda^2 \Lambda^3H,\jacobi_{[2]}^{<,c}(H)\right) $$ is the antisymmetric bilinear map $[\cdot,\cdot]: \Lambda^3 H\times \Lambda^3 H \to \jacobi_{[2]}^{<,c}(H)$ defined by $$ [a\wedge b\wedge c,d\wedge e\wedge f] := \left(\begin{array}{c} \quad \omega(c,d)\ho{a}{b}{e}{f} - \omega(b,d)\ho{a}{c}{e}{f} + \omega(a,d)\ho{b}{c}{e}{f} \\ + \omega(c,e)\hob{d}{a}{b}{f} - \omega(b,e)\hob{d}{a}{c}{f} + \omega(a,e)\hob{d}{b}{c}{f} \\ + \omega(c,f)\ho{d}{e}{a}{b} - \omega(b,f)\ho{d}{e}{a}{c} + \omega(a,f)\ho{d}{e}{b}{c} \end{array}\right). $$ \end{theorem} \noindent Theorem \ref{thm:2cocycle} is a refinement of a result of Morita on the Torelli group.
Indeed he described in \cite[Theorem 3.1]{Morita_Casson2} the characteristic class of the central extension $$ \xymatrix{0 \ar[r] & \tau_2\left(\Johnson \right) \ar[r]^{\tau_2^{-1}} & \Torelli/\Torelli[3] \ar[r]^{\tau_1} & \Lambda^3 H \ar[r] & 1}. $$ According to Lemma \ref{lem:tau2}, the map $\tau_2 \circ \psi_{[2]}:\jacobi_{[2]}^{<,c}(H) \to \frac{\left(\Lambda^2 H \otimes \Lambda^2H\right)^{\mathfrak{S}_2}}{\Lambda^4H}$ sends $\ho{a}{b}{c}{d}$ to $(a\wedge b) \leftrightarrow (c\wedge d)$. Thus, taking into account the fact that Morita's $\tau_2$ differs from ours by a minus sign, one sees that Theorem \ref{thm:2cocycle} generalizes Morita's description. \begin{proof}[Proof of Theorem \ref{thm:2cocycle}] Let us denote by $e$ the characteristic class in $$ H^2(\Lambda^3H;\mathcal{KC}/Y_3)\simeq \operatorname{Hom}(\Lambda^2\Lambda^3H,\mathcal{KC}/Y_3) $$ of the central extension (\ref{eq:Habiro_sec_bis}), and let $s$ be a setwise section of $\tau_1$. Then the cohomology class $e$ is represented by the $2$-cocycle $c$ (in the bar complex) defined by $$ \forall x,y\in \Lambda^3 H, \ c(x\vert y):=s(x)s(y)s(xy)^{-1} \ \in \mathcal{KC}/Y_3 \subset \mathcal{IC}/Y_3, $$ so that $e\in \operatorname{Hom}(\Lambda^2\Lambda^3H,\mathcal{KC}/Y_3)$ is given by \begin{eqnarray*} \forall x,y\in \Lambda^3 H, \ e(x\wedge y) = c\big((x\vert y)-(y\vert x) \big) &=& c(x\vert y)c(y\vert x)^{-1}\\ & = & s(x)s(y)s(xy)^{-1}s(yx)s(x)^{-1}s(y)^{-1}\\ &= & [ s(x),s(y) ]. \end{eqnarray*} This shows that $e$ is determined by the Lie bracket of the Lie ring of homology cylinders $\operatorname{Gr}^Y\mathcal{IC}$, in the sense that the following diagram is commutative: $$ \xymatrix{ \mathcal{IC}/Y_2\times \mathcal{IC}/Y_2 \ar[r]^(0.6){[\cdot,\cdot]} \ar@{->>}[d]_-{\tau_1\times \tau_1} & Y_2\mathcal{IC}/Y_{3}\ar@{^{(}->}[d] \\ \Lambda^3 H\times \Lambda^3 H \ar[r]_{e} & \mathcal{KC}/Y_3. 
} $$ But $\tau_1$ induces an isomorphism from $\frac{\mathcal{IC}/Y_2}{\operatorname{Tors}(\mathcal{IC}/Y_2)}$ to $\Lambda^3H\simeq \jacobi^{<,c}_1(H)$ \cite{MM}, with inverse given by the surgery map $\psi_1$ of \S \ref{subsec:LMO}. Moreover, the Lie bracket of $\operatorname{Gr}^Y\mathcal{IC}$ factorizes to $\frac{\mathcal{IC}/Y_2}{\operatorname{Tors}(\mathcal{IC}/Y_2)}$ since $Y_2\mathcal{IC}/Y_{3}$ is torsion-free by Corollary \ref{cor:varphi2}. Thus, we obtain $$ \xymatrix{ \frac{\mathcal{IC}/Y_2}{\operatorname{Tors}(\mathcal{IC}/Y_2)}\times \frac{\mathcal{IC}/Y_2}{\operatorname{Tors}(\mathcal{IC}/Y_2)} \ar[r]^-{[\cdot,\cdot]} & Y_2\mathcal{IC}/Y_{3} \ar@{^{(}->}[d] \\ \Lambda^3 H\times \Lambda^3 H \ar[r]_-{e}\ar[u]_{\simeq}^{\psi_1\times \psi_1} & \mathcal{KC}/Y_3. } $$ Since the surgery map $\psi$ preserves the Lie brackets, we obtain that, for all $a\wedge b \wedge c\in \Lambda^3H$ and $d\wedge e \wedge f\in \Lambda^3H$, \begin{eqnarray*} \psi_{[2]}^{-1}\circ e\big((a \wedge b \wedge c)\wedge (d\wedge e \wedge f)\big) &=& \psi_2^{-1}\left(\big[\psi_1(a \wedge b \wedge c),\psi_1(d\wedge e \wedge f)\big]\right)\\ &=& \left[\yo{a}{b}{c},\yo{d}{e}{f}\right]\\ & = &\yo{a}{b}{ \ \ c<} \yo{d}{e}{f} - \yo{d}{e}{ \ \ f <}\yo{a}{b}{c}. \end{eqnarray*} The desired formula follows from the STU-like relation. \end{proof} We conclude this section with a few consequences of the previous results on the structure of the group $\mathcal{IC}/Y_3$. First of all, a presentation of $\mathcal{IC}/Y_3$ could be obtained from a presentation of $\mathcal{KC}/Y_3$ (which is discussed after Theorem \ref{thm:iso_KC/Y3}) using the short exact sequence (\ref{eq:Habiro_sec_bis}). Besides, we can deduce the following assertions. 
\begin{corollary}\label{cor:prop_modY3} The group $\mathcal{IC}/Y_{3}$ has the following properties: \begin{itemize} \item[(i)] It is torsion-free; \item[(ii)] Its center is $\mathcal{KC}/Y_3$; \item[(iii)] Its commutator subgroup is strictly contained in $Y_2\mathcal{IC}/Y_3$, and it is the image of $\mathbf{c}(\Gamma_2 \Torelli)$ by the canonical projection $\mathcal{IC} \to \mathcal{IC}/Y_3$. \end{itemize} \end{corollary} \begin{proof} According to Theorem \ref{thm:iso_KC/Y3}, the group $\mathcal{KC}/Y_{3}$ is free abelian. Since $\Lambda^3 H$ is also a free abelian group, assertion (i) follows from the short exact sequence (\ref{eq:Habiro_sec_bis}). We already know that (\ref{eq:Habiro_sec_bis}) is a central extension. To prove assertion (ii), it thus remains to show that any central element $M$ of $\mathcal{IC}/Y_3$ belongs to $\mathcal{KC}/Y_3$. Since $M$ is central, $t:=\tau_1(M)\in \Lambda^3 H$ satisfies $[t,\cdot]=0$ for the bracket introduced in Theorem \ref{thm:2cocycle}. By composing this bracket with $$ \xymatrix{ \jacobi_{2}^{<,c}(H) \subset \jacobi_{2}^{<,c}(H_\mathbb{Q}) \ar[r]^-{\chi^{-1}}_-\simeq & \jacobi_{2}^{c}(H_\mathbb{Q}) \ar@{->>}[r]^-{p_{2,2}} & \jacobi_{2,2}^{c}(H_\mathbb{Q}), } $$ we get a skew-symmetric bilinear form $b_\omega: \Lambda^3 H_\mathbb{Q} \times \Lambda^3 H_\mathbb{Q} \to \jacobi_{2,2}^{c}(H_\mathbb{Q})$ and, again, we have $b_\omega(t,\cdot)=0$. A direct computation shows that, for all $x_1,x_2,x_3,y_1,y_2,y_3 \in H_\mathbb{Q}$, $$ b_\omega(x_1\wedge x_2 \wedge x_3, y_1\wedge y_2 \wedge y_3)= - \frac{1}{4} \left| {\scriptsize \begin{array}{ccc} \omega(x_1, y_1) & \omega(x_1, y_2) & \omega(x_1 , y_3)\\ \omega(x_2 , y_1) & \omega(x_2, y_2) & \omega(x_2 , y_3)\\ \omega(x_3, y_1) & \omega(x_3 , y_2) & \omega(x_3 , y_3) \end{array} }\right|\cdot \hspace{-0.15cm} \figtotext{12}{12}{theta} \hspace{-0.15cm}. $$ (See \cite[Lemma 5.4]{HM_SJD}.) 
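For instance, if $g\geq 3$ and $(\alpha_i,\beta_i)_{i=1}^g$ denotes a symplectic basis of $(H_\mathbb{Q},\omega)$, this formula gives
$$
b_\omega(\alpha_1\wedge \alpha_2 \wedge \alpha_3, \beta_1\wedge \beta_2 \wedge \beta_3)= - \frac{1}{4} \hspace{-0.15cm} \figtotext{12}{12}{theta} \hspace{-0.15cm},
$$
since the matrix $\big(\omega(\alpha_i,\beta_j)\big)_{i,j}$ is the identity.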
By considering a symplectic basis of $(H_\mathbb{Q},\omega)$, it can be seen that $b_\omega$ is itself a symplectic form on $\Lambda^3 H_\mathbb{Q}$. We deduce that $t=0$, i.e.\ $M$ belongs to $\mathcal{KC}/Y_3$. We now prove assertion (iii). The inclusion $\Gamma_n\!\left(\mathcal{IC}/Y_k\right) \subset Y_n\mathcal{IC}/Y_k$ holds true for any integers $k\geq n \geq1$ by results of Goussarov \cite{Goussarov} and Habiro \cite{Habiro}. For $k=3$ and $n=2$, this inclusion is strict for the following reasons: in the case $g=0$, the group $\mathcal{IC}/Y_3$ is abelian but $Y_2\mathcal{IC}/Y_3$ is sent isomorphically to $2\mathbb{Z}$ by the Casson invariant \cite{Habiro}; in the case $g>0$, we know by Proposition \ref{prop:alpha_properties} that the group homomorphism $\alpha: \mathcal{IC}/Y_3 \to S^2H$ is not trivial on $Y_2\mathcal{IC}/Y_3$, but it is on $\Gamma_2\!\left(\mathcal{IC}/Y_3\right)$ since $S^2H$ is abelian. It now remains to show that $\Gamma_2\!\left(\mathcal{IC}/Y_3\right)$ is contained in $\mathbf{c}(\Gamma_2 \Torelli)/Y_3$ (the converse inclusion being trivially true). For this, we consider a finite product of commutators $$ p:=\prod_{i=1}^r [M_i,N_i] $$ in the group $\mathcal{IC}/Y_3$. If we replace one of the $M_i$'s by its product $M_i \cdot K$ with a $K \in \mathcal{KC}/Y_3$, the product $p$ remains unchanged since $K$ belongs to the center of $\mathcal{IC}/Y_3$. But any $M\in \mathcal{IC}/Y_3$ can be decomposed in the form $M=K\cdot \mathbf{c}(h)$ where $K \in \mathcal{KC}/Y_3$ and $h\in \Torelli$ since we have the short exact sequence (\ref{eq:Habiro_sec_bis}) and $\tau_1: \Torelli \to \Lambda^3 H$ is surjective. Thus we can assume that each $M_i$ and each $N_i$ in $p$ belongs to $\mathbf{c}(\Torelli)/Y_3$. It follows that $p$ belongs to $\mathbf{c}(\Gamma_2 \Torelli)/Y_3$, which completes the proof. \end{proof} \section{Characterization of the $J_2$-equivalence and the $J_3$-equivalence} \label{sec:J2_J3} In this section we characterize the $J_2$-equivalence and the $J_3$-equivalence relations (Theorem~B and Theorem~C, respectively).
\subsection{Proof of Theorem~B} The following lemma with $k=2$ proves the necessary condition in Theorem~B. \begin{lemma} \label{lem:rho_k} Let $M,M'\in \mathcal{IC}$. If $M$ is $J_k$-equivalent to $M'$, then we have $$ \rho_k(M)=\rho_k\left(M'\right) \ \in \operatorname{Aut}(\pi/\Gamma_{k+1}\pi). $$ \end{lemma} \begin{proof} By assumption, there exist a surface $S \subset \operatorname{int}(M)$ and an $s\in \mcg(S)[k]$ such that $M'=M_s$. Let $E$ be the closure of $M\setminus (S\times [-1,1])$ where $S\times [-1,1]$ is a regular neighborhood of $S$ in $M$. Thus $M'$ is obtained by gluing to $E$ the mapping cylinder of $s$. The van Kampen theorem shows that there exists a unique isomorphism between $\pi_1(M)/\Gamma_{k+1} \pi_1(M)$ and $\pi_1(M')/\Gamma_{k+1} \pi_1(M')$ such that the following diagram commutes: $$ \xymatrix @!0 @R=1cm @C=3cm { {\frac{\pi_1(M)}{\Gamma_{k+1} \pi_1(M)}} \ar@{-->}[rr]_-\simeq^-{\exists !} & & {\frac{\pi_1(M')}{\Gamma_{k+1} \pi_1(M')}} .\\ &{\frac{\pi_1\left(E\right)}{\Gamma_{k+1}\pi_1\left(E\right)}} \ar@{->>}[lu] \ar@{->>}[ru] & } $$ This triangular diagram can be ``expanded'' as follows: $$ \xymatrix{ &\frac{\pi_1\left(\Sigma\right)}{\Gamma_{k+1}\pi_1\left(\Sigma\right)} \ar[dl]^-\simeq_-{m_{+,*}} \ar[dr]_-\simeq^-{m_{+,*}'} \ar[dd]&\\ \frac{\pi_1\left(M\right)}{\Gamma_{k+1}\pi_1\left(M\right)}\ar[rr] |!{[ur];[dr]}\hole^<<<<<<<<{\simeq} &&\frac{\pi_1\left(M'\right)}{\Gamma_{k+1}\pi_1\left(M'\right)}\\ &\frac{\pi_1\left(E\right)}{\Gamma_{k+1}\pi_1\left(E\right)}\ar[lu]\ar[ru]&\\ &\frac{\pi_1\left(\Sigma\right)}{\Gamma_{k+1}\pi_1\left(\Sigma\right)} \ar@/^1pc/[uul]_-\simeq^-{m_{-,*}} \ar@/_1pc/[uur]^-\simeq_-{m_{-,*}'} \ar[u]& } $$ The front faces of this bipyramidal diagram commute. Therefore its back faces are commutative too, and we deduce that $\rho_k(M)=\rho_k(M')$. \end{proof} To prove the sufficient condition in Theorem~B, assume that $M,M' \in \mathcal{IC}$ are such that $\rho_2(M)=\rho_2(M')$. 
Since $\mathcal{IC}/Y_2$ is a group \cite{Goussarov,Habiro}, there is a $D\in \mathcal{IC}$ such that $M$ is $Y_2$-equivalent to $D\circ M'$. We deduce that $\rho_2(D)=1$ or, equivalently, that $\tau_1(D)=0$. As in the proof of Lemma \ref{lem:d^2beta}, we can use \cite{MM} to find some $Y$-graphs with special leaves $G_1, \dots, G_m$ in $(\Sigma \times I)$ such that $D$ is $Y_2$-equivalent to $\prod_{i=1}^m (\Sigma\times I)_{G_i}$. Since surgery along a $Y$-graph with special leaf is equivalent to a Dehn twist along a bounding simple closed curve of genus $1$, each cobordism $(\Sigma\times I)_{G_i}$ is $J_2$-equivalent to $(\Sigma \times I)$. Using the fact that ``$Y_2 \Rightarrow J_2$'', we conclude that $$ M\stackrel{J_2}{\sim} D \circ M' \stackrel{J_2}{\sim} \prod_{i=1}^m (\Sigma\times I)_{G_i} \circ M' \stackrel{J_2}{\sim} (\Sigma \times I) \circ M' = M'. $$ \subsection{Proof of Theorem~C} The necessary condition in Theorem~C follows from Lemma \ref{lem:rho_k} and the next result (with $k=3$). \begin{lemma} \label{lem:tau_mod_Ik} Let $M,M'\in \mathcal{IC}$. If $M$ is $J_k$-equivalent to $M'$, then we have $$ \tau(M',\partial_-M';\xi_0)- \tau(M,\partial_-M;\xi_0) \in I^k. $$ \end{lemma} \noindent The analogous statement for closed $3$-manifolds is proved in \cite[Lemma 4.14]{Massuyeau_torsion}. The proof is easily adapted to homology cylinders: see the proof of Theorem \ref{thm:torsion_finiteness}. To prove the sufficient condition in Theorem~C, we need the following result of Morita. \begin{proposition}[Morita \cite{Morita_Casson1}]\label{prop:Morita} There exists a homology $3$-sphere $P$ that is $J_3$-equivalent to $S^3$ and whose Casson invariant is $+1$. \end{proposition} \noindent This is proved in \cite[Proposition 6.4]{Morita_Casson1}. Since this will play a crucial role in the sequel, we would like to expand a little on Morita's argument.
\begin{proof}[Proof of Proposition \ref{prop:Morita}] Let $R$ be a compact connected oriented surface of genus $2$ with one boundary component, and let $j:R\to S^3$ be a Heegaard embedding as explained in \S \ref{subsec:Casson}. Morita shows that there exists an element $\psi\in \mcg(R)[3]$ such that $\lambda_j(\psi)=1$. To prove this, he uses his decomposition formula for the Casson invariant (which is recalled in \S \ref{subsec:review_Morita}) and he considers the following element of $\left(\Lambda^2 H \otimes \Lambda^2H\right)^{\mathfrak{S}_2}$: $$ s_1:= (\alpha_1\wedge \beta_1)\leftrightarrow (\alpha_2\wedge \beta_2) -(\alpha_1\wedge \alpha_2)\leftrightarrow (\beta_1\wedge \beta_2) + (\alpha_1\wedge \beta_2)\leftrightarrow (\beta_1\wedge \alpha_2). $$ Note that $s_1\in \Lambda^4H\subset \left(\Lambda^2 H \otimes \Lambda^2H\right)^{\mathfrak{S}_2}$. Morita claims that there exists a family of elements $u_1,v_1,\dots,u_r,v_r$ of $H$ such that $\omega(u_i,v_i)=1$ for each $i\in \{1,\dots, r\}$, and \begin{equation}\label{eq:s1} s_1 = 3\cdot (\alpha_1\wedge \beta_1 + \alpha_2\wedge \beta_2)^{\otimes 2} + \sum_{i=1}^r \pm (u_i\wedge v_i)^{\otimes 2}. 
\end{equation} For example, it can be checked that the following equality holds \begin{eqnarray*} s_1 \quad =& & 3\cdot (\alpha_1\wedge \beta_1 + \alpha_2\wedge \beta_2)^{\otimes 2} \\ & + & \big((2\alpha_1 + \beta_2)\wedge (\beta_1 + \alpha_2)\big)^{\otimes 2} - \big((\alpha_1 + \beta_1 + \alpha_2)\wedge (\alpha_1 + \beta_1 + \beta_2)\big)^{\otimes 2} \\ & + & \big((\alpha_1 - \beta_2)\wedge \beta_1\big)^{\otimes 2} + \big( \alpha_1\wedge (\beta_1 - \alpha_2)\big)^{\otimes 2} \\ & - & 2\cdot\big( \alpha_1\wedge (\beta_1 + \alpha_2)\big)^{\otimes 2} - 2\cdot \big(\alpha_2\wedge (\alpha_1 + \beta_2)\big)^{\otimes 2} \\ & - & \big( \alpha_1\wedge (\beta_1 + \alpha_2 + \beta_2)\big)^{\otimes 2} + \big( \alpha_2\wedge (\alpha_1 + \beta_1 + \beta_2)\big)^{\otimes 2} \\ & + & \big((\alpha_1 + \beta_1 + \alpha_2)\wedge \beta_2\big)^{\otimes 2} - \big((\alpha_1 + \alpha_2 + \beta_2)\wedge \beta_1\big)^{\otimes 2} \\ & + & \big( \alpha_1\wedge (\beta_1 + \beta_2)\big)^{\otimes 2} - \big((\beta_1 + \alpha_2)\wedge \beta_2\big)^{\otimes 2} + \big((\alpha_1 + \alpha_2)\wedge \beta_1\big)^{\otimes 2} \\ & - & 7\cdot(\alpha_1\wedge \beta_1)^{\otimes 2} - 2\cdot(\alpha_2\wedge \beta_2)^{\otimes 2}, \end{eqnarray*} which proves the existence of such a family. Now, for each pair of elements $(u_i,v_i)$ such that $\omega(u_i,v_i)=1$, there exists some $\phi_i\in \operatorname{Sp}(H)$ such that $\phi_i(u_i)=\alpha_1$ and $\phi_i(v_i)=\beta_1$. Therefore, there exists a genus one bounding simple closed curve $\gamma_i$ such that the Dehn twist $T_{\gamma_i}$ along $\gamma_i$ satisfies $\tau_2(T_{\gamma_i})=(u_i\wedge v_i)^{\otimes 2}$. Let also $\gamma_0$ be a genus two bounding simple closed curve such that $\tau_2(T_{\gamma_0})=(\alpha_1\wedge \beta_1 + \alpha_2\wedge \beta_2)^{\otimes 2}$. (Here, we have used Morita's formula \cite[Proposition 1.1]{Morita_Casson1} for $\tau_2$: see (\ref{eq:tau2BSCC}) below.) 
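As a side check (ours, not part of Morita's argument), the displayed decomposition of $s_1$ can also be verified by computer. A minimal sketch in Python, identifying $H$ with $\mathbb{Z}^4$ via the basis $(\alpha_1,\beta_1,\alpha_2,\beta_2)$ and embedding $\Lambda^2 H\otimes \Lambda^2H$ into $H^{\otimes 4}$ by $u\wedge v\mapsto u\otimes v - v\otimes u$:

```python
import numpy as np

# Basis of H = Z^4: the rows of the identity matrix stand for a1, b1, a2, b2.
a1, b1, a2, b2 = np.eye(4, dtype=np.int64)

def wedge(u, v):
    # u ^ v in Lambda^2 H, embedded in H (x) H as  u (x) v - v (x) u.
    return np.outer(u, v) - np.outer(v, u)

def sq(u, v):
    # (u ^ v)^{(x)2} in Lambda^2 H (x) Lambda^2 H, as a 4x4x4x4 tensor.
    w = wedge(u, v)
    return np.einsum('ij,kl->ijkl', w, w)

def arrow(a, b, c, d):
    # (a ^ b) <-> (c ^ d) := (a ^ b) (x) (c ^ d) + (c ^ d) (x) (a ^ b).
    w1, w2 = wedge(a, b), wedge(c, d)
    return np.einsum('ij,kl->ijkl', w1, w2) + np.einsum('ij,kl->ijkl', w2, w1)

# Left-hand side: Morita's element s_1.
s1 = arrow(a1, b1, a2, b2) - arrow(a1, a2, b1, b2) + arrow(a1, b2, b1, a2)

# Right-hand side: the claimed decomposition (eq:s1), term by term.
w0 = wedge(a1, b1) + wedge(a2, b2)
rhs = (3 * np.einsum('ij,kl->ijkl', w0, w0)
       + sq(2*a1 + b2, b1 + a2) - sq(a1 + b1 + a2, a1 + b1 + b2)
       + sq(a1 - b2, b1) + sq(a1, b1 - a2)
       - 2*sq(a1, b1 + a2) - 2*sq(a2, a1 + b2)
       - sq(a1, b1 + a2 + b2) + sq(a2, a1 + b1 + b2)
       + sq(a1 + b1 + a2, b2) - sq(a1 + a2 + b2, b1)
       + sq(a1, b1 + b2) - sq(b1 + a2, b2) + sq(a1 + a2, b1)
       - 7*sq(a1, b1) - 2*sq(a2, b2))

assert np.array_equal(s1, rhs)  # the two tensors agree entry-wise
```

(Restricting to genus $2$ suffices here, since every class involved lies in the span of $\alpha_1,\beta_1,\alpha_2,\beta_2$.)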
Then $$ \psi := T_{\gamma_0}^{-3} \circ \prod_{i=1}^rT_{\gamma_i}^{\mp 1} \ \in \mcg(R)[2] $$ has the desired properties. Indeed we have $$ \tau_2(\psi)=\{-s_1\}=0 \ \in \left(\Lambda^2 H \otimes \Lambda^2H\right)^{\mathfrak{S}_2}/\Lambda^4H, $$ so that $\psi$ actually belongs to $\mcg(R)[3]$, and $$ \lambda_j(\psi)=-\frac{1}{24}d(\psi) = -\frac{1}{24}\left( -3\cdot 8+ 0\right)=1. $$ Here, $d:\mcg(R)[2]\to \mathbb{Z}$ denotes Morita's core of the Casson invariant and two of his formulas are used: see (\ref{eq:decomposition_d}) and (\ref{eq:core_bscc}) below. This completes the proof. \end{proof} We now prove the sufficient condition in Theorem~C. Let $M,M' \in \mathcal{IC}$ be such that $\rho_3(M)=\rho_3(M')$ and $\alpha(M)=\alpha(M')$. Since $\mathcal{IC}/Y_3$ is a group \cite{Goussarov,Habiro}, there is a $D\in \mathcal{IC}$ such that $M$ is $Y_3$-equivalent to $D\circ M'$. This $D$ satisfies $\rho_3(D)=1$ and $\alpha(D)=0$. Let $P$ be the homology $3$-sphere from Proposition \ref{prop:Morita} and set $$ D' := \left(\Sigma \times I\right) \sharp P^{\sharp \lambda_j(D)}. $$ Then, $D$ and $D'$ share the same invariants $\rho_3,\alpha$ and $\lambda_j$ so that they are $Y_3$-equivalent by Theorem~A. Using the fact that ``$Y_3 \Rightarrow J_3$'', we conclude that $$ M\stackrel{J_3}{\sim} D \circ M' \stackrel{J_3}{\sim} D' \circ M' = M' \sharp P^{\sharp \lambda_j(D)} \stackrel{J_3}{\sim} M'. $$ \begin{remark} According to Habiro \cite{Habiro}, the $Y_4$-equivalence for homology $3$-spheres is also classified by the Casson invariant. One can wonder whether there exists a homology $3$-sphere that is $J_4$-equivalent to $S^3$ and whose Casson invariant is $+1$. It would follow from an affirmative answer to this question, and the same argument as above, that any homology $3$-sphere is $J_4$-equivalent to $S^3$, thus improving Pitsch's result \cite{Pitsch}. 
\end{remark} \section{Core of the Casson invariant for homology cylinders}\label{sec:core_Casson} In this section, we extend Morita's definition of the core of the Casson invariant \cite{Morita_Casson1,Morita_Casson2} to the monoid $\mathcal{KC}=\mathcal{C}[2]$ (Theorem~D). At this point, it is important to emphasize how our sign conventions and notation differ from Morita's. The $k$-th Johnson homomorphism $\tau_k: \mcg[k] \to H \otimes\mathcal{L}_{k+1}(H)$ defined in \S \ref{subsec:Johnson} corresponds to $-\tau_{k+1}$ in Morita's papers. (Note the shift of index and the minus sign: we have identified $H$ with $H^*$ by $h\mapsto \omega(h,\cdot)$ while Morita uses the map $h\mapsto \omega(\cdot,h)$ in his papers.) Besides, for a Heegaard embedding $j: \Sigma \to S^3$, our map $\lambda_j\circ \mathbf{c}: \Torelli \to \mathbb{Z}$ defined in \S \ref{subsec:Casson} corresponds to $-\lambda_j^*$ in Morita's papers. \subsection{A quick review of Morita's work}\label{subsec:review_Morita} We summarize some of the results obtained by Morita in \cite{Morita_Casson1,Morita_Casson2}. Let $j:\Sigma \to S^3$ be a Heegaard embedding as in \S \ref{subsec:Casson}. Morita proved that the restriction of $\lambda_j$ to $\Johnson=\mcg[2]$ is a group homomorphism. This restricted map ``suffices'' for the understanding of the Casson invariant, since any homology $3$-sphere is $J_2$-equivalent to $S^3$. Furthermore, Morita showed that $\lambda_j:\Johnson \to \mathbb{Z}$ decomposes as a sum of two homomorphisms, one being completely determined by the second Johnson homomorphism $\tau_2$, and the other one -- which he calls the \emph{core} of the Casson invariant -- being independent of the embedding $j$. Let us recall this decomposition in more detail. 
To define the first of the two homomorphisms in Morita's decomposition, let $\mathcal{A}$ be the algebra over $\mathbb{Z}$ generated by elements $l(u,v)$, for each pair of elements $u$, $v$ of $H$, and subject to the relations $$ l(n\cdot u+n'\cdot u',v)=n\cdot l(u,v) + n' \cdot l(u',v)\quad \textrm{and}\quad l(v,u) = l(u,v) +\omega(u,v) $$ for all $u,u',v \in H$ and $n,n'\in \mathbb{Z}$. The embedding $j:\Sigma \to S^3$ defines an algebra homomorphism $\varepsilon_j:\mathcal{A}\rightarrow \mathbb{Z}$ by setting $$ \varepsilon_j\left( l(u,v) \right) := \operatorname{lk}(u,v^+), $$ where $v^+$ is a push-off of $v$ in the positive normal direction of $F_g= j(\Sigma)\subset S^3$. Let $\theta: \left(\Lambda^2 H \otimes \Lambda^2H\right)^{\mathfrak{S}_2}\rightarrow \mathcal{A}$ be the group homomorphism defined by $$ \left\{\begin{array}{lcl} \theta\left( (u\wedge v)\otimes (u\wedge v)\right) &:= &l(u,u)l(v,v) - l(u,v)l(v,u)\\ \theta\left( (a\wedge b)\leftrightarrow (c\wedge d) \right) &:= & l(a,c)l(b,d) - l(a,d)l(b,c) - l(d,a)l(c,b) + l(c,a)l(d,b). \end{array}\right. $$ Using Casson's formula relating the variation of his invariant under surgery along a $(\pm 1)$-framed knot to the Alexander polynomial of that knot, Morita was able to compute the value of $\lambda_j$ on a twist $T_\gamma$ along a bounding simple closed curve $\gamma \subset \Sigma$. He found that \begin{equation} \label{eq:lambda_bscc} \lambda_j(T_\gamma) = \varepsilon_j\circ \theta(\omega_\gamma\otimes \omega_\gamma) \end{equation} where $\omega_\gamma$ is the symplectic form of the subsurface of $\Sigma$ bounded by $\gamma$. (More precisely, we have $\omega_\gamma=\sum_{i=1}^h u_i\wedge v_i$ if the subsurface has genus $h$ and if $(u_i,v_i)_{i=1}^h$ is any symplectic basis of its first homology group.) 
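For instance, if $\gamma$ has genus $1$ and $(u,v)$ is a symplectic basis of the first homology group of the subsurface bounded by $\gamma$, then $\omega_\gamma = u\wedge v$ and formula (\ref{eq:lambda_bscc}) specializes to
$$
\lambda_j(T_\gamma) = \varepsilon_j\big( l(u,u)l(v,v) - l(u,v)l(v,u) \big) = \operatorname{lk}(u,u^+)\operatorname{lk}(v,v^+) - \operatorname{lk}(u,v^+)\operatorname{lk}(v,u^+).
$$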
Let also $\overline{d}: \left(\Lambda^2 H \otimes \Lambda^2H\right)^{\mathfrak{S}_2}\rightarrow \mathbb{Z}$ be the homomorphism defined by $$ \left\{\begin{array}{lcl} \overline{d}\left( (u\wedge v)\otimes (u\wedge v) \right)& := & 0\\ \overline{d}\left( (a\wedge b)\leftrightarrow (c\wedge d) \right) &:= & \omega(a,b)\omega(c,d) - \omega(a,c)\omega(b,d) + \omega(a,d)\omega(b,c).\\ \end{array}\right. $$ It turns out that $ \overline{q_j}:=\varepsilon_j\circ \theta + \frac{1}{3}\overline{d} $ vanishes on $\Lambda^4 H\subset \left(\Lambda^2 H \otimes \Lambda^2H\right)^{\mathfrak{S}_2}$, so that we can see it as a homomorphism $\overline{q_j}:\operatorname{D}_2(H) \to \mathbb{Z}$. (Recall that the target $\operatorname{D}_2(H)$ of $\tau_2$ is identified with $\left(\Lambda^2 H \otimes \Lambda^2H\right)^{\mathfrak{S}_2}/\Lambda^4H$ by Proposition \ref{prop:Morita-Levine}.) Hence we obtain a homomorphism $$ q_j := -\overline{q_j}\circ \tau_2:\Johnson \longrightarrow \mathbb{Q}. $$ Another description of $q_j$ is given by Perron in \cite{Perron} using Fox's differential calculus. The definition of the second homomorphism in Morita's decomposition of $\lambda_j$ is more delicate. Let $k: \mcg \to H$ be a crossed homomorphism whose homology class generates $H^1(\mcg;H) \simeq \mathbb{Z}$, see \cite{Morita_families1}, and which is invariant under stabilization of the surface $\Sigma$. There is a $2$-cocycle $c_k: \mcg \times \mcg \to \mathbb{Z}$ associated to $k$ by the formula $c_k(\phi,\psi) := \omega(k(\phi^{-1}),k(\psi))$. This cocycle represents the first characteristic class of surface bundles $e_1 \in H^2(\mcg)$ introduced by Morita in \cite{Morita_characteristic1,Morita_characteristic2}. Meyer's signature $2$-cocycle on $\operatorname{Sp}(H)$ \cite{Meyer} composed with $\rho_1:\mcg \to \operatorname{Sp}(H)$ gives another $2$-cocycle $\tau$ on $\mcg$ such that $[-3\tau]=e_1$. 
Since $\mcg$ is perfect (in genus $g\geq 3$), there is a unique $1$-cochain $d_k:\mcg \to \mathbb{Z}$ whose coboundary is $c_k+3\tau$ and which is preserved by stabilization of the surface $\Sigma$. Because the $1$-cocycle $k$ vanishes on $\Johnson$, the restriction of $d_k$ to $\Johnson$ is a group homomorphism $$ d:\Johnson \longrightarrow \mathbb{Z} $$ which does not depend on the choice of $k$. According to Johnson \cite{Johnson_subgroup}, the group $\Johnson$ is generated by Dehn twists along bounding simple closed curves, and Morita proves that \begin{equation} \label{eq:core_bscc} d(T_\gamma)=4h(h-1) \end{equation} for any bounding simple closed curve $\gamma\subset \Sigma$ of genus $h$. He deduces from (\ref{eq:lambda_bscc}) and (\ref{eq:core_bscc}) the following decomposition formula for $\lambda_j$: \begin{equation} \label{eq:decomposition_d} -\lambda_j = \frac{1}{24} d +q_j: \Johnson \longrightarrow \mathbb{Z}. \end{equation} Recall that any homology $3$-sphere is $J_3$-equivalent to $S^3$, as expected by Morita \cite{Morita_Casson1} and proved by Pitsch \cite{Pitsch}. Thus the homomorphism $d:\Johnson \to \mathbb{Z}$, and more precisely its restriction to the subgroup $\mcg[3]$, contains all the topological information on homology $3$-spheres carried by the Casson invariant: Morita calls $d$ the \emph{core of the Casson invariant}. Observe that $d$ takes values in $8 \mathbb{Z}$ according to (\ref{eq:core_bscc}), and that it is obviously trivial in genus $g=0,1$. \subsection{Proof of the existence in Theorem~D}\label{subsec:existence} We now go back to homology cylinders and we consider the submonoid $\mathcal{KC}=\mathcal{C}[2]$ of $\mathcal{IC}$. We shall prove that, for any genus $g\geq 0$, the group homomorphism $d:\Johnson \to 8\mathbb{Z}$ can be extended to a $Y_3$-invariant and $\mcg$-conjugacy invariant monoid homomorphism $d:\mathcal{KC} \to 8\mathbb{Z}$.
We start by considering the map $$ d'': \mathcal{KC} \longrightarrow \mathbb{Q} $$ defined by $d'':=p_{2,2}\circ Z_2$. In other words, $d''(M)$ is the coefficient of $\hspace{-0.15cm} \figtotext{12}{12}{theta} \hspace{-0.15cm}$ in $Z(M)$. \begin{lemma}\label{lem:d''} The map $d''$ is a monoid homomorphism and has the following properties: \begin{itemize} \item[(i)] It is canonical, i.e.\ it does not depend on the choices which are needed in the construction of the LMO homomorphism $Z$; \item[(ii)] It is invariant under $Y_3$-equivalence and conjugation by $\mcg$; \item[(iii)] It takes values in $\frac{1}{8}\mathbb{Z}$. \end{itemize} \end{lemma} \begin {proof} For any $M,M'\in \mathcal{IC}$, we have $Z(M\circ M')= Z(M)\star Z(M')$ so that the coefficient of $\hspace{-0.15cm} \figtotext{12}{12}{theta} \hspace{-0.15cm}$ in $Z(M\circ M')$ is the sum of the same coefficients in $Z(M)$ and $Z(M')$, plus a contribution of $Z_1(M)\star Z_1(M')$. By (\ref{eq:tau1_LMO}) the latter vanishes if $M,M'$ belong to $\mathcal{KC}$, so that $d''$ is a monoid homomorphism. To prove (i), observe that for every $M \in \mathcal{KC}$ the square of $\{M\}\in \mathcal{KC}/Y_3$ belongs to the subgroup $Y_2\mathcal{IC}/Y_3$ (as follows from \cite{MM} or from \S \ref{subsec:description1}). Thus, the formula \begin{equation} \label{eq:half_d''} d''(M)=\frac{1}{2} d''(M\circ M) \end{equation} shows that $d''$ is determined by its restriction to $Y_2\mathcal{IC}$. Since the inverse of $Z_2: (Y_2\mathcal{IC} / Y_3)\otimes \mathbb{Q} \rightarrow \jacobi^c_2(H_\mathbb{Q})$ is the surgery map $\psi_2 \circ \chi_2$ by (\ref{eq:univLMO}), it does not depend on the choices involved in the construction of $Z$. We deduce assertion (i). The invariance of $d''$ under $Y_3$-equivalence is inherited from $Z_2:\mathcal{IC} \to \jacobi_2(H_\mathbb{Q})$. 
Since $Z_2:Y_2\mathcal{IC} \to \jacobi_2^c(H_\mathbb{Q})$ is $\mcg$-equivariant \cite{HM_SJD}, we have $$ d''\left(\mathbf{c}(f)\circ M \circ \mathbf{c}(f^{-1})\right) \stackrel{(\ref{eq:half_d''})}{=} \frac{1}{2}d''\left(\mathbf{c}(f)\circ M^2\circ \mathbf{c}(f^{-1})\right) = \frac{1}{2}d''(M^2) \stackrel{(\ref{eq:half_d''})}{=} d''(M) $$ for all $M\in \mathcal{KC}$ and $f\in \mcg$. This proves assertion (ii). We prove (iii). For any graph clasper $G$ of degree $2$ in $(\Sigma\times I)$, we have $\widetilde{Z}^Y_2\left( (\Sigma\times I)_{G} \right)=\pm D$ where the Jacobi diagram $D$ has the same shape as $G$ \cite{CHM}. Thus we have $$ \widetilde{Z}^Y_2\left( Y_2\mathcal{IC} \right) \subset \jacobi^c_2(L^\pm) \subset \jacobi^c_2(L^\pm_\mathbb{Q}). $$ As seen in \S \ref{subsec:Jacobi}, we also have $s\circ \varphi\left( \jacobi^c_2(L^\pm) \right) = \jacobi^{<,c}_2(H)$, and it follows from the formula for $\chi^{-1}$ given in \cite{HM_SJD} that $\chi^{-1}\left( \jacobi^{<,c}_2(H) \right)$ is contained in $\frac{1}{4} \jacobi^{c}_2(H)$. Thus, we have $$ Z_2(Y_2 \mathcal{IC})\subset \frac{1}{4} \jacobi^{c}_2(H) \subset \jacobi^{c}_2(H_\mathbb{Q}) $$ so that $d''(Y_2 \mathcal{IC})$ is contained in $\frac{1}{4}\mathbb{Z}$. We conclude thanks to (\ref{eq:half_d''}) that $d''(\mathcal{KC})\subset \frac{1}{8}\mathbb{Z}$. \end{proof} We are now going to express $d''$ in terms of the classical invariants $\lambda_j$, $\alpha$ and $\tau_2$. Here the Heegaard embedding $j:\Sigma \to S^3$ is chosen compatibly with the system of meridians and parallels $(\alpha,\beta)$ as explained in the paragraph preceding Lemma \ref{lem:Casson_LMO}. 
We shall denote by $\langle \cdot , \cdot \rangle : L^\pm\times L^\pm \rightarrow \mathbb{Z}$ the symmetric bilinear form defined by $$ \forall i,j\in \{1,\dots,g\}, \quad \langle i^+ , j^- \rangle := \delta_{i,j}\quad,\quad \langle i^+ , j^+ \rangle := 0 \quad,\quad \langle i^- , j^- \rangle := 0 $$ and, for all $a,b,c,d\in L^\pm$, we set \begin{eqnarray*} \operatorname{H}_\Phi\big(\hn{a}{b}{c}{d}\big) & := & \langle a,d \rangle\phin{b}{c} + \langle b,c \rangle\phin{a}{d} -\langle a,c \rangle\phin{b}{d} -\langle b,d \rangle\phin{a}{c}, \\ \operatorname{H}_\Theta\big(\hn{a}{b}{c}{d}\big) & := & \big( \langle a,d \rangle\langle b,c \rangle -\langle a,c \rangle\langle b,d \rangle\big) \hspace{-0.15cm} \figtotext{12}{12}{theta} \hspace{-0.15cm}, \\ \Phi_\Theta\big(\phin{a}{b}\big) & := & \langle a,b \rangle \hspace{-0.15cm} \figtotext{12}{12}{theta} \hspace{-0.15cm}. \end{eqnarray*} \begin{lemma}\label{lem:formula_d''} For all $M\in \mathcal{KC}$, we have \begin{equation}\label{eq:formula_d''} d''(M) = -\frac{\lambda_j(M)}{2} - \frac{1}{4}\Phi_\Theta(\alpha(M)) - \frac{1}{4}\operatorname{H}_\Theta(\tau_2(M)). \end{equation} \end{lemma} \noindent Here, and in the sequel, a tacit identification between $\jacobi^c(L^\pm_\mathbb{Q})$ and $\jacobi^c(H_\mathbb{Q})$ is always through the ``obvious'' isomorphism that transforms $L^\pm_\mathbb{Q}$-colored diagrams into $H_\mathbb{Q}$-colored diagrams by the rules (\ref{eq:L+-_H}). 
Thus the second Johnson homomorphism $$ \tau_2(M) \in \frac{\left(\Lambda^2 H \otimes \Lambda^2H\right)^{\mathfrak{S}_2}}{\Lambda^4H} \subset \frac{\left(\Lambda^2 H_\mathbb{Q} \otimes \Lambda^2H_\mathbb{Q}\right)^{\mathfrak{S}_2}}{\Lambda^4H_\mathbb{Q}} \simeq \frac{S^2 \Lambda^2H_\mathbb{Q}}{\Lambda^4H_\mathbb{Q}} \simeq \jacobi^c_{2,0}(H_\mathbb{Q}) $$ is seen as an element of $\jacobi^c_{2,0}(L^\pm_\mathbb{Q})$, while the quadratic part of the relative RT torsion $$ \alpha(M) \in S^2H \simeq \jacobi^c_{2,1}(H) $$ is interpreted as an element of $\jacobi^c_{2,1}(L^\pm)$. \begin{proof}[Proof of Lemma \ref{lem:formula_d''}] The ``non-obvious'' isomorphism $\kappa:\jacobi^c(L^\pm_\mathbb{Q})\rightarrow \jacobi^c(H_\mathbb{Q})$ defined by (\ref{eq:kappa_formula}) is given in degree $2$ by the formulas \begin{eqnarray*} \kappa \big(\hn{a}{b}{c}{d}\big) & = & -\hn{a}{b}{c}{d} -\frac{1}{2} \operatorname{H}_\Phi\big(\hn{a}{b}{c}{d}\big) - \frac{1}{4} \operatorname{H}_\Theta\big(\hn{a}{b}{c}{d}\big),\\ \kappa \big(\phin{a}{b}\big) & = & \phin{a}{b} + \frac{1}{2} \Phi_\Theta\big(\phin{a}{b}\big), \\ \kappa (\hspace{-0.15cm} \figtotext{12}{12}{theta} \hspace{-0.15cm} ) & = & -\hspace{-0.15cm} \figtotext{12}{12}{theta} \hspace{-0.15cm}, \end{eqnarray*} where the labels on the right-hand side of these equalities are understood as elements of $H_\mathbb{Q}$ by the rules (\ref{eq:L+-_H}). Using these formulas, we obtain that $d''=p_{2,2}\circ \kappa\circ \widetilde{Z}^Y_2$ is given, for any $M\in \mathcal{KC}$, by \begin{equation} \label{eq:d''} d''(M) = - p_{2,2}\circ \widetilde{Z}^Y_2(M) + \frac{1}{2}\Phi_\Theta\big(p_{2,1}\circ \widetilde{Z}^Y_2(M)\big) - \frac{1}{4}\operatorname{H}_\Theta\big(p_{2,0}\circ \widetilde{Z}^Y_2(M)\big). 
\end{equation} By Lemma \ref{lem:tau2_LMO}, we know that $$ p_{2,0}\circ \widetilde{Z}^Y_2(M)=-p_{2,0}\circ Z_2(M) = -\tau_2(M), $$ and by Lemma \ref{lem:alpha_LMO}, we have that $$ \alpha(M) = -2 p_{2,1}\circ Z_2(M) = -2 \left( p_{2,1}\circ \widetilde{Z}^Y_2(M) -\frac{1}{2}\operatorname{H}_\Phi\big(p_{2,0}\circ \widetilde{Z}^Y_2(M)\big) \right). $$ It follows that $$ p_{2,1}\circ \widetilde{Z}^Y_2(M) = -\frac{1}{2} \alpha(M) + \frac{1}{2} \operatorname{H}_\Phi(p_{2,0}\circ \widetilde{Z}^Y_2(M)\big) = -\frac{1}{2} \alpha(M) - \frac{1}{2} \operatorname{H}_\Phi(\tau_2(M)). $$ Finally, recall from (\ref{eq:cassonLMO}) that the coefficient of $p_{2,2}\circ \widetilde{Z}^Y_2(M)$ is $\frac{1}{2}\lambda_j(M)$. The result follows from these interpretations of $p_{2,i}\circ \widetilde{Z}^Y_2(M)$ for $i=0,1,2$ and equation (\ref{eq:d''}). \end{proof} Recall from Proposition \ref{prop:alpha_properties} that $\alpha\circ \mathbf{c}: \Torelli \to S^2H$ is trivial. Therefore, equation (\ref{eq:formula_d''}) gives another decomposition of $\lambda_j$ on the subgroup $\Johnson$: \begin{equation}\label{eq:decomposition_d''} -\lambda_j= 2d'' + \frac{1}{2}\operatorname{H}_\Theta\circ \tau_2: \Johnson \longrightarrow \mathbb{Z}. \end{equation} This identity should be compared to Morita's decomposition (\ref{eq:decomposition_d}). Whereas $d$ depends quadratically on the genus of bounding simple closed curves (\ref{eq:core_bscc}), the next lemma shows that $d''$ depends linearly on the genus. Thus the decomposition (\ref{eq:decomposition_d''}) is essentially Auclair's formula \cite[Theorem 4.4.6]{Auclair}. \begin{lemma}\label{lem:d''_bscc} Let $\gamma\subset \Sigma$ be a bounding simple closed curve of genus $h$, and let $T_\gamma$ denote the Dehn twist along $\gamma$. Then we have $d''(T_\gamma)=-h/8$. 
\end{lemma} \begin{proof} Since $d''$ is $\mcg$-conjugacy invariant, we can assume without loss of generality that the curve $j(\gamma)$ bounds a disk in the lower handlebody of the genus $g$ Heegaard splitting of $S^3$. Thus the $3$-manifold $S^3(\mathbf{c}(T_\gamma),j)$ is obtained from $S^3$ by surgery along a $(+1)$-framed trivial knot, so that it is homeomorphic to $S^3$. Hence we have $\lambda_j(T_\gamma)=0$, and we deduce from Lemma \ref{lem:formula_d''} that $$ d''(T_\gamma) = -\frac{1}{4}\operatorname{H}_\Theta(\tau_2(T_\gamma)). $$ By \cite[Proposition 1.1]{Morita_Casson1}, we have (taking into account the difference in sign conventions) \begin{equation}\label{eq:tau2BSCC} \tau_2(T_\gamma) = \frac{1}{2}\sum_{i=1}^h\hn{\alpha_i}{\beta_i}{\alpha_i}{\beta_i} + \sum_{1\leq i<j\leq h} \hn{\alpha_i}{\beta_i}{\alpha_j}{\beta_j}. \end{equation} The result follows from the observations that $$ \operatorname{H}_\Theta\big( \hn{\alpha_i}{\beta_i}{\alpha_i}{\beta_i} \big) = \hspace{-0.15cm} \figtotext{12}{12}{theta} \hspace{-0.15cm} \quad\textrm{and}\quad \operatorname{H}_\Theta\big( \hn{\alpha_i}{\beta_i}{\alpha_j}{\beta_j} \big) = 0 $$ for all $i\neq j$. \end{proof} We also need the homomorphism $\overline{d'}: \operatorname{D}_2(H) \simeq \left(\Lambda^2 H \otimes \Lambda^2H\right)^{\mathfrak{S}_2}/\Lambda^4H\rightarrow \mathbb{Z}$ defined by Morita \cite{Morita_Casson2} in the following way: $$ \left\{\begin{array}{lcl} \overline{d'}\left( (a\wedge b)\otimes (a\wedge b)\right) & := & -3 \omega(a,b)^2\\ \overline{d'}\left( (a\wedge b)\leftrightarrow (c\wedge d) \right) &:= & -4 \omega(a,b)\omega(c,d) - 2 \omega(a,c)\omega(b,d) + 2 \omega(a,d)\omega(b,c). \end{array}\right. $$ We get a monoid homomorphism $$ d': \mathcal{KC} \longrightarrow \mathbb{Z} $$ by setting $d':=-\overline{d'}\circ \tau_2$. Observe that $d'$ shares the same properties as $d''$: it is canonical and it is invariant under $Y_3$-equivalence as well as under the action of $\mcg$ by conjugation. 
A simple computation based on (\ref{eq:tau2BSCC}) gives \begin{equation} \label{eq:d'_bscc} d'(T_\gamma)=h(2h+1), \end{equation} for any bounding simple closed curve $\gamma\subset \Sigma$ of genus $h$ \cite{Morita_Casson2}. Morita proved that, in genus $g\geq 2$, any $\mcg$-conjugacy invariant group homomorphism $\Johnson \to \mathbb{Z}$ can be written in a unique way as a linear combination (with rational coefficients) of $d$ and $d'|_\Johnson$ \cite{Morita_Casson2}. The next lemma expresses $8d''|_\Johnson$ in this way. \begin{lemma}\label{lem:d_d'_d''} We have \begin{equation} \label{eq:d_d'_d''} 8d''|_{\Johnson} = \frac{1}{6}d - \frac{1}{3}d'|_{\Johnson}. \end{equation} \end{lemma} \begin{proof} Equation (\ref{eq:d_d'_d''}) can be deduced from (\ref{eq:decomposition_d}), (\ref{eq:decomposition_d''}) and the definition of $d'$ by a direct computation. Alternatively, we can use the fact that $\Johnson$ is generated by Dehn twists along bounding simple closed curves \cite{Johnson_subgroup}. Let $\gamma \subset \Sigma$ be a bounding simple closed curve of genus $h$. Equations (\ref{eq:core_bscc}) and (\ref{eq:d'_bscc}) give $$ \frac{1}{6}d(T_\gamma) - \frac{1}{3}d'(T_\gamma) = \frac{1}{6} \cdot 4h(h-1) - \frac{1}{3}\cdot h(2h+1)=-h $$ and we conclude thanks to Lemma \ref{lem:d''_bscc}. \end{proof} To prove the existence in Theorem~D, we define the monoid map $d:\mathcal{KC} \to \mathbb{Z}$ by \begin{equation}\label{eq:definition_extended_core} d:= 2d'+48 d''. \end{equation} According to Lemma \ref{lem:d''}, this map $d$ is $\mcg$-conjugacy invariant as well as $Y_3$-invariant. According to Lemma \ref{lem:d_d'_d''}, it extends Morita's map $d:\Johnson \to \mathbb{Z}$ through $\mathbf{c}$. 
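As a quick consistency check, one can evaluate $d$ on the Dehn twist along a bounding simple closed curve $\gamma$ of genus $h$: combining (\ref{eq:d'_bscc}) with Lemma \ref{lem:d''_bscc}, the definition (\ref{eq:definition_extended_core}) gives $$ d(T_\gamma) = 2d'(T_\gamma) + 48\, d''(T_\gamma) = 2h(2h+1) + 48\cdot \Big(-\frac{h}{8}\Big) = 4h^2 + 2h - 6h = 4h(h-1), $$ which is indeed the value of Morita's map recorded in (\ref{eq:core_bscc}).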
The invariant $d:\mathcal{KC} \to \mathbb{Z}$ can be written explicitly in terms of $\lambda_j$, $\alpha$ and $\tau_2$ using Lemma \ref{lem:formula_d''}: \begin{equation} \label{eq:formula_extended_core} \forall M \in \mathcal{KC}, \quad d(M) = - 24\lambda_j(M) - 12 \Phi_\Theta(\alpha(M)) -12 \operatorname{H}_\Theta(\tau_2(M)) -2 \overline{d'}(\tau_2(M)). \end{equation} Besides, it can be written explicitly in terms of $Z$ using Lemma \ref{lem:tau2_LMO}: \begin{equation} \label{eq:formula_extended_core_bis} \forall M \in \mathcal{KC}, \quad d(M) = -2 \overline{d'}\circ p_{2,0}\circ Z_2(M) + 48 p_{2,2}\circ Z_2(M). \end{equation} It remains to prove that $d(\mathcal{KC})$ is contained in $8\mathbb{Z}$. For this, we recall from \S \ref{subsec:description1} that we have an isomorphism $ \psi_{[2]}: \jacobi_{[2]}^{<,c}(H)\to \mathcal{KC}/Y_{3} $ and we shall actually compute $d\circ \psi_{[2]}(D)$ for each generator $D$ of $\jacobi^{<,c}_{[2]}(H)$. \begin{proposition}\label{prop:values_d} The monoid homomorphism $d:\mathcal{KC} \to \mathbb{Z}$ takes the following values on the generators of the group $\mathcal{KC}/Y_{3}$: \begin{eqnarray} \label{eq:line1} d\circ \psi_{[2]} \left(\hspace{-0.15cm} \figtotext{12}{12}{theta} \hspace{-0.15cm} \right) & = & 48, \\ \label{eq:line2} d\circ \psi_{[2]} \left(\phio{a}{b} \right) & = & 24\omega(a,b), \\ \label{eq:line3} \quad \quad \quad d\circ \psi_{[2]} \left(\hob{a}{b}{c}{d} \right) & = & 16\omega(a,b)\omega(c,d) - 16 \omega(a,c)\omega(b,d) - 8\omega(a,d)\omega(b,c),\\ \label{eq:line4} d\circ \psi_{[2]}(h,h')&=&12\omega(h,h')(\omega(h,h')-1),\\ \label{eq:line5} d\circ \psi_{[2]}(h)&=&0,\\ \label{eq:line6} d\circ \psi_{[2]}(1)&=&-24, \end{eqnarray} \end{proposition} \begin{proof} Since $ Z_2 \circ \psi_2 = \chi^{-1}$, we obtain that \begin{eqnarray*} Z_2\circ \psi_2 \left(\hspace{-0.15cm} \figtotext{12}{12}{theta} \hspace{-0.15cm} \right) & = & \hspace{-0.15cm} \figtotext{12}{12}{theta} \hspace{-0.15cm} \\ Z_2 \circ \psi_2 
\left(\phio{a}{b} \right) & = & \phin{a}{b} + \frac{1}{2}\omega(a,b)\hspace{-0.15cm} \figtotext{12}{12}{theta} \hspace{-0.15cm} \\ Z_2\circ \psi_2 \left(\hob{a}{b}{c}{d} \right) & = & \hn{b}{c}{d}{a} + \frac{1}{4}\left( \omega(a,b)\omega(c,d) - \omega(a,c)\omega(b,d) \right)\hspace{-0.15cm} \figtotext{12}{12}{theta} \hspace{-0.15cm}\\ & &+ \frac{1}{2}\left(\begin{array}{c} \omega(a,b) \phin{c}{d} - \omega(a,c) \phin{b}{d}\\ + \omega(c,d) \phin{a}{b} - \omega(b,d) \phin{a}{c} \end{array}\right). \end{eqnarray*} We deduce from (\ref{eq:formula_extended_core_bis}) the formulas (\ref{eq:line1}), (\ref{eq:line2}) and (\ref{eq:line3}). Next, since $d$ is additive, formula (\ref{eq:line4}) is derived from equations (\ref{eq:line2}) and (\ref{eq:line3}) using relation ($G_1$). Then (\ref{eq:line4}) and relation $(G_0)$ imply (\ref{eq:line5}). Finally, (\ref{eq:line1}) and relation $(G_3)$ give (\ref{eq:line6}). \end{proof} \subsection{Proof of the unicity in Theorem~D} We need some representation theory of the symplectic group $\operatorname{Sp}(H_\mathbb{Q})\simeq \operatorname{Sp}(2g;\mathbb{Q})$. In particular, we need the following facts. \begin{lemma}\label{lem:Z_Q} Let $V$ be a finite-dimensional rational $\operatorname{Sp}(2g;\mathbb{Q})$-module. \begin{itemize} \item[(i)] If $L$ is an abelian subgroup of $V$ stable by the action of $\operatorname{Sp}(2g;\mathbb{Z})$, then $L\otimes \mathbb{Q} \subset V$ is stable by the action of $\operatorname{Sp}(2g;\mathbb{Q})$. \item[(ii)] If $L$ is a lattice of $V$ stable by the action of $\operatorname{Sp}(2g;\mathbb{Z})$, then the action of $\operatorname{Sp}(2g;\mathbb{Q})$ on $V=L\otimes \mathbb{Q}$ is determined by the action of $\operatorname{Sp}(2g;\mathbb{Z})$ on $L$. 
\item[(iii)] If $L$ is a lattice of $V$ stable by the action of $\operatorname{Sp}(2g;\mathbb{Z})$ and if $f:L\to \mathbb{Z}$ is an $\operatorname{Sp}(2g;\mathbb{Z})$-invariant group homomorphism, then $f\otimes \mathbb{Q}:V \to \mathbb{Q}$ is $\operatorname{Sp}(2g;\mathbb{Q})$-invariant. \end{itemize} \end{lemma} \noindent These facts may belong to folklore. Statement (i) is proved by Asada and Nakamura in \cite[(2.2.8)]{AN}. Statements (ii), (iii) can be proved using the same kind of arguments. We shall assume in the sequel that $g\geq 3$. We denote $$ Y_3\Torelli:= \ker\left(\xymatrix{ \Torelli \ar[r]^-\mathbf{c} & \mathcal{IC} \ar@{->>}[r] & \mathcal{IC}/Y_3} \right) $$ which is a subgroup of $\Torelli$ sitting between $\Gamma_3\Torelli$ and $\Gamma_2 \Torelli$. \begin{lemma} \label{lem:Sp} The action of $\mcg$ by conjugation on $\mathcal{KC}$ (respectively on $\Johnson$) induces an action of $\operatorname{Sp}(H) \simeq \mcg/\Torelli$ on the abelian group $\mathcal{KC}/Y_3$ (respectively on $\Johnson/Y_3 \Torelli$), and this extends in a unique way to an action of $\operatorname{Sp}(H_\mathbb{Q})$ on the vector space $(\mathcal{KC}/Y_3)\otimes \mathbb{Q}$ (respectively on $(\Johnson/Y_3 \Torelli)\otimes \mathbb{Q}$). Under the assumption that $g\geq 3$, the mapping cylinder construction induces an isomorphism $$ \left( \frac{\Johnson}{Y_3\Torelli}\otimes \mathbb{Q}\right)^{\operatorname{Sp}(H_\mathbb{Q})} \mathop{\longrightarrow}_\simeq^\mathbf{c} \left( \frac{\mathcal{KC}}{Y_3}\otimes \mathbb{Q}\right)^{\operatorname{Sp}(H_\mathbb{Q})} $$ between the $\operatorname{Sp}(H_\mathbb{Q})$-invariant subspaces. \end{lemma} \begin{proof} Let $f\in \Torelli$ and $M\in \mathcal{KC}$. Since $\mathcal{KC}/Y_3$ is contained in the center of the group $\mathcal{IC}/Y_3$ by Lemma \ref{lem:Habiro_sec}, the cobordism $\mathbf{c}(f)\circ M \circ \mathbf{c}(f^{-1})$ is $Y_3$-equivalent to $M$. 
Therefore the action of $\mcg$ on $\mathcal{KC}/Y_3$ factorizes to $\mcg/\Torelli\simeq \operatorname{Sp}(H)$. Regarding $\Johnson/Y_3\Torelli$ as a subgroup of $\mathcal{KC}/Y_3$ via $\mathbf{c}$, we see that the same is true for the action of $\mcg$ on $\Johnson/Y_3\Torelli$. It follows from \cite{MM} that the quotient group $(\mathcal{KC}/Y_3)/(Y_2\mathcal{IC}/Y_3)\simeq \mathcal{KC}/Y_2$ is $2$-torsion, so that the inclusion $Y_2\mathcal{IC} \subset \mathcal{KC}$ induces an $\operatorname{Sp}(H)$-equivariant isomorphism $$ (Y_2\mathcal{IC}/Y_3)\otimes \mathbb{Q} \mathop{\longrightarrow}^\simeq (\mathcal{KC}/Y_3)\otimes \mathbb{Q}. $$ Since the action of $\operatorname{Sp}(H)$ on $Y_2\mathcal{IC}/Y_3$ extends to an action of $\operatorname{Sp}(H_\mathbb{Q})$ on $(Y_2\mathcal{IC}/Y_3)\otimes \mathbb{Q}$ \cite{HM_SJD}, the action of $\operatorname{Sp}(H)$ on $\mathcal{KC}/Y_3$ extends to an action of $\operatorname{Sp}(H_\mathbb{Q})$ on $(\mathcal{KC}/Y_3)\otimes \mathbb{Q}$ and this extension is unique by (ii) of Lemma \ref{lem:Z_Q}. The same is true for the action of $\operatorname{Sp}(H)$ on $\Johnson/Y_3\Torelli$ by (i) and (ii) of Lemma \ref{lem:Z_Q}. To prove the last statement, we use the commutative diagram of $\operatorname{Sp}(H_\mathbb{Q})$-modules $$ \xymatrix{ \frac{\Gamma_2\Torelli}{\Gamma_3\Torelli} \otimes \mathbb{Q} \ar@{->>}[r] \ar@{>->}[rd]_-\mathbf{c}& \frac{\Gamma_2\Torelli}{Y_3\Torelli} \otimes \mathbb{Q} \ar@{->}[d]^-\mathbf{c} \ar[r]^-\simeq & \frac{\Johnson}{Y_3\Torelli} \otimes \mathbb{Q} \ar@{->}[d]^-\mathbf{c}\\ & \frac{Y_2\mathcal{IC}}{Y_3}\otimes \mathbb{Q} \ar[r]^-\simeq& \frac{\mathcal{KC}}{Y_3}\otimes \mathbb{Q} } $$ where the horizontal maps are induced by inclusions and vertical maps are induced by the mapping cylinder construction. The injectivity of the oblique map is proved in \cite[Corollary 1.6]{HM_SJD} assuming that $g\geq 3$.
The bijectivity of $({\Gamma_2\Torelli}/{Y_3\Torelli})\otimes \mathbb{Q} \to ({\Johnson}/{Y_3\Torelli}) \otimes \mathbb{Q}$ follows from the fact that $\Johnson/\Gamma_2\Torelli$ is $2$-torsion \cite{Johnson_abelianization}. Passing to the $\operatorname{Sp}(H_\mathbb{Q})$-invariant subspaces, we obtain the following commutative diagram of vector spaces: $$ \xymatrix{ \left(\frac{\Gamma_2\Torelli}{\Gamma_3\Torelli} \otimes \mathbb{Q}\right)^{\operatorname{Sp}(H_\mathbb{Q})} \ar@{->}[d]_-\mathbf{c} \ar[r]^-\simeq & \left(\frac{\Johnson}{Y_3\Torelli} \otimes \mathbb{Q}\right)^{\operatorname{Sp}(H_\mathbb{Q})} \ar@{->}[d]^-\mathbf{c}\\ \left(\frac{Y_2\mathcal{IC}}{Y_3}\otimes \mathbb{Q}\right)^{\operatorname{Sp}(H_\mathbb{Q})} \ar[r]^-\simeq& \left(\frac{\mathcal{KC}}{Y_3}\otimes \mathbb{Q}\right)^{\operatorname{Sp}(H_\mathbb{Q})} } $$ The decompositions into irreducible $\operatorname{Sp}(H_\mathbb{Q})$-modules done in \cite[\S 5]{HM_SJD} show that the first vertical map is an isomorphism. The conclusion follows. \end{proof} We can now prove the unicity in Theorem~D assuming that $g\geq 3$. We deduce from Morita's formula (\ref{eq:decomposition_d}) that the group homomorphism $d:\Johnson \to \mathbb{Z}$ vanishes on $Y_3\Torelli$. Therefore we have a linear map $$ d\otimes\mathbb{Q}: ({\Johnson}/{Y_3\Torelli})\otimes \mathbb{Q} \longrightarrow \mathbb{Q}. $$ Since $d$ is $\mcg$-conjugacy invariant, we deduce from Lemma \ref{lem:Z_Q} (iii) that $d\otimes \mathbb{Q}$ is $\operatorname{Sp}(H_\mathbb{Q})$-invariant.
We have the commutative diagram $$ \xymatrix{ \operatorname{Hom}_{\operatorname{Sp}(H_\mathbb{Q})}\left(\frac{\mathcal{KC}}{Y_3}\otimes \mathbb{Q},\mathbb{Q}\right) \ar[r]^-\simeq \ar[d] & \operatorname{Hom}_\mathbb{Q}\left(\left(\frac{\mathcal{KC}}{Y_3}\otimes \mathbb{Q}\right)^{\operatorname{Sp}(H_\mathbb{Q})},\mathbb{Q}\right) \ar[d] \\ \operatorname{Hom}_{\operatorname{Sp}(H_\mathbb{Q})}\left(\frac{\Johnson}{Y_3\Torelli}\otimes \mathbb{Q},\mathbb{Q}\right) \ar[r]^-\simeq & \operatorname{Hom}_\mathbb{Q}\left(\left(\frac{\Johnson}{Y_3\Torelli}\otimes \mathbb{Q}\right)^{\operatorname{Sp}(H_\mathbb{Q})},\mathbb{Q}\right) } $$ where the horizontal maps are restrictions and are isomorphisms according to Schur's lemma; the vertical maps of that diagram are induced by $\mathbf{c}$. We deduce from Lemma \ref{lem:Sp} that $d\otimes \mathbb{Q}$ extends in a unique way through $\mathbf{c}$ to an $\operatorname{Sp}(H_\mathbb{Q})$-invariant linear map $(\mathcal{KC}/Y_3)\otimes \mathbb{Q} \to \mathbb{Q}$. According to Lemma \ref{lem:Z_Q} (iii), an $\operatorname{Sp}(H_\mathbb{Q})$-invariant linear map $(\mathcal{KC}/Y_3)\otimes \mathbb{Q} \to \mathbb{Q}$ is the same as an $\mcg$-conjugacy invariant and $Y_3$-invariant monoid map $\mathcal{KC}\to \mathbb{Q}$. This proves the unicity statement in Theorem~D (as well as the existence statement if one allows values in $\mathbb{Q}$ instead of $8\mathbb{Z}$). \subsection{A stable version of Theorem D} Whatever the genus $g\geq 0$ of $\Sigma$ is, formula (\ref{eq:formula_extended_core}) defines an $\mcg$-conjugacy invariant and $Y_3$-invariant monoid map $d:\mathcal{KC} \to 8\mathbb{Z}$ which extends Morita's map on the subgroup $\Johnson$. The invariants $\lambda_j$, $\alpha$ and $\tau_2$ being preserved by stabilization of the surface $\Sigma$, the homomorphism $d$ is invariant under stabilization. Thus we can summarize the results of this section in the following way. 
\begin{theorem} There is a unique way to define, for each compact connected oriented surface $\Sigma$ with one boundary component, a monoid homomorphism $d: \mathcal{KC}(\Sigma) \to \mathbb{Z}$ that has the following properties: \begin{itemize} \item $d$ is $Y_3$-invariant and $\mcg(\Sigma)$-conjugacy invariant; \item $d\circ \mathbf{c}:\Johnson(\Sigma) \to \mathbb{Z}$ coincides with Morita's core of the Casson invariant; \item $d$ is preserved under stabilization of the surface $\Sigma$ as shown in Figure \ref{fig:stabilization}: $$ \xymatrix{ \mathcal{KC}(\Sigma) \ar@{>->}[r]\ar[rd]_-d & \mathcal{KC}(\Sigma^s)\ar[d]^-d\\ & \mathbb{Z} } $$ \end{itemize} Moreover, this homomorphism $d$ takes values in $8\mathbb{Z}$ and is given by $$ \forall M \in \mathcal{KC}(\Sigma), \quad d(M) = - 24\lambda_j(M) - 12 \Phi_\Theta(\alpha(M)) -12 \operatorname{H}_\Theta(\tau_2(M)) -2 \overline{d'}(\tau_2(M)). $$ \end{theorem}
\section{Introduction}\label{introduction} The lattice regularization of geometries called Dynamical Triangulations (DT) provides us with a regularization of four-dimensional Euclidean quantum gravity within the realm of ordinary quantum field theory \cite{aj,am}. Presently we do not know if such a theory exists. Clearly, if the starting action is just the Einstein-Hilbert action, the resulting theory has to be non-perturbatively defined since an expansion of the Einstein-Hilbert action around a fixed background geometry leads to a non-renormalizable theory and since the continuum Euclidean Einstein-Hilbert action is unbounded from below. The asymptotic safety scenario of Weinberg discussed general conditions which such a non-perturbative field theory should satisfy, using the Wilsonian renormalization group (RG) framework \cite{weinberg}. The central idea was that there should exist a non-Gaussian fixed point which would define the UV limit of the theory. Evidence for such a fixed point has been found both using the $2+\varepsilon$ expansion \cite{kawai} and the so-called exact or functional renormalization group equation (FRG) \cite{FRG}. The so-called Regge version of the Einstein Hilbert action is a natural, geometric implementation of the action on triangulations. Using this action in the DT approach one has two bare (dimensionless) lattice coupling constants related to the gravitational coupling constant $G$ and the cosmological coupling constant $\Lambda$. In this coupling constant space one was looking for a phase transition point which could be a candidate for the proposed asymptotically safe fixed point. A fixed point was found, but the corresponding phase transition turned out to be of first order \cite{firstorder}. Usually, for critical systems on a lattice one can only associate continuum field theories to the fixed points if the transition is higher than first order. 
This result was disappointing, but in a larger coupling constant space one would expect to see transitions where one could take a continuum limit. One can clearly add higher order curvature terms to the Einstein action in such a way that the theory becomes renormalizable. It was shown long ago that adding $R^2$ terms to the action would make the gravity theory renormalizable because the propagator would fall off like $1/k^4$ and thus improve the UV behavior of the theory \cite{stelle}. The problem with such a realization of renormalizability of quantum gravity is that it is expected to correspond to a non-unitary theory when rotated back to Lorentzian signature, precisely because of the additional poles present in the propagator falling off like $1/k^4$. However, in the context of the RG approach in the Euclidean sector, with infinitely many coupling constants, there should exist a critical surface associated with such a theory. Refined perturbative treatments \cite{max} as well as the use of FRG methods \cite{roberto,frank} provide evidence for this by identifying a fixed point asymptotically free (i.e. Gaussian) in the coupling constants associated with the $R^2$ terms and asymptotically safe in $\Lambda$ and $G$. This fixed point seemingly differs from the ``purely'' asymptotically safe fixed point discussed above, where also the coupling constants associated with the $R^2$-terms are different from zero \cite{frank}. Since DT is a lattice regularization of Euclidean geometries it is natural to consider an enlarged coupling constant space involving higher curvature terms. Such terms would most likely be generated anyway if one could apply the Wilsonian RG techniques to the DT lattices. Similarly, being a lattice regularization, it has the potential to include the non-perturbative contributions alluded to above. It has already been attempted to explicitly include the higher curvature terms in the DT formalism \cite{highcurve}.
The Regge action on a $d$-dimensional triangulation is defined as the sum of the deficit angles around the $(d-2)$-dimensional subsimplices times the $(d-2)$-dimensional ``volumes'' of these subsimplices. This gives a beautiful geometric interpretation to the Einstein action in $d$-dimensional spacetime \cite{regge}. The DT formalism ``builds'' its $d$-dimensional triangulations from identical $d$-simplices where all links have the same length, $a$, the lattice spacing. For a given $(d-2)$-dimensional subsimplex $t_{d-2}$ let $o(t_{d-2})$ denote the {\it order} of $t_{d-2}$, i.e.\ the number of $d$-simplices to which $t_{d-2}$ belongs. The deficit angle of $t_{d-2}$ is \begin{equation}\label{1.1} \varepsilon(t_{d-2}) = 2\pi - o(t_{d-2}) \theta_d,~~~~\theta_d = \cos^{-1}(1/d). \end{equation} In two dimensions we have $\theta_2=\pi/3$ and there is no intrinsic curvature when we glue together 6 equilateral triangles. Unfortunately there is no equally beautiful geometric realization of higher curvature terms. The attempts to represent higher curvature terms naively as $\varepsilon(t_{d-2})^2$ in 4d suffered from the problem that, contrary to the situation in 2d, no flat spacetime can be built by gluing together the equilateral 4d building blocks used in DT. While this does not exclude the possibility that this type of spacetime could lead to sensible results when used in the path integral, the end result of adding an $\varepsilon(t_{d-2})^2$ term was as follows: for a small coupling constant one found the same phases as without the $\varepsilon(t_{d-2})^2$ term. For large coupling constants the system got stalled in weird configurations minimizing $\varepsilon(t_{d-2})^2$, but having nothing to do with flat space. Somewhat more complicated and less local ways to implement $R^2$ terms are needed in the DT formalism, but so far none that at the same time are useful for computer simulations have been found.
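The obstruction to flatness in 4d can be made quantitative with a few lines of arithmetic (an illustrative sketch added here, not part of the original discussion): in 2d the dihedral angle $\theta_2=\pi/3$ fits exactly six times into $2\pi$, whereas in 4d $2\pi/\theta_4 \approx 4.77$ is not an integer, so no integer order $o(t_2)$ makes the deficit angle \rf{1.1} vanish.

```python
import math

def deficit_angle(order, d):
    """Deficit angle (1.1) around a (d-2)-subsimplex of the given order,
    for equilateral d-simplices with dihedral angle theta_d = arccos(1/d)."""
    return 2 * math.pi - order * math.acos(1.0 / d)

# 2d: six equilateral triangles around a vertex close up flat.
assert abs(deficit_angle(6, 2)) < 1e-12

# 4d: 2*pi/theta_4 ~ 4.77 is not an integer, so every order leaves a
# nonzero deficit angle; the best one can do is |epsilon| ~ 0.31 at order 5.
smallest = min(abs(deficit_angle(o, 4)) for o in range(1, 20))
assert smallest > 0.3
```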
However, evidence for a potentially non-trivial phase structure of DT came from another source, namely by changing the measure term \cite{crinkled}. The starting point of DT is the conjecture that the continuum path integral \begin{equation}\label{1.3} Z = \int {\cal D} [g]\, \mbox{e}^{- S^{EH}[g]}, \end{equation} can be represented via a sum over simplicial manifolds built of equilateral four-simplices \begin{equation}\label{1.4} Z =\sum_{{\cal T}} \frac{1}{C({\cal T})}\, \mbox{e}^{- S^{R}[{\cal T}]} . \end{equation} The symmetry factor $C({\cal T})$ is the order of the automorphism group of a triangulation ${\cal T}$. The Regge version of the continuum Einstein-Hilbert action, \begin{equation}\label{1.4a} S^{EH}[g] = - \frac{1}{G} \int \mathrm{d}^4 x \sqrt{g}\, (R - 2 \Lambda), \end{equation} has a particularly simple realization in DT since all four-simplices are identical and equilateral: \begin{equation}\label{1.5} S^R[\mathcal{T}] = - \kappa_2 N_2 +\kappa_4 N_4, \end{equation} where $N_2$ is the number of triangles and $N_4$ the number of four-simplices. The bare coupling constants $\kappa_2,\ \kappa_4$ are related to the bare Newton's constant $G$ and the bare cosmological constant $\Lambda$, respectively. In the path integral \rf{1.4} each triangulation carries the same weight (except for the symmetry factor, which is one for almost all triangulations). However, even in the continuum it is somewhat unclear which measure ${\cal D} [g]$ one should choose for the geometries. In the early history of DT a number of different choices were suggested \cite{measure}, and in \cite{enzo} a 4d measure was proposed which contained a factor $\prod_{t=1}^{N_2} o_t^\beta$: \begin{equation}\label{1.6} \sum_{{\cal T}} \frac{1}{C({\cal T})} ~~\to~~ \sum_{{\cal T}}\frac{1}{C({\cal T})} \; \prod_{t=1}^{N_2} o_t^\beta, \end{equation} where $o_t$ is the order of triangle $t$.
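For orientation, the extra factor in \rf{1.6} can be folded into an effective action, $S_{\rm eff} = S^R - \beta \sum_t \ln o_t$, so that each triangulation is weighted by $\mbox{e}^{-S_{\rm eff}}$. The sketch below computes the corresponding log-weight; the triangle orders and coupling values are made-up numbers for illustration, not taken from any simulation.

```python
import math

def log_weight(n2, n4, triangle_orders, kappa2, kappa4, beta):
    """Logarithm of the Boltzmann weight of one triangulation,
    exp(-S^R) * prod_t o_t^beta, with S^R = -kappa2*N2 + kappa4*N4 as in (1.5)."""
    regge_action = -kappa2 * n2 + kappa4 * n4
    measure_term = beta * sum(math.log(o) for o in triangle_orders)
    return -regge_action + measure_term

# Toy example: three triangles of orders 3, 4 and 5 (hypothetical values).
w = log_weight(n2=3, n4=1, triangle_orders=[3, 4, 5],
               kappa2=1.0, kappa4=1.1, beta=-0.5)
```

Making $\beta$ more negative suppresses configurations with high-order triangles, which is exactly how the measure term deforms the ensemble.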
In 2d Euclidean quantum gravity, regularized by DT, one can add a similar term, only replacing triangles in \rf{1.6} with vertices. Both in 2d and 4d \rf{1.6} would then refer to $(d-2)$-dimensional subsimplices and via \rf{1.1} to higher curvature terms, although the identification is rather indirect and to a series of higher curvature terms. From a renormalization group point of view it should not be that important, since one is just looking for a new fixed point with different physics. It was eventually shown in \cite{kazakov} that the continuum limit of the 2d lattice theory was independent of any reasonable choice of $\beta$ in \rf{1.6}. The interpretation given in 2d was that higher curvature terms were irrelevant operators in a renormalization group framework (which is true from a naive power counting point of view). In 4d we do not have analytical results and it is possible that the choice of weight factor {\it is} important for a continuum limit\footnote{ The interesting paper \cite{dario} presents a model which has an effective measure term similar to \rf{1.6} and where it actually {\it is} possible to perform some analytic 4d calculations. Unfortunately it is not clear how closely related the model is to the DT models considered in this article. Nevertheless, in this model the measure term can change the phase structure.}, and that if this was the case, the choice \rf{1.6} could be viewed as some effective representation of higher curvature terms. The implementation of the higher curvature terms via \rf{1.6} is less direct than the naive (and failed) attempt to simply add $\varepsilon^2(t)$ from \rf{1.1}, as mentioned above. In \cite{crinkled} it was observed that one seemingly entered a phase dominated by a new kind of geometries, named the ``crinkled phase'', by choosing $\beta$ sufficiently negative. The fractal dimension (the Hausdorff dimension) of typical geometries was reported close to 4 and the spectral dimension around 1.7.
Potentially this new phase could reflect the presence of higher curvature terms and thus also, according to the FRG results \cite{frank}, a non-Gaussian asymptotically safe fixed point. Interestingly, the same phase was observed when coupling gauge fields to gravity in four dimensions \cite{crinkled,ckr,yukawa}. This was in contrast to the situation for a scalar field coupled to gravity, where little change was observed. However, the reported difference between scalar fields and gauge fields coupled to 4d gravity could be understood as a consequence of a different choice of discretized coupling of matter to the (piecewise linear) geometry. If the gauge fields were coupled in the same way as the scalar fields, the back reaction was as weak as that reported for scalar fields. The difference amounted to placing the gauge fields on the triangles of the 4d triangulation or placing them on the so-called dual triangles. It is possible to show that a transformation between the two setups leads to a weight factor of the form \rf{1.6}. This gave some arguments in favor of viewing the crinkled phase as a lattice artifact, since one would not think it should make a significant difference if one used the lattice or the dual lattice for the gauge fields \cite{aak}. However, it is fair to say that the situation was unsettled, with some people claiming that the crinkled phase represented continuum physics \cite{yukawa}. Recently, there has been a renewed interest in the crinkled phase after it was observed that the spectral dimension in the crinkled phase was scale dependent \cite{Laiho} and seemingly behaved more or less like the spectral dimension in so-called Causal Dynamical Triangulations (CDT) \cite{spectralcdt}. CDT is an attempt to formulate a theory of quantum gravity where the path integral includes only geometries which allow a time foliation (see \cite{physrep} for a review).
Such a foliation constraint can best be motivated starting out in spacetimes with Lorentzian signatures, which is how CDT was originally formulated. However, for the purpose of numerical simulations the time direction has been rotated such that the spacetimes studied on the computer have Euclidean signature. The result was a different phase structure compared to the one observed using DT; in particular, it includes a second order phase transition line where one might be able to define a continuum limit. This is in principle a desirable situation, and the results in \cite{Laiho} for the spectral dimension open up the possibility that the crinkled phase could be identified with the so-called ``phase C'' in the CDT phase diagram. A priori one cannot rule out such an identification\footnote{There are also other possible interpretations of the continuum limit of the CDT theory, in particular that it can be related to Ho\v{r}ava-Lifshitz gravity \cite{hl}. For a detailed discussion we refer to the review \cite{physrep}.}. The geometries which enter in the path integral in CDT after rotation to Euclidean signature are a subset of those used in DT and effectively this restriction could move the theory into the same universality class as the theories with higher curvature terms, i.e.\ (again relying on the FRG picture) into the universality class corresponding to the standard asymptotic safety scenario. This would have an interesting implication. One can show that the CDT theory is unitary (it has a reflection positive transfer matrix related to the lattice time foliation \cite{cdttransfer}) and in this way it would add arguments in favor of the putative asymptotic safety theory actually being unitary, a fact which is not obvious. In the following we investigate the effects of modifying the measure term in the way displayed in eq.\ \rf{1.6}.
\section{The numerical setup}\label{setup} Viewing the modification of the measure term as part of the action, our action now depends on three bare coupling constants $\kappa_2$, $\kappa_4$ and $\beta$. In our simulations $\kappa_4$ is not really a coupling constant since we keep $N_4$, the number of four-simplices, (almost) fixed. More precisely we work in a pseudo-canonical ensemble of manifolds with topology $S^4$, and use the partition function \begin{equation}\label{2.1} Z(\kappa_2, \kappa_4, \beta) = \sum_\mathcal{T} \frac{1}{C_\mathcal{T}} \cdot \prod_{t=1}^{N_2} o_t^\beta \cdot \mbox{e}^{-\left[-\kappa_2 N_2 + \kappa_4 N_4 + \varepsilon (N_4 - \bar{N}_4)^2 \right]}. \end{equation} The quadratic term proportional to $\varepsilon$ fixes the total volume around some prescribed value $\bar{N}_4$. To achieve this the bare cosmological constant has to be tuned to its critical value $\kappa_4 \approx \kappa_4^c$, the critical value being the value below which the partition function is divergent. We use Monte Carlo simulations to study expectation values of observables in the ensemble defined by the partition function \rf{2.1}. The set of triangulations of $S^4$ we use are the so-called combinatorial triangulations, where every $4$-simplex is uniquely defined by a set of $5$ distinct vertices and by demanding that two adjacent $4$-simplices share precisely one face (a three-dimensional subsimplex). This is in contrast to the degenerate triangulations, defined in \cite{Degenerate}, and used in the recent study of the crinkled phase \cite{Laiho}. It is believed that the models defined by combinatorial triangulations and degenerate triangulations belong to the same universality class, and using a different class of triangulations than used in \cite{Laiho} will give us a check of the robustness of the results obtained in \cite{Laiho} as well as in this study. 
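To make the role of the volume-fixing term in \rf{2.1} concrete, here is a minimal sketch of a Metropolis accept/reject step for that weight. The function name and the way the changed triangle orders are passed in are our own illustrative choices, not the actual update code used in the simulations.

```python
import math
import random

def metropolis_accept(dN2, dN4, N4, N4bar, old_orders, new_orders,
                      kappa2, kappa4, beta, eps, rng=random.random):
    """Accept/reject a proposed local move under the weight of eq. (2.1).

    dN2, dN4   : change in the number of triangles / four-simplices
    old_orders : orders o_t of the affected triangles before the move
    new_orders : orders of the affected (and newly created) triangles after
    """
    # Change in the action -kappa2*N2 + kappa4*N4 + eps*(N4 - N4bar)^2.
    dS = (-kappa2 * dN2 + kappa4 * dN4
          + eps * ((N4 + dN4 - N4bar) ** 2 - (N4 - N4bar) ** 2))
    # Change in the log of the measure factor prod_t o_t^beta.
    d_log_measure = beta * (sum(map(math.log, new_orders))
                            - sum(map(math.log, old_orders)))
    log_alpha = -dS + d_log_measure
    return log_alpha >= 0 or rng() < math.exp(log_alpha)
```

Note how a move that drives $N_4$ away from $\bar{N}_4$ is suppressed quadratically, which is how the pseudo-canonical ensemble keeps the total volume close to $\bar{N}_4$.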
In the Monte Carlo simulations we use the standard 5 Pachner moves to update the four-dimensional combinatorial triangulations. For $d$-dimensional combinatorial triangulations of fixed Euler number the $d+1$ Pachner moves are local changes of the triangulations which are ergodic \cite{varsted}. Thus we will be exploring the coupling constant space ($\kappa_2,\beta$). We will use Monte Carlo simulations to generate a number of independent configurations for each value of $\kappa_2$ and $\beta$ in a grid in the ($\kappa_2,\beta$)-plane with $\beta$ between $0$ and $-2 $ varied in steps of $\delta \beta=0.2$ and $\kappa_{2}$ between $0.5$ and $1.5$ varied in steps of $\delta \kappa_{2}=0.1$. Using these we will calculate the expectation values of observables ${\cal O}$ over these configurations: \begin{equation}\label{2.2} \langle {\cal O}\rangle_{conf} = \frac{1}{N_{conf}} \sum_{i=1}^{N_{conf}} {\cal O}_i, \end{equation} where $N_{conf}$ denotes the number of Monte Carlo generated independent configurations at a particular value of coupling constants and ${\cal O}_i$ denotes the value of the observable ${\cal O}$ calculated for the $i^{th}$ configuration, $i=1,\ldots,N_{conf}$. \section{The phase diagram}\label{phases} In order to determine the phase structure of the model we measured a number of ``observables'' which can be used to characterize the geometries in the different phases. Observables which have in the past been useful in distinguishing between the two phases observed for $\beta=0$ include the average number of vertices $\langle N_0 \rangle$ and the average number of triangles $\langle N_2 \rangle$, as well as their associated susceptibilities \begin{equation}\label{3.0}\chi( N_0 ) \equiv \frac{\langle N_0^2 \rangle - \langle N_0 \rangle^2}{ N_4}, ~~~~ \chi( N_2 ) \equiv \frac{\langle N_2^2 \rangle - \langle N_2 \rangle^2}{N_4}. \end{equation} Another observable which will be useful is the radius volume profile $V(r)$. 
We define and measure it as follows. Given two four-simplices, we define a path between them as a piecewise linear path through the centers of neighboring four-simplices, connecting the centers of the two four-simplices. The (graph) {\it geodesic distance} between the two four-simplices is defined as the smallest number of steps in the set of paths connecting them. For a given configuration and an initial simplex $i_0$, the number of four-simplices at a geodesic distance $r$ from $i_0$ is denoted as $V(r, i_0)$. The average over configurations and initial points is then given by \begin{equation}\label{3.1} V(r) \equiv \langle \frac{1}{N_4} \sum_{i_0} V(r, i_0) \rangle_{conf}. \end{equation} The average radius is then defined as \begin{equation}\label{3.2} \langle r \rangle \equiv \frac{1}{N_4} \sum_r r \cdot V(r). \end{equation} We also look for the presence of so-called \emph{baby universes} separated by minimal necks. A minimal neck is a set of five tetrahedra, connected to each other, and forming the boundary of a $4$-simplex which is not present in the triangulation. Cutting the triangulation open along the five tetrahedra will separate the triangulation into two disconnected parts, each with a boundary consisting of the five tetrahedra, the minimal boundary possible for the class of triangulations we consider. The analysis of baby universe distributions has been very useful as a tool to distinguish various phases of different geometries in 4d simplicial quantum gravity \cite{4dbaby}, as well as in the studies of 2d quantum gravity \cite{2dbaby}.
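Concretely, the profile $V(r,i_0)$ entering \rf{3.1} and the average radius \rf{3.2} can be computed by breadth-first search on the dual graph, in which each four-simplex is a node linked to its neighbors. The sketch below is our own illustration (an adjacency-list `adj`, with the starting simplex counted at $r=0$), not the code used for the measurements.

```python
from collections import deque

def radial_profile(adj, i0):
    """Number of four-simplices at each graph-geodesic distance r from i0,
    via breadth-first search on the dual (neighbor) graph `adj`."""
    dist = {i0: 0}
    queue = deque([i0])
    profile = [1]  # V(0, i0) = 1: the starting simplex itself
    while queue:
        i = queue.popleft()
        for j in adj[i]:
            if j not in dist:
                dist[j] = dist[i] + 1
                if dist[j] == len(profile):
                    profile.append(0)
                profile[dist[j]] += 1
                queue.append(j)
    return profile  # profile[r] = V(r, i0)

def average_radius(profile):
    """<r> = (1/N4) * sum_r r * V(r), as in Eq. (3.2), for one start point."""
    n4 = sum(profile)
    return sum(r * v for r, v in enumerate(profile)) / n4
```

Averaging `radial_profile` over starting simplices and over configurations then gives $V(r)$ of \rf{3.1}.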
\subsection{Grid and phase diagram} \begin{figure} \begin{center} \includegraphics[width=0.49\textwidth]{PhaseDiagSmallVarN0.pdf} \includegraphics[width=0.49\textwidth]{PhaseDiagSmallAvr.pdf} \end{center} \caption{Density plots of the susceptibility $\chi(N_0)$ (left) and the average radius (right) in the $(\kappa_2,\beta)$ plane for $\langle N_4 \rangle = 160\,000$.} \label{fig:grid} \end{figure} In the case without a non-trivial measure term, i.e.\ when $\beta = 0$, there exist only two phases, namely the \emph{crumpled} phase and the \emph{branched polymers} phase \cite{aj,am,aj1,ckr1}. They are separated by a first order transition \cite{firstorder}, as already mentioned, which occurs at $\kappa_2 \approx 1.29$. At this point, we observe a peak in both susceptibilities $\chi(N_0)$ and $\chi(N_2)$, as well as a jump in $\langle r \rangle$. There is also an abrupt change in the baby universe structure as depicted in Fig.\ \ref{fig:tree}. The left graph in Fig.\ \ref{fig:tree} shows the baby universe structure for a typical configuration in the crumpled phase. One has a huge ``parent'' universe decorated with almost minimal baby universes (which are really too small to deserve being called (baby)-universes). The situation is quite the opposite in the branched polymer phase, as shown on the right graph in Fig.\ \ref{fig:tree}. In this phase one has a genuine fractal structure of baby universes of all sizes. From a continuum point of view the problem with this phase is that the spacetime {\it is} too fractal, and spacetime itself, not only the baby universe structure, seems to be described as a 2d fractal tree\footnote{The only exception might be very close to the transition point where arguments have been given in favor of a different interpretation of the fractal structure \cite{js}.}. The additional coupling constant $\beta$ may introduce new phase(s).
We have extensively investigated a grid of points in the $(\kappa_2,\beta)$ plane, including the transition point $\beta = 0, \kappa_2 \approx 1.29$. Plots of the susceptibility $\chi(N_0)$ (left) and the average radius (right) for the grid points are shown in Fig. \ref{fig:grid} ($\kappa_2$ - horizontal axis, $\beta$ - vertical axis). For negative $\beta$ the maximum of variance $\chi(N_0)$ (blue line) and a jump in $\langle r \rangle$ (red line) do not coincide any more. It is observed that the branched polymer phase corresponds to large values of $\langle r \rangle$ and a {\it jump} to smaller values of the expectation value is very clear when one leaves the branched polymer phase. In this sense the branched polymer phase can be clearly distinguished from other phases by the red curve in Fig.\ \ref{fig:grid}. The (not very pronounced) peak in the susceptibility seems not to be a signal of a phase transition, as we will discuss later. \begin{figure} \begin{center} \includegraphics[width=0.32\textwidth]{Tree-CRU.pdf} \includegraphics[width=0.32\textwidth]{Tree-CRI.pdf} \includegraphics[width=0.32\textwidth]{Tree-BPM.pdf} \end{center} \caption{Minimal baby universe graph of a typical configuration in, respectively from left to right, the crumpled phase, the crinkled region and the branched polymer phase.} \label{fig:tree} \end{figure} We also observe a region in coupling constant space where the properties of typical configurations are in between those of the crumpled phase and the branched polymer phase. It is natural to try to classify configurations in this region as being in the hypothetical new crinkled phase. This region is located around the point $\kappa_2 = 2.0, \beta = -2.0$ \footnote{For such values of the coupling constants the acceptance rate in the Monte Carlo simulations is relatively low, and simulations take a painfully long time.}. The minimal baby universe structure is shown in Fig.\ \ref{fig:tree}. 
Let us explain how the graphs shown there are constructed. We look for minimal necks. As already remarked a minimal neck consists of the five tetrahedra forming the boundary of a four-simplex, but such that the four-simplex is not part of the triangulation. We can cut the triangulation into two disconnected parts along the five tetrahedra. In this way we obtain two triangulations, each with a minimal boundary (the five tetrahedra, now belonging to both triangulations). For each triangulation we now repeat this process of finding baby universes and in this way we end up with a number of disconnected universes with boundaries. We represent each universe with a dot and we connect the dots by a link if their boundaries had originally shared at least one tetrahedron. In this way minimal necks naturally equip triangulations with graph structures like the ones shown in Fig.\ \ref{fig:tree}. In the crumpled and branched polymer phases it happens very seldom that two minimal necks are neighbors. In these phases the graphs are thus tree graphs, bearing in mind that the topology of spacetime is that of $S^4$. The situation is different in the crinkled region. In this region we observe triangles of high order. We observe that a number of the tetrahedra sharing such a triangle can belong to two minimal necks. In this way the graph can contain a (long) loop ``twisted'' around a high order triangle. Such loops spoil the tree structure seen in the crumpled and branched polymer phases. For configurations belonging to the crumpled or the branched polymer phases we never observe triangles of high order, while in the crinkled region the maximal order of triangles seems to behave like $\langle \mathrm{Max}\, o_t \rangle \propto N_4^{1/6}$. At first glance one would expect that the measure term, \begin{equation}\label{3.3} \prod_{t=1}^{N_2} o_t^\beta = e^{\beta \cdot \sum_t \log o_t} \end{equation} would suppress high order triangles for negative $\beta$.
What really happens is that the value of the observable conjugate to $\beta$, i.e.\ $\langle \sum_t \log o_t \rangle$, indeed decreases with decreasing $\beta$. However, the distribution of triangle orders $P(o_t)$ has a long tail and this makes it possible that even with a decreasing $\langle \sum_t \log o_t \rangle$ we can have an increasing $\langle \mathrm{Av}\, o_t \rangle$ and $\langle \mathrm{Max}\, o_t \rangle$, which is what we observe. When we move from the branched polymer phase to the crinkled phase the baby universe structure changes relatively smoothly. However, as mentioned above, the transition between the two phases is seen clearly as a jump in $\langle r \rangle$. At the same time one also observes a (small) peak in $\chi(\log o_t)$ (see Fig.\ \ref{fig:pathlogot}). We also measured points outside of the grid region - in a less systematic way - and the results agree with the picture presented above. \vspace{2ex} \noindent Below we summarize characteristics for typical configurations from the branched polymer phase, the crumpled phase and the hypothetical crinkled region. \vspace{2ex} {\noindent \bf The branched polymers phase:} \begin{itemize} \itemsep=0.5mm \item Elongated geometry, $\langle r \rangle \propto N_4^{1 / 2}$ \item Dominated by minimal necks separating baby universes \item Probability of baby universe of size $V$ is $P(V) \propto V^{\gamma-2} (N_4 - V)^{\gamma-2},$ where $\gamma = \frac{1}{2}$ is the string susceptibility exponent. \item Hausdorff dimension $d_h = 2$, spectral dimension $d_s = 4 / 3$ \item Tree-like structure (cf. Fig. \ref{fig:tree}) \end{itemize} \vspace{2ex} {\noindent \bf The crumpled phase:} \begin{itemize} \itemsep=0.5mm \item Collapsed geometry, $\langle r \rangle$ grows slower than any $N_4^\alpha$, $\alpha >0$. \item Two singular vertices of order $o_v \propto N_4$ connected by a singular link of order $o_l \propto N_4^{2/3}$. \item No baby universes beyond the size of a few four-simplices.
Thus no susceptibility exponent $\gamma$ (formally $\gamma= - \infty$). \item Hausdorff dimension $d_h= \infty$, spectral dimension $d_s$ infinite or at least large. \end{itemize} \vspace{2ex} {\noindent \bf The crinkled region:} \begin{itemize} \itemsep=0.5mm \item The properties interpolate between crumpled and branched polymer regions for finite volume, but seem in most cases to approach those of the crumpled phase with increasing volume. While $\langle r \rangle$ is larger than in the crumpled phase it still grows very slowly with $N_4$. \item One observes triangles of high order, proportional to $N_4^{1/6}$, contrary to the situation in the crumpled and branched polymer regions. \item Many baby universes, but no {\it large} baby universes and thus no finite string susceptibility $\gamma$ (formally $\gamma =-\infty$). \item The baby universes define a tree-like structure, but this structure contains loops related to the triangles of high order (see Fig.\ \ref{fig:tree}). \item The Hausdorff dimension $d_h$ is large (most likely infinite) and the spectral dimension $d_s$ also seems large (growing with volume, as far as we can check). \end{itemize} \subsection{The path in the $\boldsymbol{(\beta,\kappa_2)}$ plane} In order to determine if there exists a new \emph{crinkled} phase we need to perform simulations for various total volumes and check the scaling of the observables. Because this demands vast CPU resources, we follow the one-dimensional path shown in Fig.\ \ref{fig:phasediagram} instead of using a full grid. \begin{figure}[h] \begin{center} \includegraphics[width=0.8\textwidth]{path-lab.pdf} \end{center} \caption{A tentative phase diagram and a path (color points) from crumpled phase - through crinkled region - to branched polymer phase.
The thick gray line denotes the phase transition between branched polymers and other phases, based on the grid measurements.} \label{fig:phasediagram} \end{figure} We performed measurements for three values of the total volume $N_4 = 40\mathrm{k}$, $80\mathrm{k}$ and $160\mathrm{k}$. The path starts at a point in the crumpled phase $(\kappa_2 = 0.5, \beta = 0.0)$ and continuously leads through the crinkled region $(\kappa_2 = 2.0, \beta = -2.0)$ to stop in the branched polymer phase $(\kappa_2 = 2.0, \beta = -1.0)$. If there is a phase transition between a crumpled and a crinkled phase, the path will have to cross it. The path consists of three segments marked with different colors to simplify comparison of plots: a \textcolor{red}{red} vertical segment I at $\kappa_2 = 0.5$, a \textcolor{green}{green} horizontal segment II at $\beta = -2.0,$ and a \textcolor{blue}{blue} vertical segment III at $\kappa_2 = 2.0.$ We now describe the behavior of the various observables when we move along this path. \subsection{$\boldsymbol{N_0}$ and $\boldsymbol{N_2}$ observables} The basic observables, the scaled average number of vertices $\langle N_0 \rangle / N_4$ and triangles $\langle N_2 \rangle / N_4$ are shown in Fig. \ref{fig:pathn}. \begin{figure}[h] \begin{center} \includegraphics[width=0.49\textwidth]{path-n0.pdf} \includegraphics[width=0.49\textwidth]{path-n2.pdf} \end{center} \caption{Plot of $\langle N_0 \rangle / N_4$ (left) and $\langle N_2 \rangle / N_4$ (right) for points along the path. Successive points of the path are on the $x$-axis, the colors of the $x$-axis correspond to the colors of the path segments.} \label{fig:pathn} \end{figure} The successive points on the path are presented on the $x$-axis and we have indicated the separation of the line segments I, II and III by vertical lines. We do not observe any jump of $\langle N_0 \rangle$ or $\langle N_2 \rangle$ on the path between the crumpled phase and crinkled region.
There is also no jump between the branched polymer phase and crinkled region, in contrast to what happens at $\beta = 0$ when one moves from the crumpled phase to the branched polymer phase. However, the scaling with $N_4$ changes exactly at the transition between the crinkled and the branched polymer phase. Inside the branched polymer phase $\langle N_0 \rangle \propto N_4$, while this scaling does not hold outside. When we are outside the branched polymer phase the curves corresponding to different spacetime volumes $N_4$ no longer coincide, as can be seen most clearly on the left side of Fig.\ \ref{fig:pathn}. Fig. \ref{fig:pathchi} shows the measured susceptibilities $\chi( N_0 )$ and $\chi( N_2 )$. \begin{figure}[h] \begin{center} \includegraphics[width=0.49\textwidth]{path-varn0.pdf} \includegraphics[width=0.49\textwidth]{path-varn2.pdf} \end{center} \caption{Plot of variances $\chi( N_0 ) \equiv ( \langle N_0^2 \rangle - \langle N_0 \rangle^2) / N_4 $ (left) and $\chi( N_2 )$ (right) for points along the path.} \label{fig:pathchi} \end{figure} Following the path, there is a peak in the susceptibility, located in the red segment. It can also be seen on the grid plot (left plot of Fig. \ref{fig:grid}). However, the peak decreases with the total volume $N_4$ and can thus not be viewed as signaling a first or second order transition between the crumpled phase and a hypothetical crinkled phase. In addition there is a small peak - not well visible in Fig. \ref{fig:pathchi} - at the border between the branched polymer phase and the crinkled region, a remnant of the pronounced peak at $\beta = 0$. By itself it would be difficult to claim that this little peak signals a phase transition between the branched polymer phase and the crinkled region. However, as we will show below, there are other observables which behave discontinuously precisely at that point. \subsection{Triangle order $\boldsymbol{o_t}$} Fig.
\ref{fig:pathlogot} presents plots of the average (left) and the variance (right) of $\log o_t$ for different total volumes $N_4$. Because $\langle \log o_t \rangle$ is conjugate to $\beta$, it increases when $\beta$ increases (red and blue segments). Like $\chi( N_0 )$, the variance $\chi(\log o_t)= \langle(\log o_t)^2 \rangle - \langle \log o_t \rangle^2 $ has its maximum in the red segment, but it again decreases with the total volume, and thus does not signal a first or second order transition between the crumpled phase and a possible crinkled phase. There is finally a (small) peak of the variance at the transition to the branched polymer phase, again as for $\chi(N_0)$. \begin{figure}[h] \begin{center} \includegraphics[width=0.49\textwidth]{path-logot.pdf} \includegraphics[width=0.49\textwidth]{path-varlogot.pdf} \end{center} \caption{Left figure: plot of the average $\langle \log o_t \rangle$ for points along the path ($o_t$ is the order of triangle $t$). Right figure: plot of the variance $\chi(\log o_t) \equiv \langle (\log o_t)^2 \rangle - \langle \log o_t \rangle^2$ for points along the path.} \label{fig:pathlogot} \end{figure} \subsection{$\boldsymbol{\langle r \rangle}$ and size of baby universes} In the branched polymer phase, the Hausdorff dimension $d_h = 2$ and the average radius scales as $\langle r \rangle \propto N_4^{1/2}$ \cite{aj1}. As shown in Fig.\ \ref{fig:grid} and Fig.\ \ref{fig:pathavr}, in this phase $\langle r \rangle$ is relatively large. The jump of $\langle r \rangle$ at the boundary of the branched polymer phase is a clear signal of a phase transition. Fig. \ref{fig:pathavr} shows that the jump of $\langle r \rangle$ becomes sharper as the total volume $N_4$ increases. There is no sign of any transition between the crumpled phase and a possible crinkled phase. The structure of baby universes allows us to extract further information about the geometry of a typical configuration.
Following the path from the crumpled phase to the crinkled region, we observe the baby universe graphs dissolve gradually, starting out as one huge ``parent-universe'' decorated with almost minimal baby universes (left graph of Fig. \ref{fig:tree}), then developing into a connected structure without a distinct parent-universe, but with many loops (middle graph of Fig. \ref{fig:tree}), these loops being associated with triangles of high order. Although the baby universe structures are very different in the crumpled and crinkled regions, we do not observe any abrupt change. When approaching the branched polymer phase, the loops - and high order triangles - disappear, and a tree-like fractal structure emerges (right graph of Fig. \ref{fig:tree}). \begin{figure}[h] \begin{center} \includegraphics[width=0.49\textwidth]{path-avr.pdf} \includegraphics[width=0.49\textwidth]{path-maxmb.pdf} \end{center} \caption{Left figure: plot of $\langle r \rangle / N_4^{1/2}$ for points along the path. Right figure: average size of the largest baby universe for points along the path.} \label{fig:pathavr} \end{figure} Each minimal neck splits a triangulation into two parts. The smaller part is what we have denoted a baby universe. In the branched polymer phase almost surely a minimal neck exists which splits a configuration into two parts of nearly equal size. Thus, the average size of the largest baby universe is very large and close to half of the total volume. However, the situation is very different for typical configurations in the crumpled and in the crinkled regions. Fig. \ref{fig:pathavr} (right) shows the average size of the largest baby universe for successive points of the path. This is perhaps the clearest signal of a first order transition. \vspace{24pt} \subsection{The Hausdorff dimension} The Hausdorff dimension reflects certain fractal structures of spacetime.
It has been studied intensively in two-dimensional quantum gravity where one can compare numerical and analytical results, and it has been measured in the numerical studies of higher dimensional quantum gravity already referred to above. It has a natural definition on geometries defined by discrete triangulations and in this sense it is an ideal observable to use in the present setup. Let us start with an arbitrary four-simplex in our triangulation. The neighboring four-simplices are said to have distance one to our chosen four-simplex. Continuing this way we can define the spherical shell at distance $r$ from our four-simplex (note that the spherical shell so defined need not be connected). The radial volume, i.e.\ the number of four-simplices in the spherical shell at distance $r$, is denoted $V(r)$, as mentioned earlier. We define $d_h$, the Hausdorff dimension, via the (assumed) power-like behavior of $V(r)$: \begin{equation}\label{jh1} V(r) \propto r^{d_h-1},~~~1\ll r\ll N_4^{1/d_h}. \end{equation} We only expect this relation to be true as an ensemble average, i.e.\ if we average over many different geometries with the appropriate weight coming from the action. Further, we usually average over the starting four-simplices. For a fixed $N_4$ we have corrections to \rf{jh1} and it is often assumed that one can write \begin{equation}\label{jh2} \langle V(r) \rangle_{N_4} \sim N_4^{1-1/d_h} v(x),~~~x= \frac{r}{N_4^{1 / d_h}}, \end{equation} where \begin{equation}\label{jh3} v(x) = x^{d_h-1} F(x),~~~~F(0) =1. \end{equation} Formulas \rf{jh2} and \rf{jh3} have the form of finite size scaling relations and are convenient to use when trying to determine $d_h$. Note that a consequence of the assumed scaling is that \begin{equation}\label{jh4} \langle r \rangle_{N_4} \propto N_4^{1/d_h}. \end{equation} Let us describe the results of the measurements of the Hausdorff dimension $d_h$.
Everywhere in the branched polymer phase we find nice agreement with the scaling assumptions \rf{jh2} and \rf{jh3}, and the data are consistent with $d_h=2$, the result for branched polymers. This is in agreement with old results obtained along the line $\beta =0$ in the branched polymer phase. In Fig.\ \ref{BP} we have shown the result of such a finite size scaling for the choice $d_h=2$. One can refine the analysis and determine $d_h$ with reasonable accuracy to be two, but since this is not too important for the discussion we skip the details. In the crumpled and crinkled regions of the phase diagram the scaling relations \rf{jh2} and \rf{jh3} are not well satisfied and cannot be used to determine a $d_h$ with any precision. This is in agreement with the old observations along the $\beta=0$ part of the crumpled region, where it was judged that the Hausdorff dimension was very large since the configurations were centered around two neighboring vertices of order $N_4$ and the linear extension hardly changed with $N_4$. Let us follow the path on Fig.\ \ref{fig:phasediagram} from the crumpled phase, starting at $\beta=0$ and moving towards the crinkled region. As already emphasized there is no observed phase transition between the crumpled region and the crinkled region. This is also the case when it comes to the Hausdorff dimension. As mentioned, it starts out large at $\beta = 0$. Moving into the crinkled phase the structure of the two singular neighboring vertices is resolved and the extensions of typical configurations grow. Although \rf{jh2} and \rf{jh3} are not well satisfied we have found another way to estimate $d_h$. Surprisingly, the average radial profile is almost symmetric with respect to the reflection $V(r) \rightarrow V(R - r)$. Thus, before performing the average over configurations one can \emph{center} the volume profiles using the following procedure.
We find \emph{the center of mass} or the average radius of the volume profile $V(r)$, defined as \begin{equation}\label{jh5} r_{av} = \frac{1}{N_4} \sum_r r \cdot V(r), \end{equation} and redefine the radius coordinate $r \rightarrow r - r_{av}$ so that the \emph{center of mass} is located at $r = 0$. Afterwards, we perform the average over configurations and find the value of $d_h$ for which the scaled profile $v_{cm}(x)$ becomes volume independent. The centered radius profiles $V_{cm}(r)$ and the corresponding scaled and centered radius volume profiles $v_{cm}(x)$ are shown in Fig.\ \ref{fig:hausdorff} for $N_4 = 40\mathrm{k}, 80\mathrm{k}, 160\mathrm{k}$ for a choice of coupling constants in the crinkled region. Although the configurations in the crinkled region are not as strongly collapsed as in the crumpled region, $d_h$ still comes out very high ($d_h \approx 21$). Such large values of $d_h$ may indicate that in the infinite volume limit the Hausdorff dimension is infinite. To estimate $d_h$ more precisely one would clearly need larger values of $N_4$. However, the result clearly differs from the $d_h$ in the branched polymer phase and is much closer to the results obtained in the crumpled region.
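The centering-and-collapse procedure just described can be sketched as follows (our own illustrative code): shift each profile so that its center of mass \rf{jh5} sits at $r=0$, then rescale according to the finite size scaling ansatz \rf{jh2} and \rf{jh3}; at the right $d_h$ the curves for different $N_4$ should fall on top of each other.

```python
def centered_scaled_profile(profile, d_h):
    """Center a radial volume profile V(r) at its center of mass r_av
    (Eq. jh5) and rescale according to the finite-size scaling ansatz:
        x = (r - r_av) / N4^(1/d_h),   v(x) = V(r) / N4^(1 - 1/d_h).
    Returns (x, v) pairs; profiles measured at different N4 collapse
    onto one curve when d_h is chosen correctly."""
    n4 = sum(profile)
    r_av = sum(r * v for r, v in enumerate(profile)) / n4
    return [((r - r_av) / n4 ** (1.0 / d_h),
             v / n4 ** (1.0 - 1.0 / d_h))
            for r, v in enumerate(profile)]
```

In practice one scans over trial values of $d_h$ and picks the one that makes the $N_4 = 40\mathrm{k}$, $80\mathrm{k}$ and $160\mathrm{k}$ curves coincide, as in Fig.\ \ref{fig:hausdorff}.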
\begin{figure} \begin{center} \includegraphics[width=0.49\textwidth]{avvr-BPM.pdf} \includegraphics[width=0.49\textwidth]{avvr-BPM-scaled.pdf} \end{center} \caption{$V(r)$ and $v(x)$ in the branched polymer phase for $N_4= 40\mathrm{k}$, $80\mathrm{k}$ and $160\mathrm{k}$.} \label{BP} \end{figure} \begin{figure} \begin{center} \includegraphics[width=0.49\textwidth]{avvr-CRI-centered.pdf} \includegraphics[width=0.49\textwidth]{avvr-CRI-centered-scaled.pdf} \end{center} \caption{$V_{cm}(r)$ and $v_{cm}(x)$ in the crinkled region for $N_4= 40\mathrm{k}$, $80\mathrm{k}$ and $160\mathrm{k}$.} \label{fig:hausdorff} \end{figure} \subsection{The spectral dimension} \begin{figure} \begin{center} \includegraphics[width=0.49\textwidth]{ds-CRU.pdf} \includegraphics[width=0.49\textwidth]{ds-BPM.pdf}\\ \includegraphics[width=0.80\textwidth]{ds-CRI.pdf}\\ \end{center} \caption{Spectral dimension $d_s$ as a function of diffusion time $\sigma$. Left: crumpled phase ($\kappa_2 = 0.5, \beta =0.0$). Right: branched polymer phase ($\kappa_2=2.0, \beta =-1.0$). Bottom: crinkled region ($\kappa_2=2.0, \beta =-2.0$).} \label{fig:spectral} \end{figure} The work reported in this article was triggered by the interesting measurements of the spectral dimension reported in \cite{Laiho}. Let us turn to the measurement of the spectral dimension for our ensemble of quantum geometries. It can be extracted by studying a diffusion process on the given ensemble of geometries. It shares with the Hausdorff dimension the nice property that it can be defined on piecewise linear geometries in a simple way. We will study the diffusion of a particle, performing a random walk between (the centers of) neighboring four-simplices. Denote by $\rho(i, i_0; \sigma)$ the probability that a particle starting at simplex $i_0$ is found at simplex $i$ after the fictitious (discrete) diffusion time $\sigma$.
$\rho(i, i_0; \sigma)$ satisfies the following discrete diffusion equation: \begin{equation}\label{jd1} \rho(i, i_0; \sigma + 1) = \frac{1}{5} \sum_{j \leftrightarrow i} \rho(j, i_0; \sigma), \quad \quad \rho(i, i_0; 0) = \delta_{i\,i_0}, \end{equation} where the sum is evaluated over all simplices $j$ adjacent to $i$. Eq.\ \rf{jd1} expresses that the particle performs a random walk, jumping between centers of neighboring four-simplices. The \emph{average return probability}, \begin{equation}\label{jd2} P(\sigma) = \left\langle \left\langle \rho(i_0, i_0; \sigma) \right\rangle_{i_0} \right\rangle_{conf}, \end{equation} describes the probability of finding a particle at the initial point after diffusion time $\sigma$. The inner average is performed over initial simplices $i_0$. The outer average is performed over configurations. Let us define the spectral dimension $d_s(\sigma)$ as \begin{equation}\label{jd3} d_s(\sigma) \equiv -2 \frac{\mathrm{d} \log P(\sigma)}{\mathrm{d} \log \sigma}. \end{equation} For diffusion on $R^d$ the spectral dimension is equal to $d$ and independent of (the continuous) diffusion time $\sigma$. If we consider a smooth compact manifold, $d_s$ will be a function of $\sigma$ which in the limit where $\sigma \to 0$ is equal to the topological dimension of the manifold and which in the limit where $\sigma \to \infty$ goes to zero. For diffusion on piecewise linear manifolds as defined here, the short time diffusion reflects the discretization used. Typically one can obtain quite different results for even and odd discretized times if one uses the simple implementation \rf{jd1} for the diffusion. However, usually after some diffusion time has passed one has $d_s(\sigma_{odd}) \approx d_s(\sigma_{even})$ and for $\sigma$ not too large there is a plateau independent of $\sigma$ which we can then identify with {\it the} spectral dimension $d_s$. After that, for a finite $N_4$, the spectral dimension will decrease slowly to zero.
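The diffusion process \rf{jd1} and the estimator \rf{jd3} can be sketched as follows. This is our own illustrative code on a generic adjacency-list graph, not the production code; for the four-dimensional triangulations every simplex has exactly five neighbors, which gives the factor $1/5$ in \rf{jd1}, and the logarithmic derivative is approximated by a symmetric finite difference over $\sigma \pm 2$ to sidestep the even/odd oscillations mentioned above.

```python
import math
from collections import defaultdict

def return_probability(adj, i0, sigma_max):
    """Evolve the discrete diffusion equation (jd1), starting from a delta
    function at i0, and record rho(i0, i0; sigma) for sigma = 0..sigma_max."""
    rho = {i0: 1.0}
    returns = [1.0]
    for _ in range(sigma_max):
        new = defaultdict(float)
        for i, p in rho.items():
            share = p / len(adj[i])  # the 1/5 of Eq. (jd1) on a triangulation
            for j in adj[i]:
                new[j] += share
        rho = dict(new)
        returns.append(rho.get(i0, 0.0))
    return returns

def spectral_dimension(returns, sigma):
    """d_s(sigma) = -2 dlog P / dlog sigma (Eq. jd3), estimated by a
    symmetric difference between sigma - 2 and sigma + 2."""
    return (-2.0 * (math.log(returns[sigma + 2]) - math.log(returns[sigma - 2]))
            / (math.log(sigma + 2) - math.log(sigma - 2)))
```

On a long cycle graph, which a short-time walker cannot distinguish from a line, this estimator gives $d_s \approx 1$, as it should; averaging the return probability over starting simplices and configurations then yields curves like those in Fig.\ \ref{fig:spectral}.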
In Fig.\ \ref{fig:spectral} we have shown the spectral dimension as a function of diffusion time $\sigma$ in the crumpled, crinkled and branched polymer regions. The values of $N_4$ used are $40\mathrm{k}, 80\mathrm{k}, 160\mathrm{k}$. For $\sigma < 50$ lattice artifacts are pronounced, but for larger values $d_s(\sigma_{odd})$ and $d_s(\sigma_{even})$ merge into a smooth curve. In the branched polymer phase we see the plateau mentioned above (and we have not run the diffusion process long enough to see $d_s \to 0$). The value of $d_s$ is close to 4/3, the theoretical value for branched polymers, again providing evidence that the configurations indeed are very much like branched polymers, despite being four-dimensional triangulations. In the crumpled phase we see no plateau at all; the maximum clearly increases with $N_4$, and we observe a rapid drop towards zero after the maximum. This reflects the very short distances available for diffusion despite the large values of $N_4$ and thus effectively the high dimensionality of the configurations. If one can talk about a spectral dimension at all it is clearly large. In the crinkled region the behavior of the spectral dimension is somewhat similar to what we observed in the crumpled region, only the maxima of $d_s(\sigma)$ are somewhat smaller and the diffusion time during which $d_s(\sigma)$ is different from zero is longer. This is a reflection of the larger extension of the configurations in the crinkled region for a given $N_4$. However, the important message is really that the maximum of $d_s(\sigma)$ shows no sign of converging as a function of $N_4$. This is in contrast to the situation in four-dimensional CDT, where one also observes a $\sigma$ dependent $d_s$, but as a function of $N_4$ the curves $d_s(\sigma)_{N_4}$ converge to a universal curve $d_s(\sigma)_{N_4=\infty}$.
We cannot rule out that the same could happen here for very large $N_4$, but from the present data we cannot identify anything like a universal $d_s(\sigma)_{N_4=\infty}$. \section{Conclusions} As described in the Introduction, introducing $\beta$ as an additional coupling constant in DT-regularized Euclidean quantum gravity is potentially very interesting. It could unite a number of different approaches to quantum gravity: the DT lattice approach, the higher curvature approach leading to asymptotic freedom and the asymptotic safety approach based on the existence of a non-Gaussian UV fixed point. It could also, in principle, make connection to the CDT lattice approach, since at least the spectral dimension in the crinkled phase was reported in \cite{Laiho} to have a scale dependence similar to the one found in the CDT lattice approach to quantum gravity. However, at least applying conventional wisdom, in order to be interesting from a continuum point of view one has to be able to localize a phase transition point where continuum physics is recovered and the various lattice artifacts fade away. Unfortunately we have {\it not} been able to observe such a phase transition point. What we {\it have} observed is a first order phase transition line which is a natural continuation of the first order phase transition between the crumpled and the branched polymer phase observed originally at $\beta=0$. Such a continuation was of course expected when we explored the $(\kappa_2,\beta)$ coupling constant plane, but it could have changed into a second order transition point {\it if} there had been a genuine crinkled phase and a phase transition between the crinkled and the crumpled phases. However, we do not observe any signal, growing with the total volume, of a phase transition between the crumpled phase and the crinkled phase.
Configurations in the crinkled region look less ``crumpled'' ($V(r)$, minimal baby universe trees, spectral dimension), but the change is gradual as one moves away from the crumpled phase and it seems to be a finite size effect. While the results reported here are negative results, we nevertheless feel that they are important in the sense that they show that one should probably not spend more time investigating the so-called crinkled phase. As discussed in the Introduction, there should exist an asymptotically free--asymptotically safe Euclidean ``gravity'' theory, obtained by adding higher curvature terms which serve to make the theory renormalizable and at the same time cure the unboundedness problem of the Euclidean Einstein-Hilbert action. This might not be the gravity theory we want, and if it could in some way be rotated back to spacetime with Lorentzian signature it might not be unitary, but it should exist. Thus we should be able to identify it in the DT lattice approach, {\it provided} we can find a decent way to implement the higher curvature terms in the DT formalism. The present results indicate that the attempts to use the Regge curvature \rf{1.1}, even in some more general way via the suggested measure term \rf{1.6}, are too naive, and they tell us to go back to the drawing board. \vspace{.5cm} \noindent {\bf Acknowledgments.} The authors acknowledge support from the ERC-Advance grant 291092, ``Exploring the Quantum Universe'' (EQU). JA acknowledges support of FNU, the Free Danish Research Council, from the grant ``quantum gravity and the role of black holes''. JJ acknowledges the support of the grant DEC-2012/06/A/ST2/00389 from the National Science Centre Poland. Finally this research was supported in part by the Perimeter Institute of Theoretical Physics. Research at Perimeter Institute is supported by the Government of Canada through Industry Canada and by the Province of Ontario through the Ministry of Economic Development \& Innovation.
\subsection{#1}\mdseries} \setlength\arrayrulewidth{.5pt} \newcommand\medrightarrow{\mathrel{{{\color{black}\relbar}\kern-0.9ex\rlap{\color{white}\ensuremath{\blacksquare}}\kern-0.9ex}\joinrel{\color{black}\rightarrow}}} \newcommand\medleftarrow{\mathrel{{\color{black}\leftarrow}\kern-0.9ex\rlap{\color{white}\ensuremath{\blacksquare}}\kern-0.9ex\joinrel{{\color{black}\relbar}}}} \newcommand\medleftrightarrow{\mathrel{\leftarrow\kern-1.685ex\rightarrow}} \newcommand{\medrightarrow}{\medrightarrow} \newcommand{\medleftarrow}{\medleftarrow} \newcommand{\medleftrightarrow}{\medleftrightarrow} \newcommand\Section{Sect.} \definecolor{light-gray}{gray}{0.875} \definecolor{darker-gray}{gray}{0.45} \newcommand{\todo}[1]{{\color{red}#1}} \let\oldDelta=\Delta \let\oldSigma=\Sigma \renewcommand\Sigma{\mathrm{\oldSigma}} \newcommand{\floorceil}[1]{\rlap{\ensuremath{\bm{\kern-0.0416667em\lfloor}}}\bm{\kern-0.0416667em\lceil} #1\bm{\rfloor\kern-0.0416667em}\llap{\ensuremath{\bm{\rceil\kern-0.0416667em}}}} \newcommand{\mathcalx{F}}{\mathcalx{F}} \newcommand{\mathcalx{F}^{-1}}{\mathcalx{F}^{-1}} \newcommand{\floor}[1]{\mathcalx{F}\!(#1)} \newcommand{\ceil}[1]{\mathcalx{F}^{-1}\!(#1)} \newcommand\eqIH{\overset{\smash{\scriptscriptstyle\text{IH}}}{=}} \DeclareMathOperator{\floorlam}{lam} \newcommand{\Sigma_\mathsf{ty}}{\Sigma_\mathsf{ty}} \newcommand{\mathscr{V}}{\mathscr{V}} \newcommand{\VV_\mathsf{ty}}{\mathscr{V}_\mathsf{ty}} \newcommand{\mathscr{I}}{\mathscr{I}} \newcommand{\mathscr{J}}{\mathscr{J}} \newcommand{\II_\mathsf{ty}}{\mathscr{J}_\mathsf{ty}} \newcommand{\III_\mathsf{ty}}{\mathscr{I}_\mathsf{ty}} \newcommand{\mathscr{D}}{\mathscr{D}} \newcommand{R_{\floor{C\theta}}}{R_{\floor{C\theta}}} \newcommand{R}{R} \newcommand{{E}_{\floor{C\theta}}}{{E}_{\floor{C\theta}}} \newcommand{{E}_{\floor{D\theta}}}{{E}_{\floor{D\theta}}} \newcommand{\mathscr{U}}{\mathscr{U}} \newcommand{\mathscr{F}}{\mathscr{F}} \newcommand{\mathscr{E}}{\mathscr{E}} 
\newcommand{\III^{\smash{{\mathrm{GH}}}}}{\mathscr{I}^{\smash{{\mathrm{GH}}}}} \newcommand{\IIty^{{\mathrm{GH}}}}{\II_\mathsf{ty}^{{\mathrm{GH}}}} \newcommand{\UU^{{\mathrm{GH}}}}{\mathscr{U}^{{\mathrm{GH}}}} \newcommand{\II^{{\mathrm{GH}}}}{\mathscr{J}^{{\mathrm{GH}}}} \newcommand{\EE^{{\mathrm{GH}}}}{\mathscr{E}^{{\mathrm{GH}}}} \newcommand{\DD}{\mathscr{D}} \newcommand{\III^{\smash{{\mathrm{GH}}}}}{\mathscr{I}^{\smash{{\mathrm{GH}}}}} \newcommand{\III}{\mathscr{I}} \newcommand{\II}{\mathscr{J}} \newcommand{\EE}{\mathscr{E}} \newcommand{\UU}{\mathscr{U}} \newcommand{\mathscr{L}}{\mathscr{L}} \newcommand{\LL^{{\mathrm{GH}}}}{\mathscr{L}^{{\mathrm{GH}}}} \newcommand{\infname}[1]{\textsc{#1}} \newcommand{\unified}[1]{\smash{\setlength{\fboxsep}{.3ex}\colorbox{light-gray}{\ensuremath{\vphantom{('q}{#1}}}}} \newcommand{[}{[} \newcommand{]}{]} \newcommand{\subterm}[2]{#1[#2]} \newcommand{\lang}{\begin{picture}(5,7) \put(1.1,2.5){\rotatebox{45}{\line(1,0){6.0}}} \put(1.1,2.5){\rotatebox{315}{\line(1,0){6.0}}} \end{picture}} \newcommand{\rang}{\begin{picture}(5,7) \put(0,2.5){\rotatebox{135}{\line(1,0){6.0}}} \put(0,2.5){\rotatebox{225}{\line(1,0){6.0}}} \end{picture}} \newcommand{\lang\,}{\lang\,} \newcommand{\rang}{\rang} \newcommand{\greensubterm}[2]{#1\lang\, #2\rang} \newcommand{\lang\!\!\leftgreensubterm}{\lang\!\!\lang\,} \newcommand{\rightgreensubterm\!\!\rang}{\rang\!\!\rang} \newcommand{\yellowsubterm}[2]{#1\lang\!\!\leftgreensubterm #2\rightgreensubterm\!\!\rang} \newcommand{\leftyellowsubterm}{\lang\!\!\leftgreensubterm} \newcommand{\rightyellowsubterm}{\rightgreensubterm\!\!\rang} \newcommand{\orangesubterm}[3]{#1\leftyellowsubterm #2.\> #3\rightyellowsubterm} \newcommand{\orangesubtermeta}[3]{\orangesubterm{#1}{#2}{#3}_{\!\eta}} \newcommand{\llbracket}{\llbracket} \newcommand{\rrbracket}{\rrbracket} \newcommand{\interpret}[3]{\smash{\llbracket #1\rrbracket_{#2}^{#3}}} \newcommand{\interpreta}[1]{\interpret{#1}{\mathscr{I}}{}} 
\newcommand{\interpretaxi}[1]{\interpret{#1}{\mathscr{I}}{\xi}} \newcommand{\interpretfo}[2]{\interpret{#1}{R}{#2}} \newcommand{\interpretho}[2]{\interpret{#1}{\III^{\smash{{\mathrm{GH}}}}}{#2}} \newcommand{\interpretfoxi}[1]{\interpretfo{#1}{\xi}} \newcommand{\interprethoxi}[1]{\interpretho{#1}{\xi}} \renewcommand{\doteq}{\mathrel{\dot\approx}} \newcommand{\approx}{\approx} \newcommand{\not\eq}{\not\approx} \newcommand{\eqR}[2]{#1\sim#2} \newcommand{\namedinference}[3]{\prftree[r]{\relax{\infname{#1}}}{\strut#2}{\strut#3}} \newcommand{\inference}[2]{\namedinference{}{\strut#1}{\strut#2}} \newcommand{\namedsimp}[3]{\prftree[d][r]{\relax{\infname{#1}}}{\strut#2}{\strut#3}} \newcommand{\simp}[2]{\namedinference{}{\strut#1}{\strut#2}} \DeclareMathOperator{\mgu}{mgu} \DeclareMathOperator{\csu}{CSU} % \newcommand\UNIF{\mathrel{\smash{\stackrel{\lower.1ex\hbox{\ensuremath{\scriptscriptstyle ?}}}{=}}}} \newcommand{\tuple}[1]{\bar{#1}} \newcommand\Tuple[1]{\overline{#1}} \newcommand{\cstX}[1]{{\textsf{#1}}} \newcommand{\cst}[1]{{\mathsf{#1}}} \newcommand{\var}[1]{{\mathit{#1}}} \newcommand{\typ}[1]{{\mathit{#1}}} \newcommand\defeq{=} \newcommand{\mathrel{::=}}{\mathrel{::=}} \newcommand\fun{\rightarrow} \newcommand\foralltynospace[1]{\mathsf{\Pi}#1.} \newcommand\forallty[1]{\foralltynospace{#1}\;} \newcommand\fofun{\Rightarrow} \newcommand{\typeargs}[1]{{\langle#1\rangle\kern-0.083333em}} \newcommand\oftype{:} \newcommand\oftypedecl{:} \newcommand{\cst{diff}}{\cst{diff}} \newcommand{\cst{db}}{\cst{db}} \DeclareMathOperator{\len}{length} \newcommand{\mathrel\land}{\mathrel\land} \newcommand{\mathrel\lor}{\mathrel\lor} \newcommand{\mathrel\cup}{\mathrel\cup} \newcommand{\mathrel\cap}{\mathrel\cap} \renewcommand\S{\mathit{S}} \newcommand\betanf[1]{#1\kern+0.083333em{\downarrow}_{\beta\eta}} \newcommand{\mathcalx{T}}{\mathcalx{T}} \newcommand{\mathcalx{Ty}}{\mathcalx{Ty}} \newcommand{\mathcalx{C}}{\mathcalx{C}} \newcommand{{\mathrm{H}}}{{\mathrm{H}}} 
\newcommand{{\mathrm{GH}}}{{\mathrm{GH}}} \newcommand{{\mathrm{GF}}}{{\mathrm{GF}}} \newcommand{\TT_\HH}{\mathcalx{T}_{\mathrm{H}}} \newcommand{\TT_\GH}{\mathcalx{T}_{\mathrm{GH}}} \newcommand{\TT_\GF}{\mathcalx{T}_{\mathrm{GF}}} \newcommand{\Ty_\HH}{\mathcalx{Ty}_{\mathrm{H}}} \newcommand{\Ty_\GH}{\mathcalx{Ty}_{\mathrm{GH}}} \newcommand{\Ty_\GF}{\mathcalx{Ty}_{\mathrm{GF}}} \newcommand{\CC_\HH}{\mathcalx{C}_{\mathrm{H}}} \newcommand{\CC_\GH}{\mathcalx{C}_{\mathrm{GH}}} \newcommand{\CC_\GF}{\mathcalx{C}_{\mathrm{GF}}} \newcommand{\mathit{Inf}}{\mathit{Inf}} \newcommand{\mathit{GInf}}{\mathit{GInf}} \newcommand{\mathit{Red}}{\mathit{Red}} \newcommand{\mathit{Red}_{\mathrm{I}}}{\mathit{Red}_{\mathrm{I}}} \newcommand{\mathit{Red}_{\mathrm{F}}}{\mathit{Red}_{\mathrm{F}}} \newcommand{\models_{\mathcalx{G}}}{\models_{\mathcalx{G}}} \newcommand{\mathit{HInf}}{\mathit{HInf}} \newcommand{\mathit{FInf}}{\mathit{FInf}} \newcommand{\mathit{GFInf}}{\mathit{GFInf}} \newcommand{\mathit{GHInf}}{\mathit{GHInf}} \newcommand{\mathit{Red}^{\mathcalx{G}}}{\mathit{Red}^{\mathcalx{G}}} \newcommand{{\mathcalx{G}}}{{\mathcalx{G}}} \newcommand{\mathit{concl}}{\mathit{concl}} \newcommand{\mathit{prems}}{\mathit{prems}} \newcommand{\mathit{mprem}}{\mathit{mprem}} \newcommand{\mathit{HSel}}{\mathit{HSel}} \newcommand{\mathit{GHSel}}{\mathit{GHSel}} \newcommand{\mathit{GFSel}}{\mathit{GFSel}} \newcommand{\mathit{Red}_{\mathrm{I}}}{\mathit{Red}_{\mathrm{I}}} \newcommand{\mathit{Red}_{\mathrm{C}}}{\mathit{Red}_{\mathrm{C}}} \newcommand{{\mathit{HRed}}_{\mathrm{C}}}{{\mathit{HRed}}_{\mathrm{C}}} \newcommand{\mathit{HRed}_{\mathrm{I}}}{\mathit{HRed}_{\mathrm{I}}} \newcommand{\mathit{GHRed}_{\mathrm{C}}}{\mathit{GHRed}_{\mathrm{C}}} \newcommand{\mathit{GHRed}_{\mathrm{I}}}{\mathit{GHRed}_{\mathrm{I}}} \newcommand{\mathit{GFRed}_{\mathrm{C}}}{\mathit{GFRed}_{\mathrm{C}}} \newcommand{\mathit{GFRed}_{\mathrm{I}}}{\mathit{GFRed}_{\mathrm{I}}} \newcommand{\hbox{w.r.t.}}{\hbox{w.r.t.}} 
\newcommand{\pow}[1]{\mathcal{P}(#1)} \newcommand{\mathrel{|\kern-.1ex}\joinrel\approx}{\mathrel{|\kern-.1ex}\joinrel\approx} \newcommand{\mathrel{\raisebox{+0.8pt}{\Large\rlap{\kern0.5pt$\cdot$}}}\gtrsim}{\mathrel{\raisebox{+0.8pt}{\Large\rlap{\kern0.5pt$\cdot$}}}\gtrsim} \newcommand\encOonly{\mathcalx{\kern-0.0416667em O}} \newcommand{\encO}[1]{\encOonly(#1)} \newcommand\encOsubonly[1]{\encOonly_{\kern+0.0416667em\let\>=\kern+0.0416667em#1}} \newcommand{\encOsub}[2]{\encOsubonly{#1}(#2)} \newcommand\encBonly{\mathcalx{B}} \newcommand{\encB}[2]{\encBonly_{#1}(#2)} \newcommand\fosubscript{{\mathsf{fo}}} \newcommand\lsubscript{{\lambda}} \newcommand\fosucc{\succ_{\kern-0.083333em\fosubscript}} \newcommand\lsucc{\succ_{\kern-0.083333em\lsubscript}} \newcommand\fosucceq{\succeq_{\kern-0.0416667em\fosubscript}} \newcommand\lsucceq{\succeq_{\lsubscript}} \newcommand\lsuccsim{\succsim_{\kern+0.0416667em\lsubscript}} \newcommand\lfprec{\prec_\fosubscript} \newcommand\lprec{\prec_\lsubscript} \newcommand\zof[1]{z_{\kern+0.0416667em\let\>=\kern+0.0416667em#1}} \newcommand{\OK}[1]{#1} \newcommand{\NOK}[1]{{\color{red}#1}} \journalname{J.\ Autom.\ Reasoning} \begin{document} \title{Superposition with Lambdas% } \subtitle{} \author{Alexander Bentkamp \and Jasmin Blanchette \and Sophie~Tourret \and Petar~Vukmirovi\'c \and Uwe~Waldmann } \institute{% Alexander Bentkamp \CORR \and Jasmin Blanchette \and Petar~Vukmirovi\'c\at Vrije Universiteit Amsterdam, Department of Computer Science, Section of Theoretical Computer Science, De Boelelaan 1111, 1081 HV Amsterdam, the Netherlands\\ \email{\{a.bentkamp,j.c.blanchette,p.vukmirovic\}@vu.nl} \and Jasmin Blanchette \and Sophie~Tourret \and Uwe~Waldmann \at% Max-Planck-Institut f\"ur Informatik, Saarland Informatics Campus E1 4, 66123 Saarbr\"ucken, Germany \\ \email{\{jblanche,stourret,uwe\}@mpi-inf.mpg.de}% } \date{Received: date / Accepted: date} \maketitle \begin{abstract} We designed a superposition calculus for a clausal fragment 
of extensional polymorphic higher-order logic that includes anonymous functions but excludes Booleans. The inference rules work on $\beta\eta$-equivalence classes of $\lambda$-terms and rely on higher-order unification to achieve refutational completeness. We implemented the calculus in the Zipperposition prover and evaluated it on TPTP and Isabelle benchmarks. The results suggest that superposition is a suitable basis for higher-order reasoning. \keywords{superposition calculus \and higher-order logic \and refutational completeness} \end{abstract} \section{Introduction} \label{sec:introduction} Superposition \cite{bachmair-ganzinger-1994} is widely regarded as the calculus par excellence for reasoning about first-order logic with equality. To increase automation in proof assistants and other verification tools based on higher-order formalisms, we propose to generalize superposition to an extensional, polymorphic, clausal version of higher-order logic (also called simple type theory). Our ambition is to achieve a \emph{graceful} extension, which coincides with standard superposition on first-order problems and smoothly scales up to arbitrary higher-order problems. Bentkamp, Blanchette, Cruanes, and Waldmann~\cite{bentkamp-et-al-2018} designed a family of superposition-like calculi for a $\lambda$-free clausal fragment of higher-order logic, with currying and applied variables. We adapt their extensional nonpurifying calculus to support \hbox{$\lambda$-terms} (\Section~\ref{sec:the-calculus}). Our calculus does not support interpreted Booleans; it is conceived as the penultimate milestone towards a superposition calculus for full higher-order logic. If desired, Booleans can be encoded in our logic fragment using an uninterpreted type and uninterpreted ``proxy'' symbols corresponding to equality, the connectives, and the quantifiers. 
Designing a higher-order superposition calculus poses three main challenges: \begin{enumerate} \item Standard superposition is parameterized by a ground-total simplification order~$\succ$, but such orders do not exist for $\lambda$-terms equal up to $\beta$-conversion. The relations designed for proving termination of higher-order term rewriting systems, such as HORPO \cite{jouannaud-rubio-2007} and CPO \cite{blanqui-et-al-2015}, lack many of the desired properties (e.g., transitivity, stability under grounding substitutions). \medskip \item Higher-order unification is undecidable and may give rise to an infinite set of incomparable unifiers. For example, the constraint $\cst{f} \> (y \> \cst{a}) \UNIF y \> (\cst{f} \> \cst{a})$ admits infinitely many independent solutions of the form $\{ y \mapsto \lambda x.\; \cst{f}^n \, x \}.$ \pagebreak[2] \medskip \item In first-order logic, to rewrite into a term~$s$ using an oriented equation $t \approx t'$, it suffices to find a subterm of $s$ that is unifiable with $t$. In higher-order logic, this is insufficient. Consider superposition from $\cst{f} \> \cst{c} \approx \cst{a}$ into $y \> \cst{c} \not\eq y \> \cst{b}$. The left-hand sides can obviously be unified by $\{y \mapsto \cst{f}\}$, but the more general $\{y \mapsto \lambda x.\> z\> x\> (\cst{f}\> x)\}$ also gives rise to a subterm $\cst{f} \> \cst{c}$ after $\beta$-reduction. The corresponding inference generates the clause $z\> \cst{c}\> \cst{a} \not\eq z\> \cst{b} \>(\cst{f}\> \cst{b})$. \end{enumerate} To address the first challenge, we adopt the $\eta$-short $\beta$-normal form to represent $\beta\eta$-equivalence classes of $\lambda$-terms. In the spirit of Jouannaud and Rubio's early joint work \cite{jouannaud-rubio-1998}, we state requirements on the term order only for ground terms (i.e., closed monomorphic $\beta\eta$-equivalence classes); the nonground case is connected to the ground case via stability under grounding substitutions. 
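To make the second challenge concrete, the following toy sketch (our own term encoding, not the calculus's data structures) checks that every substitution $\{ y \mapsto \lambda x.\; \cst{f}^n \, x \}$ is indeed a solution of the constraint $\cst{f} \> (y \> \cst{a}) \UNIF y \> (\cst{f} \> \cst{a})$: after substituting and $\beta$-normalizing, both sides reduce to $\cst{f}^{n+1}\,\cst{a}$.

```python
# Terms: ('var', v) | ('const', c) | ('app', s, t) | ('lam', v, body)

def subst(t, v, r):
    """Substitution; sufficient here since r has no free variables."""
    tag = t[0]
    if tag == 'var':
        return r if t[1] == v else t
    if tag == 'const':
        return t
    if tag == 'app':
        return ('app', subst(t[1], v, r), subst(t[2], v, r))
    return t if t[1] == v else ('lam', t[1], subst(t[2], v, r))

def beta_nf(t):
    """Normalize by repeatedly contracting beta redexes."""
    tag = t[0]
    if tag in ('var', 'const'):
        return t
    if tag == 'lam':
        return ('lam', t[1], beta_nf(t[2]))
    s, u = beta_nf(t[1]), beta_nf(t[2])
    if s[0] == 'lam':
        return beta_nf(subst(s[2], s[1], u))
    return ('app', s, u)

f = ('const', 'f'); a = ('const', 'a'); y = ('var', 'y')
lhs = ('app', f, ('app', y, a))        # f (y a)
rhs = ('app', y, ('app', f, a))        # y (f a)

def solution(n):
    """The unifier candidate {y -> lambda x. f^n x}."""
    body = ('var', 'x')
    for _ in range(n):
        body = ('app', f, body)
    return ('lam', 'x', body)

# Each of these incomparable substitutions solves the constraint:
for n in range(5):
    s = solution(n)
    assert beta_nf(subst(lhs, 'y', s)) == beta_nf(subst(rhs, 'y', s))
```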
Even on ground terms, we cannot obtain all desirable properties. We sacrifice compatibility with arguments (the property that $s' \succ s$ implies $s'\>t \succ s\>t$), compensating with an \emph{argument congruence} rule (\infname{ArgCong}), as in Bentkamp et al.\ \cite{bentkamp-et-al-2018}. For the second challenge, we accept that there might be infinitely many incomparable unifiers and enumerate a complete set (including the notorious flex--flex pairs \cite{huet-1975}), relying on heuristics to postpone the combinatorial explosion. The saturation loop must also be adapted to interleave this enumeration with the theorem prover's other activities (\Section~\ref{sec:implementation}). Despite its reputation for explosiveness, higher-order unification is a conceptual improvement over $\cst{SK}$ combinators, because it can often \emph{compute} the right unifier. Consider the conjecture $\exists z. \> \forall x\> y.\> z\> x\> y \approx \cst{f}\> y\> x$. After negation, clausification, and skolemization (which are as for first-order logic), the formula becomes $z\> (\cst{sk}_\cst{x}\> z) \> (\cst{sk}_\cst{y}\> z) \not\eq \cst{f}\> (\cst{sk}_\cst{y}\> z) \> (\cst{sk}_\cst{x}\> z)$. Higher-order unification quickly computes the unique unifier: $\{z \mapsto \lambda x\> y.\> \cst{f}\>y\>x\}$. In contrast, an encoding approach based on combinators, similar to the one implemented in Sledgehammer \cite{meng-paulson-2008-trans}, would blindly enumerate all possible $\cst{SK}$ terms for $z$ until the right one, $\cst{S}\> (\cst{K}\> (\cst{S}\> \cst{f}))\> \cst{K}$, is found. Given the definitions $\cst{S}\> z\> y\> x \approx z\> x\> (y\> x)$ and $\cst{K}\> x\> y \approx x$, the E prover \cite{schulz-et-al-2019} in \emph{auto} mode needs to perform 3757 inferences to derive the empty clause. 
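The combinator term above can be sanity-checked in a few lines: with curried Python functions standing in for $\lambda$-terms (an illustrative encoding, not part of the calculus), $\cst{S}\> (\cst{K}\> (\cst{S}\> \cst{f}))\> \cst{K}$ applied to $x$ and $y$ evaluates to $\cst{f}\>y\>x$, as claimed.

```python
# S z y x = z x (y x) and K x y = x, as curried Python functions.
S = lambda z: lambda y: lambda x: z(x)(y(x))
K = lambda x: lambda y: x
f = lambda y: lambda x: ('f', y, x)   # an uninterpreted binary symbol

z = S(K(S(f)))(K)                     # the term the enumeration must find
assert z('a')('b') == f('b')('a')     # z x y = f y x
```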
For the third challenge, the idea is that, when applying $t \approx t'$ to perform rewriting inside a higher-order term $s$, we can encode an arbitrary context as a fresh higher-order variable $z$, unifying $s$ with $z\>t$; the result is $(z\>t')\sigma$, for some unifier~$\sigma$. This is performed by a dedicated \emph{fluid subterm superposition} rule (\infname{FluidSup}). Functional extensionality is also considered a quintessential higher-order challenge \cite{benzmueller-kohlhase-1998}, although similar difficulties arise with first-order sets and arrays \cite{gupta-et-al-2014}. Our approach is to add extensionality as an axiom and provide optional rules as optimizations (\Section~\ref{sec:extensions}). With this axiom, our calculus is refutationally complete \hbox{w.r.t.}\ extensional Henkin semantics (\Section~\ref{sec:refutational-completeness}). Our proof employs the new saturation framework by Waldmann et al.~\cite{waldmann-et-al-2020-saturation} to derive dynamic completeness of a given clause prover from ground static completeness. We implemented the calculus in the Zipperposition prover \cite{cruanes-2017} (\Section~\ref{sec:implementation}). Our empirical evaluation includes benchmarks from the TPTP~\cite{sutcliffe-2017-tptp} and interactive verification problems exported from Isabelle/HOL \cite{boehme-nipkow-2010} (\Section~\ref{sec:evaluation}). The results clearly demonstrate the calculus's potential. The 2020 edition of the CADE ATP System Competition (CASC) provides further confirmation: Zipperposition finished 20~percentage points ahead of its closest rival. This suggests that an implementation inside a high-performance prover such as E \cite{schulz-et-al-2019} or Vampire \cite{kovacs-voronkov-2013} could fulfill the promise of strong proof automation for higher-order logic (\Section~\ref{sec:discussion-and-related-work}). An earlier version of this article was presented at CADE-27 \cite{bentkamp-et-al-2019-lamsup}.
This article extends the conference paper with more explanations, detailed soundness and completeness proofs, including dynamic completeness, and new optional inference rules. We have also updated the empirical evaluation and extended the coverage of related work. Finally, we tightened side condition~4 of \infname{FluidSup}, making the rule slightly less explosive. \section{Logic} \label{sec:logic} Our \relax{extensional polymorphic clausal higher-order logic} is a restriction of full TPTP THF \cite{benzmueller-paulson-2010} to rank-1 (top-level) polymorphism, as in TH1 \cite{kaliszyk-et-al-2016}. In keeping with standard superposition, we consider only formulas in conjunctive normal form, without explicit quantifiers or Boolean type. We use Henkin semantics \cite{henkin-1950,benzmueller-miller-2014,fitting-2002}, as opposed to the standard semantics that is commonly considered the foundation of the HOL systems \cite{gordon-melham-1993}. However, both of these semantics are compatible with the notion of provability employed by the HOL systems. By admitting nonstandard models, Henkin semantics is not subject to G\"odel's first incompleteness theorem, allowing us to claim not only soundness but also refutational completeness of our calculus. \ourparagraph{Syntax} We fix a set $\Sigma_\mathsf{ty}$ of type constructors with arities and a set $\VV_\mathsf{ty}$ of type variables. We require at least one nullary type constructor and a binary function type constructor ${\fun}$ to be present in $\Sigma_\mathsf{ty}$. A \relax{type}~$\tau,\upsilon$ is either a type variable $\alpha\in\VV_\mathsf{ty}$ or has the form $\kappa(\tuple{\tau}_n)$ for an $n$-ary type constructor $\kappa\in\Sigma_\mathsf{ty}$ and types $\tuple{\tau}_n$. We use the notation $\tuple{a}_n$ or $\tuple{a}$ to stand for the tuple $(a_1,\dots,a_n)$ or product $a_1 \times \dots \times a_n$, where $n \ge 0$. We write $\kappa$ for $\kappa()$ and $\tau\fun\upsilon$ for ${\fun}(\tau,\upsilon)$. 
\relax{Type declarations} have the form $\forallty{\tuple{\alpha}_m}\tau$ (or simply $\tau$ if $m = 0$), where all type variables occurring in $\tau$ belong to $\tuple{\alpha}_m$. We fix a set $\Sigma$ of (function) symbols $\cst{a}, \cst{b}, \cst{c}, \cst{f}, \cst{g}, \cst{h}, \dots$, with type declarations, written as $\cst{f}\oftypedecl\forallty{\tuple{\alpha}_m}\tau$ or~$\cst{f}$, and a set $\mathscr{V}$ of term variables with associated types, written as $\var{x}\oftype\tau$ or~$\var{x}$. The notation $t \oftype\tau$ will also be used to indicate the type of arbitrary terms $t$. We require the presence of a symbol of type $\forallty\alpha \alpha$ and of a symbol $\cst{diff}\oftype\forallty{\alpha,\beta}(\alpha\fun\beta)\fun(\alpha\fun\beta)\fun{\alpha}$ in $\Sigma$. We use $\cst{diff}$ to express the polymorphic functional extensionality axiom. A \relax{signature} is a pair $(\Sigma_\mathsf{ty},\Sigma)$. In the following, we will define terms in three layers of abstraction: raw $\lambda$-terms, $\lambda$-terms, and terms, where $\lambda$-terms will be $\alpha$-equivalence classes of raw $\lambda$-terms and terms will be $\beta\eta$-equivalence classes of $\lambda$-terms. The \emph{raw $\lambda$-terms} over a given signature and their associated types are defined inductively as follows. Every $x \mathbin: \tau \in\mathscr{V}$ is a raw $\lambda$-term of type $\tau$. If $\cst{f}\oftypedecl\forallty{\tuple{\alpha}_m}\tau \in \Sigma$ and $\tuple{\upsilon}_m$ is a tuple of types, called \emph{type arguments}, then $\cst{f}\typeargs{\tuple{\upsilon}_m}$ (or $\cst{f}$ if $m = 0$) is a raw $\lambda$-term of type $\tau\{\tuple{\alpha}_m \mapsto \tuple{\upsilon}_m\}$. If $x\mathbin\oftype\tau$ and $t\oftype\upsilon$, then the \emph{$\lambda$-expression} $\lambda x.\> t$ is a raw $\lambda$-term of type $\tau\fun\upsilon$. If $s\oftype\tau\fun\upsilon$ and $t\oftype\tau$, then the \emph{application} $s\>t$ is a raw $\lambda$-term of type $\upsilon$.
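The typing rules above translate directly into a small type checker. The sketch below covers only the monomorphic fragment (no type arguments) and uses an ad hoc encoding of our own, not the prover's data structures.

```python
# Types: a base type name like 'a', or ('fun', tau, upsilon) for tau -> upsilon.
# Terms: ('var', x) | ('const', name, type) | ('lam', x, tau, body) | ('app', s, u)

def type_of(t, env):
    """Compute the type of a raw lambda-term under a variable typing env."""
    if t[0] == 'var':
        return env[t[1]]
    if t[0] == 'const':
        return t[2]
    if t[0] == 'lam':                     # lambda x : tau. body  has type  tau -> type(body)
        x, tau, body = t[1], t[2], t[3]
        return ('fun', tau, type_of(body, {**env, x: tau}))
    s, u = t[1], t[2]                     # application s u
    ts, tu = type_of(s, env), type_of(u, env)
    assert ts[0] == 'fun' and ts[1] == tu, "ill-typed application"
    return ts[2]

f = ('const', 'f', ('fun', 'a', ('fun', 'a', 'a')))    # f : a -> a -> a
t = ('lam', 'x', 'a', ('app', ('app', f, ('var', 'x')), ('var', 'x')))
assert type_of(t, {}) == ('fun', 'a', 'a')             # lambda x. f x x : a -> a
```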
The function type constructor $\fun$ is right-associative; application is left-associative. Using the spine notation \cite{cervesato-pfenning-2003}, raw $\lambda$-terms can be decomposed in a unique way as a nonapplication \emph{head} $t$ applied to zero or more arguments: $t \> s_1\dots s_n$ or $t \> \tuple{s}_n$ (abusing notation). A raw $\lambda$-term $s$ is a \emph{subterm} of a raw $\lambda$-term $t$, written $t = \subterm{t}{s}$, if $t = s$, if $t = (\lambda x.\>\subterm{u}{s})$, if $t = (\subterm{u}{s})\>v$, or if $t = u\>(\subterm{v}{s})$ for some raw $\lambda$-terms $u$ and $v$. A \emph{proper} subterm of a raw $\lambda$-term $t$ is any subterm of $t$ that is distinct from $t$ itself. A variable occurrence is \emph{free} in a raw $\lambda$-term if it is not bound by a \hbox{$\lambda$-expression}. A raw $\lambda$-term is \emph{ground} if it is built without using type variables and contains no free term variables. The $\alpha$-renaming rule is defined as $(\lambda x.\> t) \kern+0.083333em\medrightarrow_\alpha\kern+0.083333em (\lambda y.\> t\{x \mapsto y\})$, where $y$ does not occur free in $t$ and is not captured by a $\lambda$-binder in $t$. Raw $\lambda$-terms form equivalence classes modulo $\alpha$-renaming, called \emph{$\lambda$-terms}. We lift the above notions on raw $\lambda$-terms to $\lambda$-terms. A substitution $\rho$ is a function from type variables to types and from term variables to $\lambda$-terms such that it maps all but finitely many variables to themselves. We require that it is type-correct---i.e., for each $x\oftype\tau \in \mathscr{V}$, $x\rho$ is of type $\tau\rho$. The letters $\theta,\pi,\rho,\sigma$ are reserved for substitutions. Substitutions $\alpha$-rename $\lambda$-terms to avoid capture; for example, $(\lambda x.\> y)\{y \mapsto x\} = (\lambda x'\!.\> x)$. The composition $\rho\sigma$ applies $\rho$ first: $t\rho\sigma = (t\rho)\sigma$. 
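The capture-avoiding substitution described above, including the $\alpha$-renaming in the example $(\lambda x.\> y)\{y \mapsto x\} = (\lambda x'\!.\> x)$, can be sketched as follows (a toy implementation over our own term encoding):

```python
from itertools import count

# Terms: ('var', v) | ('app', s, t) | ('lam', v, body)

def free_vars(t):
    if t[0] == 'var': return {t[1]}
    if t[0] == 'app': return free_vars(t[1]) | free_vars(t[2])
    return free_vars(t[2]) - {t[1]}

def fresh(avoid):
    """Pick a variable name not occurring in `avoid`."""
    return next(v for n in count() for v in [f"x{n}"] if v not in avoid)

def subst(t, v, r):
    """Capture-avoiding substitution t{v -> r}."""
    if t[0] == 'var':
        return r if t[1] == v else t
    if t[0] == 'app':
        return ('app', subst(t[1], v, r), subst(t[2], v, r))
    x, body = t[1], t[2]
    if x == v:
        return t                          # v is shadowed by the binder
    if x in free_vars(r):                 # alpha-rename to avoid capture
        x2 = fresh(free_vars(r) | free_vars(body) | {v})
        body = subst(body, x, ('var', x2))
        x = x2
    return ('lam', x, subst(body, v, r))

# (lambda x. y){y -> x} must rename the binder rather than capture x:
t = subst(('lam', 'x', ('var', 'y')), 'y', ('var', 'x'))
assert t[0] == 'lam' and t[1] != 'x' and t[2] == ('var', 'x')
```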
The notation $\sigma[\tuple{x}_n \mapsto \tuple{s}_n]$ denotes the substitution that replaces each $x_i$ by $s_i$ and that otherwise coincides with $\sigma$. The $\beta$- and $\eta$-reduction rules are specified on $\lambda$-terms as $(\lambda x.\> t)\> u \kern+0.083333em\medrightarrow_\beta\kern+0.083333em t\{x \mapsto u\}$ and $(\lambda x.\> t\> x) \kern+0.083333em\medrightarrow_\eta\kern+0.083333em t$. For $\beta$, bound variables in $t$ are implicitly renamed to avoid capture; for $\eta$, the variable $x$ must not occur free in $t$. The $\lambda$-terms form equivalence classes modulo $\beta\eta$-reduction, called \emph{$\beta\eta$-equivalence classes} or simply \emph{terms}. \begin{conventionx} \label{conv:beta-eta-normal-form} When defining operations that need to analyze the structure of terms, we will use the $\eta$-short $\beta$-normal form $\betanf{t}$, obtained by applying $\medrightarrow_\beta$ and $\medrightarrow_\eta$ exhaustively, as a representative of the equivalence class $t$. In particular, we lift the notions of subterms and occurrences of variables to $\beta\eta$-equivalence classes via their $\eta$-short $\beta$-normal representative. \end{conventionx} Many authors prefer the $\eta$-long $\beta$-normal form \cite{huet-1975,mayr-nipkow-1998,jouannaud-rubio-1998}, but in a polymorphic setting it has the\pagebreak[2] drawback that instantiating a type variable with a functional type can lead to $\eta$-expansion. We reserve the letters $s, t, u, v$ for terms and $x, y, z$ for variables. An equation $s \approx t$ is formally an unordered pair of terms $s$ and $t$. A literal is an equation or a negated equation, written $\lnot\; s \approx t$ or $s \not\eq t$. A clause $L_1 \lor \dots \lor L_n$ is a finite multiset of literals $L_{\!j}$. The empty clause is written as $\bot$. 
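Computing the $\eta$-short $\beta$-normal representative of Convention~\ref{conv:beta-eta-normal-form} can be sketched as a naive normalizer (our own encoding; it assumes bound variables are distinct from the free variables of substituted terms, so no $\alpha$-renaming is needed):

```python
# Terms: ('var', v) | ('const', c) | ('app', s, t) | ('lam', v, body)

def free_vars(t):
    if t[0] == 'var':   return {t[1]}
    if t[0] == 'const': return set()
    if t[0] == 'app':   return free_vars(t[1]) | free_vars(t[2])
    return free_vars(t[2]) - {t[1]}

def subst(t, v, r):
    if t[0] == 'var':   return r if t[1] == v else t
    if t[0] == 'const': return t
    if t[0] == 'app':   return ('app', subst(t[1], v, r), subst(t[2], v, r))
    return t if t[1] == v else ('lam', t[1], subst(t[2], v, r))

def normalize(t):
    """Apply ->beta and ->eta exhaustively (eta-short beta-normal form)."""
    if t[0] in ('var', 'const'):
        return t
    if t[0] == 'lam':
        b = normalize(t[2])
        # eta: (lambda x. t x) -> t  if x does not occur free in t
        if b[0] == 'app' and b[2] == ('var', t[1]) \
                and t[1] not in free_vars(b[1]):
            return b[1]
        return ('lam', t[1], b)
    s, u = normalize(t[1]), normalize(t[2])
    if s[0] == 'lam':                     # beta: (lambda x. t) u -> t{x -> u}
        return normalize(subst(s[2], s[1], u))
    return ('app', s, u)

f = ('const', 'f'); g = ('const', 'g')
# (lambda x. f x) eta-reduces to f:
assert normalize(('lam', 'x', ('app', f, ('var', 'x')))) == f
# (lambda x. (lambda y. g y) x) normalizes to g via beta and eta steps:
t = ('lam', 'x', ('app', ('lam', 'y', ('app', g, ('var', 'y'))), ('var', 'x')))
assert normalize(t) == g
```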
A \emph{complete set of unifiers} on a set~$X$ of variables for two terms $s$~and~$t$ is a set~$U$ of unifiers of $s$ and $t$ such that for every unifier $\theta$ of $s$~and~$t$ there exists a member $\sigma \in U$ and a substitution~$\rho$ such that $x\sigma\rho = x\theta$ for all $x \in X.$ We let $\csu_X(s,t)$ denote an arbitrary (preferably minimal) complete set of unifiers on~$X$ for $s$~and~$t$. We assume that all $\sigma \in \csu_X(s,t)$ are idempotent on $X$---i.e., $x\sigma\sigma = x\sigma$ for all $x \in X.$ The set~$X$ will consist of the free variables of the clauses in which $s$ and $t$ occur and will be left implicit. Given a substitution $\sigma$, the $\sigma$-instance of a term $t$ or clause $C$ is the term $t\sigma$ or the clause $C\sigma$, respectively. If $t\sigma$ or $C\sigma$ is ground, we call it a $\sigma$-ground instance. \ourparagraph{Semantics} A \emph{type interpretation} $\III_\mathsf{ty} = (\mathscr{U}, \II_\mathsf{ty})$ is defined as follows. The \emph{universe} $\mathscr{U}$ is a nonempty collection of nonempty sets, called \emph{domains}. The function $\II_\mathsf{ty}$ associates a function $\II_\mathsf{ty}(\kappa) : \mathscr{U}^n \rightarrow \mathscr{U}$ with each $n$-ary type constructor~$\kappa$, such that for all domains $\mathscr{D}_1,\mathscr{D}_2\in\mathscr{U}$, the set $\II_\mathsf{ty}(\fun)(\mathscr{D}_1,\mathscr{D}_2)$ is a subset of the function space from $\mathscr{D}_1$ to $\mathscr{D}_2$. The semantics is \emph{standard} if $\II_\mathsf{ty}(\fun)(\mathscr{D}_1,\mathscr{D}_2)$ is the entire function space for all $\mathscr{D}_1,\mathscr{D}_2$. A \emph{type valuation} $\xi$ is a function that maps every type variable to a domain. 
The \emph{denotation} of a type for a type interpretation $\III_\mathsf{ty}$ and a type valuation $\xi$ is defined by $\interpret{\alpha}{\III_\mathsf{ty}}{\xi}=\xi(\alpha)$ and $\interpret{\kappa(\tuple{\tau})}{\III_\mathsf{ty}}{\xi}= \II_\mathsf{ty}(\kappa)(\interpret{\tuple{\tau}}{\III_\mathsf{ty}}{\xi})$. We abuse notation by applying an operation on a tuple when it must be applied elementwise; thus, $\interpret{\tuple{\tau}_n}{\III_\mathsf{ty}}{\xi}$ stands for $\interpret{\tau_1}{\III_\mathsf{ty}}{\xi},\dots, \interpret{\tau_n}{\III_\mathsf{ty}}{\xi}$. A type valuation $\xi$ can be extended to be a \emph{valuation} by additionally assigning an element $\xi(x)\in\interpret{\tau}{\III_\mathsf{ty}}{\xi}$ to each variable $x \oftype \tau$. An \emph{interpretation function} $\mathscr{J}$ for a type interpretation $\III_\mathsf{ty}$ associates with each symbol $\cst{f}\oftypedecl\forallty{\tuple{\alpha}_m}\tau$ and domain tuple $\tuple{\mathscr{D}}_m\in\mathscr{U}^m$ a value $\mathscr{J}(\cst{f},\tuple{\mathscr{D}}_m) \in \interpret{\tau}{\III_\mathsf{ty}}{\xi}$, where $\xi$ is the type valuation that maps each $\alpha_i$ to $\mathscr{D}_i$. The comprehension principle states that every function designated by a $\lambda$-expression is contained in the corresponding domain. Loosely following Fitting~\cite[\Section~2.4]{fitting-2002}, we initially allow $\lambda$-expressions to designate arbitrary elements of the domain, to be able to define the denotation of a term. We impose restrictions afterwards using the notion of a proper interpretation. A \emph{$\lambda$-designation function} $\mathscr{L}$ for a type interpretation $\III_\mathsf{ty}$ is a function that maps a valuation $\xi$ and a $\lambda$-expression of type $\tau$ to elements of $\interpret{\tau}{\III_\mathsf{ty}}{\xi}$. 
A type interpretation, an interpretation function, and a $\lambda$-designation function form an (\emph{extensional}) \emph{interpretation} $\mathscr{I} = (\III_\mathsf{ty},\mathscr{J},\mathscr{L})$. For an interpretation~$\mathscr{I}$ and a valuation~$\xi$, the \relax{denotation of a term} is defined as $\interpretaxi{x} \defeq \xi(x)$, $\interpretaxi{\cst{f}\typeargs{\tuple{\tau}_m}} \defeq \mathscr{J}(\cst{f},\interpret{\tuple{\tau}_m}{\III_\mathsf{ty}}{\xi})$, $\interpretaxi{s\>t} \defeq \interpretaxi{s} (\interpretaxi{t})$, and $\interpretaxi{\lambda x.\> t} \defeq \mathscr{L}(\xi,\lambda x.\> t)$. For ground terms $t$, the denotation does not depend on the choice of the valuation $\xi$, which is why we sometimes write $\interpret{t}{\mathscr{I}}{}$ for $\interpretaxi{t}$. An interpretation $\mathscr{I}$ is \emph{proper} if $\interpret{\lambda x.\>t}{\mathscr{I}}{\xi}(a) = \interpret{t}{\mathscr{I}}{\xi[x\mapsto a]}$ for all $\lambda$-expressions $\lambda x.\>t$, all valuations $\xi$, and all $a$. If a type interpretation $\III_\mathsf{ty}$ and an interpretation function $\mathscr{J}$ can be extended by a $\lambda$-designation function $\mathscr{L}$ to a proper interpretation $(\III_\mathsf{ty},\mathscr{J},\mathscr{L})$, then this $\mathscr{L}$ is unique \cite[Proposition~2.18]{fitting-2002}. Given an interpretation $\mathscr{I}$ and a valuation $\xi$, an equation $s\approx t$ is \relax{true} if $\interpretaxi{s}$ and $\interpretaxi{t}$ are equal, and it is \relax{false} otherwise. A disequation $s\not\eq t$ is true if $s \approx t$ is false. A clause is \relax{true} if at least one of its literals is true. A clause set is \relax{true} if all its clauses are true. A proper interpretation $\mathscr{I}$ is a \emph{model} of a clause set $N$, written $\mathscr{I} \models N$, if $N$ is true in $\mathscr{I}$ for all valuations $\xi$.
\ourparagraph{Axiomatization of Booleans} \looseness=-1 Our clausal logic lacks a Boolean type, but it can easily be axiomatized as follows. We extend the signature with a nullary type constructor $\typ{bool} \in \Sigma_\mathsf{ty}$ equipped with the proxy constants $\cst{t}, \cst{f} : \typ{bool}$, $\cst{not} : \typ{bool} \fun \typ{bool}$, $\cst{and}, \cst{or}, \cst{impl}, \cst{equiv} : \typ{bool} \fun \typ{bool} \fun \typ{bool}$, $\cst{forall}, \cst{exists} : \forallty{\alpha} (\alpha \fun \typ{bool}) \fun \typ{bool}$, $\cst{eq} : \forallty{\alpha} \alpha \fun \alpha \fun \typ{bool}$, and $\cst{choice} : \forallty{\alpha} (\alpha \fun \typ{bool}) \fun \alpha$, characterized by the axioms \kern\abovedisplayskip \noindent \begin{minipage}[t]{.2\textwidth} \begin{center} $\cst{t} \not\eq \cst{f}$ \\ $x \approx \cst{t} \mathrel\lor x \approx \cst{f}$ \\ $\cst{not} \> \cst{t} \approx \cst{f}$ \\ $\cst{not} \> \cst{f} \approx \cst{t}$ \\ $\cst{and} \> \cst{t} \> x \approx x $ \\ $\cst{and} \> \cst{f} \> x \approx \cst{f} $ \\ \end{center} \end{minipage}% \begin{minipage}[t]{.3\textwidth} \begin{center} $\cst{or} \> \cst{t} \> x \approx \cst{t} $ \\ $\cst{or} \> \cst{f} \> x \approx x $ \\ $\cst{impl} \> \cst{t} \> x \approx x $ \\ $\cst{impl} \> \cst{f} \> x \approx \cst{t} $ \\ $x \not\eq y \mathrel\lor \cst{eq}\typeargs{\alpha}\;x\> y \approx \cst{t}$ \\ $x \approx y \mathrel\lor \cst{eq}\typeargs{\alpha}\;x\> y \approx \cst{f}$ \end{center} \end{minipage}% \begin{minipage}[t]{.5\textwidth} \begin{center} $\cst{equiv} \> x \> y \approx \cst{and} \> (\cst{impl} \> x \> y) \> (\cst{impl} \> y \> x) $ \\ $\cst{forall}\typeargs{\alpha}\; (\lambda x.\; \cst{t}) \approx \cst{t}$ \\ $y \approx (\lambda x.\; \cst{t}) \mathrel\lor \cst{forall}\typeargs{\alpha}\; y \approx \cst{f}$ \\ $\cst{exists}\typeargs{\alpha}\; y \approx \cst{not} \> (\cst{forall}\typeargs{\alpha}\; (\lambda x. 
\> \cst{not} \> (y \> x)))$ \\ $y \> x \approx \cst{f} \mathrel\lor y \> (\textsf{choice}\typeargs{\alpha} \> y) \approx \cst{t}$ \\ \end{center} \end{minipage} \kern\belowdisplayskip This axiomatization of Booleans can be used in a prover to support full higher-order logic with or without Hilbert choice, corresponding to the TPTP THF format variants TH0 (monomorphic) \cite{sutcliffe-et-al-2009} and TH1 (polymorphic) \cite{kaliszyk-et-al-2016}. The prover's clausifier would transform the outer first-order skeleton of a formula into a clause and use the axiomatized Booleans within the terms. It would also add the proxy axioms to the clausal problem. As an alternative to this complete axiomatization, Vukmirovi\'c and Nummelin \cite{vukmirovic-nummelin-2020-boolean} present a possibly refutationally incomplete calculus extension with dedicated rules to support Booleans. This approach works better in practice and contributed to Zipperposition's victory at CASC 2020. \section{The Calculus} \label{sec:the-calculus} Our \emph{Boolean-free $\lambda$-superposition calculus} presented here is inspired by the \relax{extensional nonpurifying} Boolean-free \hbox{$\lambda$-free} higher-order superposition calculus described by Bentkamp et al.\ \cite{bentkamp-et-al-2018}. The text of this and the next section is partly based on that paper and the associated journal submission \cite{bentkamp-et-al-lfhosup-arxiv} (with Cruanes's~permission). The central idea is that superposition inferences are restricted to \emph{unapplied} subterms occurring in the first-order outer skeleton of clauses---that is, outside $\lambda$-expressions and outside the arguments of applied variables. We call these ``green subterms.'' Thus, $\cst{g} \approx (\lambda x.\> \cst{f}\>x\>x)$ cannot be used directly to rewrite $\cst{g}\> \cst{a}$ to $\cst{f}\> \cst{a}\> \cst{a}$, because $\cst{g}$ is applied in $\cst{g}\> \cst{a}$. 
A separate inference rule, \infname{ArgCong}, takes care of deriving $\cst{g}\>x \approx \cst{f}\>x\>x$, which can be oriented independently of its parent clause and used to rewrite $\cst{g}\> \cst{a}$ or $\cst{f}\> \cst{a}\> \cst{a}$. \begin{definitionx}[Green positions and subterms] The \emph{green positions} and \emph{green subterms} of a term (i.e., a $\beta\eta$-equivalence class) are defined inductively as follows. A green position is a tuple of natural numbers. For any term $t$, the empty tuple $\varepsilon$ is a green position of $t$, and $t$ is the green subterm of $t$ at position $\varepsilon$. For all symbols $\cst{f}\in\Sigma$, types $\tuple{\tau}$, and terms $\tuple{u}$, if $t$ is a green subterm of $u_i$ at some position $p$ for some~$i$, then $i.p$ is a green position of $\cst{f}\typeargs{\tuple{\tau}}\kern+0.083333em\> \tuple{u}$, and $t$ is the green subterm of $\cst{f}\typeargs{\tuple{\tau}}\>\tuple{u}$ at position~$i.p$. We denote the green subterm of $s$ at the green position $p$ by $s|_p$. \end{definitionx} In $\cst{f}\> (\cst{g}\> \cst{a})\> (y\> \cst{b})\> (\lambda x.\> \cst{h}\> \cst{c}\> (\cst{g}\> x))$, the proper green subterms are $\cst{a}$, $\cst{g}\> \cst{a}$, $y\> \cst{b}$, and $\lambda x.\> \cst{h}\> \cst{c}\> (\cst{g}\> x)$. The last two of these do not look like first-order terms and hence their subterms are not green. \begin{definitionx}[Green contexts] We write $t = \greensubterm{s}{u}_p$ to express that $u$ is a green subterm of~$t$ at the green position $p$ and call $\greensubterm{s}{\phantom{.}}_p$ a \emph{green context}. We omit the subscript $p$ if there are no ambiguities. \end{definitionx} In a $\beta\eta$-normal representative of a green context, the hole never occurs applied. Therefore, inserting a $\beta\eta$-normal term into the context produces another $\beta\eta$-normal term.
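The inductive definition can be made concrete with a short sketch. The term encoding below is a hypothetical illustration: \texttt{("sym", f, args)} stands for a symbol applied to arguments, while applied variables and $\lambda$-expressions are opaque nodes, since their subterms are not green.

```python
# Enumerate the green positions and green subterms of a term.
# Hypothetical encoding: ("sym", f, args) is a symbol applied to arguments;
# ("appvar", ...) and ("lam", ...) nodes are opaque below the root.
def green_subterms(t, pos=()):
    yield pos, t                           # the empty position is green for any term
    if t[0] == "sym":                      # recurse only below symbol applications
        for i, u in enumerate(t[2], start=1):
            yield from green_subterms(u, pos + (i,))
```

On the example term $\cst{f}\> (\cst{g}\> \cst{a})\> (y\> \cst{b})\> (\lambda x.\> \cst{h}\> \cst{c}\> (\cst{g}\> x))$, this yields exactly the green positions $\varepsilon$, $1$, $1.1$, $2$, and $3$, matching the green subterms listed above.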
Another key notion is that of a fluid term: \begin{definitionx}[Fluid terms] A term $t$ is called \emph{fluid} if (1)~$\betanf{t}$ is of the form $y\>\tuple{u}_n$ where $n \geq 1$, or (2)~$\betanf{t}$ is a $\lambda$-expression and there exists a substitution~$\sigma$ such that $\betanf{t\sigma}$ is not a $\lambda$-expression (due to $\eta$-reduction). \end{definitionx} Case~(2) can arise only if $t$ contains an applied variable. Intuitively, fluid terms are terms whose $\eta$-short $\beta$-normal form can change radically as a result of instantiation. For example, $\lambda x.\> y\> \cst{a}\> (z\> x)$ is fluid because applying $\{z \mapsto \lambda x.\>x\}$ makes the $\lambda$ vanish: $(\lambda x.\> y\> \cst{a}\> x) = y\> \cst{a}$. Similarly, $\lambda x.\> \cst{f}\>(y\>x)\>x$ is fluid because $(\lambda x.\> \cst{f}\>(y\>x)\>x)\{y \mapsto \lambda x.\>\cst{a}\} = (\lambda x.\> \cst{f}\>\cst{a}\>x) = \cst{f}\>\cst{a}$. \oursubsection{The Core Inference Rules} \label{ssec:the-core-inference-rules} The calculus is parameterized by a strict and a nonstrict term order as well as a selection function. These concepts are defined below. \begin{definitionx}[Strict ground term order] A \emph{strict ground term order} is a well-founded strict total order $\succ$ on ground terms satisfying the following criteria, where $\succeq$ denotes the reflexive closure of $\succ$: \begin{itemize} \item \emph{green subterm property}:\enskip $\greensubterm{t}{s} \succeq s$; \item \emph{compatibility with green contexts}:\enskip $s' \succ s$ implies $\greensubterm{t}{s'} \succ \greensubterm{t}{s}$. \end{itemize} Given a strict ground term order, we extend it to literals and clauses via the multiset extensions in the standard way \cite[\Section~2.4]{bachmair-ganzinger-1994}. \label{def:strict-ground-term-order} \end{definitionx} Two properties that are not required are \emph{compatibility with $\lambda$-expressions} ($s'\succ s$ implies $(\lambda x. 
\> s') \succ (\lambda x.\> s)$) and \emph{compatibility with arguments} ($s' \succ s$ implies $s'\>\relax{t} \succ s\kern+0.083333em\>\relax{t}$). The latter would even be inconsistent with totality. To see why, consider\pagebreak[2] the symbols $\cst{c} \succ \cst{b} \succ \cst{a}$ and the terms $\lambda x.\> \cst{b}$ and $\lambda x.\> x$. Owing to totality, one of the terms must be larger than the other, say, $(\lambda x.\> \cst{b}) \succ (\lambda x.\> x)$. By compatibility with arguments, we get $(\lambda x.\> \cst{b})\> \cst{c} \succ (\lambda x.\>x)\> \cst{c}$, i.e., $\cst{b} \succ \cst{c}$, a contradiction. A similar line of reasoning applies if $(\lambda x.\> \cst{b}) \prec (\lambda x.\> x)$, using $\cst{a}$ instead of~$\cst{c}$. \begin{definitionx}[Strict term order] A \emph{strict term order} is a relation $\succ$ on terms, literals, and clauses such that its restriction to ground entities is a strict ground term order and such that it is stable under grounding substitutions (i.e., $t \succ s$ implies $t\theta \succ s\theta$ for all substitutions~$\theta$ grounding the entities $t$ and $s$). \label{def:strict-term-order} \end{definitionx} \begin{definitionx}[Nonstrict term order] Given a strict term order $\succ$ and its reflexive closure $\succeq$, a \emph{nonstrict term order} is a relation $\succsim$ on terms, literals, and clauses such that $t \succsim s$ implies $t\theta \succeq s\theta$ for all $\theta$ grounding the entities $t$ and $s$. \label{def:nonstrict-term-order} \end{definitionx} Although we call them orders, a strict term order $\succ$ is not required to be transitive on nonground entities, and a nonstrict term order $\succsim$ does not need to be transitive at all. Normally, $t \succeq s$ should imply $t \succsim s$, but this is not required either. A nonstrict term order~$\succsim$ allows us to be more precise than the reflexive closure $\succeq$ of $\succ$.
For example, we cannot have $y\>\cst{b} \succeq y\>\cst{a}$, because $y\>\cst{b} \not= y\>\cst{a}$ and $y\>\cst{b} \not\succ y\>\cst{a}$ by stability under grounding substitutions (with $\{y \mapsto \lambda x.\>\cst{c}\}$). But we can have $y\>\cst{b} \succsim y\>\cst{a}$ if $\cst{b} \succ \cst{a}$. In practice, the strict and the nonstrict term order should be chosen so that they can compare as many pairs of terms as possible while being computable and reasonably efficient. \begin{definitionx}[Maximality] \label{def:maximal} An element $x$ of a multiset $M$ is $\unrhd$-\emph{maximal} for some relation $\unrhd$ if for all $y \in M$ with $y \unrhd x$, we have $y \unlhd x$. It is \emph{strictly $\unrhd$-maximal} if it is $\unrhd$-maximal and occurs only once in $M$. \end{definitionx} \begin{definitionx}[Selection function] A \emph{selection function} is a function that maps each clause to a subclause consisting of negative literals, which we call the \emph{selected} literals of that clause. A literal $\greensubterm{L}{\,y}$ must not be selected if $y\> \tuple{u}_n$, with $n > 0$, is a $\succeq$-maximal term of the clause. \label{def:sel} \end{definitionx} The restriction on the selection function is needed for our proof, but it is an open question whether it is actually necessary for refutational completeness. Our calculus is parameterized by a strict term order $\succ$, a nonstrict term order $\succsim$, and a selection function $\mathit{HSel}$. The calculus rules depend on the following auxiliary notions. \begin{definitionx}[Eligibility] A literal $L$ is (\emph{strictly}) $\unrhd$-\emph{eligible} \hbox{w.r.t.}\ a substitution $\sigma$ in $C$ for some relation $\unrhd$ if it is selected in $C$ or there are no selected literals in $C$ and $L\sigma$ is (strictly) $\unrhd$-maximal in $C\sigma.$ If $\sigma$ is the identity substitution, we leave it implicit. 
\end{definitionx} \begin{definitionx}[Deep occurrences] A variable \emph{occurs deeply} in a clause $C$ if it occurs inside a $\lambda$-expression or inside an argument of an applied variable. \end{definitionx} For example, $x$ and $z$ occur deeply in $\cst{f}\kern+0.0416667em x \> y \approx y \> x \mathrel\lor z \not\eq (\lambda w.\>z\>\cst{a})$, whereas $y$ does not occur deeply. The purpose of this definition is to capture all variables with an occurrence that corresponds to a position inside a $\lambda$-expression in some ground instances of $C$. The first rule of our calculus is the superposition rule. We regard positive and negative superposition as two cases of a single rule \[\namedinference{Sup} {\overbrace{D' \mathrel\lor { t \approx t'}}^{\vphantom{\cdot}\smash{D}} \hskip1.25em \overbrace{C' \mathrel\lor s\lang\, u\rang \doteq s'}^{\smash{C}}} {(D' \mathrel\lor C' \mathrel\lor s\lang\, t'\rang \doteq s')\sigma}\] where $\doteq$ denotes either $\approx$ or $\not\eq$. The following side conditions apply: \begin{enumerate} \item[1.] $u$ is not fluid;\hfill 2.\enskip $u$ is not a variable deeply occurring in $C$;\hfill\hbox{} \hfill\hbox{} \item[3.] \emph{variable condition}: if $u$ is a variable~$y$, there must exist a grounding substitution~$\theta$ such that $t\sigma\theta \succ t'\kern-0.083333em\sigma\theta$ and $C\sigma\theta \prec C''\sigma\theta$, where $C'' = C\{y\mapsto t'\}$; \item[4.] $\sigma\in\csu(t,u)$;\hfill 5.\enskip $t\sigma \not\precsim t'\kern-0.083333em\sigma$;\hfill 6.\enskip $s\lang\, u\rang\sigma \not\precsim s'\sigma$;\hfill 7.\enskip $C\sigma \not\precsim D\sigma$;\hfill\hbox{} \item[8.] $t \approx t'$ is strictly $\succsim$-eligible in $D$ \hbox{w.r.t.}\ $\sigma$; \item[9.] $s\lang\, u\rang \doteq s'$ is $\succsim$-eligible in $C$ \hbox{w.r.t.}\ $\sigma$, and strictly $\succsim$-eligible if it is positive.
\end{enumerate} There are four main differences with the statement of the standard superposition rule: Contexts $\subterm{s}{~}$ are replaced by green contexts $\greensubterm{s}{\phantom{.}}$. The standard condition $u \notin \mathscr{V}$ is generalized by conditions 2~and~3. Most general unifiers are replaced by complete sets of unifiers. And $\not\preceq$ is replaced by the more precise $\not\precsim$. The second rule is a variant of \infname{Sup} that focuses on fluid green subterms: \[\namedinference{FluidSup} {\overbrace{D' \mathrel\lor t \approx t'}^{\phantom{\cdot}\smash{D}} \hskip1.25em \overbrace{C' \mathrel\lor \greensubterm{s}{u} \doteq s'}^{\smash{C}}} {(D' \mathrel\lor C' \mathrel\lor \greensubterm{s}{z\>t'} \doteq s') \sigma}\] with the following side conditions, in addition to \infname{Sup}'s conditions 5~to~9: \begin{enumerate} \item[1.] $u$ is either a fluid term or a variable deeply occurring in $C$; \item[2.] $z$ is a fresh variable;\hfill 3. $\sigma\in\csu(z\>t{,}\;u)$;\hfill 4.\enskip $(z\>t')\sigma \not= (z\>t)\sigma$.\hfill\hbox{} \end{enumerate} The equality resolution and equality factoring rules are almost identical to their standard counterparts: \begin{align*} &\namedinference{ERes} {\overbrace{C' \mathrel\lor {u \not\eq u'}}^C} {C'\sigma} &&\namedinference{EFact} {\overbrace{C' \mathrel\lor {u'} \approx v' \mathrel\lor {u} \approx v}^C} {(C' \mathrel\lor v \not\eq v' \mathrel\lor u \approx v')\sigma} \end{align*} For \infname{ERes}: $\sigma\in\csu(u,u')$ and $u \not\eq u'$ is $\succsim$-eligible in $C$ \hbox{w.r.t.}\ $\sigma$. For~\infname{EFact}: $\sigma\in\csu(u,u')$, $u\sigma \not\precsim v\sigma$, and $u \approx v$ is $\succsim$-eligible in $C$ \hbox{w.r.t.}\ $\sigma$.
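On first-order terms, the complete sets of unifiers $\csu(u,u')$ in these side conditions collapse to at most one most general unifier. The following sketch computes it for an \infname{ERes}-style literal $u \not\eq u'$; the term encoding is a hypothetical illustration, and the occurs check is omitted for brevity. In general, of course, the calculus requires full higher-order unification rather than this first-order special case.

```python
# First-order syntactic unification: in the first-order fragment, csu(u, u')
# is empty or the singleton {mgu}. Hypothetical encoding: ("var", x) or
# ("sym", f, args). Occurs check omitted for brevity.
def walk(t, subst):
    while t[0] == "var" and t[1] in subst:  # dereference bound variables
        t = subst[t[1]]
    return t

def unify(s, t, subst=None):
    subst = {} if subst is None else subst
    s, t = walk(s, subst), walk(t, subst)
    if s == t:
        return subst
    if s[0] == "var":                       # bind variable to the other side
        return {**subst, s[1]: t}
    if t[0] == "var":
        return {**subst, t[1]: s}
    if s[0] == "sym" and s[1] == t[1] and len(s[2]) == len(t[2]):
        for u, v in zip(s[2], t[2]):        # decompose f(...) = f(...)
            subst = unify(u, v, subst)
            if subst is None:
                return None
        return subst
    return None                             # symbol clash: no unifier
```

For example, resolving the literal $\cst{f}\>x \not\eq \cst{f}\>\cst{a}$ by \infname{ERes} uses the unifier $\{x \mapsto \cst{a}\}$, which this sketch returns.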
Argument congruence, a higher-order concern, is embodied by the rule \[\namedinference{ArgCong} {\overbrace{C' \mathrel\lor s \approx s'}^C} {C'\sigma \mathrel\lor s\sigma\>\tuple{x}_n \approx s'\sigma\>\tuple{x}_n}\] where $\sigma$ is the most general type substitution that ensures well-typedness of the conclusion. In particular, if the result type of $s$ is not a type variable, $\sigma$ is the identity substitution; and if the result type is a type variable, it is instantiated with $\alpha_1 \fun \cdots \fun \alpha_m \fun \beta$, where $\tuple{\alpha}_m$ and $\beta$ are fresh. This yields infinitely many conclusions, one for each $m$. The literal $s \approx s'$ must be strictly $\succsim$-eligible in $C$ \hbox{w.r.t.}\ $\sigma$, and $\tuple{x}_n$ is a nonempty tuple of distinct fresh variables. The rules are complemented by the polymorphic functional extensionality axiom: \[y\>(\cst{diff}\typeargs{\alpha,\beta}\> y\> z)\not\eq z\>(\cst{diff}\typeargs{\alpha,\beta}\> y\> z) \mathrel\lor y \approx z \tag*{\text{(\infname{Ext})}} \] From now on, we will omit the type arguments to $\cst{diff}$ since they can be inferred from the term arguments. \oursubsection{Rationale for the Rules} \label{ssec:rationale-for-the-rules} The calculus realizes the following division of labor: \infname{Sup} and \infname{FluidSup} are responsible for green subterms, which are outside $\lambda$s, \infname{ArgCong} effectively gives access to the remaining positions outside $\lambda$s, and the extensionality axiom takes care of subterms inside $\lambda$s. \begin{examplex} Prefix subterms such as $\cst{g}$ in the term $\cst{g}\>\cst{a}$ are not green subterms and thus cannot be superposed into. \infname{ArgCong} gives us access to those positions. Consider the clauses $\cst{g}\>\cst{a} \not\eq \cst{f}\>\cst{a}$ and $\cst{g} \approx \cst{f}$. An \infname{ArgCong} inference from $\cst{g} \approx \cst{f}$ generates $\cst{g}\>x \approx \cst{f}\>x$. 
This clause can be used for a $\infname{Sup}$ inference into the first clause, yielding $\cst{f}\>\cst{a} \not\eq \cst{f}\>\cst{a}$ and thus $\bot$ by \infname{ERes}. \end{examplex} \begin{examplex} \label{ex:wsup-1} Applied variables give rise to subtle situations with no counterparts in first-order logic. Consider the clauses $\cst{f}\>\cst{a} \approx \cst{c}$ and $\cst{h}\>(y\>\cst{b})\>(y\>\cst{a}) \not\eq \cst{h}\>(\cst{g}\>(\cst{f}\>\cst{b}))\>(\cst{g}\>\cst{c})$, where $\cst{f}\>\cst{a} \succ \cst{c}$. It is easy to see that the clause set is unsatisfiable, by grounding the second clause with $\theta = \{y \mapsto \lambda x.\> \cst{g}\>(\cst{f}\>x)\}$. However, to mimic the superposition inference that can be performed at the ground level, it is necessary to superpose at an imaginary position \emph{below} the applied variable $y$ and yet \emph{above} its argument~$\cst{a}$, namely, into the subterm $\cst{f}\>\cst{a}$ of $\cst{g}\>(\cst{f}\>\cst{a}) = (\lambda x.\> \cst{g}\>(\cst{f}\>x))\>\cst{a} = (y\>\cst{a})\theta$. \infname{FluidSup}'s $z$~variable effectively transforms $\cst{f}\>\cst{a} \approx \cst{c}$ into $z\>(\cst{f}\>\cst{a}) \approx z\>\cst{c}$, whose left-hand side can be unified with $y\>\cst{a}$ by taking $\{y \mapsto \lambda x.\> z\>(\cst{f}\>x)\}$. The resulting clause is $\cst{h}\>(z\>(\cst{f}\>\cst{b}))\>(z\>\cst{c}) \not\eq \cst{h}\>(\cst{g}\>(\cst{f}\>\cst{b}))\>(\cst{g}\>\cst{c})$, from which $\bot$ follows by \infname{ERes}. \end{examplex} \begin{examplex} \label{ex:wsup-2} The clause set consisting of $\cst{f}\>\cst{a} \approx \cst{c}$, $\cst{f}\>\cst{b} \approx \cst{d}$, and $\cst{g}\>\cst{c} \not\eq y\>\cst{a} \mathrel\lor \cst{g}\>\cst{d} \not\eq y\>\cst{b}$ has a similar flavor. \infname{ERes} is applicable on either literal of the third clause, but the computed unifier, $\{y \mapsto \lambda x.\> \cst{g}\>\cst{c}\}$ or $\{y \mapsto \lambda x.\> \cst{g}\>\cst{d}\}$, is not the right one. Again, we need \infname{FluidSup}. 
\end{examplex} \begin{examplex} \label{ex:wsup-3} Third-order clauses containing subterms of the form $y\>(\lambda x.\> t)$ can be even more stupefying. The clause set consisting of $\cst{f}\> \cst{a} \approx \cst{c}$ and $\cst{h}\> (y\> (\lambda x.\> \cst{g}\> (\cst{f}\> x))\> \cst{a})\> y \not\eq \cst{h}\> (\cst{g}\> \cst{c})\> (\lambda w\>x.\> w\>x)$ is unsatisfiable. To see why, apply $\theta = \{y \mapsto \lambda w\>x.\> w\>x\}$ to the second clause, yielding $\cst{h}\> (\cst{g}\> (\cst{f}\> \cst{a}))\> (\lambda w\>x.\> w\>x) \not\eq \cst{h}\> (\cst{g}\> \cst{c})\> (\lambda w\>x.\> w\>x)$. Let $\cst{f}\> \cst{a} \succ \cst{c}$. A \infname{Sup} inference is possible between the first clause and this ground instance of the second one. But at the nonground level, the subterm $\cst{f}\> \cst{a}$ is not clearly localized: $\cst{g}\> (\cst{f}\> \cst{a}) = (\lambda x.\> \cst{g}\> (\cst{f}\> x))\> \cst{a} = (\lambda w\>x.\> w\>x)\> (\lambda x.\> \cst{g}\> (\cst{f}\> x))\> \cst{a} = (y\> (\lambda x.\> \cst{g}\> (\cst{f}\> x))\> \cst{a})\theta$. The \infname{FluidSup} rule can cope with this. One of the unifiers of $z\> (\cst{f}\> \cst{a})$ and $y\> (\lambda x.\> \cst{g}\> (\cst{f}\> x))\> \cst{a}$ will be $\{y \mapsto \lambda w\>x.\> w\>x{,}\; z \mapsto \cst{g}\}$, yielding the clearly unsatisfiable clause $\cst{h}\> (\cst{g}\> \cst{c})\> (\lambda w\>x.\> w\>x) \not\eq \cst{h}\> (\cst{g}\> \cst{c})\> (\lambda w\>x.\> w\>x)$. \end{examplex} \begin{examplex} The \infname{FluidSup} rule is concerned not only with applied variables but also with $\lambda$-expressions that, after substitution, may be $\eta$-reduced to reveal new applied variables or green subterms. Consider the clauses $\cst{g}\> \cst{a} \approx \cst{b}$, $\cst{h}\> (\lambda y.\> x\> y\> \cst{g}\> z) \approx \cst{c}$, and $\cst{h}\>(\cst{f}\> \cst{b}) \not\eq \cst{c}$. 
Applying $\{x \mapsto \lambda y'\> w\> z'.\> \cst{f}\> (w\> \cst{a})\> y' \}$ to the second clause yields $\cst{h}\> (\lambda y.\> (\lambda y'\> w\> z'.\; \cst{f}\> (w\> \cst{a})\> y')\> y\> \cst{g}\> z) \approx \cst{c}$, which $\beta$-reduces to $\cst{h}\;(\lambda y.\> \cst{f}\> (\cst{g}\> \cst{a})\> y) \approx \cst{c}$ and $\beta\eta$-reduces to $\cst{h}\;(\cst{f}\> (\cst{g}\> \cst{a})) \approx \cst{c} $. A \infname{Sup} inference is possible between the first clause and this new ground clause, generating the clause $\cst{h}\>(\cst{f}\> \cst{b}) \approx \cst{c}$. By also considering $\lambda$-expressions, the \infname{FluidSup} rule is applicable at the nonground level to derive this clause. \end{examplex} \begin{examplex} \label{ex:prod-div} Consider the clause set consisting of the facts $C_{\text{succ}} = \cst{succ}\>x \not\eq \cst{zero}$, $C_{\text{div}} = n \approx \cst{zero} \mathrel\lor \cst{div}\;n\;n \approx \cst{one}$, $C_{\text{prod}} = \cst{prod}\; K\;(\lambda k.\>\cst{one}) \approx \cst{one}$, and the negated conjecture $C_{\text{conj}} = \cst{prod}\; K\;(\lambda k.\> \cst{div}\; (\cst{succ}\; k)\; (\cst{succ}\; k)) \not\eq \cst{one}$. Intuitively, the term $\cst{prod}\;K\;(\lambda k.\; u)$ is intended to denote the product $\smash{\prod_{k\in K} u}$, where $k$ ranges over a finite set~$K$ of natural numbers. 
The calculus derives the empty clause as follows: \[ \prftree[r]{\infname{ERes}}{ \prftree[r]{\infname{Sup}} {C_{\text{prod}}\!} { \prftree[r]{ \infname{Sup}}{C_{\text{conj}}} {\trimbox{3.2em 0pt 0em 0em}{% {\prftree[r]{\infname{ERes}}{ \prftree[r]{\infname{Sup}} {\prftree[r]{\infname{ERes}}{ \prftree[r]{\infname{FluidSup}} {C_{\text{div}}}{ \prftree[r] {\infname{Ext}}{}{y\>(\cst{diff}\typeargs{\alpha,\beta}\> y\> z)\not\eq z\>(\cst{diff}\typeargs{\alpha,\beta}\> y\> z) \mathrel\lor y \approx z} }{ \begin{aligned} w&\>(\cst{diff}\typeargs{\alpha,\iota}\>(\lambda k.\>\cst{div}\>(w\>k)\>(w\>k))\>z) \approx \cst{zero} \\ &\mathrel\lor \cst{one} \not\eq z\>(\cst{diff}\typeargs{\alpha,\iota}\> (\lambda k.\>\cst{div}\>(w\>k)\>(w\>k))\> z) \mathrel\lor (\lambda k.\>\cst{div}\>(w\>k)\>(w\>k)) \approx z \end{aligned} } } {{\begin{aligned}\\C_{\text{succ}}\end{aligned}}\quad \begin{aligned} w&\>(\cst{diff}\typeargs{\alpha,\iota}\>(\lambda k.\>\cst{div}\>(w\>k)\>(w\>k))\>(\lambda k.\>\cst{one})) \approx \cst{zero}\\ &\mathrel\lor (\lambda k.\>\cst{div}\>(w\>k)\>(w\>k)) \approx (\lambda k.\>\cst{one}) \end{aligned} }} {\cst{zero} \not\eq \cst{zero} \mathrel\lor (\lambda k.\>\cst{div}\>(\cst{succ}\>k)\>(\cst{succ}\>k)) \approx (\lambda k.\>\cst{one})} } {(\lambda k.\>\cst{div}\>(\cst{succ}\>k)\>(\cst{succ}\>k)) \approx (\lambda k.\>\cst{one})}}}} {\cst{prod}\; K\;(\lambda k.\>\cst{one}) \not\eq \cst{one}}} {\cst{one} \not\eq \cst{one}}} {\bot} \] Since the calculus does not superpose into $\lambda$-expressions, we need to use the extensionality axiom to refute this clause set. 
We perform a \infname{FluidSup} inference into the extensionality axiom with the unifier $\{ \beta \mapsto \iota,\>\allowbreak z' \mapsto \lambda x.\>x,\>\allowbreak n \mapsto w\>(\cst{diff}\typeargs{\alpha,\iota}\> (\lambda k.\>\cst{div}\>(w\>k)\>(w\>k))\> z),\> \allowbreak y \mapsto \lambda k.\>\cst{div}\>(w\>k)\>(w\>k) \} \in \csu(z'\>(\cst{div}\>n\>n){,}\; y\>(\cst{diff}\typeargs{\alpha,\beta}\> y\> z))$. Then we apply \infname{ERes} with the unifier $\{z \mapsto \lambda k.\>\cst{one}\} \in \csu(\cst{one}{,}\; z\>(\cst{diff}\typeargs{\alpha,\iota}\> (\lambda k.\>\cst{div}\>(w\>k)\>(w\>k))\> z))$ to eliminate the negative literal. Next, we perform a $\infname{Sup}$ inference into $C_{\text{succ}}$ with the unifier $\{ \alpha \mapsto \iota,\>\allowbreak w \mapsto \cst{succ},\>\allowbreak x \mapsto \cst{diff}\typeargs{\alpha,\iota}\>\allowbreak(\lambda k.\>\cst{div}\>(w\>k)\>(w\>k))\>(\lambda k.\>\cst{one})\} \in \csu(w\>(\cst{diff}\typeargs{\alpha,\iota}\>(\lambda k.\>\cst{div}\>(w\>k)\>(w\>k))\>(\lambda k.\>\cst{one})),\allowbreak \>\cst{succ}\>x)$. To eliminate the trivial literal, we apply \infname{ERes}. We then apply a \infname{Sup} inference into $C_{\text{conj}}$ and superpose into the resulting clause with $C_{\text{prod}}$. Finally we derive the empty clause by \infname{ERes}. The unifiers in this example were chosen to keep the clauses reasonably small. \end{examplex} Because it gives rise to flex--flex pairs, which are unification constraints where both sides are variable-headed, \infname{FluidSup} can be very prolific. With variable-headed terms on both sides of its maximal literal, the extensionality axiom is another prime source of flex--flex pairs. Flex--flex pairs can also arise in the other rules (\infname{Sup}, \infname{ERes}, and \infname{EFact}). Due to order restrictions and fairness, we cannot postpone solving flex--flex pairs indefinitely. 
Thus, we cannot use Huet's pre-unification procedure \cite{huet-1975} and must instead choose a full unification procedure such as Jensen and Pietrzykowski's \cite{jensen-pietrzykowski-1976}, Snyder and Gallier's \cite{snyder-gallier-1989}, or the procedure that has recently been developed by Vukmirovi\'c, Bentkamp, and Nummelin \cite{vukmirovic-et-al-2020-unif}. On the positive side, optional inference rules can efficiently cover many cases where \infname{FluidSup} or the extensionality axiom would otherwise be needed (\Section~\ref{sec:extensions}), and heuristics can help postpone the explosion. Moreover, flex--flex pairs are not always as bad as their reputation; for example, $y\> \cst{a}\> \cst{b} \UNIF z\> \cst{c}\> \cst{d}$ admits a most general unifier: $\{y \mapsto \lambda w\> x.\> y'\,w\> x\> \cst{c}\>\cst{d}{,}\; z \mapsto y'\, \cst{a}\>\cst{b}\}$. The calculus is a graceful generalization of standard superposition, except for the extensionality axiom. From simple first-order clauses, the axiom can be used to derive clauses containing $\lambda$-expressions, which are useless if the problem is first-order. For instance, the clause $\cst{g}\>x \approx \cst{f}\>x\>x$ can be used for a \infname{FluidSup} inference into the axiom (\infname{Ext}) yielding the clause $w\>t\>(\cst{f}\>t\>t)\not\eq z\>t \mathrel\lor (\lambda u.\>w\>u\>(\cst{g} u)) \approx z$ via the unifier $\{ \alpha \mapsto \iota,\>\allowbreak \beta\mapsto \iota,\>\allowbreak x \mapsto t,\>\allowbreak v \mapsto \lambda u.\> w\>t\>u,\>\allowbreak y \mapsto \lambda u.\>w\>u\>(\cst{g}\>u) \} \in \csu(v\>(\cst{g}\>x),\>y\>(\cst{diff}\typeargs{\alpha,\beta}\>y\>z))$ where $t = \cst{diff}\typeargs{\iota,\iota}\>(\lambda u.\>w\>u\>(\cst{g}\>u))\>z$, the variable $w$ is freshly introduced by unification, and $v$ is the fresh variable introduced by \infname{FluidSup} (named $z$ in the definition of the rule). 
By \infname{ERes}, with the unifier $\{ z \mapsto \lambda u.\> w\>u\>(\cst{f}\>u\>u) \}\in \csu(w\>t\>(\cst{f}\>t\>t),\>z\>t)$, we can then derive $(\lambda u.\> w\>u\>(\cst{g}\>u)) \approx (\lambda u.\> w\>u\>(\cst{f}\>u\>u))$, an equality of two $\lambda$-expressions, although we started with a simple first-order clause. This could be avoided if we could find a way to make the positive literal $y \approx z$ of (\infname{Ext}) larger than the other literal, or to select $y \approx z$ without losing refutational completeness. The literal $y \approx z$ interacts only with green subterms of functional type, which do not arise in first-order clauses. \oursubsection{Soundness} \label{ssec:soundness} To show soundness of the inferences, we need the substitution lemma for our logic: \begin{lemmax}[Substitution lemma] Let $\mathscr{I} = (\III_\mathsf{ty},\mathscr{J},\mathscr{L})$ be a proper interpretation. Then \[\interpret{\tau\rho}{\III_\mathsf{ty}}{\xi} = \interpret{\tau}{\III_\mathsf{ty}}{\xi'}% \text{\quad and\quad}% \interpret{t\rho}{\mathscr{I}}{\xi} = \interpret{t}{\mathscr{I}}{\xi'}\] for all terms $t$, all types $\tau$, and all substitutions $\rho$, where $\xi'(\alpha) = \interpret{\alpha\rho}{\III_\mathsf{ty}}{\xi}$ for all type variables~$\alpha$ and $\xi'(x) = \interpret{x\rho}{\mathscr{I}}{\xi}$ for all term variables $x$. \label{lem:subst-lemma-general} \end{lemmax} \begin{proof} First, we prove that $\interpret{\tau\rho}{\III_\mathsf{ty}}{\xi} = \interpret{\tau}{\III_\mathsf{ty}}{\xi'}$ by induction on the structure of $\tau$. 
If $\tau = \alpha$ is a type variable, \[\interpret{\alpha\rho}{\III_\mathsf{ty}}{\xi} = \xi'(\alpha) = \interpret{\alpha}{\III_\mathsf{ty}}{\xi'}\] If $\tau = \kappa(\tuple{\upsilon})$ for some type constructor $\kappa$ and types $\tuple{\upsilon}$, \[\interpret{\kappa(\tuple{\upsilon})\rho}{\III_\mathsf{ty}}{\xi} = \II_\mathsf{ty}(\kappa)(\interpret{\tuple{\upsilon}\rho}{\III_\mathsf{ty}}{\xi}) \eqIH \II_\mathsf{ty}(\kappa)(\interpret{\tuple{\upsilon}}{\III_\mathsf{ty}}{\xi'}) = \interpret{\kappa(\tuple{\upsilon})}{\III_\mathsf{ty}}{\xi'}\] Next, we prove $\interpret{t\rho}{\mathscr{I}}{\xi} = \interpret{t}{\mathscr{I}}{\xi'}$ by induction on the structure of a $\lambda$-term representative of $t$, allowing arbitrary substitutions $\rho$ in the induction hypothesis. If~$t = y$, then by the definition of the denotation of a variable \[\interpret{y\rho}{\mathscr{I}}{\xi} = \xi'(y) = \interpret{y}{\mathscr{I}}{\xi'}\] If $t = \cst{f}\typeargs{\tuple{\tau}}$, then by the definition of the term denotation \[\interpret{\cst{f}\typeargs{\tuple{\tau}}\rho}{\mathscr{I}}{\xi} = \mathscr{J}(\cst{f},\interpret{\tuple{\tau}\rho}{\III_\mathsf{ty}}{\xi}) \eqIH \mathscr{J}(\cst{f},\interpret{\tuple{\tau}}{\III_\mathsf{ty}}{\xi'}) = \interpret{\cst{f}\typeargs{\tuple{\tau}}}{\mathscr{I}}{\xi'}\] If $t = u\>v$, then by the definition of the term denotation \[\interpret{(u\>v)\rho}{\mathscr{I}}{\xi} = \interpret{u\rho}{\mathscr{I}}{\xi}(\interpret{v\rho}{\mathscr{I}}{\xi}) \eqIH \interpret{u}{\mathscr{I}}{\xi'}(\interpret{v}{\mathscr{I}}{\xi'}) = \interpret{u\>v}{\mathscr{I}}{\xi'}\] If $t = \lambda z.\>u$, let $\rho'(z)=z$ and $\rho'(x)=\rho(x)$ for $x\neq z$.
Using properness of $\mathscr{I}$ in the second and the last step, we have \[\interpret{(\lambda z.\>u)\rho}{\mathscr{I}}{\xi}(a) = \interpret{(\lambda z.\>u\rho')}{\mathscr{I}}{\xi}(a) = \interpret{u\rho'}{\mathscr{I}}{\xi[z\mapsto a]} \eqIH \interpret{u}{\mathscr{I}}{\xi'[z\mapsto a]} = \interpret{\lambda z.\>u}{\mathscr{I}}{\xi'}(a) \tag*{\qed}\] \end{proof} \begin{lemmax}\label{lem:apply-subst} If $\mathscr{I}\models C$ for some interpretation $\mathscr{I}$ and some clause $C$, then $\mathscr{I}\models C\rho$ for all substitutions $\rho$. \end{lemmax} \begin{proof} We have to show that $C\rho$ is true in $\mathscr{I}$ for all valuations $\xi$. Given a valuation $\xi$, define $\xi'$ as in Lemma~\ref{lem:subst-lemma-general}. Then, by Lemma~\ref{lem:subst-lemma-general}, a literal in $C\rho$ is true in $\mathscr{I}$ for $\xi$ if and only if the corresponding literal in $C$ is true in $\mathscr{I}$ for $\xi'$. There must be at least one such literal because $\mathscr{I} \models C$ and hence $C$ is in particular true in $\mathscr{I}$ for $\xi'$. Therefore, $C\rho$ is true in $\mathscr{I}$ for $\xi$. \qed \end{proof} \begin{theoremx}[Soundness] The inference rules \infname{Sup}, \infname{FluidSup}, \infname{ERes}, \infname{EFact}, and \infname{Arg\-Cong} are sound {\upshape(}even without the variable condition and the side conditions on fluidity, deeply occurring variables, order, and eligibility{\upshape)}. \label{lem:soundness} \end{theoremx} \begin{proof} We fix an inference and an interpretation $\mathscr{I}$ that is a model of the premises. We need to show that it is also a model of the conclusion. From the definition of the denotation of a term, it is obvious that congruence holds in our logic, at least for subterms that are not inside a $\lambda$-expression. In particular, it holds for green subterms and for the left subterm $t$ of an application $t\>s$. 
By Lemma~\ref{lem:apply-subst}, $\mathscr{I}$ is a model of the $\sigma$-instances of the premises as well, where $\sigma$ is the substitution used for the inference. Let $\xi$ be a valuation. By making case distinctions on the truth under $\mathscr{I},\xi$ of the literals of the $\sigma$-instances of the premises, using the conditions that $\sigma$ is a unifier, and applying congruence, it follows that the conclusion is true under $\mathscr{I},\xi$. Hence, $\mathscr{I}$ is a model of the conclusion. \qed \end{proof} As in the $\lambda$-free higher-order logic of Bentkamp et al.~\cite{bentkamp-et-al-lfhosup-arxiv}, skolemization is unsound in our logic. As a consequence, axiom (\infname{Ext}) does not hold in all interpretations, but the axiom is consistent with our logic, i.e., there exist models of (\infname{Ext}). \oursubsection{The Redundancy Criterion} \label{ssec:the-redundancy-criterion} A redundant clause is usually defined as a clause whose ground instances are entailed by smaller (\kern-0.0416667em$\prec$) ground instances of existing clauses. This would be too strong for our calculus, as it would make most clauses produced by \infname{ArgCong} redundant. The solution is to base the redundancy criterion on a weaker ground logic---ground monomorphic first-order logic---in which argument congruence and extensionality do not hold. The resulting notion of redundancy gracefully generalizes the standard first-order notion. We employ an encoding $\mathcalx{F}$ to translate ground higher-order terms into ground first-order terms. $\mathcalx{F}$ indexes each symbol occurrence with the type arguments and the number of term arguments. For example, $\floor{\cst{f}\>\cst{a}}=\cst{f}_1(\cst{a}_0)$ and $\floor{\cst{g}\typeargs{\kappa}} = \cst{g}\kern+0.0416667em{}^{\kappa}_{0}$. In addition, $\mathcalx{F}$ conceals $\lambda$-expressions by replacing them with fresh symbols. These measures effectively disable argument congruence and extensionality. 
For example, the clause sets $\{\cst{g} \approx \cst{f}{,}\; \cst{g}\>\cst{a} \not\eq \cst{f}\> \cst{a}\}$ and $\{\cst{b} \approx \cst{a}{,}\; (\lambda x.\; \cst{b}) \not\eq (\lambda x.\; \cst{a})\}$ are unsatisfiable in higher-order logic, but the encoded clause sets $\{\cst{g}_0 \approx \cst{f}_0{,}\allowbreak\; \cst{g}_1(\cst{a}_0) \not\eq \cst{f}_1(\cst{a}_0)\}$ and $\{\cst{b}_0 \approx \cst{a}_0{,}\allowbreak\; \cst{lam}_{\lambda x.\; \cst{b}} \not\eq \cst{lam}_{\lambda x.\; \cst{a}}\}$ are satisfiable in first-order logic, where $\cst{lam}_{\lambda x.\>t}$ is a family of fresh symbols. Given a higher-order signature ($\Sigma_\mathsf{ty},\Sigma)$, we define a ground first-order signature ($\Sigma_\mathsf{ty},\allowbreak\Sigma_{\mathrm{GF}})$ as follows. The type constructors $\Sigma_\mathsf{ty}$ are the same in both signatures, but ${\fun}$ is uninterpreted in first-order logic. For each ground instance $\cst{f}\typeargs{\tuple{\upsilon}} : \tau_1\fun\cdots\fun\tau_n\fun\tau$ of a symbol $\cst{f} \in \Sigma$, % we introduce a first-order symbol $\smash{\cst{f}^{\tuple{\upsilon}}_{\!j}} \in \Sigma_{\mathrm{GF}}$ with argument types~$\tuple{\tau}_{\!j}$ and return type~$\tau_{\!j+1} \fun \cdots \fun \tau_n \fun \tau$, for each $j$. Moreover, for each ground term $\lambda x.\>t$, we introduce a symbol $\cst{lam}_{\lambda x.\>t} \in \Sigma_{\mathrm{GF}}$ of the same type. Thus, we consider three levels of logics:\ the higher-order level ${\mathrm{H}}$ over a given signature ($\Sigma_\mathsf{ty},\Sigma)$, the ground higher-order level ${\mathrm{GH}}$, which is the ground fragment of ${\mathrm{H}}$, and the ground monomorphic first-order level ${\mathrm{GF}}$ over the signature ($\Sigma_\mathsf{ty},\Sigma_{\mathrm{GF}})$ defined above. 
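To make the arity-indexing concrete, the following Python fragment sketches the $\mathcalx{F}$ encoding on ground terms. It is only an illustration: the term representation and the function name are ours and not part of the formal development, and $\lambda$-bodies are abbreviated to strings.

```python
# Illustrative sketch of the F encoding on ground terms (the term
# representation is ours). Each symbol occurrence is indexed with its
# number of term arguments, and lambda-expressions are concealed
# behind fresh symbols lam_<body>.

def encode(t):
    if t[0] == "lam":
        return "lam_{%s}" % t[1]       # conceal the lambda-expression
    _, head, args = t                  # ("app", symbol, term arguments)
    if not args:
        return "%s_0" % head
    return "%s_%d(%s)" % (head, len(args), ", ".join(encode(a) for a in args))

g, f, a = ("app", "g", []), ("app", "f", []), ("app", "a", [])
print(encode(g))                       # g_0
print(encode(("app", "g", [a])))       # g_1(a_0)
print(encode(("lam", "x. b")))         # lam_{x. b}
# Since g_0 and g_1 are distinct first-order symbols, g_0 = f_0 does
# not entail g_1(a_0) = f_1(a_0): argument congruence is disabled.
```

The sketch makes visible why the encoded clause sets above are satisfiable: an unapplied and an applied occurrence of the same symbol are translated to unrelated first-order symbols.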
We use $\TT_\HH$, $\TT_\GH$, and $\TT_\GF$ to denote the respective sets of terms, $\Ty_\HH$, $\Ty_\GH$, and $\Ty_\GF$ to denote the respective sets of types, and $\CC_\HH$, $\CC_\GH$, and $\CC_\GF$ to denote the respective sets of clauses. Each of the three levels has an entailment relation $\models$. A clause set $N_1$ entails a clause set $N_2$, denoted $N_1 \models N_2$, if every model of $N_1$ is also a model of $N_2$. For ${\mathrm{H}}$ and ${\mathrm{GH}}$, we use higher-order models; for ${\mathrm{GF}}$, we use first-order models. This machinery may seem excessive, but it is essential to define redundancy of clauses and inferences properly, and it will play an important role in the refutational completeness proof (\Section~\ref{sec:refutational-completeness}). The three levels are connected by two functions ${\mathcalx{G}}$ and $\mathcalx{F}$: \begin{definitionx}[Grounding function $\boldsymbol{{\mathcalx{G}}}$ on terms and clauses] The grounding function ${\mathcalx{G}}$ maps terms $t \in \TT_\HH$ to the set of their ground instances---i.e., the set of all $t\theta \in \TT_\GH$ where $\theta$ is a substitution. It also maps clauses $C\in\CC_\HH$ to the set of their ground instances---i.e., the set of all $C\theta \in \CC_\GH$ where $\theta$ is a substitution. \end{definitionx} \begin{definitionx}[Encoding $\boldsymbol{\mathcalx{F}}$ on terms and clauses] The encoding $\mathcalx{F} : \TT_\GH \rightarrow \TT_\GF$ is recursively defined as \begin{align*} \floor{\lambda x.\>t} = \cst{lam}_{\lambda x.\>t} &&& \floor{\cst{f}\typeargs{\tuple{\upsilon}}\> \tuple{s}_{\!j}} = \cst{f}^{\tuple{\upsilon}}_{\!j} (\floor{\tuple{s}_{\!j}}) \end{align*} using $\eta$-short $\beta$-normal representatives of terms. The encoding $\mathcalx{F}$ is extended to map from $\CC_\GH$ to $\CC_\GF$ by mapping each literal and each side of a literal individually. 
\end{definitionx} Schematically, the three levels are connected as follows: \[ \begin{tikzpicture}[level distance=13em] \node[align=center]{${\mathrm{H}}$\\higher-order}[grow'=right] child { node[align=center]{${\mathrm{GH}}$\\ground higher-order} child { node[align=center]{${\mathrm{GF}}$\\ground first-order} edge from parent[->] node[above] {$\mathcalx{F}$} } edge from parent[->] node[above] {${\mathcalx{G}}$} }; \end{tikzpicture} \] The mapping $\mathcalx{F}$ is clearly bijective. Using the inverse mapping, the order $\succ$ can be transferred from $\TT_\GH$ to $\TT_\GF$ and from $\CC_\GH$ to $\CC_\GF$ by defining $t \succ s$ as $\ceil{t} \succ \ceil{s}$ and $C \succ D$ as $\ceil{C} \succ \ceil{D}$. The property that $\succ$ on clauses is the multiset extension of $\succ$ on literals, which in turn is the multiset extension of $\succ$ on terms, is maintained because $\mathcalx{F}^{-1}$ maps the multiset representations elementwise. For example, let $C = y\>\cst{b} \approx y\>\cst{a} \lor y \not\eq \cst{f}\>\cst{a}\in\CC_\HH$. Then ${\mathcalx{G}}(C)$ contains, among many other clauses, $C\theta = \cst{f}\>\cst{b}\>\cst{b} \approx \cst{f}\>\cst{a}\>\cst{a} \lor (\lambda x.\>\cst{f}\>x\>x) \not\eq \cst{f}\>\cst{a}\in\CC_\GH$, where $\theta = \{y \mapsto \lambda x.\>\cst{f}\>x\>x\}$. On the ${\mathrm{GF}}$ level, this clause corresponds to $\floor{C\theta} = \cst{f}_2(\cst{b}_0,\cst{b}_0) \approx \cst{f}_2(\cst{a}_0,\cst{a}_0) \lor \cst{lam}_{\lambda x.\>\cst{f}\>x\>x} \not\eq \cst{f}_1(\cst{a}_0)\in\CC_\GF$. A key property of $\mathcalx{F}$ is that green subterms in $\TT_\GH$ correspond to subterms in $\TT_\GF$. This allows us to show that well-foundedness, totality on ground terms, compatibility with contexts, and the subterm property hold for $\succ$ on $\TT_\GF$. \begin{lemmax}\label{lem:subterm-correspondence1} Let $s,t\in\TT_\GH$. We have $\floor{\greensubterm{t}{s}_p} = \subterm{\floor{t}}{\floor{s}}_p$. 
In other words, $s$ is a green subterm of $t$ at position $p$ if and only if $\floor{s}$ is a subterm of $\floor{t}$ at position $p$. \end{lemmax} \begin{proof} Analogous to Lemma~3.13 of Bentkamp et al.~\cite{bentkamp-et-al-lfhosup-arxiv}. \qed \end{proof} \begin{lemmax}\label{lem:order-prop-transfer} Well-foundedness, totality, compatibility with contexts, and the subterm property hold for $\succ$ in $\TT_\GF$. \end{lemmax} \begin{proof} Analogous to Lemma~3.15 of Bentkamp et al.~\cite{bentkamp-et-al-lfhosup-arxiv}, using Lemma~\ref{lem:subterm-correspondence1}. \qed \end{proof} The saturation procedures of superposition provers aggressively delete clauses that are strictly subsumed by other clauses. A clause $C$ \emph{subsumes}~$D$ if there exists a substitution $\sigma$ such that $C\sigma \subseteq D$. A clause $C$ \emph{strictly subsumes}~$D$ if $C$ subsumes $D$ but $D$ does not subsume $C$. For example, $x \approx \cst{c}$ strictly subsumes both $\cst{a} \approx \cst{c}$ and $\cst{b} \not\eq \cst{a} \mathrel\lor x \approx \cst{c}$. The proof of refutational completeness of resolution and superposition provers relies on the well-foundedness of the strict subsumption relation. Unfortunately, this property does not hold for higher-order logic, where $\cst{f}\>x\>x \approx \cst{c}$ is strictly subsumed by $\cst{f}\>(x\>\cst{a})\>(x\>\cst{b}) \approx \cst{c}$, which is strictly subsumed by $\cst{f}\>(x\>\cst{a}\>\cst{a}')\>(x\>\cst{b}\>\cst{b}') \approx \cst{c}$, and so on. To prevent such infinite chains, we use a well-founded partial order $\sqsupset$ on $\CC_\HH$. 
We can define $\sqsupset$ as ${\mathrel{\raisebox{+0.8pt}{\Large\rlap{\kern0.5pt$\cdot$}}}\gtrsim} \mathrel\cap {>_\text{size}}$, where $\mathrel{\raisebox{+0.8pt}{\Large\rlap{\kern0.5pt$\cdot$}}}\gtrsim$ stands for ``subsumed by'' and $D >_\text{size} C$ if either $\mathit{size}(D) > \mathit{size}(C)$ or $\mathit{size}(D) = \mathit{size}(C)$ and $D$ contains fewer distinct variables than $C$; the $\mathit{size}$ function is some notion of syntactic size, such as the number of constants and variables contained in a clause. This yields for instance $\cst{a} \mathbin\approx \cst{c} \sqsupset x \mathbin\approx \cst{c}$ and $\cst{f}\>(x\>\cst{a}\>\cst{a}) \mathbin\approx \cst{c} \sqsupset \cst{f}\>(y\>\cst{a}) \mathbin\approx \cst{c}$. To justify the deletion of subsumed clauses, we set up our redundancy criterion to cover subsumption, following Waldmann et al.~\cite{waldmann-et-al-2020-saturation}. We define the sets of redundant clauses \hbox{w.r.t.}\ a given clause set as follows: \begin{itemize} \item Given $C\in\CC_\GF$ and $N\subseteq\CC_\GF$, let $C\in\mathit{GFRed}_{\mathrm{C}}(N)$ if $\{D \in N \mid D \prec C\}\models C$. \item Given $C\in\CC_\GH$ and $N\subseteq\CC_\GH$, let $C\in\mathit{GHRed}_{\mathrm{C}}(N)$ if $\floor{C} \in \mathit{GFRed}_{\mathrm{C}}(\floor{N})$. \item Given $C\in\CC_\HH$ and $N\subseteq\CC_\HH$, let $C\in{\mathit{HRed}}_{\mathrm{C}}(N)$ if for every $D \in {\mathcalx{G}}(C)$, we have $D \in \mathit{GHRed}_{\mathrm{C}}({\mathcalx{G}}(N))$ or there exists $C' \in N$ such that $C \sqsupset C'$ and $D \in {\mathcalx{G}}(C')$. 
\end{itemize} For example, $(\cst{h}\>\cst{g})\>x \approx (\cst{h}\>\cst{f})\>x$ is redundant \hbox{w.r.t.}\ $\cst{g}\approx \cst{f}$, but $\cst{g}\>x \approx \cst{f}\>x$ and $(\lambda x.\>\cst{g}) \approx (\lambda x.\>\cst{f})$ are not, because $\mathcalx{F}$ translates an unapplied $\cst{g}$ to $\cst{g}_0$, whereas an applied $\cst{g}$ is translated to $\cst{g}_1$ and the expression $\lambda x.\>\cst{g}$ is translated to $\cst{lam}_{\lambda x.\>\cst{g}}$. These different translations prevent entailment on the ${\mathrm{GF}}$ level. For an example of subsumption, we assume that $\cst{a} \mathbin\approx \cst{c} \sqsupset x \mathbin\approx \cst{c}$ holds, for instance using the above definition of $\sqsupset$. Then $\cst{a} \mathbin\approx \cst{c}$ is redundant \hbox{w.r.t.}\ $x \mathbin\approx \cst{c}$. Along with the three levels of logics, we consider three inference systems% :\ % $\mathit{HInf}$, $\mathit{GHInf}$, and $\mathit{GFInf}$. $\mathit{HInf}$ is the inference system described in \Section~\ref{ssec:the-core-inference-rules}. For uniformity, we regard the extensionality axiom as a premise-free inference rule \infname{Ext} whose conclusion is axiom~(\infname{Ext}). The rules of $\mathit{GHInf}$ include \infname{Sup}, \infname{ERes}, and \infname{EFact} from $\mathit{HInf}$, but with the restriction that premises and conclusion are ground and with all references to $\succsim$ replaced by $\succeq$. In addition, $\mathit{GHInf}$ contains a premise-free rule \infname{GExt} whose infinitely many conclusions are the ground instances of (\infname{Ext}), and the following ground variant of \infname{ArgCong}: \[\namedinference{GArgCong} {C' \mathrel\lor s \approx s'} {C' \mathrel\lor s\>\tuple{u}_n \approx s'\>\tuple{u}_n}\] where $s \approx s'$ is strictly $\succeq$-eligible in $C' \mathrel\lor s \approx s'$ and $\tuple{u}_n$ is a nonempty tuple of ground terms. 
$\mathit{GFInf}$ contains all \infname{Sup}, \infname{ERes}, and \infname{EFact} inferences from $\mathit{GHInf}$ translated by $\mathcalx{F}$. It coincides with standard first-order superposition. \looseness=-1 Each of the three inference systems is parameterized by a selection function. For $\mathit{HInf}$, we globally fix one selection function $\mathit{HSel}$. For $\mathit{GHInf}$ and $\mathit{GFInf}$, we need to consider different selection functions. We write $\mathit{GHInf}^\mathit{GHSel}$ for $\mathit{GHInf}$ and $\mathit{GFInf}^\mathit{GFSel}$ for $\mathit{GFInf}$ to make the dependency on the respective selection functions $\mathit{GHSel}$ and $\mathit{GFSel}$ explicit. Let ${\mathcalx{G}}(\mathit{HSel})$ denote the set of all selection functions on~$\CC_\GH$ such that for each clause $C\in\CC_\GH$, there exists a clause $D\in\CC_\HH$ with $C\in{\mathcalx{G}}(D)$ and corresponding selected literals. For each selection function $\mathit{GHSel}$ on $\CC_\GH$, via the bijection $\mathcalx{F}$, we obtain a corresponding selection function on $\CC_\GF$, which we denote by $\floor{\mathit{GHSel}}$. We extend the functions $\mathcalx{F}$ and ${\mathcalx{G}}$ to inferences: \begin{notationx} Given an inference $\iota$, we write $\mathit{prems}(\iota)$ for the tuple of premises, $\mathit{mprem}(\iota)$ for the main (i.e., rightmost) premise, and $\mathit{concl}(\iota)$ for the conclusion. \end{notationx} \begin{definitionx} [Encoding $\mathcalx{F}$ on inferences]\, Given a \infname{Sup}, \infname{ERes}, or \infname{EFact} inference $\iota \in \mathit{GHInf}$, let $\floor{\iota}\in\mathit{GFInf}$ denote the inference defined by $\mathit{prems}(\floor{\iota}) = \floor{\mathit{prems}(\iota)}$ and $\mathit{concl}(\floor{\iota}) = \floor{\mathit{concl}(\iota)}$.
\end{definitionx} \begin{definitionx} [Grounding function ${\mathcalx{G}}$ on inferences]\, Given an inference $\iota\in\mathit{HInf}$, and a selection function $\mathit{GHSel}\in{\mathcalx{G}}(\mathit{HSel})$, we define the set ${\mathcalx{G}}^\mathit{GHSel}(\iota)$ of ground instances of $\iota$ to be all inferences $\iota'\in\mathit{GHInf}^\mathit{GHSel}$ such that $\mathit{prems}(\iota') = \mathit{prems}(\iota)\theta$ and $\mathit{concl}(\iota') = \mathit{concl}(\iota)\theta$ for some grounding substitution $\theta$. \end{definitionx} This will map \infname{Sup} and \infname{FluidSup} to \infname{Sup}, \infname{EFact} to \infname{EFact}, \infname{ERes} to \infname{ERes}, \infname{Ext} to \infname{GExt}, and \infname{Arg\-Cong} to \infname{GArgCong} inferences, but it is also possible that ${\mathcalx{G}}^\mathit{GHSel}(\iota)$ is the empty set for some inferences $\iota$. We define the sets of redundant inferences \hbox{w.r.t.}\ a given clause set as follows: \begin{itemize} \item Given $\iota\in\mathit{GFInf}^\mathit{GFSel}$ and $N\subseteq\CC_\GF$, let $\iota\in\mathit{GFRed}_{\mathrm{I}}^\mathit{GFSel}(N)$ if $\mathit{prems}(\iota) \mathrel\cap \mathit{GFRed}_{\mathrm{C}}(N) \not= \varnothing$ or $\{D \in N \mid D \prec \mathit{mprem}(\iota)\} \models \mathit{concl}(\iota)$. \item Given $\iota\in\mathit{GHInf}^\mathit{GHSel}$ and $N\subseteq\CC_\GH$, let $\iota\in\mathit{GHRed}_{\mathrm{I}}^\mathit{GHSel}(N)$ if \begin{itemize} \item $\iota$ is not a \infname{GArgCong} or \infname{GExt} inference and $\floor{\iota}\in\smash{\mathit{GFRed}_{\mathrm{I}}^{\floor{\mathit{GHSel}}}}(\floor{N})$; or \item $\iota$ is a \infname{GArgCong} or \infname{GExt} inference and $\mathit{concl}(\iota)\in N\mathrel\cup\mathit{GHRed}_{\mathrm{C}}(N)$. 
\end{itemize} \item Given $\iota\in\mathit{HInf}$ and $N\subseteq\CC_\HH$, let $\iota\in\mathit{HRed}_{\mathrm{I}}(N)$ if ${\mathcalx{G}}^\mathit{GHSel}(\iota)\subseteq\mathit{GHRed}_{\mathrm{I}}({\mathcalx{G}}(N))$ for all $\mathit{GHSel}\in{\mathcalx{G}}(\mathit{HSel})$. \end{itemize} Occasionally, we omit the selection function in the notation when it is irrelevant. A clause set $N$ is \emph{saturated} \hbox{w.r.t.}\ an inference system and the inference component $\mathit{Red}_{\mathrm{I}}$ of a redundancy criterion if every inference from clauses in $N$ is in~$\mathit{Red}_{\mathrm{I}}(N).$ \oursubsection{Simplification Rules} \label{ssec:simplification-rules} The redundancy criterion $(\mathit{HRed}_{\mathrm{I}}, {\mathit{HRed}}_{\mathrm{C}})$ is strong enough to support most of the simplification rules implemented in Schulz's first-order prover E \cite[Sections 2.3.1~and~2.3.2]{schulz-2002-brainiac}, some only with minor adaptations. Deletion of duplicated literals, deletion of resolved literals, syntactic tautology deletion, negative simplify-reflect, and clause subsumption adhere to our redundancy criterion. Positive simplify-reflect and equality subsumption are supported by our criterion if they are applied in green contexts $\greensubterm{u}{\phantom{.}}$ instead of arbitrary contexts $u[\phantom{.}]$. Semantic tautology deletion can be applied as well, but we must use the entailment relation of the ${\mathrm{GF}}$ level---i.e., only rewriting in green contexts can be used to establish the entailment. Similarly, rewriting of positive and negative literals (demodulation) can only be applied in green contexts. Moreover, for positive literals, the rewriting clause must be smaller than the rewritten clause---a condition that is also necessary with the standard first-order redundancy criterion but not always fulfilled by Schulz's rule.
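As an illustration of the green-context restriction on demodulation (the example is ours): given the unit clause $\cst{b} \approx \cst{a}$ with $\cst{b} \succ \cst{a}$, demodulation may rewrite $\cst{f}\>\cst{b} \approx \cst{c}$ to $\cst{f}\>\cst{a} \approx \cst{c}$, because $\cst{b}$ occurs at a green position of $\cst{f}\>\cst{b}$ and $\{\floor{\cst{b} \approx \cst{a}},\allowbreak\; \floor{\cst{f}\>\cst{a} \approx \cst{c}}\} \models \floor{\cst{f}\>\cst{b} \approx \cst{c}}$ by first-order congruence. By contrast, it may not rewrite $(\lambda x.\>\cst{b}) \approx \cst{c}$, because the body of a $\lambda$-expression is not a green subterm: $\floor{\lambda x.\>\cst{b}} = \cst{lam}_{\lambda x.\>\cst{b}}$ is unrelated to $\cst{b}_0$ on the ${\mathrm{GF}}$ level, so the required entailment fails.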
As for destructive equality resolution, even in first-order logic the rule cannot be justified with the standard redundancy criterion, and it is unclear whether it preserves refutational completeness. \oursubsection{A Derived Term Order} \label{ssec:a-derived-term-order} We stated some requirements on the term orders $\succ$ and $\succsim$ in \Section~\ref{ssec:the-core-inference-rules} but have not shown how to fulfill them. To derive a suitable strict term order $\succ$, we propose to encode $\eta$-short $\beta$-normal forms into untyped first-order terms and apply an order~$\fosucc$ on first-order terms such as the Knuth--Bendix order \cite{knuth-bendix-1970} or the lexicographic path order \cite{kamin-levy-1980-cannotfind}. The encoding, denoted by $\encOonly$, indexes symbols with their number of term arguments, similarly to the $\mathcalx{F}$ encoding. Unlike the $\mathcalx{F}$ encoding, $\encOonly$ translates $\lambda x \mathbin:\nobreak \tau.\; t$ to $\cst{lam}(\encO{\tau},\encO{t})$ and uses De Bruijn \cite{de-bruijn-1972} symbols to represent bound variables. The $\encOonly$ encoding replaces fluid terms~$t$ by fresh variables~$\zof{t}$ and maps type arguments to term arguments, while erasing any other type information. For example, $\encO{\lambda x \mathbin: \kappa.\> \cst{f}\>(\cst{f}\>(\cst{a}\typeargs{\kappa}))\>(y\>\cst{b}) } = \cst{lam}(\kappa, \cst{f}_2(\cst{f}_1(\cst{a}_0(\kappa)), \zof{y \> \cst{b}}))$. The use of De Bruijn indices and the monolithic encoding of fluid terms ensure stability under both $\alpha$-renaming and substitution. \begin{definitionx}[Encoding $\boldsymbol{\encOonly}$] Given a signature $(\Sigma_\mathsf{ty},\allowbreak\Sigma)$, $\encOonly$ encodes types and terms as terms over the untyped first-order signature $\Sigma_\mathsf{ty} \uplus \{\cst{f}_k \mid \cst{f}\in\Sigma,\>k\in\nobreak\mathbb{N}\} \uplus \{\cst{lam}\}\uplus \{\smash{\cst{db}^i_k}\mid i,k\in\nobreak\mathbb{N}\}$.
We reuse higher-order type variables as term variables in the target untyped first-order logic. Moreover, let $\zof{t}$ be an untyped first-order variable for each higher-order term $t$. The auxiliary function $\encB{x}{t}$ replaces each free occurrence of the variable~$x$ by a symbol $\cst{db}^i$, where $i$ is the number of $\lambda$-expressions surrounding the variable occurrence. The type-to-term version of $\encOonly$ is defined by $\encO{\alpha} = \alpha$ and $\encO{\kappa(\tuple{\tau})} = \kappa(\encO{\tuple{\tau}})$. The term-to-term version is defined by \[\encO{t} = \begin{cases} \zof{t} & \text{if $t = x$ or $t$ is fluid} \\ \cst{lam}(\encO{\tau}, \encO{\encB{x}{u}}) & \text{if $t = (\lambda x \mathbin: \tau.\; u)$ and $t$ is not fluid} \\ \cst{f}_k(\encO{\tuple{\tau}}, \encO{\tuple{u}_k}) & \text{if $t = \cst{f}\typeargs{\tuple{\tau}}\>\tuple{u}_k$} \end{cases}\]% \end{definitionx} For example, let $s = \lambda y.\>\cst{f}\> y\> (\lambda w.\> \cst{g}\>(y\> w))$ where $y$ has type $\kappa \fun \kappa$ and $w$ has type $\kappa$. We have $\encB{y}{\cst{f}\> y\> (\lambda w.\> \cst{g}\>(y\> w))} = \cst{f}\> \cst{db}^0 (\lambda w.\> \cst{g}\> (\cst{db}^1 \> w))$ and $\encB{w}{\cst{g}\> (\cst{db}^1 \> w)} = \cst{g}\> (\cst{db}^1 \> \cst{db}^0)$. Neither $s$ nor $\lambda w.\> \cst{g}\>(y\> w)$ is fluid. Hence, we have $\encO{s} = \cst{lam}({\fun}(\kappa,\kappa),\allowbreak \cst{f}_2( \cst{db}^0_0, \cst{lam}(\kappa, \cst{g}_1(\cst{db}^1_1(\cst{db}^0_0)))))$. \begin{definitionx}[Derived strict term order] Let the strict term order derived from $\fosucc$ be $\lsucc$ where $t \lsucc s$ if $\encO{t} \fosucc \encO{s}$. \end{definitionx} We will show that the derived $\lsucc$ fulfills all properties of a strict term order (Definition \ref{def:strict-term-order}) if $\fosucc$ fulfills the corresponding properties on first-order terms. For the nonstrict term order $\succsim$, we can use the reflexive closure $\lsucceq$ of $\lsucc$.
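The mechanics of $\encOonly$ can be sketched in a few lines of Python. This is only an illustration under simplifying assumptions of ours: the term datatype is ad hoc, fluid terms are marked explicitly rather than detected, and De Bruijn symbols are omitted because the chosen example contains no bound-variable occurrences.

```python
# Illustrative sketch of the O encoding on a simplified term
# representation (the datatype is ours). Type arguments become term
# arguments, each symbol is indexed with its number of term arguments,
# lambdas become lam(type, body), and fluid terms collapse into a
# single fresh variable z_<term>.

def encode(t):
    kind = t[0]
    if kind in ("var", "fluid"):
        return "z_{%s}" % t[1]          # variables and fluid terms
    if kind == "lam":
        _, ty, body = t
        return "lam(%s, %s)" % (ty, encode(body))
    _, head, tyargs, args = t           # ("sym", name, type args, term args)
    parts = list(tyargs) + [encode(a) for a in args]
    indexed = "%s_%d" % (head, len(args))
    return indexed if not parts else "%s(%s)" % (indexed, ", ".join(parts))

# encO{lambda x : kappa. f (f a<kappa>) (y b)} from the example above:
a = ("sym", "a", ["kappa"], [])
t = ("lam", "kappa",
     ("sym", "f", [], [("sym", "f", [], [a]), ("fluid", "y b")]))
print(encode(t))  # lam(kappa, f_2(f_1(a_0(kappa)), z_{y b}))
```

Two encoded terms can then be compared with any first-order order such as KBO or LPO, which is exactly how the derived order $\lsucc$ is obtained.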
\begin{lemmax} \label{lem:lsucc-preserves-lfsucc-ground-properties} Let $\fosucc$ be a strict partial order on first-order terms and $\lsucc$ the derived term order on $\beta\eta$-equivalence classes. If the restriction of $\fosucc$ to ground terms enjoys well-foundedness, totality, the subterm property, and compatibility with contexts {\upshape(}\hbox{w.r.t.}\ first-order terms{\upshape)}, the restriction of $\lsucc$ to ground terms enjoys well-foundedness, totality, the green subterm property, and compatibility with green contexts {\upshape(}\hbox{w.r.t.}\ $\beta\eta$-equivalence classes{\upshape)}. \end{lemmax} \begin{proof} Transitivity and irreflexivity of $\fosucc$ imply transitivity and irreflexivity of $\lsucc$. \medskip \noindent \textsc{Well-foundedness}:\enskip If there existed an infinite % chain $t_1 \lsucc t_2 \lsucc \cdots$ of ground terms, there would also be the chain $\encO{t_1} \fosucc \encO{t_2} \fosucc \cdots$, contradicting the well-foundedness of $\fosucc$ on ground $\lambda$-free terms. \medskip \noindent \textsc{Totality:}\enskip By ground totality of $\fosucc$, for any ground terms $t$ and $s$ we have $\encO{t} \fosucc \encO{s}$, $\encO{t} \lfprec \encO{s}$, or $\encO{t} = \encO{s}$. In the first two cases, it follows that $t \lsucc s$ or $t\lprec s$. In the last case, it follows that $t = s$ because $\encOonly$ is clearly injective. \medskip \noindent \textsc{Green subterm property:}\enskip Let $s$ be a term. We show that $s \lsucceq s|_p$ by induction on $p$, where $s|_p$ denotes the green subterm at position $p$. If $p = \varepsilon$, this is trivial. If $p = p'.i$, we have $s \lsucceq s|_{p'}$ by the induction hypothesis. Hence, it suffices to show that $s|_{p'} \lsucceq s|_{p'.i}$. From the existence of the position $p'.i$, we know that $s|_{p'}$ must be of the form $s|_{p'} = \cst{f}\typeargs{\tuple{\tau}}\>\tuple{u}_k$. Then $s|_{p'.i} = u_i$. 
The encoding yields $\encO{s|_{p'}} = \cst{f}_k(\encO{\tuple{\tau}},\encO{\tuple{u}_k})$ and hence $\encO{s|_{p'}} \fosucceq \encO{s|_{p'.i}}$ by the ground subterm property of $\fosucc$. Hence, $s|_{p'} \lsucceq s|_{p'.i}$ and thus $s \lsucceq s|_{p}$. \medskip \noindent \textsc{Compatibility with green contexts:}\enskip By induction on the depth of the context, it suffices to show that $t \lsucc s$ implies $\cst{f}\typeargs{\tuple{\tau}}\>\tuple{u}\>t\>\tuple{v} \lsucc \cst{f}\typeargs{\tuple{\tau}}\>\tuple{u}\>s\>\tuple{v}$ for all $t$, $s$, $\cst{f}$, $\tuple{\tau}$, $\tuple{u}$, and $\tuple{v}$. This amounts to showing that $\encO{t} \fosucc \encO{s}$ implies $\encO{\cst{f}\typeargs{\tuple{\tau}}\>\tuple{u}\>t\>\tuple{v}} =\cst{f}_k(\encO{\tuple{\tau}},\encO{\tuple{u}},\encO{t},\encO{\tuple{v}}) \fosucc \cst{f}_k(\encO{\tuple{\tau}},\encO{\tuple{u}},\encO{s},\encO{\tuple{v}}) =\encO{\cst{f}\typeargs{\tuple{\tau}}\>\tuple{u}\>s\>\tuple{v}}$, which follows directly from ground compatibility of $\fosucc$ with contexts and the induction hypothesis. \qed \end{proof} \begin{lemmax} \label{lem:lsucc-preserves-lfsucc-stability-under-ground-subst} Let $\fosucc$ be a strict partial order on first-order terms. If $\fosucc$ is stable under grounding substitutions {\upshape(}\hbox{w.r.t.}\ first-order terms{\upshape)}, the derived term order $\lsucc$ is stable under grounding substitutions {\upshape(}\hbox{w.r.t.}\ $\beta\eta$-equivalence classes{\upshape)}. \end{lemmax} \begin{proof} Assume $s \lsucc s'$ for some terms $s$ and $s'$. Let $\theta$ be a higher-order substitution grounding $s$ and $s'$. We must show $s\theta \lsucc s'\theta$. We will define a first-order substitution $\rho$ grounding $\encO{s}$ and $\encO{s'}$ such that $\encO{s}\rho = \encO{s\theta}$ and $\encO{s'}\rho = \encO{s'\theta}$. Since $s \lsucc s'$, we have $\encO{s} \fosucc \encO{s'}$. By stability of $\fosucc$ under grounding substitutions, $\encO{s}\rho \fosucc \encO{s'}\rho$. 
It follows that $\encO{s\theta} \fosucc \encO{s'\theta}$ and hence $s\theta \lsucc s'\theta$. We define the first-order substitution $\rho$ as $\alpha\rho = \alpha\theta$ for type variables $\alpha$ and $\zof{u}\rho = \encO{u\theta}$ for terms $u$. Strictly speaking, the domain of a substitution must be finite, so we restrict this definition of $\rho$ to the finitely many variables that occur in the computation of $\encO{s}$ and $\encO{s'}$. Clearly $\encO{\tau}\rho = \encO{\tau\theta}$ for all types $\tau$ occurring in the computation of $\encO{s}$ and $\encO{s'}$. Moreover, $\encO{t}\rho = \encO{t\theta}$ for all $t$ occurring in the computation of $\encO{s}$ and $\encO{s'}$, which we show by induction on the definition of the encoding. If $t=x$ or if $t$ is fluid, $\encO{t}\rho = \zof{t}\rho = \encO{t\theta}$. If $t = \cst{f}\typeargs{\tuple{\tau}}\>\tuple{u}$, then $\encO{t}\rho = \cst{f}_k(\encO{\tuple{\tau}}\rho,\encO{\tuple{u}}\rho) \eqIH \cst{f}_k(\encO{\tuple{\tau}\theta},\encO{\tuple{u}\theta}) = \encO{\cst{f}\typeargs{\tuple{\tau}\theta}\>(\tuple{u}\theta)} = \encO{t\theta}$. If $t = (\lambda x\mathbin\oftype\tau.\;u)$ and $t$ is not fluid, then $\encO{t}\rho = \cst{lam}(\encO{\tau}\rho,\allowbreak\encO{\encB{x}{u}}\rho) \eqIH \cst{lam}(\encO{\tau\theta},\allowbreak\encO{\encB{x}{u}\theta}) = \cst{lam}(\encO{\tau\theta},\allowbreak\encO{\encB{x}{u}\theta[x\mapsto x]}) = \encO{\lambda x\mathbin\oftype\tau\theta.\;u\theta[x\mapsto x]} = \encO{(\lambda x\mathbin\oftype\tau.\;u)\theta} = \encO{t\theta}$. \qed \end{proof} \begin{notyet} Starting with a $\lambda$-free higher-order term order $\fosucc$, we can also derive a relation $\lsuccsim$ which, unlike the reflexive closure $\lsucceq$ of $\lsucc$, is precise enough to orient pairs of terms such as $y\>\cst{b} \lsuccsim y\>\cst{a}$. 
The idea is that if $\cst{b} \lsuccsim \cst{a}$ and their type is neither functional nor a type variable (which could be instantiated by a functional type), then $\cst{b}$ and $\cst{a}$ will appear unapplied in $(y\>\cst{b})\theta$ and $(y\>\cst{a})\theta$, as green subterms, regardless of~$\theta$. Intuitively, in higher-order logic, $y\>\cst{b}$ can be construed as an arbitrary term with zero or more occurrences of $\cst{b}$, and $y\>\cst{a}$ is the same term but with zero or more occurrences of $\cst{a}$ instead. Like $\lsucc$, the relation $\lsuccsim$ is defined via an encoding: $t \lsuccsim s$ if and only if $\encO{t} \fosucceq \encOsub{t}{s}$. In turn, $\encOsub{t}{s} = \encZsub{\encB{XXX}{t}}{\encB{XXX}{s}}$, where $\encBonly$ encodes the bound variables as above and \[\encZsub{t}{s} = \begin{cases} \zof{y\>\tuple{t}} & \text{if $t = y\>\tuple{t}_n$, $s = y\>\tuple{s}_n$, and each $(t_i, s_i)$ satisfies ($*$) below} \\ \cst{lam}\>\encZ{\tau}\> \encZsub{v}{u} & \text{if $t = (\lambda x \mathbin: \tau.\; v)$, $s = (\lambda x \mathbin: \tau.\> u)$, and $t$ and $s$ are not fluid} \\ \cst{f}\>\encZ{\tuple{\tau}}\> \tuple{u}'_n & \text{if $t = \cst{f}\typeargs{\tuple{\tau}}\>\tuple{v}_n$, $s = \cst{f}\typeargs{\tuple{\tau}}\>\tuple{u}_n$, and $u'_i = \encZsub{v_i}{u_i}$ for all $i$} \\ \encZ{s} & \text{otherwise} \end{cases}\]% The condition ($*$) on $(t_i, s_i)$ is that (1) $t_i \lsuccsim s_i$ and $t_i$'s type is neither functional nor a type variable or (2) $t_i = s_i$. Notice the (well-founded) mutual dependency between $\lsuccsim$ and~$\encZsubonly{t}$. The difference between $\encOsubonly{t}$ and $\encOonly$, and hence between $\lsuccsim$ and $\lsucceq$, concerns applied variables. If the subterm $y\>\cst{b}$ occurs in $t$ where $s$ has $y\>\cst{a}$, and it can be determined that $\cst{b} \lsuccsim \cst{a}$, we proceed as if $y\>\cst{a}$ had been $y\>\cst{b}$, using the same variable $\zof{y\>\cst{b}}$ to represent both $y\>\cst{b}$ and~$y\>\cst{a}$. 
For example, we have $\cst{g}\> (y\> \cst{b}\> \cst{f}) \lsuccsim \cst{g}\> (y\> \cst{a}\> \cst{f})$ and $\cst{h}\> \cst{b}\> (y\> \cst{b}) \lsuccsim \cst{h}\> \cst{a}\> (y\> \cst{a})$ because \[\begin{array}{r@{}c@{}r@{\>}c@{\>}l@{}c@{}l} \encO{\cst{g}\> (y\> \cst{b}\> \cst{f})} & {} = {} & \cst{g}\> \zof{y\> \cst{b}\> \cst{f}} & = & \cst{g}\> \zof{y\> \cst{b}\> \cst{f}} & {} = {} & \encOsub{\cst{g}\> (y\> \cst{b}\> \cst{f})}{\cst{g}\> (y\> \cst{a}\> \cst{f})} \\ \encO{\cst{h}\> \cst{b}\> (y\> \cst{b})} & {} = {} & \cst{h}\> \cst{b}\> \zof{y\> \cst{b}} & {} \fosucc\kern-0.083333em {} & \cst{h}\> \cst{a}\> \zof{y\> \cst{b}} & {} = {} & \encOsub{\cst{h}\> \cst{b}\> (y\> \cst{b})}{\cst{h}\> \cst{a}\> (y\> \cst{a})} \end{array}\] On the other hand, $\cst{g}\> (y\> \cst{b}\> \cst{f}) \not\lsucceq \cst{g}\> (y\> \cst{a}\> \cst{f})$ and $\cst{h}\> \cst{b}\> (y\> \cst{b}) \not\lsucceq \cst{h}\> \cst{a}\> (y\> \cst{a})$. \begin{lemmax} \label{lem:lsuccsim-preserves-lfsucc-stability-under-ground-subst} Let $\fosucc$ be a strict partial order on $\lambda$-free terms that is stable under grounding substitutions and whose ground restriction is compatible with green contexts. If $t \lsuccsim s$, then $t\theta \lsucceq s\theta$ for all grounding substitutions $\theta$. \end{lemmax} \begin{proof} \looseness=-1 By induction on $t$. Let $\rho$ be the grounding substitution based on $\theta$ and $\{t, s\}$ defined in the proof of Lemma~\ref{lem:lsucc-preserves-lfsucc-stability-under-ground-subst}. From the hypothesis $t \lsuccsim s$ (i.e., $\encO{t} \fosucceq \encOsub{t}{s}$), we have $\encO{t}\rho \fosucceq \encOsub{t}{s}\rho$ by stability of $\fosucc$ under grounding substitutions. The crux is to show that $\encOsub{t}{s}\rho \fosucceq \encO{s}\rho$. 
The rest follows easily: From $\encO{t}\rho \fosucceq \encOsub{t}{s}\rho \fosucceq \encO{s}\rho$, $\encO{t\theta} \fosucceq \encO{s\theta}$ follows by construction of $\rho$, and thus $t\theta \lsucceq s\theta$ by injectivity of $\encOonly$. We show $\encOsub{t}{s}\rho \fosucceq \encO{s}\rho$ by exploiting ground compatibility of $\fosucc$ with green contexts. The difference between $\encOsub{t}{s}$ and $\encO{s}$ is that the former may have $\zof{y\>\tuple{t}}$ where the latter has $\zof{y\>\tuple{s}}$. We call each such pair $(t_i, s_i)$ with $t_i \not= s_i$ a \emph{mismatch}. After applying $\rho$, the terms $\encOsub{t}{s}\rho$ and $\encO{s}\rho$ have the form $\subterm{u}{t'_1, \ldots, t'_n}$ and $\subterm{u}{s'_1, \ldots, s'_n}$, respectively, where the holes in $\subterm{u}{\phantom{i}}$ correspond to positions at or below variables $\zof{y\>\tuple{t}}$ in $\encOsub{t}{s}$ and, in parallel, $\zof{y\>\tuple{s}}$ in $\encO{s}$. Clearly, each pair $(t'_{\!j}, s'_{\!j})$ must be equal to a pair $(\encO{t_i\theta},\allowbreak \encO{s_i\theta})$ of argument tuples of some applied variable. By~($*$), all mismatches are of nonfunctional, nonvariable types, and this property is preserved by $\theta$ and $\encOonly$. Hence, all the holes in $\subterm{u}{\phantom{i}}$ are located at green positions. For each $i$, we have $t_i \lsuccsim s_i$ by ($*$) and hence $t_i\theta \lsucceq s_i\theta$ by the induction hypothesis. Thus, $t'_{\!j} \fosucceq s'_{\!j}$ for all $j$. 
Using these inequalities in turn together with ground compatibility with green contexts, we form the following transitive chain, thereby resolving the crux: \vskip\abovedisplayskip \noindent\hbox{}% \phantom{\squareforqed}\hfill \raise1.35\baselineskip\hbox{$\underbrace{\greensubterm{u}{t'_1, \ldots, t'_n}}_{\encOsub{t}{s}\rho} \>\fosucceq\> \greensubterm{u}{s'_1, t'_2, \ldots, t'_n} \>\fosucceq\, \cdots \,\fosucceq\> \greensubterm{u}{s'_1, \ldots, s'_{n-1}, t'_n} \>\fosucceq\> \underbrace{\greensubterm{u}{s'_1, \ldots, s'_n}}_{\encO{s}\rho}$}% \hfill\squareforqed \end{proof} \end{notyet} \section{Refutational Completeness} \label{sec:refutational-completeness} Besides soundness, the most important property of the Boolean-free $\lambda$-superposition calculus introduced in \Section~\ref{sec:the-calculus} is refutational completeness. We will prove static and dynamic refutational completeness of $\mathit{HInf}$ \hbox{w.r.t.}\ $(\mathit{HRed}_{\mathrm{I}}, {\mathit{HRed}}_{\mathrm{C}})$, which is defined as follows: \begin{definitionx}[Static refutational completeness] Let $\mathit{Inf}$ be an inference system and let $(\mathit{Red}_{\mathrm{I}}, \mathit{Red}_{\mathrm{C}})$ be a redundancy criterion. The inference system $\mathit{Inf}$ is \emph{statically refutationally complete} \hbox{w.r.t.}\ $(\mathit{Red}_{\mathrm{I}}, \mathit{Red}_{\mathrm{C}})$ if we have $N \models \bot$ if and only if $\bot \in N$ for every clause set $N$ that is saturated \hbox{w.r.t.}\ $\mathit{Inf}$ and $\mathit{Red}_{\mathrm{I}}$. \end{definitionx} \begin{definitionx}[Dynamic refutational completeness] Let $\mathit{Inf}$ be an inference system and let $(\mathit{Red}_{\mathrm{I}}, \mathit{Red}_{\mathrm{C}})$ be a redundancy criterion. Let $(N_i)_i$ be a finite or infinite sequence over sets of clauses. Such a sequence is a \emph{derivation} if $N_i \setminus N_{i+1} \subseteq \mathit{Red}_{\mathrm{C}}(N_{i+1})$ for all $i$. 
It is \emph{fair} if all $\mathit{Inf}$-inferences from clauses in the limit inferior $\bigcup_i \bigcap_{\!j \geq i} N_{\!j}$ are contained in $\bigcup_i \mathit{Red}_{\mathrm{I}}(N_i)$. The inference system $\mathit{Inf}$ is \emph{dynamically refutationally complete} \hbox{w.r.t.}\ $(\mathit{Red}_{\mathrm{I}}, \mathit{Red}_{\mathrm{C}})$ if for every fair derivation $(N_i)_i$ such that $N_0 \models \bot$, we have $\bot \in N_i$ for some $i$. \label{def:dyn-complete} \end{definitionx} \oursubsection{Outline of the Proof} The proof proceeds in three steps, corresponding to the three levels ${\mathrm{GF}}$, ${\mathrm{GH}}$, and ${\mathrm{H}}$ introduced in \Section~\ref{ssec:the-redundancy-criterion}: \begin{enumerate} \item We use Bachmair and Ganzinger's work on the refutational completeness of standard (first-order) superposition~\cite{bachmair-ganzinger-1994} to prove static refutational completeness of $\mathit{GFInf}$. \item From the first-order model constructed in Bachmair and Ganzinger's proof, we derive a clausal higher-order model and thus prove static refutational completeness of $\mathit{GHInf}$. \item We use the saturation framework by Waldmann et al.~\cite{waldmann-et-al-2020-saturation} to lift the static refutational completeness of $\mathit{GHInf}$ to static and dynamic refutational completeness of $\mathit{HInf}$. \end{enumerate} In the first step, since the inference system $\mathit{GFInf}$ is standard ground superposition, we can make use of Bachmair and Ganzinger's results. % Given a saturated clause set $N\subseteq\CC_\GF$ with $\bot\not\in N$, Bachmair and Ganzinger prove refutational completeness by constructing a term rewriting system $R_N$ and showing that it can be viewed as an interpretation that is a model of $N$. This first step deals exclusively with ground first-order clauses. In the second step, we derive refutational completeness of $\mathit{GHInf}$. 
Given a saturated clause set $N\subseteq\CC_\GH$ with $\bot\not\in N$, we use the first-order model $R_{\floor{N}}$ of $\floor{N}$ constructed in the first step to derive a clausal higher-order interpretation that is a model of $N$. Under the encoding $\mathcalx{F}$, occurrences of the same symbol with different numbers of arguments are regarded as different symbols---e.g., $\floor{\cst{f}}=\cst{f}_0$ and $\floor{\cst{f}\>\cst{a}}=\cst{f}_1(\cst{a}_0)$. All $\lambda$-expressions $\lambda x.\>t$ are regarded as uninterpreted symbols $\cst{lam}_{\lambda x.\>t}$. The difficulty is to construct a higher-order interpretation that merges the first-order denotations of all $\cst{f}_i$ into a single higher-order denotation of $\cst{f}$ and to show that the symbols $\cst{lam}_{\lambda x.\>t}$ behave like $\lambda x.\>t$. This step relies on saturation \hbox{w.r.t.}\ the \infname{GArgCong} rule---which connects a term of functional type with its value when applied to an argument~$x$---and on the presence of the extensionality rule \infname{GExt}. In the third step, we employ the saturation framework by Waldmann et al.~\cite{waldmann-et-al-2020-saturation}% , which is based on Bachmair and Ganzinger's framework~\cite[\Section~4]{bachmair-ganzinger-2001-resolution}, to prove refutational completeness of $\mathit{HInf}$. Both saturation frameworks help calculus designers prove static and dynamic refutational completeness of nonground calculi. In addition, the framework by Waldmann et al.\ explicitly supports the redundancy criterion defined in \Section~\ref{ssec:the-redundancy-criterion}, which can be used to justify the deletion of subsumed clauses. Moreover, their saturation framework provides completeness theorems for prover architectures, such as the DISCOUNT loop. The main proof obligation we must discharge to use the framework is that there should exist nonground inferences in $\mathit{HInf}$ corresponding to all nonredundant inferences in $\mathit{GHInf}$. 
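As an aside, the arity-indexed encoding $\mathcalx{F}$ sketched above admits a concrete illustration. The following is a hypothetical toy sketch (untyped, ignoring type arguments and $\beta\eta$-normalization), not the paper's definition: a head symbol applied to $k$ arguments becomes a distinct first-order symbol indexed by $k$, and each $\lambda$-expression becomes an opaque constant indexed by its body.

```python
# Toy term representation (hypothetical, not from the paper):
#   ('app', head, [args])  -- a symbol or variable applied to zero or more arguments
#   ('lam', var, body)     -- a lambda-expression

def render(term):
    # pretty-print a higher-order term (used only to index the lam constants)
    if term[0] == 'lam':
        _, var, body = term
        return "\\" + var + ". " + render(body)
    _, head, args = term
    return " ".join([head] + [render(a) for a in args])

def encode(term):
    # f applied to k arguments becomes the distinct first-order symbol f_k;
    # a lambda-expression becomes an opaque constant lam_<...>
    if term[0] == 'lam':
        _, var, body = term
        return "lam_<" + var + ". " + render(body) + ">"
    _, head, args = term
    fo_head = head + "_" + str(len(args))
    return fo_head + ("(" + ", ".join(encode(a) for a in args) + ")" if args else "")
```

For instance, `encode(('app', 'f', []))` yields `f_0`, whereas `encode(('app', 'f', [('app', 'a', [])]))` yields `f_1(a_0)`, mirroring $\floor{\cst{f}}=\cst{f}_0$ and $\floor{\cst{f}\>\cst{a}}=\cst{f}_1(\cst{a}_0)$.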
We face two specifically higher-order difficulties. First, in standard superposition, we can avoid \infname{Sup} inferences into variables~$x$ by exploiting the clause order's compatibility with contexts: If $t' \prec t$, we have $C\{x \mapsto\nobreak t'\} \prec C\{x \mapsto t\}$, which allows us to show that \infname{Sup} inferences into variables are redundant. This technique fails for higher-order variables~$x$ that occur applied in~$C$, because the order lacks compatibility with arguments. This is why our \infname{Sup} rule must perform some inferences into variables. The other difficulty also concerns applied variables. We must show that any nonredundant \infname{Sup} inference in level ${\mathrm{GH}}$ into a position corresponding to a fluid term or a deeply occurring variable in level ${\mathrm{H}}$ can be lifted to a \infname{FluidSup} inference. This involves showing that the $z$ variable in \infname{FluidSup} can represent arbitrary contexts around a term~$t$. For the entire proof of refutational completeness, $\beta\eta$-normalization is the proverbial dog that did not bark. On level ${\mathrm{GH}}$, the rules \infname{Sup}, \infname{ERes}, and \infname{EFact} preserve $\eta$-short $\beta$-normal form, and so does first-order term rewriting. Thus, we can completely ignore $\medrightarrow_\beta$ and $\medrightarrow_\eta$. On level ${\mathrm{H}}$, instantiation can cause $\beta$- and $\eta$-reduction, but this poses no difficulties thanks to the clause order's stability under grounding substitutions. \oursubsection{The Ground First-Order Level} We use Bachmair and Ganzinger's results on standard superposition~\cite{bachmair-ganzinger-1994} to prove refutational completeness of ${\mathrm{GF}}$. In the subsequent steps, we will also make use of specific properties of the model Bachmair and Ganzinger construct. 
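Bachmair and Ganzinger's model construction, recalled next, processes clauses in ascending clause order and lets certain clauses contribute ("produce") rewrite rules. As a heavily simplified, hypothetical sketch—restricted to unit positive ground equations over string "terms", with a stand-in length-lexicographic order and a naive substring reducibility test, neither of which is the order or matching used in the paper—the idea reads as follows. For unit clauses, the maximality and falsity conditions are trivial, so only orientation and irreducibility of the left-hand side remain to check.

```python
# Heavily simplified sketch (assumptions: unit positive ground equations,
# string terms, substring reducibility, toy length-lexicographic order).

def gt(s, t):
    # stand-in term order: compare by length, then lexicographically
    return (len(s), s) > (len(t), t)

def reducible(term, rules):
    # naive check: is some left-hand side a (sub)string of the term?
    return any(lhs in term for lhs, _ in rules)

def build_rules(equations):
    rules = []  # plays the role of R_N^C, grown in ascending clause order
    for s, t in sorted(equations, key=lambda e: max((len(u), u) for u in e)):
        if not gt(s, t):
            s, t = t, s           # orient the equation so that s > t
        if not reducible(s, rules):
            rules.append((s, t))  # the clause "produces" s -> t
    return rules
```

For example, from the equations `f(a) = a` and `f(f(a)) = b`, only `f(a) -> a` is produced, since `f(f(a))` is already reducible by that rule when its clause is considered.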
The basis of Bachmair and Ganzinger's proof is that a term rewriting system $R$ defines an interpretation $\TT_\GF/R$ such that for every ground equation $s \approx t$, we have $\TT_\GF/R \models s \approx t$ if and only if $s \medleftrightarrow_R^* t$. Formally, $\TT_\GF/R$ denotes the monomorphic first-order interpretation whose universes $\UU_\tau$ consist of the $R$-equivalence classes over $\TT_\GF$ containing terms of type $\tau$. The interpretation $\TT_\GF/R$ is term-generated---that is, for every element $a$ of the universe of this interpretation and for any valuation $\xi$, there exists a ground term~$t$ such that $\interpret{t}{\TT_\GF/R}{\xi} = a$. To lighten notation, we will write $R$ to refer to both the term rewriting system $R$ and the interpretation $\TT_\GF/R$. The term rewriting system is constructed as follows: \begin{definitionx} Let $N\subseteq\CC_\GF$. We first define sets of rewrite rules $E_N^C$ and $R_N^C$ for all $C\in N$ by induction on the clause order. Assume that $E_N^D$ has already been defined for all $D \in N$ such that $D \prec C.$ Then $R_N^C = \bigcup_{D \prec C} E_N^D.$ Let $E_N^C=\{s \medrightarrow t\}$ if the following conditions are met:\ \begin{enumerate}[(a)] \item $C = C' \lor s \approx t$; \label{cond:C-eq-C'-st} \item $s \approx t$ is strictly $\succsim$-maximal in $C$; \label{cond:st-strictly-max} \item $s \succ t$; \label{cond:s-gt-t} \item $C'$ is false in $R_N^C$; \label{cond:C'-false} \item $s$ is irreducible \hbox{w.r.t.}\ $R_N^C.$ \label{cond:s-irred} \end{enumerate} Then $C$ is said to \emph{produce} $s \medrightarrow t$. % Otherwise, $E_N^C = \emptyset$.
Finally, $R_N = \bigcup_{D} E_N^D.$ \end{definitionx} Based on Bachmair and Ganzinger's work, Bentkamp et al.\ \cite[Lemma~4.2 % and Theorem~4.3] % {bentkamp-et-al-lfhosup-arxiv} prove the following properties of $R_N$: \begin{lemmax} \label{lem:productive-clauses} Let $\bot\not\in N$ and $N\subseteq\CC_\GF$ be saturated \hbox{w.r.t.}\ $\mathit{GFInf}$ and $\mathit{GFRed}_{\mathrm{I}}$. If $C = C' \lor s \approx t \in N$ produces $s \medrightarrow t$, then $s \approx t$ is strictly $\succeq$-eligible in $C$ and $C'$ is false in $R_N$. \end{lemmax} \begin{theoremx}[Ground first-order static refutational completeness] The inference system $\mathit{GFInf}$ is statically refutationally complete \hbox{w.r.t.}\ $(\mathit{GFRed}_{\mathrm{I}}, \mathit{GFRed}_{\mathrm{C}})$. More precisely, if $N\subseteq\CC_\GF$ is a clause set saturated \hbox{w.r.t.}\ $\mathit{GFInf}$ and $\mathit{GFRed}_{\mathrm{I}}$ such that $\bot\not\in N$, then $R_N$ is a model of $N$. \label{thm:GF-refutational-completeness} \end{theoremx} \oursubsection{The Ground Higher-Order Level} \label{ssec:the-ground-higher-order-level} In this subsection, let $\mathit{GHSel}$ be a selection function on $\CC_\GH$, let $N\subseteq\CC_\GH$ be a clause set saturated \hbox{w.r.t.}\ $\mathit{GHInf}^\mathit{GHSel}$ and $\mathit{GHRed}_{\mathrm{I}}^\mathit{GHSel}$ such that $\bot\not\in N$. Clearly, $\floor{N}$ is then saturated \hbox{w.r.t.}\ $\smash{\mathit{GFInf}^{\floor{\mathit{GHSel}}}}$ and $\smash{\mathit{GFRed}_{\mathrm{I}}^{\floor{\mathit{GHSel}}}}$. We abbreviate $R_{\floor{N}}$ as $R$. Given two terms $s,t\in\TT_\GH$, we write $\eqR{s}{t}$ to abbreviate $R\models \floor{s}\approx\floor{t}$, which is equivalent to $\interpretfo{\floor{s}}{} = \interpretfo{\floor{t}}{}$. \begin{lemmax}\label{lem:arg-cong-ext} For all terms $t,s\oftype\tau\fun\upsilon$ in $\TT_\GH$, the following statements are equivalent: \begin{enumerate} \item[\upshape 1.] $\eqR{t}{s}$; \item[\upshape 2.] 
$\eqR{t\>(\cst{diff}\>t\>s)}{s\>(\cst{diff}\>t\>s)}$; \item[\upshape 3.] $\eqR{t\>u}{s\>u}$ for all $u\in\TT_\GH$. \end{enumerate} \end{lemmax} \begin{proof} (3)\,$\Rightarrow$\,(2): Take $u := \cst{diff}\>t\>s$. \medskip\noindent (2)\,$\Rightarrow$\,(1): Since $N$ is saturated, the \infname{GExt} inference that generates the clause $C = t\>(\cst{diff}\>t\>s) \not\eq s\>(\cst{diff}\>t\>s) \mathrel\lor t \approx s$ is redundant---i.e., $C \in N \mathrel\cup \mathit{GHRed}_{\mathrm{C}}(N)$---and hence $R\models\floor{C}$ by Theorem~\ref{thm:GF-refutational-completeness} and the assumption that $\bot\not\in N$. Therefore, it follows from $\eqR{t\>(\cst{diff}\>t\>s)}{s\>(\cst{diff}\>t\>s)}$ that $\eqR{t}{s}$. \medskip\noindent (1)\,$\Rightarrow$\,(3): We assume that $\eqR{t}{s}$---i.e., $\floor{t} \medleftrightarrow^*_{R}\floor{s}$. By induction on the number of rewrite steps between $\floor{t}$ and $\floor{s}$ and by transitivity of $\eqR{}{}$, it suffices to show that $\floor{t} \medrightarrow_{R} \floor{s}$ implies $\eqR{t\>u}{s\>u}$. If the rewrite step $\floor{t} \medrightarrow_{R} \floor{s}$ is not at the top level, then neither $\betanf{s}$ nor $\betanf{t}$ can be $\lambda$-expressions. Therefore, $(\betanf{s})\>(\betanf{u})$ and $(\betanf{t})\>(\betanf{u})$ are in $\eta$-short $\beta$-normal form, and there is an analogous rewrite step $\floor{t\>u} \medrightarrow_{R} \floor{s\>u}$ using the same rewrite rule. It follows that $\eqR{t\>u}{s\>u}$. If the rewrite step $\floor{t} \medrightarrow_{R} \floor{s}$ is at the top level, $\floor{t}\medrightarrow \floor{s}$ must be a rule of $R$. This rule must originate from a productive clause of the form $\floor{C} = \floor{C' \mathrel\lor t \approx s}$. By Lemma~\ref{lem:productive-clauses}, $\floor{t \approx s}$ is strictly $\succeq$-eligible in $\floor{C}$ \hbox{w.r.t.}\ $\floor{\mathit{GHSel}}$, and hence $t \approx s$ is strictly $\succeq$-eligible in $C$ \hbox{w.r.t.}\ $\mathit{GHSel}$. 
% Thus, the following \infname{GArgCong} inference $\iota$ is applicable: \[ \namedinference{GArgCong} {C' \mathrel\lor t \approx s} {C' \mathrel\lor t\>u \approx s\>u} \] By saturation, $\iota$ is redundant \hbox{w.r.t.}\ $N$---i.e., $\mathit{concl}(\iota)\in N \mathrel\cup \mathit{GHRed}_{\mathrm{C}}(N)$. By Theorem~\ref{thm:GF-refutational-completeness} and the assumption that $\bot\not\in N$, $\floor{\mathit{concl}(\iota)}$ is then true in $R$. By Lemma~\ref{lem:productive-clauses}, $\floor{C'}$ is false in $R$. Therefore, $\floor{t\>u \approx s\>u}$ must be true in $R$. \qed \end{proof} \begin{lemmax} \label{lem:subst-congruence} Let $s\in\TT_\HH$ and $\theta$, $\theta'$ grounding substitutions such that $\eqR{x\theta}{x\theta'}$ for all variables~$x$ and $\alpha\theta = \alpha\theta'$ for all type variables $\alpha$. Then $\eqR{s\theta}{s\theta'}$. \end{lemmax} \begin{proof} In this proof, we work directly on $\lambda$-terms. To prove the lemma, it suffices to prove it for any $\lambda$-term $s$. Here, for $\lambda$-terms $t_1$ and $t_2$, the notation $\eqR{t_1}{t_2}$ is to be read as $\eqR{\betanf{t_1}}\betanf{{t_2}}$ because $\mathcalx{F}$ is only defined on $\eta$-short $\beta$-normal terms. % % % \newcommand{\choice}{\oplus}% \medskip\noindent \textsc{Definition}\enskip % We extend the syntax of $\lambda$-terms with a new polymorphic function symbol $\choice\oftype\forallty{\alpha}\alpha\fun\alpha\fun\alpha$. We will omit its type argument. It is equipped with two reduction rules: $\choice\>t\>s \medrightarrow t$ and $\choice\>t\>s \medrightarrow s$. A \emph{$\beta\choice$-reduction step} is either a rewrite step following one of these rules or a $\beta$-reduction step. 
\medskip\noindent The computability path order $\succ_\cst{CPO}$ \cite{blanqui-et-al-2015} guarantees that \begin{itemize} \item $\choice\>t\>s \succ_\cst{CPO} s$ by applying rule $@\rhd$; \item $\choice\>t\>s \succ_\cst{CPO} t$ by applying rule $@\rhd$ twice; \item $(\lambda x.\>t)\>s \succ_\cst{CPO} t[x\mapsto s]$ by applying rule $@\beta$. \end{itemize} Since this order is moreover monotone, every $\beta\choice$-reduction step decreases the term \hbox{w.r.t.}\ $\succ_\cst{CPO}$. % % The order is also well founded; thus, $\beta\choice$-reductions terminate. Since the $\beta\choice$-reduction steps describe a finitely branching term rewriting system, by K\H{o}nig's lemma \cite{koenigs-lemma-1927}, there is a maximal number of $\beta\choice$-reduction steps from each $\lambda$-term. \medskip\noindent \textsc{Definition}\enskip % A $\lambda$-term is \emph{term-ground} if it does not contain free term variables. It may contain polymorphic type arguments. \newcommand{\choicesubst}{\sigma}% \newcommand{\livesize}{\mathscr{S}}% \medskip\noindent \textsc{Definition}\enskip % We introduce an auxiliary function $\livesize$ that essentially measures the size of a $\lambda$-term but assigns a size of $1$ to term-ground $\lambda$-terms.
\[\livesize(s) = \begin{cases} 1 & \text{if $s$ is term-ground or is a bound or free variable or a symbol} \\ 1 + \livesize(t) & \text{if $s$ is not term-ground and has the form $\lambda x.\>t$} \\ \livesize(t) + \livesize(u) & \text{if $s$ is not term-ground and has the form $t\>u$} \end{cases}\]% % We prove $\eqR{s\theta}{s\theta'}$ by well-founded induction on $s$, $\theta$, and $\theta'$ using the left-to-right lexicographic order on the triple $\bigl(n_1(s), n_2(s), n_3(s)\bigr)\in\mathbb{N}^3$, where \begin{itemize} \item \label{mea:bi-red} $n_1(s)$ is the maximal number of $\beta\choice$-reduction steps starting from $s\choicesubst$, where $\choicesubst$ is the substitution mapping each term variable $x$ to $\choice\>x\theta\>x\theta'$; \item $n_2(s)$ is the number of free term variables occurring more than once in $s$; \item $n_3(s) = \livesize(s)$. \end{itemize} \medskip\noindent \textsc{Case 1:}\enskip The $\lambda$-term $s$ is term-ground. Then the lemma is trivial. \medskip\noindent \textsc{Case 2:}\enskip The $\lambda$-term $s$ contains $k \geq 2$ free term variables. Then we can apply the induction hypothesis twice and use the transitivity of $\eqR{}{}$ as follows. Let $x$ be one of the free term variables in $s$. Let $\rho = \{x \mapsto x\theta\}$ be the substitution that maps $x$ to $x\theta$ and ignores all other variables. Let $\rho' = \theta'[x\mapsto x]$. We want to invoke the induction hypothesis on $s\rho$ and $s\rho'$. This is justified because $s\choicesubst$ $\choice$-reduces to $s\rho\choicesubst$ and to $s\rho'\choicesubst$. These $\choice$-reductions have at least one step because $x$ occurs in $s$ and $k \geq 2$. Hence, $n_1(s)>n_1(s\rho)$ and $n_1(s)>n_1(s\rho')$. This application of the induction hypothesis gives us $\eqR{s\rho\theta}{s\rho\theta'}$ and $\eqR{s\rho'\theta}{s\rho'\theta'}$. Since $s\rho\theta = s\theta$ and $s\rho'\theta' = s\theta'$, this is equivalent to $\eqR{s\theta}{s\rho\theta'}$ and $\eqR{s\rho'\theta}{s\theta'}$.
Since moreover $s\rho\theta' = s\rho'\theta$, we have $\eqR{s\theta}{s\theta'}$ by transitivity of $\eqR{}{}$. The following illustration visualizes the above argument: \[ \begin{tikzpicture} \matrix (m) [matrix of math nodes,row sep=1.5em,column sep=0.1em,minimum width=1em] { & s\rho & & & & s\rho' & \\ s\theta & \underset{\scriptscriptstyle\text{IH}}{\eqR{}{}} & s\rho\theta' & = & s\rho'\theta & \underset{\scriptscriptstyle\text{IH}}{\eqR{}{}} & s\theta' \\}; \draw[-stealth] (m-1-2) edge node [left] {$\theta$\,} (m-2-1); \draw[-stealth] (m-1-2) edge node [right] {\,\,$\theta'$} (m-2-3); \draw[-stealth] (m-1-6) edge node [left] {$\theta$\,} (m-2-5); \draw[-stealth] (m-1-6) edge node [right] {\,\,$\theta'$} (m-2-7); \end{tikzpicture} \] \vskip-\baselineskip % \medskip\noindent \textsc{Case 3:}\enskip The $\lambda$-term $s$ contains a free term variable that occurs more than once. Then we rename variable occurrences apart by replacing each occurrence of each free term variable $x$ by a fresh variable $x_i$, for which we define $x_i\theta = x\theta$ and $x_i\theta' = x\theta'$. Let $s'$ be the resulting $\lambda$-term. Since $s\choicesubst = s'\choicesubst$, we have $n_1(s)=n_1(s')$. All free term variables occur only once in $s'$. Hence, $n_2(s)>0=n_2(s')$. Therefore, we can invoke the induction hypothesis on $s'$ to obtain $\eqR{s'\theta}{s'\theta'}$. Since $s\theta = s'\theta$ and $s\theta' = s'\theta'$, it follows that $\eqR{s\theta}{s\theta'}$. \medskip\noindent \textsc{Case 4:}\enskip The $\lambda$-term $s$ contains only one free term variable $x$, which occurs exactly once. \medskip\noindent \textsc{Case 4.1:}\enskip The $\lambda$-term $s$ is of the form $\cst{f}\typeargs{\tuple{\tau}}\>\tuple{t}$ for some symbol~$\cst{f}$, some types $\tuple{\tau}$, and some $\lambda$-terms~$\tuple{t}$. Then let $u$ be the $\lambda$-term in $\tuple{t}$ that contains $x$. We want to apply the induction hypothesis to $u$, which can be justified as follows. 
Consider the longest sequence of $\beta\choice$-reductions from $u\choicesubst$. This sequence can be replicated inside $s\choicesubst=(\cst{f}\typeargs{\tuple{\tau}}\>\tuple{t})\choicesubst$. Therefore, the longest sequence of $\beta\choice$-reductions from $s\choicesubst$ is at least as long---i.e., $n_1(s)\geq n_1(u)$. Since both $s$ and $u$ have only one free term variable occurrence, we have $n_2(s) = 0 = n_2(u)$. But $n_3(s) > n_3(u)$ because $u$ is a term-nonground subterm of $s$. Applying the induction hypothesis gives us $\eqR{u\theta}{u\theta'}$. By definition of $\mathcalx{F}$, we have $\floor{(\cst{f}\typeargs{\tuple{\tau}}\>\tuple{t})\theta} = \cst{f}_m^{\smash{\tuple{\tau}\theta}}\>\floor{\tuple{t}\theta}$ and analogously for $\theta'$, where $m$ is the length of $\tuple{t}$. By congruence of $\approx$ in first-order logic, it follows that $\eqR{s\theta}{s\theta'}$. \medskip\noindent \textsc{Case 4.2:}\enskip The $\lambda$-term $s$ is of the form $x\>\tuple{t}$ for some $\lambda$-terms $\tuple{t}$. Then we observe that, by assumption, $\eqR{x\theta}{x\theta'}$. By applying Lemma~\ref{lem:arg-cong-ext} repeatedly, we have $\eqR{x\theta\>\tuple{t}}{x\theta'\>\tuple{t}}$. Since $x$ occurs only once, $\tuple{t}$ is term-ground and hence $s\theta = x\theta\>\tuple{t}$ and $s\theta' = x\theta'\>\tuple{t}$. Therefore, $\eqR{s\theta}{s\theta'}$. \medskip\noindent \textsc{Case 4.3:}\enskip The $\lambda$-term $s$ is of the form $\lambda z.\>u$ for some $\lambda$-term $u$. Then we observe that to prove $\eqR{s\theta}{s\theta'}$, it suffices to show that $\eqR{s\theta\>(\cst{diff}\>s\theta\>s\theta')}{s\theta' \>(\cst{diff}\>s\theta\>s\theta')}$ by Lemma~\ref{lem:arg-cong-ext}. Via $\beta\eta$-conversion, this is equivalent to $\eqR{u\rho\theta}{u\rho\theta'}$ where $\rho = \{z\mapsto \cst{diff}\>(\betanf{s\theta})\>(\betanf{s\theta'})\}$. To prove $\eqR{u\rho\theta}{u\rho\theta'}$, we apply the induction hypothesis on $u\rho$. 
It remains to show that the induction hypothesis is applicable on $u\rho$. Consider the longest sequence of $\beta\choice$-reductions from $u\rho\choicesubst$. Since $z\rho$ starts with the $\cst{diff}$ symbol, $z\rho$ will not cause more $\beta\choice$-reductions than $z$. Hence, the same sequence of $\beta\choice$-reductions can be applied inside $s\choicesubst = (\lambda z.\>u)\choicesubst$, proving that $n_1(s) \geq n_1(u\rho)$. Since both $s$ and $u\rho$ have only one free term variable occurrence, $n_2(s) = 0 = n_2(u\rho)$. But $n_3(s) = \livesize(s) = 1 + \livesize(u)$ because $s$ is term-nonground. Moreover, $\livesize(u)\geq\livesize(u\rho)=n_3(u\rho)$ because $\rho$ replaces a variable by a ground $\lambda$-term. Hence, $n_3(s) > n_3(u\rho)$, which justifies the application of the induction hypothesis. \medskip\noindent \textsc{Case 4.4:}\enskip The $\lambda$-term $s$ is of the form $(\lambda z.\>u)\>t_0\>\tuple{t}$ for some $\lambda$-terms $u$, $t_0$, and $\tuple{t}$. We apply the induction hypothesis on $s' = u\{z \mapsto t_0\}\>\tuple{t}$. To justify it, consider the longest sequence of $\beta\choice$-reductions from $s'\choicesubst$. Prepending the reduction $s\choicesubst \medrightarrow_\beta s'\choicesubst$ to it gives us a longer sequence from $s\choicesubst$. Hence, $n_1(s) > n_1(s')$. The induction hypothesis gives us $\eqR{s'\theta}{s'\theta'}$. Since $\eqR{}{}$ is invariant under $\beta$-reductions, it follows that $\eqR{s\theta}{s\theta'}$. \qed \end{proof} We proceed by defining a higher-order interpretation $\III^{\smash{{\mathrm{GH}}}}=(\UU^{{\mathrm{GH}}},\IIty^{{\mathrm{GH}}},\II^{{\mathrm{GH}}},\allowbreak\LL^{{\mathrm{GH}}})$ derived from~$R$. The interpretation $R$ is an interpretation in monomorphic first-order logic. Let $\UU_\tau$ be its universe for type $\tau$ and $\II$ its interpretation function. To illustrate the construction, we will employ the following running example. 
Let the higher-order signature be $\Sigma_\mathsf{ty} = \{\iota, \fun\}$ % and $\Sigma = \{\cst{f}\oftype \iota \fun \iota,\> \cst{a} \oftype \iota,\> \cst{b} \oftype \iota\}$. The first-order signature accordingly consists of $\Sigma_\mathsf{ty}$ and $\Sigma_{\mathrm{GF}} = \{\cst{f}_0, \cst{f}_1, \cst{a}_0, \cst{b}_0\} \cup \{ \cst{lam}_{\lambda x.\>t} \mid \lambda x.\>t \in \TT_\GH\}$. We write $[t]$ for the equivalence class of $t\in\TT_\GF$ modulo $R$. We assume that $[\cst{f}_0] = [\cst{lam}_{\lambda x.\>x}]$, $[\cst{a}_0] = [\cst{f}_1(\cst{a}_0)]$, $[\cst{b}_0] = [\cst{f}_1(\cst{b}_0)]$, and that $\cst{f}_0$, $\cst{lam}_{\lambda x.\>\cst{a}}$, $\cst{lam}_{\lambda x.\>\cst{b}}$, $\cst{a}_0$, and $\cst{b}_0$ are in disjoint equivalence classes. Hence, $\mathscr{U}_{\iota\fun\iota} = \{ [\cst{f}_0], [\cst{lam}_{\lambda x.\>\cst{a}}], [\cst{lam}_{\lambda x.\>\cst{b}}], \dots \}$ and $\mathscr{U}_{\iota} = \{ [\cst{a}_0], [\cst{b}_0] \}$. When defining the universe $\UU^{{\mathrm{GH}}}$ of the higher-order interpretation, we need to ensure that it contains subsets of function spaces, since $\IIty^{{\mathrm{GH}}}(\fun)(\mathscr{D}_1,\mathscr{D}_2)$ must be a subset of the function space from $\mathscr{D}_1$ to $\mathscr{D}_2$ for all $\mathscr{D}_1,\mathscr{D}_2\in\UU^{{\mathrm{GH}}}$. But the first-order universes $\UU_\tau$ consist of equivalence classes of terms from $\TT_\GF$ \hbox{w.r.t.}\ the rewriting system $R$, not of functions. To repair this mismatch, we will define a family of functions $\mathscr{E}_\tau$ that give a meaning to the elements of the first-order universes $\UU_{\tau}$. We will define a domain $\DD_\tau$ for each ground type $\tau$ and then let $\UU^{{\mathrm{GH}}}$ be the set of all these domains $\DD_\tau$. Thus, there will be a one-to-one correspondence between ground types and domains.
Since the higher-order and first-order type signatures are identical (including ${\fun}$, which is uninterpreted in first-order logic), we can identify higher-order and first-order types. We define $\mathscr{E}_\tau$ and $\DD_{\tau}$ in a mutual recursion and prove that $\mathscr{E}_\tau$ is a bijection simultaneously. We start with nonfunctional types $\tau$: Let $\DD_\tau = \UU_{\tau}$ and let $\mathscr{E}_{\tau} : \UU_{\tau} \medrightarrow \DD_\tau$ be the identity. We proceed by defining $\mathscr{E}_{\tau\fun\upsilon}$ and $\DD_{\tau\fun\upsilon}$. We assume that $\mathscr{E}_\tau$, $\mathscr{E}_\upsilon$, $\DD_{\tau}$, and $\DD_{\upsilon}$ have already been defined and that $\mathscr{E}_\tau$, $\mathscr{E}_\upsilon$ are bijections. To ensure that $\mathscr{E}_{\tau\fun\upsilon}$ will be bijective, we first define an injective function $\mathscr{E}^0_{\tau\fun\upsilon}:\UU_{\tau\fun\upsilon}\medrightarrow(\DD_{\tau}\medrightarrow\DD_{\upsilon})$, define $\DD_{\tau\fun\upsilon}$ as its image $\mathscr{E}^0_{\tau\fun\upsilon}(\UU_{\tau\fun\upsilon})$, and finally define $\mathscr{E}_{\tau\fun\upsilon}$ as $\mathscr{E}^0_{\tau\fun\upsilon}$ with its codomain restricted to $\DD_{\tau\fun\upsilon}$: \begin{align*} &\mathscr{E}^0_{\tau\fun\upsilon}:\UU_{\tau\fun\upsilon}\medrightarrow\DD_{\tau}\medrightarrow\DD_{\upsilon}\\ &\mathscr{E}^0_{\tau\fun\upsilon}(\interpretfo{\floor{s}}{}) \bigl(\mathscr{E}_{\tau}\bigl(\interpretfo{\floor{u}}{}\bigr)\bigr) = \mathscr{E}_{\upsilon}\bigl(\interpretfo{\floor{s\>u}}{}\bigr) \end{align*} This is a valid definition because each element of $\smash{\UU_{\tau\fun\upsilon}}$ is of the form $\interpretfo{\floor{s}}{}$ for some $s$ and each element of $\DD_{\tau}$ is of the form $\mathscr{E}_{\tau}\bigl(\interpretfo{\floor{u}}{}\bigr)$ for some $u$. This function is well defined if it does not depend on the choice of $s$ and $u$.
To show this, we assume that there are other ground terms $t$ and $v$ such that $\interpretfo{\floor{s}}{} = \interpretfo{\floor{t}}{}$ and $\mathscr{E}_{\tau}\bigl(\interpretfo{\floor{u}}{}\bigr) = \mathscr{E}_{\tau}\bigl(\interpretfo{\floor{v}}{}\bigr)$. Since $\mathscr{E}_{\tau}$ is bijective, we have $\interpretfo{\floor{u}}{} = \interpretfo{\floor{v}}{}$. Using the $\eqR{}{}$-notation, we can write this as $\eqR{u}{v}$. Applying Lemma~\ref{lem:subst-congruence} to the term $x\>y$ and the substitutions $\{x\mapsto s, y\mapsto u\}$ and $\{x\mapsto t, y\mapsto v\}$, we obtain $\eqR{s\>u}{t\>v}$---i.e., $\interpretfo{\floor{s\>u}}{} = \interpretfo{\floor{t\>v}}{}$. Thus, $\mathscr{E}^0_{\tau\fun\upsilon}$ is well defined. It remains to show that $\mathscr{E}^0_{\tau\fun\upsilon}$ is injective as a function from $\smash{\UU_{\tau\fun\upsilon}}$ to $\DD_{\tau}\medrightarrow\DD_{\upsilon}$. Assume two terms $s, t \in \TT_\GH$ such that for all $u \in\TT_\GH$, we have $\interpretfo{\floor{s\>u}}{} = \interpretfo{\floor{t\>u}}{}$. By Lemma~\ref{lem:arg-cong-ext}, it follows that $\interpretfo{\floor{s}}{} = \interpretfo{\floor{t}}{}$, which concludes the proof that $\mathscr{E}^0_{\tau\fun\upsilon}$ is injective. We define $\DD_{\tau\fun\upsilon} = \mathscr{E}^0_{\tau\fun\upsilon}(\UU_{\tau\fun\upsilon})$ and $\mathscr{E}_{\tau\fun\upsilon}(a) = \mathscr{E}^0_{\tau\fun\upsilon}(a)$. This ensures that $\mathscr{E}_{\tau\fun\upsilon}$ is bijective and concludes the inductive definition of $\DD$ and $\mathscr{E}$. In the following, we will usually write $\mathscr{E}$ instead of $\mathscr{E}_\tau$, since the type $\tau$ is determined by the first argument of $\mathscr{E}_\tau$. In our running example, we thus have $\mathscr{D}_\iota = \mathscr{U}_\iota = \{ [\cst{a}_0], [\cst{b}_0] \}$ and $\mathscr{E}_\iota$ is the identity $\mathscr{U}_\iota \medrightarrow \mathscr{D}_\iota,\ c \mapsto c$. 
The function $\mathscr{E}^0_{\iota\fun\iota}$ maps $[\cst{f}_0]$ to the identity $\mathscr{D}_\iota \medrightarrow \mathscr{D}_\iota,\ c \mapsto c$; it maps $[\cst{lam}_{\lambda x.\>\cst{a}}]$ to the constant function $\mathscr{D}_\iota \medrightarrow \mathscr{D}_\iota,\ c \mapsto [\cst{a}_0]$; and it maps $[\cst{lam}_{\lambda x.\>\cst{b}}]$ to the constant function $\mathscr{D}_\iota \medrightarrow \mathscr{D}_\iota,\ c \mapsto [\cst{b}_0]$. The swapping function $[\cst{a}_0] \mapsto [\cst{b}_0], [\cst{b}_0] \mapsto [\cst{a}_0]$ is not in the image of $\mathscr{E}^0_{\iota\fun\iota}$. Therefore, $\mathscr{D}_{\iota\fun\iota}$ contains only the identity and the two constant functions, but not this swapping function. We define the higher-order universe as $\UU^{{\mathrm{GH}}}= \{\DD_{\tau}\mid \tau \text{ ground}\}$. Moreover, we define $\IIty^{{\mathrm{GH}}}(\kappa)(\DD_{\tuple{\tau}}) = \DD_{\kappa(\tuple{\tau})}$ for all $\kappa \in \Sigma_\mathsf{ty}$, completing the type interpretation $\III_\mathsf{ty}^{\mathrm{GH}} = (\UU^{{\mathrm{GH}}},\IIty^{{\mathrm{GH}}})$. We define the interpretation function as $\II^{{\mathrm{GH}}}(\cst{f},\DD_{\tuple{\upsilon}_m}) \defeq \mathscr{E}(\II(\cst{f}_0^{\tuple{\upsilon}_m}))$ for all $\cst{f}\oftypedecl\forallty{\tuple{\alpha}_m}\tau$. In our example, we thus have $\II^{{\mathrm{GH}}}(\cst{f}) = \mathscr{E}([\cst{f}_0])$, which is the identity on $\mathscr{D}_\iota \medrightarrow \mathscr{D}_\iota$. Finally, we need to define the designation function $\LL^{{\mathrm{GH}}}$, which takes a valuation $\xi$ and a $\lambda$-expression as arguments. Given a valuation $\xi$, we choose a grounding substitution~$\theta$ such that $\DD_{\alpha\theta}=\xi(\alpha) \text{ and } \mathscr{E}(\interpretfo{\floor{x\theta}}{}) = \xi(x)$ for all type variables $\alpha$ and all variables $x$.
Such a substitution can be constructed as follows: We can fulfill the first equation in a unique way because there is a one-to-one correspondence between ground types and domains. Since $\mathscr{E}^{-1}(\xi(x))$ is an element of a first-order universe and $R$ is term-generated, there exists a ground term $t$ such that $\interpretfoxi{t}=\mathscr{E}^{-1}(\xi(x))$. Choosing one such $t$ and defining $x\theta = \ceil{t}$ gives us a grounding substitution $\theta$ with the desired property. We define $\LL^{{\mathrm{GH}}}(\xi,(\lambda x.\>t)) = \mathscr{E}(\interpretfo{\floor{(\lambda x.\>t)\theta}}{})$. To prove that this is well defined, we assume that there exists another substitution~$\theta'$ with the properties $\smash{\DD_{\alpha\theta'}}=\xi(\alpha)$ for all $\alpha$ and $\mathscr{E}(\interpretfo{\floor{x\theta'}}{}) = \xi(x)$ for all $x$. Then we have $\alpha\theta = \alpha\theta'$ for all $\alpha$ due to the one-to-one correspondence between domains and ground types. We have $\interpretfo{\floor{x\theta}}{} = \interpretfo{\floor{x\theta'}}{}$ for all $x$ because $\mathscr{E}$ is injective. By Lemma~\ref{lem:subst-congruence} it follows that $\interpretfo{\floor{(\lambda x.\>t)\theta}}{} = \interpretfo{\floor{(\lambda x.\>t)\theta'}}{}$, which proves that $\LL^{{\mathrm{GH}}}$ is well defined. In our example, for all $\xi$ we have $\LL^{{\mathrm{GH}}}(\xi,\lambda x.\> x) = \mathscr{E}([\cst{lam}_{\lambda x.\>x}]) = \mathscr{E}([\cst{f}_0])$, which is the identity. If $\xi(y) = [\cst{a}_0]$, then $\LL^{{\mathrm{GH}}}(\xi,\lambda x.\> y) = \mathscr{E}([\cst{lam}_{\lambda x.\>\cst{a}}])$, which is the constant function $c \mapsto [\cst{a}_0]$. Similarly, if $\xi(y) = [\cst{b}_0]$, then $\LL^{{\mathrm{GH}}}(\xi,\lambda x.\> y)$ is the constant function $c \mapsto [\cst{b}_0]$. This concludes the definition of the interpretation $\III^{\smash{{\mathrm{GH}}}}=(\UU^{{\mathrm{GH}}},\IIty^{{\mathrm{GH}}},\II^{{\mathrm{GH}}},\LL^{{\mathrm{GH}}})$. 
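As a toy sanity check of the running example, the following sketch hard-codes the assumed equivalence classes from the text and the action of $\mathscr{E}^0_{\iota\fun\iota}$ on them (these values are assumptions taken from the example, not computed from a saturated clause set; Python dicts stand in for set-theoretic functions). It confirms that $\mathscr{D}_{\iota\fun\iota}$ contains the identity and the two constant functions, but not the swapping function.

```python
# Universe U_iota = {[a0], [b0]} from the running example (hard-coded assumption).
A, B = "[a0]", "[b0]"
U_iota = [A, B]

# Action of E^0 on the three classes of U_{iota->iota}:
E0 = {
    "[f0]":       {c: c for c in U_iota},  # identity: [a0] = [f1(a0)], [b0] = [f1(b0)]
    "[lam x.a]":  {c: A for c in U_iota},  # constant function to [a0]
    "[lam x.b]":  {c: B for c in U_iota},  # constant function to [b0]
}
D_iota_to_iota = list(E0.values())  # D_{iota->iota} is the image of E^0

# The swapping function on D_iota is not in the image of E^0:
swap = {A: B, B: A}
assert swap not in D_iota_to_iota
```

The absence of the swapping function illustrates why $\UU^{{\mathrm{GH}}}$ consists of subsets of function spaces rather than full function spaces.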
It remains to show that $\smash{\III^{\smash{{\mathrm{GH}}}}}$ is proper. In a proper interpretation, the denotation $\interpretho{t}{}$ of a term $t$ does not depend on the representative of $t$ modulo $\beta\eta$, but since we have not yet shown $\III^{\smash{{\mathrm{GH}}}}$ to be proper, we cannot rely on this property. For this reason, we use $\lambda$-terms in the following three lemmas and mark all $\beta\eta$-reductions explicitly. The higher-order interpretation $\III^{\smash{{\mathrm{GH}}}}$ relates to the first-order interpretation $R$ as follows: \begin{lemmax}\label{lem:ceil-floor-correspondence} Given a ground $\lambda$-term $t$, we have $\interpretho{t}{} = \mathscr{E}(\interpretfo{\floor{\betanf{t}}}{})$ \end{lemmax} \begin{proof} By induction on $t$. % Assume that $\interpretho{s}{} = \mathscr{E}(\interpretfo{\floor{\betanf{s}}}{})$ for all proper subterms $s$ of~$t$. % If $t$ is of the form $\cst{f}\typeargs{\tuple{\tau}}$, then % \begin{align*} \interpretho{t}{} &= \II^{{\mathrm{GH}}}(\cst{f},\DD_{\tuple{\tau}})\\[-.5\jot] &=\mathscr{E}(\II(\cst{f}_0,\UU_{\floor{\tuple{\tau}}}))\\[-.5\jot] &=\mathscr{E}(\interpretfo{\cst{f}_0\typeargs{\floor{\tuple{\tau}}}}{})\\[-.5\jot] &=\mathscr{E}(\interpretfo{\floor{{\cst{f}\typeargs{\tuple{\tau}}}}}{})\\[-.5\jot] &=\mathscr{E}(\interpretfo{\floor{\betanf{\cst{f}\typeargs{\tuple{\tau}}} }}{}) =\mathscr{E}(\interpretfo{\floor{\betanf{t}}}{}) \end{align*} % If $t$ is an application $t = t_1\>t_2$, where $t_1$ is of type $\tau\fun\upsilon$, then \begin{align*} \interpretho{t_1\>t_2}{} &= \interpretho{t_1}{} (\interpretho{t_2}{}) \\[-.5\jot] &\overset{\!\scriptscriptstyle\text{IH}\!}{=} \mathscr{E}_{\tau\fun\upsilon}(\interpretfo{\floor{\betanf{t_1}}}{}) (\mathscr{E}_\tau(\interpretfo{\floor{\betanf{t_2}}}{}))\\[-.5\jot] % &\overset{\kern-10mm\text{Def }\mathscr{E}\kern-10mm}{=} \enskip\mathscr{E}_\upsilon(\interpretfo{\floor{\betanf{(t_1\>t_2)}}}{}) \end{align*} % If $t$ is a $\lambda$-expression, then 
\begin{align*} \interprethoxi{\lambda x.\>u} &= \LL^{{\mathrm{GH}}} (\xi, (\lambda x.\>u)) \\[-.5\jot] & = \mathscr{E}(\interpretfo{\floor{\betanf{(\lambda x.\>u)\theta}}}{}) \\[-.5\jot] & = \mathscr{E}(\interpretfo{\floor{\betanf{(\lambda x.\>u)}}}{}) \end{align*} where $\theta$ is a substitution such that $\DD_{\alpha\theta}=\xi(\alpha)$ and $\mathscr{E}(\interpretfo{\floor{x\theta}}{}) = \xi(x)$. \qed \end{proof} We need to show that the interpretation $\III^{\smash{{\mathrm{GH}}}}=(\UU^{{\mathrm{GH}}},\IIty^{{\mathrm{GH}}},\II^{{\mathrm{GH}}},\LL^{{\mathrm{GH}}})$ is proper. In the proof, we will need to employ the following lemma, which is very similar to the substitution lemma (Lemma~\ref{lem:subst-lemma-general}), but we must prove it here for our particular interpretation $\III^{\smash{{\mathrm{GH}}}}$ because we have not shown that $\III^{\smash{{\mathrm{GH}}}}$ is proper yet. \begin{lemmax}[Substitution lemma] $\interpret{\tau\rho}{\III_\mathsf{ty}^{\mathrm{GH}}}{\xi} = \interpret{\tau}{\III_\mathsf{ty}^{\mathrm{GH}}}{\xi'}$ and $\vphantom{(_{(_(}}\interpretho{t\rho}{\xi} = \interpretho{t}{\xi'}$ for all $\lambda$-terms $t$, all~$\tau\in\Ty_\HH$ and all grounding substitutions $\rho$, where $\xi'(\alpha) = \vphantom{(_{(_(}}\interpret{\alpha\rho}{\III_\mathsf{ty}^{\mathrm{GH}}}{\xi}$ for all type variables $\alpha$ and $\xi'(x) = \interpretho{x\rho}{\xi}$ for all term variables $x$. \label{lem:subst-lemma-special} \end{lemmax} \begin{proof} We proceed by induction on the structure of $\tau$ and $t$. The proof is identical to the one of Lemma~\ref{lem:subst-lemma-general}, except for the last step, which uses properness of the interpretation, a property we cannot assume here. However, here, we have the assumption that $\rho$ is a grounding substitution. 
Therefore, if $t$ is a $\lambda$-expression, we argue as follows: \begin{align*} \interpretho{(\lambda z.\>u)\rho}{\xi} &=\interpretho{(\lambda z.\>u\rho')}{\xi}&&\text{ where $\rho'(z)=z$ and $\rho'(x)=\rho(x)$ for $x\neq z$}\\ &= \LL^{{\mathrm{GH}}}(\xi,(\lambda z.\>u\rho')) &&\text{ by the definition of the term denotation}\\ &= \mathscr{E}(\interpretfo{\floor{\betanf{(\lambda z.\>u)\rho\theta}}}{\xi}) &&\text{ for some $\theta$ by the definition of $\LL^{{\mathrm{GH}}}$}\\ &= \mathscr{E}(\interpretfo{\floor{\betanf{(\lambda z.\>u)\rho}}}{\xi}) &&\text{ because $(\lambda z.\>u)\rho$ is ground}\\ &\overset{\smash{*}}{=} \LL^{{\mathrm{GH}}}(\xi',\lambda z.\>u) &&\text{ by the definition of $\LL^{{\mathrm{GH}}}$ and Lemma~\ref{lem:ceil-floor-correspondence}}\\ &= \interpretho{\lambda z.\>u}{\xi'} &&\text{ by the definition of the term denotation} \end{align*} The step $*$ is justified as follows: We have $\LL^{{\mathrm{GH}}}(\xi',\lambda z.\>u) = \mathscr{E}(\interpretfo{\floor{\betanf{(\lambda z.\>u)\theta'}}}{\xi})$ by the definition of $\LL^{{\mathrm{GH}}}$, if $\theta'$ is a substitution such that $\smash{\DD_{\alpha\theta'}}=\xi'(\alpha)$ for all $\alpha$ and $\mathscr{E}(\interpretfo{\floor{\betanf{x\theta'}}}{\xi}) = \xi'(x)$ for all $x$. By the definition of $\xi'$ and by Lemma~\ref{lem:ceil-floor-correspondence}, $\rho$ is such a substitution. Hence, $\LL^{{\mathrm{GH}}}(\xi',\lambda z.\>u) = \mathscr{E}(\interpretfo{\floor{\betanf{(\lambda z.\>u)\rho}}}{\xi})$. \qed \end{proof} \begin{lemmax} \label{lem:proper} The interpretation $\III^{\smash{{\mathrm{GH}}}}$ is proper. \end{lemmax} \begin{proof} We must show that $\interprethoxi{(\lambda x.\>t)}(a) = \interpretho{t}{\xi[x\mapsto a]}$ for all $\lambda$-expressions $\lambda x.\>t$, all valuations $\xi$, and all values $a$. 
\begin{align*} \interprethoxi{\lambda x.\>t}(a) &= \LL^{{\mathrm{GH}}}(\xi,\lambda x.\>t)(a) &&\text{by the definition of $\interpretho{\phantom{\cdot}}{}$}\\ &= \mathscr{E}(\interpretfo{\floor{\betanf{(\lambda x.\>t)\theta}}}{})(a) &&\text{by the definition of $\LL^{{\mathrm{GH}}}$ for some $\theta$}\\[-\jot] &&&\text{such that $\mathscr{E}(\interpretfo{\floor{z\theta}}{}) = \xi(z)$ for all $z$}\\[-\jot] &&&\text{and $\smash{\DD}_{\alpha\theta} = \xi(\alpha)$ for all $\alpha$}\\ &= \mathscr{E}(\interpretfo{\floor{\betanf{((\lambda x.\>t)\theta\;s)}}}{}) &&\text{by the definition of $\mathscr{E}$}\\[-\jot] &&&\text{where $\mathscr{E}(\interpretfo{\floor{s}}{})=a$}\\ &= \mathscr{E}(\interpretfo{\floor{\betanf{t(\theta[x\mapsto s])}}}{}) &&\text{by $\beta$-reduction}\\ &= \interpretho{t(\theta[x\mapsto s])}{} &&\text{by Lemma~\ref{lem:ceil-floor-correspondence}}\\ &= \interpretho{t}{\xi[x\mapsto a]} &&\text{by Lemma~\ref{lem:subst-lemma-special}} \end{align*} \vskip-\baselineskip \vskip-\belowdisplayskip ~\qed % \end{proof} \begin{lemmax}\label{lem:B-inverse-model} $\III^{\smash{{\mathrm{GH}}}}$ is a model of $N$. \end{lemmax} \begin{proof} By Lemma \ref{lem:ceil-floor-correspondence}, we have $\interpretho{t}{} = \mathscr{E}(\interpretfo{\floor{t}}{})$ for all $t \in \TT_\GH$. Since $\mathscr{E}$ is a bijection, it follows that any (dis)equation $s \doteq t \in \CC_\GH$ is true in $\III^{\smash{{\mathrm{GH}}}}$ if and only if $\floor{s \doteq t}$ is true in $R$. Hence, a clause $C \in \CC_\GH$ is true in $\III^{\smash{{\mathrm{GH}}}}$ if and only if $\floor{C}$ is true in $R$. By Theorem~\ref{thm:GF-refutational-completeness} and the assumption that $\bot \notin N$, $R$ is a model of $\floor{N}$---% that is, for all clauses $C\in N$, $\floor{C}$ is true in $R$. Hence, all clauses $C\in N$ are true in $\III^{\smash{{\mathrm{GH}}}}$ and therefore $\III^{\smash{{\mathrm{GH}}}}$ is a model of $N$. 
\qed \end{proof} We summarize the results of this subsection in the following theorem: \begin{sloppypar} \begin{theoremx}[Ground static refutational completeness] Let $\mathit{GHSel}$ be a selection function on $\CC_\GH$. Then the inference system $\mathit{GHInf}^\mathit{GHSel}$ is statically refutationally complete \hbox{w.r.t.}\ $(\mathit{GHRed}_{\mathrm{I}}, \mathit{GHRed}_{\mathrm{C}})$. In other words, if $N \subseteq \CC_\GH$ is a clause set saturated \hbox{w.r.t.}\ $\mathit{GHInf}^\mathit{GHSel}$ and $\mathit{GHRed}_{\mathrm{I}}^\mathit{GHSel}$, then $N \models \bot$ if and only if $\bot \in N$. \label{thm:GH-refutational-completeness} \end{theoremx} \end{sloppypar} The construction of $\III^{\smash{{\mathrm{GH}}}}$ relies on specific properties of $R$. It would not work with an arbitrary first-order interpretation. Transforming a higher-order interpretation into a first-order interpretation is easier: \begin{lemmax} \label{lem:gf-interpretation-from-gh} Given a clausal higher-order interpretation $\mathscr{I}$ on ${\mathrm{GH}}$, there exists a first-order interpretation $\mathscr{I}^{\mathrm{GF}}$ on ${\mathrm{GF}}$ such that for any clause $C\in\CC_\GH$ the truth values of $C$ in $\mathscr{I}$ and of $\floor{C}$ in $\mathscr{I}^{\mathrm{GF}}$ coincide. \end{lemmax} \begin{proof} Let $\mathscr{I} = (\III_\mathsf{ty},\mathscr{J},\mathscr{L})$ be a clausal higher-order interpretation. Let $\mathscr{U}^{\mathrm{GF}}_\tau = \interpret{\tau}{\III_\mathsf{ty}}{}$ be the first-order type universe for the ground type $\tau$. For a symbol $\smash{\cst{f}^{\tuple{\upsilon}}_{\!j}} \in \Sigma_{\mathrm{GF}}$, let $\mathscr{J}^{\mathrm{GF}} (\smash{\cst{f}^{\tuple{\upsilon}}_{\!j}}) = \interpret{\cst{f}\typeargs{\tuple{\upsilon}}}{\mathscr{I}}{}$ (up to currying). For a symbol $\cst{lam}_{\lambda x.\>t} \in \Sigma_{\mathrm{GF}}$, let $\mathscr{J}^{\mathrm{GF}} (\cst{lam}_{\lambda x.\>t}) = \interpret{\lambda x.\>t}{\mathscr{I}}{}$. 
This defines a first-order interpretation $\mathscr{I}^{\mathrm{GF}} = (\mathscr{U}^{\mathrm{GF}},\mathscr{J}^{\mathrm{GF}})$. We need to show that for any $C\in\CC_\GH$, $\mathscr{I} \models C$ if and only if $\mathscr{I}^{\mathrm{GF}} \models \floor{C}$. It suffices to show that $\interpret{t}{\mathscr{I}}{} = \interpret{\floor{t}}{\mathscr{I}^{\mathrm{GF}}}{}$ for all terms $t\in\TT_\GH$. We prove this by induction on the structure of the $\eta$-short $\beta$-normal form of $t$. If $t$ is a $\lambda$-expression, this is obvious. If $t$ is of the form $\cst{f}\typeargs{\tuple{\upsilon}}\>\tuple{s}_{\!j}$, then $\floor{t} = \smash{\cst{f}^{\tuple{\upsilon}}_{\!j}}(\floor{\tuple{s}_{\!j}})$ and hence $\interpret{\floor{t}}{\mathscr{I}^{\mathrm{GF}}}{} = \mathscr{J}^{\mathrm{GF}} (\smash{\cst{f}^{\tuple{\upsilon}}_{\!j}})(\interpret{\floor{\tuple{s}_{\!j}}}{\mathscr{I}^{\mathrm{GF}}}{}) = \interpret{\cst{f}\typeargs{\tuple{\upsilon}}}{\mathscr{I}}{}(\interpret{\floor{\tuple{s}_{\!j}}}{\mathscr{I}^{\mathrm{GF}}}{}) \eqIH \interpret{\cst{f}\typeargs{\tuple{\upsilon}}}{\mathscr{I}}{}(\interpret{\tuple{s}_{\!j}}{\mathscr{I}}{}) = \interpret{t}{\mathscr{I}}{}$. \qed \end{proof} \oursubsection{The Nonground Higher-Order Level} To lift the result to the nonground level, we employ the saturation framework of Waldmann et al.~\cite{waldmann-et-al-2020-saturation}. It is easy to see that the entailment relation $\models$ on ${\mathrm{GH}}$ is a consequence relation in the sense of the framework. We need to show that our redundancy criterion on ${\mathrm{GH}}$ is a redundancy criterion in the sense of the framework and that ${\mathcalx{G}}$ is a grounding function in the sense of the framework: \begin{lemmax} \label{lem:redundancy-criterion} The redundancy criterion for ${\mathrm{GH}}$ is a redundancy criterion in the sense of \Section~2 of the saturation framework. 
\end{lemmax} \begin{sloppypar} \begin{proof} We must prove the conditions (R1) to (R4) of the saturation framework. Adapted to our context, they state the following for all clause sets $N,N' \subseteq \CC_\GH$: \begin{enumerate}[(R1)] \item if $N \models \bot$, then $N \setminus \mathit{GHRed}_{\mathrm{C}}(N) \models \bot$; \item if $N \subseteq N'$, then $\mathit{GHRed}_{\mathrm{C}}(N) \subseteq \mathit{GHRed}_{\mathrm{C}}(N')$ and $\mathit{GHRed}_{\mathrm{I}}(N) \subseteq \mathit{GHRed}_{\mathrm{I}}(N')$; \item if $N' \subseteq \mathit{GHRed}_{\mathrm{C}}(N)$, then $\mathit{GHRed}_{\mathrm{C}}(N) \subseteq \mathit{GHRed}_{\mathrm{C}}(N \setminus N')$ and $\mathit{GHRed}_{\mathrm{I}}(N) \subseteq \mathit{GHRed}_{\mathrm{I}}(N \setminus N')$; \item if $\iota \in \mathit{GHInf}$ and $\mathit{concl}(\iota) \in N$, then $\iota \in \mathit{GHRed}_{\mathrm{I}}(N)$. \end{enumerate} The proof is analogous to the proof of Lemma~4.10 % of Bentkamp et al.~\cite{bentkamp-et-al-lfhosup-arxiv}, using Lemma~\ref{lem:gf-interpretation-from-gh}. \qed \end{proof} \end{sloppypar} \begin{lemmax} \label{lem:grounding-function} The grounding functions ${\mathcalx{G}}^\mathit{GHSel}$ for $\mathit{GHSel}\in{\mathcalx{G}}(\mathit{HSel})$ are grounding functions in the sense of \Section~3 of the saturation framework. \end{lemmax} \begin{proof} We must prove the conditions (G1), (G2), and (G3) of the saturation framework. Adapted to our context, they state the following: \begin{enumerate}[(G1)] \item ${\mathcalx{G}}(\bot) = \{ \bot \}$; \item for every $C \in \CC_\HH$, if $\bot \in {\mathcalx{G}}(C)$, then $C = \bot$; \item for every $\iota \in \mathit{HInf}$, ${\mathcalx{G}}^\mathit{GHSel}(\iota) \subseteq \mathit{GHRed}_{\mathrm{I}}^\mathit{GHSel}({\mathcalx{G}}(\mathit{concl}(\iota)))$. \end{enumerate} Clearly, $C = \bot$ if and only if $\bot \in {\mathcalx{G}}(C)$ if and only if ${\mathcalx{G}}(C) = \{\bot\}$, proving (G1) and (G2). 
For every $\iota\in\mathit{HInf}$, by the definition of ${\mathcalx{G}}^\mathit{GHSel}$, we have $\mathit{concl}({\mathcalx{G}}^\mathit{GHSel}(\iota))\subseteq{\mathcalx{G}}(\mathit{concl}(\iota))$, and thus (G3) by (R4). \qed \end{proof} To lift the completeness result of the previous subsection to the nonground calculus $\mathit{HInf}$, we employ Theorem~14 of the saturation framework, which, adapted to our context, is stated as follows. The theorem uses the notation $\mathit{Inf}(N)$ to denote the set of $\mathit{Inf}$-inferences whose premises are in $N$, for an inference system $\mathit{Inf}$ and a clause set $N$. Moreover, it uses Herbrand entailment $\models_{\mathcalx{G}}$ on $\CC_\HH$, which is defined so that $N_1 \models_{\mathcalx{G}} N_2$ if and only if ${\mathcalx{G}}(N_1) \models {\mathcalx{G}}(N_2)$. \begin{theoremx}[Lifting theorem] If $\mathit{GHInf}^\mathit{GHSel}$ is statically refutationally complete \hbox{w.r.t.}\ $(\mathit{GHRed}_{\mathrm{I}}^\mathit{GHSel}, \mathit{GHRed}_{\mathrm{C}})$ for every $\mathit{GHSel} \in {\mathcalx{G}}(\mathit{HSel})$, and if for every $N\subseteq \CC_\HH$ that is saturated \hbox{w.r.t.}\ $\mathit{HInf}$ and $\mathit{HRed}_{\mathrm{I}}$ there exists a $\mathit{GHSel} \in {\mathcalx{G}}(\mathit{HSel})$ such that $\mathit{GHInf}^\mathit{GHSel}({\mathcalx{G}}(N)) \subseteq {\mathcalx{G}}^\mathit{GHSel}(\mathit{HInf}(N)) \cup \mathit{GHRed}_{\mathrm{I}}^\mathit{GHSel}({\mathcalx{G}}(N))$, then also $\mathit{HInf}$ is statically refutationally complete \hbox{w.r.t.}\ $(\mathit{HRed}_{\mathrm{I}}, {\mathit{HRed}}_{\mathrm{C}})$ and $\models_{\mathcalx{G}}$. \label{thm:lifting-theorem} \end{theoremx} \begin{proof} This is almost an instance of Theorem~14 % of the saturation framework. We take $\CC_\HH$ for $\mathbf{F}$, $\CC_\GH$ for $\mathbf{G}$, and ${\mathcalx{G}}(\mathit{HSel})$ for $Q$. 
It is easy to see that the entailment relation $\models$ on ${\mathrm{GH}}$ is a consequence relation in the sense of the framework. By Lemma~\ref{lem:redundancy-criterion} and~\ref{lem:grounding-function}, $(\mathit{GHRed}_{\mathrm{I}}^\mathit{GHSel},\allowbreak\mathit{GHRed}_{\mathrm{C}})$ is a redundancy criterion in the sense of the framework, and ${\mathcalx{G}}^\mathit{GHSel}$ are grounding functions in the sense of the framework, for all $\mathit{GHSel}\in{\mathcalx{G}}(\mathit{HSel})$. The redundancy criterion $(\mathit{HRed}_{\mathrm{I}},{\mathit{HRed}}_{\mathrm{C}})$ matches exactly the intersected lifted redundancy criterion % $\mathit{Red}^{\mathrel\cap,\sqsupset}$ of the saturation framework. Their Theorem~14 % states the theorem only for ${\sqsupset} = \varnothing$. By their Lemma~16, % it also holds if ${\sqsupset} \not= \varnothing$. \qed \end{proof} Let $N\subseteq\CC_\HH$ be a clause set saturated \hbox{w.r.t.}\ $\mathit{HInf}$ and $\mathit{HRed}_{\mathrm{I}}$. We assume that $\mathit{HSel}$ fulfills the selection restriction that a literal $\greensubterm{L}{\,y}$ must not be selected if $y\> \tuple{u}_n$, with $n > 0$, is a $\succeq$-maximal term of the clause, as required in Definition~\ref{def:sel}. For the above theorem to apply, we need to show that there exists a selection function $\mathit{GHSel}\in{\mathcalx{G}}(\mathit{HSel})$ such that all inferences $\iota\in\mathit{GHInf}^\mathit{GHSel}$ with $\mathit{prems}(\iota)\in{\mathcalx{G}}(N)$ are liftable or redundant. Here, for $\iota$ to be \emph{liftable} means that $\iota$ is a $\smash{{\mathcalx{G}}^\mathit{GHSel}}$-ground instance of a $\smash{\mathit{HInf}}$-inference from $N$; for $\iota$ to be \emph{redundant} means that $\iota\in\smash{\mathit{GHRed}_{\mathrm{I}}^\mathit{GHSel}({\mathcalx{G}}(N))}$. 
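Using the notation $\mathit{Inf}(N)$ introduced above, the two notions can be restated symbolically (this is only a notational summary of the definitions just given):
\[
\iota \text{ is liftable} \;\Longleftrightarrow\; \iota \in {\mathcalx{G}}^\mathit{GHSel}(\iota') \text{ for some } \iota' \in \mathit{HInf}(N);
\qquad
\iota \text{ is redundant} \;\Longleftrightarrow\; \iota \in \mathit{GHRed}_{\mathrm{I}}^\mathit{GHSel}({\mathcalx{G}}(N)).
\]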
To choose the right selection function $\mathit{GHSel}\in{\mathcalx{G}}(\mathit{HSel})$, we observe that each ground clause $C\in{\mathcalx{G}}(N)$ must have at least one corresponding clause $D\in N$ such that $C$ is a ground instance of $D$. We choose one of them for each $C\in{\mathcalx{G}}(N)$, which we denote by ${\mathcalx{G}}^{-1}(C)$. Then let $\mathit{GHSel}$ select those literals in $C$ that correspond to literals selected by $\mathit{HSel}$ in ${\mathcalx{G}}^{-1}(C)$. With respect to this selection function $\mathit{GHSel}$, we can show that all inferences from ${\mathcalx{G}}(N)$ are liftable or redundant: \begin{lemmax} \label{lem:eligibility-lifting} Let ${\mathcalx{G}}^{-1}(C) = D \in N$ and $D\theta = C$. Let $\sigma$ and $\rho$ be substitutions such that $x\sigma\rho = x\theta$ for all variables $x$ in $D$. Let $L$ be a (strictly) $\succeq$-eligible literal in $C$ \hbox{w.r.t.}\ $\mathit{GHSel}$. Then there exists a (strictly) $\succsim$-eligible literal $L'$ in $D$ \hbox{w.r.t.}\ $\sigma$ and $\mathit{HSel}$ such that $L'\theta = L$. \end{lemmax} \begin{proof} If $L \in \mathit{GHSel}(C)$, then there exists $L'$ such that $L'\theta = L$ and $L' \in \mathit{HSel}(D)$ by the definition of ${\mathcalx{G}}^{-1}$. Otherwise, $L$ is $\succeq$-maximal in $C$. Since $C = D\sigma\rho$, there are literals $L'$ in $D\sigma$ such that $L'\rho = L$. Choose $L'$ to be $\succsim$-maximal among them. Then $L'$ is $\succsim$-maximal in $D\sigma$ because for any literal $L''\in D\sigma$ with $L'' \succsim L'$, we have $L''\rho \succeq L'\rho = L$ and hence $L''\rho = L$ by $\succeq$-maximality of $L$. If $L$ is \relax{strictly} $\succeq$-maximal in $C$, $L'$ is also \relax{strictly} $\succsim$-maximal in $D\sigma$ because a duplicate of $L'$ in $D\sigma$ would imply a duplicate of $L$ in $C$. 
\qed \end{proof} \begin{lemmax}[Lifting of \infname{ERes}, \infname{EFact}, \infname{GArgCong}, and \infname{GExt}] All \infname{ERes}, \infname{EFact}, \infname{GArgCong}, and \infname{GExt} inferences from ${\mathcalx{G}}(N)$ are liftable. \label{lem:lifting1} \end{lemmax} \begin{proof} \infname{ERes}: Let $\iota\in\mathit{GHInf}^\mathit{GHSel}$ be an \infname{ERes} inference with $\mathit{prems}(\iota)\in{\mathcalx{G}}(N)$. Then $\iota$ is of the form \[\namedinference{ERes}{C\theta~=~C'\theta \mathrel\lor s\theta \not\eq s'\theta}{C'\theta}\] where ${\mathcalx{G}}^{-1}(C\theta) = C = C' \mathrel\lor s \not\eq s'$ and the literal $s\theta \not\eq s'\theta$ is $\succeq$-eligible \hbox{w.r.t.}\ $\mathit{GHSel}$. Since $s\theta$ and $s'\theta$ are unifiable and ground, we have $s\theta = s'\theta$. Thus, there exists an idempotent $\sigma \in \csu(s,s')$ such that for some substitution~$\rho$ and for all variables $x$ in $C$, we have $x\sigma\rho = x\theta$. By Lemma~\ref{lem:eligibility-lifting}, we may assume without loss of generality that $s \not\eq s'$ is $\succsim$-eligible in $C$ \hbox{w.r.t.}\ $\sigma$ and $\mathit{HSel}$. Hence, the following inference $\iota'\in\mathit{HInf}$ is applicable: \[\namedinference{ERes}{C' \mathrel\lor s \not\eq s'}{C'\sigma}\] Then $\iota$ is the $\sigma\rho$-ground instance of $\iota'$ and is therefore liftable. 
\medskip \noindent \infname{EFact}:\enskip Analogously, if $\iota\in\mathit{GHInf}^\mathit{GHSel}$ is an \infname{EFact} inference with $\mathit{prems}(\iota)\in{\mathcalx{G}}(N)$, then $\iota$ is of the form \[\namedinference{EFact}{C\theta~=~C'\theta \mathrel\lor s'\theta \approx t'\theta \mathrel\lor s\theta \approx t\theta} {C'\theta \mathrel\lor t\theta \not\eq t'\theta \mathrel\lor s\theta \approx t'\theta}\] where ${\mathcalx{G}}^{-1}(C\theta) = C = C' \mathrel\lor s' \approx t' \mathrel\lor s \approx t$, the literal $s\theta \approx t\theta$ is $\succeq$-eligible in $C$ \hbox{w.r.t.}\ $\mathit{GHSel}$, and $s\theta\not\prec t\theta$. Then $s\not\prec t$. Moreover, $s\theta$ and $s'\theta$ are unifiable and ground. Hence, $s\theta = s'\theta$ and there exists an idempotent $\sigma \in \csu(s,s')$ such that for some substitution~$\rho$ and for all variables $x$ in $C$, we have $x\sigma\rho = x\theta$. By Lemma~\ref{lem:eligibility-lifting}, we may assume without loss of generality that $s\approx t$ is $\succsim$-eligible in $C$ \hbox{w.r.t.}\ $\sigma$ and $\mathit{HSel}$. It follows that the following inference $\iota'\in\mathit{HInf}$ is applicable: \[\namedinference{EFact}{C' \mathrel\lor s' \approx t' \mathrel\lor s \approx t} {(C' \mathrel\lor t \not\eq t' \mathrel\lor s \approx t')\sigma} \] Then $\iota$ is the $\sigma\rho$-ground instance of $\iota'$ and is therefore liftable. \medskip \noindent \infname{GArgCong}: Let $\iota\in\mathit{GHInf}^\mathit{GHSel}$ be a \infname{GArgCong} inference with $\mathit{prems}(\iota)\in{\mathcalx{G}}(N)$. Then $\iota$ is of the form \[\namedinference{GArgCong}{C\theta~=~C'\theta \mathrel\lor s\theta \approx s'\theta} {C'\theta \mathrel\lor s\theta\>\tuple{u}_n \approx s'\theta\>\tuple{u}_n}\] where ${\mathcalx{G}}^{-1}(C\theta) = C = C' \mathrel\lor s \approx s'$, the literal $s\theta \approx s'\theta$ is strictly $\succeq$-eligible \hbox{w.r.t.}\ $\mathit{GHSel}$, and $s\theta$ and $s'\theta$ are of functional type. 
It follows that $s$ and $s'$ have either a functional or a polymorphic type. Let $\sigma$ be the most general substitution such that $s\sigma$ and $s'\sigma$ take $n$ arguments. By Lemma~\ref{lem:eligibility-lifting}, we may assume without loss of generality that $s \approx s'$ is strictly $\succsim$-eligible in $C$ \hbox{w.r.t.}\ $\sigma$ and $\mathit{HSel}$. Hence the following inference $\iota'\in\mathit{HInf}$ is applicable: \[\namedinference{ArgCong}{C' \mathrel\lor s \approx s'} {C'\sigma \mathrel\lor s\sigma\>\tuple{x}_n \approx s'\sigma\>\tuple{x}_n}\] Since $\sigma$ is the most general substitution that ensures well-typedness of the conclusion, $\iota$ is a ground instance of $\iota'$ and is therefore liftable. \medskip \noindent \infname{GExt}: The conclusion of a \infname{GExt} inference in $\mathit{GHInf}$ is by definition a ground instance of the conclusion of an \infname{Ext} inference in $\mathit{HInf}$. Hence, the \infname{GExt} inference is a ground instance of the \infname{Ext} inference. Therefore it is liftable. \qed \end{proof} Some of the \infname{Sup} inferences in $\mathit{GHInf}$ are liftable as well: \begin{lemmax}[Instances of green subterms] Let $s$ be a $\lambda$-term in $\eta$-short $\beta$-normal form, let $\sigma$ be a substitution, and let $p$ be a green position of both $s$ and $\betanf{s\sigma}$. Then $\betanf{(s|_p)\sigma} = (\betanf{s\sigma})|_p$. \label{lem:arg-subterm-instances} \end{lemmax} \begin{proof} By induction on $p$. If $p=\varepsilon$, then $\betanf{(s|_p)\sigma} = \betanf{s\sigma} = (\betanf{s\sigma})|_p$. If $p=i.p'$, then $% s = \cst{f}\typeargs{\tuple{\tau}} \> s_1 \dots s_n $ % and $% s\sigma = \cst{f}\typeargs{\tuple{\tau}\sigma} \> (s_1\sigma) \dots (s_n\sigma) $, % where $1 \leq i \leq n$ and $p'$ is a green position of $s_i$. Clearly, $\beta\eta$-normalization steps of $s\sigma$ can take place only in proper subterms. 
So $% \betanf{s\sigma} = \cst{f}\typeargs{\tuple{\tau}\sigma} \> (\betanf{s_1\sigma}) \dots (\betanf{s_n\sigma}). $ % Since $p = i.p'$ is a green position of $\betanf{s\sigma}$, $p'$ must be a green position of $\betanf{(s_i\sigma)}$. By the induction hypothesis, $\betanf{(s_i|_{p'})\sigma} = (\betanf{s_i\sigma})|_{p'}$. Therefore $\betanf{(s|_p)\sigma} = \betanf{(s|_{i.p'})\sigma} = \betanf{(s_i|_{p'})\sigma} = (\betanf{s_i\sigma})|_{p'} = (\betanf{s\sigma})|_p$. \qed \end{proof} \begin{lemmax}[Lifting of \infname{Sup}] Let $\iota\in\mathit{GHInf}^\mathit{GHSel}$ be a \infname{Sup} inference \[\namedinference{Sup} {\overbrace{D'\theta \mathrel\lor t\theta \approx t'\theta}^{\vphantom{\cdot}\smash{D\theta}} \hskip1.25em \overbrace{C'\theta \mathrel\lor \greensubterm{s\theta}{t\theta}_p \doteq s'\theta}^{\vphantom{\cdot}\smash{C\theta}}} {D'\theta \mathrel\lor C'\theta \mathrel\lor \greensubterm{s\theta}{ t'\theta}_p \doteq s'\theta} \] where ${\mathcalx{G}}^{-1}(D\theta) = D = D' \mathrel\lor t \approx t'\in N$, $s\theta = \greensubterm{s\theta}{t\theta}_p$, and ${\mathcalx{G}}^{-1}(C\theta) = C = C' \mathrel\lor s \doteq s'\in N$. We assume that $s$, $t$, $s\theta$, and $t\theta$ are represented by $\lambda$-terms in $\eta$-short $\beta$-normal form. Let $p'$ be the longest prefix of $p$ that is a green position of $s$. Since $\varepsilon$ is a green position of $s$, the longest prefix always exists. Let $u = s|_{p'}$. Suppose one of the following conditions applies:\ {\upshape(i)} $u$ is a deeply occurring variable in $C$; {\upshape(ii)} $p = p'$ and the variable condition holds for $D$ and~$C$; or {\upshape(iii)} $p \neq p'$ and $u$ is not a variable. Then $\iota$ is liftable. 
\label{lem:lifting2} \end{lemmax} \begin{proof} The \infname{Sup} inference conditions for $\iota$ are that $t\theta\approx t'\theta$ is strictly $\succeq$-eligible, $s\theta\doteq s'\theta$ is strictly $\succeq$-eligible if positive and $\succeq$-eligible if negative, $D\theta \not\succsim C\theta$, $t\theta \not\precsim t'\theta$, and $s\theta \not\precsim s'\theta$. We assume that $s$, $t$, $s\theta$, and $t\theta$ are represented by $\lambda$-terms in $\eta$-short $\beta$-normal form. By\ Lemma~\ref{lem:arg-subterm-instances}, $u\theta$ agrees with $s\theta|_{p'}$ (considering both as terms rather than as $\lambda$-terms). \medskip\noindent \textsc{Case 1:}\enskip We have (a) $p = p'$, (b) $u$ is not fluid, and (c) $u$ is not a variable deeply occurring in $C$. Then $u\theta = s\theta|_{p'} = s\theta|_p = t\theta$. Since $\theta$ is a unifier of $u$ and $t$, there exists an idempotent $\sigma \in \csu(t,u)$ such that for some substitution~$\rho$ and for all variables $x$ occurring in $D$ and $C$, we have $x\sigma\rho = x\theta$. The inference conditions can be lifted: (Strict) eligibility of $t\theta\approx t'\theta$ and $s\theta\doteq s'\theta$ \hbox{w.r.t.}\ $\mathit{GHSel}$ implies (strict) eligibility of $t \approx t'$ and $s \doteq s'$ \hbox{w.r.t.}\ $\sigma$ and $\mathit{HSel}$; $D\theta \not\succsim C\theta$ implies $D \not\succsim C$; $t\theta \not\precsim t'\theta$ implies $t \not\precsim t'$; and $s\theta \not\precsim s'\theta$ implies $s \not\precsim s'$. Moreover, by (a) and (c), condition~(ii) must hold and thus the variable condition holds for $D$ and~$C$. Hence there is the following \infname{Sup} inference $\iota'\in\mathit{HInf}$: \[\namedinference{Sup} {D' \mathrel\lor t \approx t' \hskip1.25em C' \mathrel\lor \greensubterm{s}{u}_p \doteq s'} {(D' \mathrel\lor C' \mathrel\lor \greensubterm{s}{t'}_p \doteq s')\sigma} \] Then $\iota$ is the $\sigma\rho$-ground instance of $\iota'$ and therefore liftable. 
\medskip\noindent \textsc{Case 2:}\enskip We have (a) $p \neq p'$, or (b) $u$ is fluid, or (c) $u$ is a variable deeply occurring in $C$. We will first show that (a) implies (b) or (c). Suppose (a) holds but neither (b) nor (c) holds. Then condition (iii) must hold---i.e., $u$ is not a variable. Moreover, since (b) does not hold, $u$ cannot have the form $y\>\tuple{u}_n$ for a variable $y$ and $n \geq 1$. If $u$ were of the form $\cst{f}\typeargs{\tuple{\tau}} \> s_1 \dots {s_n}$ with $n \geq 0$, $u\theta$ would have the form $\cst{f}\typeargs{\tuple{\tau}\theta} \> (s_1\theta)\dots (s_n\theta)$, but then there is some $1 \leq i \leq n$ such that $p'.i$ is a prefix of $p$ and $s|_{p'.i}$ is a green subterm of $s$, contradicting the maximality of $p'$. % So $u$ must be a $\lambda$-expression, but since $t\theta$ is a proper green subterm of $u\theta$, $u\theta$ cannot be a $\lambda$-expression, yielding a contradiction. We may thus assume that (b) or (c) holds. Let $p = p'.p''$. Let $z$ be a fresh variable. Define a substitution $\theta'$ that maps this variable $z$ to $\lambda y.\> \greensubterm{(s\theta|_{p'})}{\, y}_{p''}$ and any other variable $w$ to $w\theta$. Clearly, $(z \> t)\theta' = \greensubterm{(s\theta|_{p'})}{t\theta}_{p''} = s\theta|_{p'} = u\theta = u\theta'$. Since $\theta'$ is a unifier of $u$ and $z \> t$, there exists an idempotent $\sigma \in \csu(z \> t, u)$ such that for some substitution~$\rho$, for $x=z$, and for all variables $x$ in $C$ and $D$, we have $x\sigma\rho = x\theta'$. As in case 1, (strict) eligibility of the ground literals implies (strict) eligibility of the nonground literals. Moreover, by construction of $\theta'$, $t\theta' = t\theta \not= t'\theta = t'\theta'$ implies $(z \> t)\theta' \not= (z \> t')\theta'$, and thus $(z \> t)\sigma \not= (z \> t')\sigma$. 
Since we also have (b) or (c), there is the following inference $\iota'$: \[\namedinference{FluidSup} {D' \mathrel\lor t \approx t' \hskip1.25em C' \mathrel\lor \greensubterm{s}{u}_{p'} \doteq s'} {(D' \mathrel\lor C' \mathrel\lor \greensubterm{s}{z \> t'}_{p'} \doteq s')\sigma} \] Then $\iota$ is the $\sigma\rho$-ground instance of $\iota'$ and therefore liftable. \qed \end{proof} The other \infname{Sup} inferences might not be liftable, but they are redundant: \begin{lemmax}\label{lem:nonliftable-sup-redundant} Let $\iota\in\mathit{GHInf}^\mathit{GHSel}$ be a \infname{Sup} inference from ${\mathcalx{G}}(N)$ not covered by Lemma~\ref{lem:lifting2}. Then $\iota\in\mathit{GHRed}_{\mathrm{I}}^\mathit{GHSel}({\mathcalx{G}}(N))$. \end{lemmax} \begin{proof} Let $C\theta = C'\theta \lor s\theta \doteq s'\theta$ and $D\theta = D'\theta \lor t\theta \approx t'\theta$ be the premises of $\iota$, where $s\theta \doteq s'\theta$ and $t\theta \approx t'\theta$ are the literals involved in the inference, $s\theta \succ s'\theta$, $t\theta \succ t'\theta$, and $C'$, $D'$, $s$, $s'$, $t$, $t'$ are the respective subclauses and terms in $C = {\mathcalx{G}}^{-1}(C\theta)$ and $D = {\mathcalx{G}}^{-1}(D\theta).$ Then the inference $\iota$ has the form % \[\namedinference{Sup} {{D'\theta \mathrel\lor { t\theta \approx t'\theta}} \hskip1.25em {C'\theta \mathrel\lor s\theta\lang\, t\theta\rang \doteq s'\theta}} {D'\theta \mathrel\lor C'\theta \mathrel\lor s\theta\lang\, t'\theta\rang \doteq s'\theta}\] % To show that $\iota \in \mathit{GHRed}_{\mathrm{I}}^\mathit{GHSel}({\mathcalx{G}}(N))$, it suffices to show $\{D\in\floor{{\mathcalx{G}}(N)}\mid D\prec \floor{C\theta}\}\models\floor{\mathit{concl}(\iota)}$. To this end, let $\mathscr{I}$ be an interpretation in ${\mathrm{GF}}$ such that $\mathscr{I}\models\{D\in\floor{{\mathcalx{G}}(N)}\mid D\prec \floor{C\theta}\}$. We need to show that $\mathscr{I}\models \floor{\mathit{concl}(\iota)}$. 
If $\floor{D'\theta}$ is true in $\mathscr{I}$, then obviously $\mathscr{I}\models \floor{\mathit{concl}(\iota)}$. So we assume that $\floor{D'\theta}$ is false in $\mathscr{I}$. Since $C\theta \succ D\theta$ by the \infname{Sup} order conditions, it follows that $\mathscr{I}\models \floor{t\theta \approx t'\theta}$. Therefore, it suffices to show $\mathscr{I}\models \floor{C\theta}$. Let $p$ be the position in $s\theta$ where $\iota$ takes place and $p'$ be the longest prefix of $p$ that is a green position of $s$. Let $u = s|_{p'}$. Since Lemma~\ref{lem:lifting2} does not apply to $\iota$, $u$ is not a deeply occurring variable; if $p=p'$, the variable condition does not hold for $D$ and $C$; and if $p\neq p'$, $u$ is a variable. This means either the position $p$ does not exist in $s$, because it is below an unapplied variable that does not occur deeply in $C$, or $s|_p$ is an unapplied variable that does not occur deeply in $C$ and for which the variable condition does not hold. \medskip\noindent \textsc{Case 1:}\enskip The position $p$ does not exist in $s$ because it is below a variable $x$ that does not occur deeply in $C$. Then $t\theta$ is a green subterm of~$x\theta$ and hence a green subterm of $x\theta \> \tuple{w}$ for any arguments~$\tuple{w}$. Let $v$ be the term that we obtain by replacing $t\theta$ by $t'\theta$ in $x\theta$ at the relevant position. Since $\mathscr{I}\models \floor{t\theta\approx t'\theta}$, by congruence, $\mathscr{I}\models \floor{x\theta \> \tuple{w}\approx v \> \tuple{w}}$ for any arguments $\tuple{w}$. Hence, $\mathscr{I}\models \floor{C\theta}$ if and only if $\mathscr{I}\models \floor{C\{x\mapsto v\}\theta}$ by congruence. Here, it is crucial that the variable does not occur deeply in $C$ because congruence does not hold in $\mathcalx{F}$-encoded terms below $\lambda$-binders. 
By the inference conditions, we have $t\theta \succ t'\theta$, which implies $\floor{C\theta} \succ \floor{C\{x\mapsto v\}\theta}$ by compatibility with green contexts. Therefore, by the assumption about $\mathscr{I}$, we have $\mathscr{I}\models \floor{C\{x\mapsto v\}\theta}$ and hence $\mathscr{I}\models \floor{C\theta}$. \medskip\noindent \textsc{Case 2:}\enskip The term $s|_p$ is a variable $x$ that does not occur deeply in $C$ and for which the variable condition does not hold. % From this, we know that $C\theta \succeq C''\theta$, where $C'' = C\{x\mapsto t'\}$. % % % % % % % % % % % % We cannot have $C\theta = C''\theta$ because $x\theta = t\theta\neq t'\theta$ and $x$ occurs in $C$. Hence, we have $C\theta \succ C''\theta$. By the definition of $\mathscr{I}$, $C\theta \succ C''\theta$ implies $\mathscr{I}\models\floor{C''\theta}$. We will use equalities that are true in $\mathscr{I}$ to rewrite $\floor{C\theta}$ into $\floor{C''\theta}$, which implies $\mathscr{I}\models \floor{C\theta}$ by congruence. By saturation, every \infname{ArgCong} inference $\iota'$ from $D$ is in $\mathit{HRed}_{\mathrm{I}}(N)$% ---i.e., ${\mathcalx{G}}(\mathit{concl}(\iota')) \allowbreak\subseteq {\mathcalx{G}}(N) \cup \mathit{GHRed}_{\mathrm{C}}({\mathcalx{G}}(N))$. Hence, $D'\theta \lor t\theta \> \tuple{u} \approx t'\theta \> \tuple{u}$ is in ${\mathcalx{G}}(N) \cup \mathit{GHRed}_{\mathrm{C}}({\mathcalx{G}}(N))$ for any % ground arguments $\tuple{u}$. 
We observe that whenever $t\theta \> \tuple{u}$ and $t'\theta \> \tuple{u}$ are smaller than the $\succeq$-maximal term of $C\theta$ for some arguments $\tuple{u}$, we have \[\mathscr{I}\models \floor{t\theta \> \tuple{u}} \approx \floor{t'\theta \> \tuple{u}} \tag{$*$}\label{eq:congruences} \] To show this, we assume that $t\theta \> \tuple{u}$ and $t'\theta \> \tuple{u}$ are smaller than the $\succeq$-maximal term of $C\theta$ and we distinguish two cases: If $t\theta$ is smaller than the $\succeq$-maximal term of $C\theta$, all terms in $D'\theta$ are smaller than the $\succeq$-maximal term of $C\theta$ and hence $D'\theta \lor t\theta \> \tuple{u} \approx t'\theta \> \tuple{u} \prec C\theta$. If, on the other hand, $t\theta$ is equal to the $\succeq$-maximal term of $C\theta$, then $t\theta \> \tuple{u}$ and $t'\theta \> \tuple{u}$ are smaller than $t\theta$. Hence $t\theta \> \tuple{u} \approx t'\theta \> \tuple{u} \prec t\theta \approx t'\theta$ and $D'\theta \lor t\theta \> \tuple{u} \approx t'\theta \> \tuple{u} \prec D\theta \prec C\theta$. In both cases, since $D'\theta$ is false in $\mathscr{I}$, by the definition of $\mathscr{I}$, we have (\ref{eq:congruences}). Next, we show the equivalence of $C\theta$ and $C''\theta$ via rewriting with equations of the form (\ref{eq:congruences}) where $t\theta \> \tuple{u}$ and $t'\theta \> \tuple{u}$ are smaller than the $\succeq$-maximal term of $C\theta$. Since $x$ does not occur deeply in~$C$, every occurrence of $x$ in $C$ is not inside a $\lambda$-expression and not inside an argument of an applied variable. Therefore, all occurrences of $x$ in $C$ are in a green subterm of the form $x\>\tuple{v}$ for some terms $\tuple{v}$ that do not contain $x$. 
Hence, every occurrence of $x$ in $C$ corresponds to a subterm $\floor{(x\>\tuple{v})\theta} = \floor{t\theta\>\tuple{v}\theta}$ in $\floor{C\theta}$ and to a subterm $\floor{(x\>\tuple{v})\{x\mapsto t'\}\theta} = \floor{t'\theta\>\tuple{v}\{x\mapsto\nobreak t'\}\theta} = \floor{t'\theta\>\tuple{v}\theta}$ in $\floor{C''\theta}$. These are the only positions where $C\theta$ and $C''\theta$ differ. \looseness=-1 To justify the necessary rewrite steps from $\floor{t\theta\>\tuple{v}\theta}$ into $\floor{t'\theta\>\tuple{v}\theta}$ using (\ref{eq:congruences}), we must show that $\floor{t\theta\>\tuple{v}\theta}$ and $\floor{t'\theta\>\tuple{v}\theta}$ are smaller than the $\succeq$-maximal term in $\floor{C\theta}$ for the relevant $\tuple{v}$. If $\tuple{v}$ is the empty tuple, we do not need to show this because $\mathscr{I} \models \floor{t\theta \approx t'\theta}$ follows from $\floor{D\theta}$'s being true and $\floor{D'\theta}$'s being false. If $\tuple{v}$ is nonempty, it suffices to show that $x\>\tuple{v}$ is not a $\succeq$-maximal term in $C$. Then $\floor{t\theta\>\tuple{v}\theta}$ and $\floor{t'\theta\>\tuple{v}\theta}$, which correspond to the term $x\>\tuple{v}$ in $C$, cannot be $\succeq$-maximal in $\floor{C\theta}$ and $\floor{C''\theta}$. Hence they must be smaller than the $\succeq$-maximal term in $\floor{C\theta}$ because they are subterms of $\floor{C\theta}$ and $\floor{C''\theta}\prec \floor{C\theta}$, respectively. To show that $x\>\tuple{v}$ is not a $\succeq$-maximal term in $C$, we make a case distinction on whether $s\theta \doteq s'\theta$ is selected in $C\theta$ or $s\theta$ is the $\succeq$-maximal term in $C\theta$. One of these must hold because $s\theta \doteq s'\theta$ is $\succeq$-eligible in $C\theta$. If it is selected, by the selection restrictions, $x$ cannot be the head of a $\succeq$-maximal term of $C$. 
If $s\theta$ is the $\succeq$-maximal term in $C\theta$, we can argue that $x$ is a green subterm of $s$ and, since $x$ does not occur deeply, $s$ cannot be of the form $x\>\tuple{v}$ for a nonempty $\tuple{v}$. This justifies the necessary rewrites between $\floor{C\theta}$ and $\floor{C''\theta}$ and it follows that $\mathscr{I} \models \floor{C\theta}$. \qed \end{proof} With these properties of our inference systems in place, Theorem~\ref{thm:lifting-theorem} guarantees static and dynamic refutational completeness of $\mathit{HInf}$ \hbox{w.r.t.}\ $\mathit{HRed}_{\mathrm{I}}$. However, this theorem gives us refutational completeness \hbox{w.r.t.}\ the Herbrand entailment $\models_{\mathcalx{G}}$, defined as $N_1 \models_{\mathcalx{G}} N_2$ if ${\mathcalx{G}}(N_1) \models {\mathcalx{G}}(N_2)$, whereas our semantics is Tarski entailment $\models$, defined as $N_1 \models N_2$ if any model of $N_1$ is a model of $N_2$. To repair this mismatch, we use the following lemma, which can be proved along the lines of Lemma~4.16 of Bentkamp et al.~\cite{bentkamp-et-al-lfhosup-arxiv}, using Lemma~\ref{lem:subst-lemma-general} and Lemma~\ref{lem:apply-subst}. \begin{lemmax} \label{lem:herbrand-tarski} For $N \subseteq \CC_\HH$, we have $N \models_{\mathcalx{G}} \bot$ if and only if $N \models \bot$. \end{lemmax} \begin{theoremx}[Static refutational completeness] The inference system $\mathit{HInf}$ is statically refutationally complete \hbox{w.r.t.}\ $(\mathit{HRed}_{\mathrm{I}}, {\mathit{HRed}}_{\mathrm{C}})$. In other words, if $N \subseteq \CC_\HH$ is a clause set saturated \hbox{w.r.t.}\ $\mathit{HInf}$ and $\mathit{HRed}_{\mathrm{I}}$, then we have $N \models \bot$ if and only if $\bot \in N$. \label{thm:static-refutational-completeness} \end{theoremx} \begin{proof} We apply Theorem~\ref{thm:lifting-theorem}. 
By Theorem~\ref{thm:GH-refutational-completeness}, $\mathit{GHInf}^\mathit{GHSel}$ is statically refutationally complete for all $\mathit{GHSel}\in{\mathcalx{G}}(\mathit{HSel})$. By Lemmas~\ref{lem:lifting1}, \ref{lem:lifting2}, and~\ref{lem:nonliftable-sup-redundant}, for every saturated $N\subseteq \CC_\HH$, there exists a selection function $\mathit{GHSel}\in{\mathcalx{G}}(\mathit{HSel})$ such that all inferences $\iota\in\mathit{GHInf}^\mathit{GHSel}$ with $\mathit{prems}(\iota)\in{\mathcalx{G}}(N)$ either are ${\mathcalx{G}}^\mathit{GHSel}$-ground instances of $\mathit{HInf}$-inferences from $N$ or belong to $\smash{\mathit{GHRed}_{\mathrm{I}}^\mathit{GHSel}({\mathcalx{G}}(N))}$. Theorem~\ref{thm:lifting-theorem} implies that if $N \subseteq \CC_\HH$ is a clause set saturated \hbox{w.r.t.}\ $\mathit{HInf}$ and $\mathit{HRed}_{\mathrm{I}}$, then $N \models_{\mathcalx{G}} \bot$ if and only if $\bot \in N$. By Lemma~\ref{lem:herbrand-tarski}, this also holds for the Tarski entailment $\models$. That is, if $N \subseteq \CC_\HH$ is a clause set saturated \hbox{w.r.t.}\ $\mathit{HInf}$ and $\mathit{HRed}_{\mathrm{I}}$, then $N \models \bot$ if and only if $\bot \in N$. \qed \end{proof} From static refutational completeness, we can easily derive dynamic refutational completeness. \begin{theoremx}[Dynamic refutational completeness] The inference system $\mathit{HInf}$ is dynamically refutationally complete \hbox{w.r.t.}\ $(\mathit{HRed}_{\mathrm{I}}, {\mathit{HRed}}_{\mathrm{C}})$, as defined in Definition~\ref{def:dyn-complete}. \label{thm:dynamic-refutational-completeness} \end{theoremx} \begin{proof} By Theorem~17 of the saturation framework, this follows from Theorem~\ref{thm:static-refutational-completeness} and Lemma~\ref{lem:herbrand-tarski}. \qed \end{proof} \section{Extensions} \label{sec:extensions} The core calculus can be extended with various optional rules.
Although these are not necessary for refutational completeness, they can allow the prover to find more direct proofs. Most of these rules are concerned with the areas covered by the \infname{FluidSup} rule and the extensionality axiom. Two of the optional rules below rely on the notion of ``orange subterms.'' \begin{definitionx} A $\lambda$-term $t$ is an \emph{orange subterm} of a $\lambda$-term $s$ if $s = t$; or if $s = \cst{f}\typeargs{\tuple{\tau}}\> \tuple{s}$ and $t$ is an orange subterm of $s_i$ for some~$i$; or if $s = x\> \tuple{s}$ and $t$ is an orange subterm of $s_i$ for some~$i$; or if $s = (\lambda x.\> u)$ and $t$ is an orange subterm of $u$. \end{definitionx} For example, in the term $\cst{f}\> (\cst{g}\> \cst{a})\> (y\> \cst{b})\> (\lambda x.\> \cst{h}\> \cst{c}\> (\cst{g}\> x))$, the orange subterms are all the green subterms---$\cst{a}$, $\cst{g}\> \cst{a}$, $y\> \cst{b}$, $\lambda x.\> \cst{h}\> \cst{c}\> (\cst{g}\> x)$ and the whole term---and in addition $\cst{b}$, $\cst{c}$, $x$, $\cst{g}\> x$, and $\cst{h}\> \cst{c}\> (\cst{g}\> x)$. Following Convention~\ref{conv:beta-eta-normal-form}, this notion is lifted to $\beta\eta$-equivalence classes via representatives in $\eta$-short $\beta$-normal form. We write $t = \orangesubterm{s}{\tuple{x}_n}{u}$ to indicate that $u$ is an orange subterm of $t$, where $\tuple{x}_n$ are the variables bound in the \emph{orange context} around $u$, from outermost to innermost. If $n = 0$, we simply write $t = \yellowsubterm{s}{u}$. Once a term $\orangesubterm{s}{\tuple{x}_n}{u}$ has been introduced, we write $\orangesubtermeta{s}{\tuple{x}_n}{u'}$ to denote the same context with a different subterm $u'$ at that position. The $\eta$ subscript is a reminder that $u'$ is not necessarily an orange subterm of $\orangesubtermeta{s}{\tuple{x}_n}{u'}$ due to potential applications of $\eta$-reduction. 
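To make the recursive definition concrete, here is a minimal Python model of orange subterm enumeration (the term encoding and all names are our own illustration, not part of the calculus): strings stand for constants and variables, App for applications, and Lam for $\lambda$-expressions.

```python
from dataclasses import dataclass

# Hypothetical term encoding (ours, for illustration): strings are constants
# or variables; App(head, args) is a head applied to arguments; Lam(var, body)
# is a lambda-expression. The constant/variable distinction is irrelevant
# here, since the orange recursion descends into the arguments of both kinds
# of heads (unlike the green one, which stops below applied variables).
@dataclass(frozen=True)
class App:
    head: str
    args: tuple

@dataclass(frozen=True)
class Lam:
    var: str
    body: object

def orange_subterms(t):
    """Yield the orange subterms of t: t itself, the orange subterms of
    every argument of an application, and those of a lambda's body."""
    yield t
    if isinstance(t, App):
        for s in t.args:
            yield from orange_subterms(s)
    elif isinstance(t, Lam):
        yield from orange_subterms(t.body)

# The example above: f (g a) (y b) (lambda x. h c (g x)).
example = App("f", (App("g", ("a",)),
                    App("y", ("b",)),
                    Lam("x", App("h", ("c", App("g", ("x",)))))))
```

On this example, the enumeration produces exactly the ten orange subterms listed above, including $\cst{b}$, $\cst{c}$, $x$, and $\cst{g}\>x$, which are not green subterms.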
For example, if $\orangesubterm{s}{x}{\cst{g}\>x\>x} = \cst{h}\>\cst{a}\>(\lambda x.\>\cst{g}\>x\>x)$, then $\orangesubtermeta{s}{x}{\cst{f}\>x} = \cst{h}\>\cst{a}\>(\lambda x.\>\cst{f}\>x) = \cst{h}\>\cst{a}\>\cst{f}$. Demodulation, which destructively rewrites using an equality $t \approx t'$, is available at green positions. In addition, a variant of demodulation rewrites in orange contexts: \[\namedsimp{\ensuremath{\lambda}DemodExt}{t \approx t' \hskip1.25em \greensubterm{C}{\orangesubterm{s}{\tuple{x}}{t\sigma}}\phantom{\kern-0.083333em'_\eta} \hskip1.25em \phantom{\orangesubterm{s}{\tuple{x}}{t\sigma} \approx \orangesubtermeta{s}{\tuple{x}}{t'\kern-0.083333em\sigma}}} {t \approx t' \hskip1.25em \greensubterm{C}{\orangesubtermeta{s}{\tuple{x}}{t'\kern-0.083333em\sigma}} \hskip1.25em \orangesubterm{s}{\tuple{x}}{t\sigma} \approx \orangesubtermeta{s}{\tuple{x}}{t'\kern-0.083333em\sigma}}\] where the term $t\sigma$ may refer to the bound variables $\tuple{x}$. The following side conditions apply:\ \begin{enumerate} \item[1.] $\betanf{\orangesubterm{s}{\tuple{x}}{t\sigma}}$ is a $\lambda$-expression or a term of the form $y\>\tuple{u}_n$ with $n>0$; \item[2.] $\orangesubterm{s}{\tuple{x}}{t\sigma} \succ \orangesubtermeta{s}{\tuple{x}}{t'\kern-0.083333em\sigma}$;\hfill 3.\enskip $\greensubterm{C}{\orangesubterm{s}{\tuple{x}}{t\sigma}} \succ \orangesubterm{s}{\tuple{x}}{t\sigma} \approx \orangesubtermeta{s}{\tuple{x}}{t'\kern-0.083333em\sigma}$ \hfill\hbox{} \end{enumerate} Condition~3 ensures that the second premise is redundant \hbox{w.r.t.}\ the conclusions and may be removed. The double bar indicates that the conclusions collectively make the premises redundant and can replace them. The third conclusion, which is entailed by $t \approx t'$ and (\infname{Ext}), could be safely omitted if the corresponding (\infname{Ext}) instance is smaller than the second premise.
But in general, the third conclusion is necessary for the proof, and the variant of \infname{$\lambda$DemodExt} that omits it---let us call it \infname{$\lambda$Demod}---might not preserve refutational completeness. An instance of \infname{$\lambda$DemodExt}, where $\cst{g}\>z$ is rewritten to $\cst{f}\>z\>z$ under a $\lambda$-binder, follows: \[\namedsimp{\ensuremath{\lambda}DemodExt}{ \cst{g}\>x \approx \cst{f}\>x\>x \hskip1.25em \cst{k}\>\rlap{\ensuremath{(\lambda z.\> \cst{h}\>(\cst{g}\>z))\approx \cst{c}}} \phantom{(\lambda z.\> \cst{h}\>(\cst{f}\>z\>z)\approx \cst{c}) \hskip1.25em (\lambda z.\> \cst{h}\>(\cst{g}\>z)) \approx (\lambda z.\> \cst{h}\>(\cst{f}\>z\>z))}} {\cst{g}\>x \approx \cst{f}\>x\>x \hskip1.25em \cst{k}\>(\lambda z.\> \cst{h}\>(\cst{f}\>z\>z))\approx \cst{c} \hskip1.25em (\lambda z.\> \cst{h}\>(\cst{g}\>z)) \approx (\lambda z.\> \cst{h}\>(\cst{f}\>z\>z))}\] \begin{lemmax} \infname{\ensuremath{\lambda}DemodExt} is sound and preserves refutational completeness of the calculus. \end{lemmax} \begin{proof} Soundness of the first conclusion is obvious. Soundness of the second and third conclusion follows from congruence and extensionality using the premises. Preservation of completeness is justified by redundancy. Specifically, we justify the deletion of the second premise by showing that it is redundant \hbox{w.r.t.}\ the conclusions. By definition, it is redundant if for every ground instance $\greensubterm{C}{\orangesubterm{s}{\tuple{x}}{t\sigma}}\theta \in {\mathcalx{G}}(\greensubterm{C}{\orangesubterm{s}{\tuple{x}}{t\sigma}})$, its encoding $\floor{\greensubterm{C}{\orangesubterm{s}{\tuple{x}}{t\sigma}}\theta}$ is entailed by $\floor{{\mathcalx{G}}(N)}$, where $N$ are the conclusions of $\infname{\ensuremath{\lambda}DemodExt}$. 
The first conclusion cannot help us prove redundancy because $\betanf{\orangesubterm{s}{\tuple{x}}{t\sigma}\theta}$ might be a $\lambda$-expression and then $\floor{\orangesubterm{s}{\tuple{x}}{t\sigma}\theta}$ is a symbol that is unrelated to $\floor{t\sigma\theta}$. Instead, we use the $\theta$-instances of the last two conclusions. By Lemma~\ref{lem:subterm-correspondence1}, $\floor{\greensubterm{C}{\orangesubtermeta{s}{\tuple{x}}{t'\kern-0.083333em\sigma}}\theta}$ has $\floor{\orangesubtermeta{s}{\tuple{x}}{t'\kern-0.083333em\sigma}\theta}$ as a subterm. If this subterm is replaced by $\floor{\orangesubterm{s}{\tuple{x}}{t\sigma}\theta}$, we obtain $\floor{\greensubterm{C}{\orangesubterm{s}{\tuple{x}}{t\sigma}}\theta}$. Hence, the $\mathcalx{F}$-encodings of the $\theta$-instances of the last two conclusions entail the $\mathcalx{F}$-encoding of the $\theta$-instance of the second premise by congruence. Due to the side condition that the second premise is larger than the second and third conclusion, by stability under grounding substitutions, the $\theta$-instances of the last two conclusions must be smaller than the $\theta$-instance of the second premise. Thus, the second premise is redundant. \qed \end{proof} The next simplification rule can be used to prune arguments of applied variables if the arguments can be expressed as functions of the remaining arguments. For example, the clause $\subterm{C}{\, y\>\cst{a}\>\cst{b}\>(\cst{f}\>\cst{b}\>\cst{a}){,}\allowbreak\; y\>\cst{b}\>\cst{d}\>(\cst{f}\>\cst{d}\>\cst{b})}$, in which $y$ occurs twice, can be simplified to $\subterm{C}{\, y'\>\cst{a}\>\cst{b}{,}\; y'\>\cst{b}\>\cst{d}}$. Here, for each occurrence of $y$, the third argument can be computed by applying $\cst{f}$ to the second and first arguments. 
The rule can also be used to remove the repeated arguments in $y\>\cst{b}\>\cst{b} \not\eq y\>\cst{a}\>\cst{a}$, the static argument~$\cst{a}$ in $y\>\cst{a}\>\cst{c} \not\eq y\>\cst{a}\>\cst{b}$, and all four arguments in $y\>\cst{a}\>\cst{b} \not\eq z\>\cst{b}\>\cst{d}$. It is stated as \[\namedsimp{PruneArg}{C}{C\sigma}\] where the following conditions apply: \begin{enumerate} \item[1.] $\sigma = \{y \mapsto \lambda\tuple{x}_{\!j}.\> y'\> \tuple{x}_{\!j-1}\}$;\hfill 2.\enskip $y'$ is a fresh variable;\hfill 3.\enskip $C\sqsupset C\sigma$; \hfill\hbox{} \item[4.] the minimum number~$k$ of arguments passed to any occurrence of $y$ in the clause $C$ is at least $j$; \item[5.] there exists a term $t$ containing no variables bound in the clause such that for all terms of the form $y\>\tuple{s}_k$ occurring in the clause we have $s_{\!j} = t\> \tuple{s}_{\!j-1}\>s_{\!j+1}\ldots s_k$. \end{enumerate} Clauses with a static argument correspond to the case $t := (\lambda \bar{x}_{\!j-1}\> x_{\!j+1} \ldots x_k.\; u)$, where $u$ is the static argument (containing no variables bound in $t$) and $j$ is its index in $y$'s argument list. The repeated argument case corresponds to $t := (\lambda \bar{x}_{\!j-1} \> x_{\!j+1} \ldots x_k.\; x_i)$, where $i$ is the index of the repeated argument's mate. \begin{lemmax} \infname{PruneArg} is sound and preserves refutational completeness of the calculus. \end{lemmax} \begin{proof} The rule is sound because it simply applies a substitution to $C$. It preserves completeness because the premise $C$ is redundant \hbox{w.r.t.}\ the conclusion $C\sigma$. This is because the sets of ground instances of $C$ and $C\sigma$ are the same and $C \sqsupset C\sigma$. Clearly $C\sigma$ is an instance of $C$. We will show the converse:\ that $C$ is an instance of $C\sigma$.
Let $\rho = \{y' \mapsto \lambda\tuple{x}_{\!j-1}\> x_{\!j+1} \ldots x_k.\;y\>\tuple{x}_{\!j-1}\allowbreak\> (t\>\tuple{x}_{\!j-1}\> x_{\!j+1} \ldots x_k)\> x_{\!j+1} \ldots x_k\}$. We show $C\sigma\rho = C$. Consider an occurrence of $y$ in $C$. By the side conditions, it will have the form $y\>\tuple{s}_k\>\tuple{u}$, where $s_{\!j} = t\> \tuple{s}_{\!j-1}\>s_{\!j+1}\ldots s_k$. Hence, $(y\>\tuple{s}_k)\sigma\rho = (y'\>\tuple{s}_{\!j-1}\>s_{\!j+1} \ldots s_k)\rho = y\>\tuple{s}_{\!j-1}\>(t\> \tuple{s}_{\!j-1}\>s_{\!j+1}\ldots s_k)\>s_{\!j+1} \ldots s_k = y\>\tuple{s}_k$. Thus, $C\sigma\rho = C$. \qed \end{proof} We designed an algorithm that efficiently computes the subterm $u$ of the term $t = (\lambda x_1 \ldots \, x_{\!j-1} \, x_{\!j+1} \ldots \, x_k.\allowbreak\; u)$ occurring in the side conditions of $\infname{PruneArg}$. The algorithm is incomplete, but our tests suggest that it discovers most cases of prunable arguments that occur in practice. The algorithm works by maintaining a mapping of pairs $(y, i)$ of functional variables $y$ and indices $i$ of their arguments to a set of candidate terms for $u$. For an occurrence $y \> \tuple{s}_{n}$ of~$y$ and for an argument $s_{\!j}$, the algorithm approximates this set by computing all possible ways in which subterms of $s_{\!j}$ that are equal to any other $s_i$ can be replaced with the variable $x_i$ corresponding to the $i$th argument of $y$. The candidate sets for all occurrences of $y$ are then intersected. An arbitrary element of the final intersection is returned as the term~$u$. For example, suppose that $y\>\cst{a}\>(\cst{f} \> \cst{a})\>\cst{b}$ and $y\>z\>(\cst{f} \>z)\>\cst{b}$ are the only occurrences of $y$ in the clause $C$. The initial mapping is $\{1 \mapsto \TT_\HH{,}\; 2 \mapsto \TT_\HH{,}\; 3 \mapsto \TT_\HH\}$. 
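The candidate-set computation just described can be sketched in a small Python model (a simplified illustration under our own assumptions, not Zipperposition's actual implementation): terms are encoded as strings or nested tuples such as ("f", "a"), and the placeholder "xi" stands for $y$'s $i$th argument.

```python
from itertools import product

def abstractions(t, env):
    """All ways of expressing the term t, where env maps sibling argument
    values to their placeholders "xi": any subterm equal to a sibling
    argument may, but need not, be replaced by that placeholder."""
    res = set()
    if isinstance(t, tuple):  # application: abstract each argument independently
        head, args = t[0], t[1:]
        for combo in product(*(abstractions(a, env) for a in args)):
            res.add((head,) + combo)
    else:                     # constant or variable
        res.add(t)
    if t in env:              # the whole subterm equals a sibling argument
        res.add(env[t])
    return res

def prune_candidates(occurrences):
    """For the occurrences y s1 ... sk of a functional variable y, intersect
    the candidate sets for each argument position over all occurrences."""
    k = len(occurrences[0])
    cands = [None] * k        # None plays the role of the full term set
    for occ in occurrences:
        for j in range(k):
            env = {occ[i]: "x" + str(i + 1) for i in range(k) if i != j}
            cj = abstractions(occ[j], env)
            cands[j] = cj if cands[j] is None else cands[j] & cj
    return cands
```

For the two occurrences above, encoded as ("a", ("f", "a"), "b") and ("z", ("f", "z"), "b"), prune_candidates computes an empty candidate set for the first argument, a set containing only $\cst{f}\>x_1$ for the second, and $\{\cst{b}\}$ for the third.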
After computing the ways in which each argument can be expressed using the remaining ones for the first occurrence and intersecting the sets, we get $\{1 \mapsto \{\cst{a}\}{,}\; 2 \mapsto \{\cst{f}\>\cst{a}{,}\; \cst{f}\>x_1\}{,}\; 3 \mapsto \{\cst{b}\}\}$, where $x_1$ represents $y$'s first argument. Finally, after computing the corresponding sets for the second occurrence of $y$ and intersecting them with the previous candidate sets, we get $\{1 \mapsto \emptyset{,}\; 2 \mapsto \{\cst{f}\>x_1\}{,}\; 3 \mapsto \{\cst{b}\}\}.$ The final mapping shows that we can remove the second argument, since it can be expressed as a function of the first argument: $t = (\lambda x_1 \, x_3.\; \cst{f}\> x_1)$. We can also remove the third argument, since its value is fixed: $t = (\lambda x_1 \, x_2.\; \cst{b})$. An example where our procedure fails is the pair of occurrences $y\>(\lambda x.\> \cst{a})\>(\cst{f}\>\cst{a})\>\cst{c}$ and $y\>(\lambda x.\> \cst{b})\>(\cst{f}\>\cst{b})\>\cst{d}$. \infname{PruneArg} can be used to eliminate the second argument by taking $t := (\lambda x_1\>x_3.\; \cst{f}\>(x_1\>x_3))$, but our algorithm will not detect this. Following the literature \cite{gupta-et-al-2014,steen-benzmueller-2018}, we provide a rule for negative extensionality: \[\namedinference{NegExt}{C' \mathrel\lor s \not\eq s'} {C' \mathrel\lor s\>(\cst{sk}\typeargs{\tuple{\alpha}}\>\tuple{y}) \not\eq s'\>(\cst{sk}\typeargs{\tuple{\alpha}}\>\tuple{y})}\] where the following conditions apply: \begin{enumerate} \item[1.] $\cst{sk}$ is a fresh Skolem symbol;\hfill 2.\enskip $s \not\eq s'$ is $\succsim$-eligible in the premise;\hfill\hbox{} \item[3.] $\tuple{\alpha}$ and $\tuple{y}$ are the type and term variables occurring free in the literal $s \not\eq s'$. \end{enumerate} Negative extensionality can be applied as an inference rule at any time or as a simplification rule during preprocessing of the initial problem.
The rule uses Skolem terms $\cst{sk}\>\tuple{y}$ rather than $\cst{diff}\> s\> s'$ because they tend to be more compact. \begin{lemmax}[\infname{NegExt}'s satisfiability preservation] Let $N\subseteq\CC_\HH$ and let $E$ be the conclusion of a \infname{NegExt} inference from $N.$ If $N \mathrel\cup \{\text{\upshape(\infname{Ext})}\}$ is satisfiable, then $N \mathrel\cup \{\text{\upshape(\infname{Ext})}, E\}$ is satisfiable. \end{lemmax} \begin{proof} Let $\mathscr{I}$ be a model of $N \mathrel\cup \{\text{\upshape(\infname{Ext})}\}.$ We need to construct a model of $N \mathrel\cup \{\text{\upshape(\infname{Ext})}, E\}.$ Since (\infname{Ext}) holds in~$\mathscr{I}$, so does its instance $s\>(\cst{diff}\> s\> s')\not\eq s'\>(\cst{diff}\> s\> s') \mathrel\lor s \approx s'$. We extend the model $\mathscr{I}$ to a model $\mathscr{I}'$, interpreting $\cst{sk}$ such that $\mathscr{I}' \models \cst{sk}\typeargs{\tuple{\alpha}}\>\tuple{y} \approx \cst{diff}\> s\> s'$. The Skolem symbol $\cst{sk}$ takes the free type and term variables of $s \not\eq s'$ as arguments, which include all the free variables of $\cst{diff}\> s\> s'$, allowing us to extend $\mathscr{I}$ in this way. By assumption, the premise $C' \mathrel\lor s \not\eq s'$ is true in $\mathscr{I}$ and hence in $\mathscr{I}'$. Since the above instance of (\infname{Ext}) holds in $\mathscr{I}$, it also holds in $\mathscr{I}'$. Hence, the conclusion $C' \mathrel\lor s\>(\cst{sk}\typeargs{\tuple{\alpha}_m}\>\tuple{y}_n) \not\eq s'\>(\cst{sk}\typeargs{\tuple{\alpha}_m}\>\tuple{y}_n)$ also holds, which can be seen by resolving the premise against the (\infname{Ext}) instance and unfolding the defining equation of~$\cst{sk}$. \qed \end{proof} \looseness=-1 One reason why the extensionality axiom is so prolific is that both sides of its maximal literal, $y\>(\cst{diff}\> y\> z) \not\eq z\>(\cst{diff}\> y\> z)$, are fluid. 
As a pragmatic alternative to the axiom, we introduce the ``abstracting'' rules \infname{AbsSup}, \infname{AbsERes}, and \infname{AbsEFact} with the same premises as the core \infname{Sup}, \infname{ERes}, and \infname{EFact}, respectively. We call these rules collectively \infname{Abs}. Each new rule shares all the side conditions of the corresponding core rule except that of the form $\sigma\in\csu(s,t)$. Instead, it lets $\sigma$ be the most general unifier of $s$ and $t$'s types and adds this condition: Let $\greensubterm{v}{s_1, \ldots, s_n} = s\sigma$ and $\greensubterm{v}{t_1,\ldots,t_n} = t\sigma$, where $\greensubterm{v}{\phantom{.}}$ is the largest common green context of $s\sigma$ and $t\sigma$. If any $s_i$ is of functional type and the core rule has conclusion $E\sigma$, the new rule has conclusion $E\sigma \mathrel\lor s_1 \not\eq t_1 \mathrel\lor \cdots \mathrel\lor s_n \not\eq t_n$. The \infname{NegExt} rule can then be applied to those literals $s_i \not\eq t_i$ whose sides have functional type. Essentially the same idea was proposed by Bhayat and Reger as \emph{unification with abstraction} in the context of combinatory superposition \cite[\Section~3.1]{bhayat-reger-2020-combsup}. The approach regrettably does not fully eliminate the need for axiom~(\infname{Ext}), as Visa Nummelin demonstrated via the following example. \begin{examplex} Consider the unsatisfiable clause set consisting of $\cst{h}\>x \approx \cst{f}\>x$, $\cst{k}\>\cst{h} \approx \cst{k}\>\cst{g}$, and $\cst{k}\>\cst{g} \not\eq \cst{k}\>\cst{f}$, where $\cst{k}$ takes at most one argument and $\cst{h} \succ \cst{g} \succ \cst{f}$. The only nonredundant \infname{Abs} inference applicable is \infname{AbsERes} on the third clause, resulting in $\cst{g} \not\eq \cst{f}$. Applying \infname{NegExt} further produces $\cst{g}\>\cst{sk} \not\eq \cst{f}\>\cst{sk}$. The set consisting of all five clauses is saturated.
\end{examplex} A different approach is to instantiate the extensionality axiom with arbitrary terms $s, s'$ of the same functional type:\strut \[ \namedinference{ExtInst}{} {s\>(\cst{diff}\> s\> s') \not\eq s'\>(\cst{diff}\> s\> s') \mathrel\lor s \approx s'} \] We would typically choose $s, s'$ among the green subterms occurring in the current clause set. Intuitively, if we think in terms of eligibility, \infname{ExtInst} demands $s\>(\cst{diff}\> s\> s') \approx s'\>(\cst{diff}\> s\> s')$ to be proved before $s \approx s'$ can be used. This can be advantageous because simplifying inferences (based on matching) will often be able to rewrite the applied terms $s\>(\cst{diff}\> s\> s')$ and $s'\>(\cst{diff}\> s\> s')$. In contrast, \infname{Abs} assume $s \approx s'$ and delay the proof obligation that $s\>(\cst{diff}\> s\> s') \approx s'\>(\cst{diff}\> s\> s')$. This can create many long clauses, which will be subject to expensive generating inferences (based on full unification). Superposition can be generalized to orange subterms as follows: \[\namedinference{\ensuremath{\lambda}Sup} {D' \mathrel\lor { t \approx t'} \hskip1.25em C' \mathrel\lor \orangesubterm{s}{\tuple{x}}{u} \doteq s'} {(D' \mathrel\lor C' \mathrel\lor \orangesubtermeta{s}{\tuple{x}}{t'} \doteq s') \sigma\rho}\] where the substitution $\rho$ is defined as follows: Let $P_y = \{y\}$ for all type and term variables $y \not\in\tuple{x}$. For each $i$, let $P_{x_i}$ be recursively defined as the union of all $P_y$ such that $y$ occurs free in the $\lambda$-expression that binds $x_i$ in $\orangesubterm{s}{\tuple{x}}{u}\sigma$ or that occurs free in the corresponding subterm of $\smash{\orangesubtermeta{s}{\tuple{x}}{t'}\sigma}$. 
Then $\rho$ is defined as $\{x_i \mapsto \cst{sk}_i\typeargs{\tuple{\alpha}_i}\>\tuple{y}_i\text{ for each $i$}\}$, where $\tuple{y}_i$ are the term variables in $P_{x_i}$ and $\tuple{\alpha}_i$ are the type variables in $P_{x_i}$ and the type variables occurring in the type of the $\lambda$-expression binding $x_i$. In addition, \infname{Sup}'s side conditions and the following conditions apply: \begin{enumerate} \item[10.] $\tuple{x}$ has length $n > 0$;\hfill 11.\enskip $\tuple{x}\sigma = \tuple{x}$;\hfill\hbox{} \item[12.] the variables $\tuple{x}$ do not occur in $y\sigma$ for all variables $y$ in $u$. \end{enumerate} The substitution $\rho$ introduces Skolem terms to represent bound variables that would otherwise escape their binders. The rule can be justified in terms of paramodulation and extensionality, with the Skolem terms standing for $\cst{diff}$ terms. We can shorten the derivation of Example~\ref{ex:prod-div} by applying this rule to the clauses $C_{\text{div}}$ and $C_{\text{conj}}$ as follows: \[\namedinference{\ensuremath{\lambda}Sup} {n \approx \cst{zero} \mathrel\lor \cst{div}\;n\;n \approx \cst{one} \hskip1.25em \cst{prod}\; K\;(\lambda k.\> \cst{div}\; (\cst{succ}\; k)\; (\cst{succ}\; k)) \not\eq \cst{one}} {\cst{succ}\; \cst{sk} \approx \cst{zero} \mathrel\lor \cst{prod}\; K\;(\lambda k.\> \cst{one}) \not\eq \cst{one}}\] From this conclusion, $\bot$ can be derived using only \infname{Sup} and \infname{ERes} inferences. We thus avoid both \infname{FluidSup} and (\infname{Ext}). \begin{lemmax}[\infname{\ensuremath{\lambda}Sup}'s satisfiability preservation] Let $N\subseteq\CC_\HH$ and let $E$ be the conclusion of a \infname{\ensuremath{\lambda}Sup} inference from $N.$ If $N \mathrel\cup \{\text{\upshape(\infname{Ext})}\}$ is satisfiable, then $N \mathrel\cup \{\text{\upshape(\infname{Ext})}, E\}$ is satisfiable.
\end{lemmax} \begin{proof} Let $\mathscr{I}$ be a model of $N \mathrel\cup \{\text{\upshape(\infname{Ext})}\}.$ We need to construct a model of $N \mathrel\cup \{\text{\upshape(\infname{Ext})}, E\}.$ For each $i$, let $v_i$ be the $\lambda$-expression binding $x_i$ in the term $\orangesubterm{s}{\tuple{x}}{u}\sigma$ in the rule. Let $v'_i$ be the variant of $v_i$ in which the relevant occurrence of $u\sigma$ is replaced by $t'\sigma$. We define a substitution $\pi$ recursively by $x_i\pi = \cst{diff}\> (v_i\pi)\> (v'_i\pi)$ for all $i$. This definition is well founded because the variables $x_{\!j}$ with $j \geq i$ do not occur freely in $v_i$ and $v_i'$. We extend the model $\mathscr{I}$ to a model $\mathscr{I}'$, interpreting $\cst{sk}_i$ such that $\mathscr{I}' \models \cst{sk}_i\typeargs{\tuple{\alpha}_i}\>\tuple{y}_i \approx \cst{diff}\> (v_i\pi)\allowbreak\> (v'_i\pi)$ for each $i$. Since the free type and term variables of any $x_i\pi$ are necessarily contained in $P_{x_i}$, the arguments of $\cst{sk}_i$ include the free variables of $\cst{diff}\> (v_i\pi)\> (v'_i\pi)$, allowing us to extend $\mathscr{I}$ in this way. By assumption, the premises of the \infname{\ensuremath{\lambda}Sup} inference are true in $\mathscr{I}$ and hence in $\mathscr{I}'$. We need to show that the conclusion $(D' \mathrel\lor C' \mathrel\lor \orangesubtermeta{s}{\tuple{x}}{t'} \doteq s')\sigma\rho$ is also true in $\mathscr{I}'$. Let $\xi$ be a valuation. If $\mathscr{I}',\xi \models (D' \mathrel\lor C')\sigma\rho$, we are done. So we assume that $D'\sigma\rho$ and $C'\sigma\rho$ are false in $\mathscr{I}'$ under $\xi$. In the following, we omit `$\mathscr{I}',\xi\models$', but all equations ($\approx$) are meant to be true in $\mathscr{I}'$ under $\xi$. Assuming $D'\sigma\rho$ and $C'\sigma\rho$ are false, we will show inductively that $v_i\pi \approx v'_i\pi$ for all $i = k, \dots, 1$.
By this assumption, the premises imply that $t\sigma\rho \approx t'\sigma\rho$ and $\orangesubterm{s}{\tuple{x}}{u}\sigma\rho \doteq s'\sigma\rho$. Due to the way we constructed $\mathscr{I}'$, we have $w\pi \approx w\rho$ for any term $w$. Hence, we have $t\sigma\pi \approx t'\sigma\pi$. The terms $v_k\pi\>(\cst{diff}\> (v_k\pi)\> (v'_k\pi))$ and $v_k'\pi\>(\cst{diff}\> (v_k\pi)\> (v'_k\pi))$ are the respective results of applying $\pi$ to the body of the $\lambda$-expressions $v_k$ and $v'_k$. Therefore, by congruence, $t\sigma\pi \approx t'\sigma\pi$ and $t\sigma = u\sigma$ imply that $v_k\pi\>(\cst{diff}\> (v_k\pi)\> (v'_k\pi)) \approx v'_k\pi\>(\cst{diff}\> (v_k\pi)\> (v'_k\pi)).$ The extensionality axiom then implies $v_k\pi \approx v'_k\pi$. It follows directly from the definition of $\pi$ that for all $i$, $v_i\pi\>(\cst{diff}\> (v_i\pi)\> (v'_i\pi)) = \yellowsubterm{s_i}{v_{i+1}\pi}$ and $v'_i\pi\>(\cst{diff}\> (v_i\pi)\> (v'_i\pi)) = \yellowsubterm{s_i}{v'_{i+1}\pi}$ for some context $\yellowsubterm{s_i}{\phantom{\cdot}}$. The subterms $v_{i+1}\pi$ of $\yellowsubterm{s_i}{v_{i+1}\pi}$ and $v_{i+1}'\pi$ of $\yellowsubterm{s_i}{v_{i+1}'\pi}$ may be below applied variables but not below $\lambda$s. Since substitutions avoid capture, in $v_i$ and $v_i'$, $\pi$ only substitutes $x_{\!j}$ with $j<i$, but in $v_{i+1}$ and $v_{i+1}'$, it substitutes all $x_{\!j}$ with $j\leq i$.
By an induction using these equations, congruence, and the extensionality axiom, we can derive from $v_k\pi \approx v'_k\pi$ that $v_1\pi \approx v'_1\pi.$ Since $\mathscr{I}' \models w\pi \approx w\rho$ for any term $w$, we have $v_1\rho \approx v'_1\rho.$ By congruence, it follows that $\orangesubterm{s}{\tuple{x}}{u}\sigma\rho \approx \orangesubtermeta{s}{\tuple{x}}{t'}\sigma\rho.$ With $\orangesubterm{s}{\tuple{x}}{u}\sigma\rho \doteq s'\sigma\rho,$ it follows that $(\orangesubtermeta{s}{\tuple{x}}{t'} \doteq s')\sigma\rho.$ Hence, the conclusion of the \infname{\ensuremath{\lambda}Sup} inference is true in $\mathscr{I}'$. \qed \end{proof} The next rule, \emph{duplicating flex subterm superposition}, is a lightweight alternative to \infname{FluidSup}: \[ \namedinference{DupSup}{D'\lor t\approx t' \quad C'\lor \greensubterm{s}{\, y\>\tuple{u}_n} \doteq s'} {(D'\lor C'\lor \greensubterm{s}{z\>\tuple{u}_n\>t'}\doteq s')\rho\sigma} \] where $n > 0$, $\rho = \{y \mapsto \lambda\tuple{x}_n.\>z\>\tuple{x}_n\>(w\>\tuple{x}_n)\}$, and $\sigma \in \csu(t{,}\> w\>(\tuple{u}_n\rho))$ for fresh variables $w, z$. The order and eligibility restrictions are as for \infname{Sup}. The rule can be understood as the composition of an inference that applies the substitution~$\rho$ and of a paramodulation inference into the subterm $w\>(\tuple{u}_n\rho)$ of $\greensubterm{s}{z\>(\tuple{u}_n\rho)\>(w\>(\tuple{u}_n\rho))}$. \infname{DupSup} is general enough to replace \infname{FluidSup} in Examples \ref{ex:wsup-1}~and~\ref{ex:wsup-2} but not in Example~\ref{ex:wsup-3}. On the other hand, \infname{FluidSup}'s unification problem is usually a flex--flex pair, whereas \infname{DupSup} yields a less explosive flex--rigid pair unless $t$ is variable-headed. 
\begin{notyet} Let us call a $\lambda$-term $t$ \emph{second-order-like} if $t = \cst{f}\typeargs{\tuple{\tau}}\> \tuple{s}$ and each $s_i$ is second-order-like; or if $t = x\> \tuple{s}$, each $s_i$'s type is neither functional nor a type variable, and each $s_i$ is second-order-like; or if $\betanf{t}$ is a $\lambda$-expression. The \infname{DupSup} rule, in conjunction with an extended \infname{Sup} rule that rewrites into the yellow subterms, constitutes a complete alternative to the explosive \infname{FluidSup} rule for second-order-like fluid subterms of the form $y\>\tuple{u}_n$. \begin{theoremx}[Static refutational completeness with \infname{DupSup}] Let $\mathit{HInf}'$ be the inference system obtained by removing \infname{FluidSup} inferences into second-order-like fluid subterms of the form $y\>\tuple{u}_n$, where $n > 0$, by adding \infname{DupSup}, and by extending \infname{Sup} so that it rewrites into yellow subterms. Then $\mathit{HInf}'$ is statically refutationally complete. \label{thm:static-refutational-completeness-with-dupsup} \end{theoremx} \begin{proof} The proof is essentially as for Theorem~\ref{thm:static-refutational-completeness}. The only necessary change is in case~2 of the proof of Lemma~\ref{lem:lifting2}, subcase~(b). We reuse the notations from that proof. If $u$ is not of the form $y\>\tuple{u}_n$ or is not second-order-like, $\mathit{HInf}'$ contains an applicable \infname{FluidSup} inference that lifts the ground inference~$\iota$. Otherwise, let $p = p'.p''$. \qed \end{proof} \end{notyet} The last rule, \emph{flex subterm superposition}, is an even more lightweight alternative to \infname{Fluid\-Sup}: \[ \namedinference{FlexSup}{D'\lor t\approx t' \quad C'\lor \greensubterm{s}{\, y\>\tuple{u}_n} \doteq s'} {(D'\lor C'\lor \greensubterm{s}{t'}\doteq s')\sigma} \] where $n > 0$ and $\sigma \in \csu(t{,}\> y\>\tuple{u}_n)$. The order and eligibility restrictions are as for \infname{Sup}.
\section{Implementation} \label{sec:implementation} Zipperposition \cite{cruanes-2015,cruanes-2017} is an open source superposition prover written in OCaml.%
\footnote{\url{https://github.com/sneeuwballen/zipperposition}}
Originally designed for polymorphic first-order logic (TF1 \cite{blanchette-paskevich-2013}), it was later extended with an incomplete higher-order mode based on pattern unification \cite{miller-1991}. Bentkamp et al.~\cite{bentkamp-et-al-2018} extended it further with a complete $\lambda$-free clausal higher-order mode. We have now implemented a clausal higher-order mode based on our calculus. We use the order $\lsucc$ (\Section~\ref{ssec:a-derived-term-order}) derived from the Knuth--Bendix order \cite{knuth-bendix-1970} and the lexicographic path order \cite{kamin-levy-1980-cannotfind}. We currently use the corresponding nonstrict order $\lsucceq$ as~$\succsim$. Except for \infname{FluidSup}, the core calculus rules already existed in Zipperposition in a similar form. To improve efficiency, we extended the prover to use a higher-order generalization \cite{vukmirovic-et-al-2020-unif} of fingerprint indices \cite{schulz-fingerprint-2012} to find inference partners for all new binary inference rules. To speed up the computation of the \infname{Sup} conditions, we omit the condition $C\sigma \not\precsim D\sigma$ in the implementation, at the cost of performing some additional inferences. Among the optional rules, we implemented \infname{$\lambda$Demod}, \infname{PruneArg}, \infname{NegExt}, \infname{Abs}, \infname{ExtInst}, \infname{$\lambda$Sup}, \infname{DupSup}, and \infname{FlexSup}. For \infname{$\lambda$Demod} and \infname{$\lambda$Sup}, demodulation, subsumption, and other standard simplification rules (as implemented in E~\cite{schulz-et-al-2019}), we use pattern unification.
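To give a rough idea of how fingerprint-style prefiltering works, here is a deliberately crude sketch in the spirit of fingerprint indexing (our own invented representation, sample positions, and compatibility test; the real data structure of \cite{schulz-fingerprint-2012} and its higher-order generalization are considerably more refined): sample each term at a few fixed positions and reject candidate pairs whose samples clash.

```python
# Terms are (head, args) pairs; heads starting with an uppercase letter
# are variables. All conventions here are invented for illustration.

def sample(term, pos):
    """Head symbol at position pos; '*' below a variable; '!' if absent."""
    head, args = term
    if not pos:
        return head
    if head[0].isupper():        # variable: any subterm may appear below
        return '*'
    i, *rest = pos
    if i >= len(args):
        return '!'
    return sample(args[i], rest)

POSITIONS = [(), (0,), (1,)]     # fixed sample positions (a design choice)

def may_unify(s, t):
    """Cheap necessary condition for first-order unifiability of s and t."""
    for p in POSITIONS:
        a, b = sample(s, p), sample(t, p)
        if a == '*' or b == '*' or a[0].isupper() or b[0].isupper():
            continue             # a variable is involved: no conclusion
        if a != b:               # head-symbol clash, or position missing
            return False
    return True
```

A negative answer is definitive, so the index can discard the pair without attempting full unification; a positive answer merely admits the pair as a candidate.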
For generating inference rules that require enumerations of complete sets of unifiers, we use the complete procedure of Vukmirovi\'c et al.\ \cite{vukmirovic-et-al-2020-unif}. It has better termination behavior, produces fewer redundant unifiers, and can be implemented more efficiently than procedures such as Jensen and Pietrzykowski's \cite{jensen-pietrzykowski-1976} and Snyder and Gallier's \cite{snyder-gallier-1989}. The set of fluid terms is overapproximated in the implementation by the set of terms that are either nonground $\lambda$-expressions or terms of the form $y\>\tuple{u}_n$ with $n>0$. To efficiently retrieve candidates for \infname{Abs} inferences without slowing down superposition term indexing structures, we implemented dedicated indexing for clauses that are eligible for \infname{Abs} inferences \cite[\Section~3.3]{vukmirovic-nummelin-2020-boolean}. Zipperposition implements a DISCOUNT-style given clause procedure \cite{avenhaus-et-al-1995}. The proof state is represented by a set $A$ of active clauses and a set $P$ of passive clauses. To interleave nonterminating unification with other computation, we added a set $T$ containing possibly infinite sequences of scheduled inferences. These sequences are stored as finite instructions of how to compute the inferences. Initially, all clauses are in $P$. At each iteration of the main loop, the prover heuristically selects a \emph{given clause} $C$ from $P$. If $P$ is empty, sequences from $T$ are evaluated to generate more clauses into $P$; if no clause can be produced in this way, $A$ is saturated and the prover stops. Assuming a given clause $C$ could be selected, it is first simplified using $A$. Clauses in $A$ are then simplified \hbox{w.r.t.}\ $C$, and any simplified clause is moved to $P$. Then $C$ is added to $A$ and all sequences representing nonredundant inferences between $C$ and $A$ are added to $T$. 
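The given clause loop with lazily scheduled inference sequences can be sketched as follows (a heavily simplified, hedged illustration with invented names; simplification, heuristic clause selection, and redundancy checks are omitted):

```python
from collections import deque

def saturate(initial_clauses, inferences_between, is_empty_clause, max_iters=10_000):
    A = []                          # active clauses
    P = deque(initial_clauses)      # passive clauses
    T = deque()                     # scheduled inference sequences (generators)
    for _ in range(max_iters):
        while not P:                # no clause available: evaluate sequences
            if not T:
                return "saturated", A
            seq = T.popleft()
            try:
                concl = next(seq)   # compute one more inference conclusion
                T.append(seq)       # the sequence may yield more later
                if concl is not None:   # None signals "no unifier found yet"
                    P.append(concl)
            except StopIteration:
                pass                # sequence exhausted; drop it
        C = P.popleft()             # given clause (heuristics omitted)
        if is_empty_clause(C):
            return "unsatisfiable", C
        A.append(C)
        T.append(inferences_between(C, A))  # schedule inferences lazily
    return "timeout", None

# Toy usage: clauses are integers, the only inference derives C - 1,
# and 0 plays the role of the empty clause.
def countdown(C, A):
    if C > 0:
        yield C - 1

status, clause = saturate([3], countdown, lambda c: c == 0)
```

The generators in `T` model the possibly infinite inference sequences; yielding `None` corresponds to a unification attempt that has not produced a unifier yet.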
This maintains the invariant that all nonredundant inferences between clauses in $A$ have been scheduled or performed. Then some of the scheduled inferences in $T$ are performed and the conclusions are put into $P$. We can view the above loop as an instance of the abstract Zipperposition loop prover \textsf{ZL} of Waldmann et al.~\cite[Example~34]{waldmann-et-al-2020-saturation}.
% Their Theorem~32
% allows us to obtain dynamic completeness for this prover architecture from our static completeness result (Theorem~54).
% This requires that the sequences in $T$ are visited fairly, that clauses in $P$ are chosen fairly, and that simplification terminates, all of which are guaranteed by our implementation.
The unification procedure we use returns a lazy sequence whose elements are either singleton sets containing a unifier or empty sets signaling that no unifier has been found yet. Empty sets are returned to give control back to the caller of the unification procedure and to avoid getting stuck on nonterminating unification problems. These sequences of unifier subsingletons are converted into sequences of subsingletons of clauses representing inference conclusions.
\section{Evaluation} \label{sec:evaluation}
\newcommand\HEAD[1]{\hbox to \wd\mybox{\hfill\hbox{#1}\hfill}}
\newcommand\Z{\phantom{0}}
\newcommand\MIDLINE{\\[.25ex]\hline\rule{0pt}{3ex}}
We evaluated our prototype implementation of the calculus in Zipperposition, comparing it with other higher-order provers and with Zipperposition's modes for less expressive logics. All of the experiments were performed on StarExec nodes equipped with Intel Xeon E5-2609$\>$0 CPUs clocked at 2.40\,GHz. Following CASC 2019,\footnote{\url{http://tptp.cs.miami.edu/CASC/27/}} we use 180\,s as the CPU time limit. We used both standard TPTP benchmarks~\cite{sutcliffe-2017-tptp} and Sledgehammer-generated benchmarks \cite{meng-paulson-2008-trans}.
From the TPTP, version~7.2.0, we used 1000 randomly selected first-order (FO) problems in CNF, FOF, or TFF syntax without arithmetic and all 499 mono\-morphic higher-order theorems in TH0 syntax without interpreted Booleans and arithmetic. We partitioned the TH0 problems into those containing no $\lambda$-expressions (\relax{TH0$\lambda$f}, 452~problems) and those containing $\lambda$-expressions (\relax{TH0$\lambda$}, 47~problems). The Sledgehammer benchmarks, corresponding to Isabelle's Judgment Day suite \cite{boehme-nipkow-2010}, were regenerated to target clausal higher-order logic. They comprise 2506~problems, divided into two groups: \relax{SH-$\lambda$} preserves $\lambda$-expressions, whereas \relax{SH-ll} encodes them as $\lambda$-lifted supercombinators \cite{meng-paulson-2008-trans} to make the problems accessible to $\lambda$-free clausal higher-order provers. Each group of problems is generated from 256 Isabelle facts (definitions and lemmas). Our results are publicly available.%
\footnote{\url{https://doi.org/10.5281/zenodo.4032969}}
\ourparagraph{Evaluation of Extensions} To assess the usefulness of the extensions described in \Section~\ref{sec:extensions}, we fixed a \emph{base} configuration of Zipperposition parameters. For each extension, we then changed the corresponding parameters and observed the effect on the success rate. The base configuration uses the complete variant of the unification procedure of Vukmirovi\'c et al.\ \cite{vukmirovic-et-al-2020-unif}. It also includes the optional rules \infname{NegExt} and \infname{PruneArg}, substitutes \infname{FlexSup} for the highly explosive \infname{FluidSup}, and excludes axiom~(\infname{Ext}). The base configuration is not refutationally complete.
\newcommand{\mainevalnumbers}{ \newbox\mybox \setbox\mybox=\hbox{\small SH256-$\lambda$} \def\arraystretch{1.1}%
%
\relax{\begin{tabular}{@{}l@{\hskip 0.5em}c@{\hskip 1em}c@{\hskip 1em}c@{\hskip 1em}c@{\hskip 1em}c@{\hskip 1em}c@{\hskip 1em}c@{}} \strut & \HEAD{FO} & \HEAD{TH0$\lambda$f} & \HEAD{TH0$\lambda$} & \HEAD{SH-ll} & \HEAD{SH-$\lambda$} \MIDLINE CVC4 & 539 & 424 & 31 & 696 & 650 \\ Ehoh & 681 & 418 & -- & 691 & -- \\ Leo-III-uncoop & 198 & 389 & 42 & 226 & 234 \\ Leo-III-coop & 582 & {\bf438} & 43 & 683 & 674 \\ Satallax-uncoop & -- & 398 & 43 & 489 & 507 \\ Satallax-coop & -- & 432 & 43 & 602 & 616 \\ Vampire & {\bf 729} & 432 & 42 & \textbf{718} & 707 \\[\jot] FOZip & 399 & -- & -- & -- & -- \\ @+FOZip & 363 & 400 & -- & 478 & -- \\ $\lambda$freeZip & 395 & 398 & -- & {538} & -- \\ $\lambda$Zip-base & 388 & {408} & 39 & 420 & 436 \\ $\lambda$Zip-pragmatic & 396 & 411 & 33 & 496 & 503 \\ $\lambda$Zip-full & 177 & 339 & 34 & 353 & 361 \\ Zip-uncoop & 514 & 426 & {\bf 46} & 661 & 677 \\ Zip-coop & 625 & 434 & {\bf 46} & 710 & \textbf{717} \end{tabular}} \caption{Number of problems proved by the different provers} \label{fig:res} } \begin{figure} \def\arraystretch{1.1}%
\relax{\begin{tabular}{@{}l@{\hskip 2em}c@{\hskip 2em}c@{\hskip 2em}c@{\hskip 2em}c} \strut & $-$NE,$-$PA & $-$NE & $-$PA & Base \MIDLINE TH0 & 446 (0) & 446 (0) & 447 (0) & 447 (0) \\ SH-$\lambda$ & 431 (0) & 433 (0) & 433 (0) & 436 (1) \end{tabular}} \caption{Number of problems proved without rules included in the base configuration} \label{fig:neg-ext-prune-arg} \vspace*{\floatsep} \def\arraystretch{1.1}%
\relax{\begin{tabular}{@{}l@{\hskip 2em}c@{\hskip 2em}c@{\hskip 2em}c@{\hskip 2em}c@{\hskip 2em}c@{\hskip 2em}c@{\hskip 2em}c@{\hskip 2em}c} \strut & Base & $+\lambda$\infname{D} & $+\lambda$S0 & $+\lambda$S1 & $+\lambda$S2 & $+\lambda$S4 & $+\lambda$S8 & $+\lambda$S1024 \MIDLINE TH0 & 447 (0) & 448 (0) & 449 (0) & 449 (0) & 449 (0) & 449 (0) & 449 (0) & 449 (0) \\ SH-$\lambda$ & 436 (1)
& 435 (4) & 430 (1) & 429 (0) & 429 (0) & 429 (0) & 429 (0) & 429 (0) \end{tabular}} \caption{Number of problems proved using rules that perform rewriting under $\lambda$-binders} \label{fig:lambda-rules} \vspace*{\floatsep} \def\arraystretch{1.1}%
\relax{\begin{tabular}{@{}l@{\hskip 2em}c@{\hskip 2em}c@{\hskip 2em}c@{\hskip 2em}c} \strut & Base & $+$\infname{Abs} & $+$\infname{ExtInst} & $+$(\infname{Ext}) \MIDLINE TH0 & 447 (0)\phantom{0} & 450 (1)\phantom{0} & 450 (1) & 376 (0) \\ SH-$\lambda$ & 436 (11) & 430 (11) & 402 (1) & 365 (2) \end{tabular}} \caption{Number of problems proved using rules that perform extensionality reasoning} \label{fig:ext-rules} \vspace*{\floatsep} \def\arraystretch{1.1}%
\relax{\begin{tabular}{@{}l@{\hskip 2em}c@{\hskip 2em}c@{\hskip 2em}c@{\hskip 2em}c} \strut & $-$\infname{FlexSup} & Base & $-$\infname{FlexSup},$+$\infname{DupSup} & $-$\infname{FlexSup},$+$\infname{FluidSup} \MIDLINE TH0 & 446 (0)\phantom{0} & 447 (0) & 448 (1) & 447 (0) \\ SH-$\lambda$ & 469 (10) & 436 (4) & 451 (3) & 461 (7) \end{tabular}} \caption{Number of problems proved with rules that perform superposition into fluid terms} \label{fig:fluid-rules} \end{figure} The rules \infname{NegExt} (NE) and \infname{PruneArg} (PA) were added to the base configuration because our informal experiments showed that they usually help. Fig.~\ref{fig:neg-ext-prune-arg} confirms this, although the effect is small. In all tables, $+R$ denotes the inclusion of a rule~$R$ not present in the base, and $-R$ denotes the exclusion of a rule~$R$ present in the base. Numbers given in parentheses denote the number of problems that are solved only by the given configuration and no other configuration in the same table. The rules \infname{$\lambda$Demod} ($\lambda$D) and \infname{$\lambda$Sup} extend the calculus to perform some rewriting under \hbox{$\lambda$-binders}.
While experimenting with the calculus, we noticed that, in some configurations, \infname{$\lambda$Sup} performs better when the number of fresh Skolem symbols it introduces overall is bounded by some parameter $n$. As Fig.~\ref{fig:lambda-rules} shows, inclusion of these rules has a different effect on the two benchmark sets. Different choices of $n$ for \infname{$\lambda$Sup} (denoted by $\lambda$S$n$) do not seem to influence the success rate much. The evaluation of the \infname{Abs} and \infname{ExtInst} rules and axiom~(\infname{Ext}), presented in Fig.~\ref{fig:ext-rules}, confirms our intuition that including the extensionality axiom is severely detrimental to performance. The $+$(\infname{Ext}) configuration solved two unique problems on SH-$\lambda$ benchmarks, but its success on these problems appears to be due to a coincidental influence of the axiom on heuristics---the axiom is not referenced in the generated proofs. The \infname{FlexSup} rule included in the base configuration did not perform as well as we expected. Even the \infname{FluidSup} and \infname{DupSup} rules outperformed \infname{FlexSup}, as shown in Fig.~\ref{fig:fluid-rules}. This effect is especially visible on SH-$\lambda$ benchmarks. On TPTP, the differences are negligible. Most of the extensions had a stronger effect on SH-$\lambda$ than on TH0. A possible explanation is that the Boolean-free TH0 benchmark subset consists mostly of problems that are simple to solve using most prover parameters. On the other hand, SH-$\lambda$ benchmarks are of varying difficulty and can thus benefit more from changing prover parameters. \ourparagraph{Main Evaluation} We selected all contenders in the THF division of CASC 2019 as representatives of the state of the art:\ CVC4 1.8 prerelease \cite{barbosa-et-al-2019}, Leo-III 1.4 \cite{steen-benzmueller-2018}, Satallax 3.4 \cite{brown-2012-ijcar}, and Vampire 4.4 \cite{bhayat-reger-2019-restricted}.
We also included Ehoh \cite{vukmirovic-et-al-2019}, the $\lambda$-free clausal higher-order mode of E~2.4. Leo-III and Satallax are cooperative higher-order provers that can be set up to regularly invoke first-order provers as terminal proof procedures. To assess the performance of their core calculi, we evaluated them with first-order backends disabled. We denote these ``uncooperative'' configurations by Leo-III-uncoop and Satallax-uncoop respectively, as opposed to the standard versions Leo-III-coop and Satallax-coop. \begin{figure}[b] \mainevalnumbers \end{figure} To evaluate the overhead our calculus incurs on first-order or $\lambda$-free higher-order problems, we ran Zipperposition in first-order (FOZip) and $\lambda$-free ($\lambda$freeZip) modes, as well as in a mode that encodes curried applications using a distinguished binary symbol $\cst{@}$ before using first-order Zipperposition (@+FOZip). We evaluated the implementation of our calculus in Zipperposition ($\lambda$Zip) in three configurations:\ base, pragmatic, and full. Pragmatic builds on base by disabling \infname{FlexSup} and replacing complete unification with the pragmatic variant procedure pv$^2_{1121}$ of Vukmirovi\'c et al. Full is a refutationally complete extension of base that substitutes \infname{Fluid\-Sup} for \infname{FlexSup} and includes axiom~(\infname{Ext}). Finally, we evaluated Zipperposition in a portfolio mode that runs the prover in various configurations (Zip-uncoop). We also evaluated a cooperative version of the portfolio which, in some configurations, after a predefined time invokes Ehoh as backend on higher-order problems (Zip-coop). In this version, Zipperposition encodes heuristically selected clauses from the current proof state to lambda-free higher-order logic supported by Ehoh \cite{vukmirovic-et-al-2019}. On first-order problems, we ran Ehoh, Vampire, and Zip-uncoop using the provers' respective first-order modes. 
A summary of these experiments is presented in \figurename~\ref{fig:res}. In the pragmatic configuration, our calculus outperformed $\lambda$freeZip on TH0$\lambda$f problems and incurred less than 1\% overhead compared with FOZip, but fell behind $\lambda$freeZip on SH-ll problems. The full configuration suffers greatly from the explosive extensionality axiom and \infname{FluidSup} rule. Except on TH0$\lambda$ problems, both base and pragmatic configurations outperformed Leo-III-uncoop, which runs a fixed configuration, by a substantial margin. Zip-uncoop outperformed Satallax-uncoop, which uses a portfolio. Our most competitive configuration, Zip-coop, emerges as the winner on both problem sets containing \hbox{$\lambda$-expressions}. On higher-order TPTP benchmarks this configuration \OK{does not} solve any problems that no other (cooperative) higher-order prover solves. By contrast, on SH-ll benchmarks Zip-coop solves \OK{21} problems no other higher-order prover solves, and on SH-$\lambda$ benchmarks, it uniquely solves \OK{27} problems. \section{Discussion and Related Work} \label{sec:discussion-and-related-work} Bentkamp et al.~\cite{bentkamp-et-al-2018} introduced four calculi for $\lambda$-free clausal higher-order logic organized along two axes:\ \emph{intensional} versus \emph{extensional}, and \emph{nonpurifying} versus \emph{purifying}. The purifying calculi flatten the clauses containing applied variables, thereby eliminating the need for superposition into variables. As we extended their work to support $\lambda$-expressions, we found the purification approach problematic and gave it up because it needs $x$ to be smaller than $x\;t$, which is impossible to achieve with a term order on $\beta\eta$-equivalence classes. We also quickly gave up our attempt at supporting intensional higher-order logic. 
Extensionality is the norm for higher-order unification \cite{dowek-2001} and is mandated by the TPTP THF format \cite{sutcliffe-et-al-2009} and in proof assistants such as HOL4, HOL Light, Isabelle/HOL, Lean, Nuprl, and PVS. Bentkamp et al.\ viewed their approach as ``a stepping stone towards full higher-order logic.'' It already included a notion analogous to green subterms and an \infname{ArgCong} rule, which help cope with the complications occasioned by $\beta$-reduction. Our Boolean-free $\lambda$-superposition calculus joins the family of proof systems for higher-order logic. It is related to Andrews's higher-order resolution \cite{andrews-1971}, Huet's constrained resolution \cite{huet-1973}, Jensen and Pietrzykowski's $\omega$-resolution \cite{jensen-pietrzykowski-1976}, Snyder's higher-order $E$-resolution \cite{snyder-1990}, Benz\-m\"uller and Kohlhase's extensional higher-order resolution \cite{benzmueller-kohlhase-1998}, Benzm\"uller's higher-order unordered paramodulation and RUE resolution \cite{benzmueller-1999}, and Bhayat and Reger's combinatory superposition \cite{bhayat-reger-2020-combsup}. A noteworthy variant of higher-order unordered paramodulation is Steen and Benzm\"uller's higher-order ordered paramodulation \cite{steen-benzmueller-2018}, whose order restrictions undermine refutational completeness but yield better empirical results. Other approaches are based on analytic tableaux \cite{robinson-1969,kohlhase-1995,konrad-1998,backes-brown-2011}, connections \cite{andrews-1989}, sequents \cite{lindblad-2014}, and satisfiability modulo theories (SMT) \cite{barbosa-et-al-2019}. Andrews \cite{andrews-2001} and Benzm\"uller and Miller \cite{benzmueller-miller-2014} provide excellent surveys of higher-order automation. Combinatory superposition was developed shortly after $\lambda$-superposition and is closely related. 
It is modeled on the intensional nonpurifying calculus by Bentkamp et al.\ and targets extensional polymorphic clausal higher-order logic. Both combinatory and $\lambda$-superposition gracefully generalize the highly successful first-order superposition rules without sacrificing refutational completeness, and both are equipped with a redundancy criterion, which earlier refutationally complete higher-order calculi lack. In particular, \infname{PruneArg} is a versatile simplification rule that could be useful in other provers. Combinatory superposition's distinguishing feature is that it uses $\cst{SKBCI}$ combinators to represent $\lambda$-expressions. Combinators can be implemented more easily starting from a first-order prover; $\beta$-reduction amounts to demodulation. However, according to its developers \cite{bhayat-reger-2020-combsup}, ``Narrowing terms with combinator axioms is still explosive and results in redundant clauses. It is also never likely to be competitive with higher-order unification in finding complex unifiers.'' Among the drawbacks of $\lambda$-superposition are the need to solve flex--flex pairs eagerly and the explosion caused by the extensionality axiom. We believe that this is a reasonable trade-off, especially for large problems with a substantial first-order component. Our prototype Zipperposition joins the league of automatic theorem provers for higher-order logic. We list some of its rivals. TPS \cite{andrews-et-al-1996} is based on the connection method and expansion proofs. LEO \cite{benzmueller-kohlhase-1998} and \textsc{Leo}-II \cite{benzmuller-2015-leo2} implement variants of RUE resolution. Leo-III \cite{steen-benzmueller-2018} is based on higher-order paramodulation. Satallax \cite{brown-2012-ijcar} implements a higher-order tableau calculus guided by a SAT solver. \textsc{Leo}-II, Leo-III, and Satallax integrate first-order provers as terminal procedures. 
AgsyHOL \cite{lindblad-2014} is based on a focused sequent calculus guided by narrowing. The SMT solvers CVC4 and veriT have recently been extended to higher-order logic \cite{barbosa-et-al-2019}. Vampire now implements both combinatory superposition and a version of standard superposition with first-order unification replaced by restricted combinatory unification \cite{bhayat-reger-2019-restricted}. Half a century ago, Robinson \cite{robinson-1970} proposed to reduce higher-order logic to first-order logic via a translation. ``Hammer'' tools such as Sledgehammer \cite{paulson-blanchette-2010}, Miz$\mathbb{AR}$ \cite{urban-et-al-2013}, HOLyHammer \cite{kaliszyk-urban-2015}, and CoqHammer \cite{czajka-kaliszyk-2018} have since popularized this approach in proof assistants. The translation must eliminate the $\lambda$-expressions, typically using $\cst{SKBCI}$ combinators or $\lambda$-lifting \cite{meng-paulson-2008-trans}, and encode typing information \cite{blanchette-et-al-2016-types}. \section{Conclusion} \label{sec:conclusion} We presented the Boolean-free $\lambda$-superposition calculus, which targets a clausal fragment of extensional polymorphic higher-order logic. With the exception of a functional extensionality axiom, it gracefully generalizes standard superposition. Our prototype prover Zipperposition shows promising results on TPTP and Isabelle benchmarks. In future work, we plan to pursue five main avenues of investigation. We first plan to \emph{extend the calculus to support Booleans and Hilbert choice.} Booleans are notoriously explosive. We want to experiment with both axiomatizations and native support in the calculus. Native support would likely take the form of a primitive substitution rule that enumerates predicate instantiations \cite{andrews-1989}, delayed clausification rules \cite{ganzinger-stuber-2005}, and rules for reasoning about Hilbert choice. 
We want to investigate techniques to \emph{curb the explosion caused by functional extensionality.} The extensionality axiom reintroduces the search space explosion that the calculus's order restrictions aim at avoiding. Maybe we can replace it by more restricted inference rules without compromising refutational completeness. We will also look into approaches to \emph{curb the explosion caused by higher-order unification.} Our calculus suffers from the need to solve flex--flex pairs. Existing procedures \cite{jensen-pietrzykowski-1976,snyder-gallier-1989,vukmirovic-et-al-2019} enumerate redundant unifiers. This can probably be avoided to some extent. It could also be useful to investigate unification procedures that would delay imitation/projection choices via special schematic variables, inspired by Libal's representation of regular unifiers \cite{libal-2015}. We clearly need to \emph{fine-tune and develop heuristics.} We expect heuristics to be a fruitful area for future research in higher-order reasoning. Proof assistants are an inexhaustible source of easy-looking benchmarks that are beyond the power of today's provers. Whereas ``hard higher-order'' may remain forever out of reach, we believe that there is a substantial ``easy higher-order'' fragment that awaits automation. Finally, we plan to \emph{implement the calculus in a state-of-the-art prover.} A suitable basis for an optimized implementation of the calculus would be Ehoh, the $\lambda$-free clausal higher-order version of E developed by Vukmirovi\'c, Blanchette, Cruanes, and Schulz\ \cite{vukmirovic-et-al-2019}.
\def\ackname{Acknowledgment}%
\begin{acknowledgements}
Simon Cruanes patiently explained Zipperposition's internals and allowed us to continue the development of his prover.
%
Christoph Benzm\"uller and Alexander Steen shared insights and examples with us, guiding us through the literature and clarifying how the Leos work.
%
Maria Paola Bonacina and Nicolas Peltier gave us some ideas on how to treat the extensionality axiom as a theory axiom, ideas we have yet to explore.
%
Mathias Fleury helped us set up regression tests for Zipperposition.
%
Ahmed Bhayat, Tomer Libal, and Enrico Tassi shared their insights on higher-order unification.
%
Andrei Popescu and Dmitriy Traytel explained the terminology surrounding the $\lambda$-calculus.
%
Haniel Barbosa, Daniel El Ouraoui, Pascal Fontaine, Visa Nummelin, and Hans-J\"org Schurr were involved in many stimulating discussions.
%
Christoph Weidenbach made this collaboration possible.
%
Ahmed Bhayat, Wan Fokkink, Mark Summerfield, and the anonymous reviewers suggested several textual improvements.
%
The maintainers of StarExec let us use their service for the evaluation.
%
We thank them all.
%
Bentkamp, Blanchette, and Vukmirovi\'c's research has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement No.\ 713999, Matryoshka). Bentkamp and Blanchette also benefited from the Netherlands Organization for Scientific Research (NWO) Incidental Financial Support scheme.
%
Blanchette has received funding from the NWO under the Vidi program (project No.\ 016.Vidi.189.037, Lean Forward).
\end{acknowledgements}
\bibliographystyle{spmpsci}
\subsection{The SIRS Model on the Star}\label{sec:intro_SIRS_star} Our main result for the star is the following theorem, which shows that the expected survival time is bounded from above by an expression independent of the infection rate. \begin{restatable*}{theorem}{StarSurvival} \label{lem:starSurvival} Let $G$ be a star with $\mathOrText{n} \in \mathOrText{\mathbf{N}}_{>0}$ leaves, and let \mathOrText{C} be a contact process in the SIRS model on $G$ with infection rate \mathOrText{\lambda} and with deimmunization rate \mathOrText{\varrho}. Let $T$ be the survival time of \mathOrText{C}. Then $\E{T} \in \bigO{\mathOrText{n}^{4 \mathOrText{\varrho}} \ln(\mathOrText{n})}$. \end{restatable*} If $\mathOrText{\varrho} \in \bigO{1}$ with respect to asymptotics in~\mathOrText{n}, there exists no super-polynomial survival threshold, as the expected survival time is at most polynomial in~\mathOrText{n}. If $\mathOrText{\varrho} \in \smallOmega{1}$, then this result does not determine whether a super-polynomial survival threshold exists. The analysis mainly relies on the method of investigating independent phases in which the center is not infected, bounding the probability of the infection process dying out during that time, as is common~\cite{BorgsCGS10Antidote,berger2005spread}. A phase lasts at most until all leaves' healing clocks have triggered at least once, which occurs in expectation after a time of about $\ln(\mathOrText{n})$. Thus, once the center heals, it needs to become susceptible again within roughly that time, as otherwise all leaves heal and the infection dies out. Since the triggers of the deimmunization clock follow an exponential distribution with rate~\mathOrText{\varrho}, the probability that the center does not become susceptible in this time interval is about $\eulerE^{-\mathOrText{\varrho} \ln \mathOrText{n}}$, resulting in a probability of about $\mathOrText{n}^{-\mathOrText{\varrho}}$ that the infection dies out.
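The quantities in this back-of-the-envelope argument are easy to check numerically. The following sketch (an illustration of the heuristic only, not of the formal bound) uses the facts that the maximum of $\mathOrText{n}$ independent unit-rate exponential clocks has mean $H_{\mathOrText{n}} \approx \ln \mathOrText{n}$ and that $\eulerE^{-\mathOrText{\varrho} \ln \mathOrText{n}} = \mathOrText{n}^{-\mathOrText{\varrho}}$:

```python
import math

def star_phase_quantities(n, rho):
    """Heuristic quantities from the proof sketch (illustration only)."""
    # E[time until all n leaf healing clocks have triggered] = H_n ~ ln n,
    # since the maximum of n independent Exp(1) clocks has mean H_n.
    phase_length = sum(1.0 / k for k in range(1, n + 1))
    # P[the center's Exp(rho) deimmunization clock exceeds ln n]:
    p_die_out = math.exp(-rho * math.log(n))    # equals n**(-rho)
    # Geometric number of independent phases until the infection dies out:
    expected_phases = 1.0 / p_die_out           # about n**rho
    return phase_length, p_die_out, expected_phases

H_n, p, phases = star_phase_quantities(1000, rho=2.0)
# heuristic E[T] ~ phases * phase_length ~ n**rho * ln n
```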
Since these phases are independent, the infection process survives, in expectation, about $\mathOrText{n}^{\mathOrText{\varrho}}$ of these trials, each lasting about~$\ln(\mathOrText{n})$ time in expectation. Note that the deimmunization rate and the state \emph{recovered} are important for this argument to hold. Without this additional state, that is, in the SIS model, it is quite likely that the center becomes quickly reinfected before all leaves heal, which leads to the super-polynomial survival threshold of~$\mathOrText{n}^{-1/2}$ in this setting~\cite{ganesh2005effect}. \subsection{The SIRS Model on the Clique}\label{sec:intro_SIRS_clique} In contrast to the star, there exists a super-polynomial survival threshold on the clique if the deimmunization rate is independent of~\mathOrText{n}, as our following main result shows. \begin{restatable*}{theorem}{SIRSClique} \label{thm:cliqueSIRS} Let $G$ be a clique of size $\mathOrText{n} \in \mathOrText{\mathbf{N}}_{>0}$, and let \mathOrText{C} be a contact process in the SIRS model on $G$ with infection rate $\mathOrText{\lambda} = \frac{\mathOrText{c}}{\mathOrText{n}}$ for a constant $\mathOrText{c} \in \mathOrText{\mathbf{R}}_{>1}$ and with constant deimmunization rate \mathOrText{\varrho}. Further, let \mathOrText{C} start with exactly one infected vertex and no recovered vertices, and let $T$ be the survival time of \mathOrText{C}. Then for sufficiently large \mathOrText{n}, we have $\E{T} = 2^{\bigOmega{\mathOrText{n}}}$. \end{restatable*} The threshold is close to that of the SIS model on the clique, which is around $1 / (\mathOrText{n} - \mathOrText{n}^{1/2})$ (\Cref{thm:SIScliqueUpperBound}). We note that although these two thresholds are only apart by a constant factor, our result in \Cref{thm:cliqueSIRS} shows an exponential expected survival time, not only a super-polynomial one. Thus, the actual super-polynomial survival threshold of the SIRS model is likely slightly lower than what we state.
Note that it cannot be much lower though, as our lower bound for the SIS model on the clique (\Cref{thm:SIScliqueLowerBound}) directly translates into the same lower bound for the SIRS model. We prove \Cref{thm:cliqueSIRS} by carefully tracking the number of infected, susceptible, and thus also recovered vertices over time. The main reason that the survival time of the process is exponential is that with decent probability, the infection process gets close to a state where changing any of these three values is equally likely. We call this state the \emph{equilibrium}. The equilibrium is an attractive state, that is, it has a neighborhood in which the process is more likely to move closer to the equilibrium than away from it, which results in an exponential survival time. \Cref{thm:cliqueSIRS} then follows by showing that the probability of getting into the attractive neighborhood of the equilibrium is larger than an inverse exponential function. In order to formalize the attractive region around the equilibrium, we define a potential function~\potentialSIRSClique{}{} that maps the current number of infected, susceptible, and recovered vertices to a real value. The minimum of~\potentialSIRSClique{}{} is the equilibrium, and the potential of the SIRS process decreases in expectation if there is a constant fraction of infected vertices in the attractive neighborhood of the equilibrium. Thus, \potentialSIRSClique{}{} is a strict supermartingale in this regime, and we apply a concentration bound by \textcite{oliveto2011simplified} (\Cref{pre:NegativeDrift}) for strict supermartingales, known as the \emph{negative-drift theorem}, based on an intricate theorem by \textcite{Hajek82HittingTime}. The negative-drift theorem yields the exponential lower bound on the survival time.
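The role of the negative-drift theorem can be conveyed by a classical toy analogue (our own illustration via the biased gambler's ruin, not the actual potential function used in the proof): inside the attractive region, each step moves toward the equilibrium with probability $p > 1/2$, and escaping against the drift to distance $k$ has probability roughly $((1-p)/p)^k$.

```python
# Toy analogue only: a +/-1 random walk on {0, ..., n} with up-step
# probability p > 1/2 drifts away from 0; "hitting 0" stands for the
# process leaving the attractive neighborhood and the infection dying out.

def escape_probability(k, n, p):
    """P[walk started at k hits 0 before n] (classical gambler's ruin)."""
    r = (1 - p) / p                  # r < 1 for p > 1/2
    return (r**k - r**n) / (1 - r**n)

# With p = 0.6, doubling the starting distance k roughly squares the
# (already small) escape probability.
```

An exponentially small escape probability per excursion is exactly what translates into an exponentially large expected survival time.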
Our definition of~\potentialSIRSClique{}{} is based on a Lyapunov function~\lyapunovHelper used by \textcite{korobeinikov2002lyapunov} in order to derive results on the global stability of the SIRS model via mean-field theory. Their function~\lyapunovHelper is already well suited for our purposes but is minimal for all values such that the number of susceptible vertices is equal to that of the equilibrium (regardless of the other values). In order to define a potential that has a unique minimum and thus yields a strict supermartingale, we appropriately augment~\lyapunovHelper. To show that the infection process ends up in a state such that~\potentialSIRSClique{}{} is a strict supermartingale, we utilize a simpler potential function that results in a submartingale. Applying the optional-stopping theorem to this potential function yields a sufficient probability bound. We note that the key ingredients of our proof method are, as is often the case with hitting times of stochastic processes, the potential functions. However, the more involved potential function~\potentialSIRSClique{}{} is based on an established Lyapunov function used in a mean-field theoretic approach for the same setting. As Lyapunov functions have been identified for a variety of processes, we believe that our approach could be used for rigorous analyses of the absorption time of such processes. \subsection{The SIS Model on the Clique}\label{sec:intro_SIS_clique} For the SIS model on the clique, we present two results that, jointly, provide a tighter super-polynomial survival threshold than the one previously known. The first result provides an upper bound on the threshold.
\begin{restatable*}{theorem}{SIScliqueUpperBound} \label{thm:SIScliqueUpperBound} Let $G$ be a clique with $\mathOrText{n} \in \mathOrText{\mathbf{N}}_{>0}$ vertices, and let \mathOrText{C} be a contact process in the SIS model on $G$ with infection rate $\mathOrText{\lambda} = \frac{1}{\mathOrText{n} - \mathOrText{\alpha}}$, for some $\mathOrText{\alpha} \in \mathOrText{\mathbf{N}}_{<\mathOrText{n}/2}$, that starts with at least one infected vertex. Further, let there be a constant $\varepsilon \in (0,1/2)$ such that $\mathOrText{\alpha} \geq \mathOrText{n}^{1/2 + \varepsilon}$. Let $T$ be the survival time of \mathOrText{C}. Then \begin{align*} \E{T} &\in \bigOmega{1.5^{\frac{\mathOrText{n}^\varepsilon}{4}}}[\big].\qedhere \end{align*} \end{restatable*} The second result provides a lower bound that almost matches the upper one. \begin{restatable*}{theorem}{SIScliqueLowerBound} \label{thm:SIScliqueLowerBound} Let $G$ be a clique with $\mathOrText{n} \in \mathOrText{\mathbf{N}}_{>0}$ vertices and let \mathOrText{C} be a contact process in the SIS model on $G$ with infection rate $\mathOrText{\lambda} = \frac{1}{\mathOrText{n} - \mathOrText{\alpha}}$ for some $\mathOrText{\alpha} \in \mathOrText{\mathbf{N}}_{>0}$ with $\mathOrText{\alpha} \leq \mathOrText{n}^{1/2}$ that starts with at least one infected vertex. Let $T$ be the survival time of \mathOrText{C}. Then \begin{align*} \E{T} &\in \bigO{\mathOrText{n}^2 \ln(\mathOrText{n})} .\qedhere \end{align*} \end{restatable*} The previously best known upper and lower bounds were $(1 + \varepsilon) / (\mathOrText{n} - \mathOrText{n}^{\alpha})$ for any $\varepsilon \in \mathOrText{\mathbf{R}}_{>0}$ independent of~$n$ and any $\alpha \in \mathOrText{\mathbf{R}}_{>0}$, and $1/(\mathOrText{n} - 1)$, respectively~\cite{ganesh2005effect}.
Our analysis for both results is similar to our analysis for the SIRS model on the clique in the sense that we again consider the equilibrium state, where the probability of decreasing the number of infected vertices is equal to that of increasing it. We denote the number of infected vertices in the equilibrium by~\mathOrText{\alpha}. For both bounds, we utilize that the number of infected vertices, once above~\mathOrText{\alpha}, returns in expectation to~\mathOrText{\alpha} in a time of at most about $\mathOrText{n} \ln(\mathOrText{n})$, which we prove by reducing the infection process in that regime to one on a smaller clique, for which the expected survival time is known. This leaves us with considering the regime when the number of infected vertices is close to and below~\mathOrText{\alpha}. For both results, the infection process is similar to a biased gambler's ruin process. For the upper bound, we split the process into independent phases, each of which starts as soon as the process reaches $\mathOrText{\alpha} / 2-1$ infected vertices. For each of the phases, we derive a probability bound of the infection dying out before reaching a state with $\mathOrText{\alpha} / 2$ infected vertices. For the lower bound, we consider the probability that the infection process dies out before reaching~\mathOrText{\alpha}. Since we derived beforehand that the process returns to~\mathOrText{\alpha} quickly once it is above it, we again apply a restart argument, which yields the result. \subsection{Discussion and Outlook}\label{sec:intro_outlook} Although inspired by mean-field theory, our analysis is fundamentally different. Mean-field theory shows the existence of an equilibrium that is globally stable, i.e., there exists a state of the process, given in terms of the number of infected vertices (together with the number of recovered vertices, if we consider the SIRS model), such that the whole process drifts towards that state.
For example, for the SIS model on the clique, with $\lambda=(n-\alpha)^{-1}$, this equilibrium is where the process has $\alpha$ infected vertices. However, solely knowing that such an equilibrium exists does not suffice for determining the survival time of the process. In our previous example, $\alpha=1$ yields a logarithmic extinction time, $\alpha\in O(n^{\frac{1}{2}})$ yields a polynomial extinction time, while $\alpha\in\Omega(n^{\frac{1}{2}+\varepsilon})$ yields a super-polynomial survival time. In order to obtain our results, we perform a refined analysis, determining the strength of the stochastic drift for each possible location of the equilibrium point. This analysis becomes significantly more challenging when the equilibrium point is determined by more than one degree of freedom, which is the case for the SIRS model. Assuming a constant deimmunization rate, our results show that the super-polynomial survival threshold of the SIRS model is fundamentally different from that of the SIS model on a star but not on a clique. Since the SIRS model does not have a super-polynomial survival threshold on the star in this regime but on the clique, our results highlight the importance of the topology of the network in the SIRS model. We note that real-world networks seem to be well-captured by random graph models with an underlying geometry~\cite{BBDHKS20}, and graphs with an underlying geometry have been shown to contain large cliques~\cite{BlasiusFK18CliquesInHRGs} (of size~$\sqrt{\mathOrText{n}}$). Although promising, it is unclear whether cliques can be used as basic structures for the analysis of the SIRS model on such real-world network models. This is for two reasons: (a) it is not proven whether, starting with an arbitrary infected vertex, the infection reaches the vertices of the clique; (b) we do not know how the additional vertices and edges that do not belong to the clique affect the survival time.
To see point (b), note that it is not obvious whether the survival time of the SIRS model increases when adding vertices and edges to the host graph, in contrast to the SIS model. Our bounds for the super-polynomial survival threshold of the SIS model are already very close but do not match. The exact location of the threshold (or whether such a threshold can be derived at all) remains an open problem. A further interesting question is how the survival time of the infection process scales with the infection rate once it is below the super-polynomial survival threshold. \textcite{ganesh2005effect} provide a large parameter regime for which the expected survival time is at most logarithmic in the number of vertices. Our bound on the expected survival time (for larger infection rates still below the super-polynomial survival threshold) is only polynomial in the number of vertices. Future research can look more closely into this regime of the infection rate and try to see whether the expected survival time is still logarithmic for some infection rates and when the transition to super-logarithmic time occurs. \subsection{Infection Processes} Let $G=(V,E)$ be a graph with vertex set $V$ and edge set $E$. Further, let $\mathOrText{\lambda},\mathOrText{\varrho} \in \mathOrText{\mathbf{R}}_{>0}$. In the SIRS model, for each edge $e \in E$, we define a Poisson process $M_e$ with parameter $\mathOrText{\lambda}$, and for each vertex $v \in V$, we define the two Poisson processes $N_v$ with parameter $1$ and $O_v$ with parameter $\mathOrText{\varrho}$. We refer to these processes as \emph{clocks}, and when an event occurs in one of them, we say that the relevant clock \emph{triggers}. We use $Z$ to denote the set of all of these clocks, that is, $Z = \left(\bigcup_{e \in E}{\{M_e\}}\right) \cup \left(\bigcup_{v \in V}{\{N_v,O_v\}}\right)$.
Let $P$ denote the stochastic process in which all of the clocks in $Z$ evolve simultaneously and independently, starting at time 0. Note that almost surely there is no time point at which two clocks trigger at once. Almost surely, there are countably infinitely many trigger times in $P$, which we index by the increasing sequence $\{\gamma_i\}_{i\in\mathOrText{\mathbf{N}}_{\geq0}}$, where $\gamma_0=0$. A contact process $\mathOrText{C} = (\mathOrText{C}_\mathOrText{t})_{\mathOrText{t} \in \mathOrText{\mathbf{R}}_{\geq 0}}$ in the SIRS model has an underlying graph $G=(V,E)$, an infection rate $\mathOrText{\lambda}$, a deimmunization rate $\mathOrText{\varrho}$, and an initial partition of $V$ into susceptible, infected, and recovered vertices with the respective sets $\susceptibleSet{0}$, $\infectedSet{0}$, and $\recoveredSet{0}$. At every time $\mathOrText{t} \in \mathOrText{\mathbf{R}}_{\geq 0}$, the state $\mathOrText{C}_\mathOrText{t}$ is a partition of $V$ into $\susceptibleSet{\mathOrText{t}}$, $\infectedSet{\mathOrText{t}}$, and $\recoveredSet{\mathOrText{t}}$. The state only changes at times in $P$. Let $i \in \mathOrText{\mathbf{N}}_{>0}$. We consider the following state transitions at $\gamma_i$. \begin{itemize} \item If for some $e=\{u,v\}\in E$ we have $\gamma_i \in M_e$, $u \in \infectedSet{\gamma_{i-1}}$, and $v \in \susceptibleSet{\gamma_{i-1}}$, then $\susceptibleSet{\gamma_{i}} = \susceptibleSet{\gamma_{i-1}} \setminus \{v\}$, $\infectedSet{\gamma_{i}} = \infectedSet{\gamma_{i-1}} \cup \{v\}$, and $\recoveredSet{\gamma_{i}} = \recoveredSet{\gamma_{i-1}}$. We say that $v$ \emph{gets infected} at time point $\gamma_{i}$ by $u$. \item If for some $v \in V$ we have $\gamma_i \in N_v$ and $v \in \infectedSet{\gamma_{i-1}}$ then $\susceptibleSet{\gamma_{i}} = \susceptibleSet{\gamma_{i-1}}$, $\infectedSet{\gamma_{i}} = \infectedSet{\gamma_{i-1}} \setminus \{v\}$ and $\recoveredSet{\gamma_{i}} = \recoveredSet{\gamma_{i-1}} \cup \{v\}$.
We say that $v$ \emph{recovers} at time point $\gamma_{i}$. \item If for some $v \in V$ we have $\gamma_i \in O_v$ and $v \in \recoveredSet{\gamma_{i-1}}$, then $\susceptibleSet{\gamma_{i}} = \susceptibleSet{\gamma_{i-1}} \cup \{v\}$, $\infectedSet{\gamma_{i}} = \infectedSet{\gamma_{i-1}}$ and $\recoveredSet{\gamma_{i}} = \recoveredSet{\gamma_{i-1}} \setminus \{v\}$. We say that $v$ \emph{gets susceptible} at time point $\gamma_{i}$. \end{itemize} If none of the above three cases occurs, the state of $\mathOrText{C}$ at $\gamma_{i}$ is the same as the state of $\mathOrText{C}$ at $\gamma_{i-1}$. Note that at all times between $\gamma_{i-1}$ and $\gamma_{i}$, $\mathOrText{C}$ retains the same state as in $\gamma_{i-1}$. In our proofs, we only consider the time points in $P$ at which the state changes. To this end, let $P'= \{\gamma_0\} \cup \{\gamma_i \mid i \in \mathOrText{\mathbf{N}}_{>0} \land \mathOrText{C}_{\gamma_{i}} \neq \mathOrText{C}_{\gamma_{i-1}}\}$. We index the times in $P'$ by the increasing sequence $\{\timeContinuous{i}\}_{i\in\mathOrText{\mathbf{N}}}$. For all $i \in \mathOrText{\mathbf{N}}$, we call $\timeContinuous{i}$ the $i$-th \emph{step} of the process. Contact processes in the SIS model are very similar to those in the SIRS model, with the difference that they have no deimmunization rate. That is, for an SIS process, we assume that the sets $O_v$ and $\recoveredSet{\mathOrText{t}}$ remain empty at all times and vertices that recover are added to $\susceptibleSet{\mathOrText{t}}$ instead of $\recoveredSet{\mathOrText{t}}$. In both models, the state in which $\susceptibleSet{\mathOrText{t}} = V$ is an absorbing state. As this state is the only absorbing state, it is reached almost surely at some point. In this article, we analyze how long it takes for the process to reach this state. We say that the infection \emph{dies out} or \emph{goes extinct} at the first (random) time~$T$ with $\infectedSet{T} = \emptyset$. 
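The dynamics just defined can be simulated directly: by the memorylessness of the exponential distribution, it suffices to sample, in each step, an exponential waiting time with the sum of all active clock rates and to pick the triggering event proportionally to its rate. The following minimal sketch (not part of the formal development; the name \texttt{simulate\_sirs} and the truncation after \texttt{max\_events} steps are our own choices) does this for a clique, on which only the counts of susceptible, infected, and recovered vertices matter:

```python
import random

def simulate_sirs(n, lam, rho, seed=0, max_events=100_000):
    """Event-driven simulation of the SIRS contact process on a clique
    of n vertices, started with one infected vertex.  Returns the time
    at which the infection died out (or the time reached after
    max_events steps).  Illustrative sketch only."""
    rng = random.Random(seed)
    s, i, r = n - 1, 1, 0          # susceptible / infected / recovered counts
    t = 0.0
    for _ in range(max_events):
        if i == 0:                 # extinction: the infection has died out
            break
        rate_inf = lam * i * s     # some infected-susceptible edge clock M_e
        rate_rec = i               # some recovery clock N_v
        rate_sus = rho * r         # some deimmunization clock O_v
        total = rate_inf + rate_rec + rate_sus
        t += rng.expovariate(total)      # exponential waiting time until next trigger
        u = rng.random() * total         # choose the event proportionally to its rate
        if u < rate_inf:
            s, i = s - 1, i + 1
        elif u < rate_inf + rate_rec:
            i, r = i - 1, r + 1
        else:
            r, s = r - 1, s + 1
    return t
```

On a clique all vertices are exchangeable, which is why the sketch tracks only the triple of counts instead of the sets $\susceptibleSet{\mathOrText{t}}$, $\infectedSet{\mathOrText{t}}$, and $\recoveredSet{\mathOrText{t}}$.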
We call $T$ the \emph{survival time} of the contact process. Observe that at $T$, in the SIS model, the absorbing state is reached, while in the SIRS model, some of the vertices in $V$ might be in $\recoveredSet{T}$. The graphs~$G$ we consider are cliques and stars. We give thresholds for the infection rate \mathOrText{\lambda} above or below which the survival time scales in a specific way with respect to $\mathOrText{n}$. \begin{itemize} \item We call a threshold $\mathOrText{\lambda}_l$ a \emph{logarithmic extinction threshold} if for all $\mathOrText{\lambda} \leq \mathOrText{\lambda}_l$ we have $\E{T} \in \bigO{\ln(\mathOrText{n})}$. \item We call a threshold $\mathOrText{\lambda}_p$ a \emph{polynomial extinction threshold} if there exists a constant $c\in \mathOrText{\mathbf{R}}$ such that for all $\mathOrText{\lambda} \leq \mathOrText{\lambda}_p$ we have $\E{T} \in \bigO{\mathOrText{n}^{c}}$. \item We call a threshold $\mathOrText{\lambda}_s$ a \emph{super-polynomial survival threshold} if there exists a constant $c\in \mathOrText{\mathbf{R}}$ such that for all $\mathOrText{\lambda} \geq \mathOrText{\lambda}_s$ we have $\E{T} \in \bigOmega{2^{\mathOrText{n}^c}}$. \end{itemize} We only keep track of the number of vertices in each of the sets. To this end, we define for all $\mathOrText{t} \in \mathOrText{\mathbf{R}}_{\geq0}$ the random variables $\susceptibleContinuous{\mathOrText{t}} = |\susceptibleSet{\mathOrText{t}}|$, $\infectedContinuous{\mathOrText{t}} = |\infectedSet{\mathOrText{t}}|$, and $\recoveredContinuous{\mathOrText{t}} = |\recoveredSet{\mathOrText{t}}|$. These random variables change depending on the Poisson clocks in $P$. We say that an event \emph{happens at a rate of $r$} if the rates of the Poisson clocks whose triggering causes this event sum to~$r$. We use stochastic domination to transfer results from one random variable to another.
We say that a random variable $(X_t)_{t \in \mathOrText{\mathbf{R}}}$ \emph{dominates} another random variable $(Y_t)_{t \in \mathOrText{\mathbf{R}}}$ if there exists a coupling $(X'_t, Y'_t)_{t \in \mathOrText{\mathbf{R}}}$ such that for all $t \geq 0$ we have $X'_t \geq Y'_t$. \subsection{Probabilistic Tools} We use general concepts from probability theory (see for example \cite{feller1957introduction,mitzenmacher2017probability}). In addition, we use the following theorems. We use the optional-stopping theorem for submartingales to bound the probability of reaching a specific state. \begin{theorem}[Optional stopping {\cite[page~$298$]{mitzenmacher2017probability}}]\label{pre:optionalStopping} Let $(X_t)_{t \in \mathOrText{\mathbf{N}}}$ be a submartingale and $T$ a stopping time, both with respect to a filtration $(\filtration_t)_{t \in \mathOrText{\mathbf{N}}}$. Assume that the following two conditions hold: \begin{enumerate} \item $\E{T} < \infty$; \item There is a $c \in \mathOrText{\mathbf{R}}$ such that for all $t\in \mathOrText{\mathbf{N}}$ we have $\E{|X_{t+1}-X_t|}[\filtration_t] \cdot \indicator{t<T} \leq c \cdot \indicator{t<T}$. \end{enumerate} Then $\E{X_T} \geq \E{X_0}$. \end{theorem} We use the following theorem in order to show a super-polynomial survival time for a contact process. We state it in a fashion that better suits our purposes. \begin{theorem}[Negative drift {\cite[Theorem~$4$]{oliveto2011simplified}~\cite{OlivetoW12NegativeDriftErratum}}]\label{pre:NegativeDrift} Let $(X_t)_{t \in \mathOrText{\mathbf{N}}}$ be a random process over~\mathOrText{\mathbf{R}}, adapted to a filtration $(\filtrationContinuous{t})_{t \in \mathOrText{\mathbf{N}}}$. Let there be an interval $[a,b] \subseteq \mathOrText{\mathbf{R}}$, two constants $\delta,\varepsilon \in \mathOrText{\mathbf{R}}_{>0}$ and, possibly depending on $l=b-a$, a function $r(l)$ satisfying $1 \leq r(l) = \smallO{l/\log(l)}$. Let $T = \inf\{ t\geq 0 \mid X_t \geq b \}$.
Suppose that for all $t \in \mathOrText{\mathbf{N}}$ the following two conditions hold: \begin{enumerate} \item $\E{X_{t+1}-X_t}[\filtrationContinuous{t}] \cdot \indicator{a < X_t < b} \leq -\varepsilon \cdot \indicator{a < X_t < b},$ \item For all $j \in \mathOrText{\mathbf{R}}_{\geq 0}$ we have $\Pr{|X_{t+1}-X_t|\geq j}[ \filtrationContinuous{t}] \cdot \indicator{t<T} \leq \frac{r(l)}{(1+\delta)^j} \cdot \indicator{t<T}.$ \end{enumerate} Then there exists a constant $c \in \mathOrText{\mathbf{R}}_{>0}$ such that \begin{align*} \Pr{T \leq 2^{cl/r(l)}}[\filtrationContinuous{0}][\big] \cdot \indicator{X_0 \leq a} &= 2^{-\bigOmega{l/r(l)}} \cdot \indicator{X_0 \leq a}.\qedhere \end{align*} \end{theorem} We derive polynomial upper bounds for the survival time via the following theorem. \begin{theorem}[Additive drift {\cite[Theorem~$4$]{KOTZING201951}, \cite{HeY01AdditiveDrift}}]\label{pre:additiveDrift} Let $(X_t)_{t \in \mathOrText{\mathbf{N}}}$ be a random process over $\mathOrText{\mathbf{R}}$, adapted to a filtration $(\filtration_t)_{t \in \mathOrText{\mathbf{N}}}$. Further, let $T = \inf\{t \mid X_t \leq 0\}$, and let $\delta \in \mathOrText{\mathbf{R}}_{>0}$. Suppose that for all $t \in \mathOrText{\mathbf{N}}$ the following two conditions hold: \begin{enumerate} \item $X_t \cdot \indicator{t \leq T} \geq 0$, \item $\E{X_t - X_{t+1}}[\filtration_t] \cdot \indicator{t < T} \geq \delta \cdot \indicator{t<T}$. \end{enumerate} Then $\E{T}[\filtration_0] \leq X_0/\delta$. \end{theorem} The following theorem bounds the expected value of the maximum of $n$ exponentially distributed random variables. \begin{theorem}[{\cite[page~$33$]{mitzenmacher2017probability}}]\label{pre:maxExponential} Let $n \in \mathOrText{\mathbf{N}}_{> 0}$, and let $\{X_i\}_{i \in [n]}$ be independent random variables that are each exponentially distributed with parameter $\lambda \in \mathOrText{\mathbf{R}}_{> 0}$. Let $m = \max_{i \in [n]} X_i$, and let $H_n$ be the $n$-th harmonic number. 
Then \begin{align*} \E{m} &= \frac{H_n}{\lambda} < \frac{1 + \ln(n+1)}{\lambda}.\qedhere \end{align*} \end{theorem} We use the following version of Wald's equation, which does not require the addends to be independent. \begin{theorem}[Generalized Wald's equation~{\cite[Theorem~$5$]{DoerrK22WaldsEquation}}]\label{pre:wald} Let $c,c' \in \mathOrText{\mathbf{R}}$, and let $(X_t)_{t \in \mathOrText{\mathbf{N}}}$ be a random process over $\mathOrText{\mathbf{R}}_{\geq c}$ such that $\sum_{i \in [S]}{X_i}$ is integrable. Furthermore, let $(\filtration_t)_{t \in \mathOrText{\mathbf{N}}}$ be a filtration, and let $S$ be a stopping time with respect to $\filtration$. If for all $i \in \mathOrText{\mathbf{N}}$ we have $\E{X_{i+1}}[\filtration_i] \leq c'$, then \begin{align*} &\E{\sum\nolimits_{i \in [S]} {X_i}}[\filtration_0] = \E{\sum\nolimits_{i \in [S]} {\E{X_i} [ \filtration_{i-1}]}}[\filtration_0].\qedhere \end{align*} \end{theorem} Some of the processes that we analyze are very similar to the well-known gambler's ruin problem, as they increase and decrease by one with certain probabilities until they reach a limit in either direction. We consider the following version of the gambler's ruin problem. \begin{theorem}[Gambler's ruin~{\cite[page~$345$]{feller1957introduction}}]\label{pre:gamblersRuin} Let $(P_t)_{t \in \mathOrText{\mathbf{N}}}$ be the amount of money that a player has in a gambler's ruin game that has a probability of $p \neq 1/2$ for them to win in each step. Let $q=1-p$. The game ends at time $T$ when the player either reaches the lower bound $l$ or the upper bound $u$ of money. Then \begin{enumerate} \item $\Pr{P_T = l} = \frac{1-(p/q)^{u-P_0}}{1-(p/q)^{u-l}}$; \item $\Pr{P_T = u} = \frac{1-(q/p)^{P_0-l}}{1-(q/p)^{u-l}}$.\qedhere \end{enumerate} \end{theorem} \subsection{The SIRS Contact Process}\label{sec:SIRSCliqueBasic} Let $G=(V,E)$ be a clique with $\mathOrText{n}$ vertices.
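As a quick numeric sanity check of the gambler's-ruin probabilities above (an illustrative sketch, not part of the proofs), one can compare the closed form for $\Pr{P_T = u}$ with a direct solution of the recurrence $h(x) = p\,h(x+1) + q\,h(x-1)$ with boundary values $h(l) = 0$ and $h(u) = 1$:

```python
def ruin_prob_upper(p, l, u, x0):
    """Closed form for the probability of reaching u before l
    (case 2 of the gambler's ruin theorem), for p != 1/2."""
    q = 1 - p
    return (1 - (q / p) ** (x0 - l)) / (1 - (q / p) ** (u - l))

def ruin_prob_upper_numeric(p, l, u, x0):
    """Solve h(x) = p*h(x+1) + q*h(x-1), h(l) = 0, h(u) = 1, by forward
    substitution: write h(x) = c[x] * h(l+1) and normalize at x = u."""
    q = 1 - p
    c = {l: 0.0, l + 1: 1.0}
    for x in range(l + 1, u):
        c[x + 1] = (c[x] - q * c[x - 1]) / p
    return c[x0] / c[u]

p, l, u, x0 = 0.6, 0, 10, 3
assert abs(ruin_prob_upper(p, l, u, x0) - ruin_prob_upper_numeric(p, l, u, x0)) < 1e-9
# the two cases of the theorem are complementary
q = 1 - p
lower = (1 - (p / q) ** (u - x0)) / (1 - (p / q) ** (u - l))
assert abs(lower + ruin_prob_upper(p, l, u, x0) - 1) < 1e-9
```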
Consider a contact process \mathOrText{C} with infection rate \mathOrText{\lambda} and deimmunization rate \mathOrText{\varrho} on $G$. We define for all $\mathOrText{t} \in \mathOrText{\mathbf{N}}$ the random variables $\susceptibleDiscreteShifted{\mathOrText{t}} = \susceptibleDiscrete{\mathOrText{t}} + \frac{\mathOrText{\varrho}}{\mathOrText{\lambda}}$ and $\recoveredDiscreteShifted{\mathOrText{t}} = \recoveredDiscrete{\mathOrText{t}}+ \frac{\mathOrText{\varrho}}{\mathOrText{\lambda}}$. We use these two random variables to define the potential later. Note that, at all times $t$, $\susceptibleDiscrete{\mathOrText{t}} + \infectedDiscrete{\mathOrText{t}} + \recoveredDiscrete{\mathOrText{t}} = \mathOrText{n}$, since every vertex of $G$ is always in exactly one of these three sets. Additionally, $\susceptibleDiscreteShifted{\mathOrText{t}} + \infectedDiscrete{\mathOrText{t}} + \recoveredDiscrete{\mathOrText{t}} = \mathOrText{n} + \frac{\mathOrText{\varrho}}{\mathOrText{\lambda}} = \mathOrText{n'}$. For all $\mathOrText{t} \in \mathOrText{\mathbf{N}}$, one of the following three events occurs at step $\mathOrText{t}+1$ ($\tau_{t+1}$): either a susceptible vertex gets infected, which we call \eventInfection{\mathOrText{t}}; an infected vertex recovers, which we call \eventRecover{\mathOrText{t}}; or a recovered vertex loses its immunity, which we call \eventSusceptible{\mathOrText{t}}. At the time point \timeContinuous{\mathOrText{t}}, vertices get infected at a rate of $\rateInfection{\mathOrText{t}} = \mathOrText{\lambda} \infectedDiscrete{\mathOrText{t}} \susceptibleDiscrete{\mathOrText{t}} = \mathOrText{\lambda} \infectedDiscrete{\mathOrText{t}} \susceptibleDiscreteShifted{\mathOrText{t}} - \mathOrText{\varrho} \infectedDiscrete{\mathOrText{t}}$ because every infected vertex infects each susceptible vertex at a rate of \mathOrText{\lambda}.
Vertices recover from an infection at a rate of $\rateRecover{\mathOrText{t}} = \infectedDiscrete{\mathOrText{t}}$ and lose their immunity at a rate of $\rateSusceptible{\mathOrText{t}} = \mathOrText{\varrho} \recoveredDiscrete{\mathOrText{t}}$. Now let $\rateTotal{\mathOrText{t}} = \rateInfection{\mathOrText{t}} + \rateRecover{\mathOrText{t}} + \rateSusceptible{\mathOrText{t}}$. We get \begin{align*} \probabilityInfection{\mathOrText{t}} &= \Pr{\eventInfection{\mathOrText{t}}} = \frac{\rateInfection{\mathOrText{t}}}{\rateTotal{\mathOrText{t}}} = \frac{\mathOrText{\lambda} \infectedDiscrete{\mathOrText{t}} \susceptibleDiscreteShifted{\mathOrText{t}} - \mathOrText{\varrho} \infectedDiscrete{\mathOrText{t}}}{\rateTotal{\mathOrText{t}}},\\ \probabilityRecover{\mathOrText{t}} &= \Pr{\eventRecover{\mathOrText{t}}} = \frac{\rateRecover{\mathOrText{t}}}{\rateTotal{\mathOrText{t}}} = \frac{\infectedDiscrete{\mathOrText{t}}}{\rateTotal{\mathOrText{t}}},\textrm{ and }\\ \probabilitySusceptible{\mathOrText{t}} &= \Pr{\eventSusceptible{\mathOrText{t}}} = \frac{\rateSusceptible{\mathOrText{t}}}{\rateTotal{\mathOrText{t}}} = \frac{\mathOrText{\varrho} \recoveredDiscrete{\mathOrText{t}}}{\rateTotal{\mathOrText{t}}}. \end{align*} Note that we only consider these probabilities in states in which at least one vertex is infected; hence $\rateTotal{\mathOrText{t}} \neq 0$ and the above probabilities are well-defined. We now define \mathOrText{S^*}, \mathOrText{P^*}, \mathOrText{I^*}, \mathOrText{R^*} and \mathOrText{Q^*} as the values of the process at the equilibrium state where all three events are equally likely. We calculate \mathOrText{S^*} and \mathOrText{P^*} by solving $\probabilityInfection{\mathOrText{t}}= \probabilityRecover{\mathOrText{t}}$ for \susceptibleDiscrete{\mathOrText{t}} or \susceptibleDiscreteShifted{\mathOrText{t}}, respectively.
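These transition probabilities, and the identity $\mathOrText{\lambda} \infectedDiscrete{\mathOrText{t}} \susceptibleDiscrete{\mathOrText{t}} = \mathOrText{\lambda} \infectedDiscrete{\mathOrText{t}} \susceptibleDiscreteShifted{\mathOrText{t}} - \mathOrText{\varrho} \infectedDiscrete{\mathOrText{t}}$ behind them, can be verified numerically. The following sketch (with arbitrary example values, not part of the formal development) computes them for a given state:

```python
def sirs_step_probs(n, lam, rho, i, r):
    """Transition probabilities of the clique SIRS process in a state
    with i >= 1 infected and r recovered vertices."""
    s = n - i - r                        # susceptible vertices
    s_shift = s + rho / lam              # shifted count, \tilde{S}_t
    rate_inf = lam * i * s
    rate_rec = i
    rate_sus = rho * r
    total = rate_inf + rate_rec + rate_sus
    # identity used above: lam * I * S = lam * I * \tilde{S} - rho * I
    assert abs(rate_inf - (lam * i * s_shift - rho * i)) < 1e-9
    return rate_inf / total, rate_rec / total, rate_sus / total

p_i, p_r, p_s = sirs_step_probs(n=100, lam=2 / 100, rho=0.5, i=10, r=20)
assert abs(p_i + p_r + p_s - 1) < 1e-12   # the three events are exhaustive
```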
The values for \mathOrText{I^*} and \mathOrText{R^*} are then calculated with the equations $\mathOrText{I^*} + \mathOrText{R^*} = \mathOrText{n} - \mathOrText{S^*}$ and $\probabilityRecover{\mathOrText{t}} = \probabilitySusceptible{\mathOrText{t}}$. We get \begin{align*} \mathOrText{S^*} &= \frac{1}{\mathOrText{\lambda}},\\ \mathOrText{P^*} &= \frac{1 + \mathOrText{\varrho}}{\mathOrText{\lambda}},\\ \mathOrText{I^*} &= \frac{\mathOrText{\varrho} (\mathOrText{n} - \frac{1}{\mathOrText{\lambda}})}{1 + \mathOrText{\varrho}},\\ \mathOrText{R^*} &= \frac{(\mathOrText{n} - \frac{1}{\mathOrText{\lambda}})}{1 + \mathOrText{\varrho}}, \textrm{ and}\\ \mathOrText{Q^*} &= \mathOrText{R^*} + \frac{\mathOrText{\varrho}}{\mathOrText{\lambda}}. \end{align*} \subsection{Super-polynomial survival threshold}\label{sec:SIRSCliqueThreshold} We now aim to show that the infection becomes epidemic when $\mathOrText{\lambda} = \frac{\mathOrText{c}}{\mathOrText{n}}$ for some $\mathOrText{c} \in \mathOrText{\mathbf{R}}_{>1}$ that is constant with respect to \mathOrText{n}. We start by proving that, when starting with one infected vertex, the infection reaches a state with at least $\varepsilon \mathOrText{n}$ infected vertices with sufficiently large probability. \begin{lemma}\label{lem:farFromEdgeSIRS} Let $G$ be a clique of size $\mathOrText{n} \in \mathOrText{\mathbf{N}}_{>0}$ and let \mathOrText{C} be a contact process in the SIRS model on $G$ with infection rate $\mathOrText{\lambda} = \frac{\mathOrText{c}}{\mathOrText{n}}$ for a constant $\mathOrText{c} \in \mathOrText{\mathbf{R}}_{>1}$ and with constant deimmunization rate \mathOrText{\varrho}. Also let \mathOrText{C} start with exactly one infected vertex and no recovered vertices. 
Then there exists an $\varepsilon \in \mathOrText{\mathbf{R}}_{>0}$ such that for sufficiently large \mathOrText{n}, the probability that there exists a $\mathOrText{t} \in \mathOrText{\mathbf{N}}$ with $\infectedDiscrete{\mathOrText{t}} \geq \varepsilon \mathOrText{n}$ is at least $\frac{1}{\mathOrText{n}+2}$. \end{lemma} \begin{proof} Let $\mathOrText{c'}=\mathOrText{c}-1$. Note that \mathOrText{c'} is positive because $\mathOrText{c}>1$. Let $\varepsilon = \frac{\mathOrText{c'}}{2+\mathOrText{c'}}/(1+\frac{2+\mathOrText{c'}}{\mathOrText{c'}})$. We define for all $\mathOrText{t} \in \mathOrText{\mathbf{N}}$ the potential $\potentialIMinusR{\mathOrText{t}} =\potentialFunctionIMinusR[\infectedDiscrete{\mathOrText{t}},\recoveredDiscrete{\mathOrText{t}}]= \infectedDiscrete{\mathOrText{t}} - \frac{\mathOrText{c'}}{2+\mathOrText{c'}}\recoveredDiscrete{\mathOrText{t}}$. Additionally, we define the stopping time $T = \inf\{\mathOrText{t} \in \mathOrText{\mathbf{N}} \mid \potentialIMinusR{\mathOrText{t}} \leq 0 \lor \susceptibleDiscrete{\mathOrText{t}} < \frac{2}{2 + \mathOrText{c'}}\mathOrText{n}\}$ and the natural filtration $(\filtrationContinuous{\mathOrText{t}})_{\mathOrText{t} \in \mathOrText{\mathbf{R}}_{\geq 0}}$ of \mathOrText{C}. We aim to show that $(\potentialIMinusR{\mathOrText{t}})_{\mathOrText{t} \in \mathOrText{\mathbf{N}}}$ is a submartingale until $T$. This allows us to apply the optional-stopping theorem (\Cref{pre:optionalStopping}) to lower bound $\E{\potentialIMinusR{T}}$. The law of total expectation then gives us a lower bound of $\frac{1}{\mathOrText{n}+2}$ for $\Pr{\potentialIMinusR{T} > 0}$. We conclude the proof by showing that if $\potentialIMinusR{T} > 0$, then $\infectedDiscrete{T} \geq \varepsilon \mathOrText{n}$. We first bound for all $\mathOrText{t} \in \mathOrText{\mathbf{N}}$ the drift $\E{(\potentialIMinusR{\mathOrText{t}+1}-\potentialIMinusR{\mathOrText{t}}) \cdot 1_{t<T}} [\filtrationDiscrete{\mathOrText{t}}]$.
To improve readability, we omit the multiplicative indicator $1_{t<T}$ in all of the terms. \begin{align*} \E{\potentialIMinusR{\mathOrText{t}+1}-\potentialIMinusR{\mathOrText{t}}} [\filtrationDiscrete{\mathOrText{t}}] &= \probabilityInfection{\mathOrText{t}} (\potentialFunctionIMinusR[\infectedDiscrete{\mathOrText{t}}+1,\recoveredDiscrete{\mathOrText{t}}]-\potentialIMinusR{\mathOrText{t}}) + \probabilityRecover{\mathOrText{t}} (\potentialFunctionIMinusR[\infectedDiscrete{\mathOrText{t}}-1,\recoveredDiscrete{\mathOrText{t}}+1]-\potentialIMinusR{\mathOrText{t}}) + \probabilitySusceptible{\mathOrText{t}} (\potentialFunctionIMinusR[\infectedDiscrete{\mathOrText{t}},\recoveredDiscrete{\mathOrText{t}}-1]-\potentialIMinusR{\mathOrText{t}})\\ &= \probabilityInfection{\mathOrText{t}} - \probabilityRecover{\mathOrText{t}}(1+\frac{\mathOrText{c'}}{2+\mathOrText{c'}}) + \probabilitySusceptible{\mathOrText{t}} \frac{\mathOrText{c'}}{2+\mathOrText{c'}}\\ &= \left(\mathOrText{\lambda} \susceptibleDiscrete{\mathOrText{t}} \infectedDiscrete{\mathOrText{t}} - \infectedDiscrete{\mathOrText{t}}(1+\frac{\mathOrText{c'}}{2+\mathOrText{c'}}) + \mathOrText{\varrho} \recoveredDiscrete{\mathOrText{t}} \frac{\mathOrText{c'}}{2+\mathOrText{c'}}\right)/\rateTotal{\mathOrText{t}}\\ &\geq \left(\frac{1+ \mathOrText{c'}}{\mathOrText{n}} \frac{2}{2 + \mathOrText{c'}}\mathOrText{n} \infectedDiscrete{\mathOrText{t}} - \infectedDiscrete{\mathOrText{t}}(1+\frac{\mathOrText{c'}}{2+\mathOrText{c'}}) + \mathOrText{\varrho} \recoveredDiscrete{\mathOrText{t}} \frac{\mathOrText{c'}}{2+\mathOrText{c'}}\right)/\rateTotal{\mathOrText{t}}\\ &= \frac{\mathOrText{\varrho} \recoveredDiscrete{\mathOrText{t}} \mathOrText{c'}}{(2+\mathOrText{c'})\rateTotal{\mathOrText{t}}}\\ &\geq 0.
\end{align*} Note that $\E{T} < \infty$, because in each step $\mathOrText{t} \in \mathOrText{\mathbf{N}}_{<T}$, there is a non-zero probability (independent of \mathOrText{t}) to heal a vertex, hence there is always a non-zero probability to heal all vertices within the next \mathOrText{n} steps, which stops the process. Therefore, by applying the optional-stopping theorem (\Cref{pre:optionalStopping}) we get $\E{\potentialIMinusR{T}} \geq \E{\potentialIMinusR{0}}$. By the law of total expectation, we get that \begin{align*} \E{\potentialIMinusR{T}} &= \E{\potentialIMinusR{T}} [ \potentialIMinusR{T} \leq 0] \cdot \Pr{\potentialIMinusR{T} \leq 0} + \E{\potentialIMinusR{T}} [ \potentialIMinusR{T} > 0] \cdot \Pr{\potentialIMinusR{T} > 0}\\ &=\E{\potentialIMinusR{T}} [ \potentialIMinusR{T} \leq 0] \cdot (1-\Pr{\potentialIMinusR{T} > 0}) + \E{\potentialIMinusR{T}} [ \potentialIMinusR{T} > 0] \cdot \Pr{\potentialIMinusR{T} > 0}. \end{align*} Because of the definition of $T$ and the fact that \mathOrText{H} changes by at most $1+ \frac{\mathOrText{c'}}{2+\mathOrText{c'}} \leq 2$ in one step, we get that $\potentialIMinusR{T} \geq -2$. We also know that $\potentialIMinusR{T} \leq \mathOrText{n}$ as $\infectedDiscrete{T} \leq \mathOrText{n}$. By definition of \mathOrText{C}, we have $\potentialIMinusR{0}=1$. By substituting \E{\potentialIMinusR{T}} in $\E{\potentialIMinusR{T}} \geq \E{\potentialIMinusR{0}}$ and solving for \Pr{\potentialIMinusR{T} > 0} we get \begin{align*} \Pr{\potentialIMinusR{T} > 0} &\geq \frac{1 - \E{\potentialIMinusR{T}} [ \potentialIMinusR{T} \leq 0]}{\E{\potentialIMinusR{T}} [ \potentialIMinusR{T} > 0] - \E{\potentialIMinusR{T}} [ \potentialIMinusR{T} \leq 0]}\\ & \geq \frac{1}{\mathOrText{n}+2}. \end{align*} Now assume $\potentialIMinusR{T} > 0$. By the definition of $T$, we then have $\susceptibleDiscrete{T} < \frac{2}{2 + \mathOrText{c'}}\mathOrText{n}$.
Therefore $$\infectedDiscrete{T} + \recoveredDiscrete{T} = \mathOrText{n}- \susceptibleDiscrete{T} > \frac{\mathOrText{c'}}{2+\mathOrText{c'}}\mathOrText{n}.$$ With $\potentialIMinusR{T} > 0$ we then get $\infectedDiscrete{T} > \frac{\mathOrText{c'}}{2+\mathOrText{c'}}\recoveredDiscrete{T}$, which implies \begin{align*} (1+\frac{2+\mathOrText{c'}}{\mathOrText{c'}})\infectedDiscrete{T} &> \frac{\mathOrText{c'}}{2+\mathOrText{c'}}\mathOrText{n}. \qedhere \end{align*} \end{proof} To show that the infection survives for a long time from that point onwards, we define a potential function for the states and analyze its drift. The potential function is an extended version of the Lyapunov function of \cite{korobeinikov2002lyapunov}. We first define a helper function \lyapunovHelper. \begin{definition}\label{def:laypunocHelper} We define \lyapunovHelper such that, for all $x,x^* \in \mathOrText{\mathbf{R}}_{>0}$, we have \begin{align*} \lyapunovHelper[x^*,x] &= x^* \left( \frac{x}{x^*} - \ln \frac{x}{x^*}-1\right).\qedhere \end{align*} \end{definition} Note that the derivative $\frac{\mathrm{d}}{\mathrm{d}x}\lyapunovHelper[x^*,x]=1- \frac{x^*}{x}$, hence for a given $x^* \in \mathOrText{\mathbf{R}}_{>0}$, $x = x^*$ is the only local optimum of $\lyapunovHelper[x^*,x]$ and it is a global minimum. We now define the potential function that we use in the following lemmas. \begin{definition}\label{def:lyapunovPotential} Let $G$ be a clique of size $\mathOrText{n} \in \mathOrText{\mathbf{N}}_{>0}$ and let \mathOrText{C} be a contact process in the SIRS model on $G$ with infection rate $\mathOrText{\lambda} = \frac{\mathOrText{c}}{\mathOrText{n}}$ for a constant $\mathOrText{c} \in \mathOrText{\mathbf{R}}_{>1}$ and with constant deimmunization rate \mathOrText{\varrho}.
Let $\mathOrText{n'} = \mathOrText{n} + \frac{\mathOrText{\varrho}}{\mathOrText{\lambda}}$ and let \begin{align*} \mathOrText{\alpha} &= \frac{(1+\mathOrText{\varrho})^2\mathOrText{n}}{\mathOrText{c}^2\mathOrText{\varrho} \mathOrText{n'}}\left(1+\frac{\mathOrText{c}}{1+\mathOrText{\varrho}}\right),\\ \mathOrText{\beta} &= \frac{\mathOrText{\varrho}}{\mathOrText{c}}. \end{align*} For all $\mathOrText{t} \in \mathOrText{\mathbf{N}}$, we define $\potentialSIRSClique{\mathOrText{\beta}}{\mathOrText{t}}$ as $$\potentialSIRSClique{\mathOrText{\beta}}{\mathOrText{t}} = \lyapunovFunction{\mathOrText{\beta}}[\susceptibleDiscreteShifted{\mathOrText{t}},\infectedDiscrete{\mathOrText{t}},\recoveredDiscreteShifted{\mathOrText{t}}] = \mathOrText{\alpha} \left(\lyapunovHelper[\mathOrText{P^*},\susceptibleDiscreteShifted{\mathOrText{t}}]+ \lyapunovHelper[\mathOrText{I^*},\infectedDiscrete{\mathOrText{t}}]\right) + \mathOrText{\beta} \lyapunovHelper[\mathOrText{Q^*},\recoveredDiscreteShifted{\mathOrText{t}}].$$ Let $(\filtrationContinuous{\mathOrText{t}})_{\mathOrText{t} \in \mathOrText{\mathbf{R}}_{\geq 0}}$ be the natural filtration of \mathOrText{C}. We define for all $\mathOrText{t} \in \mathOrText{\mathbf{N}}$ the drift $\driftSIRSClique{\mathOrText{\beta}}{\mathOrText{t}}$ as \begin{align*} \driftSIRSClique{\mathOrText{\beta}}{\mathOrText{t}} &= \E{\potentialSIRSClique{\mathOrText{\beta}}{\mathOrText{t}+1}-\potentialSIRSClique{\mathOrText{\beta}}{\mathOrText{t}}}[\filtrationDiscrete{\mathOrText{t}}]. \qedhere \end{align*} \end{definition} The potential \lyapunovFunction{\mathOrText{\beta}} is minimized at the equilibrium state and becomes larger at states further away. We aim to show that the process tends to drift towards the equilibrium state. To calculate the differences of the \lyapunovFunction{\mathOrText{\beta}} values in the drift, we first have a look at \lyapunovHelper. 
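Before bounding these differences, we record as a sanity check that the potential is non-negative: for all $x,x^* \in \mathOrText{\mathbf{R}}_{>0}$, since $x = x^*$ is the global minimum of $\lyapunovHelper[x^*,x]$ as noted above, we have \begin{align*} \lyapunovHelper[x^*,x] \geq \lyapunovHelper[x^*,x^*] = x^* \left( \frac{x^*}{x^*} - \ln \frac{x^*}{x^*}-1\right) = x^* (1 - 0 - 1) = 0, \end{align*} so in particular $\potentialSIRSClique{\mathOrText{\beta}}{\mathOrText{t}} \geq 0$ for all $\mathOrText{t} \in \mathOrText{\mathbf{N}}$, with value $0$ exactly at the equilibrium.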
\begin{lemma}\label{lem:lyapunovHelper} Let $x^* \in \mathOrText{\mathbf{R}}_{>0}$ and $x \in \mathOrText{\mathbf{R}}_{>2}$. Then \begin{align*} &\lyapunovHelper[x^*,x+1] - \lyapunovHelper[x^*,x] \leq 1 - \frac{x^*}{x} + \frac{x^*}{x(x+1)}\\ \text{and }&\lyapunovHelper[x^*,x-1] - \lyapunovHelper[x^*,x] \leq -\left(1 - \frac{x^*}{x} - \frac{x^*}{x(x-1)}\right).\qedhere \end{align*} \end{lemma} \begin{proof} We use that, for all $y \in \mathOrText{\mathbf{R}}_{>1}$, it holds that $$\frac{1}{y+1} < \ln(y+1)-\ln(y) < \frac{1}{y}.$$ Together with the definition of \lyapunovHelper, we have \begin{align*} \lyapunovHelper[x^*,x+1] - \lyapunovHelper[x^*,x] &= x^* \left( \frac{x+1}{x^*} - \ln \frac{x+1}{x^*}-1\right) - x^* \left( \frac{x}{x^*} - \ln \frac{x}{x^*}-1\right)\\ &= 1 - x^* (\ln(x+1)-\ln(x))\\ &\leq 1 - \frac{x^*}{x+1}. \end{align*} For the second part we get \begin{align*} \lyapunovHelper[x^*,x-1] - \lyapunovHelper[x^*,x] &= x^* \left( \frac{x-1}{x^*} - \ln \frac{x-1}{x^*}-1\right) - x^* \left( \frac{x}{x^*} - \ln \frac{x}{x^*}-1\right)\\ &= -1 + x^* (\ln(x)-\ln(x-1))\\ &\leq -(1 - \frac{x^*}{x-1}). \end{align*} Noting that $\frac{x^*}{x+1} = \frac{x^*}{x}-\frac{x^*}{x(x+1)}$ and $\frac{x^*}{x-1} = \frac{x^*}{x}+\frac{x^*}{x(x-1)}$ concludes the proof. \end{proof} We now show the following lemma, which states that the drift \driftSIRSClique{\mathOrText{\beta}}{\mathOrText{t}} is upper bounded by a negative constant for states in which the random variables are far enough away from the equilibrium and from 0. \begin{lemma}\label{lem:constantDriftSIRS} Let $G$ be a clique of size $\mathOrText{n} \in \mathOrText{\mathbf{N}}_{>0}$ and let \mathOrText{C} be a contact process in the SIRS model on $G$ with infection rate $\mathOrText{\lambda} = \frac{\mathOrText{c}}{\mathOrText{n}}$ for a constant $\mathOrText{c} \in \mathOrText{\mathbf{R}}_{>1}$ and with constant deimmunization rate \mathOrText{\varrho}.
Let $\mathOrText{t} \in \mathOrText{\mathbf{N}}$ and $\varepsilon, \delta \in (0,1)$ be constants. We define \deltaP{\mathOrText{t}}, \deltaI{\mathOrText{t}} and \deltaR{\mathOrText{t}} such that $$\susceptibleDiscreteShifted{\mathOrText{t}} = \mathOrText{P^*} + \deltaP{\mathOrText{t}} \cdot \mathOrText{n} , \infectedDiscrete{\mathOrText{t}} = \mathOrText{I^*} + \deltaI{\mathOrText{t}} \cdot \mathOrText{n} ,\recoveredDiscrete{\mathOrText{t}} = \mathOrText{R^*} + \deltaR{\mathOrText{t}} \cdot \mathOrText{n}.$$ Now assume that the following conditions hold \begin{align*} \infectedDiscrete{\mathOrText{t}} \geq \varepsilon \mathOrText{n},\\ |\deltaP{\mathOrText{t}}| + |\deltaI{\mathOrText{t}}| + |\deltaR{\mathOrText{t}}| \geq \delta. \end{align*} Then there exists a constant $d \in \mathOrText{\mathbf{R}}_{>0}$ such that $\driftSIRSClique{\mathOrText{\beta}}{\mathOrText{t}} \leq -d$ for sufficiently large \mathOrText{n}. \end{lemma} \begin{proof} For this proof, we first use the law of total expectation and \Cref{lem:lyapunovHelper} to get a large formula as an upper bound for $\rateTotal{\mathOrText{t}} \driftSIRSClique{\mathOrText{\beta}}{\mathOrText{t}}$. We split that bound into two parts and simplify each part separately. We show that, with the given conditions, one of the parts is upper bounded by a constant and the other part is at most $-m \mathOrText{n}$ for some constant $m \in \mathOrText{\mathbf{R}}_{>0}$. We conclude the proof by bounding $\rateTotal{\mathOrText{t}}$ and dividing the obtained bound for $\rateTotal{\mathOrText{t}} \driftSIRSClique{\mathOrText{\beta}}{\mathOrText{t}}$ by it. 
Using the law of total expectation and \Cref{lem:lyapunovHelper}, we get \begin{align*} \rateTotal{\mathOrText{t}} \cdot \driftSIRSClique{\mathOrText{\beta}}{\mathOrText{t}} &=\rateInfection{\mathOrText{t}} \cdot (\lyapunovFunction{\mathOrText{\beta}}[\susceptibleDiscreteShifted{\mathOrText{t}}-1,\infectedDiscrete{\mathOrText{t}}+1,\recoveredDiscreteShifted{\mathOrText{t}}]- \lyapunovFunction{\mathOrText{\beta}}[\susceptibleDiscreteShifted{\mathOrText{t}},\infectedDiscrete{\mathOrText{t}},\recoveredDiscreteShifted{\mathOrText{t}}])\\ &\quad + \rateRecover{\mathOrText{t}} \cdot (\lyapunovFunction{\mathOrText{\beta}}[\susceptibleDiscreteShifted{\mathOrText{t}},\infectedDiscrete{\mathOrText{t}}-1,\recoveredDiscreteShifted{\mathOrText{t}}+1]- \lyapunovFunction{\mathOrText{\beta}}[\susceptibleDiscreteShifted{\mathOrText{t}},\infectedDiscrete{\mathOrText{t}},\recoveredDiscreteShifted{\mathOrText{t}}])\\ &\quad + \rateSusceptible{\mathOrText{t}} \cdot (\lyapunovFunction{\mathOrText{\beta}}[\susceptibleDiscreteShifted{\mathOrText{t}}+1,\infectedDiscrete{\mathOrText{t}},\recoveredDiscreteShifted{\mathOrText{t}}-1]- \lyapunovFunction{\mathOrText{\beta}}[\susceptibleDiscreteShifted{\mathOrText{t}},\infectedDiscrete{\mathOrText{t}},\recoveredDiscreteShifted{\mathOrText{t}}])\\ &\leq \rateInfection{\mathOrText{t}} \cdot \left(\mathOrText{\alpha} (1 - \frac{\mathOrText{I^*}}{\infectedDiscrete{\mathOrText{t}}} + \frac{\mathOrText{I^*}}{\infectedDiscrete{\mathOrText{t}}(\infectedDiscrete{\mathOrText{t}}+1)}) - \mathOrText{\alpha} (1 - \frac{\mathOrText{P^*}}{\susceptibleDiscreteShifted{\mathOrText{t}}} - \frac{\mathOrText{P^*}}{\susceptibleDiscreteShifted{\mathOrText{t}}(\susceptibleDiscreteShifted{\mathOrText{t}}-1)})\right)\\ &\quad + \rateRecover{\mathOrText{t}} \cdot \left(\mathOrText{\beta} (1 - \frac{\mathOrText{Q^*}}{\recoveredDiscreteShifted{\mathOrText{t}}} + \frac{\mathOrText{Q^*}}{\recoveredDiscreteShifted{\mathOrText{t}}(\recoveredDiscreteShifted{\mathOrText{t}}+1)}) - 
\mathOrText{\alpha} (1 - \frac{\mathOrText{I^*}}{\infectedDiscrete{\mathOrText{t}}} - \frac{\mathOrText{I^*}}{\infectedDiscrete{\mathOrText{t}}(\infectedDiscrete{\mathOrText{t}}-1)})\right)\\ &\quad + \rateSusceptible{\mathOrText{t}} \cdot \left(\mathOrText{\alpha} (1 - \frac{\mathOrText{P^*}}{\susceptibleDiscreteShifted{\mathOrText{t}}} + \frac{\mathOrText{P^*}}{\susceptibleDiscreteShifted{\mathOrText{t}}(\susceptibleDiscreteShifted{\mathOrText{t}}+1)}) - \mathOrText{\beta} (1 - \frac{\mathOrText{Q^*}}{\recoveredDiscreteShifted{\mathOrText{t}}} - \frac{\mathOrText{Q^*}}{\recoveredDiscreteShifted{\mathOrText{t}}(\recoveredDiscreteShifted{\mathOrText{t}}-1)})\right)\\ &= \mathOrText{\alpha} \left( (1-\frac{\mathOrText{I^*}}{\infectedDiscrete{\mathOrText{t}}}) (\rateInfection{\mathOrText{t}} - \rateRecover{\mathOrText{t}}) + (1-\frac{\mathOrText{P^*}}{\susceptibleDiscreteShifted{\mathOrText{t}}})(\rateSusceptible{\mathOrText{t}} - \rateInfection{\mathOrText{t}})\right) + \mathOrText{\beta} (1-\frac{\mathOrText{Q^*}}{\recoveredDiscreteShifted{\mathOrText{t}}})(\rateRecover{\mathOrText{t}} - \rateSusceptible{\mathOrText{t}})\\ &\quad + \frac{\mathOrText{\alpha} \rateInfection{\mathOrText{t}} \mathOrText{I^*}}{\infectedDiscrete{\mathOrText{t}} (\infectedDiscrete{\mathOrText{t}}+1)} + \frac{\mathOrText{\alpha} \rateInfection{\mathOrText{t}} \mathOrText{P^*}}{\susceptibleDiscreteShifted{\mathOrText{t}} (\susceptibleDiscreteShifted{\mathOrText{t}}-1)} + \frac{\mathOrText{\beta} \rateRecover{\mathOrText{t}} \mathOrText{Q^*}}{\recoveredDiscreteShifted{\mathOrText{t}} (\recoveredDiscreteShifted{\mathOrText{t}}+1)} + \frac{\mathOrText{\alpha} \rateRecover{\mathOrText{t}} \mathOrText{I^*}}{\infectedDiscrete{\mathOrText{t}} (\infectedDiscrete{\mathOrText{t}}-1)} + \frac{\mathOrText{\alpha} \rateSusceptible{\mathOrText{t}} \mathOrText{P^*}}{\susceptibleDiscreteShifted{\mathOrText{t}} (\susceptibleDiscreteShifted{\mathOrText{t}}+1)} + \frac{\mathOrText{\beta} 
\rateSusceptible{\mathOrText{t}} \mathOrText{Q^*}}{\recoveredDiscreteShifted{\mathOrText{t}} (\recoveredDiscreteShifted{\mathOrText{t}}-1)}. \end{align*} For ease of notation, we define \driftSIRSCliquePartOne{\mathOrText{\beta}}{\mathOrText{t}} and \driftSIRSCliquePartTwo{\mathOrText{\beta}}{\mathOrText{t}} such that $\rateTotal{\mathOrText{t}} \driftSIRSClique{\mathOrText{\beta}}{\mathOrText{t}} \leq \driftSIRSCliquePartOne{\mathOrText{\beta}}{\mathOrText{t}} + \driftSIRSCliquePartTwo{\mathOrText{\beta}}{\mathOrText{t}}$ as \begin{align*} \driftSIRSCliquePartOne{\mathOrText{\beta}}{\mathOrText{t}} &= \mathOrText{\alpha} \left( (1-\frac{\mathOrText{I^*}}{\infectedDiscrete{\mathOrText{t}}}) (\rateInfection{\mathOrText{t}} - \rateRecover{\mathOrText{t}}) + (1-\frac{\mathOrText{P^*}}{\susceptibleDiscreteShifted{\mathOrText{t}}})(\rateSusceptible{\mathOrText{t}} - \rateInfection{\mathOrText{t}})\right) + \mathOrText{\beta} (1-\frac{\mathOrText{Q^*}}{\recoveredDiscreteShifted{\mathOrText{t}}})(\rateRecover{\mathOrText{t}} - \rateSusceptible{\mathOrText{t}})\textrm{ and}\\ \driftSIRSCliquePartTwo{\mathOrText{\beta}}{\mathOrText{t}} &= \frac{\mathOrText{\alpha} \rateInfection{\mathOrText{t}} \mathOrText{I^*}}{\infectedDiscrete{\mathOrText{t}} (\infectedDiscrete{\mathOrText{t}}+1)} + \frac{\mathOrText{\alpha} \rateInfection{\mathOrText{t}} \mathOrText{P^*}}{\susceptibleDiscreteShifted{\mathOrText{t}} (\susceptibleDiscreteShifted{\mathOrText{t}}-1)} + \frac{\mathOrText{\beta} \rateRecover{\mathOrText{t}} \mathOrText{Q^*}}{\recoveredDiscreteShifted{\mathOrText{t}} (\recoveredDiscreteShifted{\mathOrText{t}}+1)} + \frac{\mathOrText{\alpha} \rateRecover{\mathOrText{t}} \mathOrText{I^*}}{\infectedDiscrete{\mathOrText{t}} (\infectedDiscrete{\mathOrText{t}}-1)} + \frac{\mathOrText{\alpha} \rateSusceptible{\mathOrText{t}} \mathOrText{P^*}}{\susceptibleDiscreteShifted{\mathOrText{t}} (\susceptibleDiscreteShifted{\mathOrText{t}}+1)} + \frac{\mathOrText{\beta} 
\rateSusceptible{\mathOrText{t}} \mathOrText{Q^*}}{\recoveredDiscreteShifted{\mathOrText{t}} (\recoveredDiscreteShifted{\mathOrText{t}}-1)}. \end{align*} We first upper bound \driftSIRSCliquePartTwo{\mathOrText{\beta}}{\mathOrText{t}}. Note that with the given conditions, all values of \susceptibleDiscreteShifted{\mathOrText{t}}, \infectedDiscrete{\mathOrText{t}}, \recoveredDiscreteShifted{\mathOrText{t}}, \mathOrText{P^*}, \mathOrText{I^*} and \mathOrText{Q^*} are in $\bigTheta{\mathOrText{n}}$. All of \rateInfection{\mathOrText{t}}, \rateRecover{\mathOrText{t}} and \rateSusceptible{\mathOrText{t}} are in $\bigO{\mathOrText{n}}$. As both \mathOrText{\alpha} and \mathOrText{\beta} are constants, \driftSIRSCliquePartTwo{\mathOrText{\beta}}{\mathOrText{t}} is the sum of six terms that are all in $\bigO{1}$ and is thus upper bounded by a constant $b \in \mathOrText{\mathbf{R}}$. We now bound \driftSIRSCliquePartOne{\mathOrText{\beta}}{\mathOrText{t}}. We first simplify the first summand of \driftSIRSCliquePartOne{\mathOrText{\beta}}{\mathOrText{t}} by plugging in the rates, inserting the values of \mathOrText{I^*} and \mathOrText{P^*} where convenient, and using that $\susceptibleDiscreteShifted{\mathOrText{t}} + \infectedDiscrete{\mathOrText{t}} + \recoveredDiscrete{\mathOrText{t}} = \mathOrText{n'}$.
We get \begin{align*} & (1-\frac{\mathOrText{I^*}}{\infectedDiscrete{\mathOrText{t}}}) (\rateInfection{\mathOrText{t}} - \rateRecover{\mathOrText{t}}) + (1-\frac{\mathOrText{P^*}}{\susceptibleDiscreteShifted{\mathOrText{t}}})(\rateSusceptible{\mathOrText{t}} - \rateInfection{\mathOrText{t}})\\ &= (1-\frac{\mathOrText{I^*}}{\infectedDiscrete{\mathOrText{t}}}) (\mathOrText{\lambda} \infectedDiscrete{\mathOrText{t}} \susceptibleDiscreteShifted{\mathOrText{t}} - \mathOrText{\varrho} \infectedDiscrete{\mathOrText{t}} - \infectedDiscrete{\mathOrText{t}}) + (1-\frac{\mathOrText{P^*}}{\susceptibleDiscreteShifted{\mathOrText{t}}})(\mathOrText{\varrho} \recoveredDiscrete{\mathOrText{t}} - \mathOrText{\lambda} \infectedDiscrete{\mathOrText{t}} \susceptibleDiscreteShifted{\mathOrText{t}} + \mathOrText{\varrho} \infectedDiscrete{\mathOrText{t}})\\ &= (1-\frac{\mathOrText{I^*}}{\infectedDiscrete{\mathOrText{t}}}) (\mathOrText{\lambda} \infectedDiscrete{\mathOrText{t}} \susceptibleDiscreteShifted{\mathOrText{t}} - (1+\mathOrText{\varrho}) \infectedDiscrete{\mathOrText{t}}) + (1-\frac{\mathOrText{P^*}}{\susceptibleDiscreteShifted{\mathOrText{t}}})(\mathOrText{\varrho} \mathOrText{n'} - \mathOrText{\varrho} \susceptibleDiscreteShifted{\mathOrText{t}} - \mathOrText{\lambda} \infectedDiscrete{\mathOrText{t}} \susceptibleDiscreteShifted{\mathOrText{t}})\\ &= \mathOrText{\lambda} \infectedDiscrete{\mathOrText{t}} \susceptibleDiscreteShifted{\mathOrText{t}} - (1+\mathOrText{\varrho}) \infectedDiscrete{\mathOrText{t}} - \mathOrText{\lambda} \mathOrText{I^*} \susceptibleDiscreteShifted{\mathOrText{t}} + (1+\mathOrText{\varrho}) \mathOrText{I^*} + \mathOrText{\varrho} \mathOrText{n'}- \mathOrText{\varrho} \susceptibleDiscreteShifted{\mathOrText{t}} - \mathOrText{\lambda} \infectedDiscrete{\mathOrText{t}} \susceptibleDiscreteShifted{\mathOrText{t}} -\mathOrText{\varrho} \mathOrText{n'}\frac{\mathOrText{P^*}}{\susceptibleDiscreteShifted{\mathOrText{t}}} + \mathOrText{\varrho} 
\mathOrText{P^*} + \mathOrText{\lambda} \infectedDiscrete{\mathOrText{t}} \mathOrText{P^*}\\ &= - \frac{\mathOrText{\varrho}(\mathOrText{c}-1)}{1+\mathOrText{\varrho}} \susceptibleDiscreteShifted{\mathOrText{t}} + \mathOrText{\varrho}(\mathOrText{n} - \frac{\mathOrText{n}}{\mathOrText{c}}) + \mathOrText{\varrho} \mathOrText{n'} - \mathOrText{\varrho} \susceptibleDiscreteShifted{\mathOrText{t}} -\mathOrText{\varrho} \mathOrText{n'}\frac{\mathOrText{P^*}}{\susceptibleDiscreteShifted{\mathOrText{t}}} + \frac{\mathOrText{\varrho}(1+\mathOrText{\varrho})\mathOrText{n}}{\mathOrText{c}}\\ &= \mathOrText{\varrho} \left(\mathOrText{n'} +(\mathOrText{n} - \frac{\mathOrText{n}}{\mathOrText{c}}) + \frac{(1+\mathOrText{\varrho})\mathOrText{n}}{\mathOrText{c}} - \frac{\mathOrText{c}-1}{1+\mathOrText{\varrho}} \susceptibleDiscreteShifted{\mathOrText{t}} - \susceptibleDiscreteShifted{\mathOrText{t}} - \mathOrText{n'}\frac{\mathOrText{P^*}}{\susceptibleDiscreteShifted{\mathOrText{t}}}\right) \\ &= \mathOrText{\varrho} \left( \mathOrText{n'} +\mathOrText{n} + \frac{(1+\mathOrText{\varrho})\mathOrText{n}-\mathOrText{n}}{\mathOrText{c}} - \frac{\mathOrText{c}+\mathOrText{\varrho}}{1+\mathOrText{\varrho}} \susceptibleDiscreteShifted{\mathOrText{t}} - \mathOrText{n'}\frac{\mathOrText{P^*}}{\susceptibleDiscreteShifted{\mathOrText{t}}} \right)\\ &= \mathOrText{\varrho} \left( 2\mathOrText{n'} -\mathOrText{n'} \frac{\susceptibleDiscreteShifted{\mathOrText{t}}}{\mathOrText{P^*}}- \mathOrText{n'}\frac{\mathOrText{P^*}}{\susceptibleDiscreteShifted{\mathOrText{t}}} \right) \\ &= -\mathOrText{\varrho} \mathOrText{n'} \frac{\mathOrText{P^*}}{\susceptibleDiscreteShifted{\mathOrText{t}}} \left( 1- \frac{\susceptibleDiscreteShifted{\mathOrText{t}}}{\mathOrText{P^*}} \right)^2. \end{align*} Note that these calculations are the same as the calculations in \cite{korobeinikov2002lyapunov} as the considered term is exactly the derivative of their Lyapunov function. 
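For completeness, the final equality above is a completed square: abbreviating $x = \frac{\susceptibleDiscreteShifted{\mathOrText{t}}}{\mathOrText{P^*}}$, the term in the parentheses is $\mathOrText{n'} (2 - x - \frac{1}{x})$, and \begin{align*} 2 - x - \frac{1}{x} = -\frac{1}{x}\left(x^2 - 2x + 1\right) = -\frac{1}{x} \left( 1- x \right)^2, \end{align*} which, multiplied by $\mathOrText{\varrho} \mathOrText{n'}$, is exactly the stated expression.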
We aim to represent $\driftSIRSCliquePartOne{\mathOrText{\beta}}{\mathOrText{t}}$ by using \deltaP{\mathOrText{t}}, \deltaI{\mathOrText{t}} and \deltaR{\mathOrText{t}}. By their definitions, we get \[\frac{\susceptibleDiscreteShifted{\mathOrText{t}}}{\mathOrText{P^*}} = \frac{\mathOrText{P^*} + \deltaP{\mathOrText{t}} \cdot \mathOrText{n}}{\mathOrText{P^*}} = 1 + \frac{\deltaP{\mathOrText{t}} \cdot \mathOrText{n}}{\mathOrText{P^*}} = 1 + \frac{\deltaP{\mathOrText{t}} \cdot \mathOrText{c}}{1+\mathOrText{\varrho}}\] and \[1 - \frac{\mathOrText{Q^*}}{\recoveredDiscreteShifted{\mathOrText{t}}} = 1 - \frac{\mathOrText{Q^*}}{\mathOrText{Q^*} + \deltaR{\mathOrText{t}} \cdot \mathOrText{n}} = \frac{\deltaR{\mathOrText{t}} \cdot \mathOrText{n}}{\mathOrText{Q^*} + \deltaR{\mathOrText{t}} \cdot \mathOrText{n}} = \frac{\deltaR{\mathOrText{t}}}{\frac{\mathOrText{c}-1}{(1+\mathOrText{\varrho})\mathOrText{c}} + \frac{\mathOrText{\varrho}}{\mathOrText{c}} + \deltaR{\mathOrText{t}}}.\] Note that $\mathOrText{I^*} = \mathOrText{\varrho} \mathOrText{R^*}$, since at the equilibrium the rate $\mathOrText{I^*}$ at which vertices recover is balanced by the rate $\mathOrText{\varrho} \mathOrText{R^*}$ at which vertices become deimmunized.
With these equations and the definition of \mathOrText{\alpha} and \mathOrText{\beta}, we get \begin{align*} \driftSIRSCliquePartOne{\mathOrText{\beta}}{\mathOrText{t}} &= -\mathOrText{\alpha} \mathOrText{\varrho} \mathOrText{n'} \frac{\mathOrText{P^*}}{\susceptibleDiscreteShifted{\mathOrText{t}}} \left( 1- \frac{\susceptibleDiscreteShifted{\mathOrText{t}}}{\mathOrText{P^*}} \right)^2 + \mathOrText{\beta} (1-\frac{\mathOrText{Q^*}}{\recoveredDiscreteShifted{\mathOrText{t}}})(\infectedDiscrete{\mathOrText{t}}-\mathOrText{\varrho} \recoveredDiscrete{\mathOrText{t}})\\ &= -\mathOrText{\alpha} \mathOrText{\varrho} \mathOrText{n'} \frac{(\deltaP{\mathOrText{t}} \cdot \mathOrText{c}/(1+\mathOrText{\varrho}))^2}{1+\deltaP{\mathOrText{t}} \cdot \mathOrText{c}/(1+\mathOrText{\varrho})} + \mathOrText{\beta} \frac{\deltaR{\mathOrText{t}}}{\frac{(\mathOrText{c}-1)}{(1+\mathOrText{\varrho})\mathOrText{c}}+ \frac{\mathOrText{\varrho}}{\mathOrText{c}} + \deltaR{\mathOrText{t}}}(\deltaI{\mathOrText{t}} - \mathOrText{\varrho} \deltaR{\mathOrText{t}})\mathOrText{n}\\ &= - (1+\frac{\mathOrText{c}}{1+\mathOrText{\varrho}}) \frac{\deltaP{\mathOrText{t}}^2}{1+\deltaP{\mathOrText{t}} \cdot \mathOrText{c}/(1+\mathOrText{\varrho})} \mathOrText{n} + \mathOrText{\beta} \frac{\deltaR{\mathOrText{t}}}{\frac{(\mathOrText{c}-1)}{(1+\mathOrText{\varrho})\mathOrText{c}} + \frac{\mathOrText{\varrho}}{\mathOrText{c}} +\deltaR{\mathOrText{t}}}(\deltaI{\mathOrText{t}} - \mathOrText{\varrho} \deltaR{\mathOrText{t}})\mathOrText{n}. \end{align*} We aim to simplify this formula by substituting the $\deltaP{\mathOrText{t}}$ and $\deltaR{\mathOrText{t}}$ in the denominators. By definition of \deltaP{\mathOrText{t}}, it holds that \[-\frac{1}{\mathOrText{c}} = -\frac{\mathOrText{S^*}}{\mathOrText{n}} \leq \deltaP{\mathOrText{t}} < 1. \] Therefore, the first term of the previous equation is always non-positive, and we upper bound it by substituting the \deltaP{\mathOrText{t}} in the denominator by its upper bound $1$.
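Concretely, after this substitution the prefactor cancels against the denominator, \begin{align*} - \left(1+\frac{\mathOrText{c}}{1+\mathOrText{\varrho}}\right) \frac{\deltaP{\mathOrText{t}}^2}{1+ 1 \cdot \mathOrText{c}/(1+\mathOrText{\varrho})} \mathOrText{n} = - \deltaP{\mathOrText{t}}^2 \mathOrText{n}, \end{align*} which yields the bound $-\deltaP{\mathOrText{t}}^2 \mathOrText{n}$ for the first term.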
We also split up the second term for easier calculations later. We get \[\driftSIRSCliquePartOne{\mathOrText{\beta}}{\mathOrText{t}} \leq - \deltaP{\mathOrText{t}}^2 \mathOrText{n} + \mathOrText{\beta} \frac{\deltaR{\mathOrText{t}} \cdot \deltaI{\mathOrText{t}}}{\frac{\mathOrText{c}-1}{(1+\mathOrText{\varrho})\mathOrText{c}} + \frac{\mathOrText{\varrho}}{\mathOrText{c}}+ \deltaR{\mathOrText{t}}}\mathOrText{n} - \mathOrText{\beta} \frac{\mathOrText{\varrho} \deltaR{\mathOrText{t}}^2 }{\frac{\mathOrText{c}-1}{(1+\mathOrText{\varrho})\mathOrText{c}} + \frac{\mathOrText{\varrho}}{\mathOrText{c}}+ \deltaR{\mathOrText{t}}}\mathOrText{n}.\] By definition of \deltaR{\mathOrText{t}} it holds \[-\frac{\mathOrText{c} -1}{(1+\mathOrText{\varrho})\mathOrText{c}} = -\frac{\mathOrText{R^*}}{\mathOrText{n}} \leq \deltaR{\mathOrText{t}} < 1. \] Note that the lower bound on \deltaR{\mathOrText{t}} implies that $\frac{\mathOrText{c}-1}{(1+\mathOrText{\varrho})\mathOrText{c}} + \frac{\mathOrText{\varrho}}{\mathOrText{c}}+ \deltaR{\mathOrText{t}}$ is positive. We now make a case distinction depending on the sign of $\deltaR{\mathOrText{t}} \cdot \deltaI{\mathOrText{t}}$, as this sign decides whether we use the lower bound or the upper bound of $\deltaR{\mathOrText{t}}$ to upper bound the second term. For both cases, note that $\deltaP{\mathOrText{t}} + \deltaI{\mathOrText{t}} + \deltaR{\mathOrText{t}} = 0$ as every vertex is always in exactly one of $\susceptibleDiscrete{\mathOrText{t}},\infectedDiscrete{\mathOrText{t}}$ or $\recoveredDiscrete{\mathOrText{t}}$. Together with the constraint $|\deltaP{\mathOrText{t}}| + |\deltaI{\mathOrText{t}}| + |\deltaR{\mathOrText{t}}| \geq \delta$ this means that of these three values, one or two are positive and add up to at least $\frac{\delta}{2}$, and one or two are negative and add up to at most $-\frac{\delta}{2}$: as the three values sum to $0$, the positive and the negative values have the same total absolute value, namely half of $|\deltaP{\mathOrText{t}}| + |\deltaI{\mathOrText{t}}| + |\deltaR{\mathOrText{t}}| \geq \delta$.
\paragraph{Case $\deltaR{\mathOrText{t}} \cdot \deltaI{\mathOrText{t}} < 0$:} The second term of the bound of \driftSIRSCliquePartOne{\mathOrText{\beta}}{\mathOrText{t}} is negative. Therefore we get an upper bound by substituting the \deltaR{\mathOrText{t}} in the denominator by its upper bound $1$. We get \begin{align*} \driftSIRSCliquePartOne{\mathOrText{\beta}}{\mathOrText{t}} &\leq - \deltaP{\mathOrText{t}}^2 \mathOrText{n} + \mathOrText{\beta} \frac{\deltaR{\mathOrText{t}} \cdot \deltaI{\mathOrText{t}}}{\frac{\mathOrText{c} -1}{(1+\mathOrText{\varrho})c} + \frac{\mathOrText{\varrho}}{\mathOrText{c}}+ 1}\mathOrText{n} - \mathOrText{\beta} \frac{\mathOrText{\varrho} \deltaR{\mathOrText{t}}^2 }{\frac{\mathOrText{c}-1}{(1+\mathOrText{\varrho})\mathOrText{c}} + \frac{\mathOrText{\varrho}}{\mathOrText{c}}+ 1}\mathOrText{n}\\ &\leq - \deltaP{\mathOrText{t}}^2 \mathOrText{n} - \mathOrText{\beta} \frac{\mathOrText{\varrho} \deltaR{\mathOrText{t}}^2 }{\frac{\mathOrText{c}-1}{(1+\mathOrText{\varrho})\mathOrText{c}} + \frac{\mathOrText{\varrho}}{\mathOrText{c}}+ 1}\mathOrText{n}. \end{align*} By the pigeonhole principle, one of $|\deltaP{\mathOrText{t}}|$ and $|\deltaR{\mathOrText{t}}|$ has to be at least $\frac{\delta}{4}$ in order to fulfill $\deltaP{\mathOrText{t}} + \deltaI{\mathOrText{t}} + \deltaR{\mathOrText{t}} = 0$ and $|\deltaP{\mathOrText{t}}| + |\deltaI{\mathOrText{t}}| + |\deltaR{\mathOrText{t}}| \geq \delta$. Therefore we get $$\driftSIRSCliquePartOne{\mathOrText{\beta}}{\mathOrText{t}} \leq -\frac{\delta ^2}{16} \min\left(1,\frac{\mathOrText{\beta} \mathOrText{\varrho}}{\frac{(\mathOrText{c}-1)}{(1+\mathOrText{\varrho})\mathOrText{c}} + \frac{\mathOrText{\varrho}}{\mathOrText{c}}+ 1}\right) \cdot \mathOrText{n}.$$ \paragraph{Case $\deltaR{\mathOrText{t}} \cdot \deltaI{\mathOrText{t}} \geq 0$:} The second term of the bound of \driftSIRSCliquePartOne{\mathOrText{\beta}}{\mathOrText{t}} is non-negative. 
Therefore we get an upper bound by substituting the \deltaR{\mathOrText{t}} in the denominator by its lower bound $-\frac{\mathOrText{c} -1}{(1+\mathOrText{\varrho})\mathOrText{c}}$. We get \begin{align*} \driftSIRSCliquePartOne{\mathOrText{\beta}}{\mathOrText{t}} &\leq - \deltaP{\mathOrText{t}}^2 \mathOrText{n} + \mathOrText{\beta} \frac{\deltaR{\mathOrText{t}} \cdot \deltaI{\mathOrText{t}}}{\frac{\mathOrText{c}-1}{(1+\mathOrText{\varrho})\mathOrText{c}} + \frac{\mathOrText{\varrho}}{\mathOrText{c}}+ \deltaR{\mathOrText{t}}}\mathOrText{n} - \mathOrText{\beta} \frac{\mathOrText{\varrho} \deltaR{\mathOrText{t}}^2 }{\frac{\mathOrText{c}-1}{(1+\mathOrText{\varrho})\mathOrText{c}} + \frac{\mathOrText{\varrho}}{\mathOrText{c}}+ \deltaR{\mathOrText{t}}}\mathOrText{n}\\ &\leq - \deltaP{\mathOrText{t}}^2 \mathOrText{n} + \mathOrText{\beta} \frac{\deltaR{\mathOrText{t}} \cdot \deltaI{\mathOrText{t}}}{\frac{\mathOrText{c}-1}{(1+\mathOrText{\varrho})\mathOrText{c}} + \frac{\mathOrText{\varrho}}{\mathOrText{c}} - \frac{\mathOrText{c}-1}{(1+\mathOrText{\varrho})\mathOrText{c}}}\mathOrText{n}\\ &= (- \deltaP{\mathOrText{t}}^2 + \deltaR{\mathOrText{t}} \cdot \deltaI{\mathOrText{t}} ) \mathOrText{n}. \end{align*} Because $\deltaP{\mathOrText{t}} + \deltaI{\mathOrText{t}} + \deltaR{\mathOrText{t}} = 0$, we know that $|\deltaP{\mathOrText{t}}| = |\deltaR{\mathOrText{t}} + \deltaI{\mathOrText{t}}|$, and since \deltaR{\mathOrText{t}} and \deltaI{\mathOrText{t}} have the same sign or one of them is $0$, this equals $|\deltaR{\mathOrText{t}}| + |\deltaI{\mathOrText{t}}|$. Also because $|\deltaP{\mathOrText{t}}| + |\deltaI{\mathOrText{t}}| + |\deltaR{\mathOrText{t}}| = 2(|\deltaI{\mathOrText{t}}| + |\deltaR{\mathOrText{t}}|) \geq \delta$, by the pigeonhole principle one of $|\deltaI{\mathOrText{t}}|$ and $|\deltaR{\mathOrText{t}}|$ has to be at least $\frac{\delta}{4}$.
Using these insights, we get \begin{align*} \driftSIRSCliquePartOne{\mathOrText{\beta}}{\mathOrText{t}} &\leq (- \deltaP{\mathOrText{t}}^2 + \deltaR{\mathOrText{t}} \cdot \deltaI{\mathOrText{t}} ) \mathOrText{n}\\ &= (- (\deltaR{\mathOrText{t}} + \deltaI{\mathOrText{t}})^2 + \deltaR{\mathOrText{t}} \cdot \deltaI{\mathOrText{t}} ) \mathOrText{n}\\ &= (- \deltaR{\mathOrText{t}}^2 - \deltaR{\mathOrText{t}} \cdot \deltaI{\mathOrText{t}} - \deltaI{\mathOrText{t}}^2 ) \mathOrText{n}\\ &\leq -\frac{\delta ^2}{16}\mathOrText{n}. \end{align*} Let $$m= \frac{\delta ^2}{16}\min\left(1,\frac{\mathOrText{\beta} \mathOrText{\varrho}}{\frac{(\mathOrText{c}-1)}{(1+\mathOrText{\varrho})\mathOrText{c}} + \frac{\mathOrText{\varrho}}{\mathOrText{c}}+ 1}\right).$$ We showed that in all cases $\driftSIRSCliquePartOne{\mathOrText{\beta}}{\mathOrText{t}} \leq -m\mathOrText{n}$. For sufficiently large \mathOrText{n} we then get \begin{align*} \rateTotal{\mathOrText{t}} \driftSIRSClique{\mathOrText{\beta}}{\mathOrText{t}} &\leq \driftSIRSCliquePartOne{\mathOrText{\beta}}{\mathOrText{t}} + \driftSIRSCliquePartTwo{\mathOrText{\beta}}{\mathOrText{t}}\\ &\leq -m\mathOrText{n} + b\\ &\leq -\frac{m\mathOrText{n}}{2}. \end{align*} We know that $\rateTotal{\mathOrText{t}} = \mathOrText{\lambda} \infectedDiscrete{\mathOrText{t}}\susceptibleDiscrete{\mathOrText{t}} + \infectedDiscrete{\mathOrText{t}} + \mathOrText{\varrho} \recoveredDiscrete{\mathOrText{t}} \leq \mathOrText{\lambda} \mathOrText{n}^2 + \mathOrText{n} + \mathOrText{\varrho} \mathOrText{n} = (\mathOrText{c} + 1 + \mathOrText{\varrho}) \mathOrText{n}$. 
As also $\rateTotal{\mathOrText{t}} \geq \infectedDiscrete{\mathOrText{t}} \geq \varepsilon \mathOrText{n}>0$, by dividing the inequality for \driftSIRSClique{\mathOrText{\beta}}{\mathOrText{t}} by \rateTotal{\mathOrText{t}} we get \begin{align*} \driftSIRSClique{\mathOrText{\beta}}{\mathOrText{t}} &\leq -\frac{m\mathOrText{n}}{2 \rateTotal{\mathOrText{t}}}\\ &\leq -\frac{m\mathOrText{n}}{2 (1 + \mathOrText{\varrho} + \mathOrText{c}) \mathOrText{n}}\\ &= -\frac{m}{2 (1 + \mathOrText{\varrho} + \mathOrText{c})}. \end{align*} As all of the constants on the right side of that inequality are positive, choosing $d = \frac{m}{2 (1 + \mathOrText{\varrho}+\mathOrText{c})}$ concludes the proof. \end{proof} We aim to apply the negative drift theorem (\Cref{pre:NegativeDrift}) to bound the survival time of the infection. In \Cref{lem:constantDriftSIRS}, we showed a constant negative drift of the potential in a region of the state space. To apply the drift theorem, we first transform the state space restrictions into restrictions on the value of the potential. \begin{lemma}\label{lem:highEpsilonSIRS} Let $G$ be a clique of size $\mathOrText{n} \in \mathOrText{\mathbf{N}}_{>0}$ and let \mathOrText{C} be a contact process in the SIRS model on $G$ with infection rate $\mathOrText{\lambda} = \frac{\mathOrText{c}}{\mathOrText{n}}$ for a constant $\mathOrText{c} \in \mathOrText{\mathbf{R}}_{>1}$ and with constant deimmunization rate \mathOrText{\varrho}. Let $\mathOrText{t} \in \mathOrText{\mathbf{N}}$ and $\varepsilon \in (0,1)$ be constants such that $\infectedDiscrete{\mathOrText{t}} \geq \varepsilon \mathOrText{n}$. It then holds \begin{align*} \potentialSIRSClique{\mathOrText{\beta}}{\mathOrText{t}} &\in \bigO{\mathOrText{n}}. \qedhere \end{align*} \end{lemma} \begin{proof} We aim to upper bound $\potentialSIRSClique{\mathOrText{\beta}}{\mathOrText{t}}$ by writing it as a sum and upper bounding the individual summands. 
To this end, we first bound the terms that appear in the summands. By the definition of our random variables and the fact that there are only \mathOrText{n} vertices that are in any of the states, we get \begin{align*} \max(\susceptibleDiscreteShifted{\mathOrText{t}},\infectedDiscrete{\mathOrText{t}},\recoveredDiscreteShifted{\mathOrText{t}},\mathOrText{P^*},\mathOrText{I^*},\mathOrText{Q^*}) &\leq \mathOrText{n'},\\ \min(\susceptibleDiscreteShifted{\mathOrText{t}},\infectedDiscrete{\mathOrText{t}},\recoveredDiscreteShifted{\mathOrText{t}}) &\geq \min(\varepsilon,\mathOrText{\varrho}/\mathOrText{c})\mathOrText{n}. \end{align*} Applying these bounds to \potentialSIRSClique{\mathOrText{\beta}}{\mathOrText{t}} results in \begin{align*} \potentialSIRSClique{\mathOrText{\beta}}{\mathOrText{t}} &= \mathOrText{\alpha} \left(\lyapunovHelper[\mathOrText{P^*},\susceptibleDiscreteShifted{\mathOrText{t}}]+ \lyapunovHelper[\mathOrText{I^*},\infectedDiscrete{\mathOrText{t}}]\right) + \mathOrText{\beta} \lyapunovHelper[\mathOrText{Q^*},\recoveredDiscreteShifted{\mathOrText{t}}]\\ &= \mathOrText{\alpha} \left( \mathOrText{P^*} \left(\frac{\susceptibleDiscreteShifted{\mathOrText{t}}}{\mathOrText{P^*}} - \ln \frac{\susceptibleDiscreteShifted{\mathOrText{t}}}{\mathOrText{P^*}} -1 \right) + \mathOrText{I^*} \left(\frac{\infectedDiscrete{\mathOrText{t}}}{\mathOrText{I^*}} - \ln \frac{\infectedDiscrete{\mathOrText{t}}}{\mathOrText{I^*}} -1 \right)\right) + \mathOrText{\beta} \mathOrText{Q^*} \left(\frac{\recoveredDiscreteShifted{\mathOrText{t}}}{\mathOrText{Q^*}} -\ln \frac{\recoveredDiscreteShifted{\mathOrText{t}}}{\mathOrText{Q^*}}-1\right)\\ &\leq \mathOrText{\alpha} \left( \susceptibleDiscreteShifted{\mathOrText{t}} + \mathOrText{P^*} \ln \frac{\mathOrText{P^*}}{\susceptibleDiscreteShifted{\mathOrText{t}}} + \infectedDiscrete{\mathOrText{t}} + \mathOrText{I^*} \ln \frac{\mathOrText{I^*}}{\infectedDiscrete{\mathOrText{t}}} \right) + \mathOrText{\beta} 
\left(\recoveredDiscreteShifted{\mathOrText{t}} + \mathOrText{Q^*} \ln \frac{\mathOrText{Q^*}}{\recoveredDiscreteShifted{\mathOrText{t}}}\right)\\ &\leq (2 \mathOrText{\alpha} + \mathOrText{\beta}) \cdot (\mathOrText{n'} + \mathOrText{n'} \ln \frac{\mathOrText{n'}}{\min(\varepsilon,\mathOrText{\varrho}/\mathOrText{c})\mathOrText{n}}). \end{align*} As $\mathOrText{n'} = (1 + \mathOrText{\varrho}/ \mathOrText{c}) \mathOrText{n}$, the calculated upper bound for $\potentialSIRSClique{\mathOrText{\beta}}{\mathOrText{t}}$ is linear in \mathOrText{n} and therefore it holds $\potentialSIRSClique{\mathOrText{\beta}}{\mathOrText{t}} \in \bigO{\mathOrText{n}}$. \end{proof} \begin{lemma}\label{lem:lowEpsilonSIRS} Let $G$ be a clique of size $\mathOrText{n} \in \mathOrText{\mathbf{N}}_{>0}$ and let \mathOrText{C} be a contact process in the SIRS model on $G$ with infection rate $\mathOrText{\lambda} = \frac{\mathOrText{c}}{\mathOrText{n}}$ for a constant $\mathOrText{c} \in \mathOrText{\mathbf{R}}_{>1}$ and with constant deimmunization rate \mathOrText{\varrho}. Let $\mathOrText{t} \in \mathOrText{\mathbf{N}}$ and $\varepsilon \in (0,\mathOrText{I^*} / \mathOrText{n})$ be constants such that $1 \leq \infectedDiscrete{\mathOrText{t}} \leq \varepsilon \mathOrText{n}$. It then holds \begin{align*} \potentialSIRSClique{\mathOrText{\beta}}{\mathOrText{t}} &\geq \mathOrText{\alpha} \mathOrText{I^*} \left( \ln \frac{1}{\varepsilon} + \ln \frac{\mathOrText{I^*}}{\mathOrText{n}} - 1\right). \qedhere \end{align*} \end{lemma} \begin{proof} We aim to lower bound $\potentialSIRSClique{\mathOrText{\beta}}{\mathOrText{t}}$ by lower bounding the $\lyapunovHelper$ values in its definition. Recall that for a given $x^* \in \mathOrText{\mathbf{R}}_{>0}$, the function $\lyapunovHelper[x^*,x]$ is minimized for $x=x^*$, which is the only local extreme value for $x \in \mathOrText{\mathbf{R}}_{>0}$. 
Therefore, we get for all $x,x^* \in \mathOrText{\mathbf{R}}_{>0}$ $$\lyapunovHelper[x^*,x] \geq \lyapunovHelper[x^*,x^*] = 0.$$ Using $1 \leq \infectedDiscrete{\mathOrText{t}} \leq \varepsilon \mathOrText{n}$ and the fact that for all $x^* \in \mathOrText{\mathbf{R}}_{>0}$, $\lyapunovHelper[x^*,x]$ is monotonically decreasing in $x$ for $x<x^*$, we now get \begin{align*} \potentialSIRSClique{\mathOrText{\beta}}{\mathOrText{t}} &= \mathOrText{\alpha} \left(\lyapunovHelper[\mathOrText{P^*},\susceptibleDiscreteShifted{\mathOrText{t}}]+ \lyapunovHelper[\mathOrText{I^*},\infectedDiscrete{\mathOrText{t}}]\right) + \mathOrText{\beta} \lyapunovHelper[\mathOrText{Q^*},\recoveredDiscreteShifted{\mathOrText{t}}]\\ &\geq 0+\mathOrText{\alpha} \lyapunovHelper[\mathOrText{I^*},\varepsilon \mathOrText{n}]+0\\ &\geq \mathOrText{\alpha} \mathOrText{I^*} \left( \frac{\varepsilon \mathOrText{n}}{\mathOrText{I^*}} - \ln \frac{\varepsilon \mathOrText{n}}{\mathOrText{I^*}} - 1\right) \\ &\geq \mathOrText{\alpha} \mathOrText{I^*} \left( \ln \frac{1}{\varepsilon} + \ln \frac{\mathOrText{I^*}}{\mathOrText{n}} - 1\right). \qedhere \end{align*} \iffalse Note that for a given $x^* \in \mathOrText{\mathbf{R}}_{>0}$, the function $\lyapunovHelper[x^*,x]$ is minimized for $x=x^*$, which is the only local extreme value for $x \in \mathOrText{\mathbf{R}}_{>0}$. Therefore, we get for all $x,x^* \in \mathOrText{\mathbf{R}}_{>0}$ $$\lyapunovHelper[x^*,x] - \lyapunovHelper[x^*,x^*] \geq 0.$$ Now let $X= \min(\susceptibleDiscreteShifted{\mathOrText{t}},\infectedDiscrete{\mathOrText{t}},\recoveredDiscrete{\mathOrText{t}})$ and let $X^*$ be the corresponding equilibrium value.
Using the properties of $\lyapunovHelper$ and $1 \leq X \leq \mathOrText{\beta} \mathOrText{n} \leq X^*/2$, we get \begin{align*} \potentialSIRSClique{\mathOrText{\beta}}{\mathOrText{t}} - \lyapunovFunction{\mathOrText{\beta}}[\mathOrText{P^*},\mathOrText{I^*},\mathOrText{R^*}] &\geq 0 + 0 + \min(\mathOrText{\alpha},\mathOrText{\beta})\left( \lyapunovHelper[X^*,X] - \lyapunovHelper[X^*,X^*]\right)\\ &\geq \min(\mathOrText{\alpha},\mathOrText{\beta}) \left(\lyapunovHelper[X^*,X^*/2] - X^*\right)\\ &= \min(\mathOrText{\alpha},\mathOrText{\beta}) \left(X^*/2 - X^*\ln(1/2) - X^*\right)\\ &\geq \min(\mathOrText{\alpha},\mathOrText{\beta}) \cdot 0.1 X^*\\ &\geq \min(\mathOrText{\alpha},\mathOrText{\beta}) \cdot 0.2 \mathOrText{\beta} \mathOrText{n}. \qedhere \end{align*} \fi \end{proof} \begin{lemma}\label{lem:constamtStepSIRS} Let $G$ be a clique of size $\mathOrText{n} \in \mathOrText{\mathbf{N}}_{>0}$ and let \mathOrText{C} be a contact process in the SIRS model on $G$ with infection rate $\mathOrText{\lambda} = \frac{\mathOrText{c}}{\mathOrText{n}}$ for a constant $\mathOrText{c} \in \mathOrText{\mathbf{R}}_{>1}$ and with constant deimmunization rate \mathOrText{\varrho}. Let $\mathOrText{t} \in \mathOrText{\mathbf{N}}$ and $\varepsilon \in (0,\mathOrText{\varrho}/\mathOrText{c})$ be constants. Assume that $\infectedDiscrete{\mathOrText{t}} \geq \varepsilon \mathOrText{n}$. Further let $\Delta P, \Delta I, \Delta Q \in \{-1,0,1\}$. Then for sufficiently large \mathOrText{n} holds \begin{align*} |\lyapunovFunction{\mathOrText{\beta}}[\susceptibleDiscreteShifted{\mathOrText{t}}+\Delta P,\infectedDiscrete{\mathOrText{t}}+\Delta I,\recoveredDiscreteShifted{\mathOrText{t}}+\Delta Q] - \lyapunovFunction{\mathOrText{\beta}}[\susceptibleDiscreteShifted{\mathOrText{t}},\infectedDiscrete{\mathOrText{t}},\recoveredDiscreteShifted{\mathOrText{t}}]| &\leq (2 \mathOrText{\alpha} + \mathOrText{\beta})(1+2 (1+ \mathOrText{\varrho}/\mathOrText{c})\varepsilon^{-1}). 
\qedhere \end{align*} \end{lemma} \begin{proof} We aim to use the triangle inequality to upper bound the absolute change in the $\lyapunovFunction{\mathOrText{\beta}}$-values by the sum of the absolute changes in the $\lyapunovHelper$-values. We use that for all $x \in \mathOrText{\mathbf{R}}_{>1}$ holds $$\frac{1}{x+1} < \ln\left(\frac{x+1}{x}\right) < \frac{1}{x}.$$ For all $x, x^* \in \mathOrText{\mathbf{R}}_{>2}$ and $\Delta x \in \{-1,0,1\}$ holds \begin{align*} |\lyapunovHelper[x^*,x+\Delta x]- \lyapunovHelper[x^*,x]| &= \left|x^* \left( \frac{x+\Delta x}{x^*} - \ln \frac{x+\Delta x}{x^*}-1\right) - x^* \left( \frac{x}{x^*} - \ln \frac{x}{x^*}-1\right)\right|\\ &= \left| \Delta x - x^* \ln\left(\frac{x + \Delta x}{x} \right)\right|\\ &\leq |\Delta x| + \left| x^* \ln\left(\frac{x + \Delta x}{x} \right)\right|\\ &\leq 1 + \frac{x^*}{x-1}. \end{align*} We now apply this inequality to upper bound the absolute change in potential. Note that for sufficiently large \mathOrText{n}, we get that $\min(\susceptibleDiscreteShifted{\mathOrText{t}}-1,\infectedDiscrete{\mathOrText{t}}-1,\recoveredDiscreteShifted{\mathOrText{t}}-1) \geq \varepsilon \mathOrText{n}/2$. 
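As a quick numerical sanity check of the two elementary bounds just stated (illustrative only, not part of the proof; the helper function `h` below mirrors the summand $x^* ( x/x^* - \ln(x/x^*) - 1)$ used in the potential, and all names are ours):

```python
import math

def h(x_star, x):
    # mirrors the Lyapunov summand x*(x/x* - ln(x/x*) - 1) from the proof
    return x_star * (x / x_star - math.log(x / x_star) - 1)

# logarithm inequality: 1/(x+1) < ln((x+1)/x) < 1/x for x > 1
for x in [1.5, 2.0, 10.0, 1000.0]:
    assert 1 / (x + 1) < math.log((x + 1) / x) < 1 / x

# single-step bound: |h(x*, x + dx) - h(x*, x)| <= 1 + x*/(x - 1) for x, x* > 2
for x_star in [3.0, 50.0, 1000.0]:
    for x in [3.0, 10.0, 500.0]:
        for dx in (-1, 0, 1):
            assert abs(h(x_star, x + dx) - h(x_star, x)) <= 1 + x_star / (x - 1)
```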
We get \begin{align*} &|\lyapunovFunction{\mathOrText{\beta}}[\susceptibleDiscreteShifted{\mathOrText{t}}+\Delta P,\infectedDiscrete{\mathOrText{t}}+\Delta I,\recoveredDiscreteShifted{\mathOrText{t}}+\Delta Q] - \lyapunovFunction{\mathOrText{\beta}}[\susceptibleDiscreteShifted{\mathOrText{t}},\infectedDiscrete{\mathOrText{t}},\recoveredDiscreteShifted{\mathOrText{t}}]|\\ &= \left| \mathOrText{\alpha} \lyapunovHelper[\mathOrText{P^*},\susceptibleDiscreteShifted{\mathOrText{t}}+\Delta P] + \mathOrText{\alpha} \lyapunovHelper[\mathOrText{I^*}, \infectedDiscrete{\mathOrText{t}}+\Delta I]+ \mathOrText{\beta} \lyapunovHelper[\mathOrText{Q^*},\recoveredDiscreteShifted{\mathOrText{t}}+\Delta Q]- \mathOrText{\alpha} \lyapunovHelper[\mathOrText{P^*},\susceptibleDiscreteShifted{\mathOrText{t}}] - \mathOrText{\alpha} \lyapunovHelper[\mathOrText{I^*}, \infectedDiscrete{\mathOrText{t}}]- \mathOrText{\beta} \lyapunovHelper[\mathOrText{Q^*},\recoveredDiscreteShifted{\mathOrText{t}}]\right|\\ &\leq \left| \mathOrText{\alpha} \lyapunovHelper[\mathOrText{P^*},\susceptibleDiscreteShifted{\mathOrText{t}}+\Delta P] - \mathOrText{\alpha} \lyapunovHelper[\mathOrText{P^*},\susceptibleDiscreteShifted{\mathOrText{t}}] \right| + \left|\mathOrText{\alpha} \lyapunovHelper[\mathOrText{I^*}, \infectedDiscrete{\mathOrText{t}}+\Delta I] - \mathOrText{\alpha} \lyapunovHelper[\mathOrText{I^*}, \infectedDiscrete{\mathOrText{t}}] \right| + \left|\mathOrText{\beta} \lyapunovHelper[\mathOrText{Q^*},\recoveredDiscreteShifted{\mathOrText{t}}+\Delta Q] - \mathOrText{\beta} \lyapunovHelper[\mathOrText{Q^*},\recoveredDiscreteShifted{\mathOrText{t}}]\right|\\ &\leq \mathOrText{\alpha}\left(1+ \frac{\mathOrText{P^*}}{\susceptibleDiscreteShifted{\mathOrText{t}}-1}\right) + \mathOrText{\alpha}\left(1 + \frac{\mathOrText{I^*}}{\infectedDiscrete{\mathOrText{t}}-1}\right) + \mathOrText{\beta}\left(1 + \frac{\mathOrText{Q^*}}{\recoveredDiscreteShifted{\mathOrText{t}}-1}\right)\\ &\leq (2 \mathOrText{\alpha} + 
\mathOrText{\beta})(1+ \frac{\mathOrText{n'}}{\varepsilon \mathOrText{n}/2})\\ &\leq (2 \mathOrText{\alpha} + \mathOrText{\beta})(1+2 (1+ \mathOrText{\varrho}/\mathOrText{c})\varepsilon^{-1}).\qedhere \end{align*} \end{proof} \begin{lemma}\label{lem:longSurvivalCliqueSIRS} Let $G$ be a clique of size $\mathOrText{n} \in \mathOrText{\mathbf{N}}_{>0}$ and let \mathOrText{C} be a contact process in the SIRS model on $G$ with infection rate $\mathOrText{\lambda} = \frac{\mathOrText{c}}{\mathOrText{n}}$ for a constant $\mathOrText{c} \in \mathOrText{\mathbf{R}}_{>1}$ and with constant deimmunization rate \mathOrText{\varrho}. Let $\varepsilon_0 \in (0,1)$ be a constant and let $E_{\varepsilon_0}$ be the event that there exists a $\mathOrText{t} \in \mathOrText{\mathbf{N}}$ such that $\infectedDiscrete{\mathOrText{t}} \geq \varepsilon_0 \mathOrText{n}$. Then for the survival time $T$ of \mathOrText{C} holds that $\E{T}[E_{\varepsilon_0}] = 2^{ \Omega(\mathOrText{n})}$. \end{lemma} \begin{proof} We assume that $E_{\varepsilon_0}$ happens. Let $(\filtrationContinuous{\mathOrText{t}})_{\mathOrText{t} \in \mathOrText{\mathbf{R}}_{\geq 0}}$ be the natural filtration of \mathOrText{C} and let $\mathOrText{t} \in \mathOrText{\mathbf{N}}$ such that $\infectedDiscrete{\mathOrText{t}} \geq \varepsilon_0 \mathOrText{n}$. We aim to apply the negative drift theorem (\Cref{pre:NegativeDrift}) to get the desired bound. To this end, we define a stopping time that is dominated by the number of steps until $T$, and we use the previous lemmas to show that all of the conditions for the drift theorem are satisfied. Note that we shift the time to start at $\mathOrText{t}$ instead of $0$. We then translate the bound on the number of steps into a bound on the survival time. 
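For intuition about the dynamics analyzed in this proof, the process can be sketched as a simulation of its embedded jump chain. The following is illustrative only and not part of the argument; it assumes the SIRS transition rates used throughout (healing at rate $1$, deimmunization at rate \mathOrText{\varrho}, infection at rate $\mathOrText{\lambda} = \mathOrText{c}/\mathOrText{n}$ per infected-susceptible pair), and all function and parameter names are ours:

```python
import random

def sirs_jump_chain(n=200, c=2.0, rho=1.0, steps=5000, seed=1):
    """Simulate the embedded jump chain of an SIRS contact process on a
    clique: infected vertices heal at rate 1, recovered vertices lose
    immunity at rate rho, and infections occur at total rate
    lambda * I * S with lambda = c / n."""
    rng = random.Random(seed)
    lam = c / n
    I, R = n // 2, 0          # start with half the clique infected
    S = n - I - R
    for _ in range(steps):
        if I == 0:            # infection died out
            break
        r_heal, r_deimm, r_inf = I, rho * R, lam * I * S
        u = rng.random() * (r_heal + r_deimm + r_inf)
        if u < r_heal:                  # an infected vertex recovers
            I, R = I - 1, R + 1
        elif u < r_heal + r_deimm:      # a recovered vertex loses immunity
            R, S = R - 1, S + 1
        else:                           # a susceptible vertex is infected
            S, I = S - 1, I + 1
        assert S + I + R == n           # population invariant
    return S, I, R

S, I, R = sirs_jump_chain()
assert S + I + R == 200
```

For $\mathOrText{c} > 1$ one typically observes the infected count fluctuating around the equilibrium for the whole run, matching the exponential survival shown here.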
As $\infectedDiscrete{\mathOrText{t}} \geq \varepsilon_0 \mathOrText{n}$, by \Cref{lem:highEpsilonSIRS} there exists a constant $a \in \mathOrText{\mathbf{R}}_{>0}$ such that $\potentialSIRSClique{\mathOrText{\beta}}{\mathOrText{t}} \leq a \mathOrText{n}$. We define $T_1 = \inf\{i \in \mathOrText{\mathbf{N}}_{\geq t} \mid \potentialSIRSClique{\mathOrText{\beta}}{i} > 2a\mathOrText{n} \}$. We first show that for all $i \in \mathOrText{\mathbf{N}}$ with $\mathOrText{t} \leq i < T_1$ holds that $\infectedDiscrete{i}$ is large enough such that \Cref{lem:constamtStepSIRS} is applicable. Let $\varepsilon_1 \in (0,\mathOrText{I^*}/\mathOrText{n})$ be a constant low enough such that $\mathOrText{\alpha} \frac{\mathOrText{I^*}}{\mathOrText{n}} \left( \ln \frac{1}{\varepsilon_1} + \ln \frac{\mathOrText{I^*}}{\mathOrText{n}} - 1\right) > 2a$. Such an $\varepsilon_1$ exists as $\mathOrText{\alpha}$, $\frac{\mathOrText{I^*}}{\mathOrText{n}}$ and $a$ are positive constants. Then by the contraposition of \Cref{lem:lowEpsilonSIRS} for all $i \in \mathOrText{\mathbf{N}}$, $\potentialSIRSClique{\mathOrText{\beta}}{i} \leq 2a \mathOrText{n}$ implies that $\infectedDiscrete{i} \geq \varepsilon_1 \mathOrText{n}$. To show that condition 2 of \Cref{pre:NegativeDrift} is satisfied, let $s = (2 \mathOrText{\alpha} + \mathOrText{\beta})(1+2 (1+ \mathOrText{\varrho}/\mathOrText{c})\varepsilon_1^{-1})$. For all $i \in \mathOrText{\mathbf{N}}$ with $\mathOrText{t} \leq i < T_1$ holds $\potentialSIRSClique{\mathOrText{\beta}}{i} \leq 2a \mathOrText{n}$ and therefore $\infectedDiscrete{i} \geq \varepsilon_1 \mathOrText{n}$. Hence by \Cref{lem:constamtStepSIRS}, for all $i \in \mathOrText{\mathbf{N}}_{\geq \mathOrText{t}}$ holds that $|\potentialSIRSClique{\mathOrText{\beta}}{i+1} - \potentialSIRSClique{\mathOrText{\beta}}{i}| \cdot \indicator{i<T_1}\leq s \cdot \indicator{i<T_1}$. 
Therefore for all $i \in \mathOrText{\mathbf{N}}_{\geq \mathOrText{t}}$ and $j \in \mathOrText{\mathbf{R}}_{>0}$ holds that $\Pr{|\potentialSIRSClique{\mathOrText{\beta}}{i+1} - \potentialSIRSClique{\mathOrText{\beta}}{i}|\geq j}[\filtrationDiscrete{i}] \cdot \indicator{i<T_1} \leq \frac{2^s}{2^j} \cdot \indicator{i<T_1}$. Note that $2^s$ is a constant. We now show that condition 1 is satisfied as well. Let $\delta \in (0,1)$ such that $\delta s \leq a$. Also, for all time steps $i\in \mathOrText{\mathbf{N}}$, we define \deltaP{i}, \deltaI{i} and \deltaR{i} such that $$\susceptibleDiscreteShifted{i} = \mathOrText{P^*} + \deltaP{i} \cdot \mathOrText{n} , \infectedDiscrete{i} = \mathOrText{I^*} + \deltaI{i} \cdot \mathOrText{n} ,\recoveredDiscrete{i} = \mathOrText{R^*} + \deltaR{i} \cdot \mathOrText{n}.$$ Recall that $\lyapunovFunction{\mathOrText{\beta}}[\mathOrText{P^*},\mathOrText{I^*},\mathOrText{Q^*}] =0$. Let $i \in \mathOrText{\mathbf{N}}$. By \Cref{lem:constamtStepSIRS} if $\infectedDiscrete{i} \geq \varepsilon_1 \mathOrText{n}$, $|\deltaP{i}| + |\deltaI{i}| + |\deltaR{i}| \leq \frac{1}{\mathOrText{n}}$ implies $\potentialSIRSClique{\mathOrText{\beta}}{i} \leq s$. By induction on $k \in \mathOrText{\mathbf{N}}$ we get that if $\infectedDiscrete{i} \geq \varepsilon_1 \mathOrText{n}$, $|\deltaP{i}| + |\deltaI{i}| + |\deltaR{i}| \leq \frac{k}{\mathOrText{n}}$ implies $\potentialSIRSClique{\mathOrText{\beta}}{i} \leq ks$. By choosing $k= \delta \mathOrText{n}$, the contraposition of the previous statement yields that if $\infectedDiscrete{i} \geq \varepsilon_1 \mathOrText{n}$, $\potentialSIRSClique{\mathOrText{\beta}}{i} > a \mathOrText{n} \geq \delta s \mathOrText{n}$ implies $|\deltaP{i}| + |\deltaI{i}| + |\deltaR{i}| > \delta$.
Hence, for all $i \in \mathOrText{\mathbf{N}}$ with $a \mathOrText{n}< \potentialSIRSClique{\mathOrText{\beta}}{i}< 2a \mathOrText{n}$, the conditions for \Cref{lem:constantDriftSIRS} are fulfilled and we get that there exists a constant $d \in \mathOrText{\mathbf{R}}_{>0}$ such that for all $i \in \mathOrText{\mathbf{N}}$ holds that $\E{\potentialSIRSClique{\mathOrText{\beta}}{i+1} - \potentialSIRSClique{\mathOrText{\beta}}{i}}[\filtrationDiscrete{i}] \cdot \indicator{a \mathOrText{n}< \potentialSIRSClique{\mathOrText{\beta}}{i}< 2a \mathOrText{n}} \leq -d \cdot \indicator{a \mathOrText{n}< \potentialSIRSClique{\mathOrText{\beta}}{i}< 2a \mathOrText{n}}$. Now all of the conditions of \Cref{pre:NegativeDrift} are satisfied and we get that there exists a constant $c^* \in \mathOrText{\mathbf{R}}_{>0}$ such that $$\Pr{T_1 - \mathOrText{t} \leq 2^{c^* a \mathOrText{n} / 2^s}}[\filtrationDiscrete{\mathOrText{t}}] \cdot \indicator{\potentialSIRSClique{\mathOrText{\beta}}{\mathOrText{t}} \leq a \mathOrText{n}} = 2^{-\Omega(a \mathOrText{n} / 2^s)} \cdot \indicator{\potentialSIRSClique{\mathOrText{\beta}}{\mathOrText{t}} \leq a \mathOrText{n}}.$$ Note that this probability goes towards 0 as \mathOrText{n} goes towards infinity. Hence, $\E{T_1}[\filtrationDiscrete{\mathOrText{t}}] \cdot \indicator{\potentialSIRSClique{\mathOrText{\beta}}{\mathOrText{t}} \leq a \mathOrText{n}}= 2^{ \Omega(\mathOrText{n})} \cdot \indicator{\potentialSIRSClique{\mathOrText{\beta}}{\mathOrText{t}} \leq a \mathOrText{n}}$. Remember that $\infectedDiscrete{\mathOrText{t}} \geq \varepsilon_0 \mathOrText{n}$ implies $\potentialSIRSClique{\mathOrText{\beta}}{\mathOrText{t}} \leq a \mathOrText{n}$. We therefore get $\E{T_1}[\filtrationDiscrete{\mathOrText{t}}] \cdot \indicator{\infectedDiscrete{\mathOrText{t}} \geq \varepsilon_0 \mathOrText{n}}= 2^{ \Omega(\mathOrText{n})} \cdot \indicator{\infectedDiscrete{\mathOrText{t}} \geq \varepsilon_0 \mathOrText{n}}$. 
We showed that for all $i \in \mathOrText{\mathbf{N}}$ with $\mathOrText{t} \leq i < T_1$ holds $\infectedDiscrete{i} \geq \varepsilon_1 \mathOrText{n} >0$. Therefore, $T$ dominates $\timeContinuous{T_1}$. In $C$ there are \mathOrText{n} healing clocks with a rate of 1 each and $\mathOrText{n}(\mathOrText{n}-1)$ infection clocks with a rate of $\lambda = \frac{\mathOrText{c}}{\mathOrText{n}}$ each. Therefore, all of the clocks trigger at a rate of at most $(\mathOrText{c}+1) \mathOrText{n}$ and the expected time for each trigger is at least the inverse of that. By Wald's equation, we get that \begin{align*} \E{T}[\filtrationContinuous{0}] \cdot \indicator{E_{\varepsilon_0}} &\geq \E{\timeContinuous{T_1}}[\filtrationContinuous{0}] \cdot \indicator{E_{\varepsilon_0}}\\ &\geq \frac{1}{(\mathOrText{c}+1) \mathOrText{n}}2^{ \Omega(\mathOrText{n})} \cdot \indicator{E_{\varepsilon_0}}. \qedhere \end{align*} \end{proof} We now prove the main theorem. \SIRSClique \begin{proof} For all $\varepsilon \in (0,1)$, let $E_\varepsilon$ be the event that there exists a $\mathOrText{t} \in \mathOrText{\mathbf{N}}$ such that $\infectedDiscrete{\mathOrText{t}} \geq \varepsilon \mathOrText{n}$. By \Cref{lem:farFromEdgeSIRS}, there exists an $\varepsilon \in \mathOrText{\mathbf{R}}_{>0}$ such that for sufficiently large \mathOrText{n} holds $\Pr{E_\varepsilon} \geq \frac{1}{\mathOrText{n} +2}$. By \Cref{lem:longSurvivalCliqueSIRS} it holds that $\E{T}[E_\varepsilon]=2^{ \Omega(\mathOrText{n})}$. Using the law of total expectation, we get \begin{align*} \E{T} &= \Pr{E_\varepsilon} \E{T}[E_\varepsilon] + \Pr{\overline{E_\varepsilon}} \E{T}[\overline{E_\varepsilon}]\\ &\geq \Pr{E_\varepsilon} \E{T}[E_\varepsilon]\\ &\geq \frac{1}{\mathOrText{n} +2} 2^{ \Omega(\mathOrText{n})}\\ &= 2^{ \Omega(\mathOrText{n})}.
\qedhere \end{align*} \end{proof} \subsection{The SIS Contact Process}\label{sec:SISCliqueBasic} Consider an SIS contact process \mathOrText{C} on a clique $G$ with \mathOrText{n} vertices with an infection rate \mathOrText{\lambda}. For all $\mathOrText{t} \in \mathOrText{\mathbf{N}}$, one of the following two events occurs at step $\mathOrText{t}+1$ ($\tau_{t+1}$): either a susceptible vertex is infected, which we call \eventInfection{\mathOrText{t}}; or an infected vertex recovers in the event \eventSusceptibleSIS{\mathOrText{t}}. At the time point \timeContinuous{\mathOrText{t}}, vertices heal at a rate of $\rateSusceptibleSIS{\mathOrText{t}}= \infectedDiscrete{\mathOrText{t}}$ and new vertices get infected at a rate of $\rateInfection{\mathOrText{t}} = \mathOrText{\lambda} \infectedDiscrete{\mathOrText{t}} \susceptibleDiscrete{\mathOrText{t}}$. Let $\rateTotal{\mathOrText{t}} = \rateSusceptibleSIS{\mathOrText{t}} + \rateInfection{\mathOrText{t}} = \infectedDiscrete{\mathOrText{t}} + \mathOrText{\lambda} \infectedDiscrete{\mathOrText{t}} \susceptibleDiscrete{\mathOrText{t}}$. While $\infectedDiscrete{\mathOrText{t}} > 0$, we get \begin{align*} \probabilitySusceptibleSIS{\mathOrText{t}} &= \Pr{\eventSusceptibleSIS{\mathOrText{t}}} = \frac{\rateSusceptibleSIS{\mathOrText{t}}}{\rateTotal{\mathOrText{t}}} = \frac{\infectedDiscrete{\mathOrText{t}}}{\infectedDiscrete{\mathOrText{t}} + \mathOrText{\lambda} \infectedDiscrete{\mathOrText{t}} \susceptibleDiscrete{\mathOrText{t}}}\\ \probabilityInfection{\mathOrText{t}} &= \Pr{\eventInfection{\mathOrText{t}}} = \frac{\rateInfection{\mathOrText{t}}}{\rateTotal{\mathOrText{t}}} = \frac{\mathOrText{\lambda} \infectedDiscrete{\mathOrText{t}} \susceptibleDiscrete{\mathOrText{t}}}{\infectedDiscrete{\mathOrText{t}} + \mathOrText{\lambda} \infectedDiscrete{\mathOrText{t}} \susceptibleDiscrete{\mathOrText{t}}}. 
\end{align*} In the proofs, we consider infection rates of the form $\mathOrText{\lambda} = \frac{1}{\mathOrText{n} - \mathOrText{\alpha}}$ for some $\mathOrText{\alpha} \in (0, \mathOrText{n})$. As $\susceptibleDiscrete{\mathOrText{t}} = \mathOrText{n} - \infectedDiscrete{\mathOrText{t}}$, by expanding the fractions by $(\mathOrText{n} - \mathOrText{\alpha})/\infectedDiscrete{\mathOrText{t}}$, we get \begin{align*} \probabilitySusceptibleSIS{\mathOrText{t}} &= \frac{\mathOrText{n} - \mathOrText{\alpha}}{2 \mathOrText{n} - \mathOrText{\alpha} - \infectedDiscrete{\mathOrText{t}}}\\ \probabilityInfection{\mathOrText{t}} &= \frac{\mathOrText{n} - \infectedDiscrete{\mathOrText{t}}}{2 \mathOrText{n} - \mathOrText{\alpha} - \infectedDiscrete{\mathOrText{t}}}. \end{align*} Note that these two probabilities are exactly the same for $\infectedDiscrete{\mathOrText{t}} = \mathOrText{\alpha}$. Therefore, we call $\mathOrText{\alpha}$ the equilibrium value for the contact process. Note that we assume in the proofs that $\mathOrText{\alpha}$ is a natural number, such that there is a state with exactly $\mathOrText{\alpha}$ infected vertices. As the expected survival time increases monotonically with \mathOrText{\alpha}, the results extend to real values of \mathOrText{\alpha} as well. \subsection{The Process Above the Equilibrium}\label{sec:SISCliqueEqui} To show that $\infectedDiscrete{\mathOrText{t}}$ quickly drops back to the equilibrium value after getting higher than it, we use the following theorem, which is an application of a more general theorem of \cite{ganesh2005effect} to the clique. \begin{corollary}[\cite{ganesh2005effect} Theorem 3.1]\label{thm:cliqueSISFastDieOut} Let $G$ be a clique with $\mathOrText{n} \in \mathOrText{\mathbf{N}}_{>0}$ vertices and let \mathOrText{C} be a contact process in the SIS model on $G$ with infection rate \mathOrText{\lambda}. Let $T \in \mathOrText{\mathbf{N}}$ be the first step at which no vertex is in the infected state.
Let $\mathOrText{\lambda} < \frac{1}{\mathOrText{n} - 1}$. Then \begin{align*} \E{\timeContinuous{T}} &\leq \frac{\ln(\mathOrText{n}) + 1}{1- (\mathOrText{n}-1)\mathOrText{\lambda}}.\qedhere \end{align*} \end{corollary} We now apply this theorem. \begin{lemma}\label{lem:cliqueSISAboveEquilibrium} Let $G$ be a clique with $\mathOrText{n} \in \mathOrText{\mathbf{N}}_{>0}$ vertices and let \mathOrText{C} be a contact process in the SIS model on $G$ with infection rate $\mathOrText{\lambda} = \frac{1}{\mathOrText{n} - \mathOrText{\alpha}}$ for some $\mathOrText{\alpha} \in \mathOrText{\mathbf{N}}_{<\mathOrText{n}}$. Let $t_0$ be a time step with $\infectedDiscrete{t_0} > \mathOrText{\alpha}$ and let $T \in \mathOrText{\mathbf{N}}$ be the first time step after $t_0$ with $\infectedDiscrete{T} = \mathOrText{\alpha}$. Then \begin{align*} \E{\timeContinuous{T} - \timeContinuous{t_0}} &\leq (\ln(\mathOrText{n} - \mathOrText{\alpha})+1) \cdot(\mathOrText{n} - \mathOrText{\alpha}).\qedhere \end{align*} \end{lemma} \begin{proof} We aim to apply \Cref{thm:cliqueSISFastDieOut}. To this end, we consider the random process $(\infectedDiscreteShifted{\mathOrText{t}})_{\mathOrText{t} \in \mathOrText{\mathbf{N}}}$ with $\infectedDiscreteShifted{\mathOrText{t}} = \infectedDiscrete{\mathOrText{t}} - \mathOrText{\alpha}$ for all $\mathOrText{t} \in \mathOrText{\mathbf{N}}$. Also, let $\mathOrText{n}' = \mathOrText{n} - \mathOrText{\alpha}$. For each $\mathOrText{t} \in \mathOrText{\mathbf{N}}$, \mathOrText{X} increases by one in the next step with probability \probabilityInfection{\mathOrText{t}} and decreases by one with probability \probabilitySusceptibleSIS{\mathOrText{t}}.
While the infection has not died out yet, we get \begin{align*} \probabilitySusceptibleSIS{\mathOrText{t}} &= \frac{\mathOrText{n} - \mathOrText{\alpha}}{2 \mathOrText{n} - \mathOrText{\alpha} - \infectedDiscrete{\mathOrText{t}}} = \frac{\mathOrText{n}'}{2 \mathOrText{n}' - \infectedDiscreteShifted{\mathOrText{t}}}\\ \probabilityInfection{\mathOrText{t}} &= \frac{\mathOrText{n} - \infectedDiscrete{\mathOrText{t}}}{2 \mathOrText{n} - \mathOrText{\alpha} - \infectedDiscrete{\mathOrText{t}}} = \frac{\mathOrText{n}' - \infectedDiscreteShifted{\mathOrText{t}}}{2 \mathOrText{n}' - \infectedDiscreteShifted{\mathOrText{t}}}. \end{align*} Now consider an SIS contact process $\mathOrText{C}'$ on a clique $G'$ with $\mathOrText{n}'$ vertices that has an infection rate of $\mathOrText{\lambda}' = \frac{1}{\mathOrText{n}'}$ and starts with $\infectedDiscrete{t_0}- \mathOrText{\alpha}$ infected vertices. Note that if for $\mathOrText{t}, \mathOrText{t}' \in \mathOrText{\mathbf{N}}$ holds $\infectedDiscreteShifted{\mathOrText{t}} = \infectedDiscretePrime{\mathOrText{t}'}' > 0$ then $\infectedDiscreteShifted{\mathOrText{t}+1} - \infectedDiscreteShifted{\mathOrText{t}}$ and $\infectedDiscretePrime{\mathOrText{t}'+1}' - \infectedDiscretePrime{\mathOrText{t}'}'$ are identically distributed, as the transition probabilities coincide. Further note that for these steps, $\timeContinuous{\mathOrText{t}+1} - \timeContinuous{\mathOrText{t}}$ follows an exponential distribution with a higher rate than that of $\timeContinuousPrime{\mathOrText{t}'+1} - \timeContinuousPrime{\mathOrText{t}'}$, as $\mathOrText{C}$ has \mathOrText{\alpha} additional vertices, all of which are infected, so strictly more healing clocks are active.
Hence, \mathOrText{C} and $\mathOrText{C}'$ can be coupled in a way such that for all $\mathOrText{t} \in \mathOrText{\mathbf{N}}$ with $t_0 \leq \mathOrText{t} \leq T$ holds that $\infectedDiscreteShifted{\mathOrText{t}} = \infectedDiscretePrime{\mathOrText{t}-t_0}'$ and $\timeContinuous{\mathOrText{t}}- \timeContinuous{t_0} \leq \timeContinuousPrime{\mathOrText{t}-t_0}$. As for all $\mathOrText{t} \in \mathOrText{\mathbf{N}}$ with $t_0 \leq \mathOrText{t} \leq T$ holds that $\infectedDiscreteShifted{\mathOrText{t}} = \infectedDiscretePrime{\mathOrText{t}-t_0}'$, by the definition of $T$, $T-t_0$ is the first step in which there is no infected vertex in $\mathOrText{C}'$. Therefore \Cref{thm:cliqueSISFastDieOut} is applicable and we get \begin{align*} \E{\timeContinuous{T}- \timeContinuous{t_0}} &\leq \E{\timeContinuousPrime{T-t_0}}\\ &\leq \frac{\ln(\mathOrText{n}')+1}{1-(\mathOrText{n}'-1)\frac{1}{\mathOrText{n}'}}\\ &= (\ln(\mathOrText{n}')+1) \mathOrText{n}'.\qedhere \end{align*} \end{proof} \subsection{Upper Bound for the Epidemic Threshold}\label{sec:SISCliqueUpper} To give an upper bound for the epidemic threshold, we show that the considered process \mathOrText{C} has a super-polynomial expected survival time if for some $\varepsilon \in (0,1)$, $\mathOrText{\alpha} \geq \mathOrText{n}^{1/2 + \varepsilon}$. To this end, we show that the process gets to a state with at least $\mathOrText{\alpha}/2$ infected vertices with sufficient probability and that it is very unlikely from that point on that the infection dies out fast. \begin{lemma}\label{lem:cliqueSISUpperProbability} Let $G$ be a clique with $\mathOrText{n} \in \mathOrText{\mathbf{N}}_{>0}$ vertices and let \mathOrText{C} be a contact process in the SIS model on $G$ with infection rate $\mathOrText{\lambda} = \frac{1}{\mathOrText{n} - \mathOrText{\alpha}}$ for some $\mathOrText{\alpha} \in \mathOrText{\mathbf{N}}_{<\mathOrText{n}/2}$.
Further, let there be a constant $\varepsilon \in (0,1/2)$ such that $\mathOrText{\alpha} \geq \mathOrText{n}^{1/2 + \varepsilon}$. Let $t_0$ be a time step with $\infectedDiscrete{t_0} = \mathOrText{\alpha}/2 - 1$ and let $T$ be the first step after $t_0$ with $\infectedDiscrete{T}= 0$ or $\infectedDiscrete{T}= \mathOrText{\alpha}/2$. Then \begin{align*} \Pr{\infectedDiscrete{T}=0} &\leq \frac{1}{1.5^{\frac{\mathOrText{n}^\varepsilon}{2}}-1}.\qedhere \end{align*} \end{lemma} \begin{proof} For all $\mathOrText{t}$ with $t_0 \leq \mathOrText{t} < T$ holds $0< \infectedDiscrete{\mathOrText{t}} < \mathOrText{\alpha}/2$ and therefore $\probabilitySusceptibleSIS{\mathOrText{t}} = \frac{\mathOrText{n} - \mathOrText{\alpha}}{2 \mathOrText{n} - \mathOrText{\alpha} - \infectedDiscrete{\mathOrText{t}}} \leq \frac{\mathOrText{n} - \mathOrText{\alpha}}{2 \mathOrText{n} -\frac{3}{2} \mathOrText{\alpha}}$. As this bound holds independently for all steps and because each step changes the number of infected vertices by 1 in either direction, in the time interval $[\timeContinuous{t_0}, \timeContinuous{T})$ the discrete version of the process dominates a gambler's ruin instance $(P_t)_{t \in \mathOrText{\mathbf{N}}}$ with a probability of $p=\frac{\mathOrText{n} - \frac{1}{2}\mathOrText{\alpha}}{2 \mathOrText{n} -\frac{3}{2} \mathOrText{\alpha}}$ to win. Let $T'$ be the first time that $P$ either reaches the lower bound $0$ or the upper bound $\mathOrText{\alpha}/2$ when starting at $P_0= \mathOrText{\alpha}/2 -1$. Then $\Pr{\infectedDiscrete{T} = 0} \leq \Pr{P_{T'}=0}$ because of the domination. We bound $\Pr{P_{T'}=0}$ with \Cref{pre:gamblersRuin}. Note that $\frac{p}{1-p} = \frac{\mathOrText{n} - \frac{1}{2}\mathOrText{\alpha}}{\mathOrText{n} - \mathOrText{\alpha}} = 1 + \frac{\mathOrText{\alpha}}{2(\mathOrText{n} - \mathOrText{\alpha})}$.
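As a numerical cross-check of the gambler's ruin probability in the form it is applied here and in \Cref{lem:cliqueSISLowerProbability} (illustrative, not part of the proof): starting at $k$ with absorbing boundaries $0$ and $N$ and win probability $p \neq 1/2$, the ruin probability is $(1-r^{N-k})/(1-r^{N})$ with $r = p/(1-p)$. The sketch below, with names of our choosing, compares this closed form against a direct fixed-point computation of the absorption probabilities:

```python
def ruin_prob_formula(p, k, N):
    # closed form (1 - r^(N-k)) / (1 - r^N), r = p/(1-p), p != 1/2,
    # for the probability of hitting 0 before N when starting at k
    r = p / (1 - p)
    return (1 - r ** (N - k)) / (1 - r ** N)

def ruin_prob_iterative(p, k, N, iters=20000):
    # fixed-point iteration for the absorption probabilities at 0;
    # probs[0] = 1 (ruined), probs[N] = 0 (won)
    probs = [1.0] + [0.0] * N
    for _ in range(iters):
        probs = [probs[0]] + [
            (1 - p) * probs[i - 1] + p * probs[i + 1] for i in range(1, N)
        ] + [probs[N]]
    return probs[k]

# the two computations agree, e.g. for a biased walk with p = 0.55
assert abs(ruin_prob_formula(0.55, 9, 10) - ruin_prob_iterative(0.55, 9, 10)) < 1e-9
```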
We also use the bounds for \mathOrText{\alpha} and the fact that for all $x \in \mathOrText{\mathbf{R}}_{\geq 1}$ and $y \in \mathOrText{\mathbf{R}}$ with $|y| \leq x$ holds $(1 + \frac{y}{x})^x \geq 1+ y$ to get that \begin{align*} \Pr{\infectedDiscrete{T}=0} &\leq \Pr{P_{T'}=0}\\ &= \frac{1 - (1 + \frac{\mathOrText{\alpha}}{2(\mathOrText{n} - \mathOrText{\alpha})})^1}{1 - (1 + \frac{\mathOrText{\alpha}}{2(\mathOrText{n} - \mathOrText{\alpha})})^{\frac{\mathOrText{\alpha}}{2}}}\\ &= \frac{1 + \frac{\mathOrText{\alpha}}{2(\mathOrText{n} - \mathOrText{\alpha})} - 1}{(1 + \frac{\mathOrText{\alpha}}{2(\mathOrText{n} - \mathOrText{\alpha})})^{\frac{\mathOrText{\alpha}}{2}}-1}\\ &\leq \frac{1}{(1 + \frac{\mathOrText{\alpha}}{2\mathOrText{n} })^{\frac{\mathOrText{\alpha}}{2}}-1}\\ &\leq \frac{1}{(1 + \frac{\mathOrText{n}^{-1/2 + \varepsilon}}{2})^{\frac{\mathOrText{n}^{1/2 + \varepsilon}}{2}}-1}\\ &\leq \frac{1}{\left((1 + \frac{\mathOrText{n}^{-1/2}}{2})^{\mathOrText{n}^{1/2}}\right)^{\frac{\mathOrText{n}^{\varepsilon}}{2}}-1}\\ &\leq \frac{1}{(1 + 1/2)^{\frac{\mathOrText{n}^\varepsilon}{2}}-1}. \qedhere \end{align*} \end{proof} We use the previous lemma to get an upper bound for the epidemic threshold on cliques. \SIScliqueUpperBound \begin{proof} We aim to apply \Cref{lem:cliqueSISUpperProbability} to lower bound the survival time after reaching a state with $\mathOrText{\alpha}/2$ infected vertices. To get a bound on the overall survival time we first lower bound the probability to reach such a state. Let $T'$ be the first step with $\infectedDiscrete{T'} = 0$ or $\infectedDiscrete{T'}= \mathOrText{\alpha}/2$. For all time steps $\mathOrText{t} \in \mathOrText{\mathbf{N}}$ with $\infectedDiscrete{\mathOrText{t}} \leq \mathOrText{\alpha}$ holds $\probabilityInfection{\mathOrText{t}} = \frac{\mathOrText{n} - \infectedDiscrete{\mathOrText{t}}}{2 \mathOrText{n} - \mathOrText{\alpha} - \infectedDiscrete{\mathOrText{t}}} \geq \frac{1}{2}$. 
Hence until $T'$, the discrete version of $\infectedDiscrete{\mathOrText{t}}$ dominates an unbiased gambler's ruin instance. Therefore, we get $\Pr{\infectedDiscrete{T'}= \mathOrText{\alpha}/2} \geq \frac{2}{\mathOrText{\alpha}}$. Let $A$ be the random variable that counts the number of time steps $\mathOrText{t} \in \mathOrText{\mathbf{N}}$ that exist with $\infectedDiscrete{\mathOrText{t}}= \mathOrText{\alpha}/2$. Assume that $\infectedDiscrete{T'}= \mathOrText{\alpha}/2$. As each step changes the number of infected vertices by 1, by \Cref{lem:cliqueSISUpperProbability} $A$ dominates a random variable $B \sim \text{Geom}\left((1.5^{\frac{\mathOrText{n}^\varepsilon}{2}}-1)^{-1}\right)$. Returning to $\mathOrText{\alpha}/2$ infected vertices $A$ times takes at least $A$ steps. In $C$, the infected vertices heal at a total rate of $\infectedDiscrete{\mathOrText{t}} \leq \mathOrText{n}$ and new infections occur at a total rate of $\mathOrText{\lambda} \infectedDiscrete{\mathOrText{t}} \susceptibleDiscrete{\mathOrText{t}} \leq \frac{2}{\mathOrText{n}} \cdot \frac{\mathOrText{n}^2}{4} = \frac{\mathOrText{n}}{2}$, as $\mathOrText{\lambda} \leq \frac{2}{\mathOrText{n}}$. Therefore, the process transitions at a rate of at most $\frac{3}{2} \mathOrText{n}$ and the expected time for each transition is at least the inverse of that. Applying these insights to the expected survival time together with the law of total expectation results in \begin{align*} \E{T} &\geq \Pr{\infectedDiscrete{T'}= \mathOrText{\alpha}/2}\cdot \E{T}[\infectedDiscrete{T'}= \mathOrText{\alpha}/2]\\ &\geq \frac{2}{\mathOrText{\alpha}} \cdot \frac{2}{3 \mathOrText{n}}\E{A}[\infectedDiscrete{T'}= \mathOrText{\alpha}/2]\\ &\geq \frac{4}{\mathOrText{n}} \cdot \frac{2}{3 \mathOrText{n}}(1.5^{\frac{\mathOrText{n}^\varepsilon}{2}}-1). \qedhere \end{align*} \end{proof} \subsection{Lower Bound for the Epidemic Threshold}\label{sec:SISCliqueLower} To give a lower bound for the epidemic threshold, we show that the considered process \mathOrText{C} has a polynomial expected survival time if $\mathOrText{\alpha} \leq \mathOrText{n}^{1/2}$.
To this end, we use \Cref{lem:cliqueSISAboveEquilibrium} to bound the time spent above \mathOrText{\alpha} infected vertices and we lower bound the probability of dying out when below \mathOrText{\alpha}. The proof of the probability bound below \mathOrText{\alpha} is very similar to the proof of \Cref{lem:cliqueSISUpperProbability}. \begin{lemma}\label{lem:cliqueSISLowerProbability} Let $G$ be a clique with $\mathOrText{n} \in \mathOrText{\mathbf{N}}_{>0}$ vertices and let \mathOrText{C} be a contact process in the SIS model on $G$ with infection rate $\mathOrText{\lambda} = \frac{1}{\mathOrText{n} - \mathOrText{\alpha}}$ for some $\mathOrText{\alpha} \in \mathOrText{\mathbf{N}}_{>0}$ with $\mathOrText{\alpha} \leq \mathOrText{n}^{1/2}$. Let $t_0\in \mathOrText{\mathbf{N}}$ be a time step with $\infectedDiscrete{t_0} = \mathOrText{\alpha}$ and let $T\in \mathOrText{\mathbf{N}}_{>t_0}$ be the first step after $t_0$ with $\infectedDiscrete{T}= 0$ or $\infectedDiscrete{T}= \mathOrText{\alpha}+ 1$. Then \begin{align*} &\Pr{\infectedDiscrete{T}=0} \geq \frac{1}{\mathOrText{n} \cdot(2\eulerE^2-1)}\\ \text{and }& \E{\timeContinuous{T} - \timeContinuous{t_0}} \leq 4( \mathOrText{n} - \mathOrText{\alpha} + 1) .\qedhere \end{align*} \end{lemma} \begin{proof} We first bound the probability for the infection to die out. For all $\mathOrText{t} \in \mathOrText{\mathbf{N}}$ with $t_0 \leq \mathOrText{t} < T$, we have $0< \infectedDiscrete{\mathOrText{t}}$ and therefore $\probabilitySusceptibleSIS{\mathOrText{t}} = \frac{\mathOrText{n} - \mathOrText{\alpha}}{2 \mathOrText{n} - \mathOrText{\alpha} - \infectedDiscrete{\mathOrText{t}}} \geq \frac{\mathOrText{n} - \mathOrText{\alpha}}{2 \mathOrText{n} - \mathOrText{\alpha}}$.
As this bound holds independently for all steps and because each step changes the number of infected vertices by 1 in either direction, in the time interval $[\timeContinuous{t_0}, \timeContinuous{T})$ the discrete version of the process is dominated by a gambler's ruin instance $(P_t)_{t \in \mathOrText{\mathbf{N}}}$ with a probability of $p=\frac{\mathOrText{n}}{2 \mathOrText{n} - \mathOrText{\alpha}}$ to win. Let $T'$ be the first time that $P$ either reaches the lower bound $0$ or the upper bound $\mathOrText{\alpha}+1$ when starting at $P_0= \mathOrText{\alpha}$. Then $\Pr{\infectedDiscrete{T} = 0} \geq \Pr{P_{T'}=0}$ because of the dominance. We bound $\Pr{P_{T'}=0}$ with \Cref{pre:gamblersRuin}. Note that $\frac{p}{1-p} = \frac{\mathOrText{n}}{\mathOrText{n} - \mathOrText{\alpha}} = 1 + \frac{\mathOrText{\alpha}}{\mathOrText{n} - \mathOrText{\alpha}}$. We get \begin{align*} \Pr{\infectedDiscrete{T}=0} &\geq \Pr{P_{T'}=0}\\ &= \frac{1 - (1 + \frac{\mathOrText{\alpha}}{\mathOrText{n} - \mathOrText{\alpha}})^1}{1 - (1 + \frac{\mathOrText{\alpha}}{\mathOrText{n} - \mathOrText{\alpha}})^{\mathOrText{\alpha} +1}}\\ &= \frac{1 + \frac{\mathOrText{\alpha}}{\mathOrText{n} - \mathOrText{\alpha}} - 1}{(1 + \frac{\mathOrText{\alpha}}{\mathOrText{n} - \mathOrText{\alpha}})^{\mathOrText{\alpha}+1}-1}\\ &\geq \frac{1}{\mathOrText{n} \cdot((1 + \frac{2\mathOrText{\alpha}}{\mathOrText{n}})^{\mathOrText{\alpha}+1}-1)}\\ &\geq \frac{1}{\mathOrText{n} \cdot(2(1 + \frac{2\mathOrText{\alpha}}{\mathOrText{n}})^{\mathOrText{\alpha}}-1)}\\ &\geq \frac{1}{\mathOrText{n} \cdot(2(1 + \frac{2}{\sqrt{\mathOrText{n}}})^{\sqrt{\mathOrText{n}}}-1)}\\ &\geq \frac{1}{\mathOrText{n} \cdot(2\eulerE^2-1)}. \end{align*} To bound $\E{\timeContinuous{T} - \timeContinuous{t_0}}$, we bound the number of times that we reach a state with exactly $\mathOrText{\alpha}$ infected vertices and the time between those states.
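The die-out probability bound just derived can be checked numerically. The sketch below is our own illustration; it uses the extreme case $\mathOrText{\alpha} = \lfloor\sqrt{\mathOrText{n}}\rfloor$ allowed by the lemma and verifies that the closed-form ruin probability stays above the final bound $\frac{1}{\mathOrText{n}(2\eulerE^2-1)}$.

```python
import math

def die_out_lower_bound_holds(n):
    """Check (r-1)/(r^(a+1)-1) >= 1/(n(2e^2-1)) for r = 1 + a/(n-a), a = floor(sqrt(n))."""
    a = math.isqrt(n)
    r = 1 + a / (n - a)
    exact = (r - 1) / (r ** (a + 1) - 1)   # closed-form ruin expression from the proof
    bound = 1 / (n * (2 * math.e ** 2 - 1))
    return exact >= bound

assert all(die_out_lower_bound_holds(n) for n in (10**2, 10**4, 10**6))
```

As with any spot check, this only probes a few values of $\mathOrText{n}$; the inequality chain in the proof covers the general case.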
Let $S$ be the random variable that counts the number of time steps $\mathOrText{t} \in \mathOrText{\mathbf{N}}$ with $t_0 \leq \mathOrText{t} < T$ and $\infectedDiscrete{\mathOrText{t}} = \mathOrText{\alpha}$. For all $i \in \mathOrText{\mathbf{N}}_{\leq S+1}$, let $X_i$ be the $i$-th time step from $t_0$ to $T$ at which the number of infected vertices is either $0$, $\mathOrText{\alpha}$ or $\mathOrText{\alpha}+1$. It then holds that $\timeContinuous{T} - \timeContinuous{t_0} = \timeContinuous{X_{S+1}} - \timeContinuous{X_1} = \sum_{i=1}^{S}{\timeContinuous{X_{i+1}}-\timeContinuous{X_i}}$. We aim to bound the expectation of that value using the generalized Wald's equation (\Cref{pre:wald}). We first bound $S$. Let $\mathOrText{t} \in \mathOrText{\mathbf{N}}$ with $t_0 \leq \mathOrText{t} < T$ and $\infectedDiscrete{\mathOrText{t}} = \mathOrText{\alpha}$. Then $\probabilityInfection{\mathOrText{t}} = \frac{\mathOrText{n} - \infectedDiscrete{\mathOrText{t}}}{2 \mathOrText{n} - \mathOrText{\alpha} - \infectedDiscrete{\mathOrText{t}}} = \frac{1}{2}$. Hence, with a probability of $\frac{1}{2}$, it holds $\infectedDiscrete{\mathOrText{t}+1} = \mathOrText{\alpha} +1$ which implies that $T= \mathOrText{t} +1$ and that $t$ is the last step before $T$ with $\mathOrText{\alpha}$ infected vertices. Therefore, $S$ is dominated by a geometrically distributed random variable $A \sim \text{Geom}(\frac{1}{2})$. Let $(\filtrationContinuous{\mathOrText{t}})_{\mathOrText{t} \in \mathOrText{\mathbf{N}}}$ be the natural filtration of \mathOrText{C}. Let $\mathOrText{t} \in \mathOrText{\mathbf{N}}$ with $1 \leq \mathOrText{t} \leq S$. To bound $\E{\timeContinuous{X_{\mathOrText{t}+1}} - \timeContinuous{X_\mathOrText{t}}}[\filtrationContinuous{\timeContinuous{X_\mathOrText{t}}}]$, we first bound $\E{X_{\mathOrText{t}+1} - X_{\mathOrText{t}}}[\filtrationContinuous{\timeContinuous{X_\mathOrText{t}}}]$. We know that $\infectedDiscrete{X_{\mathOrText{t}}} = \mathOrText{\alpha}$. 
As the number of infected vertices changes by exactly 1 in each step, there are two possibilities for $\infectedDiscrete{X_{\mathOrText{t}}+1}$. Let $E$ be the event that $\infectedDiscrete{X_{\mathOrText{t}}+1} = \mathOrText{\alpha} - 1$. If $E$ does not occur, it holds $\infectedDiscrete{X_{\mathOrText{t}}+1} = \mathOrText{\alpha}+1$, which implies $X_{\mathOrText{t}+1} = X_{\mathOrText{t}}+1$, and therefore $X_{\mathOrText{t}+1} - X_{\mathOrText{t}}=1$. We now bound $X_{\mathOrText{t}+1} - X_{\mathOrText{t}}$ in the case that $E$ happens using the additive drift theorem (\Cref{pre:additiveDrift}). We define the random process $(Y_i)_{i \in \mathOrText{\mathbf{N}}}$ with $Y_i = \indicator{\infectedDiscrete{i+X_{\mathOrText{t}}+1} \neq 0} \cdot (\mathOrText{\alpha} - \infectedDiscrete{i+X_{\mathOrText{t}}+1})$ for all $i \in \mathOrText{\mathbf{N}}$ and the stopping time $T' = \inf(i \in \mathOrText{\mathbf{N}} \mid Y_i \leq 0)$. Note that this process is basically shifted in time to start at $X_{\mathOrText{t}}+1$ and that the process reaches 0 when the number of infected vertices at the shifted time point is either $0$ or $\mathOrText{\alpha}$. Therefore, $T' = X_{\mathOrText{t}+1} - X_{\mathOrText{t}} -1$. Let $(\filtrationContinuous{i}')_{i \in \mathOrText{\mathbf{N}}}$ with $\filtrationContinuous{i}' = \filtrationDiscrete{i + X_\mathOrText{t} + 1}$ for all $i \in \mathOrText{\mathbf{N}}$ be the natural filtration of $Y$. We now show that the two conditions of the additive drift theorem are satisfied. We start with the first condition. As we assume that $\infectedDiscrete{X_{\mathOrText{t}}+1} = \mathOrText{\alpha}-1$ and because we stop when we reach $\mathOrText{\alpha}$ infected vertices, the term $\mathOrText{\alpha} - \infectedDiscrete{i+X_{\mathOrText{t}}+1}$ is non-negative until $T'$ for all $i \in \mathOrText{\mathbf{N}}_{\leq T'}$ which implies for all $i \in \mathOrText{\mathbf{N}}$ that $Y_i \cdot \indicator{i \leq T'} \geq 0$. 
Now let $i \in \mathOrText{\mathbf{N}}$ and $i' = i + X_\mathOrText{t}+1$. To bound the drift for condition 2, we use that, by definition of $Y$, it holds that $(Y_{i+1}-Y_i) \cdot \indicator{i<T'} \leq (\infectedDiscrete{i'} - \infectedDiscrete{i'+1}) \cdot \indicator{i<T'}$. Note that $\infectedDiscrete{i'}\leq \mathOrText{\alpha}-1$. We get the following inequality. Note that we omit the multiplicative $\indicator{i < T'}$ in all of the terms for better readability. \begin{align*} \E{Y_i - Y_{i+1}}[\filtrationContinuous{i}'] &\geq \E{\infectedDiscrete{i'+1} - \infectedDiscrete{i'}}[\filtrationContinuous{i}']\\ &= \probabilityInfection{i'} - \probabilitySusceptibleSIS{i'}\\ &= (1 - \probabilitySusceptibleSIS{i'}) - \probabilitySusceptibleSIS{i'}\\ &= 1 - 2 \frac{\mathOrText{n} - \mathOrText{\alpha}}{2 \mathOrText{n} - \mathOrText{\alpha} - \infectedDiscrete{i'}}\\ &\geq 1 - 2 \frac{\mathOrText{n} - \mathOrText{\alpha}}{2 \mathOrText{n} - 2\mathOrText{\alpha} + 1}\\ &= \frac{1}{2 \mathOrText{n} - 2\mathOrText{\alpha} + 1}. \end{align*} Now all of the conditions of \Cref{pre:additiveDrift} are satisfied and as $Y_0 = 1$, we get \begin{align*} \E{T'}[\filtrationContinuous{0}'] \cdot \indicator{E} &\leq (2 \mathOrText{n} - 2\mathOrText{\alpha} + 1) \cdot \indicator{E}. \end{align*} As $X_{\mathOrText{t}+1} - X_{\mathOrText{t}}$ is, depending on $E$, either equal to $1$ or $T' +1$, we get \begin{align*} \E{X_{\mathOrText{t}+1} - X_{\mathOrText{t}}}[\filtrationContinuous{X_\mathOrText{t}}] &\leq \E{1 \cdot \indicator{\overline{E}} + (T'+1) \cdot \indicator{E} }[\filtrationContinuous{X_\mathOrText{t}}]\\ &\leq 2 \mathOrText{n} - 2\mathOrText{\alpha} + 2. \end{align*} As long as the infection has not died out yet, the state of \mathOrText{C} changes at a rate of at least 1.
Therefore, the expected time between two states is at most 1 and we get \begin{align*} \E{\timeContinuous{X_{\mathOrText{t}+1}} -\timeContinuous{X_{\mathOrText{t}}}}[\filtrationDiscrete{X_\mathOrText{t}}] &\leq \E{X_{\mathOrText{t}+1} - X_{\mathOrText{t}}}[\filtrationDiscrete{X_\mathOrText{t}}]\\ &\leq 2 \mathOrText{n} - 2\mathOrText{\alpha} + 2. \end{align*} Since, for all $\mathOrText{t} \in \mathOrText{\mathbf{N}}$, both $S$ and $\E{\timeContinuous{X_{\mathOrText{t}+1}} -\timeContinuous{X_{\mathOrText{t}}}}[\filtrationDiscrete{X_\mathOrText{t}}]$ are upper bounded by values independent of $\mathOrText{t}$, the sum $\sum_{i=1}^{S}{\timeContinuous{X_{i+1}}-\timeContinuous{X_i}}$ is integrable, and by \Cref{pre:wald} we get \begin{align*} \E{\timeContinuous{T} - \timeContinuous{t_0}}[\filtrationDiscrete{t_0}]&= \E{\sum_{i=1}^{S}{\timeContinuous{X_{i+1}}-\timeContinuous{X_i}}}[\filtrationDiscrete{X_1}]\\ &= \E{\sum_{i=1}^{S}{\E{\timeContinuous{X_{i+1}}-\timeContinuous{X_i}}[\filtrationDiscrete{X_i}]}}[\filtrationDiscrete{X_1}]\\ &\leq \E{\sum_{i=1}^{S}{2 \mathOrText{n} - 2\mathOrText{\alpha} + 2}}[\filtrationDiscrete{X_1}]\\ &= 2( \mathOrText{n} - \mathOrText{\alpha} + 1) \E{\sum_{i=1}^{S}{1}}[\filtrationDiscrete{X_1}]\\ &\leq 4( \mathOrText{n} - \mathOrText{\alpha} + 1). \qedhere \end{align*} \end{proof} We now use \Cref{lem:cliqueSISAboveEquilibrium} and \Cref{lem:cliqueSISLowerProbability} to give a polynomial extinction threshold. \SIScliqueLowerBound \begin{proof} Let $(\filtrationContinuous{\mathOrText{t}})_{\mathOrText{t} \in \mathOrText{\mathbf{R}}_{\geq 0}}$ be the natural filtration of \mathOrText{C}. First note that adding more vertices to the set of initially infected vertices monotonically increases $T$. To upper bound $\E{T}[\filtrationContinuous{0}]$, we assume that $\infectedDiscrete{0} = \mathOrText{n}$, as this yields the largest survival time of the infection.
We use \Cref{lem:cliqueSISAboveEquilibrium} and \Cref{lem:cliqueSISLowerProbability} to upper bound the number of times that the number of infected vertices drops to \mathOrText{\alpha} and the expected time between these times. With the generalized Wald's equation (\Cref{pre:wald}), that upper bounds $\E{T}[\filtrationContinuous{0}]$. Let $S$ be the random variable that counts the number of times at which the number of infected vertices in \mathOrText{C} drops from $\mathOrText{\alpha} +1$ to \mathOrText{\alpha}. For all $i \in \mathOrText{\mathbf{N}}_{\leq S+1}$, let $X_i$ be the $i$-th time at which the number of infected vertices drops to either \mathOrText{\alpha} or 0 (We define $X_0 = 0$). Note that the number of infected vertices can only drop to 0 once as the infection cannot leave the state with no infected vertices anymore. By the definition of $S$, the infection dies out at $X_{S+1}$. We get that $T = X_{S+1} - X_0 = \sum_{i=0}^{S}{X_{i+1}-X_{i}}$. By \Cref{lem:cliqueSISLowerProbability} after each time that the number of infected vertices drops to \mathOrText{\alpha}, there is a probability of at least $\frac{1}{\mathOrText{n} \cdot(2\eulerE^2-1)}$ that the infection dies out before the number of infected vertices drops to \mathOrText{\alpha} again. Therefore, $S$ is dominated by a geometrically distributed random variable $A \sim \text{Geom}(\frac{1}{\mathOrText{n} \cdot(2\eulerE^2-1)})$. Let $(\filtrationContinuous{\mathOrText{t}}')_{\mathOrText{t} \in \mathOrText{\mathbf{N}}}$ with $\filtrationContinuous{t}' = \filtrationDiscrete{X_t}$ for all $\mathOrText{t} \in \mathOrText{\mathbf{N}}$ be the natural filtration of $X$. Further, let $i \in \mathOrText{\mathbf{N}}_{\leq S}$. We aim to upper bound $\E{X_{i+1}-X_i}[\filtrationContinuous{i}']$. Let $Y_i \in \mathOrText{\mathbf{R}}$ be the first time after $X_i$ with $\infectedContinuous{Y_i}= 0$ or $\infectedContinuous{Y_i} > \mathOrText{\alpha}$. Note that $X_i \leq Y_i \leq X_{i+1}$. 
If $i=0$, it holds $Y_i - X_i =0$. Otherwise \Cref{lem:cliqueSISLowerProbability} is applicable and we get $\E{Y_i -X_i}[\filtrationContinuous{i}'] \leq 4( \mathOrText{n} - \mathOrText{\alpha} + 1)$. If $\infectedContinuous{Y_i}= 0$ then $X_{i+1}- Y_i=0$. Otherwise \Cref{lem:cliqueSISAboveEquilibrium} is applicable and we get $\E{X_{i+1} -Y_i}[\filtrationContinuous{i}'] \leq (\ln(\mathOrText{n} - \mathOrText{\alpha})+1) \cdot(\mathOrText{n} - \mathOrText{\alpha})$. We therefore have \begin{align*} \E{X_{i+1}-X_i}[\filtrationContinuous{i}'] &= \E{X_{i+1}-Y_i}[\filtrationContinuous{i}'] + \E{Y_i-X_i}[\filtrationContinuous{i}']\\ &\leq 4( \mathOrText{n} - \mathOrText{\alpha} + 1) + (\ln(\mathOrText{n} - \mathOrText{\alpha})+1) \cdot(\mathOrText{n} - \mathOrText{\alpha})\\ &\leq (\ln(\mathOrText{n} - \mathOrText{\alpha})+5) \cdot(\mathOrText{n} - \mathOrText{\alpha}+1). \end{align*} As we now have an upper bound for $S$ and for all $i \in \mathOrText{\mathbf{N}}_{\leq S}$ an upper bound for $\E{X_{i+1}-X_i}[\filtrationContinuous{i}']$, $\sum_{i=0}^{S}{X_{i+1}-X_{i}}$ is integrable and by \Cref{pre:wald} we get \begin{align*} \E{T}[\filtrationContinuous{0}] &= \E{\sum_{i=0}^{S}{X_{i+1}-X_{i}}}[\filtrationContinuous{0}]\\ &= \E{\sum_{i=0}^{S}{\E{X_{i+1}-X_{i}}[\filtrationContinuous{i}']}}[\filtrationContinuous{0}]\\ &\leq \E{\sum_{i=0}^{S}{(\ln(\mathOrText{n} - \mathOrText{\alpha})+5) \cdot(\mathOrText{n} - \mathOrText{\alpha}+1)}}[\filtrationContinuous{0}]\\ &= (\ln(\mathOrText{n} - \mathOrText{\alpha})+5) \cdot(\mathOrText{n} - \mathOrText{\alpha}+1) \E{\sum_{i=0}^{S}{1}}[\filtrationContinuous{0}]\\ &\leq (\ln(\mathOrText{n} - \mathOrText{\alpha})+5) \cdot(\mathOrText{n} - \mathOrText{\alpha}+1)(\mathOrText{n} \cdot(2\eulerE^2-1)+1). 
\qedhere \end{align*} \end{proof} \section{Introduction} \input{content/introduction} \section{Preliminaries} \input{content/preliminaries} \section{SIRS Star} \input{content/sirs_star} \section{SIRS Clique} \input{content/sirs_clique} \section{SIS Clique} \input{content/sis_clique} \section*{Acknowledgments} Andreas Göbel was funded by the project PAGES (project No. 467516565) of the German Research Foundation (DFG). This project has received funding from the European Union's Horizon 2020 research and innovation program under the Marie Skłodowska-Curie grant agreement No. 945298-ParisRegionFP. This research was partially funded by the HPI Research School on Data Science and Engineering. \printbibliography \end{document}
\section{I. Introduction} Lorentz invariance (LI) is one of the foundations of modern physics. It is meaningful to test the fate of LI both theoretically and experimentally. Kostelecky and Samuel \cite{KosteleckyS001} demonstrated that LI could be broken spontaneously in string theory. The spontaneous Lorentz breaking involves the expectation values of Lorentz vectors and tensors in the particle Lagrangian, which leads to the framework of the standard model extension (SME) \cite{SME}. Coleman and Glashow \cite{ColemanGlashowLIV} proposed a perturbative framework to investigate possible departures from LI, in which spacetime translations and spatial rotations remain invariant while the Lorentz boosts acquire small departures. In a different approach, Cohen and Glashow \cite{VSR} suggested that the symmetry group of nature is isomorphic to the spacetime translation group plus a proper subgroup of the Lorentz group, which is referred to as the theory of very special relativity (VSR). In addition, the Lorentz transformations are deformed in doubly special relativity (DSR) \cite{DSR} by Planck-scale effects of quantum gravity. Recently, a possible signal of Lorentz invariance violation (LIV) was reported by the OPERA collaboration: muon neutrinos appear to behave superluminally \cite{OPERA2011}. The muon neutrinos were produced at CERN and arrived at the Gran Sasso Laboratory earlier than expected from Einstein's special relativity. To study the energy dependence of the neutrino superluminality, the data of the OPERA neutrino experiment were split into two groups. The speed is reported as \(1+\left(2.18\pm0.77\pm0.30\right)\times10^{-5}\) for neutrinos with energy below \(20~\rm{GeV}\) (mean energy \(13.9~\rm{GeV}\)), and as \(1+\left(2.75\pm0.75\pm0.30\right)\times10^{-5}\) for neutrinos with energy above \(20~\rm{GeV}\) (mean energy \(42.9~\rm{GeV}\)).
Throughout the paper, we use natural units in which \(c=1\). Previous neutrino experiments and observations also provided evidence for, or constraints on, superluminal behaviors \cite{MINOS2007,Fermilab1979,SN1987A}. Soon after the OPERA report, Cohen and Glashow \cite{CohenG01} pointed out that, in the framework of standard quantum field theory, superluminal neutrinos would rapidly lose their energy via the Cherenkov-like process (\(\nu\longrightarrow\nu+e^{-}+e^{+}\)) during their long propagation from CERN to the Gran Sasso Laboratory. The number of superluminal muon neutrinos detected by the OPERA detector would then be strongly suppressed. The OPERA detector would not receive neutrinos with energy above \(12.5~\rm{GeV}\), which contradicts the results of the OPERA experiment. Bi {\it et al.} \cite{BiETAL01} presented a similar argument. Their arguments are in the context of LIV with a preferred frame; energy-momentum conservation is preserved only in the preferred frame \cite{VillanteV001,Amelino-CameliaETAL01}. Furthermore, Li {\it et al.} \cite{LiLMWZ001} pointed out that the Cherenkov-like process is unavoidable even in a trivial frame without an effective ``rest frame''. However, Amelino-Camelia {\it et al.} \cite{Amelino-CameliaETAL01} revealed that the Cherenkov-like process is forbidden if the principle of relativity is preserved and the energy-momentum conservation is amended. In addition, the ICARUS collaboration \cite{AntonelloETAL001} reported that no Cherenkov-like events were observed for the superluminal neutrinos. Superluminality of particles is strictly forbidden in Einstein's special relativity: the speed of light is the upper limit for all particles unless LIV is invoked.
To account for the data of the neutrino superluminality, modified dispersion relations are considered phenomenologically \begin{equation} \label{Dispersion relations} \eta^{\mu\nu}p_{\mu}p_{\nu}=m^{2}-\sum_{n=1}^{\infty}A_{n}(\mu,M)p_{0}^{n}\ , \end{equation} where the \(A_{n}\) are dimensional coefficients which are functions of the physical mass scale of particles \(\mu\) and the energy scale of new physics \(M\). For a given nonvanishing power exponent \(n\), the superluminal neutrinos propagate with energy dependence \(\delta v:=v-1\propto E^{n}\), where \(E\) denotes the energy of the neutrinos. This is a power-law energy dependence of the superluminal behaviors of neutrinos, and the power exponent \(n\) can be constrained by neutrino observations. If the OPERA report is confirmed, Einstein's special relativity as well as the Minkowskian description of spacetime should be amended. The superluminality of neutrinos may imply a new spacetime structure. In such a spacetime, superluminality of particles, at least of neutrinos, is admitted and consistent with the present neutrino experiments and observations. Meanwhile, causality still holds and the Cherenkov-like process is forbidden. The superluminal neutrinos could then arrive at the OPERA detector from the distant CERN without rapidly losing their energy. Finslerian spacetime has been proposed as a reasonable candidate to account for the neutrino superluminality \cite{Finsler special relativity,Finslerian special relativity Cherenkov-like process forbidden}. Finsler geometry \cite{Book by Bao and Shen} is a straightforward generalization of Riemann geometry without the quadratic restriction on the metric, which may introduce new insights into the spacetime background. The Finsler spacetime structure depends on one or more preferred directions. LIV has been studied in Finsler spacetimes with modified physical dispersion relations \cite{GirelliLS001,MDR,ChangL001}.
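To make the power-law dependence concrete, the sketch below computes the particle speed directly from a single-term toy version of Eq.(\ref{Dispersion relations}), taking \(v=|P|/E\). The mass and coefficient values are hypothetical choices of ours: a neutrino-like mass in GeV and a coefficient tuned so that \(\delta v\sim2.5\times10^{-5}\) near \(17~\rm{GeV}\).

```python
import math

def speed(E, m, A, k):
    """Speed |P|/E from the toy dispersion relation E^2 - P^2 = m^2 - A*E^k."""
    P2 = E ** 2 - m ** 2 + A * E ** k   # squared momentum
    return math.sqrt(P2) / E

m, k = 1e-10, 3            # hypothetical: ~0.1 eV neutrino mass in GeV, cubic term
A = 2 * 2.48e-5 / 17       # tuned so that delta v ~ 2.5e-5 near E = 17 GeV
dv = [speed(E, m, A, k) - 1 for E in (13.9, 17.0, 42.9)]
assert all(d > 0 for d in dv)    # superluminal for A > 0
assert dv[0] < dv[1] < dv[2]     # delta v grows with energy
```

This is only a numerical illustration of how a positive coefficient in the dispersion relation yields an energy-dependent superluminality; it is not the specific Finslerian model constructed below.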
It is worth noting that the modified dispersion relations in DSR could be realized in Finsler geometry \cite{GirelliLS001}. VSR \cite{VSR} was shown to reside in Finsler spacetime \cite{VSR Finsler}. Most recently, the effective field theory with LIV, namely the SME, was linked to Finsler geometry by Kostelecky \cite{KosteleckyFinslerSME}. In addition, the symmetry of special relativity in Finsler spacetimes with constant curvature has been studied systematically \cite{LiC001}. Furthermore, Finsler geometry could also bring new insights into the resolution of the anomalies residing in Einstein's general relativity and cosmology \cite{Finsler gravity}. We have proposed \cite{Finsler special relativity} a Finslerian special relativity of (\(\alpha,\beta\)) type with an additional term of third order in \(\beta/\alpha\) in the spacetime line element. A preferred direction was involved in the line element of Finslerian special relativity to account for the superluminal behaviors of neutrinos. The null structure was found to be enlarged and causality was still preserved for superluminal neutrinos. We studied the kinematics and obtained a new dispersion relation of the form (\ref{Dispersion relations}) with only \(A_{3}\neq0\). The superluminality was then found to be linearly dependent on the energy per unit mass of the particles, which is roughly consistent with the present neutrino experiments and observations. In addition, we proved that energy-momentum is well defined and conserved in Finslerian special relativity \cite{Finslerian special relativity Cherenkov-like process forbidden}. The Cherenkov-like process is forbidden for superluminal neutrinos, so they would not lose their energy rapidly via this process.
After the long propagation from CERN to the Gran Sasso Laboratory, a large number of superluminal neutrinos survive and can be detected by the OPERA detector. In the present paper, we investigate in detail the symmetry, causal structure and superluminality in Finslerian special relativity. The general case of the energy dependence of superluminality is studied. It is found that energy-momentum conservation is preserved and the Cherenkov-like process is forbidden for superluminal neutrinos. The predicted energy dependence of the neutrino superluminality in Finslerian special relativity is compared with data from the superluminal neutrino experiments and astrophysical observations. The rest of the paper is arranged as follows. In Section II, the line element of Finslerian special relativity is proposed and the corresponding dispersion relation is obtained. We show explicitly the superluminality of particles and the enlarged causal structure. In Finslerian special relativity, causality and energy-momentum conservation are preserved. The Cherenkov-like process is proved to be forbidden. In Section III, the energy dependence of the superluminal behaviors of neutrinos is studied in the framework of Finslerian special relativity. Discussions and remarks are given in Section IV. \section{II. Theory: Finslerian special relativity and superluminality} In this section, we present Finslerian special relativity with a general power-law energy dependence of the neutrino superluminality. The corresponding dispersion relations are obtained and the null structure is found to be enlarged. Energy and momentum are shown to be well defined and conserved. The Cherenkov-like process is proved to be kinematically forbidden. \subsection{A.
Finslerian line element} The action of free particles in Finslerian special relativity takes the form \begin{equation} \label{integral length} I\propto\int F\left(x, y\right)d\tau\ , \end{equation} where \(x^{\mu}\) and \(y^{\mu}:=dx^{\mu}/d\tau\) denote the position and four-velocity of particles, respectively. The Greek indices run from \(0\) to \(3\) and the Latin indices run from \(1\) to \(3\). The integrand \(F\) is positively homogeneous of order one \cite{Book by Bao and Shen}. The metric tensor in the Finsler spacetime is defined by \begin{equation} g_{\mu\nu}:=\frac{\partial}{\partial y^\mu}\frac{\partial}{\partial y^\nu}\left(\frac{1}{2}F^2\right)\ , \end{equation} which, together with its inverse, is used to lower and raise the indices of vectors and tensors. The physical spacetime may be described by a Finsler structure which departs mildly from the Minkowski one. Suppose that the Finslerian line element takes the simple form \begin{equation} \label{Finslerian special relativity line elements} F(y)d\tau=\alpha\left(1-A\left(\frac{\beta}{\alpha}\right)^{n+2}\right)d\tau\ , \end{equation} where \begin{eqnarray} \alpha&=&\sqrt{\eta_{\mu\nu}y^{\mu}y^{\nu}}\ ,\\ \beta&=&b_{\mu}y^{\mu}\ , \end{eqnarray} and \(\eta_{\mu\nu}\) is the Minkowski metric, \(b_{\mu}=(1,0,0,0)\) and \(n\) denotes a non-negative real number. The dimensionless parameter \(A\) characterizes the level of departure of Finslerian special relativity from Einstein's special relativity; it takes a tiny value which can be determined uniquely by the superluminal experiments and observations. It is noted that the Finslerian line element (\ref{Finslerian special relativity line elements}) is locally Minkowskian \cite{Book by Bao and Shen} and belongs to the (\(\alpha,\beta\)) type \cite{alpha beta type}.
Furthermore, extra terms in powers of \(\beta/\alpha\) could be added to the Minkowski line element to generate the dispersion relation in Eq.(\ref{Dispersion relations}). \subsection{B. Physical dispersion relations and superluminality} For particles with mass, the normalization of the Finsler norm is \(F(y)=1\). The canonical four-momentum of a particle with mass \(m\) is given by \begin{equation} \label{four-momentum} p_{\mu}:=m\frac{\partial F}{\partial y^{\mu}}\ , \end{equation} which is a conserved quantity. Corresponding to the Finslerian line element (\ref{Finslerian special relativity line elements}), the kinematics implies the physical dispersion relation \begin{equation} g^{\mu\nu}p_{\mu}p_{\nu}=m^{2}\ , \end{equation} which can be rewritten as \begin{equation} \label{Finslerian special relativity dispersion relation} \eta^{\mu\nu}p_{\mu}p_{\nu}=m^{2}-2Ap_{0}^{2}\left(\frac{p_{0}}{m}\right)^{n}\ , \end{equation} where we have neglected terms of higher order in \(A\). This could also be demonstrated by the correspondence between the dispersion relations and the Finsler line elements \cite{GirelliLS001}. For large enough \(p_{0}\) and \(A>0\), the right hand side of equation (\ref{Finslerian special relativity dispersion relation}) is negative and the superluminal behaviors of particles emerge. The speed of a particle is defined as \cite{CacciapagliaDP001} \begin{equation} v:=\frac{\sqrt{-\eta^{ij}p_{i}p_{j}}}{\sqrt{\eta^{00}p_{0}p_{0}}}\approx1-\frac{1}{2u^{2}}+Au^{n}\ , \end{equation} where \(u\) denotes the energy per unit mass \({E}/{m}\). It is demonstrated that the speed of particles can be larger than one when \(A>0\) and \(u\) is large enough. \subsection{C. Null structure and causality} To study the null structure, the Finsler norm is normalized to be \(F(y)=0\). The causal four-velocity is defined by \begin{equation} u_{\mu}:=\frac{\partial F}{\partial y^{\mu}}\ .
\end{equation} The null structure is obtained as \begin{equation} \eta^{\mu\nu}u^{'}_{\mu}u^{'}_{\nu}=-2A(u^{'}_{0})^{n+2}\ , \end{equation} where the primes denote the normalization with respect to $F$. It can be seen that a superluminal causal speed is admitted in this null structure when the right hand side of the above equation is negative. The causal speed is given by \begin{equation} v_{c}:=\frac{\sqrt{-\eta^{ij}u^{'}_{i}u^{'}_{j}}}{\sqrt{\eta^{00}u^{'}_{0}u^{'}_{0}}}\approx1+A\left(u^{'}_{0}\right)^{n}\ . \end{equation} It is found that the null structure is enlarged in Finslerian special relativity compared with that in Einstein's special relativity when \(A>0\). In addition, the superluminal behaviors of neutrinos would not break causality, since the speed of neutrinos is always smaller than the causal speed. This null structure is illustrated schematically in Fig.\ref{fig1}. \begin{figure}[h] \begin{center} \includegraphics[width=8cm]{null_structure.eps} \caption{\small{A schematic plot of the Finslerian null structure. The dashed line denotes the null structure in Einstein's special relativity and the solid line denotes the one in Finsler spacetime. The null structure is enlarged in Finslerian special relativity.}} \label{fig1} \end{center} \end{figure} \subsection{D. Energy-momentum conservation} It is obvious that the Finslerian line element (\ref{Finslerian special relativity line elements}) is invariant under spacetime translations since it does not depend on the spacetime position. This can also be demonstrated by the approach of isometries, or equivalently Killing vectors \cite{LiC001}.
The infinitesimal coordinate transformation is \begin{eqnarray} \label{infinitesimal transformation 1} x^{\mu}&\longrightarrow& x^{\mu}+\epsilon V^{\mu}\ ,\\ \label{infinitesimal transformation 2} y^{\mu}&\longrightarrow& y^{\mu}+\epsilon \frac{\partial V^{\mu}}{\partial x^{\nu}}y^{\nu}\ , \end{eqnarray} where \(|\epsilon|\ll 1\) and the generators \(V^{\mu}\) are called Killing vectors. The Finsler structure admits an isometry if and only if \begin{equation} F(x,y)=F(\bar{x},\bar{y})\ , \end{equation} under the coordinate transformations (\ref{infinitesimal transformation 1}) and (\ref{infinitesimal transformation 2}). For the Finslerian line element (\ref{Finslerian special relativity line elements}) of (\(\alpha,\beta\)) type, the isometry implies that the Killing vectors satisfy \begin{equation} \label{Killing equations} V^{\mu}\frac{\partial F}{\partial x^{\mu}}+y^{\nu}\frac{\partial V^{\mu}}{\partial x^{\nu}}\frac{\partial F}{\partial y^{\mu}}=0\ . \end{equation} It is obvious that the constant vectors \(V^{\mu}=C^{\mu}\) are solutions of the Killing equation (\ref{Killing equations}) for the Finslerian line element (\ref{Finslerian special relativity line elements}). Based on the Noether theorem, the spacetime translational invariance implies that the energy and momentum \(p_{\mu}\) are well defined and conserved in Finslerian special relativity. \subsection{E. No Cherenkov-like process} Based on the energy-momentum conservation and the dispersion relation (\ref{Finslerian special relativity dispersion relation}), the Cherenkov-like process can be proved to be forbidden in Finslerian special relativity. It suffices to describe the kinematics of the Cherenkov-like process by the generic process (\(\mu\longrightarrow M+M\)) \cite{Amelino-CameliaETAL01}. There is a single incoming particle with mass \(\mu\), energy \(E\), and momentum \({P}\), and two ejected particles, each with mass \(M\), energies \(E_{1}\), \(E_{2}\), and momenta \({P}_{1}\), \({P}_{2}\).
Meanwhile, the incoming particle is lighter than the two ejected particles (\(\mu<M\)). The energy-momentum conservation in Finslerian special relativity implies that \begin{eqnarray} \label{energy conservation} E&=&E_{1}+E_{2}\ ,\\ \label{momentum conservation} {P}^{2}&=&{P}_{1}^{2}+{P}_{2}^{2}+2P_{1}P_{2}\cos\theta\ , \end{eqnarray} where \(\theta\) is the angle between the moving directions of the two ejected particles. By combining the four-momentum (\ref{four-momentum}) and the dispersion relation (\ref{Finslerian special relativity dispersion relation}) with the energy-momentum conservation (\ref{energy conservation}) and (\ref{momentum conservation}), we obtain \begin{eqnarray} \cos\theta&=&\frac{2E_{1}E_{2}+2A\left(\frac{(E_{1}+E_{2})^{n+2}}{\mu^{n}}-\frac{E_{1}^{n+2}+E_{2}^{n+2}}{M^{n}}\right) -\mu^{2}+2M^{2}}{2E_{1}E_{2}+2AE_{1}E_{2}\frac{E_{1}^{n}+E_{2}^{n}}{M^{n}}-M^{2}\left(\frac{E_{1}}{E_{2}}+\frac{E_{2}}{E_{1}}\right)} +\mathcal{O}(A^{2})\nonumber\\ &=&1+\frac{2A\left(\frac{(E_{1}+E_{2})^{n+2}}{\mu^{n}}-\frac{E_{1}^{n+2}+E_{2}^{n+2}}{M^{n}}\right)-2AE_{1}E_{2}\frac{E_{1}^{n}+E_{2}^{n}}{M^{n}} -\mu^{2}+2M^{2}+M^{2}\left(\frac{E_{1}}{E_{2}}+\frac{E_{2}}{E_{1}}\right)} {2E_{1}E_{2}+2AE_{1}E_{2}\frac{E_{1}^{n}+E_{2}^{n}}{M^{n}}-M^{2}\left(\frac{E_{1}}{E_{2}}+\frac{E_{2}}{E_{1}}\right)} +\mathcal{O}(A^{2})\nonumber\\ &>&1+\frac{2A\left(\frac{(E_{1}+E_{2})^{n+2}}{\mu^{n}}-\frac{E_{1}^{n+2}+E_{2}^{n+2}}{M^{n}}\right)-2A\frac{E_{1}^{n+1}E_{2}+E_{1}E_{2}^{n+1}}{M^{n}}} {2E_{1}E_{2}+2A\frac{E_{1}^{n+1}E_{2}+E_{1}E_{2}^{n+1}}{M^{n}}}\nonumber\\ &>&1+\frac{2A}{M^{n}}\frac{\left(E_{1}+E_{2}\right)^{n+2}-E_{1}^{n+2}-E_{2}^{n+2}-E_{1}^{n+1}E_{2}-E_{1}E_{2}^{n+1}} {2E_{1}E_{2}+2A\frac{E_{1}^{n+1}E_{2}+E_{1}E_{2}^{n+1}}{M^{n}}}\ , \end{eqnarray} where the ultra-relativistic approximation (\(\mu\ll E,~M\ll E_{1},~M\ll E_{2}\)) is used in the third step. It is easy to check that the right hand side of the above formula is always greater than \(1\).
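The final bound above exceeds \(1\) for \(A>0\) because the combination \(\left(E_{1}+E_{2}\right)^{n+2}-E_{1}^{n+2}-E_{2}^{n+2}-E_{1}^{n+1}E_{2}-E_{1}E_{2}^{n+1}\) is positive. A minimal numerical sketch (with illustrative energies of our choosing, not taken from the text) confirms this positivity:

```python
def cherenkov_numerator(E1, E2, n):
    """Combination appearing in the lower bound on cos(theta);
    its positivity (for A > 0) forces cos(theta) > 1, which
    kinematically forbids the Cherenkov-like decay."""
    return ((E1 + E2) ** (n + 2) - E1 ** (n + 2) - E2 ** (n + 2)
            - E1 ** (n + 1) * E2 - E1 * E2 ** (n + 1))

# Sample ultra-relativistic energies (GeV); positivity holds for n >= 1.
for n in (1, 2, 3):
    for (E1, E2) in ((10.0, 20.0), (5.0, 5.0), (1.0, 100.0)):
        assert cherenkov_numerator(E1, E2, n) > 0
print("numerator positive for all sampled cases")
```

For \(n=1\) the combination reduces to \(2E_{1}^{2}E_{2}+2E_{1}E_{2}^{2}\), which is manifestly positive; the binomial expansion gives the same conclusion for higher \(n\).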
Thus, the Cherenkov-like process is forbidden for the superluminal neutrinos in Finslerian special relativity and the superluminal neutrinos would not lose their energy rapidly. \section{III. Phenomenology: Energy dependence of the neutrino superluminality} The superluminality was reported by the OPERA collaboration (OPERA) \cite{OPERA2011} to be \(\delta v:=v-1=\left(2.18\pm0.77\pm0.30\right)\times10^{-5}\) for muon neutrinos with mean energy \(13.9~\rm{GeV}\) and \(\delta v=\left(2.75\pm0.75\pm0.30\right)\times10^{-5}\) for muon neutrinos with mean energy \(42.9~\rm{GeV}\). For all neutrinos with mean energy \(17~\rm{GeV}\), the superluminality is reported to be \(\delta v=(2.48\pm0.28\pm0.30)\times10^{-5}\). The MINOS collaboration (MINOS) \cite{MINOS2007} reported the superluminality to be \(\delta v=(5.1\pm 2.9)\times10^{-5}\) for muon neutrinos with energy \(3~\rm{GeV}\). A report from Fermilab in 1979 (FermiLab1979) \cite{Fermilab1979} showed that muon neutrinos with energy between \(30~\rm{GeV}\) and \(120~\rm{GeV}\) may propagate superluminally with \(\delta v\sim10^{-5}\). In addition, the observations of Supernova 1987A (SN1987A) \cite{SN1987A} set a stringent limit on the superluminal behaviors of antielectron neutrinos with energy \(\sim10~\rm{MeV}\), namely \(\delta v\lesssim2\times10^{-9}\). As mentioned in the introduction, the superluminality is revealed by the physical dispersion relations (\ref{Dispersion relations}) with extra terms that depend phenomenologically on the energy of the particles. In particular, the simplest linear and quadratic energy dependences are popularly considered, corresponding to the five- and six-dimensional operators added to the neutrino Lagrangians in the LIV models \cite{CacciapagliaDP001,EllisHMRS001,von GersdorffQ001}. In addition, the data of the OPERA and MINOS experiments revealed that the power exponent of the energy dependence should be in the range \(0.40-1.18\) \cite{Trojan001}.
However, the SN1987A observation showed that both linear and quadratic energy dependence are ruled out for the neutrino superluminality \cite{AlexandreEM001}. Only energy dependence of order higher than two could reconcile the datasets of the SN1987A and OPERA experiments \cite{AlexandreEM001}. In Finslerian special relativity, we have shown that the generic power-law dispersion relations (\ref{Dispersion relations}) are related to Finslerian structures leading to LIV. In the following, we consider the possible energy dependence of the superluminal behaviors of neutrinos in the Finslerian framework. The simple power-law energy dependence of the neutrino superluminality is studied by combining the present observations of neutrino superluminality. In addition, one of the simplest interpolations is considered to take into account the stringent constraint on the neutrino superluminality from SN1987A. \subsection{A. Energy independent superluminality} The observed superluminality is reported to be of order \(10^{-5}\), together with large errorbars, for muon neutrinos with energy between \(\sim1~\rm{GeV}\) and \(\sim200~\rm{GeV}\) from the OPERA, MINOS and FermiLab1979 experiments \cite{OPERA2011,MINOS2007,Fermilab1979}. In this energy range of neutrinos, the superluminality may be energy independent \cite{CacciapagliaDP001,LiW001,Energy independent__Dass,Amelino-CameliaGLMRL001} \begin{equation} \delta v\sim\mathcal{O}\left(10^{-5}\right)\ , \end{equation} which is consistent with the experimental datasets. The energy independent superluminality of neutrinos corresponds to the physical dispersion relation \begin{equation} \label{energy independence} \eta^{\mu\nu}p_{\mu}p_{\nu}=m^{2}-2Ap_{0}^{2}\ , \end{equation} where the parameter \(A\sim\mathcal{O}\left(10^{-5}\right)\). This dispersion relation has been proposed in previous works on LIV at high energy scales \cite{SME,ColemanGlashowLIV}.
It is just the dispersion relation (\ref{Finslerian special relativity dispersion relation}) with \(n=0\). In Finslerian special relativity, the dispersion relation (\ref{energy independence}) is related to the Finslerian structure \begin{equation} F(y)=\alpha\left(1-A\left(\frac{\beta}{\alpha}\right)^{2}\right)\ . \end{equation} Meanwhile, the neutrino superluminality from SN1987A is four orders of magnitude smaller than \(10^{-5}\) for antielectron neutrinos with mean energy \(\sim10~\rm{MeV}\). Thus, an energy threshold of the superluminality may exist and should be much higher than \(\sim10~\rm{MeV}\) for neutrinos. In other words, the LIV of superluminal neutrinos may be ``mass'' dependent \cite{LiW001}. The effects of the Finslerian structure may emerge and impact the neutrino superluminality above this energy threshold. Above the huge Lorentz boosts related to this energy threshold, the spacetime background seen by the neutrinos may be modified by Finsler geometry, with the Minkowski structure altered into a Finslerian one. The energy independent superluminality of neutrinos is illustrated in Fig.~\ref{fig2}. \begin{figure}[h] \begin{center} \includegraphics[width=8cm]{energy_independence.eps} \caption{\small{A schematic plot of the energy independence of the neutrino superluminality. The errorbars denote the datasets of observations on superluminal neutrinos from the OPERA (red) \cite{OPERA2011}, MINOS (blue) \cite{MINOS2007} and FermiLab1979 (black) \cite{Fermilab1979} experiments. The horizontal lines denote energy independent superluminal behaviors of neutrinos, set to \(\delta v=(2.5,~3.5,~5.0,~7.0)\times10^{-5}\) from below to above.}} \label{fig2} \end{center} \end{figure} \subsection{B.
Linear energy dependent superluminality} Previous studies \cite{Finsler special relativity,LiW001,Energy independent__Dass,Amelino-CameliaGLMRL001} showed that a linear energy dependence of the superluminality could also account for the superluminal behaviors of neutrinos observed by the OPERA, MINOS and FermiLab1979 experiments because of the large errorbars of these experimental data. In general, the linear energy dependence of the neutrino superluminality could be written as \begin{equation} \delta v=aE+b\ , \end{equation} where the parameter \(a=A/m\) in Finslerian special relativity and the parameter \(b\) denotes an offset term. This superluminality corresponds to the dispersion relation \begin{equation} \eta^{\mu\nu}p_{\mu}p_{\nu}=m^{2}-2ap_{0}^{3}-2b p_{0}^{2}\ . \end{equation} In the case that the offset term vanishes (\(b=0\)), a value \(a^{-1}\sim10^{6}~\rm{GeV}\) is roughly consistent with the data of the above neutrino experiments \cite{Finsler special relativity,LiW001}. The corresponding Finslerian structure is given by \cite{Finsler special relativity} \begin{equation} F(y)=\alpha\left(1-A\left(\frac{\beta}{\alpha}\right)^{3}\right)\ . \end{equation} The upper limit of the muon neutrino mass is of order \(0.01~\rm{eV}\) \cite{PDG}. Thus, the parameter \(A\) is set to be of order \(10^{-17}\) in this case. The linear energy dependence without offsets of the superluminal behaviors of neutrinos is illustrated in Fig.~\ref{fig3}. \begin{figure}[h] \begin{center} \includegraphics[width=8cm]{Linear_energy_dependence_without_offsets.eps} \caption{\small{A schematic plot of the linear energy dependence without offsets of the neutrino superluminality.
The parameter $A$ is set to be \((7,~8,~9)\times10^{-18}\) from below to above and the neutrino mass is chosen to be \(0.01~\rm{eV}\).}} \label{fig3} \end{center} \end{figure} In the case that the offset term exists (\(b\neq0\)), a good fit to the observed data is given by \(a^{-1}=5\times10^{6}~\rm{GeV}\) and \(b=1.91\times10^{-5}\), namely \cite{Energy independent__Dass,Amelino-CameliaGLMRL001} \begin{equation} \delta v=2\times10^{-7}E_{GeV}+1.91\times10^{-5}\ , \end{equation} where the lower index \(\rm{GeV}\) denotes the energy unit of neutrinos. This superluminal case is related to the Finslerian structure of the form \begin{equation} F(y)=\alpha\left(1-A\left(\frac{\beta}{\alpha}\right)^{3}-b\left(\frac{\beta}{\alpha}\right)^{2}\right)\ , \end{equation} where the parameter \(A\) is of order \(2\times10^{-18}\). It is noted that it is difficult to reconcile the datasets of OPERA and SN1987A in this simplest linear energy dependent scenario. The energy threshold of the neutrino superluminality may also appear in this linear scenario. This linear energy dependence with offsets of the neutrino superluminality is illustrated in Fig.~\ref{fig4}. \begin{figure}[h] \begin{center} \includegraphics[width=8cm]{Linear_energy_dependence_with_offset.eps} \caption{\small{A schematic plot of the linear energy dependence with offsets of the neutrino superluminality. The parameter $A$ is constrained to be \(2\times10^{-18}\) and the parameter $b$ is constrained to be \(1.91\times10^{-5}\) for the purplish red fit-line.}} \label{fig4} \end{center} \end{figure} \subsection{C. Power-law energy dependent superluminality} Both the energy independence and the linear dependence of the neutrino superluminality are consistent with the datasets of the present OPERA, MINOS and FermiLab1979 observations. However, the data of SN1987A showed that the energy independent superluminality should disappear below a certain energy threshold.
In addition, the linearly dependent superluminality of neutrinos is slightly larger than the observed upper limit \(2\times10^{-9}\), although the superluminality is predicted to be at the same order, \(10^{-9}\). In the low energy ranges, the neutrino superluminality is suppressed to be smaller than that in the high energy ranges. This fact motivates studies of power-law dependence of higher orders, which is steeper than the linear case. Nonlinear energy dependence of the superluminality here means power-law energy dependence of higher orders for the superluminal neutrinos. To account for the data of the SN1987A observations, the simplest power-law behavior of superluminal neutrinos is popularly considered \begin{equation} \delta v=aE^{i}_{GeV}\ , \end{equation} where the parameters \(a\) and \(i\) should be determined by the experimental observations. This kind of neutrino superluminality corresponds to the Finslerian line element (\ref{Finslerian special relativity line elements}) and the dispersion relation (\ref{Finslerian special relativity dispersion relation}) with \(n=i\). By combining the data of OPERA and MINOS, it is found that the dimensional parameter \(a\) should be in the range \((0.09-16.6)\times10^{-5}\) and the dimensionless power exponent \(i\) should be within \(0.40-1.18\) \cite{Trojan001}. However, it is argued that the data of SN1987A rule out the linear and quadratic dependence of the neutrino superluminality \cite{AlexandreEM001}. More detailed discussions showed that the SN1987A data disfavor all \(i<2.5\) \cite{Amelino-CameliaGLMRL001}. \subsection{D. Interpolations} As discussed in the previous subsections, the OPERA data require a flat energy dependence of the neutrino superluminality while the SN1987A data require a steeper one. To reconcile the SN1987A and OPERA observations, it is essential to balance these two competing requirements.
In principle, we could always reconcile the datasets of superluminal neutrinos from the SN1987A and OPERA observations. One of the simplest ways to realize this is to find an energy-dependent function such that the superluminal behaviors are steep at low energies while becoming flat at high energies. However, we have demonstrated that this is difficult for functions with the simplest power-law energy dependence containing no more than two parameters. If more parameters are involved, it is possible to reconcile the datasets of the SN1987A and OPERA observations. For instance, the Lifshitz-type fermion model implies an energy dependence of the neutrino superluminality with even powers of high order to realize this purpose \cite{AlexandreEM001}. Various forms of interpolation could reconcile the present observed datasets of superluminal neutrinos, and it is impractical to explore all possible interpolations in this paper. As an example, we consider one of the simplest interpolations, of the form \begin{equation} \label{interpolation} \delta v=a\left(\frac{bE}{bE+1}\right)^{i}\ , \end{equation} where the parameters \(a\), \(b\) and \(i\) should be constrained by the present neutrino observations. Here, one extra dimensional parameter \(b\) is involved, which denotes a typical energy scale of superluminal neutrinos. The datasets of the SN1987A, OPERA and FermiLab1979 observations constrain the parameters to be \(a\approx8.0\times10^{-5}\), \(b^{-1}\approx40~\rm{GeV}\) and \(i\approx1.3\). The superluminal curve determined by these parameters passes through almost all errorbars of the neutrino superluminality from the present terrestrial observations. The neutrino superluminality is less than \(2\times10^{-9}\) at the energies of the neutrinos from SN1987A, which is consistent with that astrophysical observation.
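As a numerical sanity check (a sketch using the parameter values quoted in the text; the function and variable names are ours), the interpolation can be evaluated at the SN1987A energy, and the with-offset linear fit of subsection B at the OPERA mean energies:

```python
def delta_v_interp(E_GeV, a=8.0e-5, b=1.0 / 40.0, i=1.3):
    """Interpolation delta_v = a * (bE/(bE+1))^i with the parameters
    quoted in the text: a ~ 8.0e-5, b^-1 ~ 40 GeV, i ~ 1.3."""
    x = b * E_GeV
    return a * (x / (x + 1.0)) ** i

def delta_v_linear(E_GeV, a=2.0e-7, b=1.91e-5):
    """With-offset linear fit delta_v = 2e-7 * E_GeV + 1.91e-5."""
    return a * E_GeV + b

# SN1987A: ~10 MeV antielectron neutrinos; bound delta_v < 2e-9.
print(delta_v_interp(0.01))   # ~1.7e-9, below the SN1987A limit
# OPERA mean energies (GeV); linear fit reproduces the reported values.
print(delta_v_linear(13.9))   # ~2.19e-5 (reported: 2.18e-5)
print(delta_v_linear(42.9))   # ~2.77e-5 (reported: 2.75e-5)
```

The interpolation is thus suppressed well below the SN1987A bound at \(10~\rm{MeV}\) while remaining at the observed order \(10^{-5}\) in the terrestrial energy range.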
It is noted that this kind of superluminality could also be realized in the framework of Finslerian special relativity, since the formula (\ref{interpolation}) can generally be expanded into a power-law series, albeit of complicated form. The interpolation (\ref{interpolation}) is illustrated in Fig.~\ref{fig5}. \begin{figure}[h] \begin{center} \includegraphics[width=8cm]{Interpolation.eps} \caption{\small{A schematic plot of the interpolation (\ref{interpolation}) of all datasets of superluminal behaviors of neutrinos from the OPERA, MINOS and FermiLab1979 experiments. The purplish red curve is the interpolating curve, for which the parameters are constrained to be \(a\approx8.0\times10^{-5}\), \(b^{-1}\approx40~\rm{GeV}\) and \(i\approx1.3\). For the neutrinos from SN1987A, the superluminality is consistent with the stringent upper limit \(2\times10^{-9}\) at the energy \(\sim10~\rm{MeV}\).}} \label{fig5} \end{center} \end{figure} \section{IV. Discussions and Remarks} The OPERA report severely challenges the foundations of modern physics. If it is confirmed in the future, the neutrino superluminality would improve our knowledge of spacetime structure. The superluminal behaviors of neutrinos may imply that the nature of spacetime differs from the Minkowski one and that the Lorentz symmetry should be replaced by some new symmetry. In such a new spacetime, the neutrino superluminality is admitted and would not be suppressed by the Cherenkov-like process. Meanwhile, the causal law is preserved and the energy-momentum is conserved. Most importantly, the theoretical predictions on the neutrino superluminality should be consistent with the experiments and observations. In our previous paper (arXiv:1110.6673 [hep-ph]), we proposed Finslerian special relativity as a reasonable candidate to account for the OPERA neutrino superluminality. Finslerian special relativity resides in the Finsler spacetime, where LIV is admitted.
It was found that Finslerian special relativity meets the above requirements on the new spacetime and that the linear energy dependence of the superluminality is consistent with the data of the present observations on superluminal neutrinos. In this paper, we investigated the symmetry, causal structure and superluminality in Finslerian special relativity. In the generic case, the superluminal behaviors of neutrinos have a power-law energy dependence. It was found that generic Finslerian special relativity also admits the existence of the neutrino superluminality, and that the superluminal neutrinos would not rapidly lose their energy via the Cherenkov-like process. Both causality and energy-momentum conservation are preserved. In addition, we studied the dispersion relations with extra power-law terms of higher orders corresponding to the Finslerian structures. These dispersion relations were compared in detail with the datasets of the present observations of superluminal neutrinos. It was found that Finslerian special relativity could be a reasonable arena to interpret the neutrino superluminality, at least in the energy ranges of the present terrestrial experiments and astrophysical observations. Of course, more observational datasets on superluminal neutrinos are required to test and discriminate the models. We hope that the MINOS experiment and the T2K experiment will give opportunities to test and discriminate the predictions of Finslerian special relativity in the future. \begin{acknowledgments} We thank Y. G. Jiang, M. H. Li and H. N. Lin for useful discussions. This work is supported by the National Natural Science Foundation of China under Grant No. 10875129 and No. 11075166. \end{acknowledgments}
\section{Introduction} Solid state emitters such as color-centers and epitaxially grown quantum dots provide both electronic spin qubits and coherent optical transitions, and are optically accessible quantum memories. They can therefore serve as building blocks of a quantum network composed of nodes in which information is stored in spin qubits and interactions between nodes are mediated by photons\cite{kimble_quantum_2008, bernien_heralded_2013, delteil2016generation, stockill_phase_2017}. However, due to the effects of their complex solid state environment, most quantum emitters do not simultaneously provide long coherence time for the memory, and favorable optical properties such as bright, spectrally stable emission. The negatively charged silicon vacancy center in diamond (SiV$^-$, hereafter simply referred to as SiV) has been recently identified as a system that can overcome these limitations, since it provides excellent optical and spin properties simultaneously. Its dominant zero-phonon-line (ZPL) emission and stable optical transition frequencies resulting from its inversion symmetry\cite{gali2013ab, muller_optical_2014, sipahigil_indistinguishable_2014} have recently been used to realize single-photon switching\cite{sipahigil_integrated_2016} and a fibre-coupled coherent single-photon source\cite{burek_fiber-coupled_2017} in a nanophotonic platform. Further, recent demonstrations of microwave\cite{pingault_coherent_2017} and all-optical\cite{becker2017all} control of its electronic spin, as well as long ($\sim$10 ms) spin coherence times at mK temperatures\cite{sukachev2017silicon}, when electron-phonon processes in the center are suppressed,\cite{jahnke_electronphonon_2015, pingault_coherent_2017} make the SiV a good spin qubit. Scaling up these demonstrations to multi-qubit networks requires local tunability of individual emitters, as well as the realization of strong interactions between them. 
In this work, we control local strain in the SiV environment using a nano-electro-mechanical system (NEMS), and show wide tunability for both optical and spin transition frequencies. In particular, we demonstrate hundreds of GHz of optical tuning, sufficient to achieve spectrally identical emitters for photon-mediated entanglement\cite{kimble_quantum_2008, bernien_heralded_2013}. Further, we characterize the strain Hamiltonian of the SiV and measure high strain susceptibilities for both the electronic and spin levels. Building on this strain response, we discuss a scheme to realize strong coupling of the SiV spin to coherent phonons in GHz frequency nanomechanical resonators. While phonons have been proposed as quantum transducers for qubits,\cite{wallquist_hybrid_2009,rabl_quantum_2010} experiments with solid-state spins have been limited to the classical regime of large displacement amplitudes driving their internal levels\cite{arcizet_single_2011, kolkowitz_coherent_2012, ovartchaiyapong_dynamic_2014, teissier_2014, barfuss_strong_2015, macquarrie_coherent_2015, golter_optomechanical_2016, golter_saw_2016, meesala_enhanced_2016}. The high strain susceptibility of the SiV ground states can enable MHz spin-phonon coupling rates in existing nanomechanical resonators. Such a spin-phonon interface can enable quantum gates between spins akin to those in ion traps\cite{cirac-zoller_1995, molmer-sorensen_1999, leibfried_experimental_2003}, and interfaces with disparate qubits\cite{schuetz_universal_2015,aref_2015}. \section{Strain tuning of optical transitions} \begin{figure}[ht!] \includegraphics[width=\columnwidth]{Fig1.pdf} \caption{(a) Electronic level structure of the SiV center (molecular structure shown in inset) at zero strain showing ground and excited manifolds with spin-orbit eigenstates. 
The four optical transitions A, B, C, and D at zero magnetic field, and splittings between orbital branches in the ground state (GS) and excited state (ES), $\Delta_{\mathrm{gs}}$ and $\Delta_{\mathrm{es}}$ respectively are indicated. In the presence of a magnetic field, each orbital branch splits into two Zeeman sublevels. A spin-qubit can be defined in the sublevels of the lower orbital branch in the GS. (b) Schematic of the diamond cantilever device and surrounding electrodes with a corresponding scanning electron microscope (SEM) image in the inset. Diamond crystal axes relative to the cantilever orientation are shown. Four possible orientations of the highest symmetry axis of an SiV are indicated by the four arrows above the cantilever. Under application of strain, these can be grouped into axial (red) and transverse (blue) orientations. Molecular structure of a transverse-orientation SiV as viewed in the plane normal to the cantilever axis is shown below, and crystal axes that define the internal co-ordinate frame of the color center are indicated. The $z$-axis is the highest symmetry axis, which defines the orientation of the SiV. } \label{fig1} \end{figure} The SiV center is an interstitial point defect in which a silicon atom is positioned midway between two adjacent missing carbon atoms in the diamond lattice as depicted in the inset of Fig. \ref{fig1}(a). Its electronic level structure at zero strain is shown in Fig. \ref{fig1}(a). The optical ground state (GS) and excited state (ES) each contain two distinct electronic configurations shown by the bold horizontal lines. Physically, each of the two branches in the GS and ES corresponds to the occupation of a specific $E$-symmetry orbital by an unpaired hole.\cite{HeppThesis} At zero magnetic field, the degeneracy of these orbitals is broken by spin-orbit (SO) coupling leading to frequency splittings $\Delta_{\mathrm{gs}}$ = 46 GHz, and $\Delta_{\mathrm{es}}$ = 255 GHz respectively. 
Due to inversion symmetry of the defect about the Si atom, the wavefunctions of these orbitals can be classified according to their parity with respect to this inversion center.\cite{gali2013ab, HeppThesis} Thus, the GS configurations correspond to the presence of the unpaired hole in one of the even-parity orbitals $e_{g+}, e_{g-}$, while the ES configurations have this hole in one of the odd-parity orbitals $e_{u+}, e_{u-}$. Here the subscripts $g$, $u$ refer to even ($gerade$) and odd ($ungerade$) parity respectively, and $+$, $-$ refer to the orbital angular momentum projection $l_Z$. This specific level structure gives rise to four distinct optical transitions in the ZPL indicated by A, B, C, D in Fig. \ref{fig1}(a). Upon application of a magnetic field, degeneracy between the SO eigenstates is further broken to reveal two sub-levels within each orbital branch corresponding to different spin states of the unpaired hole ($S= 1/2$). In this manner, a spin-qubit can be defined on the two sublevels of the lowest orbital branch in the ground state. To control local strain in the environment of the SiV center, we use a diamond cantilever, schematically shown in Fig. \ref{fig1}(b). Electrodes are fabricated, one on top of the cantilever, and another on the substrate below the cantilever to form a capacitive actuator. By applying a specific DC voltage to these electrodes, we can deflect the cantilever to achieve a desired amount of static strain at the SiV site. The fabrication procedure based on angled etching of diamond \cite{atikian2017freestanding,burek_free-standing_2012} and device design are discussed in detail elsewhere \cite{sohn2017controlling}. The diamond sample with cantilever NEMS is maintained at 4 K in a Janis ST-500 continuous-flow liquid helium cryostat. We perform optical spectroscopy on SiVs inside the cantilever via resonant laser excitation of the transitions shown in Fig. \ref{fig1}(a).
Mapping the response of these transitions as a function of voltage applied to the device allows us to study the strain response of the SiV electronic structure. The diamond samples used in our study have a $[001]$-oriented top surface, and the long axis of the cantilever is oriented along the $[110]$ direction. There are four possible equivalent orientations of SiVs - $[111]$, $[\bar{1}\bar{1}1]$, $[1\bar{1}1]$, $[\bar{1}11]$ - in a diamond crystal, indicated by the four arrows above the cantilever in Fig. \ref{fig1}(b). Since the cantilever primarily achieves uniaxial strain directed along $[110]$, this breaks the equivalence of the four orientations, and leads to two classes indicated by the blue and red colored arrows in Fig. \ref{fig1}(b). The blue SiVs, oriented perpendicular to the cantilever long-axis, predominantly experience uniaxial strain along their internal $y$-axis (see inset of Fig. \ref{fig1}(b)). On the other hand, the red SiVs are not orthogonal to the cantilever long-axis, and experience a non-trivial strain tensor, which includes significant strain along their internal $z$-axis. For simplicity, we refer to blue SiVs as `transverse-orientation' SiVs, and red SiVs as `axial-orientation' SiVs. This nomenclature is used with the understanding that it is specific to the situation of predominantly $[110]$ uniaxial strain applied with our cantilevers. \begin{figure}[ht!] \includegraphics[width=\columnwidth]{Fig2.pdf} \caption{Tuning of optical transitions of (a) transverse-orientation SiV (red in Fig. 1(b)), and (b) axial orientation SiV (blue in Fig. 1(b)). Voltage applied to the device is indicated next to each spectrum.} \label{fig2} \end{figure} Two distinct strain-tuning behaviors correlated with SiV orientation are observed as shown in Fig. \ref{fig2}. 
Orientation of SiVs in the cantilever is inferred from polarization-dependence of their optical transitions at zero strain.\cite{HeppThesis} With gradually increasing strain, transverse-orientation SiVs show an increasing separation between the A and D transitions with relatively small shifts in the B and C transitions as seen in Fig. \ref{fig2}(a). This behavior has been observed in a previous experiment with an ensemble of SiVs.\cite{sternschulte_1.681-ev_1994} On the other hand, axial-orientation SiVs show a more complex tuning behavior in which all transitions shift as seen in Fig. \ref{fig2}(b). In the context of photon-mediated entanglement of emitters, typically, photons emitted in the C line, the brightest and narrowest-linewidth transition, are of interest\cite{sipahigil_integrated_2016}. Upon comparing Figs. \ref{fig2}(a) and (b), we note that this transition is significantly more responsive for axial-orientation SiVs. Particularly in Fig. \ref{fig2}(b), we achieve tuning of the C transition wavelength by 0.3 nm (150 GHz), approximately 10 times the typical inhomogeneity in optical transition frequencies of SiV centers.\cite{muller_optical_2014,evans_narrow-linewidth_2016} Thus, NEMS-based strain control can be used to deterministically tune multiple on-chip or distant emitters to a set optical wavelength. In particular, integration of this NEMS-based strain-tuning with existing diamond nanophotonic devices\cite{sipahigil_integrated_2016, burek_fiber-coupled_2017, bhaskar_gev_2017, zhang_2017, Burek:2014bj} can enable scalable on-chip entanglement and widely tunable single photon sources. Besides static tuning of emitters, dynamic control of the voltage applied to the NEMS can be used to counteract slow spectral diffusion and stabilize optical transition frequencies\cite{acosta_dynamic_2012}. \section{Effect of strain on electronic structure} \begin{figure}[ht!]
\includegraphics[width=\columnwidth]{Fig3.pdf} \caption{(a) Dominant effect of $E_g$-strain on the electronic levels of the SiV. (b) Dominant effect of $A_{1g}$-strain on the electronic levels of the SiV. (c) Normalized strain-tensor components experienced by transverse-orientation SiV (red in Fig. 1(b)), and (d) axial orientation SiV (blue in Fig. 1(b)) in the SiV co-ordinate frame upon deflection of the cantilever. (e) Variation in orbital splittings within GS (solid green squares) and ES (open blue circles) upon application of $E_g$-strain. Data points are extracted from the optical spectra in Fig. 2(a). Solid curves are fits to theory in text. (f) Tuning of mean optical wavelength with $A_{1g}$ strain. Data points are extracted from the optical spectra in Fig. 2(b). Solid line is a linear fit as predicted by theory in text.} \label{fig3} \end{figure} Following previous work on point defects,\cite{hughes_uniaxial_1967, maze_properties_2011, HeppThesis} we employ group theory to explain the effect of strain on the SiV electronic levels, and extract the susceptibilities for various strain components. \subsection{Strain Hamiltonian}\label{strain.theory} In this section, we describe the strain Hamiltonian of the SiV center, and summarize the physical effects of various modes of deformation on the orbital wavefunctions. A more detailed group-theoretic discussion of the results in this section is provided in Appendix \ref{group.theory} and in Ref. \cite{HeppThesis}. Based on the symmetries of the orbital wavefunctions, it can be shown that the effects of strain on the GS ($e_g$) and ES ($e_u$) manifolds are independent and identical in form. 
For either manifold, the strain Hamiltonian in the basis of $\{\ket{e_x\downarrow}, \ket{e_x\uparrow}, \ket{e_y\downarrow}, \ket{e_y\uparrow}\}$ states (pure orbitals unmixed by SO coupling as defined in \cite{HeppThesis}) is given by \begin{equation} \mathbb{H}^{\text{strain}} = \begin{bmatrix} \epsilon_{A_{1g}}-\epsilon_{E_{gx}} & \epsilon_{E_{gy}} \\ \epsilon_{E_{gy}} & \epsilon_{A_{1g}}+\epsilon_{E_{gx}} \\ \end{bmatrix}\otimes \mathbb{I}_{2} \label{eq:Hstr.spin} \end{equation} The spin part of the wavefunction is associated with an identity matrix in Eq. (\ref{eq:Hstr.spin}) because lattice deformation predominantly perturbs the Coulomb energy of the orbitals, which is independent of the spin character. Each $\epsilon_r$ is a linear combination of strain components $\epsilon_{ij}$, and corresponds to specific symmetries indicated by the subscript $r$. \begin{eqnarray} \epsilon_{A_{1g}} & = & t_\perp(\epsilon_{xx}+\epsilon_{yy}) + t_\parallel\epsilon_{zz} \nonumber\\ \epsilon_{E_{gx}} & = & d(\epsilon_{xx}-\epsilon_{yy}) + f\epsilon_{zx} \label{eq:strain.siv.basis}\\ \epsilon_{E_{gy}} & = & -2d\epsilon_{xy} + f\epsilon_{yz} \nonumber \end{eqnarray} Here $t_\perp, t_\parallel, d, f$ are the four strain-susceptibility parameters that completely describe the strain-response of the $\{\ket{e_x},\ket{e_y}\}$ states. These parameters have different numerical values in the GS and ES manifolds. From the Hamiltonian \ref{eq:Hstr.spin}, we see that $E_{gx}$ and $E_{gy}$ strain cause mixing and relative shifts between orbitals, and modify the orbital splittings within the GS and ES manifolds as depicted in Fig. \ref{fig3}(a). On the other hand, $A_{1g}$ strain leads to a uniform or common-mode shift of the GS and ES manifolds, and only shifts the mean ZPL frequency as depicted in Fig. \ref{fig3}(b). 
By decomposing the strain applied in our experiment into $A_{1g}$ and $E_g$ components, we can confirm the observations on tuning of transverse- and axial-orientation SiVs in Fig. \ref{fig2}. Strain tensors for transverse- and axial-orientations of emitters obtained from finite element method (FEM) simulations are plotted in Figs. \ref{fig3}(c), (d) respectively. As expected from the cantilever geometry in Fig. \ref{fig1}(a), transverse-orientation SiVs predominantly experience $\epsilon_{yy}$ and hence an $E_g$ deformation. The $E_g$-strain response predicted in Fig. \ref{fig3}(a) leads to the strain-tuning of mainly A and D transitions seen in Fig. \ref{fig2}(a). On the other hand, axial-orientation SiVs experience both $\epsilon_{zz}$ and $\epsilon_{yz}$ as shown in Fig. \ref{fig3}(d), which leads to simultaneous $E_g$ and $A_{1g}$ deformations. Indeed, a combination of the strain responses in Figs. \ref{fig3}(a), (b) qualitatively explains the strain-tuning behavior of the transitions in Fig. \ref{fig2}(b). \subsection{Estimation of strain-susceptibilities}\label{strain.fitting} We now quantitatively fit the results in Fig. \ref{fig2} with the above strain response model. Adding SO coupling ($\mathbb{H}^{\text{SO}} = -{\lambda_{SO}}L_zS_z$) to the strain Hamiltonian in Eq. 
\ref{eq:Hstr.spin}, we get the following total Hamiltonian in the $\{\ket{e_x}, \ket{e_y}\} \otimes \{\ket{\uparrow}, \ket{\downarrow}\}$ basis.\cite{HeppThesis} \begin{widetext} \begin{equation} \mathbb{H}^{\text{total}} = \left[ {\begin{array}{cccc} \epsilon_{A_{1g}}-\epsilon_{E_{gx}} & 0 & \epsilon_{E_{gy}}-i\lambda_{SO}/2 & 0 \\ 0 & \epsilon_{A_{1g}}-\epsilon_{E_{gx}} & 0 & \epsilon_{E_{gy}}+i\lambda_{SO}/2 \\ \epsilon_{E_{gy}}+i\lambda_{SO}/2 & 0 & \epsilon_{A_{1g}}+\epsilon_{E_{gx}} & 0 \\ 0 & \epsilon_{E_{gy}}-i\lambda_{SO}/2 & 0 & \epsilon_{A_{1g}}+\epsilon_{E_{gx}}\\ \end{array} } \right] \label{eq:H.SO.strain} \end{equation} \end{widetext} Here, $\lambda_{SO}$ is the SO coupling strength within each manifold: 46 GHz for the GS, and 255 GHz for the ES. Diagonalization of this Hamiltonian gives two distinct eigenvalues \begin{eqnarray} E_1 & = & \alpha-\frac{1}{2}\sqrt{\lambda_{SO}^2+4(\epsilon_{E_{gx}}^2+\epsilon_{E_{gy}}^2)} \nonumber \\ E_2 & = & \alpha+\frac{1}{2}\sqrt{\lambda_{SO}^2+4(\epsilon_{E_{gx}}^2+\epsilon_{E_{gy}}^2)} \label{eq:eigenenergy} \end{eqnarray} where $\alpha = \epsilon_{A_{1g}}$ is the common-mode shift of the manifold. Each of these corresponds to doubly spin-degenerate eigenstates in the absence of an external magnetic field. Noting that Eqs. (\ref{eq:eigenenergy}) are valid within both GS and ES manifolds, but with different strain susceptibilities, we obtain the following quantities that can be directly extracted from the optical spectra in Fig. \ref{fig2}.
\begin{widetext} \begin{eqnarray} \Delta_{\text{ZPL}} &=& \Delta_{\text{ZPL},0}+\left(t_{\parallel,u}-t_{\parallel,g}\right)\epsilon_{zz} +\left(t_{\perp,u}-t_{\perp,g}\right)(\epsilon_{xx}+\epsilon_{yy}) \label{eq:fitmodel1}\\ \Delta_{\mathrm{gs}} &=& \sqrt{\lambda_{SO,g}^2 + 4\left[d_g(\epsilon_{xx}-\epsilon_{yy})+f_g\epsilon_{zx}\right]^2+4\left[-2d_g\epsilon_{xy}+f_g\epsilon_{yz}\right]^2} \label{eq:fitmodel2}\\ \Delta_{\mathrm{es}} &=& \sqrt{\lambda_{SO,u}^2 + 4\left[d_u(\epsilon_{xx}-\epsilon_{yy})+f_u\epsilon_{zx}\right]^2+4\left[-2d_u\epsilon_{xy}+f_u\epsilon_{yz}\right]^2} \label{eq:fitmodel3} \end{eqnarray} \end{widetext} Here, the subscript $g(u)$ refers to the GS (ES) manifold. $\Delta_{\text{ZPL}}$ is the mean ZPL frequency, and $\Delta_{\mathrm{gs}}$, $\Delta_{\mathrm{es}}$ are the GS and ES orbital splittings respectively. $\Delta_{\text{ZPL},0}$ is the mean ZPL frequency at zero strain. Extracting all three frequencies in Eqs. (\ref{eq:fitmodel1}-\ref{eq:fitmodel3}) as a function of strain from the optical spectra measured in Fig. \ref{fig2}, we fit them to the above model in Figs. \ref{fig3}(c), (d), and estimate the strain-susceptibilities. The fitting procedure described in detail in Appendix \ref{strain.susc.fit} gives us \begin{eqnarray} \left(t_{\parallel,u}-t_{\parallel,g}\right) & = & -1.7\,\text{PHz/strain} \nonumber\\ \left(t_{\perp,u}-t_{\perp,g}\right) & = & 0.078\,\text{PHz/strain} \nonumber\\ d_g & = & 1.3\,\text{PHz/strain} \nonumber\\ d_u & = & 1.8\,\text{PHz/strain} \nonumber\\ f_g & = &-0.25\,\text{THz/strain} \nonumber\\ f_u & = &-0.72\,\text{THz/strain} \label{eq:str.susc.vals} \end{eqnarray} We note that these values are subject to errors arising from (i) imprecision in SiV depth from the diamond surface (10\% from SRIM calculations, and in practice, higher due to ion-channeling effects), and (ii) the fact that the device geometry cannot be replicated exactly in FEM simulations for strain estimation.
In particular, the values of $f$ and $t_{\perp}$ are subject to higher error, since the $E_g$ and $A_{1g}$ responses are mostly dominated by the numerically larger susceptibilities $d$ and $t_{\parallel}$ respectively. \section{Controlling electron-phonon processes} At 4 K, dephasing and population relaxation of the SiV spin qubit defined with the $\ket{e_{g+}\downarrow}^\prime$, $\ket{e_{g-}\uparrow}^\prime$ states ($^\prime$ denoting modified SO eigenstates due to strain) are known to be dominated by electron-phonon processes shown in Fig. \ref{fig5}(a)\cite{jahnke_electronphonon_2015, pingault_coherent_2017}. In accordance with our observations on response to static $E_g$-strain in the previous section, we expect that AC strain generated by thermal $E_g$-phonons at frequency $\Delta_\mathrm{gs} < k_B T/h$ is capable of driving the GS orbital transitions. Since we can tune the splitting $\Delta_\mathrm{gs}$ by applying static $E_g$-strain with our device, we have control over these electron-phonon processes, and can engineer the relaxation rates of the spin qubit. In particular, by making $\Delta_\mathrm{gs} \gg k_B T/h$, we have shown that spin coherence can be improved significantly.\cite{sohn2017controlling} Here, we elucidate the physical mechanisms behind such improvement in spin properties with strain control. \begin{figure}[b!] \includegraphics[width=\columnwidth]{Fig5.pdf} \caption{(a) Illustration of dephasing and population decay processes for the spin qubit. Blue arrow shows a spin-conserving transition responsible for dephasing. Red arrow shows a spin-flipping transition driving decay from $\ket{e_{g+}\downarrow}^\prime$ to $\ket{e_{g-}\uparrow}^\prime$. Processes suppressed at high strain are crossed out. (b) Calculated rates for spin-conserving upward and downward phonon processes. Both rates are normalized to their values at zero strain. (c) Reduction in CPT linewidth with increasing GS splitting $\Delta_{\mathrm{gs}}$.
Inset shows an example of a CPT spectrum taken at $\Delta_{\mathrm{gs}}$ = 460 GHz. The two resonances in the spectrum are due to the presence of a neighboring nuclear spin \cite{sohn2017controlling}. Linewidths of both are plotted and indicated as Dip 1 and Dip 2 in the main plot. (d) Reduction in spin relaxation rate ($1/T_1$) with increasing GS splitting $\Delta_{\mathrm{gs}}$ as extracted from pump-probe measurements. Solid line is a fit to theory in Appendix \ref{spin.T1}.} \label{fig5} \end{figure} When a thermal phonon randomly excites the SiV center from the spin qubit manifold to the upper orbital branch, say from $\ket{e_{g+}\downarrow}^\prime$ to $\ket{e_{g-}\downarrow}^\prime$ as shown by the blue upward arrow in Fig. \ref{fig5}(a), the energy of the $\downarrow$ projection of the spin qubit suddenly changes by an amount $h\Delta_{\mathrm{gs}}$. After some time in the upper branch, the system randomly relaxes back to the lower manifold through spontaneous emission of a phonon as shown by the blue downward arrow in Fig. \ref{fig5}(a). In this process, the spin projection is conserved, since phonons predominantly flip only the orbital character. However, a random phase is acquired between the $\downarrow$ and $\uparrow$ projections of the spin qubit due to phonon absorption and emission, as well as faster precession in the upper manifold. The dephasing rate is determined by the upward phonon transition rate $\gamma_{\mathrm{up}}(\Delta_{\mathrm{gs}})$. 
Both this rate and the downward transition rate $\gamma_{\mathrm{down}}(\Delta_{\mathrm{gs}})$ can be calculated from Fermi's golden rule and are given by \begin{align} \gamma_{\mathrm{up}}(\Delta_{\mathrm{gs}}) &= 2\pi\chi\rho\Delta_{\mathrm{gs}}^3 n_{th}(\Delta_{\mathrm{gs}}) \label{eq:gammaup.final}\\ \gamma_{\mathrm{down}}(\Delta_{\mathrm{gs}}) &= 2\pi\chi\rho\Delta_{\mathrm{gs}}^3 (n_{th}(\Delta_{\mathrm{gs}})+1) \label{eq:gammadown.final} \end{align} where $\chi$ is a constant that encapsulates averaged interaction over all phonon modes and polarizations, and $n_{th}(\nu)$ is the Bose-Einstein distribution. It is instructive to view these rates as a product of the phonon density of states (DOS) and the occupation of phonon modes. In the above expressions, the first part $2\pi\chi\rho\Delta_{\mathrm{gs}}^3$ contains the bulk DOS of phonons, which scales as $\sim\Delta_\mathrm{gs}^2$. On the other hand, $n_{th}(\nu)$ is the number of thermal phonons in each mode. Note that the +1 term in the downward rate in Eq. (\ref{eq:gammadown.final}) corresponds to spontaneous emission of a phonon, a process that is independent of temperature. Fig. \ref{fig5}(b) shows the theoretically predicted behavior of upward and downward rates as a function of $\Delta_{\mathrm{gs}}$ at temperature $T=4$\,K. Here, we calculate both transition rates with a corrected exponent of approximately 1.9 rather than 3 in Eqs. (\ref{eq:gammaup.final}) and (\ref{eq:gammadown.final}) to take into account the geometric factor associated with the cantilever\cite{sohn2017controlling}. We observe that the upward rate shows a non-monotonic behavior, approaching its maximum value around $h\Delta_{\mathrm{gs}} \sim k_BT$. In the $h\Delta_{\mathrm{gs}} < k_BT$ regime, the increasing DOS term dominates, and causes $\gamma_{\mathrm{up}}$ to increase.
However, when $h\Delta_{\mathrm{gs}} \gg k_BT$, thermal occupation of the modes is approximated by the Boltzmann distribution $n_{th}(\Delta_{\mathrm{gs}}) = \mathrm{exp}\left(-\frac{h\Delta_{\mathrm{gs}}}{k_BT}\right)$, and this exponential roll-off dominates the polynomially increasing DOS. Therefore, $\gamma_{\mathrm{up}}$ decreases exponentially when sufficiently high strain is applied. In contrast, the downward rate monotonically increases with the GS-splitting, because it is dominated by the spontaneous emission rate, which simply increases polynomially with the DOS. Fig. \ref{fig5}(c) shows the experimentally measured improvement of spin coherence using coherent population trapping (CPT) in this high strain regime\cite{sohn2017controlling}. Above $\Delta_\mathrm{gs}$ of 400 GHz, the dephasing rate saturates, indicating a secondary dephasing mechanism such as the $^{13}$C nuclear spin bath in diamond. Our data are supported by similar $1/T_2^*$ measured at 100 mK where the thermal occupation of relevant phonon modes is negligible\cite{sukachev2017silicon}. Population decay or longitudinal relaxation of the spin qubit shown by the red arrows in Fig. \ref{fig5}(a) is driven by spin-flipping phonon transitions, which occur with a small probability due to perturbative mixing of spin projections. A detailed analysis of various decay channels is presented in Appendix \ref{spin.T1}. At high strain, it can be shown that the decay rate is approximately $4\left(d_{g, \mathrm{flip}}/d_g\right)^2\gamma_\mathrm{up}$, where $d_{g, \mathrm{flip}}$ is the strain susceptibility for a spin-flipping transition such as $\ket{e_{g+}\downarrow}^\prime \rightarrow \ket{e_{g+}\uparrow}^\prime$. Thus, it is a small fraction of the spin-conserving transition rate $\gamma_\mathrm{up}$ shown in Eq. (\ref{eq:gammaup.final}). The factor $d_{g, \mathrm{flip}}/d_g$ scales as $\sim 1/\Delta_\mathrm{gs}$ according to first order perturbation theory.
As a result, we expect an exponential decrease in the population decay rate with a different polynomial pre-factor compared to the spin decoherence rate. Fig. \ref{fig5}(d) shows this decreasing trend with increasing $\Delta_\mathrm{gs}$, fit to this two-phonon relaxation model. \section{Strain response of spin transition}\label{ss.susceptibility} \begin{figure}[ht!] \includegraphics[width=\columnwidth]{Fig6.pdf} \caption{(a) Splitting of the C transition into the four transitions C1, C2, C3, and C4 in the presence of a magnetic field. Spin transition frequencies on the lower orbital branches of the GS and ES are $\omega_s$, $\omega_s^\prime$ respectively. (b) Response of transitions C1, C2, C3, and C4 upon tuning GS splitting $\Delta_{\mathrm{gs}}$ with $E_g$-strain. (c) Calculated response of optical transitions C1, C2, C3, and C4 to $E_g$-strain in the presence of 0.17 T B-field aligned along the [001] direction. Shaded regions on the left and right ends indicate the regimes in which the GS orbitals are determined by SO coupling and strain respectively. (d) Strain response of spin transition frequencies upon tuning of ground state orbital splitting $\Delta_{\mathrm{gs}}$ with $E_g$-strain. SO regime data points are extracted from the optical spectra in Fig. 5(b). High strain regime data points are obtained from CPT measurements on the SiV studied in Fig. 4. Solid (dashed) line is the calculated spin transition frequency on the lower orbital branch of GS (ES) from Fig. 5(c).} \label{fig6} \end{figure} So far, we have seen that static $E_g$-strain in the SiV environment can significantly impact spin coherence and relaxation rates by modifying the orbital splitting in the GS. In this section, we discuss additional effects of this type of strain on the SiV spin qubit that arise from SO coupling. In particular, we can tune the spin transition frequency $\omega_s$ by a large amount (a few GHz) at a fixed external magnetic field by simply controlling local strain.
At the same time, we discuss how the magnitude of local strain strongly determines the ability to couple or control the SiV spin qubit with external fields such as resonant strain or microwaves at frequency $\omega_s$, and resonant laser-fields in a $\Lambda$-scheme. The strain-response of the spin transition is measured by monitoring the four Zeeman-split optical lines arising from the C transition as shown schematically in Fig. \ref{fig6}(a). In Fig. \ref{fig6}(b), we apply a fixed magnetic field $B=$0.17 T aligned along the vertical [001] axis with a permanent magnet placed underneath the sample, and gradually increase the GS splitting of a transverse-orientation SiV by applying strain. With increasing strain, each of the four Zeeman-split optical transitions moves outwards from the position of the unsplit C transition at zero magnetic field. In particular, the spin-conserving inner transitions C2 and C3 overlap at zero strain, but become more resolvable with increasing strain. Thus, all-optical control of the spin \cite{becker2017all} relying on simultaneous excitation of a pair of transitions C1 and C3 (or C2 and C4) forming a $\Lambda$-scheme requires the presence of some local strain. The strain-tuning behavior of Zeeman split optical transitions can be theoretically calculated by diagonalizing the GS and ES Hamiltonians in the presence of a magnetic field. 
Upon adding Zeeman terms to the Hamiltonian in Eq. (\ref{eq:H.SO.strain}), and switching to the basis of SO eigenstates $\{e_{g-}\downarrow, e_{g+}\uparrow, e_{g+}\downarrow, e_{g-}\uparrow \}$, we obtain \begin{widetext} \begin{equation} \mathbb{H}^{\text{total}} = \begin{bmatrix} -\lambda_{SO}/2-\gamma_LB_z-\gamma_sB_z & 0 & \epsilon_{E_{gx}} & \gamma_sB_x \\ 0 & -\lambda_{SO}/2+\gamma_LB_z+\gamma_sB_z & \gamma_sB_x & \epsilon_{E_{gx}} \\ \epsilon_{E_{gx}} & \gamma_sB_x & \lambda_{SO}/2+\gamma_LB_z-\gamma_sB_z & 0\\ \gamma_sB_x & \epsilon_{E_{gx}} & 0 & \lambda_{SO}/2-\gamma_LB_z+\gamma_sB_z \end{bmatrix} \label{eq:Htot.lowstr} \end{equation} \end{widetext} Here we have discarded the $A_{1g}$ and $E_{gy}$ strain terms, since the transverse-orientation SiVs in our experiments experience predominantly $E_{gx}$ strain. We have also assumed that the transverse magnetic field is entirely along the $x$-axis of the SiV. The gyromagnetic ratios are $\gamma_s$ = 14 GHz/T and $\gamma_L = 0.1\times14$ GHz/T, where the pre-factor of 0.1 is a quenching factor for the orbital angular momentum.\cite{HeppThesis} The result of our calculation is shown in Fig. \ref{fig6}(c). In the low strain regime indicated by the region with the shaded gradient, we reproduce the experimental behavior in Fig. \ref{fig6}(b), and obtain good quantitative agreement with the variation in the spin transition frequency $\omega_s$ in Fig. \ref{fig6}(d). Physically, this behavior of the spin transitions arises as strain and SO coupling compete to determine the orbital wavefunctions. From the Hamiltonian in Eq. (\ref{eq:Htot.lowstr}), we can see that the orbitals begin as SO eigenstates $\{e_{g-}\downarrow, e_{g+}\uparrow, e_{g+}\downarrow, e_{g-}\uparrow \}$ at zero strain, and end up as the pure states $\{e_{gx}\downarrow, e_{gx}\uparrow, e_{gy}\downarrow, e_{gy}\uparrow \}$ at high strain ($\epsilon_{E_{gx}} \gg \lambda_{SO}/2$).
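This crossover from SO-dominated to strain-dominated orbitals can be reproduced numerically. The sketch below (not the authors' code) diagonalizes the Hamiltonian of Eq. (\ref{eq:Htot.lowstr}); the decomposition of the 0.17 T [001] field into $B_z$ and $B_x$ assumes the SiV symmetry axis lies along $\langle111\rangle$:

```python
import numpy as np

# Sketch: diagonalize the GS Hamiltonian of Eq. (14) and track the qubit
# frequency omega_s from zero to high E_gx strain. Frequencies in GHz.
# Field decomposition (B along [001], SiV axis along <111>) is an assumption.
lam = 46.0                       # GS spin-orbit coupling, GHz
g_s, g_L = 14.0, 0.1*14.0        # gyromagnetic ratios, GHz/T
B = 0.17                         # T
Bz, Bx = B/np.sqrt(3.0), B*np.sqrt(2.0/3.0)

def omega_s(eps):                # eps = eps_Egx in GHz
    H = np.array([
        [-lam/2 - g_L*Bz - g_s*Bz, 0,       eps,     g_s*Bx],
        [0, -lam/2 + g_L*Bz + g_s*Bz,       g_s*Bx,  eps   ],
        [eps,    g_s*Bx,  lam/2 + g_L*Bz - g_s*Bz,   0     ],
        [g_s*Bx, eps,     0,      lam/2 - g_L*Bz + g_s*Bz  ],
    ])
    ev = np.sort(np.linalg.eigvalsh(H))
    return ev[1] - ev[0]         # splitting of the lower (qubit) doublet

print(omega_s(0.0))       # ~3 GHz: close to the 2(gamma_s+gamma_L)Bz estimate
print(omega_s(5000.0))    # approaches 2*gamma_s*B ~ 4.8 GHz at high strain
```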
At zero strain, the effective magnetic field from SO coupling quantizes the electron spin along the $z$-axis. In this condition, the off-axis B-field does not affect the spin transition frequency $\omega_s$ to first order, so $\omega_s \sim 2(\gamma_s+\gamma_L)B_z = 3.1$ GHz. As the strain $\epsilon_{E_{gx}}$ is increased far above the SO coupling $\lambda_{SO}$ and the orbitals are purified, the spin quantization axis approaches the direction of the external magnetic field, and $\omega_s$ approaches $\sim2\gamma_sB = 4.8$ GHz. Since SO coupling in the ES is stronger, this limit is attained at higher values of strain than in the GS as shown by the dashed line in Fig. \ref{fig6}(d). Once the orbitals in both the GS and ES are predominantly dictated by local strain and SO coupling is merely perturbative, the difference in GS and ES spin transition frequencies becomes vanishingly small, eventually leading to converging C2 and C3 optical transitions as depicted on the right-hand side of Fig. \ref{fig6}(c). In the limit of very high strain, the transitions C2 and C3 also become strictly spin-conserving, and optical polarization \cite{rogers_electronic_2014, pingault_coherent_2017} and readout of the spin qubit will be forbidden. The rapid variation of the spin transition frequency $\omega_s$ in the low-strain regime of Fig. \ref{fig6}(d) provides the first hint that the SiV spin-qubit can be very sensitive to oscillating strain generated by coherent phonons. The interaction terms due to strain and the off-axis magnetic field predicted by the Hamiltonian in Eq. (\ref{eq:Htot.lowstr}) are depicted visually in Fig. \ref{fig7}(a).
In particular, at zero strain, the presence of the off-axis magnetic field perturbs the eigenstates of the spin qubit to first order as \begin{eqnarray} \ket{e_{g-}\downarrow}^\prime & \approx & \ket{e_{g-}\downarrow} + \frac{\gamma_sB_x}{\lambda_{SO}}\ket{e_{g-}\uparrow} \label{eq:spinstates.perturbed1}\\ \ket{e_{g+}\uparrow}^\prime & \approx & \ket{e_{g+}\uparrow} + \frac{\gamma_sB_x}{\lambda_{SO}}\ket{e_{g+}\downarrow} \label{eq:spinstates.perturbed2} \end{eqnarray} This perturbative mixing with opposite spin-character can now allow resonant AC strain at frequency $\omega_s$ to drive the spin qubit. For a small amplitude of such AC strain $\epsilon_{E_{gx}}^{AC}$, we can calculate the strain susceptibility of the spin transition $d_{\mathrm{spin}}$ in terms of the GS orbital strain susceptibility $d_g$ in Eq. (\ref{eq:str.susc.vals}). \begin{equation} d_{\mathrm{spin}} = \frac{\bra{e_{g-}\downarrow^\prime}\mathbb{H}^{\text{strain}}\ket{e_{g+}\uparrow^\prime}}{\epsilon_{E_{gx}}^{AC}}d_g = \frac{2\gamma_sB_x}{\lambda_{SO}}d_g \label{eq:dspin.pert} \end{equation} Since $d_g$ is very large ($\sim$1 PHz/strain), even with the pre-factor ${\gamma_sB_x}/{\lambda_{SO}}$, the spin qubit can have a relatively large strain-response. For the present case of $B$=0.17 T along the [001] axis, we get $d_{\mathrm{spin}}/d_g = 0.085$, yielding $d_{\mathrm{spin}} \sim 100$ THz/strain. An exact calculation of $d_{\mathrm{spin}}$ for arbitrary local static strain using the Hamiltonian in Eq. (\ref{eq:Htot.lowstr}) is shown in Fig. \ref{fig7}(b). As static strain in the SiV environment is increased far above the SO coupling, the AC strain susceptibility approaches zero. Thus we can conclude that coupling the SiV spin qubit to resonant AC strain requires (i) low static strain $\epsilon_{E_{g}} \ll \lambda_{SO}/2$ and (ii) a non-zero off-axis magnetic field $B_x$.
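As a rough numerical cross-check of Eq. (\ref{eq:dspin.pert}) (a sketch only; it assumes that for $B$ along [001] and the SiV axis along $\langle111\rangle$, the off-axis component is $B\sqrt{2/3}$):

```python
import numpy as np

# Perturbative spin strain susceptibility d_spin = (2*gamma_s*Bx/lambda_SO)*d_g.
# The geometric factor sqrt(2/3) for the off-axis field is an assumption.
gamma_s = 14.0        # GHz/T
lam_so  = 46.0        # GHz, GS spin-orbit splitting
B       = 0.17        # T
d_g     = 1.3e15      # Hz/strain (1.3 PHz/strain from the fitted values)

Bx = B*np.sqrt(2.0/3.0)           # off-axis field component
ratio = 2*gamma_s*Bx/lam_so       # d_spin / d_g
d_spin = ratio*d_g

print(ratio)          # ~0.085, as quoted in the text
print(d_spin/1e12)    # ~110, i.e. of order 100 THz/strain
```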
The spin qubit can also parametrically couple to off-resonant AC strain with a different susceptibility $t_{\mathrm{spin}}$, and this is discussed in Appendix \ref{t.spin}. A similar analysis predicts the response of the spin qubit to resonant microwave magnetic fields in Appendix \ref{microwaves}. \section{Prospects for a coherent spin-phonon interface} \begin{figure}[h!] \includegraphics[width=\columnwidth]{"Fig7_new".pdf} \caption{(a) Illustration of mixing terms introduced by $E_g$-strain and an off-axis magnetic field in the GS manifold. (b) Calculated susceptibility of the spin-qubit for interaction with AC $E_g$-strain resonant with the transition frequency $\omega_s$ (interaction shown in inset). This AC strain susceptibility is maximum at zero strain for the pure SO eigenstates. At high strain, it falls off as $1/\Delta_{\mathrm{gs}}$. Color variation along the curve shows the GS splitting $\Delta_{\mathrm{gs}}$ corresponding to the value of static $E_g$-strain at the SiV. Both the static and AC strain are assumed to be entirely in the $\beta$ component. (c) SEM image of an OMC nanobeam cavity \cite{burek_diamond_2016} along with an FEM simulation of its 5 GHz flapping resonance. Displacement profile and a cross-sectional strain profile of the mode are shown with arbitrary normalization.} \label{fig7} \end{figure} Our results on the strain response of the electronic and spin levels of the SiV indicate the potential of this color center as a spin-phonon interface. The diamond NV center spin, the most investigated candidate in this direction, has an intrinsically weak strain susceptibility ($\sim 10$ GHz/strain) since the qubit levels are defined within the same orbital in the GS configuration of the defect\cite{barson2017nanomechanical}.
While using distinct orbitals in the ES can provide much larger strain susceptibility ($\sim 1$ PHz/strain)\cite{davies_optical_1976, lee_strain_2016}, such schemes will be limited by fast dephasing due to spontaneous emission and spectral diffusion. In comparison, the SiV center provides distinct orbital branches within the GS itself. Further, the presence of SO coupling dictates that the spin qubit levels $\ket{e_{g-}\downarrow}, \ket{e_{g+}\uparrow}$ correspond to different orbitals. As a result, one achieves the ideal combination of high strain susceptibility and low qubit dephasing rate. The effects of various modes of strain and the rich electronic structure of the SiV allow a variety of spin-phonon coupling schemes. In this letter, we focus on direct coupling of the spin transition to a mechanical resonator at frequency $\omega_s$ enabled by $E_g$-strain response of the spin discussed in the previous section. An alternative approach utilizing propagating phonons of frequency $\sim \lambda_\mathrm{SO}$ coupled to the GS orbital transition is discussed elsewhere\cite{lemonde_2017}. Our scheme would require diamond mechanical resonators of frequency $\omega_s \sim$ few GHz, which have already been realized in both optomechanical\cite{burek_diamond_2016,lake_2017} and electromechanical platforms\cite{macquarrie_coherent_2015,macquarrie_continuous_2015,golter_optomechanical_2016,golter_saw_2016}. Fig. \ref{fig7}(c) shows the strain profile resulting from GHz frequency mechanical modes in an optomechanical crystal cavity. Since this structure achieves three-dimensional confinement of phonons on the scale of the acoustic wavelength, it provides large per-phonon strain. For an SiV located $\sim$20 nm below the top surface, when a magnetic field $B =$ 0.3 T is applied along the [001] direction, the spin qubit is resonant with the 5 GHz flapping mode, and has a single-phonon coupling rate $g \sim$0.8 MHz. 
At mK temperatures, given the low SiV spin dephasing rate $\gamma_s \sim$ 100 Hz\cite{sukachev2017silicon}, even modest mechanical quality-factors $Q_m\sim 10^3$ measured previously\cite{burek_diamond_2016} are sufficient to achieve strong spin-phonon coupling. At 4 K, despite the higher spin dephasing rate $\gamma_s \sim$ 4 MHz\cite{pingault_all-optical_2014,rogers_all-optical_2014} and thermal occupation of mechanical modes $n_{th} \sim 20$, high spin-phonon cooperativity can be achieved if previously observed 4 K quality factors for silicon OMCs\cite{chan_2012}, $Q_m \sim 10^5$, can be replicated in diamond. This form of spin-phonon coupling can also be implemented in other resonator designs such as surface acoustic wave cavities\cite{golter_optomechanical_2016,golter_saw_2016,lee_topical_2017}, wherein piezoelectric materials are used to transduce the mechanical motion with microwave electrical signals instead of optical fields. \section{Conclusion} In conclusion, we characterize the strain response of the SiV center in diamond with a NEMS device. The implications of our results are two-fold. First, the large tuning range of optical transitions we have demonstrated establishes strain control as a technique to achieve spectrally identical emitters in a quantum network. Strain tuning is particularly relevant here since inversion-symmetric centers with superior optical properties do not have a first-order electric field response, thereby precluding direct electrical tuning. Second, the intrinsic sensitivity of the SiV spin qubit to strain makes it a promising candidate for coherent spin-phonon coupling. This can enable phonon-mediated quantum information processing with spins\cite{wallquist_hybrid_2009,rabl_quantum_2010}.
The development of such a cavity QED platform with a phononic two-level system\cite{ruskov2013chip,aref_2015} will also allow deterministic quantum nonlinearities for phonons\cite{oconnell_quantum_2010}, thereby overcoming inefficiencies in probabilistic schemes used to generate single phonon states in cavity optomechanics\cite{riedinger2016,hong2017}. Further, the use of optomechanical and electromechanical resonators towards this goal suggests the possibility of coherently interfacing diamond spin qubits with telecom and microwave photons respectively.
\section{Introduction} The galaxies of M51, which consist of the main spiral galaxy M51a/NGC5194 and its smaller companion M51b/NGC5195, host a multitude of X-ray sources \citep[e.g.][]{brightman18b}. These include two active galactic nuclei (AGN) powered by accreting supermassive black holes (SMBHs), one at the center of each of the galaxies; several ultraluminous X-ray sources (ULXs); and transient sources, such as the type IIb supernova SN~2011dh. Both SMBHs are weakly accreting, relatively speaking, with Eddington ratios $<10^{-4}$, and the AGN powered by the SMBH in M51a is heavily obscured by Compton thick (CT) material. The ULXs, in contrast, are believed to be strongly accreting compact objects. Indeed, two of them are known to be powered by neutron stars (NSs) far less massive than the SMBHs but apparently equally powerful. For ULX8, the NS identification was determined from the detection of a cyclotron resonance scattering feature (CRSF), caused by interactions of charged particles with a strong magnetic field \citep{brightman18}. For ULX7, this was determined from the detection of coherent pulsations from the source \citep{rcastillo19}. The observational properties of ULXs originally suggested they were good intermediate-mass black hole candidates \citep[e.g.][]{earnshaw16}. However, the detection of coherent pulsations and CRSFs means that some of them certainly are not black holes, since black holes are unable to produce such signals. ULXs powered by NSs are a surprising and not well understood phenomenon, since their luminosities imply extreme Eddington ratios of up to 500. The first NS ULX to be identified was M82~X-2 \citep{bachetti14}, followed by NGC~7793~P13, NGC~5907~ULX1 and NGC~300~ULX1 \citep{fuerst16,israel17a,israel17,carpano18}, with the most recent discovery being NGC~1313~X-2 \citep{sathyaprakash19}.
Being hosted in the same galaxy and separated by a few arcminutes means they can be easily studied together. No pulsations have been detected from ULX8 yet, and hence a spin period is unknown. The spin period of ULX7 is $\sim2$\,s, which is similar to the $\sim1$\,s spin periods for M82~X-2, NGC~7793~P13 and NGC~5907~ULX1. These sources also show another distinguishing characteristic: X-ray flux modulations with periods of 60--80 days. These were all detected from long-term monitoring by {\it The Neil Gehrels Swift Observatory} \citep[hereafter \swift,][]{walton16,hu17,qiu15,fuerst18}. Theories to explain these invoke a precession of the accretion disk or wind and some geometric beaming \citep[e.g.][]{dauser17,middleton18}. Furthermore, NGC~7793~P13 and NGC~5907~ULX1 exhibit unusual `off' states in addition to the periodic modulations, where the X-ray flux drops significantly, by orders of magnitude, relative to their peak brightness \citep{motch14,walton15}. While the nature of these off states is currently unknown, \cite{tsygankov16} suggest they are related to the onset of the propeller effect, in which the magnetospheric radius moves outside the co-rotation radius such that accretion is dramatically suppressed and the X-ray flux drops precipitously. Since the discoveries that M51 harbors two ULXs powered by neutron stars, we have obtained monitoring of the galaxies with \swift\ in order to identify potential periodic flux modulations and off states exhibited by the other sources of this class. We present the results of this monitoring campaign here. In addition to ULX7 and ULX8, ULX4 was identified as a source of interest since it exhibits a bimodal distribution of fluxes, possibly related to the propeller effect \citep{earnshaw18}, and \cite{urquhart16} identified two eclipsing ULXs in the galaxy.
Finally, M51 has also hosted several transient phenomena such as the type II supernova SN~2005cs, the type IIb supernova SN~2011dh and the intermediate-luminosity red transient AT~2019abn \citep{jencson19}. The \swift\ monitoring also allows us to study these other interesting ULXs and to look for transients in the X-ray band. We assume a distance of 8.58 Mpc to M51 throughout this study \citep{mcquinn16}. During the preparation of this manuscript, \cite{vasilopoulos19b} was published presenting results similar to ours regarding ULX7 based on the publicly available \swift\ data. Our analyses differ, however, since they use the online tool described in \cite{evans09} to calculate the XRT count rates, whereas we use our own tailored analysis method. We also present $\sim100$ days more data than they do, covering an extra three cycles of the modulation. \section{Swift/XRT data analysis} \label{sec_swift} We have conducted a systematic monitoring campaign of the M51 galaxies with \swiftxrt\ \citep{burrows05} since 2018-05-01 with a typical cadence of 3--6\,days and a typical exposure time of 2000\,s per snapshot. Since our main goal was to investigate the long-term X-ray lightcurves of the two NS-powered ULXs, ULX7 and ULX8, to look for periodic flux modulations, the observing campaign was designed with this goal in mind. However, as described above, the M51 galaxies contain many other interesting X-ray sources for which we have also obtained long-term lightcurves. In order to determine which additional sources are bright enough to provide useful lightcurves, we stack all 105\ observations using the online XRT analysis tool. We create images in three bands, 0.3--1 keV, 1--2.5 keV, and 2.5--10 keV. The image is presented in Figure \ref{fig_xrt_img} where red, green and blue represent these three bands respectively.
The AGN of the two galaxies, M51a and M51b, and several known ULXs are clearly detected, as well as SN~2011dh and a previously uncatalogued ULX which we call \newulx. We proceed to extract spectra from all sources using the {\sc heasoft} v6.25 tool {\sc xselect}. Source events are selected from a circular region with a 25\arcsec\ radius and background regions consisting of large circles external to the galaxies are used to extract background events. All extraction regions are shown in Figure \ref{fig_xrt_img2}. For each source spectrum, we construct the auxiliary response file (ARF) using {\tt xrtmkarf}. The relevant response matrix file (RMF) from the CALDB is used. All spectra are grouped with a minimum of 1 count per bin. \begin{figure*} \begin{center} \includegraphics[width=180mm]{xrt+uvot_image_labels.pdf} \caption{Left - Stacked \swiftxrt\ images of the M51 galaxies. Red represents 0.3--1 keV emission, green represents 1--2.5 keV emission and blue represents 2.5--10 keV emission. The image has been smoothed with a 4\arcsec\ Gaussian and the sources of study have been labelled. Right - {\it Swift}/UVOT obsID 00032017112 {\it u}-band image of M51 with the same sources marked with red circles.} \label{fig_xrt_img} \end{center} \end{figure*} \begin{figure} \begin{center} \includegraphics[width=90mm]{xrt_image_regions.pdf} \caption{Same as Figure \ref{fig_xrt_img} (left), but the image is wider angle, not smoothed, and shows the source (green) and background (white) extraction regions.} \label{fig_xrt_img2} \end{center} \end{figure} We use the {\sc heasoft} tool {\sc xspec} to calculate background-subtracted count rates in the 0.3--10 keV band. The lightcurves of these sources are shown in Figure \ref{fig_xrt_ltcrv}. For observations where a source has zero total counts, we estimate the 90\% upper limit on the count rate using a typical background count rate of 7$\times10^{-5}$ counts\,s$^{-1}$\ and Poisson statistics.
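The zero-count upper limits can be sketched as follows. For zero observed counts, the 90\% one-sided Poisson upper limit on the expected counts is $\mu_{\rm UL}=-\ln(0.1)\simeq2.30$. This is a minimal sketch of the conversion to a count-rate limit, assuming the typical 2000\,s exposure and the quoted background rate; the exact treatment used in the analysis may differ.

```python
import math

# For an observation with zero detected counts, the 90% one-sided Poisson
# upper limit on the expected number of counts mu satisfies
# P(0 | mu) = exp(-mu) = 0.10, i.e. mu_UL = -ln(0.1) ~ 2.30 counts.
# Sketch only: exposure and background rate follow the typical values
# quoted in the text (2000 s snapshots, 7e-5 counts/s background).

def rate_upper_limit(exposure_s, bkg_rate=7e-5, confidence=0.90):
    """Upper limit on the source count rate given zero total counts."""
    mu_ul = -math.log(1.0 - confidence)              # counts
    expected_bkg = bkg_rate * exposure_s             # counts
    return max(mu_ul - expected_bkg, 0.0) / exposure_s   # counts/s

print(f"90% UL for a 2000 s snapshot: {rate_upper_limit(2000.0):.2e} counts/s")
```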
Our next step is to look for count rate variability in the sources. We do this by testing for deviations from a constant count rate by calculating the \chisq\ of the count rates, where $$\chi^2\equiv\sum_{n=1}^{N_{\rm obs}}\left(\frac{CR_{n}-\langle CR\rangle}{\sigma_n}\right)^2.$$ $CR_n$ is the count rate in each observation, $n$, $\langle CR\rangle$ is the mean count rate averaged over all observations, and $\sigma_n$ is the 1$\sigma$ uncertainty on the count rate for each observation. We present the results in Table \ref{table_stats}. We find that the sources which exhibit evidence for count rate variability, which we arbitrarily define as \rchisq$\equiv$\chisq$/N_{\rm obs}>2.0$, are ULX4, ULX7, ULX8, the eclipsing ULXs, and \newulx. The sources with the largest variability (i.e. \rchisq$>10.0$) are indeed ULX7 and ULX8. \begin{figure*} \begin{center} \includegraphics[trim=10 40 10 10, width=180mm]{swift_ltcrv_2018_all_srcs.pdf} \caption{0.3--10 keV \swiftxrt\ lightcurves of all sources of interest in M51 presented in counts\,s$^{-1}$\ as black squares with 1$\sigma$ error bars. 90\% upper limits for non-detections are shown as error bars without squares.} \label{fig_xrt_ltcrv} \end{center} \end{figure*} \begin{table*} \centering \caption{Source statistics} \label{table_stats} \begin{center} \begin{tabular}{l l l c r c r} \hline Source name & Position & SIMBAD identifier & $\langle CR\rangle$ & \chisq & $N_{\rm obs}$ & \rchisq \\ (1) & (2) & (3) & (4) & (5) & (6) & (7)\\ \hline \input{source_statistics.tex} \hline \end{tabular} \tablecomments{Summary of the sources studied here.
Column (1) lists the source name adopted, column (2) gives the position in J2000 co-ordinates, column (3) lists the SIMBAD identifier, column (4) gives the mean 0.3--10 keV \swiftxrt\ count rate in counts\,s$^{-1}$, column (5) shows the \chisq\ of the count rate, column (6) gives the number of observations used, and column (7) gives the reduced \chisq.} \end{center} \end{table*} Following this, we look for periodic flux modulations from all the sources with evidence for variability. We use two techniques to do so. First we perform epoch folding, whereby the lightcurve is folded on several test periods and deviations from a constant flux are determined \citep{leahy97}. This technique is well suited for finding periodic signals which are not necessarily sinusoidal. We also use a Lomb-Scargle periodogram, which is commonly used to search for periodicities in unevenly sampled data and is specialized for finding sinusoidal signals \citep{lomb76,scargle82}. For the epoch folding analysis, we search for periods ranging from 20--100 days over 200 linearly spaced steps. For each test period we split the lightcurve into 10 phase bins, as has been common practice in these searches, and compute the L-statistic \citep{davies90}. For the Lomb-Scargle analysis, we search over the same period range with the same number of test periods. As shown in Figure \ref{fig_epfold}, ULX7 has a strong peak at 38 days in the periodograms from both the epoch folding and the Lomb-Scargle analysis. A peak at twice this period is also seen in the epoch folding periodogram, which is likely a harmonic of the shorter-period peak, but we discuss later the possibility that it is the fundamental. There are no strong peaks seen in any of the other variable sources. \begin{figure} \begin{center} \includegraphics[trim=10 10 10 10, width=80mm]{swift_epfold_2018_ULX-7.pdf} \caption{Results from the epoch-folding (black) and Lomb-Scargle analysis (blue) on the lightcurve of ULX7.
A clear signal at 38 days is seen in both periodograms of this source, which is significant at the $>99.9$\% level, based on simulations (dashed lines).} \label{fig_epfold} \end{center} \end{figure} \subsection{Significance simulations} In order to determine the significance of the signals exhibited in the periodograms, we conduct simulations to determine the false alarm rate, i.e. the rate at which such a signal could be produced by a statistical fluctuation rather than by a real signal. While both the epoch folding using the L-stat and the Lomb-Scargle periodogram define tests to assess the significance of signals, they can often overestimate the significance. Therefore, we follow \cite{walton16b} by simulating 10,000 lightcurves with 2000\,s resolution and a red noise power spectrum, and we sample them with the same observational strategy as the real lightcurves, running the same analysis as described above. We then note the largest peak in each periodogram, irrespective of period. We determine the 99.9\% significance level by calculating the level under which 99.9\% of the simulated periodograms fall. We plot this quantity in Figure \ref{fig_epfold}, which shows that the 38-day period is detected with a significance greater than 99.9\%. \section{Investigating the periodic flux modulation from ULX7} The orbital period of the NS powering ULX7 and its binary companion is known to be 2 days, which was determined from the timing analysis of the pulsations \citep{rcastillo19}. Therefore the 38-day periodic flux modulation we detect here from ULX7 is super-orbital in nature, making it the fourth ULX pulsar where this characteristic has been identified. Interestingly, ULX8 does not exhibit any periodic flux modulations on the timescales we have searched. Investigating further, from the epoch folding analysis described above, we determine the average profile of the modulation by calculating the mean count rate in each of the 10 phase bins.
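The epoch-folding period search described above can be sketched as follows, here on a synthetic, unevenly sampled sinusoidal lightcurve and with a plain \chisq-style statistic in place of the L-statistic (an illustrative simplification, not the exact procedure used in the analysis):

```python
import numpy as np

# Minimal epoch-folding period search: fold the lightcurve on each trial
# period into phase bins and measure how strongly the binned profile
# deviates from the mean rate. Synthetic data: a 38-day sinusoid sampled
# with an uneven ~2-6 day cadence, roughly mimicking the campaign.
rng = np.random.default_rng(42)

t = np.cumsum(rng.uniform(2.0, 6.0, 120))            # observation times [days]
rate = 0.007 + 0.006 * np.sin(2 * np.pi * t / 38.0)  # true rate [counts/s]
err = np.full_like(rate, 0.001)
obs = rate + rng.normal(0.0, err)

def epoch_fold_stat(t, y, yerr, period, nbins=10):
    """Chi-squared-like statistic of the folded profile vs. the mean rate."""
    phase = (t / period) % 1.0
    idx = np.minimum((phase * nbins).astype(int), nbins - 1)
    mean = np.average(y, weights=1.0 / yerr**2)
    stat = 0.0
    for b in range(nbins):
        sel = idx == b
        if sel.sum() < 2:
            continue
        m = y[sel].mean()
        se = y[sel].std(ddof=1) / np.sqrt(sel.sum())  # error on the bin mean
        stat += ((m - mean) / se) ** 2
    return stat

periods = np.linspace(20.0, 100.0, 200)              # same grid as the text
stats = np.array([epoch_fold_stat(t, obs, err, p) for p in periods])
best = periods[np.argmax(stats)]
print(f"strongest trial period: {best:.1f} d")
```

Folding at the wrong period mixes all phases within each bin, so the binned profile flattens and the statistic stays small; only near the true period (and its multiples) does it spike.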
The average profile is plotted in Figure \ref{fig_ulx7_epfold} and appears sinusoidal. The mean profile of the modulations peaks at $\sim0.013$ counts\,s$^{-1}$\ and has a minimum at $\sim0.001$ counts\,s$^{-1}$. This is more than an order of magnitude from peak to trough, and corresponds to a luminosity range of $\sim8\times10^{38}$--$10^{40}$ \mbox{\thinspace erg\thinspace s$^{-1}$}. The deviations from this average profile, which we define as $\sigma=({\rm data}-{\rm profile})/{\rm error}$, are also shown. These do not appear to show any long-term structure in addition to the sinusoidal modulation, with the exception of three dips in the lightcurve where $\sigma<-4$. Also plotted in Figure \ref{fig_ulx7_epfold} are the data and the average profile as a function of phase, as well as the deviations from the profile. The dip features are clustered at phase$\sim0.8$--1, but further monitoring is required to determine if these features are coherent in phase with the flux modulation. \begin{figure*} \begin{center} \includegraphics[trim=50 10 10 10, width=190mm]{swift_2018_ULX-7_epfold.pdf} \caption{Top left - 0.3--10 keV \swiftxrt\ lightcurve of ULX7 (black data points, 1$\sigma$ errors), overplotted with the average 38-day profile determined from epoch folding (red line). Top right - The same lightcurve but folded on the 38-day period and plotted against phase. Bottom left - Deviations of the data from the average profile shown against time. Bottom right - Deviations of the data from the average profile folded on the 38-day period.} \label{fig_ulx7_epfold} \end{center} \end{figure*} We next look for spectral variations as a function of the phase of the flux modulation, which may yield insight into their origin.
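The profile-deviation diagnostic described above (fold on the period, average the rates into 10 phase bins, then compute $\sigma=({\rm data}-{\rm profile})/{\rm error}$ for each point) can be sketched on synthetic data as follows; the lightcurve and the injected dip are illustrative, not the real measurements:

```python
import numpy as np

# Sketch of the profile-deviation diagnostic: fold the lightcurve on the
# known period, take the mean rate in each of 10 phase bins as the
# "average profile", then measure each point's deviation
# sigma = (data - profile) / error. Synthetic numbers, for illustration.
rng = np.random.default_rng(1)
period, nbins = 38.0, 10

t = np.arange(0.0, 400.0, 4.0)                       # days
err = np.full(t.size, 0.001)
rate = 0.007 + 0.006 * np.sin(2 * np.pi * t / period) + rng.normal(0, err)
rate[30] -= 0.008                                    # inject one strong "dip"

phase = (t / period) % 1.0
bins = np.minimum((phase * nbins).astype(int), nbins - 1)
profile = np.array([rate[bins == b].mean() for b in range(nbins)])

sigma = (rate - profile[bins]) / err                 # deviation per point
print("most discrepant point:", t[np.argmin(sigma)], "d,",
      f"{sigma.min():.1f} sigma")
```

Points tracking the periodic modulation sit near $\sigma=0$ by construction, so isolated features such as the injected dip stand out clearly in the residuals.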
While the individual $\sim2$\,ks \swiftxrt\ observations do not have the photon statistics to conduct spectral analysis, we calculate hardness ratios for each, where $HR=(H-S)/(H+S)$, $S$ is the total number of counts in the 0.3--2 keV band and $H$ is the total number of counts in the 2--10 keV band. We show the results in Figure \ref{fig_hr}, where we also display the binned averages. We find that the source is relatively hard during the peaks of the modulation, and relatively soft during the troughs. \begin{figure} \begin{center} \includegraphics[trim=50 10 10 10, width=80mm]{swift_2018_ULX-7_phase-hr.pdf} \caption{Hardness ratios of ULX7 as a function of phase of the super-orbital period (black squares), with binned averages (blue open squares). The scaled average profile of the count rate is shown for reference (red line).} \label{fig_hr} \end{center} \end{figure} \section{A new transient ULX} In addition to the well-known bright X-ray sources in M51, we detect a new, previously uncatalogued source when stacking the XRT images (Figure \ref{fig_xrt_img}). The new source appeared in the southern part of M51a at coordinates 13h 29m 56.8s, +47{$^\circ$}\ 08\arcmin\ 52.6\arcsec\ (J2000), and we refer to it as \newulx. The lightcurve of the source in Figure \ref{fig_xrt_ltcrv} shows that it was a transient. The source was weakly detected, or undetected during the first $\sim100$ days of the monitoring campaign, with a count rate $<0.001$ counts\,s$^{-1}$, but became one of the brightest X-ray sources in the galaxy with a count rate of $\sim0.02$ counts\,s$^{-1}$\ in the space of 10--15 days. The peak of its activity occurred during a 6-day period covering 2018-08-23 and 2018-08-29. The source then gradually faded over the next $\sim200$ days and was once again weakly detected at day $\sim$400 of our monitoring campaign. 
There are no known, catalogued sources at the position of \newulx, with no sources within 20\arcsec\ in the \chandra\ source catalog of \cite{kuntz16}, which is the deepest catalogue of X-ray sources in M51 available, summing over 800 ks of \chandra\ exposure time. There are also no matching sources in the latest ULX catalogue of \cite{earnshaw19} compiled from the \xmm\ serendipitous source catalog, nor in the \xmm\ serendipitous source catalog itself (3XMM DR8). We searched the Transient Name Server\footnote{https://wis-tns.weizmann.ac.il} for known transients in M51, finding none that occurred in 2018. The closest in time and position was AT~2019abn, an intermediate luminosity red transient discovered on 2019-01-22, several arcminutes from \newulx\ \citep{jencson19}. Figure \ref{fig_xrt_img} shows the \swift/UVOT $u$-band image of M51 during the peak of the outburst: no source is seen at the position of \newulx. We also looked at the UVOT data from later observations to check for delayed emission at longer wavelengths, but found no counterpart there either. Using the online \swiftxrt\ spectral products tool\footnote{http://www.swift.ac.uk/user\_objects/} \citep{evans09}, we created a spectrum of \newulx\ when it was at the peak of its outburst. We grouped the spectrum with a minimum of 1 count per bin and fitted it with an absorbed powerlaw. The spectrum is shown in Figure \ref{fig_j1329_spec}. The spectrum is very hard, with $\Gamma=1.1^{+0.7}_{-0.5}$, and shows no evidence for absorption above the Galactic value. The source had a peak flux of 1.2$\times10^{-12}$ \mbox{\thinspace erg\thinspace cm$^{-2}$\thinspace s$^{-1}$}, which corresponds to a luminosity of 1.1$\times10^{40}$ \mbox{\thinspace erg\thinspace s$^{-1}$}\ at 8.58 Mpc. Serendipitously, \chandra\ observed M51 for 19.8 ks on 2018-08-31 (obsID 20998), only a few days after the peak of activity from \newulx.
This allowed us to obtain a better position of the source than \swiftxrt\ provided, and a higher signal-to-noise spectrum. We ran the {\sc ciao} tool {\tt wavdetect} on the observation to obtain a list of positions for all sources. We then cross-correlated this source list with the {\it Gaia} DR2 catalog to obtain the astrometric shifts. Having applied the astrometric shifts to the \chandra\ source catalog, we determine the position of \newulx\ to be R.A. = 13:29:56.97, decl. = +47:08:54.1 (J2000), with an astrometric uncertainty of 0\farcs45 from the residual offsets with the {\it Gaia} catalog. We used the {\sc ciao} v4.11 tool {\sc specextract} to extract the spectrum of the source from a circular region with a 2\arcsec\ radius. Background events were extracted from a nearby region. The source had a count rate of 4.1$\pm0.1\times10^{-2}$ counts\,s$^{-1}$\ on the ACIS detector, and the spectrum is well fitted by an absorbed power-law spectrum ({\tt tbabs*powerlaw}) where \nh$=3.3\pm0.1\times10^{21}$ \cmsq\ and $\Gamma=1.61\pm0.20$ with a flux of $7.2\pm0.7\times10^{-13}$ \mbox{\thinspace erg\thinspace cm$^{-2}$\thinspace s$^{-1}$}, which corresponds to a luminosity of 6.3$\times10^{39}$ \mbox{\thinspace erg\thinspace s$^{-1}$}\ at 8.58 Mpc. The spectrum is shown in Figure \ref{fig_j1329_spec}. Additionally, \xmm\ observed M51 on 2019-07-11 (obsID 0852030101), almost a year after the outburst, also serendipitously. The source was detected with a count rate of 4$\times10^{-3}$ counts\,s$^{-1}$\ in the pn detector. We extracted a spectrum of the source using a circular region with a radius of 10\arcsec, and extracted the background from a nearby region, using the {\sc xmmsas} v18.0.0 tool {\sc evselect}. The source is described by a power-law spectrum with \nh=1.9$^{+1.1}_{-0.9}\times10^{21}$ \cmsq\ and $\Gamma=2.6\pm0.5$, indicating a clear softening of the spectrum since the outburst.
The source exhibited a flux of 2.0$^{+0.4}_{-0.5}\times10^{-14}$ \mbox{\thinspace erg\thinspace cm$^{-2}$\thinspace s$^{-1}$}, which corresponds to a luminosity of 2$\times10^{38}$ \mbox{\thinspace erg\thinspace s$^{-1}$}\ at 8.58 Mpc. The spectrum is also shown in Figure \ref{fig_j1329_spec}. To place these spectra in context, we show in Figure \ref{fig_j1329_spec} the spectrum of ULX7, which was obtained from a joint \xmm\ and \nustar\ observing campaign in 2019 (Brightman et al. {\it in prep}). The spectrum of ULX7 is similar to that of other ULXs with good quality broadband spectra \citep[e.g.][]{walton17}, consisting of two disk-like components with different temperatures. ULX7 also has a similar flux and luminosity to \newulx\ at its peak, therefore serving as a good comparison. As seen, the spectra of \newulx\ at its peak are harder than ULX7 and resemble the hot disk-like component seen in that source. Although the absorption is higher for \newulx\ (3.3 compared to 0.7$\times10^{21}$ \cmsq), this does not account for all the spectral hardness, as a cooler disk-like component can be ruled out in the \chandra\ spectrum. We discuss the physical implications of this further in Section \ref{sec_disc}. Finally, \xmm\ observed M51 four times (obsIDs 0824450901, 0830191401, 0830191501, and 0830191601) in a period of $\sim100$ days before the outburst, allowing us to place tight upper limits on the flux of the source before outburst. For each individual observation, the upper limit on the 0.3--10 keV flux is $\sim8\times10^{-15}$ \mbox{\thinspace erg\thinspace cm$^{-2}$\thinspace s$^{-1}$}, corresponding to \lx$\sim7\times10^{37}$ \mbox{\thinspace erg\thinspace s$^{-1}$}. Figure \ref{fig_j1329_ltcrv} shows the lightcurve of \newulx\ with the \chandra\ and \xmm\ data added. We also show a line with a $t^{-5/3}$ shape that matches the data well after $\sim50$ days of outburst, for discussion in Section \ref{sec_disc}.
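The flux-to-luminosity conversions quoted above, and the steepness of the $t^{-5/3}$ decline, can be verified with a short sketch (the 50 and 200 day epochs below are illustrative choices, not fitted values):

```python
import math

# Quick checks of two numbers used in the text: the isotropic-luminosity
# conversion L = 4 pi d^2 F at d = 8.58 Mpc, and how fast a t^(-5/3)
# fallback decline fades between two epochs.
MPC_CM = 3.086e24   # centimetres per megaparsec

def luminosity(flux_cgs, distance_mpc=8.58):
    """Isotropic luminosity in erg/s from a flux in erg/cm^2/s."""
    d_cm = distance_mpc * MPC_CM
    return 4.0 * math.pi * d_cm**2 * flux_cgs

def decay_factor(t1_days, t2_days, index=5.0 / 3.0):
    """Flux ratio F(t1)/F(t2) for F proportional to t**(-index)."""
    return (t2_days / t1_days) ** index

# Peak Swift flux of 1.2e-12 erg/cm^2/s -> ~1.1e40 erg/s, as quoted:
print(f"L_peak ~ {luminosity(1.2e-12):.2e} erg/s")
# Between (hypothetical) epochs 50 and 200 days, t^(-5/3) fades by ~10x:
print(f"decline factor, 50 -> 200 d: {decay_factor(50, 200):.1f}")
```

The same conversion reproduces the \chandra\ luminosity ($7.2\times10^{-13}$ \mbox{\thinspace erg\thinspace cm$^{-2}$\thinspace s$^{-1}$}\ $\rightarrow$ $\sim6.3\times10^{39}$ \mbox{\thinspace erg\thinspace s$^{-1}$}).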
\begin{figure} \begin{center} \includegraphics[trim=50 10 50 10, width=80mm]{J132956_ltcrv.pdf} \caption{Lightcurve of \newulx\ during its outburst monitored by \swiftxrt\ (black squares), and serendipitously observed with \chandra\ and \xmm\ (red squares). Also shown is a red dashed line with a $t^{-5/3}$ shape, which matches the data well after $\sim50$ days of outburst.} \label{fig_j1329_ltcrv} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[trim=10 10 10 10, width=80mm]{J132956_swift+chandra+xmm_eeuf_spectral_fig.pdf} \caption{\swiftxrt\ unfolded spectrum of \newulx\ during the peak of its outburst (blue), the \chandra\ unfolded spectrum from several days afterwards (red), and the \xmm\ unfolded spectrum from $\sim1$ year later (green). The dashed black lines show the spectrum of ULX7 for comparison, from a joint \xmm\ and \nustar\ observation, which consists of two disk-like components, each shown with dotted lines, as commonly observed in ULXs. This shows that the spectrum of \newulx\ at its peak resembles the hotter disk-like component seen in ULXs, with the cooler component missing, while the spectrum has become considerably softer at late times. Inset are confidence contours resulting from a fit with an absorbed power-law to the \swift, \chandra, and \xmm\ spectra of \newulx.} \label{fig_j1329_spec} \end{center} \end{figure} Using the astrometrically corrected \chandra\ position of \newulx, we searched the {\it Hubble Source Catalog (HSCv3)} for potential counterparts. Only one source is found within the 0\farcs45 positional uncertainty, at a position of 13:29:56.983, +47:08:54.35 detected by WFC3 in the F110W filter with a magnitude of 24.44. The distance modulus corresponding to 8.58 Mpc is 29.7, implying an absolute magnitude of this source of $M_{\rm F110W}=-5.22$. 
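The absolute magnitude quoted above follows from the distance modulus $\mu=5\log_{10}(d/10\,{\rm pc})$; a quick check:

```python
import math

# Distance modulus for the HST counterpart: mu = 5 log10(d / 10 pc).
# For d = 8.58 Mpc this gives mu ~ 29.7, so the m = 24.44 F110W source
# corresponds to M_F110W ~ -5.2, as quoted in the text.
def distance_modulus(distance_mpc):
    return 5.0 * math.log10(distance_mpc * 1.0e6 / 10.0)

mu = distance_modulus(8.58)
abs_mag = 24.44 - mu
print(f"mu = {mu:.2f}, M_F110W = {abs_mag:.2f}")
```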
\section{Discussion} \label{sec_disc} \subsection{A 38-day super-orbital flux modulation from ULX7} The orbital period of the NS powering ULX7 and its binary companion is known to be 2 days, which was determined from the timing analysis of the pulsations \citep{rcastillo19}. Therefore the 38-day periodic flux modulation we detect here from ULX7 is super-orbital in nature, making it the fourth ULX pulsar where this characteristic has been identified. Indeed it appears as if this characteristic is near-ubiquitous in these systems. This leads to the possibility that identifying periodic flux modulations in ULXs where the accretor is unknown can be used as strong evidence that the system is powered by a NS. During the preparation of this manuscript, \cite{vasilopoulos19b} also presented the identification of the 38-day super-orbital period of M51 ULX7 from the same \swift\ data we have presented here. We list the spin period, orbital period, and super-orbital period for all ULX pulsars where these super-orbital periods have been identified so far in Table \ref{table_sorbitalp} for reference and comparison. M51 ULX7 has the shortest super-orbital period of the four, at 38 days, where the others are at 60--80 days. As mentioned earlier, the epoch folding periodogram presents a peak at twice the period of the 38-day peak, which is most likely a harmonic, but if it is the true period, it would make it comparable to the other ULX pulsars. ULX7 also has the longest spin period of the ULX pulsars with super-orbital periods, at 2.8\,s, whereas the others have 0.4--1.4\,s. A larger sample is needed before any conclusions can be drawn regarding any inverse relationship between the spin period of the NS and the super-orbital period. The luminosity range of the flux modulation from ULX7, which is $\sim8\times10^{38}$--$10^{40}$ \mbox{\thinspace erg\thinspace s$^{-1}$}, is larger than those of NGC~7793~P13 and NGC~5907~ULX1 but smaller than that of M82~X-2.
However, due to the observational challenges related to M82~X-2, ULX7 is much easier to study, making it the best source to study these extreme flux variations. Not only does it appear that these super-orbital periods are a near-ubiquitous property of ULX pulsars, but they have been observed in many other neutron-star X-ray binary systems, such as Her~X-1 \citep{tananbaum72,katz73}, LMC~X-4 \citep{lang81} and SMC X-1 \citep{gruber84} as well as black hole binaries \citep[e.g. compilation by][]{sood07}. In these lower luminosity and lower mass accretion rate systems, a warped precessing disk is often invoked to explain the flux modulations. Here the limb of the warped disk periodically occults the X-ray emitting region, causing the periodic dips in flux. From our hardness ratio analysis, we found that the source is hard during the peaks of the modulation, and soft during the troughs. This argues against changes in the absorption causing the flux modulation, as this would have predicted the source being hard during the troughs, rather than soft, since absorption causes attenuation of soft X-rays. Occultation of the source by a warped disk would appear to be disfavored, unless the source is completely obscured during the occultations and we only see scattered emission. Furthermore, it is likely that the thin accretion disk that is needed for warps does not exist in ULX pulsars. In these systems the extreme accretion rate causes the disk to extend vertically giving it a large scale height. Indeed the precession of a large scale height disk has been proposed as the mechanism behind the flux modulations in ULX pulsars \citep[e.g.][]{fuerst17,dauser17,middleton18}. This theory would also fit the observations of M51 ULX7 better. Here, where the source is at low fluxes and soft, we are potentially viewing the disk more edge on where only the cooler outer regions can be seen. When the source is at high fluxes, we see into the hot funnel region. 
This would require a very large precession angle, however, to go from a disk seen almost face-on to one seen edge-on. \cite{middleton18} suggest that the precession can be explained by the Lense-Thirring effect, a consequence of General Relativity whereby a massive spinning object induces precession of orbiting particles that are displaced vertically from the rotation axis, for example, a large scale height accretion flow. Indeed, \cite{middleton19} suggest that the timescales of these periodicities can be used to determine whether the compact object is a black hole or a NS. Four \xmm\ observations of M51 took place during the period 2018-05-13 to 2018-06-15, which covered the first cycle of the flux modulation we observed with \swift. Indeed, it was from these observations that the pulsations confirming ULX7 to be powered by a neutron star were detected \citep{rcastillo19}. \cite{rcastillo19} also presented a spectral analysis of the source from these data, also finding that the source became very soft during the minimum of the flux modulation. They also suggest that the luminosity swing may be caused by a change in inclination angle of the disk, though the scaling of the temperature of the disk with luminosity did not fit this hypothesis. Alternatively, they proposed that the periodic flux modulations may be induced by changes in accretion rate caused by the orbit of a third star, although this explanation did not account for the spectral variations. Interestingly, \cite{earnshaw16} presented the results from a detailed systematic spectral analysis of ULX7 using \xmm, \chandra\ and \nustar, finding no evidence for significant spectral evolution, even at the lowest fluxes. They did, however, find evidence for soft diffuse thermal emission surrounding the source in the \chandra\ data, which is likely contaminating the larger PSFs of \xmm\ and \swift. It is possible that this component dominates the \xmm\ and \swift\ data at lower fluxes, making the source appear soft.
\begin{table*} \centering \caption{Super-orbital periods of all ULX pulsars known to date.} \label{table_sorbitalp} \begin{center} \begin{tabular}{l c c c c c c c c} \hline Source name & $P_{\rm spin}$ & Ref & $P_{\rm orbit}$ & Ref & $P_{\rm super-orbital}$ & Ref & \lx,min -- \lx,max & Ref \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9)\\ \hline M82 X-2 & 1.37 & 1,2 & 2.5 & 1,2 & 64 & 3 & $\sim10^{38}$--$10^{40}$ & 4,3 \\ NGC~7793~P13 & 0.42 & 5,6 & 63.9 & 7 & 66.8 & 7 & $\sim4\times10^{39}$--$10^{40}$ & 8,7 \\ NGC~5907~ULX1 & 1.43 & 9 & 5.3 & 9 & 78.1 & 10 & $\sim3\times10^{40}$--$10^{41}$ & 10,11\\ M51~ULX7 & 2.8 & 12 & 2.0 & 12 & 38 & 13 & $\sim8\times10^{38}$--$10^{40}$ & 13 \\ \hline \end{tabular} \tablecomments{Summary of the observed properties of the ULX pulsars which exhibit super-orbital periods in their lightcurves. Column (1) lists the source name, column (2) lists the spin period of the neutron star in s as determined from the pulsations, column (3) gives the reference for the spin, column (4) gives the orbital period in days of the neutron star and its binary companion determined from timing analyses of the pulsations, column (5) gives the reference for the orbital period, column (6) gives the super-orbital period in days, column (7) gives the reference for this. Column (8) gives the luminosity range of the super-orbital flux modulations in \mbox{\thinspace erg\thinspace s$^{-1}$}\ and column (9) gives the reference. References - 1. \cite{bachetti14}, 2. \cite{bachetti19}, 3. \cite{brightman19}, 4. \cite{brightman16}, 5. \cite{fuerst16}, 6. \cite{israel17}, 7. \cite{fuerst18}, 8. \cite{walton17}, 9. \cite{israel17a}, 10. \cite{walton16b}, 11. \cite{fuerst17}, 12. \cite{rcastillo19}, 13.
This work.} \end{center} \end{table*} \subsection{The nature of the new transient ULX \newulx} The peak luminosity and timescale of \newulx, the new transient we have identified, correspond to the ``ULX in outburst'' section of the X-ray transients luminosity-timescale plot from \cite{soderberg09}. However, we note that the decline of the outburst after $\sim50$ days is consistent with a $t^{-5/3}$ relationship, which is the signature of the fallback from a tidal disruption event (TDE, Figure \ref{fig_j1329_ltcrv}). TDEs, however, are much brighter and occur in the nuclei of galaxies, since they are thought to be due to the tidal disruption of a star by a supermassive black hole. Other examples of transient ULXs include NGC~300~ULX1, which had an X-ray outburst in 2010, reaching an X-ray luminosity of 5$\times10^{38}$ \mbox{\thinspace erg\thinspace s$^{-1}$} \citep{binder11}. The source was observed at lower fluxes in observations made in 2014 \citep{binder16} but then reached ULX luminosities during observations made in 2016, with \lx$\sim5\times10^{39}$ \mbox{\thinspace erg\thinspace s$^{-1}$}, during which pulsations were detected \citep{carpano18}. Regular \swift\ monitoring of the source in 2018 revealed that it initially persisted at these luminosities but then went into decline. Spectral analysis showed a hard spectrum. Another source, Swift~J0243.6+6124, was an X-ray transient found in our own Galaxy, identified by \swift/BAT \citep{cenko17} and with no previously reported activity. The source reached an X-ray luminosity of $2\times10^{39}$ \mbox{\thinspace erg\thinspace s$^{-1}$}\ in a period of around 30 days, before steadily declining over a period of $\sim100$ days \citep{wilsonhodge18}. The detection of pulsations identified it as a neutron star accretor \citep{kennea17}. The source exhibited re-brightenings in the X-ray after the decline, albeit to peak luminosities around 2 orders of magnitude less than the initial outburst \citep{vandeneijnden19}.
Most recently, \cite{earnshaw19} identified a new ULX in the galaxy NGC 6946 that was previously undetected in several archival \xmm\ and \chandra\ observations, but caught at a luminosity of $2\times10^{39}$ \mbox{\thinspace erg\thinspace s$^{-1}$}\ during a simultaneous \xmm\ and \nustar\ observation in 2017. The source was then undetected 10 days later in a \chandra\ observation. Again the spectrum was hard, with $\Gamma\sim$1. Several hypotheses were put forward by \cite{earnshaw19} regarding the nature of this transient event, which may also explain \newulx. These included a supernova, a neutron-star-powered ULX briefly leaving the propeller regime, a black hole binary exhibiting a hard-only outburst, and a micro-tidal disruption event. A supernova is unlikely to be the cause of \newulx\ since no supernova candidates were announced at its time and position and the galaxy is regularly monitored by transient surveys such as ZTF. Considering the peak luminosity of the outburst, a hard-only outburst would require a black hole much more massive than the black hole binaries in our Galaxy. The similarity of the outburst to other transient neutron star ULXs described above appears to favor that scenario, but the more exotic hypothesis of a micro-tidal disruption event cannot be ruled out, especially considering the $t^{-5/3}$ decline in count rate. It is possible that the source lies in the background of M51 and is in fact a nuclear TDE seen at a larger distance, and hence higher luminosity. However, \newulx\ is located in the spiral arm of M51, so the source should be absorbed, which is inconsistent with the X-ray spectrum. Further examples of transient ULXs are found in M31 \citep{middleton12,middleton13}, M83 \citep{soria12}, NGC~1365 \citep{soria07}, NGC~3628 \citep{strickland01}, NGC~5907 \citep{pintore18}, and NGC~7090 \citep{liu19}, among others. For most of these transient ULXs, only a handful of observations were made and the lightcurves were not well sampled.
Here, we serendipitously obtained a high quality lightcurve that captured the onset of activity and covered it until it became undetected. Since most ULXs are now considered to be super-Eddington accretors, we have likely witnessed the onset of a super-Eddington event with \newulx\ in M51, and the details we have gleaned from it may give clues regarding the formation of such an accretion flow. Most persistent ULXs show a characteristic broadband spectrum consisting of two disk-like components. \cite{walton17} presented a compilation of ULXs with good quality broadband X-ray spectra, finding that they are remarkably similar, even comparing those with known neutron star accretors and those with unknown accretors. The spectral components consist of two disk-like components, one potentially associated with the hot, large-scale-height inner flow, and the other a cooler component, perhaps associated with the outer part of the disk or the photosphere of an outflow. Furthermore, a high-energy tail is often detected, possibly associated with the accretion column of a pulsar, or Comptonized emission. When comparing the spectral shape of \newulx\ during the peak of its activity to the ULX population, we noted that it resembled the hot disk-like component attributed to the large-scale-height inner flow of other ULXs, with the cooler component absent (Figure \ref{fig_j1329_spec}). This perhaps suggests that the cooler component had yet to form. If the cooler component is indeed associated with the outer disk, its absence may explain why the outburst was relatively short, since it was not supplied by more material in the outer part of the disk. If instead the cooler component is from an outflow, it would imply that there is a time lag between the formation of the super-Eddington flow and the ejection of the material. Indeed a year after the outburst, the spectrum appeared much cooler, which may be the remnants of the outflow, albeit with the hotter component gone. 
Following an event such as this in more detail in the future will allow a clearer picture of the formation and evolution of a super-Eddington flow. With the launch of \erosita\ \citep{merloni12}, which will observe each part of the sky every 6 months, more of these transient ULXs will undoubtedly be identified. \section{Summary and Conclusions} \label{sec_conc} We have presented the results from the systematic monitoring of M51 by \swiftxrt\ over a period of $\sim1.5$ years, which has yielded high-quality X-ray lightcurves of more than 10 X-ray sources, including two AGN and several ULXs. Our main results are the detection of a 38-day super-orbital flux modulation from the ULX pulsar, ULX7, and the identification of a new, transient ULX, \newulx. The super-orbital period is a near-ubiquitous property of ULX pulsars, possibly driven by the precession of a large-scale-height disk. However, the order-of-magnitude-and-a-half swings in flux are hard to reconcile with this theory. Meanwhile, the spectral shape as a function of phase does not favor the alternative of periodic obscuration of the source, unless it is being completely obscured and only scattered light is seen. The outburst of \newulx\ appears similar to super-Eddington outbursts from other neutron stars, suggesting we have witnessed the onset and decline of a highly super-Eddington event. The spectral shape at the peak of the outburst is similar to the hot disk-like component seen in other ULXs that is attributed to a large-scale-height accretion flow. \section{Acknowledgements} We wish to thank the \swift\ PI, Brad Cenko, for approving the target of opportunity requests we made to observe M51, as well as the rest of the \swift\ team for carrying them out. We also acknowledge the use of public data from the \swift\ data archive. This work made use of data supplied by the UK Swift Science Data Centre at the University of Leicester.
The work of DS was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with NASA. DJW acknowledges support from an STFC Ernest Rutherford Fellowship. \input{M51swiftarxiv.bbl} \end{document}
\section{The infrastructure} \label{sec:infra} The motivation for the study reported in this paper comes from the celebrated and long-standing problem, originally posed by Erd{\H o}s \cite{Er} in 1946, of obtaining a sharp lower bound for the number of distinct distances guaranteed to exist in any set $S$ of $s$ points in the plane. Erd{\H o}s has shown that a section of the integer lattice determines only $O(s/\sqrt{\log s})$ distinct distances, and conjectured this to be a lower bound for any planar point set. In spite of steady progress on this problem, reviewed next, Erd{\H o}s's conjecture is still open. L. Moser \cite{Mo}, Chung \cite{Chu}, and Chung~\etal~\cite{ChSzT} proved that the number of distinct distances determined by $s$ points in the plane is $\Omega(s^{2/3})$, $\Omega(s^{5/7})$, and $\Omega(s^{4/5}/{\rm polylog}(s))$, respectively. Sz\'ekely \cite{Sz} managed to get rid of the polylogarithmic factor, while Solymosi and T\'oth \cite{SoTo} improved this bound to $\Omega(s^{6/7})$. This was a real breakthrough. Their analysis was subsequently refined by Tardos \cite{Ta} and then by Katz and Tardos \cite{KT}, who obtained the current record of $\Omega(s^{(48-14e)/(55-16e)-\eps})$, for any $\eps>0$, which is $\Omega(s^{0.8641})$. In this paper we transform the problem of distinct distances in the plane to an incidence problem between points and a certain kind of curve (helices or parabolas) in three dimensions. As we show, sharp upper bounds on the number of such incidences translate back to sharp lower bounds on the number of distinct distances. Incidence problems in three dimensions between points and curves have been studied in several recent works \cite{AKS,EKS,SW}, and a major push in this direction was made last year with the breakthrough result of Guth and Katz~\cite{GK}, who introduced methods from algebraic geometry for studying problems of this kind.
This has been picked up by the authors~\cite{EKS}, where worst-case tight bounds on the number of incidences between points and lines in three dimensions (under certain restrictions) have been obtained. The present paper serves two purposes. First, it studies in detail the connection between the distinct distances problem and the corresponding 3-dimensional incidence problem. As it turns out, there is a lot of interesting geometric structure behind this reduction, and the paper develops it in detail. We offer several conjectures on the number of incidences, and show how, if true, they yield the almost tight worst-case lower bound $\Omega(s/\log s)$ on the number of distinct distances. Unfortunately, so far we have not succeeded in proving these conjectures. Nevertheless, we have made considerable progress on the incidence problem itself, which is the second purpose of the study in this paper. We show how to adapt the algebraic machinery of \cite{GK,EKS,KSS,Qu} to derive sharp bounds for the incidence problem. These bounds are very similar to, and in fact even better than, the bounds obtained in \cite{EKS} for point-line incidences, where they have been shown to be worst-case tight. However, they are not (yet) good enough to yield significant lower bounds for distinct distances. We believe that there is additional geometric structure in the particular problem studied here, which should enable one to further improve the bounds, but so far this remains elusive. The paper is organized as follows. We first describe the reduction from the planar distinct distances problem to the 3-dimensional incidence problem mentioned above. In doing so, we note and explore several additional geometric connections between the two problems (as manifested, e.g., in the analysis of {\em special surfaces} given below).
We then present the tools from algebraic geometry that are needed to tackle the incidence problem; they are variants of the tools used in \cite{EKS,GK}, adapted to the specific curves that we need to handle. We then go on to bound the number of incidences. We first bound the number of rotations in terms of the number of parabolas, and then bound the number of incidences themselves. The latter task is achieved in two steps. We first use a ``purely algebraic'' analysis, akin to those in \cite{EKS,GK}, to obtain a weaker bound, which we then refine in the second step, using more traditional space decomposition techniques. The final bound is still not as good as we would like it to be, but it shows that the case studied in this paper ``behaves better'' than its counterpart involving lines. \paragraph{Distinct distances and incidences with helices.} We offer the following novel approach to the problem of distinct distances. \paragraph{(H1) Notation.} Let $S$ be a set of $s$ points in the plane with $x$ distinct distances. Let $K$ denote the set of all quadruples $(a,b,a',b')\in S^4$, such that the pairs $(a,b)$ and $(a',b')$ are distinct (although the points themselves need not be) and $|ab|=|a'b'|>0$. Let $\delta_1,\ldots,\delta_x$ denote the $x$ distinct distances in $S$, and let $E_i = \{(a,b)\in S^2 \mid |ab|=\delta_i \}$. We have $$ |K| = 2\sum_{i=1}^x {|E_i|\choose 2} \ge \sum_{i=1}^x (|E_i|-1)^2 \ge \frac{1}{x}\left[ \sum_{i=1}^x (|E_i|-1) \right]^2 = \frac{\left[s(s-1)-x\right]^2}{x} . $$ \paragraph{(H2) Rotations.} We associate each $(a,b,a',b')\in K$ with a (unique) {\em rotation} (or, rather, a rigid, orientation-preserving transformation of the plane) $\tau$, which maps $a$ to $a'$ and $b$ to $b'$. A rotation $\tau$, in complex notation, can be written as the transformation $z\mapsto pz+q$, where $p,q\in\comp$ and $|p|=1$. Putting $p=e^{i\theta}$, $q=\xi+i\eta$, we can represent $\tau$ by the point $(\xi,\eta,\theta) \in \reals^3$. 
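To make this representation concrete, here is a minimal Python sketch (the helper name `rotation_from_pairs` is ours): given two pairs of points at equal distances, it computes the unique rotation $z\mapsto pz+q$ in complex form and reads off the corresponding point $(\xi,\eta,\theta)\in\reals^3$.

```python
import cmath

def rotation_from_pairs(a, b, a2, b2):
    """Unique rigid motion z -> p*z + q mapping a -> a2 and b -> b2.
    Well-defined (with |p| = 1) exactly when |a - b| = |a2 - b2|."""
    p = (a2 - b2) / (a - b)   # unit modulus when the two distances agree
    q = a2 - p * a
    return p, q

# Map the pair (0, 1) on the x-axis to the pair (i, 1+i) one unit higher:
p, q = rotation_from_pairs(0j, 1 + 0j, 1j, 1 + 1j)
assert abs(abs(p) - 1) < 1e-12            # p lies on the unit circle
assert p * 0j + q == 1j and p * (1 + 0j) + q == 1 + 1j
xi, eta, theta = q.real, q.imag, cmath.phase(p)
# Here p = 1, so theta = 0: a pure translation by q = i.
assert (xi, eta, theta) == (0.0, 1.0, 0.0)
```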
In the planar context, $\theta$ is the counterclockwise angle of the rotation, and the center of rotation is $c=q/(1-e^{i\theta})$, which is defined for $\theta\ne 0$; for $\theta=0$, $\tau$ is a pure translation. The {\em multiplicity} $\mu(\tau)$ of a rotation $\tau$ (with respect to $S$) is defined as $|\tau(S)\cap S| =$ the number of pairs $(a,b)\in S^2$ such that $\tau(a)=b$. Clearly, one always has $\mu(\tau) \le s$, and we will mostly consider only rotations satisfying $\mu(\tau)\ge 2$. As a matter of fact, the bulk of the paper will only consider rotations with multiplicity at least $3$. Rotations with multiplicity $2$ are harder to analyze. If $\mu(\tau) = k$ then $S$ contains two congruent and equally oriented copies $A,B$ of some $k$-element set, such that $\tau(A)=B$. Thus, studying multiplicities of rotations is closely related to analyzing repeated (congruent and equally oriented) patterns in a planar point set; see \cite{BMP} for a review of many problems of this kind. \paragraph{Anti-rotations.} In this paper we will also consider {\em anti-rotations}, which are rigid, orientation-reversing transformations of the plane. Any anti-rotation can be represented as a rotation, followed by a reflection about some fixed line, e.g., the $x$-axis (so, in complex notation, this can be written as $z \mapsto \overline{pz+q}$). Anti-rotations will be useful in certain steps of the analysis. \paragraph{(H3) Bounding $|K|$.} If $\mu(\tau) = k$ then $\tau$ contributes $\binom{k}{2}$ quadruples to $K$. Let $N_k$ (resp., $N_{\ge k}$) denote the number of rotations with multiplicity exactly $k$ (resp., at least $k$), for $k\ge 2$. Then $$ |K| = \sum_{k=2}^{s} {k\choose 2} N_k = \sum_{k=2}^{s} {k\choose 2} (N_{\ge k} - N_{\ge k+1}) = N_{\ge 2} + \sum_{k\ge 3} (k-1) N_{\ge k} . $$ \paragraph{(H4) The main conjecture.} \begin{conjecture} \label{conj1} For any $2\le k\le s$, we have $$ N_{\ge k} = O\left(s^3/k^2\right). 
$$ \end{conjecture} Suppose that the conjecture were true. Then we would have $$ \frac{\left[s(s-1)-x\right]^2}{x} \le |K| = O(s^3) \cdot \left[ 1 + \sum_{k\ge 3} \frac{1}{k} \right] = O(s^3\log s) , $$ which would have implied that $x=\Omega(s/\log s)$. This would have almost settled the problem of obtaining a tight bound for the minimum number of distinct distances guaranteed to exist in any set of $s$ points in the plane, since, as mentioned above, the upper bound for this quantity is $O(s/\sqrt{\log s})$ \cite{Er}. We note that Conjecture~\ref{conj1} is rather deep; even the simple instance $k=2$, asserting that there are only $O(s^3)$ rotations which map (at least) two points of $S$ to two other points of $S$ (at the same distance apart), seems quite difficult. In this paper we establish a variety of upper bounds on the number of rotations and on the sum of their multiplicities. In particular, these results provide a partial positive answer, showing that $N_{\ge 3} = O(s^3)$; that is, the number of rotations which map a (degenerate or non-degenerate) triangle determined by $S$ to another congruent (and equally oriented) such triangle, is $O(s^3)$. Bounding $N_2$ by $O(s^3)$ is still an open problem. See Section~\ref{sec:impr} for a simple proof of the weaker bound $N_{\ge 2} = O(s^{10/3})$. \paragraph{Lower bound.} We next give a construction (suggested by Haim Kaplan) which shows: \begin{lemma} \label{lem:lower} There exist sets $S$ in the plane of arbitrarily large cardinality, which determine $\Theta(|S|^3)$ distinct rotations, each mapping a triple of points of $S$ to another triple of points of $S$. \end{lemma} \noindent{\bf Proof:} Consider the set $S=S_1\cup S_2\cup S_3$, where \begin{eqnarray*} S_1 & = & \{ (i,0) \mid i=1,\ldots,s \} , \\ S_2 & = & \{ (i,1) \mid i=1,\ldots,s \} , \\ S_3 & = & \{ (i/2,1/2) \mid i=1,\ldots,2s \} . \end{eqnarray*} See Figure~\ref{lower}. 
\begin{figure}[htbp] \begin{center} \input lower.pstex_t \caption{A lower bound construction of $\Theta(|S|^3)$ rotations with multiplicity $3$.} \label{lower} \end{center} \end{figure} For each triple $a,b,c\in\{1,\ldots,s\}$ such that $a+b-c$ also belongs to $\{1,\ldots,s\}$, construct the rotation $\tau_{a,b,c}$ which maps $(a,0)$ to $(b,0)$ and $(c,1)$ to $(a+b-c,1)$. Since the distance between the two source points is equal to the distance between their images, $\tau_{a,b,c}$ is well (and uniquely) defined. Moreover, $\tau_{a,b,c}$ maps the midpoint $((a+c)/2,1/2)$ to the midpoint $((a+2b-c)/2,1/2)$. We claim that the rotations $\tau_{a,b,c}$ are all distinct. Indeed, suppose that two such rotations, $\tau_{a,b,c}$ and $\tau_{a',b',c'}$, for distinct triples $(a,b,c)$, $(a',b',c')$, coincide; call the common rotation $\tau$. We can represent $\tau$ as the rigid transformation which first translates the plane horizontally by distance $b-a$, so that $(a,0)$ is mapped to $(b,0)$, and then rotates it around $(b,0)$ by an appropriate angle $0<\theta<\pi$, so that $(c+b-a,1)$ is mapped to $(a+b-c,1)$. Suppose first that $a\ne a'$. Since $\tau = \tau_{a,b,c} = \tau_{a',b',c'}$, it maps $(a',0)$ to $(a'+b-a,0)$ and then rotates this point by angle $\theta$ around $(b,0)$, mapping it to a point outside the $x$-axis, contradicting the fact that $\tau_{a',b',c'}$ maps $(a',0)$ to $(b',0)$. If $a'=a$ then we also must have $b'=b$, so $c'\ne c$. But then it is impossible to turn, around $(b,0)$, the shifted point $(c+b-a,1)$ to $(a+b-c,1)$ and the shifted point $(c'+b-a,1)$ to $(a+b-c',1)$, by the same angle, a contradiction which shows that the two rotations are distinct. Since there are $\Theta(s^3)$ triples $(a,b,c)$ with the above properties, the claim follows. $\Box$ \noindent{\bf Remarks.} {\bf (1)} A ``weakness'' of this construction is that all the rotations $\tau_{a,b,c}$ map a {\em collinear} triple of points of $S$ to another collinear triple. 
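The construction can also be checked computationally for small $s$. The Python sketch below (our own code, using the complex representation of rotations from (H2)) enumerates the triples $(a,b,c)$ with $a+b-c\in\{1,\ldots,s\}$, restricted to $c\ne a$ so that the rotation angle is nonzero, and verifies both the midpoint property and the pairwise distinctness of the resulting rotations.

```python
import cmath

s = 6
rotations = set()
count = 0
for a in range(1, s + 1):
    for b in range(1, s + 1):
        for c in range(1, s + 1):
            if c == a or not (1 <= a + b - c <= s):
                continue   # skip degenerate or out-of-range triples
            # Unique rotation mapping (a,0) -> (b,0) and (c,1) -> (a+b-c,1):
            z1, z2 = complex(a, 0), complex(c, 1)
            w1, w2 = complex(b, 0), complex(a + b - c, 1)
            p = (w1 - w2) / (z1 - z2)   # |p| = 1 since |z1-z2| = |w1-w2|
            q = w1 - p * z1
            # It maps the midpoint ((a+c)/2, 1/2) to ((a+2b-c)/2, 1/2):
            m = p * complex((a + c) / 2, 0.5) + q
            assert abs(m - complex((a + 2*b - c) / 2, 0.5)) < 1e-9
            rotations.add((round(p.real, 6), round(p.imag, 6),
                           round(q.real, 6), round(q.imag, 6)))
            count += 1
# All these rotations are pairwise distinct:
assert count > 0 and len(rotations) == count
```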
(In the terminology to follow, these will be called {\em flat} rotations.) We do not know whether the number of rotations which map a {\em non-collinear} triple of points of $S$ to another non-collinear triple can be $\Omega(|S|^3)$. We tend to conjecture that this is indeed the case. \noindent {\bf (2)} We do not know whether Conjecture~\ref{conj1} is worst-case tight (if true). That is, we do not know whether there exist sets $S$, with $s=|S|$ arbitrarily large, so that there are $\Omega(s^3/k^2)$ distinct rotations, each mapping at least $k$ points of $S$ to $k$ other points of $S$. \paragraph{(H5) Helices.} To estimate $N_{\ge k}$, we reduce the problem of analyzing rotations and their interaction with $S$ to an incidence problem in three dimensions, as follows. With each pair $(a,b)\in S^2$, we associate the curve $h_{a,b}$, in a 3-dimensional space parametrized by $(\xi,\eta,\theta)$, which is the locus of all rotations which map $a$ to $b$. That is, the equation of $h_{a,b}$ is given by $$ h_{a,b} = \{ (\xi,\eta,\theta) \mid b = ae^{i\theta} + (\xi,\eta) \}. $$ Putting $a=(a_1,a_2)$, $b=(b_1,b_2)$, this becomes \begin{eqnarray} \label{eq:helix} \xi & = & b_1 - (a_1\cos\theta - a_2\sin\theta) , \\ \eta & = & b_2 - (a_1\sin\theta + a_2\cos\theta) \nonumber . \end{eqnarray} This is a {\em helix} in $\reals^3$, having four degrees of freedom, which we parametrize by $(a_1,a_2,b_1,b_2)$. It extends from the plane $\theta=0$ to the plane $\theta=2\pi$; its two endpoints lie vertically above each other, and it completes exactly one revolution between them.
$$ Rotations $\tau$ with $\mu(\tau)=1$ are not interesting, because each of them only contributes $1$ to the count $I(P,H)$, and we will mostly ignore them. For the same reason, rotations with $\mu(\tau)=2$ are also not interesting for estimating $I(P,H)$, but they need to be included in the analysis of $N_{\ge 2}$. Unfortunately, as already noted, we do not yet have a good upper bound (i.e., cubic in $s$) on the number of such rotations. \paragraph{(H7) Incidences and the second conjecture.} \begin{conjecture} \label{conj2} For any $P$ and $H$ as above, we have $$ I(P,H) = O\left(|P|^{1/2}|H|^{3/4}+|P|+|H|\right) . $$ \end{conjecture} Suppose that Conjecture~\ref{conj2} were true. Let $P_{\ge k}$ denote the set of all rotations with multiplicity at least $k$ (with respect to $S$). We then have $$ kN_{\ge k} = k|P_{\ge k}| \le I(P_{\ge k},H) = O\left(N_{\ge k}^{1/2}|H|^{3/4} + N_{\ge k} + |H|\right) , $$ from which we obtain $$ N_{\ge k} = O\left( \frac{s^3}{k^2} + \frac{s^2}{k} \right) = O\left( \frac{s^3}{k^2} \right) , $$ thus establishing Conjecture~\ref{conj1}, and therefore also the lower bound for $x$ (the number of distinct distances) derived above from this conjecture. \noindent{\bf Remark.} Conjecture~\ref{conj2} can also be formulated for an {\em arbitrary} subset $H$ of all possible helices. Note that two helices $h_{a,b}$ and $h_{c,d}$ intersect in at most one point---this is the unique rotation which maps $a$ to $b$ and $c$ to $d$ (if it exists at all, namely if $|ac|=|bd|$). Hence, combining this fact with a standard cutting-based decomposition technique, similar to what has been noted in \cite{SW}, say, yields the weaker bound \begin{equation} \label{weak23} I(P,H) = O\left(|P|^{2/3}|H|^{2/3}+|P|+|H|\right) , \end{equation} which, alas, only yields the much weaker bound $N_{\ge k} = O\left( s^4/k^3 \right)$, which is completely useless for deriving any lower bound on $x$. (We will use this bound, though, in Section~\ref{sec:conc}.) 
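The one-intersection property is easy to check directly against the helix equations (\ref{eq:helix}). The following Python sketch (our own helper names) computes, for one quadruple with $|ac|=|bd|$, the rotation mapping $a$ to $b$ and $c$ to $d$, and verifies that it is a common point of $h_{a,b}$ and $h_{c,d}$.

```python
import cmath

def helix_point(a, b, theta):
    """The (xi, eta) of the point of h_{a,b} at height theta:
    the rotation by angle theta taking a to b, i.e. b = a*e^{i*theta} + (xi + i*eta)."""
    q = b - a * cmath.exp(1j * theta)
    return q.real, q.imag

# Points with |a - c| = |b - d| = 1, so a common rotation exists:
a, b = 0j, 1 + 0j
c, d = 1 + 0j, 1 + 1j
p = (b - d) / (a - c)     # rotation coefficient; |p| = 1
q = b - p * a
theta = cmath.phase(p)    # here p = i, so theta = pi/2
assert abs(abs(p) - 1) < 1e-12
assert p * c + q == d     # the rotation indeed maps c to d
x1, y1 = helix_point(a, b, theta)
x2, y2 = helix_point(c, d, theta)
# The same rotation (xi, eta, theta) lies on both helices:
assert abs(x1 - x2) < 1e-12 and abs(y1 - y2) < 1e-12
```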
\paragraph{(H8) From helices to parabolas.} The helices $h_{a,b}$ are non-algebraic curves, because of the use of the angle $\theta$ as a parameter. This can be easily remedied, in the following standard manner. Assume that $\theta$ ranges from $-\pi$ to $\pi$, and substitute, in the equations (\ref{eq:helix}), $Z=\tan(\theta/2)$, $X = \xi(1+Z^2)$, and $Y = \eta(1+Z^2)$, to obtain \begin{eqnarray} \label{parabola} X & = & (a_1+b_1)Z^2 + 2a_2Z + (b_1-a_1) \\ Y & = & (a_2+b_2)Z^2 - 2a_1Z + (b_2-a_2) , \nonumber \end{eqnarray} which are the equations of a {\em planar parabola} in the $(X,Y,Z)$-space. (The parabola degenerates to a line if $b=-a$, a situation that we will rule out by choosing an appropriate generic coordinate frame in the original $xy$-plane.) We denote the parabola corresponding to the helix $h_{a,b}$ as $h^*_{a,b}$, and refer to it as an {\em $h$-parabola}. \paragraph{(H9) Joint and flat rotations.} A rotation $\tau\in P$ is called a {\em joint} of $H$ if $\tau$ is incident to at least three helices of $H$ whose tangent lines at $\tau$ are non-coplanar. Otherwise, still assuming that $\tau$ is incident to at least three helices of $H$, $\tau$ is called {\em flat}. Let $\tau=(\xi,\eta,\theta)\in P$ be a rotation, incident to three distinct helices $h_{a,b}$, $h_{c,d}$, $h_{e,f}$. From their equations, as given in (\ref{eq:helix}), the directions of the tangents to these helices at $\tau$ are \begin{align*} & (a_1\sin\theta + a_2\cos\theta,\, -a_1\cos\theta + a_2\sin\theta,\, 1) \\ & (c_1\sin\theta + c_2\cos\theta,\, -c_1\cos\theta + c_2\sin\theta,\, 1) \\ & (e_1\sin\theta + e_2\cos\theta,\, -e_1\cos\theta + e_2\sin\theta,\, 1) . \end{align*} Put $p=\cos\theta$ and $q=\sin\theta$. Then the three tangents are coplanar if and only if $$ \left| \begin{array}{ccc} a_1q+a_2p & -a_1p+a_2q & 1 \\ c_1q+c_2p & -c_1p+c_2q & 1 \\ e_1q+e_2p & -e_1p+e_2q & 1 \end{array} \right| = 0 . 
$$ Simplifying the determinant, and recalling that $p^2+q^2=1$, the condition is equivalent to $$ \left| \begin{array}{ccc} a_1 & a_2 & 1 \\ c_1 & c_2 & 1 \\ e_1 & e_2 & 1 \end{array} \right| = 0 . $$ In other words, the three helices $h_{a,b}$, $h_{c,d}$, $h_{e,f}$ form a joint at $\tau$ if and only if the three points $a,c,e$ (and thus also $b,d,f$) are non-collinear. That is, we have shown: \begin{claim} \label{jointri} A rotation $\tau$ is a joint of $H$ if and only if $\tau$ maps a non-degenerate triangle determined by $S$ to another (congruent and equally oriented) non-degenerate triangle determined by $S$. A rotation $\tau$ is a flat rotation if and only if $\tau$ maps at least three collinear points of $S$ to another collinear triple of points of $S$, but does not map any point of $S$ outside the line containing the triple to another point of $S$. \end{claim} \noindent{\bf Remarks:} {\bf (1)} Note that if $\tau$ is a flat rotation, it maps the entire line containing the three source points to the line containing their images. Specifically (see also below), we can respectively parametrize points on these lines as $a_0+tu$, $b_0+tv$, for $t\in\reals$, such that $\tau$ maps $a_0+tu$ to $b_0+tv$ for every $t$. \noindent {\bf (2)} For flat rotations, we also need to ensure, for technical reasons, that the three (or more) helices incident to a flat rotation $\tau$ are such that their tangents at $\tau$ are all distinct. This fortunately is always the case. Indeed, the preceding analysis is easily seen to imply that if $h_{a,b}$ and $h_{c,d}$ meet at $\tau$ then their tangents at $\tau$ coincide if and only if $a=c$. But then $h_{a,b}$ and $h_{a,d}$ cannot have a common point (rotation) unless $b=d$ too, i.e., they are the same helix; otherwise the common rotation would have to map $a$ to the two distinct points $b$ and $d$, an impossibility.
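The determinant simplification behind Claim~\ref{jointri} can be spot-checked numerically. In the Python sketch below (the helper names `det3` and `tangent_row` are ours), the determinant of the three tangent directions in fact equals the collinearity determinant of $a,c,e$ exactly, because the planar map $(a_1,a_2)\mapsto (a_1q+a_2p,\,-a_1p+a_2q)$ has determinant $p^2+q^2=1$.

```python
import math
import random

def det3(r1, r2, r3):
    """3x3 determinant by cofactor expansion along the first row."""
    (a, b, c), (d, e, f), (g, h, i) = r1, r2, r3
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

random.seed(0)
theta = random.uniform(0, 2 * math.pi)
p, q = math.cos(theta), math.sin(theta)   # p^2 + q^2 = 1

def tangent_row(a1, a2):
    # Tangent direction at the rotation, for the helix of a source point (a1, a2)
    return (a1 * q + a2 * p, -a1 * p + a2 * q, 1.0)

pts = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(3)]
lhs = det3(*(tangent_row(a1, a2) for (a1, a2) in pts))
rhs = det3(*((a1, a2, 1.0) for (a1, a2) in pts))
assert abs(lhs - rhs) < 1e-9   # tangents are coplanar iff a, c, e are collinear

# Collinear source points give zero on both sides:
col = [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0)]
assert abs(det3(*(tangent_row(a1, a2) for (a1, a2) in col))) < 1e-9
```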
\paragraph{(H10) Special surfaces.} In preparation for the forthcoming algebraic analysis, we need the following property of our helices. Let $\tau$ be a flat rotation, with multiplicity $k\ge 3$, and let $\ell$ and $\ell'$ be the corresponding lines in the plane, such that there exist $k$ points $a_1,\ldots,a_k\in S\cap \ell$ and $k$ points $b_1,\ldots,b_k\in S\cap \ell'$, such that $\tau$ maps $a_i$ to $b_i$ for each $i$ (and in particular maps $\ell$ to $\ell'$). By definition, $\tau$ is incident to the $k$ helices $h_{a_i,b_i}$, for $i=1,\ldots,k$. Let $u$ and $v$ denote unit vectors in the direction of $\ell$ and $\ell'$, respectively. Clearly, there exist two reference points $a\in\ell$ and $b\in\ell'$, such that for each $i$ there is a real number $t_i$ such that $a_i=a+t_iu$ and $b_i=b+t_iv$. As a matter of fact, for each real $t$, $\tau$ maps $a+tu$ to $b+tv$, so it is incident to $h_{a+tu,b+tv}$. Note that $a$ and $b$ are not uniquely defined: we can take $a$ to be any point on $\ell$, and shift $b$ accordingly along $\ell'$. Let $H(a,b;u,v)$ denote the set of these helices. Since a pair of helices can meet in at most one point, all the helices in $H(a,b;u,v)$ pass through $\tau$ but are otherwise pairwise disjoint. Using the re-parametrization $(\xi,\eta,\theta)\mapsto (X,Y,Z)$, we denote by $\Sigma=\Sigma(a,b;u,v)$ the surface which is the union of all the $h$-parabolas that are the images of the helices in $H(a,b;u,v)$. We refer to such a surface $\Sigma$ as a {\em special surface}. An important comment is that most of the ongoing analysis also applies when only two helices are incident to $\tau$; they suffice to determine the four parameters $a,b,u,v$ that define the surface $\Sigma$. We also remark that, although we started the definition of $\Sigma(a,b;u,v)$ with a flat rotation $\tau$, the definition only depends on the parameters $a,b,u$, and $v$ (and even there we have, as just noted, one degree of freedom in choosing $a$ and $b$). 
If $\tau$ is not flat it may determine many special surfaces, one for each line that contains two or more points of $S$ which $\tau$ maps to other (also collinear) points of $S$. Also, as we will shortly see, the same surface can be obtained from a different set (in fact, many such sets) of parameters $a',b',u'$, and $v'$ (or, alternatively, from different flat rotations $\tau'$). An ``intrinsic'' definition of special surfaces will be given shortly. The surface $\Sigma$ is a cubic algebraic surface, whose equation can be worked out as follows. The equation of the parabola $h^*_{a+tu,b+tv}$ corresponding to $h_{a+tu,b+tv}$ is \begin{eqnarray*} X & = & (a_1+b_1+t(u_1+v_1))Z^2 + 2(a_2+tu_2)Z + (b_1-a_1+t(v_1-u_1)) \\ Y & = & (a_2+b_2+t(u_2+v_2))Z^2 - 2(a_1+tu_1)Z + (b_2-a_2+t(v_2-u_2)) . \end{eqnarray*} We can view this as a parametrization of $\Sigma$ using $t$ and $Z$ as parameters. We can simplify these equations as \begin{eqnarray} \label{parsig} X & = & tQ_1(Z) + Q_3(Z) \\ Y & = & tQ_2(Z) + Q_4(Z) , \nonumber \end{eqnarray} where $Q_1,\ldots,Q_4$ are quadratic polynomials in $Z$. Eliminating $t$ from these equations gives us the first version of the equation of $\Sigma$, which is \begin{equation} \label{sigma1} Q_2(Z)X - Q_1(Z)Y + (Q_1(Z)Q_4(Z)-Q_2(Z)Q_3(Z)) = 0 . \end{equation} This is a quartic equation, although it is only linear in $X$ and $Y$. Note also that the cross-section of $\Sigma$ by any plane $Z={\rm const}$ is a line, so $\Sigma$ is a ruled surface. We next reduce (\ref{sigma1}) to a cubic equation, as follows. Let $(X_0,Y_0,Z_0)$ denote the coordinates of $\tau$ in the $XYZ$-frame. We note that $Q_1(Z_0)=Q_2(Z_0)=0$. This can be worked out explicitly, or concluded by noting that $(X_0,Y_0,Z_0)$ is a common point of all our parabolas, so $(X_0,Y_0,Z_0)$ cannot determine $t$, meaning that the coefficients $Q_1(Z_0)$ and $Q_2(Z_0)$ in (\ref{parsig}) must both be zero. 
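The vanishing of $Q_1$ and $Q_2$ at $Z_0$ can also be confirmed numerically. In the sketch below (our notation), $u$ and $v$ are the unit direction vectors of the two lines, and $Z_0=\tan(\theta/2)$ is the $Z$-coordinate of $\tau$.

```python
import math
import random

random.seed(1)
alpha = random.uniform(0, 2 * math.pi)
theta = random.uniform(0.1, math.pi - 0.1)   # keep Z0 = tan(theta/2) bounded
u = (math.cos(alpha), math.sin(alpha))                    # direction of the source line
v = (math.cos(alpha + theta), math.sin(alpha + theta))    # direction of its image
Z0 = math.tan(theta / 2)

# Q1, Q2 from the parametrization of the special surface:
Q1 = (u[0] + v[0]) * Z0**2 + 2 * u[1] * Z0 + (v[0] - u[0])
Q2 = (u[1] + v[1]) * Z0**2 - 2 * u[0] * Z0 + (v[1] - u[1])
assert abs(Q1) < 1e-9 and abs(Q2) < 1e-9   # both vanish at Z0
```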
Hence, each of the three polynomials $Q_2$, $Q_1$, and $Q_1Q_4-Q_2Q_3$, appearing in the left-hand side of (\ref{sigma1}), vanishes at $Z_0$, and is therefore divisible by $Z-Z_0$. Factoring $Z-Z_0$ out, we get a reduced equation for $\Sigma$, of the form \begin{equation} \label{sigma} E_2(Z)X - E_1(Z)Y + (E_1(Z)Q_4(Z)-E_2(Z)Q_3(Z)) = 0 , \end{equation} where $E_1$ and $E_2$ are linear in $Z$. Recalling that \begin{eqnarray*} Q_1(Z) & = & (u_1+v_1)Z^2 + 2u_2Z + (v_1-u_1) \\ Q_2(Z) & = & (u_2+v_2)Z^2 - 2u_1Z + (v_2-u_2) \\ Q_3(Z) & = & (a_1+b_1)Z^2 + 2a_2Z + (b_1-a_1) \\ Q_4(Z) & = & (a_2+b_2)Z^2 - 2a_1Z + (b_2-a_2) , \end{eqnarray*} an explicit calculation yields: \begin{eqnarray*} E_1(Z) & = & (u_1+v_1)(Z+Z_0) + 2u_2 \\ E_2(Z) & = & (u_2+v_2)(Z+Z_0) - 2u_1 . \end{eqnarray*} An additional explicit calculation shows that \begin{equation} \label{e12z0} E_1(Z_0) = 2v_2 \quad\quad\mbox{and}\quad\quad E_2(Z_0) = -2v_1 . \end{equation} (To see, say, the first equality, we need to show that $(u_1+v_1)Z_0 = v_2-u_2$. Writing $u=(\cos\alpha,\sin\alpha)$, $v=(\cos(\alpha+\theta),\sin(\alpha+\theta))$, where $\theta$ is the angle of rotation, and recalling that $Z_0=\tan\frac{\theta}{2}$, the claim follows by straightforward trigonometric manipulations.) This allows us to rewrite \begin{eqnarray} \label{e1e2} E_1(Z) & = & (u_1+v_1)Z + (u_2+v_2) \\ E_2(Z) & = & (u_2+v_2)Z - (u_1+v_1) \nonumber . \end{eqnarray} Hence, the ``free'' term in (\ref{sigma}) is the cubic polynomial $$ E_1(Z)Q_4(Z)-E_2(Z)Q_3(Z) = $$ $$ \biggl( (u_1+v_1)Z + (u_2+v_2) \biggr) \biggl( (a_2+b_2)Z^2 - 2a_1Z + (b_2-a_2) \biggr) - $$ $$ \biggl( (u_2+v_2)Z - (u_1+v_1) \biggr) \biggl( (a_1+b_1)Z^2 + 2a_2Z + (b_1-a_1) \biggr) . $$ We refer to the cubic polynomial in the left-hand side of (\ref{sigma}) as a {\em special polynomial}. Thus a special surface is the zero set of a special polynomial. \paragraph{(H11) The geometry of special surfaces.} Special surfaces pose a technical challenge to the analysis. 
Specifically, each special surface $\Sigma$ captures a certain underlying pattern in the ground set $S$, which may result in many incidences between rotations and $h$-parabolas, all contained in $\Sigma$. The next step of the analysis studies this pattern in detail. \begin{figure}[htbp] \begin{center} \input uvuv1.pstex_t \caption{The configuration of $u,v,u',v'$.} \label{uvuv1} \end{center} \end{figure} Consider first a simple instance of this situation, in which two special surfaces $\Sigma$, $\Sigma'$, generated by two distinct flat rotations $\tau$, $\tau'$, coincide. More precisely, there exist four parameters $a,b,u,v$ such that $\tau$ maps the line $\ell_1 = a+tu$ to the line $\ell_2 = b+tv$ (so that points with the same parameter $t$ are mapped to one another), and four other parameters $a',b',u',v'$ such that $\tau'$ maps (in a similar manner) the line $\ell'_1 = a'+tu'$ to the line $\ell'_2 = b'+tv'$, and $\Sigma(a,b;u,v) = \Sigma(a',b';u',v')$. Denote this common surface by $\Sigma$. Since the surfaces coincide, the coefficients $E_1(Z)$, $E_2(Z)$ for $(a,b,u,v)$ must be proportional to the coefficients $E'_1(Z)$, $E'_2(Z)$ for $(a',b',u',v')$. That is, we must have $u'_1+v'_1 = \gamma(u_1+v_1)$ and $u'_2+v'_2 = \gamma(u_2+v_2)$, for some real $\gamma$. In other words, $u'+v' = \gamma(u+v)$. Since $u,v,u',v'$ are unit vectors, the angle bisector between $u$ and $v$ must coincide with that between $u'$ and $v'$, as depicted in Figure~\ref{uvuv1}. Moreover, as is easily checked, if we let $a_0$ be the intersection point of $\ell_1$ and $\ell'_1$, and let $b_0$ be the intersection point of $\ell_2$ and $\ell'_2$, then both $\tau$ and $\tau'$ map $a_0$ to $b_0$, and $h^*_{a_0,b_0}$ is contained in $\Sigma$. (See Figure~\ref{uvuv2}.) Indeed, $\tau'$ lies on some parabola $h^*_{p,q}$ through $\tau$ which is contained in $\Sigma$, and $\tau$ lies on some parabola $h^*_{p',q'}$ through $\tau'$ which is also contained in $\Sigma$. 
Since a pair of distinct $h$-parabolas meet in at most one point, the two parabolas must coincide, so $p=p'$ and $q=q'$. However, by construction, $p$ lies on $\ell_1$ and $p'$ lies on $\ell'_1$, so this common point must be $a_0$, and, similarly, $q=q'=b_0$, as claimed. \begin{figure}[htbp] \begin{center} \input uvuv2.pstex_t \caption{The structure of $\tau$ and $\tau'$ on a common special surface $\Sigma$.} \label{uvuv2} \end{center} \end{figure} Since the preceding analysis applies to any pair of distinct rotations on a common special surface $\Sigma$, it follows that we can associate with $\Sigma$ a common direction $w$ and a common shift $\delta$, so that for each $\tau\in\Sigma$ there exist two lines $\ell$, $\ell'$, where $\tau$ maps $\ell$ to $\ell'$, so that the angle bisector between these lines is in direction $w$, and $\tau$ is the unique rigid motion, obtained by rotating $\ell$ to $\ell'$ around their intersection point $\ell\cap\ell'$, and then shifting $\ell'$ along itself by a distance whose projection in direction $w$ is $\delta$. The fact that the shifts of any pair of rotations on $\Sigma$ have the same $w$-component follows from the fact that they both map the intersection point $a_0$ of their source lines to the intersection point $b_0$ of their target lines; consult Figure~\ref{uvuv2}. Let $\Sigma$ be a special surface, generated by $H(a,b;u,v)$; that is, $\Sigma$ is the union of all parabolas of the form $h^*_{a+tu,b+tv}$, for $t\in\reals$. Let $\tau_0$ be the common rotation to all these parabolas, so it maps the line $\ell_0 = \{a+tu \mid t\in\reals\}$ to the line $\ell'_0 = \{b+tv \mid t\in\reals\}$, so that every point $a+tu$ is mapped to $b+tv$. Let $h^*_{c,d}$ be a parabola contained in $\Sigma$ but not passing through $\tau_0$. Take any pair of distinct rotations $\tau_1,\tau_2$ on $h^*_{c,d}$. Then there exist two respective real numbers $t_1,t_2$, such that $\tau_i\in h^*_{a+t_iu,b+t_iv}$, for $i=1,2$. 
Thus $\tau_i$ is the unique rotation which maps $c$ to $d$ and $a_i=a+t_iu$ to $b_i=b+t_iv$. In particular, we have $|a+t_iu-c| = |b+t_iv-d|$. This in turn implies that the triangles $a_1a_2c$ and $b_1b_2d$ are congruent; see Figure~\ref{uvuv3}. \begin{figure}[htbp] \begin{center} \input uvuv3.pstex_t \caption{The geometric configuration corresponding to a parabola $h^*_{c,d}$ contained in $\Sigma$.} \label{uvuv3} \end{center} \end{figure} Given $c$, this determines $d$, up to a reflection about $\ell'_0$. We claim that $d$ has to be on the ``other side'' of $\ell'_0$, namely, be such that the triangles $a_1a_2c$ and $b_1b_2d$ are oppositely oriented. Indeed, if they were equally oriented, then $\tau_0$ would have mapped $c$ to $d$, and then $h^*_{c,d}$ would have passed through $\tau_0$, contrary to assumption. Now form the two sets \begin{eqnarray} \label{aandb} A & = & \{ p \mid \mbox{there exists $q\in S$ such that}\; h^*_{p,q} \subset \Sigma \} \\ B & = & \{ q \mid \mbox{there exists $p\in S$ such that}\; h^*_{p,q} \subset \Sigma \} . \nonumber \end{eqnarray} The preceding discussion implies that $A$ and $B$ are congruent and oppositely oriented. To recap, each rotation $\tau\in\Sigma$, incident to $k\ge 2$ parabolas contained in $\Sigma$, corresponds to a pair of lines $\ell,\ell'$ with the above properties, so that $\tau$ maps $k$ points of $S\cap\ell$ (rather, of $A\cap\ell$) to $k$ points of $S\cap\ell'$ (that is, of $B\cap\ell'$). If $\tau$ is flat, its entire multiplicity comes from points of $S$ on $\ell$ (these are the points of $A\cap\ell$) which are mapped by $\tau$ to points of $S$ on $\ell'$ (these are points of $B\cap\ell'$), and all the corresponding parabolas are contained in $\Sigma$. If $\tau$ is a joint then, for any other point $p\in S$ outside $\ell$ which is mapped by $\tau$ to a point $q\in S$ outside $\ell'$, the parabola $h^*_{p,q}$ is not contained in $\Sigma$, and crosses it transversally at the unique rotation $\tau$. 
Note also that any pair of parabolas $h^*_{c_1,d_1}$ and $h^*_{c_2,d_2}$ which are contained in $\Sigma$ intersect, necessarily at the unique rotation which maps $c_1$ to $d_1$ and $c_2$ to $d_2$. This holds because $|c_1c_2|=|d_1d_2|$, as follows from the preceding discussion. \paragraph{Special surfaces are anti-rotations.} Let $\Sigma$ be a special surface, and let $A,B$ be the subsets of $S$ associated with $\Sigma$, as in (\ref{aandb}). Then there exists a single {\em anti-rotation} which maps $A$ to $B$. Conversely, any anti-rotation can be associated with a unique special surface in this manner. However, the number of incidences within a special surface may be larger than the incidence count of the anti-rotation with the appropriate variants of the $h$-parabolas: the former counts incidences between the points of $A$ (or of $B$) and the lines that they determine, while the latter only counts the size of $A$ (or of $B$). \paragraph{An alternative analysis.} Recall the equation (\ref{sigma}) of $\Sigma$ $$ E_2(Z)X - E_1(Z)Y + (E_1(Z)Q_4(Z)-E_2(Z)Q_3(Z)) = 0 , $$ where, writing $\lambda = u_1+v_1$ and $\mu = u_2+v_2$, \begin{eqnarray*} E_1(Z) & = & \lambda Z + \mu \\ E_2(Z) & = & \mu Z - \lambda . \end{eqnarray*} Now let $h^*_{a,b}$ be a parabola contained in $\Sigma$. Substituting the equations (\ref{parabola}) of $h^*_{a,b}$ into the above equation, we get $$ (\mu Z - \lambda) \biggl[ (a_1+b_1)Z^2 + 2a_2Z + (b_1-a_1) \biggr] - (\lambda Z + \mu) \biggl[ (a_2+b_2)Z^2 - 2a_1Z + (b_2-a_2) \biggr] + K(Z) \equiv 0 , $$ where $K(Z) = E_1(Z)Q_4(Z)-E_2(Z)Q_3(Z)$ is the ``free'' cubic term in the equation of $\Sigma$. A straightforward algebraic simplification of this equation yields $$ (Z^2+1) \biggl[ (\mu Z + \lambda) a_1 - (\lambda Z - \mu) a_2 + (\mu Z - \lambda) b_1 - (\lambda Z + \mu) b_2 \biggr] + K(Z) \equiv 0 . 
$$ In particular (an interesting observation in itself, albeit obvious from the definition of $X,Y,Z$), $K(Z)$ must be divisible by $Z^2+1$, with the quotient being a linear function of $Z$. Eliminating this factor, we get \begin{eqnarray*} \mu (a_1+b_1) - \lambda (a_2+b_2) & = & c_1 \\ \lambda (a_1-b_1) + \mu (a_2-b_2) & = & c_2 , \end{eqnarray*} for appropriate real numbers $c_1$, $c_2$. Now, writing $u = (\cos\alpha,\sin\alpha)$ and $v = (\cos(\alpha+\theta),\sin(\alpha+\theta))$, where $\theta$ is the angle of rotation, and observing that $$ u+v = (u_1+v_1,u_2+v_2) = (\lambda,\mu) = 2\cos\tfrac{\theta}{2} \left( \cos \left(\alpha+\tfrac{\theta}{2}\right),\; \sin \left(\alpha+\tfrac{\theta}{2}\right) \right) , $$ the containment of $h^*_{a,b}$ in $\Sigma$ is equivalent to the two conditions \begin{eqnarray*} (a+b)\cdot (u+v)^\perp & = & c'_1 \\ (a-b)\cdot (u+v) & = & c'_2 , \end{eqnarray*} for appropriate parameters $c'_1,c'_2$, where $w^\perp$ denotes the vector $w$ rotated by $90^\circ$. The geometric interpretation of the first condition is that the midpoint of $ab$ has to lie on a fixed line $\ell_0$, whose direction, $\alpha+\frac{\theta}{2}$, is parallel to the angle bisector between the lines $\ell_1,\ell_2$ (see Figure~\ref{uvuv2}). The second condition means that $b-a$ has a fixed component in the direction of $\ell_0$. In other words, $h^*_{a,b}$ is contained in $\Sigma$ if and only if $b=\varphi(a)$, where $\varphi$ is the anti-rotation obtained as a reflection about $\ell_0$ followed by a shift parallel to $\ell_0$. This constitutes an alternative derivation of the characterization of $\Sigma$ given above. \paragraph{(H12) Special surfaces and parabolas.} Finally, we study intersection patterns involving special surfaces. Let $\Sigma$ be a special surface as above, and let $\Xi$ be another $(X,Y)$-linear surface of the form $A(Z)X+B(Z)Y+C(Z)=0$. Then either $\Xi$ coincides with $\Sigma$, or there is at most one parabola contained in both of them.
Indeed, the intersection of $\Xi$ and $\Sigma$ is the curve satisfying \begin{eqnarray*} A(Z)X+B(Z)Y+C(Z) & = & 0 \\ E_2(Z)X - E_1(Z)Y + (E_1(Z)Q_4(Z)-E_2(Z)Q_3(Z)) & = & 0 . \end{eqnarray*} This is a linear system in $X$ and $Y$. Suppose first that its determinant, $A(Z)E_1(Z) + B(Z)E_2(Z)$, does not vanish identically. Then, with the exception of finitely many values of $Z$, we get a unique solution of the form $X=F(Z)$, $Y=G(Z)$, which can describe at most one parabola. If the determinant vanishes identically, then the equation of $\Xi$ can be written as $E_2(Z)X-E_1(Z)Y+D(Z)=0$, for an appropriate rational algebraic function $D(Z)$. If $\Xi$ and $\Sigma$ do intersect in a parabola, then we must have $D(Z)\equiv E_1(Z)Q_4(Z)-E_2(Z)Q_3(Z)$, so $\Xi$ and $\Sigma$ coincide. $\Box$ As a corollary, we have: \begin{lemma} \label{only1} Let $\Xi$ be an $(X,Y)$-linear surface of the above form, and let $\tau$ be a flat rotation contained in $\Xi$. Then either $\Xi$ contains at least two of the parabolas incident to $\tau$, and then it must coincide with the corresponding special surface $\Sigma$, or $\Xi$ contains at most one of these parabolas, so at least two other parabolas cross $\Xi$ at $\tau$. \end{lemma} \begin{lemma} \label{sspec} A special surface can contain at most $s$ $h$-parabolas. \end{lemma} \noindent{\bf Proof:} Let $\Xi$ be the given special surface. We claim that for each $a\in S$ there can be at most one point $b\in S$ such that $h^*_{a,b}\subset\Xi$. Indeed, suppose that there exist two such points $b_1,b_2\in S$. Since any pair of $h$-parabolas on $\Xi$ intersect, $h^*_{a,b_1}$ and $h^*_{a,b_2}$ meet at a rotation $\tau$, which maps $a$ to both $b_1$ and $b_2$, an impossibility which completes the proof. $\Box$ \begin{lemma} \label{surfpar} The number of containments between $n$ $h$-parabolas and $E$ special surfaces is $$ O(E^{2/3}n^{2/3}+E+n) . 
$$ \end{lemma} \noindent{\bf Proof:} As argued above, a special surface $\Sigma$ is characterized by an anti-rotation $\varphi_\Sigma$ in the plane, specified by a line $\ell$ and a shift $\delta$, such that $\varphi_\Sigma(a)$ is the point obtained by reflecting $a$ about $\ell$ and then by shifting the reflected point parallel to $\ell$ by distance $\delta$. Thus $\Sigma$ has three degrees of freedom, and can be parametrized by $(\alpha,\beta,\delta)$, where $y=\alpha x+\beta$ is the equation of $\ell$ and $\delta$ is the shift. We write $\Sigma(\alpha,\beta,\delta)$ to denote the special surface parametrized by $(\alpha,\beta,\delta)$. By construction, a parabola $h^*_{a,b}$ is contained in $\Sigma$ if and only if $\varphi_\Sigma(a) = b$. We use the following parametric setup. We represent each special surface $\Sigma$ by the corresponding triple $(\alpha,\beta,\delta)$, and regard it as a point in parametric 3-space. Each parabola $h^*_{a,b}$ is mapped to the locus $\tilde{h}_{a,b}$ of all (points representing) special surfaces containing $h^*_{a,b}$. This is a curve in the $(\alpha,\beta,\delta)$-space, given by the pair of scalar equations $\varphi_{\Sigma(\alpha,\beta,\delta)}(a) = b$. This is a low-degree algebraic curve, whose concrete equations can be worked out explicitly, but we skip over this step. We thus have a system of $E$ points and $n$ such curves in 3-space, and we wish to bound the number of incidences between them. We have the additional property, noted in Lemma~\ref{only1}, that two curves meet in at most one point. By projecting these points and curves onto some generic 2-plane, one can easily show that the number of incidences, and thus the number of original containments, is at most $O(E^{2/3}n^{2/3}+E+n)$, as claimed.
$\Box$ \noindent{\bf Remark.} If we represent each special surface by its corresponding anti-rotation, Lemma~\ref{surfpar} simply bounds the number of incidences between $E$ anti-rotations and $n$ (appropriately transformed copies of) $h$-parabolas, and the bound noted in (\ref{weak23}) holds here as well. \section{Tools from algebraic geometry} \label{sec:tools} We begin by reviewing and extending the basic tools from algebraic geometry which have been used in \cite{GK} and in \cite{EKS}. However, we develop them here in the context of incidences between points and our $h$-parabolas, rather than the context of points and lines considered in the previous papers. So let $C$ be a set of $n\le s^2$ $h$-parabolas in $\reals^3$. For each $h^*\in C$, we denote the plane containing $h^*$ by $\pi_{h^*}$ and its equation as $L_{h^*}=0$, where $L_{h^*}$ is a linear polynomial. We represent $h^*$ as the intersection curve of $L_{h^*}=0$ and $F_{h^*}=0$, where $F_{h^*}$ is one of the quadratic equations in (\ref{parabola}) defining $h^*$, say the first one. Note that all the parabolas of $C$ cross every plane of the form $Z={\rm const}$, each at a single point. Recalling the definitions in (H9), and similar to the case of lines, we say that a point\footnote{Recall that points in 3-space represent rotations in the plane. Later on we will mostly refer to them as rotations, but in the more abstract algebraic treatment in this section we prefer to call them points.} $a$ is a {\em joint} of $C$ if it is incident to three parabolas of $C$ whose tangents at $a$ are non-coplanar. Let $J=J_C$ denote the set of joints of $C$. We will also consider points $a$ that are incident to three or more parabolas of $C$, so that the tangents to all these parabolas are coplanar, and refer to such points as {\em flat} points of $C$. We recall (see (H9)) that any pair of distinct $h$-parabolas which meet at a point have distinct tangents there.
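As a quick symbolic sanity check (not part of the argument), one can confirm that each $h$-parabola is indeed a plane curve: in the parametric equations (\ref{parabola}), the combination $(a_2+b_2)X-(a_1+b_1)Y$ cancels the quadratic terms and is therefore linear in $Z$, so $L_{h^*}$ can be taken of this form. In SymPy:

```python
import sympy as sp

a1, a2, b1, b2, Z = sp.symbols('a1 a2 b1 b2 Z')

# Parametric equations of the h-parabola h*_{a,b}.
X = (a1 + b1)*Z**2 + 2*a2*Z + (b1 - a1)
Y = (a2 + b2)*Z**2 - 2*a1*Z + (b2 - a2)

# The quadratic terms cancel in this combination, leaving a linear
# function of Z, i.e., the equation of the plane containing h*_{a,b}.
L = sp.expand((a2 + b2)*X - (a1 + b1)*Y)
```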
First, we note that a trivariate polynomial $p$ of degree $d$ which vanishes at $2d+1$ points that lie on a common parabola $h^*\in C$ must vanish identically on $h^*$. Indeed, these points are common roots of $p$ and $F_{h^*}$, restricted to the plane $\pi_{h^*}$. By B\'ezout's theorem~\cite{Shaf}, either these restricted polynomials have a common factor, or they have at most $2d$ roots. Since $F_{h^*}$ is irreducible, it must divide the restricted $p$, so $p$ must vanish identically on $h^*$, as claimed. \paragraph{Critical points and parabolas.} A point $a$ is {\em critical} (or {\em singular}) for a trivariate polynomial $p$ if $p(a)=0$ and $\nabla p(a) = 0$; any other point $a$ in the zero set of $p$ is called {\em regular}. A parabola $h^*$ is {\em critical} if all its points are critical. The following proposition is adapted from \cite{EKS}. \begin{proposition} \label{factor} Let $f(x,y,z)$ and $g(x,y,z)$ be two trivariate polynomials, of respective degrees $k$ and $m$, so that there are $km+1$ parabolas of $C$ on which both $f$ and $g$ vanish identically. Then $f$ and $g$ have a common factor. \end{proposition} \noindent{\bf Proof.} Assume that both $f(x,y,z)$ and $g(x,y,z)$ have a positive degree in $x$; this can always be enforced by an appropriate rotation of the coordinate frame. It is then an easy exercise to show that $f$ and $g$ have a common factor if and only if their resultant, when viewing them as polynomials in $x$, is identically $0$. Recall that the resultant is a polynomial in $y$ and $z$. (The same holds when $f$ and $g$ have any number of variables, including $x$, in which case the resultant is a polynomial in the remaining variables.) For any fixed value $z_0$ of $z$, $f(x,y,z_0)$ and $g(x,y,z_0)$ have at least $km+1$ common roots (at the intersection points of the $km+1$ parabolas with $z=z_0$), so, by B\'ezout's Theorem \cite{Shaf}, they have a common factor. 
Therefore, the resultant, with respect to $x$, of $f(x,y,z_0)$ and $g(x,y,z_0)$ is identically $0$ (as a polynomial in $y$). Since this is true for every value $z_0$ of $z$, it follows that the resultant of $f(x,y,z)$ and $g(x,y,z)$, with respect to $x$, vanishes identically as a polynomial in $y$ and $z$. Therefore, $f(x,y,z)$ and $g(x,y,z)$, as trivariate polynomials, have a common factor. $\Box$ \begin{proposition} \label{prop1} Let $C$ be as above. Then any trivariate square-free polynomial $p$ of degree $d$ can have at most $d(d-1)$ critical parabolas in $C$. \end{proposition} \noindent{\bf Proof:} We prove the claim by induction on the degree $d$ of $p$. The claim holds trivially for $d=1$, so assume that $d>1$. Assume first that $p$ is irreducible. Apply Proposition~\ref{factor} to $p$ and $p_x$, say. Both polynomials vanish identically on each critical parabola, and their respective degrees are $d$ and $d-1$. If $p$ had more than $d(d-1)$ critical parabolas then $p$ and $p_x$ would have a common factor, which is impossible since $p$ is irreducible. Suppose next that $p$ is reducible (but square-free), and write $p=fg$, so that $f$ and $g$ are non-constant polynomials which have no common factor (since $p$ is square-free, this can always be done). Denote the degrees of $f$ and $g$ by $d_f$ and $d_g$, respectively; we have $d=d_f+d_g$. Let $h^*$ be a critical parabola for $p$. Then either $f\equiv 0$ on $h^*$ or $g\equiv 0$ on $h^*$ (or both). Moreover, since $\nabla p = f\nabla g + g\nabla f \equiv 0$ on $h^*$, it is easily checked that $h^*$ must satisfy (at least) one of the following properties: \noindent (i) $f\equiv g\equiv 0$ on $h^*$. \noindent (ii) $h^*$ is a critical parabola of $f$. \noindent (iii) $h^*$ is a critical parabola of $g$. Indeed, if (i) does not hold, we have, without loss of generality, $f\equiv 0$ on $h^*$, but $g$ vanishes only at finitely many points of $h^*$. 
At any other point $a$ of $h^*$ we then must have $\nabla f(a) = 0$, which implies that $\nabla f$ is identically zero on $h^*$, so $h^*$ is critical for $f$. This implies (ii); (iii) holds in the symmetric case where $g\equiv 0$ on $h^*$ but $f$ does not vanish identically on $h^*$. By the induction hypothesis, the number of critical parabolas for $f$ is at most $d_f(d_f-1)$, and the number of critical parabolas for $g$ is at most $d_g(d_g-1)$. Consider the parabolas that satisfy (i), and intersect each of them with a generic plane $z=z_0$, as in the proof of Proposition~\ref{factor}. All the intersection points are roots of $f=0$ and $g=0$ on this plane, and, as follows from the proof of Proposition~\ref{factor}, these bivariate polynomials have no common factor (or, more precisely, they can have a common factor only at finitely many values of $z$). Hence, by B\'ezout's theorem, they have at most $d_fd_g$ common roots. Altogether, the number of critical parabolas for $p$ is at most $$ d_f(d_f-1) + d_g(d_g-1) + d_fd_g < d(d-1) . $$ $\Box$ \begin{proposition} \label{prop2} Let $a$ be a regular point of $p$, so that $p\equiv 0$ on three parabolas of $C$ passing through $a$. Then these parabolas must have coplanar tangents at $a$. \end{proposition} \noindent{\bf Proof:} Any such tangent line must be contained in the tangent plane to $p=0$ at $a$. $\Box$ Hence, a point $a$ incident to three parabolas of $C$ whose tangent lines at $a$ are non-coplanar, so that $p\equiv 0$ on each of these parabolas, must be a critical point of $p$. \begin{proposition} \label{prop4} Given a set $S$ of $m$ points in 3-space, there exists a nontrivial trivariate polynomial $p(x,y,z)$ which vanishes at all the points of $S$, of degree $d$, for any $d$ satisfying $\binom{d+3}{3} > m$. \end{proposition} \noindent{\bf Proof:} (See \cite{EKS,GK}.)
A trivariate polynomial of degree $d$ has $\binom{d+3}{3}$ monomials, and requiring it to vanish at $m$ points yields $m$ homogeneous linear equations in the coefficients of these monomials. Such an underdetermined system always has a nontrivial solution. $\Box$ \paragraph{Flat points and parabolas.} Call a regular point $\tau$ of a trivariate polynomial $p$ {\em geometrically flat} if it is incident to three distinct parabolas of $C$ (with necessarily coplanar tangent lines at $\tau$, no pair of which are collinear) on which $p$ vanishes identically.\footnote{Compare this definition with the one in \cite{EKS} (see also~\cite{GK}), where a geometrically flat point was defined as a point incident to at least three vanishing {\em lines}, all coplanar.} Let $\tau$ be a geometrically flat point of $p$, and let $h^*_1,h^*_2,h^*_3\in C$ be three incident parabolas on which $p$ vanishes. Let $\ttt_i$ denote the tangent line to $h^*_i$ at $\tau$, and let $v_i$ denote a unit vector in the direction of $\ttt_i$, for $i=1,2,3$. The second-order Taylor expansion of $p$ at $\tau$ has the form $$ q(\tau+w) = p(\tau) + \nabla p(\tau)\cdot w + \frac12 w^TH_p(\tau)w = \nabla p(\tau)\cdot w + \frac12 w^TH_p(\tau)w , $$ for any vector $w$, where the second equality holds since $p(\tau)=0$, and where $$ H_p(\tau) = \left( \begin{array}{ccc} p_{xx} & p_{xy} & p_{xz} \\ p_{xy} & p_{yy} & p_{yz} \\ p_{xz} & p_{yz} & p_{zz} \\ \end{array} \right) $$ is the {\em Hessian} matrix of $p$. $q$ is a quadratic polynomial (in $w$) which approximates $p$ up to third order terms for sufficiently small values of $|w|$. Our goal is to construct, using this approximation and the fact that $p\equiv 0$ on three parabolas incident to $\tau$, as above, a new polynomial, depending on $p$, which vanishes at $\tau$, and use this vanishing as a characterization of flat points. To do so, we need to make the analysis more specific, and tailor it to the special form of $h$-parabolas.
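As a quick symbolic check of this quadratic approximation (illustrative only; the sample polynomial and expansion point are arbitrary), one can verify that $p(\tau+w)-q(\tau+w)$ consists solely of terms of total degree at least $3$ in $w$:

```python
import sympy as sp

X, Y, Z = sp.symbols('X Y Z')
w1, w2, w3 = sp.symbols('w1 w2 w3')

# An arbitrary sample polynomial and expansion point tau.
p = X**3 - 2*X*Y*Z + Y**2 + Z**3 - 5*X + 1
tau = {X: 1, Y: -2, Z: 3}

gens = [X, Y, Z]
w = sp.Matrix([w1, w2, w3])
grad = sp.Matrix([sp.diff(p, v).subs(tau) for v in gens])
H = sp.hessian(p, gens).subs(tau)

# q(tau + w) = p(tau) + grad.w + (1/2) w^T H w
q = p.subs(tau) + (grad.T * w)[0] + sp.Rational(1, 2) * (w.T * H * w)[0]
shifted = p.subs({v: tau[v] + wi for v, wi in zip(gens, [w1, w2, w3])})

# Every surviving monomial has total degree >= 3 in w.
rem = sp.expand(shifted - q)
assert all(sum(m) >= 3 for m in sp.Poly(rem, w1, w2, w3).monoms())
```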
Let $\tau$ be a flat point, and let $a,b,u,v$ be the corresponding parameters in the $xy$-plane (so $\tau$ maps $a+tu$ to $b+tv$ for each $t\in\reals$; cf.~Remark (1) at the end of (H9)). Let $\Sigma=\Sigma(a,b;u,v)$ be the corresponding special surface spanned by the parabolas $h^*_{a+tu,b+tv}$, for all $t$ (here we vary $t$ continuously, but only finitely many corresponding parabolas belong to $C$). Since $\tau$ is flat, there exist at least three parabolas $h^*_{a+t_iu,b+t_iv}$, $i=1,2,3$ (all belonging to $C$, contained in $\Sigma$, and passing through $\tau$), such that $p\equiv 0$ on each of them. Let $q$ denote, as above, the quadratic polynomial which is the second-order Taylor expansion of $p$ at $\tau$. Let $h^*=h^*_{a+tu,b+tv}$ be one of the above parabolas on which $p$ vanishes identically. For $\tau'$ in the vicinity of $\tau$, $p(\tau')-q(\tau') = O(|\tau'-\tau|^3)$, so, for points $\tau'$ near $\tau$ on $h^*$, we have $q(\tau') = O(|\tau'-\tau|^3)$. Let us continue to consider only points $\tau'$ on $h^*$. Let $(X_0,Y_0,Z_0)$ (resp., $(X,Y,Z)$) be the coordinates of $\tau$ (resp., $\tau'$). The equations of $h^*$ (see (\ref{parabola})) are \begin{eqnarray*} X & = & (a_1+b_1+tu_1+tv_1)Z^2 + 2(a_2+tu_2)Z + (b_1-a_1+tv_1-tu_1) \\ Y & = & (a_2+b_2+tu_2+tv_2)Z^2 - 2(a_1+tu_1)Z + (b_2-a_2+tv_2-tu_2) , \end{eqnarray*} so we have \begin{eqnarray*} X - X_0 & = & (Z-Z_0) \left( (a_1+b_1+tu_1+tv_1)(Z+Z_0) + 2(a_2+tu_2) \right) \\ Y - Y_0 & = & (Z-Z_0) \left( (a_2+b_2+tu_2+tv_2)(Z+Z_0) - 2(a_1+tu_1) \right) , \end{eqnarray*} which we can further rewrite as \begin{eqnarray*} X - X_0 & = & 2(Z-Z_0) \left( (a_1+b_1+tu_1+tv_1)Z_0 + (a_2+tu_2) \right) + (Z-Z_0)^2 \left( a_1+b_1+tu_1+tv_1 \right) \\ Y - Y_0 & = & 2(Z-Z_0) \left( (a_2+b_2+tu_2+tv_2)Z_0 - (a_1+tu_1) \right) + (Z-Z_0)^2 \left( a_2+b_2+tu_2+tv_2 \right) . 
\end{eqnarray*} Let us simplify these equations as \begin{eqnarray*} X-X_0 & = & 2(Z-Z_0)A(t) + (Z-Z_0)^2C(t) \\ Y-Y_0 & = & 2(Z-Z_0)B(t) + (Z-Z_0)^2D(t) , \end{eqnarray*} where $A(t)$, $B(t)$, $C(t)$, and $D(t)$ are all linear functions of $t$. If we substitute these equations into the equation of $q$, assume that $Z$ is very close to $Z_0$, ignore terms which are at least cubic in $Z-Z_0$, and use the fact that $q(\tau')=O(|\tau'-\tau|^3)$ for any $\tau'$ on $h^*$ sufficiently close to $\tau$, we conclude that both the linear and the quadratic parts of $q(\tau')$ (in $Z-Z_0$) vanish identically. The linear part is $$ (Z-Z_0)\nabla p(\tau)\cdot (2A(t),2B(t),1) , $$ and the quadratic part is $$ (Z-Z_0)^2 \left( \nabla p(\tau)\cdot (C(t),D(t),0) + \frac12 (2A(t),2B(t),1)^TH_p(\tau)(2A(t),2B(t),1) \right) . $$ Hence we have \begin{eqnarray*} \nabla p(\tau)\cdot (2A(t),2B(t),1) & = & 0 , \\ \nabla p(\tau)\cdot (C(t),D(t),0) + \frac12 (2A(t),2B(t),1)^TH_p(\tau)(2A(t),2B(t),1) & = & 0 . \end{eqnarray*} Note that both equations vanish for (at least) three distinct values of $t$. Since the first equation is linear in $t$ and the second is quadratic in $t$, all the coefficients of both equations are identically zero. Let us restrict ourselves to the coefficient of the linear term in the first equation and of the quadratic term in the second one. Denote by $\alpha$ (resp., $\beta$) the coefficient of $t$ in $A(t)$ (resp., $B(t)$). Then we have \begin{eqnarray*} \alpha p_X(\tau) + \beta p_Y(\tau) & = & 0 \\ \alpha^2 p_{XX}(\tau) + 2\alpha\beta p_{XY}(\tau) + \beta^2 p_{YY}(\tau) & = & 0 . \end{eqnarray*} It is easily seen that $\alpha$ and $\beta$ cannot both be zero (assuming a generic coordinate frame in the original $xy$-plane), so, eliminating them gives \begin{equation} \label{sofsof} p_Y^2(\tau) p_{XX}(\tau) - 2p_X(\tau)p_Y(\tau) p_{XY}(\tau) + p_X^2(\tau) p_{YY}(\tau) = 0 , \end{equation} which is the constraint we were after. 
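The elimination can also be checked mechanically: the linear condition forces $(\alpha,\beta)$ to be proportional to $(p_Y(\tau),-p_X(\tau))$, and substituting this into the quadratic condition yields (\ref{sofsof}). A SymPy verification (illustrative only):

```python
import sympy as sp

pX, pY, pXX, pXY, pYY = sp.symbols('p_X p_Y p_XX p_XY p_YY')
al, be = sp.symbols('alpha beta')

lin = al*pX + be*pY                            # linear coefficient condition
quad = al**2*pXX + 2*al*be*pXY + be**2*pYY     # quadratic coefficient condition

# lin = 0 forces (alpha, beta) proportional to (p_Y, -p_X); substitute.
constraint = sp.expand(quad.subs({al: pY, be: -pX}))
```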
In what follows, we refer to the left-hand side of (\ref{sofsof}) as $\Pi(p)$. That is, $$ \Pi(p) = p_Y^2 p_{XX} - 2p_Xp_Y p_{XY} + p_X^2 p_{YY} , $$ and this polynomial has to vanish at $\tau$. We have thus shown: \begin{proposition} \label{flat} Let $p$ be a trivariate polynomial. If $\tau$ is a regular geometrically flat point of $p$ (with respect to three parabolas of $C$) then $\Pi(p)(\tau) = 0$. \end{proposition} \noindent{\bf Remark.} Note that the left-hand side of (\ref{sofsof}) is one of the three polynomials $\Pi_i(p)$ used in \cite{EKS} to analyze flat points in a 3-dimensional line arrangement. Specifically, $$ \Pi(p) = (e_3\times\nabla p)^TH_p(e_3\times\nabla p) , $$ where $e_3$ is the unit vector in the $z$-direction; the other two polynomials are defined analogously, using the other two coordinate vectors $e_1$, $e_2$. These polynomials form the {\em second fundamental form} of $p$; see \cite{EKS,GK} for details. In particular, if the degree of $p$ is $d$ then the degree of $\Pi(p)$ is at most $(d-1)+(d-1)+(d-2) = 3d-4$. In what follows, we call a point $\tau$ {\em flat} for $p$ if $\Pi(p)(\tau) = 0$. We will need the following technical lemma. \begin{lemma} \label{plinxy} Let $p$ be an irreducible trivariate polynomial, with the properties that (i) $\Pi(p)(\tau)=0$ at each regular point $\tau$ of $p=0$, and (ii) $p\equiv 0$ on at least two distinct intersecting $h$-parabolas of $C$. Then $p$ is a special polynomial. \end{lemma} (Note that the converse of the lemma is trivial, because the second-order derivatives $p_{XX}$, $p_{XY}$, and $p_{YY}$ are all identically zero for a special polynomial $p$, and because of the way such polynomials are constructed.) \noindent{\bf Proof:} Fix $Z=Z_0$ and consider the restricted bivariate polynomial $\tilde{p}(X,Y) = p(X,Y,Z_0)$. Clearly, $\Pi(\tilde{p}) = \Pi(p)$ on the plane $\pi_0:\;Z=Z_0$. Hence $\Pi(\tilde{p})=0$ at each regular point $\tau\in\pi_0$ of $p=0$, and thus at each regular point of $\tilde{p}$. 
(Note that a regular point of $\tilde{p}$ is also a regular point of $p$, although the converse need not be true.) Note also that $\tilde{p}$ is an irreducible polynomial, except possibly for finitely many values of $Z_0$. As is well known~\cite{Gra,Pre}, the curvature of the plane curve $\tilde{p}(X,Y)=0$, at a regular point of $\tilde{p}$, is given by $$ \kappa = \frac{\tilde{p}_Y^2\tilde{p}_{XX} - 2\tilde{p}_X\tilde{p}_Y\tilde{p}_{XY} + \tilde{p}_X^2\tilde{p}_{YY}}{(\tilde{p}_X^2+\tilde{p}_Y^2)^{3/2}} . $$ Hence this curve has zero curvature at every regular point of $\tilde{p}$, and thus, being the zero set of an irreducible polynomial, it must be a single line. In other words, $p$ is linear in $X$ and $Y$ for every fixed $Z$, except for finitely many values, implying that its equation is of the form $p(X,Y,Z) = A(Z)X+B(Z)Y+C(Z)$, where $A(Z)$, $B(Z)$ and $C(Z)$ are univariate polynomials. We now exploit assumption (ii), denoting by $\Sigma$ the unique special surface determined by (and containing) the two given $h$-parabolas. The analysis in (H12) then implies that $\Sigma$ coincides with the zero set of $p$, so $p$ is indeed a special polynomial, as claimed. $\Box$ Call an $h$-parabola $h^*\in C$ {\em flat} for $p$ if all the points of $h^*$ are flat points of $p$ (with the possible exception of a discrete subset). Arguing as in the case of critical points, if $h^*$ contains more than $2(3d-4)$ flat points then $h^*$ is a flat parabola. As in \cite{EKS,GK}, we next show that, in general, trivariate polynomials do not have too many flat parabolas. As before, we first establish this property for irreducible polynomials, and then extend the analysis to more general polynomials. \begin{proposition} \label{lflat} Let $p$ be an irreducible trivariate polynomial of degree $d$, which is not a special polynomial. Then $p$ can have at most $3d^2-4d$ flat $h$-parabolas of $C$. 
\end{proposition} \noindent{\bf Proof:} Suppose to the contrary that there are more than $3d^2-4d$ flat $h$-parabolas. As above, restrict $p$ and $\Pi(p)$ to a fixed plane $\pi_0$ of the form $Z=Z_0$. The number of common roots of $p$ and $\Pi(p)$ on $\pi_0$ exceeds the product of their degrees. Since this holds for every $Z_0$, Proposition~\ref{factor} implies that they must have a common factor. Since $p$ is irreducible, $p$ must be a factor of $\Pi(p)$. This implies that all the (regular) points at which $p$ vanishes are flat. Hence, by Lemma~\ref{plinxy}, $p$ must be a special polynomial, a contradiction which completes the proof of the asserted bound. $\Box$ \begin{proposition} \label{lflatx} Let $p$ be any trivariate square-free polynomial of degree $d$ with no special polynomial factors. Then $p$ can have at most $d(3d-4)$ flat $h$-parabolas in $C$. \end{proposition} \noindent{\bf Proof:} If $p$ is irreducible, the claim holds by Proposition~\ref{lflat}. Otherwise, write $p=fg$ where $f$ and $g$ are non-constant polynomials with no common factors (and no special polynomial factors). Let $d_f$ and $d_g$ denote their respective degrees, so $d=d_f+d_g$. Let $\tau$ be a regular flat point of $p$. Then either $f(\tau)=g(\tau)=0$, or exactly one of $f(\tau)$, $g(\tau)$ vanishes. Hence, if $h^*$ is a flat parabola for $p$ then either both $f$ and $g$ vanish identically on $h^*$ or exactly one of them vanishes identically on $h^*$, while the other has only finitely many zeroes on $h^*$. Now, as already argued in the proof of Proposition~\ref{prop1}, there are at most $d_fd_g$ parabolas of the former kind. To handle parabolas of the latter kind, consider a regular point $\tau$ of $p$ at which $f=0$ but $g$ is nonzero. A simple calculation yields: \begin{eqnarray*} p_X & = & f_Xg + fg_X \\ p_Y & = & f_Yg + fg_Y \\ p_{XX} & = & f_{XX}g + 2f_Xg_X + fg_{XX} \\ p_{XY} & = & f_{XY}g + f_Xg_Y + f_Yg_X + fg_{XY} \\ p_{YY} & = & f_{YY}g + 2f_Yg_Y + fg_{YY} .
\end{eqnarray*} Hence, at $\tau$ we have \begin{eqnarray*} p_X(\tau) & = & f_X(\tau)g(\tau) \\ p_Y(\tau) & = & f_Y(\tau)g(\tau) \\ p_{XX}(\tau) & = & f_{XX}(\tau)g(\tau) + 2f_X(\tau)g_X(\tau) \\ p_{XY}(\tau) & = & f_{XY}(\tau)g(\tau) + f_X(\tau)g_Y(\tau) + f_Y(\tau)g_X(\tau) \\ p_{YY}(\tau) & = & f_{YY}(\tau)g(\tau) + 2f_Y(\tau)g_Y(\tau) , \end{eqnarray*} and therefore we have at $\tau$, as is easily checked, $$ \Pi(p)(\tau) = g^3(\tau)\Pi(f)(\tau) . $$ That is, a regular flat point for $p$, at which $f=0$ but $g$ is nonzero, is a regular flat point for $f$, and a symmetric statement holds when $g=0$ but $f$ is nonzero. Hence, any flat parabola of the latter kind is either a flat parabola for $f$ or a flat parabola for $g$. Arguing by induction on the degree, the number of flat parabolas for $p$ is thus at most $$ 3d_f^2-4d_f + 3d_g^2-4d_g + d_fd_g < 3d^2-4d , $$ and the proposition follows. $\Box$ \section{Joint and flat rotations in a set of $h$-parabolas in $\reals^3$} \label{sec:joints} In this section we extend the recent algebraic machinery of Guth and Katz~\cite{GK}, as further developed by Elekes et al.~\cite{EKS}, using the algebraic tools set forth in the preceding section, to establish the bound $O(n^{3/2})=O(s^3)$ on the number of rotations with multiplicity at least $3$ in a collection of $n$ $h$-parabolas. \begin{theorem} \label{thm:gk1} Let $C$ be a set of at most $n$ $h$-parabolas in $\reals^3$, and let $P$ be a set of $m$ rotations, each of which is incident to at least three parabolas of $C$. Suppose further that no special surface contains more than $q$ parabolas of $C$. Then $m=O(n^{3/2}+nq)$. \end{theorem} \noindent{\bf Remarks.} {\bf (1)} The recent results of \cite{KSS,Qu} imply that the number of joints in a set of $n$ $h$-parabolas is $O(n^{3/2})$. The proofs in \cite{KSS,Qu} are much simpler than the proof given below, but they do not apply to flat points (rotations) as does Theorem~\ref{thm:gk1}.
Since flat rotations are an integral part of the setup considered in this paper, we need to count them too, using the stronger Theorem~\ref{thm:gk1}. Moreover, even if we were to consider only joint rotations, the analysis of their incidences with the $h$-parabolas will turn some of them into flat rotations (by pruning some of the parabolas), so, as in \cite{EKS}, we will need to face flat rotations, no matter what. \noindent {\bf (2)} By Lemma~\ref{sspec}, we always have $q\le s$, and we also have $n^{1/2} \le s$, so the ``worst-case'' bound on $m$ is $O(ns)$. \noindent {\bf (3)} Note that the parameter $n$ in the statement of the theorem is arbitrary, not necessarily the maximum number $s^2$. When $n$ attains its maximum possible value $s^2$, the bound becomes $m=O(n^{3/2})=O(s^3)$. The proof of Theorem \ref{thm:gk1} uses the proof technique of \cite{EKS}, properly adapted to the present, somewhat more involved context of $h$-parabolas and rotations. \noindent{\bf Proof.} We first prove the theorem under the additional assumption that $q = n^{1/2}$. The proof proceeds by induction on $n$, and shows that $m \le An^{3/2}$, where $A$ is a sufficiently large constant whose choice will be dictated by the forthcoming analysis. The statement holds for all $n\le n_0$, for some constant $n_0$, if we choose $A$ to be sufficiently large. Fix $n>n_0$, and suppose that the claim holds for all $n'<n$. Let $C$ and $P$ be as in the statement of the theorem, with $|C|=n$, and suppose to the contrary that $|P| > An^{3/2}$. We first apply the following iterative pruning process to $C$. As long as there exists a parabola $h^*\in C$ incident to fewer than $cn^{1/2}$ rotations of $P$, for some constant $1\le c\ll A$ that we will fix later, we remove $h^*$ from $C$, remove its incident rotations from $P$, and repeat this step with respect to the reduced set of rotations. In this process we delete at most $cn^{3/2}$ rotations. 
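In combinatorial terms this is a standard degree-pruning loop; removing one parabola (with its incident rotations) may push other parabolas below the threshold, so the process iterates until it stabilizes. A generic sketch (illustrative only; the incidence structure and threshold are hypothetical toy data):

```python
def prune(parabolas, rotations, threshold):
    """Iteratively discard every parabola incident to fewer than
    `threshold` surviving rotations, together with all of its
    incident rotations.  `parabolas` maps a parabola id to the set
    of its incident rotation ids."""
    parabolas = {h: set(rs) for h, rs in parabolas.items()}
    rotations = set(rotations)
    changed = True
    while changed:
        changed = False
        for h in list(parabolas):
            live = parabolas[h] & rotations
            if len(live) < threshold:
                rotations -= live    # delete the incident rotations
                del parabolas[h]     # delete the parabola itself
                changed = True
    return parabolas, rotations
```

With threshold $cn^{1/2}$, each of the at most $n$ removals deletes fewer than $cn^{1/2}$ rotations, which gives the $cn^{3/2}$ bound stated above.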
We are thus left with at least $(A-c)n^{3/2}$ of the original rotations, with each surviving parabola incident to at least $cn^{1/2}$ surviving rotations, and each surviving rotation incident to at least three surviving parabolas. For simplicity, continue to denote these sets as $C$ and $P$. Choose a random sample $C^s$ of parabolas from $C$, by picking each parabola independently with probability $t$, where $t$ is a small constant that we will fix later. The expected number of parabolas that we choose is $tn_1 \le tn$, where $n_1$ is the number of parabolas remaining after the pruning. We have $n_1 = \Omega(n^{1/2})$, because each surviving parabola is incident to at least $cn^{1/2}$ surviving rotations, each incident to at least two other surviving parabolas; since all these parabolas are distinct (recall that a pair of parabolas can meet in at most one rotation point), we have $n_1 \ge 2cn^{1/2}$. Hence, using Chernoff's bound, as in \cite{EKS} (see, e.g., \cite{AS}), we obtain that, with positive probability, (a) $|C^s| \le 2tn$. (b) Each parabola $h^*\in C$ contains at least $\frac12 ctn^{1/2}$ rotations that lie on parabolas of $C^s$. (To see (b), take a parabola $h^*\in C$ and a rotation $\tau\in P\cap h^*$. Note that $\tau$ will be incident to a parabola of $C^s$ with probability at least $t$, so the expected number of rotations in $P\cap{h^*}$ which lie on parabolas of $C^s$ is at least $ctn^{1/2}$. This, combined with Chernoff's bound, implies (b).) We assume that $C^s$ does indeed satisfy (a) and (b), and then (recalling that $c\ge 1$) choose $n^{1/2}$ arbitrary rotations on each parabola in $C^s$, to obtain a set $S$ of at most $2tn^{3/2}$ rotations.
Applying Proposition~\ref{prop4}, we obtain a nontrivial trivariate polynomial $p(X,Y,Z)$ which vanishes at all the rotations of $S$, whose degree is at most the smallest integer $d$ satisfying $\binom{d+3}{3} \ge |S| + 1$, so $$ d \le \lceil (6|S|)^{1/3} \rceil \le (12t)^{1/3}n^{1/2} + 1 \le 2(12t)^{1/3}n^{1/2} , $$ for $n$ (i.e., $n_0$) sufficiently large. Without loss of generality, we may assume that $p$ is square-free---by removing repeated factors, we get a square-free polynomial which vanishes on the same set as the original $p$, with the same upper bound on its degree. The polynomial $p$ vanishes on $n^{1/2}$ points on each parabola in $C^s$. This number is larger than $2d$, if we choose $t$ sufficiently small so as to satisfy $4(12t)^{1/3} < 1$. Hence $p$ vanishes identically on all these parabolas. Any other parabola of $C$ meets at least $\frac12 ctn^{1/2}$ parabolas of $C^s$, at distinct points, and we can make this number also larger than $2d$, with an appropriate choice of $t$ and $c$ (we need to ensure that $ct > 8(12t)^{1/3}$). Hence, $p$ vanishes identically on each parabola of $C$. We will also later need the property that each parabola of $C$ contains at least $9d$ points of $P$; that is, we require that $cn^{1/2} > 9d$, which will hold if $c>18(12t)^{1/3}$. To recap, the preceding paragraphs impose several inequalities on $c$ and $t$, and a couple of additional similar inequalities will be imposed later on. All these inequalities are easy to satisfy by choosing $t<1$ to be a sufficiently small positive constant, and $c$ a sufficiently large constant. (These choices will also affect the choice of $A$---see below.) We note that $p$ can have at most $d/3$ special polynomial factors (since each of them is a cubic polynomial); i.e., $p$ can vanish identically on at most $d/3$ respective special surfaces $\Xi_1,\ldots,\Xi_k$, for $k\le d/3$. 
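To make the constraints on $t$ and $c$ collected above concrete, here is one admissible (and unoptimized) choice: take $t$ with $(12t)^{1/3} = 1/8$, i.e., $t = 1/6144$; then $4(12t)^{1/3} = 1/2 < 1$, and the remaining two constraints become $c > 18(12t)^{1/3} = 9/4$ and $ct > 8(12t)^{1/3} = 1$, both of which hold, say, for $c = 8192$. (This choice also satisfies the inequality $32(12t)^{2/3} < 1$ imposed below, since then $32(12t)^{2/3} = 1/2$.)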
We factor out all these special polynomial factors from $p$, and let $\tilde{p}$ denote the resulting polynomial, which is a square-free polynomial without any special polynomial factors, of degree at most $d$. Consider one of the special surfaces $\Xi_i$, and let $t_i$ denote the number of parabolas contained in $\Xi_i$. Then any rotation on $\Xi_i$ is either an intersection point of (at least) two of these parabolas, or it lies on at most one of them. The number of rotations of the first kind is $O(t_i^2)$. Any rotation $\tau$ of the second kind is incident to at least one parabola of $C$ which crosses $\Xi_i$ transversally at $\tau$. We note that each $h$-parabola $h^*$ can cross $\Xi_i$ in at most three points. Indeed, substituting the equations of $h^*$ into the equation $E_2(Z)X-E_1(Z)Y+K(Z)=0$ of $\Xi_i$ (see (\ref{sigma})) yields a cubic equation in $Z$, with at most three roots. Hence, the number of rotations of the second kind is $O(n)$, and the overall number of rotations on $\Xi_i$ is $O(t_i^2+n) = O(n)$, since we have assumed in the present version of the proof that $t_i \le n^{1/2}$. Summing the bounds over all surfaces $\Xi_i$, we conclude that altogether they contain $O(nd)$ rotations, which we bound by $bn^{3/2}$, for some absolute constant $b$. We remove all these vanishing special surfaces, together with the rotations and the parabolas which are fully contained in them, and let $C_1\subseteq C$ and $P_1\subseteq P$ denote, respectively, the set of those parabolas of $C$ (rotations of $P$) which are not contained in any of the vanishing surfaces $\Xi_i$. Note that there are still at least three parabolas of $C_1$ incident to any remaining rotation in $P_1$, since none of the rotations of $P_1$ lie in any surface $\Xi_i$, so all parabolas incident to such a rotation are still in $C_1$. Clearly, $\tilde{p}$ vanishes identically on every $h^*\in C_1$. 
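(Spelling out the bound $bn^{3/2}$ obtained above: there are $k \le d/3$ vanishing special surfaces, each containing $O(t_i^2+n) = O(n)$ rotations, so together they contain $O(nd)$ rotations, and since $d \le 2(12t)^{1/3}n^{1/2}$ this is at most $bn^{3/2}$ for a suitable constant $b$, depending on the choice of $t$.)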
Furthermore, every $h^* \in C_1$ contains at most $d$ points in the surfaces $\Xi_i$, because, as just argued, it crosses each surface $\Xi_i$ in at most three points. Note that this also holds for every parabola $h^*$ in $C\setminus C_1$, if we only count intersections of $h^*$ with surfaces $\Xi_i$ which do not fully contain $h^*$. Hence, each $h^*\in C_1$ contains at least $8d$ rotations of $P_1$. Since each of these rotations is incident to at least three parabolas in $C_1$, each of these rotations is either critical or geometrically flat for $\tilde{p}$. Consider a parabola $h^* \in C_1$. If $h^*$ contains more than $2d$ critical rotations then $h^*$ is a critical parabola for $\tilde{p}$. By Proposition~\ref{prop1}, the number of such parabolas is at most $d(d-1)$. Any other parabola $h^*\in C_1$ contains more than $6d$ geometrically flat points and hence $h^*$ must be a flat parabola for $\tilde{p}$. By Proposition \ref{lflatx}, the number of such parabolas is at most $d(3d-4)$. Summing up we obtain $$ |C_1| \le d(d-1)+d(3d-4) < 4d^2 . $$ We require that $4d^2 < n/2$; that is, $32(12t)^{2/3} < 1$, which can be guaranteed by choosing $t$ sufficiently small. We next want to apply the induction hypothesis to $C_1$, with the parameter $4d^2$ (which dominates the size of $C_1$). For this, we first need to argue that each special surface contains at most $(4d^2)^{1/2} = 2d$ parabolas of $C_1$. Indeed, let $\Xi$ be a special surface. Using (\ref{sigma}), eliminate, say, $Y$ from the equation of $\Xi$ and substitute the resulting expression into the equation of $\tilde{p}$, to obtain a bivariate polynomial $\tilde{p}_0(X,Z)$. Let $h^*$ be a parabola of $C_1$ contained in $\Xi$. We represent $h^*$ by its $X$-equation of the form $X=Q(Z)$, and observe that $\tilde{p}_0(X,Z)$ vanishes on the zero set of $X-Q(Z)$. Hence $\tilde{p}_0$ must be divisible by $X-Q(Z)$. 
Note that, in a generic coordinate frame in the $xy$-plane, two different parabolas cannot have the same equation $X=Q(Z)$, because this equation uniquely determines $a_1,b_1$, and $a_2$, and then, in a generic frame, $b_2$ is also uniquely determined. Note also that the degree of $\tilde{p}_0$ is at most $3d$, and that the degree of each factor $X-Q(Z)$ is $2$, implying that $\Xi$ can contain at most $3d/2$ parabolas of $C_1$. An important observation, which we will use in the proof of the general version of the theorem, is that the argument just given does not use the assumed bound on the number of $h$-parabolas contained in a special surface, but, rather, establishes this bound ``from scratch'' for the subproblem involving $P_1$ and $C_1$. That is, even if the original problem does not satisfy the extra assumption in the restricted version, the subproblems that it generates always do satisfy it. Hence, the maximum number of parabolas of $C_1$ contained in a special surface is at most $3d/2\le (4d^2)^{1/2}$, so, by the induction hypothesis, the number of points in $P_1$ is at most $$ A(4d^2)^{3/2} \le \frac{A}{2^{3/2}} n^{3/2} . $$ Adding up the bounds on the number of points on parabolas removed during the pruning process and on the special surfaces $\Xi_i$ (which correspond to the special polynomial factors of $p$), we obtain $$ |P| \le \frac{A}{2^{3/2}}n^{3/2} + (b+c)n^{3/2} \le An^{3/2} \ , $$ with an appropriate, final choice of $t$, $c$, and $A$. This contradicts the assumption that $|P| > A n^{3/2}$, and thus establishes the induction step for $n$, and, consequently, completes the proof of the restricted version of the theorem. \paragraph{Proof of the general version:} The proof proceeds almost exactly as the proof of the restricted version, except for the analysis of the number of rotations on the special surfaces $\Xi_i$.
As noted above, we encounter this difference only once, in handling the original problem: When we apply the induction step, we always fall into the restricted setup. By assumption, each special surface $\Xi_i$ contains at most $q$ $h$-parabolas. We modify the preceding analysis, so that each parabola is considered only once. That is, we iterate over the special surfaces in some order. When handling a surface $\Xi_i$, we consider only those $h$-parabolas that are not contained in any previously processed surface, and bound the number of rotations that they contain. Then we remove these parabolas and rotations from further considerations and go to the next surface. As argued above, a special surface $\Xi_i$ containing $t_i$ (surviving) parabolas contains at most $O(t_i^2+n)$ rotations which lie on these parabolas (and on no previously processed parabola). Summing these bounds over all special surfaces, and using the fact that $t_i\le q$ for each $i$, we get an overall bound $O(nd + q\sum_i t_i) = O(n^{3/2} + nq)$, as asserted. $\Box$ We summarize the remarks following Theorem~\ref{thm:gk1}, combined with Lemma~\ref{lem:lower}, in the following corollary. \begin{corollary} \label{roth} Let $S$ be a set of $s$ points in the plane. Then there are at most $O(s^3)$ rotations which map some (degenerate or non-degenerate) triangle spanned by $S$ to another (congruent and equally oriented) such triangle. This bound is tight in the worst case. \end{corollary} In the following section we will continue to adapt the analysis of \cite{EKS} to obtain bounds on the number of incidences between helices ($h$-parabolas) and rotations with multiplicity $\ge 3$, and, consequently, obtain bounds on $|P_{\ge k}|$, for any $k\ge 3$. 
\section{Incidences between parabolas and rotations} \label{sec:inc} In this section we further adapt the machinery of \cite{EKS} to derive an upper bound on the number of incidences between $m$ rotations and $n$ $h$-parabolas in $\reals^3$, where each rotation is incident to at least three parabolas (i.e., has multiplicity $\ge 3$). \begin{theorem} \label{inc-gen} For an underlying ground set $S$ of $s$ points in the plane, let $C$ be a set of at most $n\le s^2$ $h$-parabolas defined on $S$, and let $P$ be a set of $m$ rotations with multiplicity at least $3$ (with respect to $S$). Then $$ I(P,C) = O\left(m^{1/3}n + m^{2/3}n^{1/3}s^{1/3}\right) . $$ \end{theorem} \noindent{\bf Remark.} As easily checked, the first term dominates the second term when $m\le n^2/s$, and the second term dominates when $n^2/s < m \le ns$. In particular, the first term dominates when $n=s^2$, because we have $m=O(s^3)=O(n^2/s)$. \noindent{\bf Proof:} The proof of Theorem~\ref{inc-gen} proceeds in two steps. We first establish a bound which is independent of $m$, and then apply it to obtain the $m$-dependent bound asserted in the theorem. For the first step, we have: \begin{theorem} \label{inc-32} Let $C$ be a set of at most $n\le s^2$ $h$-parabolas defined on $S$, and let $P$ be a set of rotations with multiplicity at least $3$ with respect to $S$, such that no special surface contains more than $n^{1/2}$ parabolas of $C$. Then the number of incidences between $P$ and $C$ is $O(n^{3/2})$. \end{theorem} \noindent{\bf Proof.} Write $I=I(P,C)$ for short, and put $m=|P|$. We will establish the upper bound $I\le Bn^{3/2}$, for some sufficiently large absolute constant $B$, whose specific choice will be dictated by the various steps of the proof. Suppose then to the contrary that $I>Bn^{3/2}$ for the given $C$ and $P$. For $h^*\in C$, let $\nu(h^*)$ denote the number of rotations incident to $h^*$. We refer to $\nu(h^*)$ as the {\em multiplicity} of $h^*$.
We have $\sum_{h^*\in C} \nu(h^*) = I$. The average multiplicity of a parabola $h^*$ is $I/n$. We begin by applying the following pruning process. Put $\nu = I/(6n)$. As long as there exists a parabola $h^*\in C$ whose multiplicity is smaller than $\nu$, we remove $h^*$ from $C$, but do not remove any rotation incident to $h^*$. We keep repeating this step (without changing $\nu$), until each of the surviving parabolas has multiplicity at least $\nu$. Moreover, if, during the pruning process, some rotation $\tau$ loses $\lfloor \mu(\tau)/2\rfloor$ incident parabolas (where $\mu(\tau)$ denotes the original multiplicity of $\tau$, i.e., the number of parabolas of $C$ initially incident to it), we remove $\tau$ from $P$. This decreases the multiplicity of some parabolas, and we use the new multiplicities in the test for pruning further parabolas, but we keep using the original threshold $\nu$. When we delete a parabola $h^*$, we lose at most $\nu$ incidences with surviving rotations. When a rotation $\tau$ is removed, the number of current incidences with $\tau$ is smaller than or equal to twice the number of incidences with $\tau$ that have already been removed. Hence, the total number of incidences that were lost during the pruning process is at most $3n\nu = I/2$. Thus, we are left with a subset $P_1$ of the rotations and with a subset $C_1$ of the parabolas, so that each $h^*\in C_1$ is incident to at least $\nu=I/(6n)$ rotations of $P_1$, and each rotation $\tau\in P_1$ is incident to at least three parabolas of $C_1$ (the latter is an immediate consequence of the rule for pruning a rotation). Moreover, we have $I(P_1,C_1) \ge I/2$. It therefore suffices to bound $I(P_1,C_1)$. Let $n_1=|C_1|$. Since at least three parabolas in $C_1$ are incident to each rotation in $P_1$, it follows that each parabola in $C_1$ is incident to at most $n_1/2$ rotations of $P_1$, and therefore $I(P_1,C_1) \le n_1^2/2$. Combining this with the fact that $I(P_1,C_1) \ge I/2$, we get that $n_1 \ge B^{1/2}n^{3/4}$.
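In more detail: at most $n$ parabolas are deleted, each costing at most $\nu$ incidences with surviving rotations, for a total loss of at most $n\nu$; the incidences lost when rotations are removed amount, by the rule for removing a rotation, to at most twice the incidences already lost through parabola deletions, i.e., to at most $2n\nu$; and $3n\nu = 3n\cdot I/(6n) = I/2$. The final estimate follows by combining $I/2 \le I(P_1,C_1) \le n_1^2/2$ with the assumption $I > Bn^{3/2}$, which gives $n_1^2 \ge I > Bn^{3/2}$.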
We fix the following parameters $$ x = \frac{n_1}{n^{1/2}} \quad\quad\mbox{and}\quad\quad t = \delta \frac{n_1}{n} , $$ for an appropriate absolute constant $\delta < 1$, whose value will be fixed shortly. Clearly, $t < 1$, and we can also ensure that $x < \nu$, i.e., that $I > 6 n_1 n^{1/2}$, by choosing $B > 6$. Furthermore, since $n_1 \ge B^{1/2}n^{3/4}$ we have $x \ge B^{1/2}n^{1/4}$. We construct a random sample $C^s_1$ of parabolas of $C_1$ by choosing each parabola independently at random with probability $t$; the expected size of $C^s_1$ is $tn_1$. Now take $x$ (arbitrary) rotations of $P_1$ on each parabola of $C^s_1$ (which can always be done since $x<\nu$), to form a sample $S$ of rotations in $P_1$, of expected size at most $txn_1$. For any parabola $h^*\in C_1$, the expected number of rotations of $P_1\cap{h^*}$ which lie on parabolas of $C^s_1$ is at least $t\nu$ (each of the at least $\nu$ rotations $\tau\in P_1\cap{h^*}$ is incident to at least one other parabola of $C_1$, and the probability that this parabola is chosen in $C^s_1$ is $t$). We assume that $B$ is large enough so that ${\displaystyle t\nu = \delta\frac{n_1}{n}\frac{I}{6n} \ge \frac{\delta B}{6} \frac{n_1}{n^{1/2}} }$ is larger than $2x$ (it suffices to choose $B > 12/\delta$). Since $t\nu> 2x = \Omega({n^{1/4}})$, and the expected size of $C_1^s$ is ${\displaystyle tn_1 = \frac{\delta n_1^2}{n} \ge B\delta n^{1/2} }$, we can use Chernoff's bound to show that there exists a sample $C^s_1$ such that (i) $|C^s_1| \le 2tn_1$, and (ii) each parabola $h^*\in C_1$ contains at least $\frac12 t\nu > x$ rotations of $P_1$ which lie on parabolas of $C^s_1$. In what follows, we assume that $C^s_1$ satisfies these properties. In this case, we have $|S|\le 2txn_1$.
Now construct, using Proposition~\ref{prop4}, a nontrivial trivariate polynomial $p$ which vanishes on $S$, of smallest degree $d$ satisfying $\binom{d+3}{3} \ge |S|+1$, so \begin{eqnarray*} d & \le & \lceil (6|S|)^{1/3} \rceil \le (12txn_1)^{1/3} + 1 = (12\delta)^{1/3} \frac{n_1}{n^{1/2}} + 1 \\ & \le & 2(12\delta)^{1/3} \frac{n_1}{n^{1/2}} \end{eqnarray*} for $n$ sufficiently large (for small values of $n$ we ensure the bound by choosing $B$ sufficiently large, as before). We will choose $\delta < 1/6144$, so $x > 4d$. As above, and without loss of generality, we may assume that $p$ is square-free: factoring out repeated factors only lowers the degree of $p$ and does not change its zero set. The following properties hold: (a) Since $x>2d$, $p$ vanishes at more than $2d$ rotations on each parabola of $C^s_1$, and therefore, as already argued, it vanishes identically on each of these parabolas. (b) Each parabola $h^*\in C_1$ contains at least $\frac12 t\nu > x > 2d$ rotations which lie on parabolas of $C^s_1$. Since, as just argued, $p$ vanishes at these rotations, it must vanish identically on $h^*$. Thus, $p\equiv 0$ on every parabola of $C_1$. Before proceeding, we enforce the inequality $d^2 < \frac18 n_1$ which will hold if we choose $\delta$ so that $(12\delta)^{2/3} < 1/32$. Similarly, an appropriate choice of $\delta$ (or $B$) also ensures that $\nu > 9d$. We next consider all the special polynomial factors of $p$, and factor them out, to obtain a square-free polynomial $\tilde{p}$, of degree at most $d$, with no special polynomial factors. As in the previous analysis, $p$ can have at most $d/3$ special polynomial factors, so it can vanish identically on at most $d/3$ special surfaces $\Xi_1,\ldots,\Xi_k$, for $k\le d/3$. Let $C_2\subseteq C_1$ denote the set of those parabolas of $C_1$ which are not contained in any of the vanishing surfaces $\Xi_i$.
For each parabola $h^*\in C_2$, $\tilde{p}$ vanishes identically on $h^*$, and (as argued above) at most $d$ rotations in $P_1\cap{h^*}$ lie in the surfaces $\Xi_i$. Hence, $h^*$ contains at least $8d$ remaining rotations, each of which is either critical or flat for $\tilde{p}$, because each such point is incident to at least three parabolas (necessarily of $C_2$) on which $\tilde{p}\equiv 0$. Hence, either at least $2d$ of these rotations are critical, and then $h^*$ is a critical parabola for $\tilde{p}$, or at least $6d$ of these rotations are flat, and then $h^*$ is a flat parabola for $\tilde{p}$. Applying Propositions~\ref{prop1} and \ref{lflatx}, the overall number of parabolas in $C_2$ is therefore at most $$ d(d-1)+d(3d-4) < 4d^2 < \frac12 n_1 . $$ On the other hand, by assumption, each vanishing special surface $\Xi_i$ contains at most $n^{1/2}$ parabolas of $C$, so the number of parabolas contained in the special vanishing surfaces is at most $n^{1/2}d < \frac14 n^{1/2}x \le \frac14 n_1$, with our choice of $\delta$. Hence, the overall number of parabolas in $C_1$ is smaller than $\frac12 n_1 + \frac14 n_1 < n_1$, a contradiction that completes the proof of Theorem~\ref{inc-32}. $\Box$ \noindent{\bf Proof of Theorem~\ref{inc-gen}.} Write $I=I(P,C)$ for short. Set $\nu = cm^{1/3}$ and $\mu=cn/m^{2/3}$, for some sufficiently large constant $c$ whose value will be determined later, and apply the following pruning process. As long as there exists a parabola $h^*\in C$ whose multiplicity is smaller than $\nu$, we remove $h^*$ from $C$, but do not remove any rotation incident to $h^*$. Similarly, as long as there exists a rotation $\tau\in P$ whose multiplicity is smaller than $\mu$, we remove $\tau$ from $P$. Of course, these removals may reduce the multiplicity of some surviving rotations or parabolas, making additional rotations and parabolas eligible for removal. 
We keep repeating this step (without changing the initial thresholds $\nu$ and $\mu$), until each of the surviving parabolas has multiplicity at least $\nu$ and each of the surviving rotations has multiplicity at least $\mu$. We may assume that $\mu\ge 3$, by choosing $c$ sufficiently large and using Theorem~\ref{thm:gk1}(i). When we delete a parabola $h^*$, we lose at most $\nu$ incidences with surviving rotations. When a rotation $\tau$ is removed, we lose at most $\mu$ incidences with surviving parabolas. All in all, we lose at most $n\nu + m\mu = 2c m^{1/3}n$ incidences, and are left with a subset $P_1$ of $P$ and with a subset $C_1$ of $C$, so that each parabola of $C_1$ is incident to at least $\nu$ rotations of $P_1$, and each rotation of $P_1$ is incident to at least $\mu$ parabolas of $C_1$ (these subsets might be empty). Put $n_1=|C_1|$ and $m_1=|P_1|$. We have $I \le I(P_1,C_1) + 2c m^{1/3}n$, so it remains to bound $I(P_1,C_1)$, which we do as follows. We fix some sufficiently small positive parameter $t<1$, and construct a random sample $P_1^s\subset P_1$ by choosing each rotation of $P_1$ independently with probability $t$. The expected size of $P_1^s$ is $m_1t$, and the expected number of points of $P_1^s$ on any parabola of $C_1$ is at least $\nu t = ctm^{1/3}$. Chernoff's bound implies that, with positive probability, $|P_1^s|\le 2m_1t$, and $|P_1^s\cap h^*| \ge \frac12 ctm^{1/3}$ for every $h^*\in C_1$. We can therefore assume that $P_1^s$ satisfies all these inequalities. (For the bound to apply, $m_1$ (and $m$) must be at least some sufficiently large constant; if this is not the case, we turn the trivial bound $m_1n$ (or $mn$) on $I$ into the bound $O(m_1^{1/3}n)$ (or $O(m^{1/3}n)$) by choosing the constant of proportionality sufficiently large.)
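For the record, the two loss terms in the pruning coincide: $n\nu = cnm^{1/3}$ and $m\mu = m\cdot cn/m^{2/3} = cnm^{1/3}$, whence the stated total of $n\nu + m\mu = 2cm^{1/3}n$ lost incidences.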
Construct, using Proposition~\ref{prop4}, a nontrivial square-free trivariate polynomial $p$ which vanishes on $P_1^s$, whose degree is at most the smallest integer $d$ satisfying ${d+3\choose 3} \ge 2tm_1+1$, so $$ d \le \lceil (12tm_1)^{1/3} \rceil \le 3t^{1/3}m_1^{1/3} , $$ assuming (as above) that $m_1$ is sufficiently large. Choosing $c$ to be large enough, we may assume that $\nu t > 18d$. (This will hold if we ensure that $ct > 54t^{1/3}$.) This implies that $p$ vanishes at more than $9d$ points on each parabola $h^*\in C_1$, and therefore it vanishes identically on each of these parabolas. As in the previous analysis, we factor out the special polynomial factors of $p$, obtaining a square-free polynomial $\tilde{p}$, of degree at most $d$, with no special polynomial factors. Let $\Xi_1,\ldots,\Xi_k$ denote the special surfaces on which $p$ vanishes identically (the zero sets of the special polynomial factors of $p$), for some $k\le d/3$. Let $C_2\subseteq C_1$ (resp., $P_2\subseteq P_1$) denote the set of those parabolas of $C_1$ (resp., rotations of $P_1$) which are not contained in any of the vanishing surfaces $\Xi_i$. Put $C'_2 = C_1\setminus C_2$ and $P'_2 = P_1\setminus P_2$. For each parabola $h^*\in C_2$, $\tilde{p}$ vanishes identically on $h^*$, and, as argued in the proof of Theorem~\ref{thm:gk1}, at most $d$ rotations of $P_1\cap{h^*}$ lie in the surfaces $\Xi_i$. Hence, $h^*$ contains more than $8d$ rotations of $P_2$, and, arguing as in the preceding proof, each of these rotations is either critical or flat for $\tilde{p}$. Hence, either more than $2d$ of these rotations are critical, and then $h^*$ is a critical parabola for $\tilde{p}$, or more than $6d$ of these rotations are flat, and then $h^*$ is a flat parabola for $\tilde{p}$. Applying Propositions~\ref{prop1} and \ref{lflatx}, the overall number of parabolas in $C_2$ is therefore at most $$ d(d-1)+d(3d-4) < 4d^2 . 
$$ We now apply Theorem~\ref{inc-32} to $C_2$ and $P_2$, with the bound $4d^2$ on the size of $C_2$. The conditions of this theorem hold for these sets: Clearly, each rotation in $P_2$ is incident to at least three parabolas of $C_2$. For the other condition, we argue exactly as in the proof of Theorem~\ref{thm:gk1}, to conclude that any special surface can contain at most $3d/2$ parabolas of $C_1$, establishing the second condition of Theorem~\ref{inc-32}. This theorem then implies that the number of incidences between $P_2$ and $C_2$, which is also equal to the number of incidences between $P_2$ and $C_1$, is $$ I(P_2,C_1) = I(P_2,C_2) = O((4d^2)^{3/2}) = O(d^3) = O(m) \ . $$ Moreover, since each parabola of $C_2$ contains at least eight times more rotations of $P_2$ than of $P'_2$, this bound also applies to the number of incidences between $P'_2$ and $C_2$. It therefore remains to bound the number of incidences between $P'_2$ and $C'_2$, namely, between the rotations and parabolas contained in the vanishing special surfaces $\Xi_i$. To do so, we iterate over the surfaces, say, in the order $\Xi_1,\ldots,\Xi_k$. For each surface $\Xi_i$ in turn, we process the rotations and parabolas contained in $\Xi_i$ and then remove them from further processing on subsequent surfaces. Let us then consider a special surface $\Xi_i$. Let $m_i$ and $n_i$ denote respectively the number of rotations and parabolas contained in $\Xi_i$, which were not yet removed when processing previous surfaces. The number of incidences between these rotations and parabolas can be bounded by the classical Szemer\'edi-Trotter incidence bound~\cite{ST} (see also (\ref{weak23})), which is $O(m_i^{2/3}n_i^{2/3} + m_i+n_i)$. 
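(Recall that the Szemer\'edi-Trotter theorem asserts that the maximum number of incidences between $M$ points and $N$ lines in the plane is $\Theta(M^{2/3}N^{2/3}+M+N)$; this is the form of the bound applied here.)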
Summing these bounds over all the special surfaces $\Xi_i$, and using H\"older's inequality and the fact, established in Lemma~\ref{sspec}, that $n_i \le s$, we get an overall bound of $$ O\left( \sum_i \left( m_i^{2/3}n_i^{2/3} + m_i + n_i \right) \right) = $$ $$ O\left( s^{1/3} \sum_i m_i^{2/3}n_i^{1/3} + \sum_i (m_i + n_i) \right) = O\left( m^{2/3}n^{1/3}s^{1/3} + m + n \right) , $$ where we use the facts that $\sum_i m_i \le m$ and $\sum_i n_i \le n$, which follow since in this analysis each parabola and rotation is processed at most once. The two linear terms satisfy $n = O(m^{1/3}n)$ (the bound obtained in the pruning process), and $m = O(m^{2/3}n^{1/3}s^{1/3})$ since $m=O(ns)$; see Remark (2) following Theorem~\ref{thm:gk1}. We are not done yet, because each rotation of $P'_2$ is processed only once, within the first surface $\Xi_i$ containing it. This, however, can be handled as in \cite{EKS}. That is, let $\tau$ be a rotation which was processed within the first surface $\Xi_i$ containing it. Suppose that $\tau$ also lies on some later surface $\Xi_j$, with $j>i$, and let $h^*$ be a parabola contained in $\Xi_j$, which has not been removed yet; in particular, $h^*$ is not contained in $\Xi_i$, and thus meets it transversally, so the incidence between $h^*$ and $\tau$ can be regarded as one of the transversal incidences in $\Xi_i$, which we have been ignoring so far. To count them, we simply recall that each parabola, whether of $C'_2$ or of $C_2$, has at most three transversal intersections with a surface $\Xi_i$ (see the proof of Theorem~\ref{thm:gk1}), for a total of at most $d$ crossings with all the vanishing surfaces. Since each of these parabolas contains at least $9d$ rotations of $P_1$, those ``transversal incidences'' are only a fraction of the total number of incidences, and we simply ignore them altogether. 
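Spelling out the H\"older step used in the summation above: writing $m_i^{2/3}n_i^{2/3} = m_i^{2/3}n_i^{1/3}\cdot n_i^{1/3}$ and using $n_i \le s$ in the second factor, we get
$$ \sum_i m_i^{2/3}n_i^{2/3} \le s^{1/3} \sum_i m_i^{2/3}n_i^{1/3} \le s^{1/3} \Bigl(\sum_i m_i\Bigr)^{2/3} \Bigl(\sum_i n_i\Bigr)^{1/3} \le m^{2/3}n^{1/3}s^{1/3} , $$
where the second inequality is H\"older's inequality with exponents $3/2$ and $3$.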
To recap, we obtain the following bound on the number of incidences between $P_1$ and $C_1$: $$ I(P_1,C_1) = O\left( m + m^{1/3}n + m^{2/3}n^{1/3}s^{1/3} \right) = O\left(m^{1/3}n + m^{2/3}n^{1/3}s^{1/3}\right) . $$ Adding the bound $2c m^{1/3}n$ on the incidences lost during the pruning process, we get the asserted bound. $\Box$ It is interesting to note that the proof technique also yields the following result. \begin{corollary} \label{mgek} Let $C$ be a set of $n$ $h$-parabolas and $P$ a set of points in 3-space which satisfy the conditions of Theorem~\ref{inc-gen}(i). Then, for any $k\ge 1$, the number $M_{\ge k}$ of points of $P$ incident to at least $k$ parabolas of $C$ satisfies $$ M_{\ge k} = \begin{cases} {\displaystyle O\left( \frac{ns}{k^{3}} \right) } & \mbox{for $k\le s^{2/3}/n^{1/3}$,} \\ {\displaystyle O\left( \frac{n^{3/2}}{k^{3/2}} \right) } & \mbox{for $s^{2/3}/n^{1/3}\le k\le n^{1/3}$,} \\ {\displaystyle O\left( \frac{n^2}{k^3} + \frac{n}{k} \right) } & \mbox{for $k > n^{1/3}$.} \end{cases} $$ \end{corollary} \noindent{\bf Proof:} Write $m = M_{\ge k}$ for short. We clearly have $I(P,C) \ge km$. Theorem~\ref{inc-gen} then implies $km = O(m^{1/3}n+m^{2/3}n^{1/3}s^{1/3})$, from which the first two bounds follow. If $k > n^{1/3}$ we use the other bound (in (\ref{weak23})), to obtain $km = O(m^{2/3}n^{2/3}+m+n)$, which implies that $m = O(n^2/k^3 + n/k)$ (which is in fact an equivalent statement of the classical Szemer\'edi-Trotter bound). $\Box$ \section{Further improvements} \label{sec:impr} In this section we further improve the bound in Theorem~\ref{inc-gen} (and Corollary~\ref{mgek}) using more standard space decomposition techniques. We show: \begin{theorem} \label{thm:impr} The number of incidences between $m$ arbitrary rotations and $n$ $h$-parabolas, defined for a planar ground set with $s$ points, is $$ O^*\left(m^{5/12}n^{5/6}s^{1/12} + m^{2/3}n^{1/3}s^{1/3} + n \right) , $$ where the $O^*(\cdot)$ notation hides polylogarithmic factors. 
In particular, when all $n=s^2$ $h$-parabolas are considered, the bound is $$ O^*\left( m^{5/12}s^{7/4} + s^2 \right) . $$ \end{theorem} \noindent{\bf Proof:} We dualize the problem as follows. We map each parabola $h^*_{a,b}$ to the point $\hat{h}_{a,b} = (a,b) = (a_1,a_2,b_1,b_2)$ in $\reals^4$. Each rotation $\tau$ is mapped to a 2-plane $\hat{\tau}$, which is the locus of all points $\hat{h}$ such that $\tau$ is incident to $h^*$. This is indeed a 2-plane, because the equations of $\tau$, either (\ref{eq:helix}) in the $(\xi,\eta,\theta)$-frame, or (\ref{parabola}) in the $(X,Y,Z)$-frame, are a pair of linear (independent) equations in $(a_1,a_2,b_1,b_2)$. So in this new setup we have $n$ points and $m$ 2-planes in 4-space, and we wish to bound the number of incidences between these points and 2-planes. We note that any pair of these 2-planes intersect in at most one point. (The corresponding statement in the primal setup is that two rotations can be incident to at most one common $h$-parabola.) To bound the number of incidences, we first project the points and 2-planes onto the 3-space $b_2=0$. We claim that, with a generic choice of the coordinate frame in the original $xy$-plane, the projected points remain distinct. Indeed, a point $(a_1,a_2,b_1,b_2)$, dual to an $h$-parabola $h^*_{a,b}$, is projected to the point $(a_1,a_2,b_1)$, so the projected point uniquely determines $a$, and also $b$, because we may assume that no two points of $S$ have the same $x$-coordinate $b_1$. Hence the projected points are all distinct. This is not necessarily the case for the 2-planes. Indeed, consider a 2-plane $\hat{\tau}$. Its projection onto the $a_1a_2b_1$-space is the plane satisfying the first equation of (\ref{parabola}), say, namely $$ X = (a_1+b_1)Z^2 + 2a_2Z + (b_1-a_1) . 
$$ It is easily checked that this equation uniquely determines the $X$ and $Z$ components of $\tau$, leaving $Y$ (i.e., the shift along the $y$-direction that $\tau$ makes after its initial pure rotation) undetermined. Thus it is possible that several distinct rotations, all with the same $X$ and $Z$ components, are projected to the same 2-plane. This has the potential danger that the projection loses incidences, when several 2-planes, incident to a common point $\hat{h}_{a,b}$, get projected into the same plane, so that, instead of several incidences with $\hat{h}_{a,b}$ in 4-space, we get only one incidence in the projection. Nevertheless, this bad situation cannot arise. This follows from the easy observation that two distinct rotations with the same $X$ and $Z$ components cannot both map a point $(a_1,a_2)$ into the same point $(b_1,b_2)$. To recap, after the projection we get $n$ points and at most $m$ planes in $\reals^3$, and our goal is to bound the number of incidences between them. More precisely, we want to bound only the number of original incidences. We note that each such incidence appears as an incidence in the projection, but not necessarily the other way around. We recall that, in general, the number of incidences between $n$ points and $m$ planes in 3-space can be $mn$ in the worst case, because of the possibility that many points lie on a common line and many planes pass through that line. This situation can also arise in our setup, but we will apply a careful analysis to show that the number of original incidences that project to such a degenerate configuration is much smaller. We proceed as follows. We fix a parameter $r$, to be determined shortly, and construct the following decomposition of 3-space. First, we note that the projected points $(a_1,a_2,b_1)$ have only $s$ distinct $a_1$-coordinates, which are the $x$-coordinates of the points of $S$. Similarly, they have only $s$ distinct $b_1$-coordinates.
We partition the 3-space by a set $R_1$ of $r$ planes orthogonal to the $a_1$-axis, so that within each resulting slab the projected points have at most $s/r$ distinct $a_1$-coordinates. We construct a similar collection $R_2$ of $r$ planes orthogonal to the $b_1$-axis, so that within each resulting slab the projected points have at most $s/r$ distinct $b_1$-coordinates. We then choose a random sample $R_0$ of $r$ of the projected planes. We take the set $R=R_0\cup R_1\cup R_2$ of $3r$ planes, construct their arrangement, and decompose each of its cells into simplices. We obtain $O(r^3)$ simplices, and the construction and the standard $\eps$-net theory \cite{HW} imply that, with high probability, the following properties hold for every simplex $\sigma$ of the partition: (i) $\sigma$ is crossed by at most $O\left(\frac{m}{r}\log r\right)$ projected 2-planes; (ii) the projected points that fall into $\sigma$ have at most $s/r$ distinct $a_1$-coordinates and at most $s/r$ distinct $b_1$-coordinates. Further refining the simplices, if necessary, we can also assume that (iii) each simplex contains at most $n/r^3$ projected points. Property (ii) is crucial. It asserts that the number of points of $S$ which induce the parabolas whose dual points project into a fixed simplex is at most $2s/r$; more precisely, there are only $s/r$ ``source'' points of $S$ and only $s/r$ ``target'' points, so that each of these parabolas is of the form $h^*_{a,b}$, where $a$ is one of the $s/r$ source points and $b$ is one of the $s/r$ target points. (Note, by the way, that the number of parabolas, $n/r^3$, involved in a subproblem is much smaller than the maximum possible value $(s/r)^2$, when $r\gg 1$.) We now apply Theorem~\ref{inc-gen} to each simplex $\sigma$; that is, to the set $C_\sigma$ of those parabolas whose (projected) dual points lie in $\sigma$, and to the set $P_\sigma$ of those rotations whose (projected) dual 2-planes cross $\sigma$.
Put $m_\sigma=|P_\sigma|$ and $n_\sigma=|C_\sigma|$. We note that some rotations in $P_\sigma$ may be incident to no more than two parabolas in $C_\sigma$; these rotations contribute $O(m_\sigma) = O\left(\frac{m}{r}\log r\right)$ to the overall incidence bound. By Theorem~\ref{inc-gen} we thus have\footnote{Here we cannot argue, as we did earlier, that the term $m_\sigma$ is subsumed by the other terms, because of the possibility that some of the $m_\sigma$ rotations are incident to only one or two parabolas in a subproblem.} $$ I(P_\sigma,C_\sigma) = O\left( m_\sigma^{1/3}n_\sigma + m_\sigma^{2/3}n_\sigma^{1/3}(s/r)^{1/3} + m_\sigma \right) . $$ Summing these bounds over all cells $\sigma$, we get an overall bound of $$ \sum_\sigma I(P_\sigma,C_\sigma) = O^*\left( r^3 \cdot \left( (m/r)^{1/3}n/r^3 + (m/r)^{2/3}(n/r^3)^{1/3}(s/r)^{1/3} + m/r \right) \right) = $$ $$ O^*\left( m^{1/3}n/r^{1/3} + rm^{2/3}n^{1/3}s^{1/3} + mr^2 \right) , $$ where, as above, $O^*(\cdot)$ hides polylogarithmic factors. We also have to add to the bound incidences involving points which are projections of points dual to parabolas and which lie on the boundaries of the cells of the cutting. Let $q=(a_1,a_2,b_1)$, the projection of a (unique) point $\hat{h}_{a,b}$, be such a point. Let $f$ denote the face whose relative interior contains $q$. If $f$ is a 2-face of some simplex $\sigma$, we can associate $q$ with $\sigma$: except for the single plane containing $f$, any other plane incident to $q$ must cross $\sigma$, and we can count the incidence within the subproblem of $\sigma$. The uncounted incidences, at most one per parabola, add up to at most $n$. If $f$ is a vertex (so $q=f$) then any plane through $f$ either bounds or crosses some adjacent simplex, so the total number of such incidences is $O^*(r^3\cdot (m/r)) = O^*(mr^2)$. The harder situation is when $f$ is an edge. 
Again, if a plane {\em crosses} $f$ at $q$, we can count this incidence within any adjacent simplex, arguing as in the case where $f$ is a 2-face. The difficult case is when the plane {\em contains} $f$, and we handle it as follows. It is simpler to consider $f$ as a full line of intersection of two sampled planes, rather than a single edge. (The decomposition, though, has also other edges, obtained in the decomposition of arrangement cells into simplices; these edges require a slightly different treatment, given below.) Let $q_1,\ldots,q_t$ be the projected dual points that lie on $f$, and let $h^*_{a_i,b_i}$ denote the parabola corresponding to $q_i$, for $i=1,\ldots,t$. Consider the rotations $\tau$ whose dual 2-planes project to planes containing $f$. Rotations $\tau$ of this kind which are incident to just one of the parabolas $h^*_{a_i,b_i}$ are easy to handle, because the number of incidences involving these rotations is at most $m$ (for the fixed line $f$), for a total of $O^*(mr^2)$. Consider then those rotations $\tau$ which are incident to at least two of the parabolas $h^*_{a_i,b_i}$. Since the points $(a_{i1},a_{i2},b_{i1})$ lie on a common line, it follows that the points $a_i$ are also collinear in the original $xy$-plane, lying on a common line $\ell_0$. The points $b_i$ are not necessarily collinear, but they have the property that, for any pair of indices $i\ne j$, the ratio $(b_{j1}-b_{i1})/(a_{j1}-a_{i1})$ is fixed. See Figure~\ref{abab}. \begin{figure}[htbp] \begin{center} \input abab.pstex_t \caption{Many projected dual points lying on a common line: The situation in the $xy$-plane.} \label{abab} \end{center} \end{figure} Now if $\tau$ is incident to two parabolas $h^*_{a_i,b_i}$, $h^*_{a_j,b_j}$, then $\tau$ maps $a_i$ to $b_i$ and $a_j$ to $b_j$. In particular, $|a_ia_j|=|b_ib_j|$. 
This, and the fact that $(b_{j1}-b_{i1})/(a_{j1}-a_{i1})$ is fixed, imply that $\tau$ maps $\ell_0$ to the line through $b_i$ and $b_j$, and that the slope of this line has a {\em fixed absolute value} $\lambda$. Hence, considering, with no loss of generality, only lines of the latter kind with positive slope, we can partition $\{q_1,\ldots,q_t\}$ into equivalence classes, so that, for each class, all the corresponding points $b_i$ lie on a common line of slope $\lambda$. Moreover, there is at most one rotation that is incident to at least two parabolas from the same class (and no rotation can be incident to two parabolas from different classes). Thus the total number of incidences of this kind, for the fixed $f$, is at most $t$. Summing over all lines $f$, we get a total of $O(n)$ such incidences. In the preceding analysis we considered only intersection lines between sampled planes, but, as noted, the cutting has additional edges, interior to cells of the arrangement. We handle such edges in almost the same way as above. That is, we consider such an edge $e$, and argue, exactly as above, that the number of original incidences involving points on $e$ and planes that contain $e$ is proportional to the number $n_e$ of points on $e$ plus the number $m_e$ of planes containing $e$. (Incidences involving planes that cross $e$ are also handled exactly as above, with the same resulting bound.) The sum $\sum_e n_e$ is still at most $n$. For the other sum $\sum_e m_e$, we note that the number of edges $e$ is $O(r^3)$ (instead of $O(r^2)$ in the preceding analysis), but each edge $e$ can be contained in at most $O\left(\frac{m}{r}\log r\right)$ planes, as follows easily from the $\eps$-net theory (this holds with high probability, but we may assume that our sample does indeed have this property). Hence, we have $\sum_e m_e = O^*(r^3\cdot(m/r)) = O^*(mr^2)$, the same bound as above. 
Altogether, the number of incidences is thus $$ O^*\left( m^{1/3}n/r^{1/3}+mr^2+rm^{2/3}n^{1/3}s^{1/3}+n\right) . $$ We now choose $$ r = \left( \frac{n^{2/3}}{m^{1/3}s^{1/3}} \right)^{3/4} = \frac{n^{1/2}}{m^{1/4}s^{1/4}} . $$ This choice of $r$ makes the first and third terms in the incidence bound equal to each other, and they both dominate the second term, as is easily verified, using the fact that $n\le s^2$. Note also that $1\le r\le m$ when $$ \frac{n^{2/5}}{s^{1/5}} \le m\le \frac{n^{2}}{s} . $$ Assume first that $m$ lies in this range. Then the incidence bound becomes $$ O\left(m^{5/12}n^{5/6}s^{1/12} + n\right) . $$ When $m > n^{2}/s$, we use $r=1$ and get the bound $$ O\left( m^{1/3}n + m^{2/3}n^{1/3}s^{1/3} + m \right) . $$ Since $n^{2}/s < m \le ns$, the second term dominates the two other terms, and the bound is thus $O\left( m^{2/3}n^{1/3}s^{1/3} \right)$. Finally, when $m < n^{2/5}/s^{1/5}$, we use the Szemer\'edi-Trotter bound in (\ref{weak23}), which is easily seen to yield the bound $O(n)$. Adding all these bounds, the theorem follows. $\Box$ Using this bound, we can strengthen Corollary~\ref{mgek}, as follows. \begin{corollary} \label{mgek1} Let $C$ be a set of $n$ $h$-parabolas and $P$ a set of rotations, with respect to a planar ground set $S$ of $s$ points. Then, for any $k\ge 3$, the number $M_{\ge k}$ of rotations of $P$ incident to at least $k$ parabolas of $C$ satisfies $$ M_{\ge k} = O^*\left( \frac{n^{10/7}s^{1/7}}{k^{12/7}} + \frac{ns}{k^{3}} + \frac{n}{k} \right) . $$ For $n=s^2$, the bound becomes $$ M_{\ge k} = O^*\left( \frac{s^3}{k^{12/7}} \right) . $$ \end{corollary} \noindent{\bf Proof:} The proof is similar to the proof of Corollary~\ref{mgek}, and we omit its routine details. 
$\Box$ \section{Conclusion} \label{sec:conc} In this paper we have reduced the problem of obtaining a near-linear lower bound for the number of distinct distances in the plane to a problem involving incidences between points and a special class of parabolas (or helices) in three dimensions. We have made significant progress in obtaining upper bounds for the number of such incidences, but we are still short of tightening these bounds to meet the conjectures on these bounds made in the introduction. To see how far we still have to go, consider the bound in Corollary~\ref{mgek1}, for the case $n=s^2$, which then becomes $O^*(s^3/k^{12/7})$. (Here $M_{\ge k}$ coincides with $N_{\ge k}$ as defined in (H3).) Moreover, we also have the Szemer\'edi-Trotter bound $O(s^4/k^3)$, which is smaller than the previous bound for $k\ge s^{7/9}$. Substituting these bounds in the analysis of (H3) and (H4), we get $$ \frac{\left[s(s-1)-x\right]^2}{x} \le |K| = N_{\ge 2} + \sum_{k\ge 3} (k-1) N_{\ge k} = $$ $$ N_{\ge 2} + O(s^3) \cdot \left[ 1 + \sum_{k=3}^{s^{7/9}} \frac{1}{k^{5/7}} + \sum_{k > s^{7/9}} \frac{s^4}{k^3} \right] = N_{\ge 2} + O(s^{29/9}) . $$ It is fairly easy to show that $N_{\ge 2}$ is $O(s^{10/3})$, by noting that $N_{\ge 2}$ can be upper bounded by $O\left(\sum_i |E_i|^2\right)$, where $E_i$ is as defined in (H1). Using the upper bound $|E_i|=O(s^{4/3})$ \cite{SST}, we get $$ N_{\ge 2} = O\left(\sum_i |E_i|^2\right) = O(s^{4/3})\cdot O\left(\sum_i |E_i|\right) = O(s^{10/3}) . $$ Thus, at the moment, $N_{\ge 2}$ is the bottleneck in the above bound, and we only get the (weak) lower bound $\Omega(s^{2/3})$ on the number of distinct distances. Showing that $N_{\ge 2}=O(s^{29/9})$ too (hopefully, a rather modest goal) would improve the lower bound to $\Omega(s^{7/9})$, still a rather weak lower bound. 
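The exponent bookkeeping above (the balancing choice of $r$, and the $O(s^{29/9})$ estimate in this section) can be double-checked mechanically. A small verification sketch, tracking monomials in $m,n,s$ as vectors of rational exponents:

```python
from fractions import Fraction as F

# A monomial m^a n^b s^c is tracked as the exponent vector (a, b, c);
# multiplying monomials adds exponent vectors.
def mul(*monos):
    return tuple(sum(col) for col in zip(*monos))

def power(mono, e):
    return tuple(x * e for x in mono)

m = (F(1), F(0), F(0))
n = (F(0), F(1), F(0))
s = (F(0), F(0), F(1))

# r = n^{1/2} / (m^{1/4} s^{1/4}), the choice made in the proof
r = mul(power(n, F(1, 2)), power(m, F(-1, 4)), power(s, F(-1, 4)))

# first and third terms of O*(m^{1/3} n / r^{1/3} + m r^2 + r m^{2/3} n^{1/3} s^{1/3})
term1 = mul(power(m, F(1, 3)), n, power(r, F(-1, 3)))
term3 = mul(r, power(m, F(2, 3)), power(n, F(1, 3)), power(s, F(1, 3)))

assert term1 == term3                          # this r balances the two terms
assert term1 == (F(5, 12), F(5, 6), F(1, 12))  # i.e., m^{5/12} n^{5/6} s^{1/12}

# Exponent arithmetic behind the O(s^{29/9}) estimate:
# s^3 * sum_{k <= s^{7/9}} k^{-5/7}  ~  s^{3 + (7/9)(2/7)}, and
# sum_{k > s^{7/9}} s^4 / k^2        ~  s^{4 - 7/9}.
assert F(3) + F(7, 9) * F(2, 7) == F(29, 9)
assert F(4) - F(7, 9) == F(29, 9)
```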
Nevertheless, we feel that the reduction to incidences in three dimensions is fruitful, because \noindent (i) It sheds new light on the geometry of planar point sets, related to the distinct distances problem. \noindent (ii) It gave us a new, and considerably more involved setup in which the new algebraic technique of Guth and Katz could be applied. As such, the analysis in this paper might prove useful for obtaining improved incidence bounds for points and other classes of curves in three dimensions. The case of points and circles is an immediate next challenge. Another comment is in order. Our work can be regarded as a special variant of the complex version of the Szemer\'edi-Trotter theorem on point-line incidences \cite{ST}. In the complex plane, the equation of a line (in complex notation) is $w=pz+q$. Interpreting this equation as a transformation of the real plane, we get a {\em homothetic map}, i.e., a rigid motion followed by a scaling. We can therefore rephrase the complex version of the Szemer\'edi-Trotter theorem as follows. We are given a set $P$ of $m$ pairs of points in the (real) plane, and a set $M$ of $n$ homothetic maps, and we seek an upper bound on the number of times a map $\tau\in M$ and a pair $(a,b)\in P$ ``coincide'', in the sense that $\tau(a)=b$. In our work we only consider ``complex lines'' whose ``slope'' $p$ has absolute value $1$ (these are our rotations), and the set $P$ is simply $S\times S$. The main open problems raised by this work are: \noindent (a) Obtain a cubic upper bound for the number of rotations which map only two points of the given ground planar set $S$ to another pair of points of $S$. Any upper bound smaller than $O(s^{3.1358})$ would already be a significant step towards improving the current lower bound of $\Omega(s^{0.8641})$ on distinct distances \cite{KT}. \noindent (b) Improve further the upper bound on the number of incidences between rotations and $h$-parabolas. Ideally, establish Conjectures 1 and 2. 
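The homothetic-map interpretation is easy to see concretely: in complex notation, $w=pz+q$ scales all distances by $|p|$, so $|p|=1$ gives a rigid motion. A small numerical illustration (all point and parameter values below are arbitrary):

```python
import cmath

# The complex line w = p*z + q, viewed as a map of the real plane.
def apply_line(p, q, z):
    return p * z + q

p = cmath.exp(0.7j)            # |p| = 1: a rotation by angle 0.7, then a translation
q = 2.0 - 1.5j
a, b = 1.0 + 2.0j, -0.5 + 0.3j  # arbitrary illustrative points

ta, tb = apply_line(p, q, a), apply_line(p, q, b)
# a rigid motion preserves distances: |tau(a) - tau(b)| = |a - b|
assert abs(abs(ta - tb) - abs(a - b)) < 1e-12

# for general p, distances are scaled by |p| (a homothetic map)
p2 = 3.0 * cmath.exp(0.7j)
sa, sb = apply_line(p2, q, a), apply_line(p2, q, b)
assert abs(abs(sa - sb) - 3.0 * abs(a - b)) < 1e-12
```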
\subsection*{Homage and Acknowledgments} The bulk of the paper was written after the passing away of Gy\"orgy Elekes in September 2008. However, the initial infrastructure, including the transformation of the distinct distances problem to an incidence problem in three dimensions, and many other steps, is due to him. As a matter of fact, it was already discovered by Elekes about 10 years ago, and lay dormant since then, mainly because of the lack of effective tools for tackling the incidence problem. These tools became available with the breakthrough result of Guth and Katz~\cite{GK} in December 2008, and have made this paper possible. Thanks are due to M\'arton Elekes, who was a driving force in restarting the research on this problem. Many thanks are due to Haim Kaplan, for many hours of helpful discussions concerning the work in this paper. As mentioned, the construction in Lemma~\ref{lem:lower} is due to him. Finally, thanks are also due to Jozsef Solymosi for some helpful comments on the technique used in the paper.
\section{Introduction}\label{intro} Let $N$ be a real $2n$-dimensional nilpotent Lie group with Lie algebra $\ngo$, whose Lie bracket will be denoted by $\mu :\ngo\times\ngo\longrightarrow\ngo$. An {\it invariant complex structure} on $N$ is defined by a map $J:\ngo\longrightarrow\ngo$ satisfying $J^2=-I$ and the integrability condition \begin{equation}\label{integral} \mu(JX,JY)=\mu(X,Y)+J\mu(JX,Y)+J\mu(X,JY), \qquad \forall X,Y\in\ngo. \end{equation} By left translating $J$, one obtains a complex manifold $(N,J)$, as well as compact complex manifolds $(N/\Gamma,J)$ if $N$ admits cocompact discrete subgroups $\Gamma$, which are usually called \textit{nilmanifolds} and play an important role in complex geometry. A left-invariant metric which is {\it compatible} with $(N,J)$, also called a {\it hermitian metric}, is determined by an inner product $\ip$ on $\ngo$ such that $$ \la JX,JY\ra=\la X,Y\ra, \quad \forall X,Y\in\ngo. $$ A very natural evolution equation for hermitian metrics on a fixed complex manifold $(N,J)$ is given by $$ \ddt \ip_t=-2\riccic_{\ip_t}, $$ which will be called the {\it complexified Ricci flow} (cxRF), where $\riccic_{\ip_t}:=\la\Ricc_{\ip_t}\cdot,\cdot\ra$ is the $(1,1)$-component of the Ricci tensor and $\Ricc_{\ip_t}$ the $J$-invariant part of the Ricci operator of the hermitian manifold $(N,J,\ip_t)$. The cxRF has been studied in \cite{canonical}, where besides the uniqueness of cxRF-\textit{solitons} up to isometry and scaling on a given $(N,J)$, the following characterizations were given: \begin{itemize} \item[(i)] $\ip$ is a cxRF-soliton. \item[(ii)] $\ip$ is \textit{minimal}, that is, it minimizes the functional $\tr(\Ricc_{\ip})^2$ on the set of all compatible metrics on $(N,J)$ with the same scalar curvature. \item[(iii)] $\Ricc_{\ip}=cI+D$ for some $c\in\RR$ and $D\in\Der(\ngo)$. \end{itemize} In \cite{invcomplx}, we determined which $6$-dimensional abelian (i.e. $\mu(JX,JY)=\mu(X,Y)$) complex nilmanifolds admit a minimal metric. 
In \cite{solitonSCF}, Fern\'{a}ndez-Culma gives a criterion for determining the existence of a minimal compatible metric for a geometric structure on a nilpotent Lie group, which is based on the moment map of a real reductive representation (see Section \ref{existmetric}). Our aim in this paper is to use equivalence (iii) above and the criterion given in \cite{solitonSCF} to classify all complex structures admitting a minimal compatible metric on $6$-dimensional nilpotent Lie groups. In some cases, we find the minimal metrics explicitly. A complete classification result is given in Tables \ref{tablemin1} and \ref{tablemin2}. \section{Preliminaries}\label{basic} Let $\ngo$ be a $2n$-dimensional real vector space, and consider the space of all skew-symmetric algebras of dimension $2n$, which is parameterized by the vector space $$V = \lam = \{\mu : \ngo \times \ngo \to \ngo : \mu \ \text{bilinear and skew-symmetric}\}.$$ We now fix a map $J:\ngo \to \ngo$ such that $J^2=-I$. There is a natural linear action of the Lie group $\Gl_n(\CC):=\{g\in\Gl_{2n}(\RR): g J = J g\}$ on $V$ defined by \begin{align} g\cdot\mu(X, Y)=g\mu(g^{-1} X,g^{-1} Y), \quad X,Y\in\ngo, \ g\in\Gl_n(\CC), \ \mu\in V, \end{align} and the corresponding representation of the Lie algebra $\glg_n(\CC)$ of $\Gl_n(\CC)$ on $V$ is given by \begin{align}\label{rept} \pi(\alpha)\mu = \alpha\mu(\cdot,\cdot) - \mu(\alpha\cdot,\cdot) - \mu(\cdot,\alpha\cdot), \quad \alpha\in\glg_n(\CC), \ \mu\in V. \end{align} Any inner product $\ip$ on $\ngo$ determines inner products on $V$ and $\glg_n(\CC)$, both also denoted by $\ip$, as follows: \begin{align}\label{prodtint} \la \mu, \lambda \ra =\sum_{i,j,k} \la \mu(e_i,e_j), e_k \ra \la \lambda(e_i,e_j), e_k \ra, \qquad \la \alpha,\beta\ra = \tr\alpha\beta^*, \end{align} where $\{e_i\}$ denotes an orthonormal basis of $\ngo$ and $\beta^*$ the conjugate transpose with respect to $\ip$. 
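As a sanity check, the representation $\pi$ of (\ref{rept}) is the derivative of the group action just defined. The following sketch verifies this numerically on a random structure tensor (the finite-difference step and tolerance are implementation choices, purely illustrative):

```python
import numpy as np

# Finite-difference check that pi(alpha) = d/dt|_{t=0} (exp(t*alpha)).mu,
# acting on the structure tensor c[i,j,k] of a skew-symmetric mu.
dim = 4
rng = np.random.default_rng(1)
c = rng.standard_normal((dim, dim, dim))
c = c - c.transpose(1, 0, 2)            # make mu skew-symmetric in (X, Y)

def act(g, c):
    """(g . mu)(X, Y) = g mu(g^{-1} X, g^{-1} Y), on the structure tensor."""
    ginv = np.linalg.inv(g)
    return np.einsum('kc,abc,ai,bj->ijk', g, c, ginv, ginv)

def pi(alpha, c):
    """pi(alpha)mu = alpha mu(.,.) - mu(alpha ., .) - mu(., alpha .)."""
    return (np.einsum('kl,ijl->ijk', alpha, c)
            - np.einsum('li,ljk->ijk', alpha, c)
            - np.einsum('lj,ilk->ijk', alpha, c))

alpha = rng.standard_normal((dim, dim))
t = 1e-5
# central difference along the curve g(t) = I + t*alpha, which agrees with
# exp(t*alpha) to first order at t = 0
gp = np.eye(dim) + t * alpha
gm = np.eye(dim) - t * alpha
diff = (act(gp, c) - act(gm, c)) / (2 * t)
assert np.abs(diff - pi(alpha, c)).max() < 1e-6
```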
We use $\glg_n(\CC) = \ug(n)\oplus\hg(n)$ as a Cartan decomposition of $\glg_n(\CC)$, where $\ug(n)$ and $\hg(n)$ denote the subspaces of skew-hermitian and hermitian matrices, respectively. The set $\ag$ of all (real) diagonal $n\times n$ matrices in $\glg_n(\CC)$ is a maximal abelian subalgebra of $\hg(n)$ and therefore determines a system of roots; let $\Phi$ denote the set of roots. If $J e_{2i-1} = e_{2i}$, $i=1,\ldots,n$, then $\Phi$ is given by \begin{align} \Phi = \{\pm\Dg(1,-1,0,\ldots,0), \pm\Dg(1,0,-1,\ldots,0), \pm\Dg(0,1,-1,\ldots,0),\ldots\}. \end{align} If $\{e^1,\ldots,e^{2n}\}$ is the basis of $\ngo^*$ dual to the basis $\{e_1,\ldots,e_{2n}\}$, then \begin{align}\label{vsubijk} \{v_{ijk}=(e^i\wedge e^j)\otimes e_k : 1\leq i < j\leq 2n, 1\leq k \leq 2n \} \end{align} is a basis of weight vectors of $V$ for the representation (\ref{rept}), where $v_{ijk}$ is actually the bilinear form on $\ngo$ defined by $v_{ijk}(e_i,e_j)= - v_{ijk}(e_j,e_i)= e_k$ and zero otherwise. The corresponding weights $\alpha_{ij}^k \in \ag$, $i<j$, are given by \begin{align} \pi(\alpha)v_{ijk} = (a_k - a_i - a_j)v_{ijk} = \la\alpha,\alpha_{ij}^k\ra v_{ijk}, \quad \forall\alpha = \left[\begin{smallmatrix} a_1&&&&\\ &&&&\\ &&\ddots&&\\ &&&&\\&&&&a_n \end{smallmatrix}\right] \in \ag. \end{align} If $J e_{2i-1} = e_{2i}$, it is easy to check that $$\alpha_{ij}^k= \frac{1}{2}(E_{k,k} + E_{k\mp 1,k\mp 1} - E_{i,i} - E_{i\mp 1,i\mp 1} - E_{j,j} - E_{j\mp 1,j\mp 1}),$$ where $E_{r,s}$ denotes the matrix whose only nonzero coefficient is $1$ at entry $rs$. \section{Minimal metrics on complex nilmanifolds}\label{existmetric} Let $N$ be a real $2n$-dimensional nilpotent Lie group with Lie algebra $\ngo$, and $J$ an invariant complex structure on $N$. 
A left-invariant metric which is {\it compatible} with the nilmanifold $(N,J)$, also called a {\it hermitian metric}, is determined by an inner product $\ip$ on $\ngo$ such that $$ \la JX,JY\ra=\la X,Y\ra, \quad \forall X,Y\in\ngo. $$ We consider $$ \Ricc_{\ip}:=\unm\left(\Ric_{\ip}-J\Ric_{\ip}J\right), $$ the $J$-invariant part of the Ricci operator $\Ric_{\ip}$ of the hermitian manifold $(N,J,\ip)$, and the corresponding $(1,1)$-component of the Ricci tensor $\riccic_{\ip}:=\la\Ricc_{\ip}\cdot,\cdot\ra$. A compatible metric $\ip$ on $(N,J)$ is called {\it minimal} if $$ \tr{(\Ricc_{\ip})^2}=\min \left\{ \tr{(\Ricc_{\ip'})^2} : \scalar(\ip')=\scalar(\ip)\right\}, $$ where $\ip'$ runs over all compatible metrics on $(N,J)$ and $\scalar(\ip)=\tr\Ric_{\ip}=\tr\Ricc_{\ip}$ is the scalar curvature. In \cite{canonical}, the following conditions on $\ip$ are proved to be equivalent to minimality: \begin{itemize} \item[(i)] The solution $\ip_t$ with initial value $\ip_0=\ip$ to the cxRF $$ \ddt \ip_t=-2\riccic_{\ip_t}, $$ is self-similar, in the sense that $\ip_t=c_t\vp_t^*\ip$ for some $c_t>0$ and a one-parameter group of automorphisms $\vp_t$ of $N$. In this case, $\ip$ is called a cxRF-{\it soliton}. \item[(ii)] There exist a vector field $X$ on $N$ and $c\in\RR$ such that $$\riccic_{\ip}=c\ip+L_X\ip,$$ where $L_X\ip$ denotes the usual Lie derivative. \item[(iii)] $\Ricc_{\ip}=cI+D$ for some $c\in\RR$ and $D\in\Der(\ngo)$. \end{itemize} The uniqueness up to isometry and scaling of a minimal metric on a given $(N,J)$ was also proved in \cite{canonical}, and can be used to obtain invariants in the following way. If $(N,J_1,\ip_1)$ and $(N,J_2,\ip_2)$ are minimal and $J_1$ is equivalent to $J_2$ (i.e. if there exists an automorphism $\alpha$ of $\ngo$ satisfying $J_2 = \alpha J_1 \alpha^{-1}$), then they must be conjugate via an automorphism which is an isometry between $\ip_1$ and $\ip_2$. 
This provides us with a lot of invariants, namely the Riemannian geometry invariants, including all the different kinds of curvature. In \cite{invcomplx}, we used this to give an alternative proof of the pairwise non-isomorphism between the structures which have appeared in the classification of abelian complex structures on $6$-dimensional nilpotent Lie algebras given in \cite{AndBrbDtt}, where condition (iii) is strongly applied. \begin{example}\label{NOricciC} For $t\in\RR$, consider the $3$-step nilpotent Lie algebra $\hg_{11}$ whose bracket is given by $$ \begin{array}{lll} \mu_{t}(e_1,e_2)= e_4, & \mu_{t}(e_1,e_3)= -e_5,\\ \mu_{t}(e_1,e_4)= (t-1)e_6, & \mu_{t}(e_2,e_3)=-t e_6. \end{array} $$ Let \begin{align}\label{Jstand} \begin{array}{lll} J:=\left[\begin{smallmatrix} 0&-1&&&&\\ 1&0&&&&\\ &&0&-1&&\\ &&1&0&&\\ &&&&0&-1\\ &&&&1&0 \end{smallmatrix}\right], && \la e_i,e_j\ra:=\delta_{ij}. \end{array} \end{align} A straightforward verification shows that $J$ is a non-abelian complex structure on $N_{\mu_{t}}$ for all $t$ ($N_{\mu_{t}}$ is the (simply connected) nilpotent Lie group with Lie algebra $(\hg_{11}, \mu_{t})$), and $\ip$ is compatible with $(N_{\mu_{t}},J)$. It is easy to see that $\Ricc_{\mu_{t}} = cI+D$ for some $c\in \RR$, $D\in\Der(\ngo)$ if and only if $t=0$ or $t=1$. Condition (iii) now shows that $\ip$ is not minimal for $t>1$. \end{example} The problem of finding a minimal metric can be very difficult. In \cite{solitonSCF}, Fern\'{a}ndez-Culma gives a criterion for determining the existence of a minimal compatible metric for a geometric structure on a nilpotent Lie group. We will apply such a result in the complex case. We follow the notation of Section \ref{basic} for a fixed complex structure $J$ on $N$. Set $A=\exp(\ag)$ and let $W$ be an $A$-invariant subspace of $V$. It follows that $W$ has a decomposition into weight spaces $$W = W_1 \oplus^{\perp}\cdots \oplus^{\perp} W_r$$ with weights $\Psi_{W} = \{\alpha_1,\ldots,\alpha_r\}$. 
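The ``straightforward verification'' of integrability claimed in Example \ref{NOricciC} can also be carried out numerically. A sketch (the structure constants and $J$ are those of the example; the sample values of $t$ and the tolerance are implementation choices):

```python
import numpy as np

# Numerical check of the integrability condition
#   mu(JX, JY) = mu(X, Y) + J mu(JX, Y) + J mu(X, JY)
# for the bracket mu_t of Example NOricciC and the standard block-rotation J.
def make_mu(t):
    c = np.zeros((6, 6, 6))
    def setb(i, j, k, coef):            # [e_i, e_j] = coef * e_k (1-indexed)
        c[i - 1, j - 1, k - 1] = coef
        c[j - 1, i - 1, k - 1] = -coef
    setb(1, 2, 4, 1.0)
    setb(1, 3, 5, -1.0)
    setb(1, 4, 6, t - 1.0)
    setb(2, 3, 6, -t)
    return c

# J e1 = e2, J e2 = -e1, J e3 = e4, J e4 = -e3, J e5 = e6, J e6 = -e5
J = np.zeros((6, 6))
for i in range(0, 6, 2):
    J[i + 1, i] = 1.0
    J[i, i + 1] = -1.0

def nijenhuis_defect(c, J):
    mu = lambda X, Y: np.einsum('ijk,i,j->k', c, X, Y)
    worst = 0.0
    I = np.eye(6)
    for i in range(6):
        for j in range(6):
            X, Y = I[i], I[j]
            d = mu(J @ X, J @ Y) - mu(X, Y) - J @ mu(J @ X, Y) - J @ mu(X, J @ Y)
            worst = max(worst, np.abs(d).max())
    return worst

for t in (-2.0, 0.0, 1.0, 3.5):
    assert nijenhuis_defect(make_mu(t), J) < 1e-12
```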
\begin{definition}\cite[Definition 2.18.]{solitonSCF}\label{Jnice} We call $W$ \textit{$J$-nice} if $\Ricc_{\mu}\in \ag$ for all $\mu\in W$. \end{definition} A very useful corollary is the following. \begin{corollary}\cite[Corollary 4.7.]{distorbit}\label{critJnice} Let $W$ be an $A$-invariant subspace of $V$. If for all $\alpha_i$ and $\alpha_j$ in $\Psi_{W}$, $\alpha_i - \alpha_j \notin \Phi$, then $W$ is $J$-nice. \end{corollary} From an algebraic point of view, there is a condition on the basis of a Lie algebra that makes a subspace $J$-nice, based on the simplicity of the corresponding set of structure constants. Namely, a basis $\{X_1,\ldots, X_n\}$ of $\ngo$ is said to be \textit{nice} if $[X_i, X_j]$ is always a scalar multiple of some element in the basis and two different brackets $[X_i, X_j]$, $[X_r, X_s]$ can be a nonzero multiple of the same $X_k$ only if $\{i, j\}$ and $\{r, s\}$ are disjoint. It is easy to check that if $W$ admits a nice basis, then $W$ is $J$-nice (see \cite{riccidiag} for other applications). Let us denote by $\rca(\mu)$ the ordered set of weights of $\mu$ with respect to the action of $\Gl_n(\CC)$ on $V$. It is clear that $\rca(\mu)$ is the orthogonal projection onto $\ag$ of the set of weights of $\mu$ with respect to the action of $\Gl_{2n}(\RR)$ on $V$. We denote by $\mathrm{U}_\mu$ the \textit{Gram matrix} of $(\rca(\mu),\ip)$, i.e. $$\mathrm{U}_\mu(p,q) = \la \rca(\mu)_p, \rca(\mu)_q \ra$$ with $1\leq p,q\leq \sharp\rca(\mu)$. \begin{theorem}\cite[Theorem 2.22.]{solitonSCF}\label{thmJnice} Let $W$ be a $J$-nice space and let $(N_\mu,J)$ be a complex nilmanifold with $\mu\in W$. Then $(N_\mu,J)$ admits a compatible minimal metric if and only if the equation $$\mathrm{U}_{\mu}[x_i] = \lambda [1]$$ has a positive solution $[x_i]$ for some $\lambda\in\RR$. \end{theorem} \begin{example} By using the notation of Example \ref{NOricciC}, we will now prove that $(N_{\mu_{t}},J)$ does admit a compatible minimal metric for all $t>1$. 
Let $$W = \spa_{\RR}\{\mu_{12}^4, \mu_{13}^5, \mu_{14}^6, \mu_{23}^6\},$$ where $\mu_{ij}^k$ is defined as in (\ref{vsubijk}). Let us first see that $W$ is $J$-nice by using Corollary \ref{critJnice}. The root set $\Phi$ of $\glg_3(\CC)$ is given by \begin{align*} \Phi = \{\pm\Dg(1,-1,0), \pm\Dg(1,0,-1), \pm\Dg(0,1,-1)\}. \end{align*} The weights of $W$ with respect to the action of $\Gl_3(\CC)$ are {\small\begin{align*} \{\alpha_1:=\Dg(-2,1,0), \alpha_2:=\Dg(-1,-1,1), \alpha_3:=\Dg(-1,-1,1), \alpha_4:=\Dg(-1,-1,1)\}, \end{align*}} for all $t>1$. Since $\alpha_i - \alpha_j \notin \Phi$ for all $i,j$, it follows that $W$ is $J$-nice. A direct computation gives $$ \mathrm{U}_{\mu_{t}}=\left[\begin{smallmatrix} 5&1&1&1\\ 1&3&3&3\\ 1&3&3&3\\ 1&3&3&3 \end{smallmatrix}\right]. $$ Since $X = (\frac{1}{7}, \frac{1}{7}, \frac{1}{14}, \frac{1}{14})$ is a positive solution to the problem $\mathrm{U}_{\mu_{t}} X = [1]_4$, we conclude that $(N_{\mu_{t}},J)$ does admit a minimal metric for all $t>1$, by Theorem \ref{thmJnice} (see Table \ref{tablemin1}). \end{example} \begin{example} Consider the $2$-step nilpotent Lie algebra $(\hg_5,\mu_{st})$ given by $$ \begin{array}{lll} \mu_{st}(e_1,e_2)= 2 e_6, & \mu_{st}(e_1,e_3)= -e_5, & \mu_{st}(e_1,e_4)= -e_6,\\ \mu_{st}(e_2,e_3)= -e_6, & \mu_{st}(e_2,e_4)= e_5, & \mu_{st}(e_3,e_4)= 2s e_5 + 2t e_6, \end{array} $$ with $s\geq 0$, $t\in\RR$, $4s^2 < 1+4t$. We have that $(N_{\mu_{st}}, J)$ is a non-abelian complex nilmanifold for all $s, t$, where $J$ is given as in (\ref{Jstand}). Let $$W = \spa_{\RR}\{\mu_{12}^6, \mu_{13}^5, \mu_{14}^6, \mu_{23}^6, \mu_{24}^5, \mu_{34}^5, \mu_{34}^6\}.$$ The weights of $W$ with respect to the action of $\Gl_3(\CC)$ are {\small\begin{align*} \{ & \alpha_1:=\Dg(-2,0,1), \alpha_2:=\Dg(-1,-1,1), \alpha_3:=\Dg(-1,-1,1), \alpha_4:=\Dg(-1,-1,1), \\ & \alpha_5:=\Dg(-1,-1,1), \alpha_6:=\Dg(0,-2,1), \alpha_7:=\Dg(0,-2,1)\}, \end{align*}} for all $s\neq 0$, $t\neq 0$. 
Since $\alpha_1 - \alpha_2 \in \Phi$, with $\Phi$ as in the previous example, Corollary \ref{critJnice} does not apply. Nevertheless, it is straightforward to check that $\Ricc(g\cdot\mu_{st}) \in \ag$ for all $g\in A$, and so $W$ is $J$-nice. Hence $$ \mathrm{U}_{\mu_{st}}=\left[\begin{smallmatrix} 5&3&3&3&3&1&1\\ 3&3&3&3&3&3&3\\ 3&3&3&3&3&3&3\\ 3&3&3&3&3&3&3\\ 3&3&3&3&3&3&3\\ 1&3&3&3&3&5&5\\ 1&3&3&3&3&5&5 \end{smallmatrix}\right]. $$ Since $X = (\frac{1}{12}, \frac{1}{120}, \frac{1}{40}, \frac{1}{15}, \frac{1}{15}, \frac{1}{24}, \frac{1}{24})$ is a positive solution to the problem $\mathrm{U}_{\mu_{st}} X = [1]_7$, it follows that $(N_{\mu_{st}},J)$ does admit a minimal metric for all $(s,t)\neq (0,0)$ (analogously if $s=0$ or $t=0$), by Theorem \ref{thmJnice}. But if we now take $s=t=0$ then $$W = \spa_{\RR}\{\mu_{12}^6, \mu_{13}^5, \mu_{14}^6, \mu_{23}^6, \mu_{24}^5\}, \qquad \mathrm{U}_{\mu}=\left[\begin{smallmatrix} 5&3&3&3&3\\ 3&3&3&3&3\\ 3&3&3&3&3\\ 3&3&3&3&3\\ 3&3&3&3&3 \end{smallmatrix}\right]. $$ Any solution to the equation $\mathrm{U}_{\mu} X = \lambda[1]_5$ is of the form $(0, \frac{1}{3}-a-b-c, a, b, c)$, and therefore $(N_\mu, J)$ does not admit a minimal metric by Theorem \ref{thmJnice}. In summary, $(N_{\mu_{st}},J)$ does admit a minimal metric if and only if $s\neq 0$ or $t\neq 0$ (see Table \ref{tablemin1}). \end{example} \begin{example} We consider the $4$-step nilpotent Lie algebra $(\hg_{26}^{+},\mu)$ defined by $$ \begin{array}{lll} \mu(e_1,e_2)= e_5, & \mu(e_1,e_3)= \pm e_6, & \mu(e_1,e_5)= -e_3,\\ \mu(e_2,e_4)= \pm e_6, & \mu(e_2,e_5)= -e_4. \end{array} $$ Then $(N_\mu, J)$ is a complex nilmanifold (see (\ref{Jstand})). Let $$W = \spa_{\RR}\{\mu_{12}^5, \mu_{13}^6, \mu_{15}^3, \mu_{24}^6, \mu_{25}^4\}.$$ Note that $W$ is nice, and, in consequence, it is $J$-nice. 
The weights of $W$ with respect to the action of $\Gl_3(\CC)$ are {\small\begin{align*} \{ & \alpha_1:=\Dg(-2,0,1), \alpha_2:=\Dg(-1,-1,1), \alpha_3:=\Dg(-1,1,-1), \alpha_4:=\Dg(-1,-1,1), \\ & \alpha_5:=\Dg(-1,1,-1)\}. \end{align*}} Thus $$ \mathrm{U}_{\mu}=\left[\begin{smallmatrix} 5&3&1&3&1\\ 3&3&-1&3&-1\\ 1&-1&3&-1&3\\ 3&3&-1&3&-1\\ 1&-1&3&-1&3 \end{smallmatrix}\right]. $$ Any solution to $\mathrm{U}_{\mu} X = \lambda[1]_5$ is of the form $(-2, 3-a, 2-b, a, b)$, and thus $(N_\mu, J)$ does not admit a minimal metric by Theorem \ref{thmJnice} (see Table \ref{tablemin2}). \end{example} \section{Classification of minimal metrics on $6$-dimensional nilpotent Lie groups}\label{table} In this section, we use the classification of all complex structures on $6$-dimensional nilpotent Lie groups given in \cite{nilstruc} to determine those admitting a minimal hermitian metric. Recall that in \cite{invcomplx} we analyzed the abelian case, and therefore we do not study it here. Next we illustrate, on the Lie algebra $\hg_2$, how to rewrite the complex structure equations appearing in \cite{nilstruc} in such a way that the complex structure $J$ is fixed and the bracket varies. Let $J e^1 = e^2$, $J e^3 = e^4$ and $J e^5 = e^6$ ($J$ viewed in the dual $\hg_2^{\ast}$; recall that $(J\alpha)(x)=-\alpha(Jx)$ for all $\alpha\in\hg_2^*$). With respect to the basis $$\{\omega^1:= e^1-i J e^1, \ \omega^2:= e^3-i J e^3, \ \omega^3:= e^5-i J e^5\},$$ the complex structure equations are $d\omega^1 = d\omega^2 = 0$, $d\omega^3 = \omega^{12} + \omega^{1\overline{1}} + \omega^{1\overline{2}} + D \omega^{2\overline{2}}$, with $D \in \CC$ and $\mathfrak{Im} D >0$. Here $\omega^{jk}$ (resp. $\omega^{j\overline{k}}$) means the wedge product $\omega^j\wedge \omega^k$ (resp. $\omega^j\wedge \omega^{\overline{k}}$), where $\omega^{\overline{k}}$ indicates the complex conjugation of $\omega^k$. Let $D=\mathfrak{Re} D + i \mathfrak{Im} D = t + i s$, $s>0$. 
It follows that \begin{align*} & d e^1 = d e^2 = d e^3 = d e^4 = 0,\\ & d e^5 - i d e^6 = 2i e^1\wedge e^2 + 2 e^1\wedge e^3 - 2i e^2\wedge e^3 + 2(i t - s) e^3\wedge e^4. \end{align*} Therefore, \begin{align*} & d e^1 = d e^2 = d e^3 = d e^4 = 0,\\ & d e^5 = 2 e^1\wedge e^3 - 2s e^3\wedge e^4,\\ & d e^6 = -2 e^1\wedge e^2 + 2 e^2\wedge e^3 -2 t e^3\wedge e^4. \end{align*} Recall that \ $d e^k = \sum\limits_{i,j}(-c_{ij}^k) e^i\wedge e^j \Leftrightarrow [e_i, e_j] = \sum\limits_k c_{ij}^k e_k$. Hence, \begin{align*} & [e_1, e_2] = e_6, \ [e_1, e_3] =-e_5,\\ & [e_2, e_3] = -e_6, \ [e_3, e_4] = s e_5 + t e_6. \end{align*} By arguing as above for each item in \cite[Table 1, 2]{nilstruc}, and applying Theorem \ref{thmJnice}, with $J$ as in (\ref{Jstand}), we can now formulate our main result. Let $N_4^t$, $N_5^{s,t}$ and $N_{26}^{+}$ denote the nilpotent Lie groups with Lie algebras $(\hg_4, \lb_t)$, $(\hg_5,\lb_{s,t})$ and $(\hg_{26}^{+},\lb_{\pm})$, respectively (see Tables \ref{tablemin1} and \ref{tablemin2}). \begin{theorem} Let $(N,J)$ be a $6$-dimensional complex nilmanifold. Then $(N,J)$ admits a minimal metric if and only if $(N,J)$ is not holomorphically isomorphic to one of the following: \ $(N_4, J)$ \ (abelian), $\left(N_4^{1/4},J\right)$, $\left(N_5^{0,0},J\right)$ and $\left(N_{26}^{+},J^{\pm}\right)$. 
\end{theorem} \begin{remark} In the notation used in \cite{nilstruc}, the five exceptions above correspond to the following complex structures: \begin{align*} \hg_4: \quad & d\omega^1 = d\omega^2 = 0, \ d\omega^3 = \omega^{1\overline{1}} + \omega^{1\overline{2}} + (1/4) \omega^{2\overline{2}} \quad (\mbox{abelian}).\\ & d\omega^1 = d\omega^2 = 0, \ d\omega^3 = \omega^{12} + \omega^{1\overline{1}} + \omega^{1\overline{2}} + (1/4) \omega^{2\overline{2}}.\\ \hg_5: \quad & d\omega^1 = d\omega^2 = 0, \ d\omega^3 = \omega^{12} + \omega^{1\overline{1}}.\\ \hg_{26}^{+}: \quad & d\omega^1 = 0, \ d\omega^2 = \omega^{13} + \omega^{1\overline{3}}, \ d\omega^3 = i\omega^{1\overline{1}} \pm i(\omega^{1\overline{2}}-\omega^{2\overline{1}}). \end{align*} \end{remark} In Tables \ref{tablemin1} and \ref{tablemin2} we give explicitly the Lie algebras $\ngo$, indicating in the third column the condition under which $(N,J)$ admits a minimal compatible metric. In the last column, we add the condition under which the canonical metric $\ip$ (see (\ref{Jstand})) is minimal, that is, the cases in which $\Ricc_{\ip} = cI+D$ for some $c\in \RR$, $D\in\Der(\ngo)$. \footnotesize{ \begin{table} \centering \renewcommand{\arraystretch}{1.6} \begin{tabular}{|c|l|c|c|}\hline $\mathbf{\ngo}$ & \textbf{Bracket} & \textbf{Existence} & $\ip$ \textbf{minimal}\\ \hline\hline $\hg_2$ & $[e_1,e_2]=e_6$, \ $[e_1,e_3]=-e_5$, \ $[e_2,e_3]=-e_6$, & Yes & No\\ & $[e_3,e_4]=s e_5 + t e_6$; \ $s>0$, $t\in\RR$. & &\\ \hline $\hg_4$ & $[e_1,e_2]=e_6$, \ $[e_1,e_3]=-e_5$, \ $[e_2,e_3]=-e_6$, & $t\neq\frac{1}{4}$ & $t=-1$\\ & $[e_3,e_4]= t e_6$; \ $t\in\RR-\{0\}$. & &\\ \hline $\hg_5$ & $[e_1,e_3]=-e_5$, \ $[e_1,e_4]=-e_6$, & Yes & Yes\\ & $[e_2,e_3]=-e_6$, \ $[e_2,e_4]=e_5$. & &\\ \cline{2-4} & $[e_1,e_2]=2 e_6$, \ $[e_1,e_3]=-e_5$, \ $[e_1,e_4]=-e_6$, & $s\neq 0$ or $t\neq 0$ & $s^2+t^2=1$\\ & $[e_2,e_3]=-e_6$, \ $[e_2,e_4]=e_5$, \ $[e_3,e_4]=2s e_5 + 2t e_6$, & &\\ & $s\geq 0$, \ $t\in\RR$, \ $4s^2 < 1+4t$. 
& &\\ \cline{2-4} & $[e_1,e_2]=2 e_6$, \ $[e_1,e_3]=-(t+1)e_5$, \ $[e_1,e_4]=(t-1)e_6$, & Yes & No\\ & $[e_2,e_3]=-(t+1) e_6$, \ $[e_2,e_4]=(1-t)e_5$, \ $[e_3,e_4]=2s e_5$, & &\\ & with $(s,t)$ satisfying one of: & &\\ & $\bullet$ \ $0<t^2<\frac{1}{2}$, \ \ $0\leq s < \frac{t^2}{2}$ & &\\ & $\bullet$ \ $\frac{1}{2}\leq t^2<1$, \ $0\leq s < \frac{1-t^2}{2}$ & &\\ & $\bullet$ \ $t^2>1$, \ $0\leq s < \frac{t^2-1}{2}$ & &\\ \hline $\hg_6$ & $[e_1,e_2]=e_6$, \ $[e_1,e_3]=-e_5$, \ $[e_2,e_3]=-e_6$. & Yes & No\\ \hline $\hg_7$ & $[e_1,e_2]=e_4$, \ $[e_1,e_3]=-e_5$, \ $[e_2,e_3]=-e_6$. & Yes & Yes\\ \hline $\hg_{10}$ & $[e_1,e_2]=e_4$, \ $[e_2,e_3]=-e_6$, \ $[e_2,e_4]=e_5$. & Yes & Yes\\ \hline $\hg_{11}$ & $[e_1,e_2]=e_4$, \ $[e_1,e_3]=-t e_5$, \ $[e_2,e_3]=-e_6$, & Yes & No\\ & $[e_2,e_4]=(1-t) e_5$; \ $t<1$, $t\neq 0$. & &\\ \cline{2-4} & $[e_1,e_2]=e_4$, \ $[e_1,e_3]=-e_5$, \ $[e_1,e_4]=(t-1)e_6$, & Yes & No\\ & $[e_2,e_3]=-t e_6$; \ $t>1$. & &\\ \hline $\hg_{12}$ & $[e_1,e_2]=2e_4$, \ $[e_1,e_3]=-(s+1-\alpha)e_5 + t e_6$, & Yes & $\left(s-\frac{1}{2}\right)^2+t^2=\frac{1}{4}$\\ & $[e_1,e_4]=t e_5 + (s-1+\alpha)e_6$, \ $[e_2,e_3]=-t e_5 - (s+1+\alpha) e_6$, & &\\ & $[e_2,e_4]=-(s-1-\alpha)e_5 + t e_6$; \ $s, t \in\RR$, \ $t\neq 0$, & &\\ & with $\alpha:=\sqrt{(s-1)^2+t^2}$. & &\\ \hline $\hg_{13}$ & $[e_1,e_2]=2e_4$, \ $[e_1,e_3]=-(s+1-c)e_5 + t e_6$, & Yes & $c^2+s^2+t^2=1$\\ & $[e_1,e_4]=t e_5 + (s-1+c)e_6$, \ $[e_2,e_3]=-t e_5 - (s+1+c) e_6$, & &\\ & $[e_2,e_4]=-(s-1-c)e_5 + t e_6$; \ $s, t \in\RR$, \ $c\in\RR^{\geq 0}$, & &\\ & with $c,\beta$ satisfying: let $\alpha:=\sqrt{(s-1)^2+t^2}$, $\beta:=\sqrt{s^2+t^2}$, & &\\ & $c\neq \alpha$, \ $(c,\beta)\neq(0,1)$, \ $c^4-2(\beta^2+1)c^2+(\beta^2-1)^2 < 0$. 
& &\\ \hline $\hg_{14}$ & $[e_1,e_2]=2e_4$, \ $[e_1,e_3]=-(s+1-c)e_5 + t e_6$, & Yes & $c^2+s^2+t^2=1$\\ & $[e_1,e_4]=t e_5 + (s-1+c)e_6$, \ $[e_2,e_3]=-t e_5 - (s+1+c) e_6$, & &\\ & $[e_2,e_4]=-(s-1-c)e_5 + t e_6$; \ $s, t \in\RR$, \ $c\in\RR^{\geq 0}$, & &\\ & with $c,\beta$ satisfying: let $\alpha:=\sqrt{(s-1)^2+t^2}$, $\beta:=\sqrt{s^2+t^2}$, & &\\ & $c\neq \alpha$, \ $(c,\beta)\neq(0,1)$, \ $c^4-2(\beta^2+1)c^2+(\beta^2-1)^2 = 0$. & &\\ \hline \end{tabular} \end{table}} \begin{table} \centering \renewcommand{\arraystretch}{1.6} \begin{tabular}{|c|l|c|c|}\hline $\mathbf{\ngo}$ & \textbf{Bracket} & \textbf{Existence} & $\ip$ \textbf{minimal}\\ \hline\hline $\hg_{15}$ & $[e_1,e_2]=2e_4$, \ $[e_1,e_3]=-(s+1-c)e_5 + t e_6$, & Yes & $c^2+s^2+t^2=1$\\ & $[e_1,e_4]=t e_5 + (s-1+c)e_6$, \ $[e_2,e_3]=-t e_5 - (s+1+c) e_6$, & &\\ & $[e_2,e_4]=-(s-1-c)e_5 + t e_6$; \ $s, t \in\RR$, \ $c\in\RR^{\geq 0}$, & &\\ & with $c,\beta$ satisfying: let $\alpha:=\sqrt{(s-1)^2+t^2}$, $\beta:=\sqrt{s^2+t^2}$, & &\\ & $c\neq \alpha$, \ $(c,\beta)\neq(0,1)$, \ $c^4-2(\beta^2+1)c^2+(\beta^2-1)^2 > 0$. & &\\ \hline $\hg_{16}$ & $[e_1,e_2]=2e_4$, \ $[e_1,e_3]=-(s+1)e_5 + t e_6$, & Yes & $s^2+t^2=1$\\ & $[e_1,e_4]=t e_5 + (s-1)e_6$, \ $[e_2,e_3]=-t e_5 - (s+1) e_6$, & &\\ & $[e_2,e_4]=(1-s)e_5 + t e_6$; \ $s, t \in\RR$, \ $s^2+t^2=1$, $(s,t)\neq(1,0)$. & &\\ \hline \end{tabular} \vspace{0.3cm} \caption{Non-abelian Nilpotent complex structures.}\label{tablemin1}\vspace{-0.3cm} \end{table} \begin{table} \centering \renewcommand{\arraystretch}{1.6} \begin{tabular}{|c|l|c|c|}\hline $\mathbf{\ngo}$ & \textbf{Bracket} & \textbf{Existence} & $\ip$ \textbf{minimal}\\ \hline\hline $\hg_{19}^{-}$ & $[e_1,e_3]=\pm e_6$, \ $[e_1,e_5]=-e_3$, & Yes & Yes\\ & $[e_2,e_4]=\pm e_6$, \ $[e_2,e_5]=-e_4$. & &\\ \hline $\hg_{26}^{+}$ & $[e_1,e_2]= e_5$, \ $[e_1,e_3]=\pm e_6$, \ $[e_1,e_5]= -e_3$, & No & ------\\ & $[e_2,e_4]=\pm e_6$, \ $[e_2,e_5]=-e_4$. 
& &\\ \hline \end{tabular} \vspace{0.3cm} \caption{Non-nilpotent complex structures.}\label{tablemin2} \end{table} \newpage
\section{Introduction}\label{first} It is well known that type IIA (IIB) string theories contain two types of D-branes: the BPS Dp-branes, which have even (odd) $p$ in the type IIA (IIB) case, and the unstable non-BPS Dp-branes with odd (even) $p$; for a review see \cite{Sen:2004nf,Sen:1999mg}. Non-BPS D-branes are very important in string theory. For example, the BPS D-branes can be thought of as solitons in the worldvolume theory of the non-BPS ones \cite{Witten:1998cd,Witten:2000cn,Horava:1998jy}. However, as was stressed recently in \cite{Kutasov:2004ct}, there are many open questions about them that remain unanswered. For example, it is very remarkable that many aspects of the tachyon dynamics can be captured by a spacetime effective action of Dirac-Born-Infeld (DBI) type \cite{Sen:1999md,Garousi:2000tr,Bergshoeff:2000dq, Kluson:2000iy,Lambert:2003zr, Kutasov:2003er,Niarchos:2004rw} \begin{equation} S=-T^{non}_p\int d^{p+1}\xi \frac{1}{\cosh\frac{T}{\sqrt{2}}} \sqrt{-\det G} \ , \end{equation} where $T^{non}_p$ is the tension of the non-BPS Dp-brane and $G$ is the induced metric on the brane \begin{equation}\label{inmet} G_{\mu\nu}=\eta_{\mu\nu}+\partial_{\mu}T \partial_{\nu}T+\partial_{\mu}Y^I \partial_{\nu}Y^I \ . \end{equation} The scalar fields $Y^I \ (I=p+1,\dots,9)$ living on the worldvolume of the brane parameterise its location in the transverse space. The form of the induced metric (\ref{inmet}) suggests that the tachyon direction in field space should be treated as an extra dimension of space, like $Y^I$; however, there is then an important question: what is the meaning of the tachyon potential $V(T)$? In two recent papers by D.
Kutasov \cite{Kutasov:2004dj,Kutasov:2004ct} \footnote{Similar problems were discussed in \cite{Kluson:2004xc,Saremi:2004yd,Sahakyan:2004cq, Ghodsi:2004wn,Panigrahi:2004qr,Yavartanoo:2004wb}.} the precise analogy between the BPS D-brane moving in the background of NS5-branes and the tachyon dynamics on a non-BPS Dp-brane was demonstrated. In particular, in \cite{Kutasov:2004ct} the dynamics of a BPS D-brane propagating in the near horizon limit of $NS5$-branes with the transverse space $R^3\times S^1$ was considered as a useful toy model of the non-BPS D-brane. It was shown that from the point of view of an observer living on the 5+1 dimensional worldvolume of the fivebranes, the BPS D-branes of the full theory give rise to two kinds of objects in five dimensions. One consists of D-branes whose worldvolume lies entirely inside the worldvolume of the fivebranes. These D-branes are non-BPS: while both the fivebranes and the BPS D-branes separately preserve 16 supersymmetries, a background that contains both of them breaks all supersymmetry. The second kind consists of D-branes that wrap the circle transverse to the fivebranes. These D-branes preserve eight supercharges and are BPS. In summary, two kinds of D-branes emerge from different orientations of BPS D-branes in the space transverse to the fivebranes. In this paper we will continue the study of this interesting problem. Namely, we will consider a non-BPS D-brane embedded in the background of $k$ $NS5$-branes on transverse $R^3\times S^1$. We will explicitly show that the tachyon effective action in this background is invariant under a special transformation that maps the tachyon mode $T$, which is present on the worldvolume of a non-BPS Dp-brane even in flat spacetime, to the new tachyon field $\mathcal{T}$, which arises from the field redefinition of the worldvolume field $y$ that parameterises the position of a non-BPS Dp-brane on the circle $S^1$.
The existence of this symmetry, even if its physical origin is unclear to us, strongly suggests that it is correct to consider the tachyon mode as an additional embedding coordinate. However, the origin of this possible additional dimension is unclear at present. After the discussion of the general properties of the tachyon effective action in the fivebranes background we will study some solutions of its equations of motion. We will show that these solutions describe both non-BPS and BPS lower dimensional D-branes that are embedded in the fivebrane background with the transverse space $R^3\times S^1$. These solutions explicitly demonstrate that all branes in the fivebranes background arise through the tachyon condensation on the worldvolume of a non-BPS D-brane, in the same way as D-branes in flat spacetime can be thought of as solitonic solutions on a higher dimensional non-BPS D-brane. This paper is organised as follows. In the next section (\ref{second}) we will study the properties of a non-BPS Dp-brane in the NS5-branes background with transverse space $R^3\times S^1$. Then in section (\ref{third}) we will analyse some solutions of the equations of motion and give their physical interpretation. Finally, in the conclusion (\ref{fourth}) we will outline our results and suggest possible extensions of this work. \section{Tachyon effective action in the presence of NS5-branes on transverse $R^3\times S^1$} \label{second} To begin with we give a brief description of the system of $k$ $NS5$-branes on transverse $R^3\times S^1$, which we will label with coordinates $(\mathbf{Z},Y)$ with $\mathbf{Z}=(Z^1,Z^2,Z^3)\in R^3$ and $Y\sim Y+2\pi R$, where $R$ is the radius of $S^1$. The fivebranes are located at the points $\mathbf{Z}=Y=0$.
The background around them is \cite{Polchinski:1998rr} \begin{eqnarray}\label{BS} ds^2=dx^{\mu}dx_{\mu}+H(\mathbf{Z},Y) (d\mathbf{Z}^2+dY^2) \ , \nonumber \\ e^{2(\Phi-\Phi_0)}=H(\mathbf{Z},Y) \ , \nonumber \\ \end{eqnarray} where $x^{\mu}\in R^{5,1}$ label the worldvolume of the fivebranes and where $\Phi_0$ is related to the string coupling constant $g_s$ as $g_s=\exp \Phi_0$. The harmonic function $H$ in (\ref{BS}) has the form \begin{equation} H=1+k\sum_{n=-\infty}^{\infty} \frac{1}{(Y-2\pi Rn)^2+\mathbf{Z}^2}\ . \end{equation} We will be interested in the study of the system in the near-horizon limit, which can be defined by rescaling all distances by the factor $g_s$ \begin{equation} \mathbf{Z}=g_s\mathbf{z} \ , R=g_sr \ , Y=g_s y \end{equation} and sending $g_s\rightarrow 0$ while keeping the rescaled distances $(\mathbf{z},y,r)$ fixed. This leads to the background \begin{equation}\label{nhg} ds^2=dx^{\mu}dx_{\mu}+h(\mathbf{z},y)(d\mathbf{z}^2+dy^2) \ , e^{2\Phi}=h(\mathbf{z},y) \end{equation} with \begin{equation}\label{nhh} h(\mathbf{z},y)=k\sum_{n=-\infty}^{\infty} \frac{1}{(y-2\pi rn)^2+\mathbf{z}^2}= \frac{k}{2r|\mathbf{z}|} \frac{\sinh\left(\frac{|\mathbf{z}|}{r}\right)} {\cosh\left(\frac{|\mathbf{z}|}{r}\right) -\cos\left(\frac{y}{r}\right)} \ . \end{equation} We now place a non-BPS Dp-brane whose worldvolume is embedded entirely in $R^{5,1}$ in the geometry (\ref{nhg}) and (\ref{nhh}).
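The resummation of the periodic image sum into the closed form in (\ref{nhh}) can be checked numerically. The sketch below (Python; the values of $k$, $r$, $|\mathbf{z}|$, $y$ are illustrative and not taken from the paper) compares a truncated image sum with the closed form, and also checks the $|\mathbf{z}|\rightarrow 0$ limit $h(y)=k/(4r^2\sin^2\frac{y}{2r})$ that governs a brane sitting at $\mathbf{z}=0$:

```python
import math

def h_sum(z, y, r, k, N=200_000):
    """Truncated image sum for k NS5-branes on R^3 x S^1 (z = |z| > 0)."""
    return k * sum(1.0 / ((y - 2.0 * math.pi * r * n) ** 2 + z * z)
                   for n in range(-N, N + 1))

def h_closed(z, y, r, k):
    """Resummed closed form of the harmonic function."""
    a = z / r
    return (k / (2.0 * r * z)) * math.sinh(a) / (math.cosh(a) - math.cos(y / r))

z, y, r, k = 0.7, 1.3, 1.0, 4.0    # illustrative values
rel_err = abs(h_sum(z, y, r, k) - h_closed(z, y, r, k)) / h_closed(z, y, r, k)

# z -> 0 limit: h(y) = k / (4 r^2 sin^2(y/2r)) on the brane locus z = 0
h_limit = k / (4.0 * r ** 2 * math.sin(y / (2.0 * r)) ** 2)
lim_err = abs(h_closed(1e-6, y, r, k) - h_limit) / h_limit
```

The truncation error of the image sum falls off like $1/N$, so agreement to a few parts in $10^{6}$ is expected here.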
The action for a non-BPS Dp-brane in this background takes the form \begin{eqnarray}\label{nonact2} S=-\int d^{p+1}\xi \frac{V(T)}{\sqrt{h(\mathbf{z},y)}} \sqrt{-\det G_{\mu\nu}}= \ \nonumber \\ =-\int d^{p+1}\xi \sqrt{-\det \eta} \frac{V(T)}{\sqrt{h(\mathbf{z},y)}} \sqrt{\det(\mathbf{I}+\mathbf{M})} \ , \nonumber \\ \end{eqnarray} where \begin{equation} G_{\mu\nu}=\eta_{\mu\nu}+ h(\mathbf{z},y)\left(\partial_{\mu}z^i \partial_{\nu}z^i+\partial_{\mu}y\partial_{\nu}y \right) +\partial_{\mu}T\partial_{\nu}T \end{equation} and where we have also introduced the $(p+1)\times (p+1)$ unit matrix $\mathbf{I}^{\mu}_{\nu}=\delta^{\mu}_{\nu}$ together with the $(p+1)\times (p+1)$ matrix $\mathbf{M}$ \begin{equation} \mathbf{M}^{\mu}_{\nu}=h(\mathbf{z},y)(\partial^{\mu}z^i \partial_{\nu}z^i+\partial^{\mu}y\partial_{\nu}y) +\partial^{\mu}T\partial_{\nu}T \ . \end{equation} The action (\ref{nonact2}) describes the non-BPS Dp-brane that is localised in the transverse space labelled with $\mathbf{z},y$. As in the case of the BPS Dp-brane studied in \cite{Kutasov:2004ct} we will be interested in the study of the dynamics of the mode $y$. For that reason we should show that $\mathbf{z}$ can be set to values that solve the equations of motion arising from (\ref{nonact2}) \begin{eqnarray}\label{eqSx} -\frac{1}{2h^{3/2}}\partial_{z^i}h V(T) \sqrt{\det(\mathbf{I}+\mathbf{M})}-\nonumber \\ -\partial_{\kappa}\left[\eta^{\kappa\mu} \sqrt{h}V(T)\partial_{\nu}z^i(\mathbf{I}+\mathbf{M})^{-1\nu}_{\mu}\sqrt{\det (\mathbf{I}+\mathbf{M})}\right]+\nonumber \\ +\frac{V(T)}{\sqrt{h}}\partial^{\mu}x^m\partial_{\nu}x^m \partial_{z^i}h (\mathbf{I}+\mathbf{M})^{-1\nu}_{\mu} \sqrt{\det(\mathbf{I}+\mathbf{M})}=0 \ , \nonumber \\ \end{eqnarray} where $x^m\equiv(\mathbf{z},y)$. For constant $\mathbf{z}$ the equation of motion (\ref{eqSx}) takes the simple form \begin{equation} \partial_{z^i} h=0 \ \ .
\end{equation} Using the form of $h$ given in (\ref{nhh}) it is easy to see that this equation has the solution $z^i=0$. Then one can set the fields $z^i$ to their minimum at $z^i=0$ and consider the dynamics of $y$ and $T$ only. Then the non-BPS Dp-brane action takes the form \begin{equation}\label{nonact3} S=-\int d^{p+1}\xi \frac{V(T)}{\sqrt{h(y)}}\sqrt{-\det(\eta_{\mu\nu} +h\partial_{\mu}y\partial_{\nu}y+ \partial_{\mu}T\partial_{\nu}T)} \ , \end{equation} where $V(T)$ is equal to \begin{equation} V(T)=\frac{T^{non}_p}{\cosh\frac{T}{\sqrt{2}}} \ , \end{equation} and where $T^{non}_p$ is defined such that the tension of a non-BPS Dp-brane in flat spacetime is $T^{non}_p/g_s$. Note that $T^{non}_p$ is related to the quantity that appears in the action for a BPS Dp-brane as $T^{non}_p=\sqrt{2}T^{BPS}_p$. Now, following \cite{Kutasov:2004dj,Kutasov:2004ct}, we introduce a ``new'' tachyon field $\mathcal{T}$ that is related to $y$ through the relation \begin{equation} \frac{d\mathcal{T}}{dy}= \sqrt{h(y)}=\frac{\sqrt{k}} {2r\sin\frac{y}{2r}} \ . \end{equation} This differential equation has the solution \begin{equation} e^{-\frac{\mathcal{T}}{\sqrt{k}}}= C_0\,\frac{\cos\frac{y}{4r}}{\sin\frac{y}{4r}} \ . \end{equation} Now if we demand that for $y=\pi r$ the tachyon field $\mathcal{T}$ is equal to zero we get $C_0=1$.
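Equivalently, the particular solution can be written as $\mathcal{T}(y)=\sqrt{k}\ln\tan\frac{y}{4r}$. A small numerical sketch (Python; the values of $k$ and $r$ are illustrative) checks the defining relation $d\mathcal{T}/dy=\sqrt{h(y)}$ and the identity $\sin\frac{y}{2r}\,\cosh\frac{\mathcal{T}}{\sqrt{k}}=1$, which is what allows $h(y)$ to be rewritten as a function of $\mathcal{T}$:

```python
import math

k, r = 3.0, 0.5                    # illustrative values

def calT(y):                       # T(y) = sqrt(k) ln tan(y / 4r)
    return math.sqrt(k) * math.log(math.tan(y / (4.0 * r)))

def sqrt_h(y):                     # sqrt(h(y)) = sqrt(k) / (2 r sin(y / 2r))
    return math.sqrt(k) / (2.0 * r * math.sin(y / (2.0 * r)))

y, eps = 1.1 * r, 1e-6             # a sample point in (0, 2 pi r)
deriv = (calT(y + eps) - calT(y - eps)) / (2.0 * eps)   # numerical dT/dy
identity = math.sin(y / (2.0 * r)) * math.cosh(calT(y) / math.sqrt(k))
```

Both checks are exact identities, so the numerical agreement is limited only by the finite-difference step.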
Then we obtain \begin{equation} h(y(\mathcal{T}))=\frac{k}{4r^2}\cosh^2 \frac{\mathcal{T}}{\sqrt{k}} \ \end{equation} and hence the tachyon effective action (\ref{nonact3}) can be written as \begin{eqnarray}\label{mTT} S=-\int d^{p+1}\xi \mathcal{V}(T,\mathcal{T})\sqrt{-\det(\eta_{\mu\nu} +\partial_{\mu}\mathcal{T}\partial_{\nu}\mathcal{T}+ \partial_{\mu}T\partial_{\nu}T)}= \nonumber \\ =-\int d^{p+1}\xi \mathcal{V}(T,\mathcal{T}) \sqrt{\det(\mathbf{I}+ \mathbf{M})} \ , \nonumber \\ \mathcal{V}(\mathcal{T},T)=\frac{\tau_p}{ \cosh\frac{\mathcal{T}}{\sqrt{k}}\cosh \frac{T}{\sqrt{2}}} \ , \nonumber \\ (\mathbf{I}+\mathbf{M})_{\mu}^{\nu} =\delta^{\nu}_{\mu}+ \eta^{\nu\kappa} \partial_{\mu} \mathcal{T}\partial_{\kappa}\mathcal{T}+ \eta^{\nu\kappa}\partial_{\mu} T\partial_{\kappa}T \ , \nonumber \\ \end{eqnarray} where \begin{equation} \tau_p=\frac{2T_p^{non}R} {\sqrt{k}g_s} \ . \end{equation} Before we proceed to the solution of the equations of motion that arise from (\ref{mTT}) we would like to say a few words about the symmetries of the action (\ref{mTT}) that relate $\mathcal{T}$ with $T$. In particular, we see that this action is invariant under the transformation that maps $T\ , \mathcal{T} , A\equiv k \ , B\equiv 2 $ into the new fields $T' , \mathcal{T}'$ and new parameters $A',B'$ given as \begin{equation} \mathcal{T}'=T \ , T'=\mathcal{T} \ , A'=B \ , B'=A \ \end{equation} so that $S(T',\mathcal{T}',A',B')=S(T,\mathcal{T},A,B)$. In fact, one can find a more general symmetry of the action (\ref{mTT}).
For that reason let us introduce two-component vectors $X^I,Y^I \ , I=1,2$, defined as \begin{equation} X=\left(\frac{1}{\sqrt{k}},\frac{1}{\sqrt{2}}\right) \ , Y=\left(\frac{1}{\sqrt{k}},-\frac{1}{\sqrt{2}}\right) \end{equation} that allow us to rewrite the tachyon potential $\mathcal{V}(T,\mathcal{T})$ as \begin{eqnarray} \frac{\tau_p}{ \cosh\frac{\mathcal{T}}{\sqrt{k}} \cosh\frac{T}{\sqrt{2}}}= \frac{2\tau_p}{ \left(\cosh(\frac{\mathcal{T}}{\sqrt{k}}+ \frac{T}{\sqrt{2}})+ \cosh(\frac{\mathcal{T}}{\sqrt{k}}- \frac{T}{\sqrt{2}})\right)}= \nonumber \\ =\frac{2\tau_p}{ \left(\cosh(X^I\delta_{IJ}\mathbf{T}^J)+ \cosh(Y^I\delta_{IJ}\mathbf{T}^J) \right)} \ , \nonumber \\ \end{eqnarray} where we have introduced the two-component vector $\mathbf{T}=(\mathcal{T},T)$. In this notation the tachyon effective action (\ref{nonact3}) takes the form \begin{equation}\label{tachS} S=-\int d^{p+1}\xi \frac{2\tau_p} {\left(\cosh(X^I\delta_{IJ}\mathbf{T}^J)+ \cosh(Y^I\delta_{IJ}\mathbf{T}^J) \right)}\sqrt{-\det(\eta_{\mu\nu} +\partial_{\mu}\mathbf{T}^I\partial_{\nu} \mathbf{T}^J\delta_{IJ})} \ . \end{equation} Now it is easy to see that this action is invariant under the following transformations \begin{equation}\label{sym} X'^I=\Lambda^I_JX^J \ , Y'^I=\Lambda^I_JY^J \ , \mathbf{T}'^I= \Lambda^I_J\mathbf{T}^J \ , \end{equation} where $\Lambda^I_J$ obeys $\Lambda^I_K\delta_{IJ}\Lambda^J_L= \delta_{KL}$. We believe that the existence of the transformation (\ref{sym}) is very attractive since it relates the tachyon field $T$ with the field $\mathcal{T}$ and then in some sense suggests the geometrical nature of $T$. Unfortunately, it is also clear that this form of symmetry is rather unusual and its possible physical origin is unclear at present. In fact, we do not understand how $T$ and $\mathcal{T}$ could be related in such a simple way when we know that their origin is completely different. Secondly, we also do not understand the physical meaning of the vectors $X$ and $Y$ defined above.
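The invariance under (\ref{sym}) simply reflects the fact that the potential depends on $\mathbf{T}$ only through the inner products $X^I\delta_{IJ}\mathbf{T}^J$ and $Y^I\delta_{IJ}\mathbf{T}^J$, which orthogonal $\Lambda$ preserve. A minimal numerical sketch (Python; the rotation angle and field values are illustrative, and $\tau_p$ is set to $1$):

```python
import math

k = 5.0                                            # illustrative
X = (1.0 / math.sqrt(k),  1.0 / math.sqrt(2.0))
Y = (1.0 / math.sqrt(k), -1.0 / math.sqrt(2.0))

def potential(X, Y, T):                            # 2 / (cosh(X.T) + cosh(Y.T)), tau_p = 1
    dot = lambda a, b: a[0] * b[0] + a[1] * b[1]
    return 2.0 / (math.cosh(dot(X, T)) + math.cosh(dot(Y, T)))

def rotate(v, th):                                 # an orthogonal Lambda
    c, s = math.cos(th), math.sin(th)
    return (c * v[0] - s * v[1], s * v[0] + c * v[1])

T = (0.8, -1.3)                                    # (mathcal{T}, T), illustrative
th = 0.37
v0 = potential(X, Y, T)
v1 = potential(rotate(X, th), rotate(Y, th), rotate(T, th))

# v0 also equals the product form 1 / (cosh(T_1/sqrt(k)) cosh(T_2/sqrt(2)))
v_prod = 1.0 / (math.cosh(T[0] / math.sqrt(k)) * math.cosh(T[1] / math.sqrt(2.0)))
```

The kinetic term $\partial_{\mu}\mathbf{T}^I\partial_{\nu}\mathbf{T}^J\delta_{IJ}$ is invariant for the same reason, so the whole action (\ref{tachS}) is.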
On the other hand, one can hope that the discovery of all possible symmetries of the tachyon effective action could be helpful for a better understanding of the meaning of the tachyon in string theory. \section{Solutions of the equations of motion} \label{third} In this section we would like to study the equations of motion for $T$ and $\mathcal{T}$ that arise from the action (\ref{mTT}) \begin{eqnarray}\label{eqmTT} -\frac{\sinh\frac{\mathcal{T}}{\sqrt{k}}} {\sqrt{k}\cosh^2\frac{\mathcal{T}}{\sqrt{k}} \cosh\frac{T}{\sqrt{2}}}\sqrt{\det (\mathbf{I}+\mathbf{M})}- \partial_{\mu}\left[ \frac{1}{\cosh\frac{\mathcal{T}}{\sqrt{k}} \cosh\frac{T}{\sqrt{2}}}\frac{\eta^{\mu\kappa} \partial_{\nu}\mathcal{T}(\mathbf{I}+\mathbf{M})^{-1 \nu}_{\kappa}} {\sqrt{\det(\mathbf{I}+\mathbf{M})}}\right]=0 \ , \nonumber \\ -\frac{\sinh\frac{T}{\sqrt{2}}} {\sqrt{2}\cosh\frac{\mathcal{T}}{\sqrt{k}} \cosh^2\frac{T}{\sqrt{2}}}\sqrt{\det (\mathbf{I}+\mathbf{M})}- \partial_{\mu}\left[ \frac{1 }{\cosh\frac{\mathcal{T}}{\sqrt{k}} \cosh\frac{T}{\sqrt{2}}} \frac{\eta^{\mu\kappa} \partial_{\nu}T(\mathbf{I}+\mathbf{M})^{-1 \nu} _{\kappa}} {\sqrt{\det(\mathbf{I}+\mathbf{M})}}\right]=0 \ . \nonumber \\ \end{eqnarray} For later purposes it will also be useful to calculate the stress energy tensor from the action (\ref{mTT}). In order to do this we replace the flat worldvolume metric $\eta_{\mu\nu}$ with an arbitrary metric $g_{\mu\nu}$.
Using now the fact that for an action of the form \begin{equation} S=-\int d^{p+1}\xi \sqrt{-g}\mathcal{L} \end{equation} the stress energy tensor $T_{\mu\nu}=-\frac{2}{\sqrt{-g}}\frac{\delta S}{\delta g^{\mu\nu}}$ is equal to \begin{equation} T_{\mu\nu}=-g_{\mu\nu}\mathcal{L}+ 2\frac{\delta \mathcal{L}}{\delta g^{\mu\nu}} \end{equation} we obtain from (\ref{mTT}) \begin{equation}\label{strenergy} T_{\mu\nu}=-\eta_{\mu\nu}\mathcal{V} \sqrt{\det(\mathbf{I}+\mathbf{M})}+\mathcal{V}(\partial_{\nu}\mathcal{T} \partial_{\kappa}\mathcal{T}+\partial_{\nu}T \partial_{\kappa}T)(\mathbf{I}+\mathbf{M})^{-1\kappa}_{\mu} \sqrt{\det(\mathbf{I}+\mathbf{M})} \ . \end{equation} Now we are ready to study some solutions of the equations of motion (\ref{eqmTT}). We begin with the case when $\mathcal{T}$ is time dependent while the tachyon $T$ depends on one spatial coordinate $x\equiv \xi^1$. Then \begin{equation}\label{Mts} \mathbf{M}=\left(\begin{array}{ccc} -\dot{\mathcal{T}}^2 & 0 & 0 \\ 0 &T'^2 & 0 \\ 0 & 0 & 0 \\ \end{array}\right) \ , \det(\mathbf{I}+\mathbf{M})=(1-\dot{\mathcal{T}}^2)(1+T'^2) \ , T'\equiv \frac{dT}{dx} \ \end{equation} and hence the components of the stress energy tensor are equal to \begin{eqnarray} T_{00}=\frac{\mathcal{V}(1+T'^2)} {\sqrt{(1-\dot{\mathcal{T}}^2)(1+T'^2)}} \ , T_{0i}=0 \ , i=1,\dots, p\nonumber \\ T_{xx}=-\frac{\mathcal{V}(1-\dot{\mathcal{T}}^2)} {\sqrt{(1-\dot{\mathcal{T}}^2)(1+T'^2)}} \ , T_{ix}=T_{xi}=0 \ , i=2,\dots, p \nonumber \\ T_{ij}=-\delta_{ij}\mathcal{V} \sqrt{(1-\dot{\mathcal{T}}^2)(1+T'^2)} \ , i , j=2,\dots, p \ .
\nonumber \\ \end{eqnarray} Now for the matrix $\mathbf{M}$ given in (\ref{Mts}) the equations of motion (\ref{eqmTT}) take the form \begin{eqnarray}\label{eqmTT1} \frac{\sqrt{1+T'^2}}{\cosh \frac{T}{\sqrt{2}}} \left[-\frac{\sinh\frac{\mathcal{T}}{\sqrt{k}} \sqrt{1-\dot{\mathcal{T}}^2}} {\sqrt{k}\cosh^2\frac{\mathcal{T}}{\sqrt{k}}} +\partial_0\left( \frac{1}{\cosh\frac{\mathcal{T}}{\sqrt{k}}} \frac{\partial_0\mathcal{T}} {\sqrt{1-\dot{\mathcal{T}}^2}}\right) \right]=0 \nonumber \\ \frac{\sqrt{1-\dot{\mathcal{T}}^2}}{\cosh \frac{\mathcal{T}}{\sqrt{k}}} \left[-\frac{\sinh\frac{T}{\sqrt{2}} \sqrt{1+T'^2}} {\sqrt{2}\cosh^2\frac{T}{\sqrt{2}}} -\partial_x\left( \frac{1}{\cosh\frac{T}{\sqrt{2}}} \frac{\partial_xT} {\sqrt{1+T'^2}}\right) \right]=0 \ . \nonumber \\ \end{eqnarray} These expressions explicitly show that $\mathcal{T}$ and $T$ decouple. In particular, let us consider the last equation in (\ref{eqmTT1}). If we define \begin{equation}\label{Vx} V(T,m)=\frac{1}{\cosh\frac{T} {\sqrt{m}}} \end{equation} then the expression in the bracket can be written as \begin{eqnarray} \frac{\delta V(T,2)}{\delta T}\sqrt{1+T'^2} -\partial_x\left(\frac{V(T,2)\partial_xT} {\sqrt{1+T'^2}}\right)=0 \Rightarrow \nonumber \\ \Rightarrow \partial_x\left(\frac{V(T,2)}{\sqrt{1+T'^2}}\right)=0 \Rightarrow \frac{V(T,2)}{\sqrt{1+T'^2}}=p \ , \nonumber \\ \end{eqnarray} where $p$ is an integration constant. Using now the specific form of the potential $V(T,2)$ given in (\ref{Vx}) it is easy to find the dependence of $T$ on $x$ \begin{equation} \sinh \frac{T}{\sqrt{2}}= \frac{\sqrt{1-p^2}}{p}\sin \frac{x}{\sqrt{2}} \ .
\end{equation} For such a configuration the spatially dependent energy density is equal to \begin{eqnarray} \rho(x)=T_{00}(x)=\frac{\tau_p V(\mathcal{T},k)}{\sqrt{1-\dot{\mathcal{T}}^2}} V(T,2)\sqrt{1+T'^2} =\nonumber \\ =\frac{\tau_pV(\mathcal{T},k)} {\sqrt{1-\dot{\mathcal{T}}^2}} \frac{p}{p^2+(1-p^2)\sin^2 \frac{x}{\sqrt{2}}} \ .\nonumber \\ \end{eqnarray} The physical interpretation of this solution is in terms of an array of D(p-1)-branes and D(p-1)-antibranes \cite{Kim:2003ma,Brax:2003rs,Kim:2003in} that moves toward the worldvolume of the $NS5$-branes. To see this more explicitly let us now solve the equation of motion for $\mathcal{T}$ (\ref{eqmTT1}), which can be written as \begin{eqnarray} \frac{\delta V(\mathcal{T},k)}{\delta \mathcal{T}} \sqrt{1-\dot{\mathcal{T}}^2}+ \partial_0 \left(\frac{V(\mathcal{T},k)\dot{\mathcal{T}}} {\sqrt{1-\dot{\mathcal{T}}^2}}\right)=0 \Rightarrow \nonumber \\ \Rightarrow \partial_0\left(\frac{V(\mathcal{T},k)}{\sqrt{1-\dot{\mathcal{T}}^2}} \right)=0 \Rightarrow \frac{V(\mathcal{T},k)}{\sqrt{1-\dot{\mathcal{T}}^2}}=e \ . \nonumber \\ \end{eqnarray} Using again (\ref{Vx}) we obtain \begin{eqnarray} \sinh\frac{\mathcal{T}} {\sqrt{k}}= \frac{\sqrt{e^2-1}}{e} \sinh\left(\frac{t} {\sqrt{k}}+t_0\right) \ . \nonumber \\ \end{eqnarray} We can fix the constant $t_0$ from the requirement that at time $t=0$ the non-BPS Dp-brane sits at the point $y=\pi r \ (\mathcal{T}=0)$. As a result $t_0$ should be equal to zero. Then \begin{equation} e=\frac{1}{\sqrt{1-\dot{\mathcal{T}}^2_0}} \Rightarrow \dot{\mathcal{T}}^2_0=\frac{e^2-1}{e^2} \end{equation} and hence $e$ is related to the velocity $\dot{\mathcal{T}}$ at time $t=0$. In summary, we have obtained a solution in which the spatially dependent tachyon condensation of $T$ results in the emergence of an array of D(p-1)-branes and D(p-1)-antibranes.
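Both first integrals can be verified numerically. The sketch below (Python; the constants $p$, $e$, $k$ are illustrative) checks that the profiles found above keep $V(T,2)/\sqrt{1+T'^2}$ and $V(\mathcal{T},k)/\sqrt{1-\dot{\mathcal{T}}^2}$ constant along the solution:

```python
import math

def V(T, m):                                       # V(T, m) = 1 / cosh(T / sqrt(m))
    return 1.0 / math.cosh(T / math.sqrt(m))

def ndiff(f, u, eps=1e-6):                         # central finite difference
    return (f(u + eps) - f(u - eps)) / (2.0 * eps)

p, e, k = 0.6, 1.5, 3.0                            # illustrative constants

def T_kink(x):      # sinh(T/sqrt(2)) = (sqrt(1-p^2)/p) sin(x/sqrt(2))
    return math.sqrt(2.0) * math.asinh(math.sqrt(1.0 - p * p) / p * math.sin(x / math.sqrt(2.0)))

def calT(t):        # sinh(T/sqrt(k)) = (sqrt(e^2-1)/e) sinh(t/sqrt(k))
    return math.sqrt(k) * math.asinh(math.sqrt(e * e - 1.0) / e * math.sinh(t / math.sqrt(k)))

# the conserved quantities, sampled at a few points
Ps = [V(T_kink(x), 2.0) / math.sqrt(1.0 + ndiff(T_kink, x) ** 2) for x in (0.1, 0.7, 1.9)]
Es = [V(calT(t), k) / math.sqrt(1.0 - ndiff(calT, t) ** 2) for t in (0.0, 0.5, 1.2)]
```

Every element of `Ps` should reproduce the integration constant $p$, and every element of `Es` the energy parameter $e$.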
Since non-BPS Dp-branes in type IIA (IIB) theories have odd (even) $p$, the tachyon condensation leads to the emergence of BPS D(p-1)-branes with even (odd) $p$. However, the configuration in which these D-branes are inserted in the background of $NS5$-branes is unstable and hence these D-branes move toward the worldvolume of the NS5-branes. This situation is described by the time dependent condensation of the field $\mathcal{T}$. Let us now consider the situation when $\mathcal{T}$ is a function of $x$ and $T$ is a function of $t$. It is clear that we could proceed in the same way as in the previous example; however, in order to obtain a clear physical meaning of the resulting configuration, it will be useful to construct the singular kink following the analysis performed in \cite{Sen:2003tm}. First of all, the equations of motion (\ref{eqmTT}) for $\mathcal{T}=\mathcal{T}(x)$ and for $T=T(t)$ take the form \begin{eqnarray}\label{eqmTT2} \frac{\sqrt{1-\dot{T}^2}}{\cosh \frac{T}{\sqrt{2}}} \left[-\frac{\sinh\frac{\mathcal{T}}{\sqrt{k}} \sqrt{1+\mathcal{T}'^2}} {\sqrt{k}\cosh^2\frac{\mathcal{T}}{\sqrt{k}}} -\partial_x\left( \frac{1}{\cosh\frac{\mathcal{T}}{\sqrt{k}}} \frac{\partial_x\mathcal{T}} {\sqrt{1+\mathcal{T}'^2}}\right) \right]=0 \ , \nonumber \\ \frac{\sqrt{1+\mathcal{T}'^2}}{\cosh \frac{\mathcal{T}}{\sqrt{k}}} \left[-\frac{\sinh\frac{T}{\sqrt{2}} \sqrt{1-\dot{T}^2}} {\sqrt{2}\cosh^2\frac{T}{\sqrt{2}}} +\partial_0\left( \frac{1}{\cosh\frac{T}{\sqrt{2}}} \frac{\partial_0T} {\sqrt{1-\dot{T}^2}}\right) \right]=0 \ .\nonumber \\ \end{eqnarray} Now the equation of motion for $\mathcal{T}$ implies \begin{equation} \partial_x\left( \frac{V(\mathcal{T},k)}{\sqrt{1+\mathcal{T}'^2}}\right)=0 \ \end{equation} which means that the expression in the bracket does not depend on $x$.
Since for a kink solution $\mathcal{T}\rightarrow \pm \infty$ as $x\rightarrow \pm \infty$ and $V(\mathcal{T},k) \rightarrow 0$ in this limit, we obtain that the expression in the bracket vanishes for $x\rightarrow \infty$, and its independence of $x$ then implies that it vanishes everywhere. This in turn implies that we should have \begin{equation} \mathcal{T}=\pm \infty \ \mathrm{or} \ \partial_x\mathcal{T}=\infty \ \mathrm{(or \ both)} \ \mathrm{for \ all} \ x \ . \end{equation} Clearly this solution looks singular. We will show, following \cite{Sen:2003tm}, that this solution has a finite energy density that is localised on a codimension one subspace; however, the interpretation is slightly different from that of the tachyon kink on a non-BPS Dp-brane in flat spacetime. To see this let us consider the field configuration \begin{equation}\label{fd} \mathcal{T}(x)=f(ax) \ , f(u)=-f(-u) \ , f'(u)>0 \ \forall u \ , f(\pm \infty)=\pm \infty \end{equation} that in the limit $a\rightarrow \infty$ looks singular, as expected. For this solution, however, we get \begin{equation} \frac{V(\mathcal{T},k)} {\sqrt{1+\mathcal{T}'^2}}= \frac{V(f(ax),k)}{\sqrt{1+a^2f'^2(ax)}} \end{equation} which vanishes everywhere in the limit $a\rightarrow \infty$ since the numerator vanishes (except at $x=0$) and the denominator blows up everywhere. Using this solution it is easy to find the other components of the stress energy tensor \begin{eqnarray} T_{00}(x)=\frac{\tau_pV(T,2)}{\sqrt{ 1-\dot{T}^2}}V(\mathcal{T},k) \sqrt{1+\mathcal{T}'^2}= \frac{\tau_pV(T,2)}{\sqrt{1-\dot{T}^2}} V(f(ax),k) af'(ax) \ , \nonumber \\ T_{ij}(x)=-\delta_{ij}\tau_pV(T,2) \sqrt{1-\dot{T}^2}V(\mathcal{T},k) \sqrt{1+\mathcal{T}'^2}=\nonumber \\ = -\delta_{ij}\tau_pV(T,2) \sqrt{1-\dot{T}^2} V(f(ax),k) af'(ax) \nonumber \\ \end{eqnarray} in the limit $a\rightarrow \infty$.
Then the integrated $T_{00} \ , T_{ij}$ associated with the codimension one solution are equal to \begin{eqnarray} T^{kink}_{00}= \int dx T_{00}= \frac{\tau_pV(T,2)}{\sqrt{ 1-\dot{T}^2}}\int dx V(f(ax),k) af'(ax)= \frac{\tau_pV(T,2)}{\sqrt{ 1-\dot{T}^2}}\int V(y,k)dy \ , \nonumber \\ T^{kink}_{ij}=-\delta_{ij} \tau_pV(T,2) \sqrt{1-\dot{T}^2} \int V(y,k)dy \ , \nonumber \\ \end{eqnarray} where $y=f(ax)$. Thus $T^{kink}_{\alpha\beta} \ , \alpha , \beta= 0,2,\dots,p$ depend on $V$ and not on the form of $f(u)$. It is clear from the exponential fall-off of $V$ that most of the contribution comes from a finite range of $y$. In fact, in the limit $a\rightarrow \infty$ the stress energy tensor $T^{kink}_{\alpha\beta}$ is localised on a codimension one D(p-1)-brane with the tension given as \begin{equation} T_{p-1}=\tau_p\int dy V(y,k)=\tau_p \int \frac{dy}{\cosh\frac{y}{\sqrt{k}}} =\pi\sqrt{k}\,\tau_p =\frac{2\pi T_p^{non}R}{g_s} \end{equation} using the fact that $\tau_p$ is equal to \begin{equation} \tau_p=\frac{2T^{non}_pR} {\sqrt{k}g_s} \ . \end{equation} Finally we obtain \begin{equation} T_{00}^{kink}=\delta(x) T_{p-1}\frac{V(T,2)}{\sqrt{1-\dot{T}^2}} \ , T_{ij}=-\delta(x)\delta_{ij}T_{p-1}V(T,2) \sqrt{1-\dot{T}^2} \ . \end{equation} The geometrical meaning of this solution is clear \cite{Kutasov:2004ct}. Since $\mathcal{T}$ is directly related to the coordinate $y$ that parameterises the position of a non-BPS Dp-brane on the transverse circle, the singular kink solution corresponds to a non-BPS D-brane that sits on top of the fivebranes for all $x<0$, then at $x=0$ goes around the $y$ circle and returns to the fivebranes at $y=2\pi R$, where it stays for all $x>0$. This describes a non-BPS Dp-brane wrapped around the transverse circle.
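The $f$- and $a$-independence of the integrated kink tension can also be seen numerically. The sketch below (Python; the choice $f(u)=\sinh u$ and the value of $k$ are arbitrary illustrations, with $\tau_p$ set to $1$) checks that $\int V(f(ax),k)\,af'(ax)\,dx=\int V(y,k)\,dy=\pi\sqrt{k}$ for two different regulators $a$:

```python
import math

k = 4.0
f, fp = math.sinh, math.cosh        # an arbitrary odd, increasing profile f(u)

def sech(u):                        # overflow-safe 1 / cosh
    u = abs(u)
    return 2.0 * math.exp(-u) / (1.0 + math.exp(-2.0 * u))

def kink_energy(a, L=20.0, n=200_000):
    """Midpoint rule for int dx V(f(a x), k) a f'(a x) over [-L, L]."""
    dx = 2.0 * L / n
    total = 0.0
    for i in range(n):
        x = -L + (i + 0.5) * dx
        total += sech(f(a * x) / math.sqrt(k)) * a * fp(a * x) * dx
    return total

exact = math.pi * math.sqrt(k)      # int dy sech(y / sqrt(k)) = pi sqrt(k)
```

The energy density becomes a narrower and taller spike as $a$ grows, but its integral stays pinned at $\pi\sqrt{k}$.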
Then the time dependent solution of the equation of motion for $T$ \begin{equation} \sinh \frac{T}{\sqrt{2}}= \sinh\frac{t}{\sqrt{2}} \end{equation} describes the decay of this Dp-brane into the closed string vacuum \cite{Sen:2004nf}. To obtain a BPS-like D-brane that is stable in the NS5-brane background we should consider the situation when both $\mathcal{T}$ and $T$ are spatially dependent. For that reason we take the following ansatz \begin{equation}\label{Tan} T(x^1)=f_1(x^1) \ , x^1\equiv \xi^{p-1} \ , \mathcal{T}(x^2)=f_2(x^2) \ , x^2\equiv \xi^p \ , \end{equation} where $f_i(u)$ are functions with the properties given in (\ref{fd}). For this ansatz the matrix $\mathbf{I}+\mathbf{M}$ takes the form \begin{equation} \mathbf{I}+\mathbf{M}= \left(\begin{array}{ccc} \mathbf{I}_{(p-1)\times (p-1)} & 0 & 0 \\ 0 & 1+(\partial_1T)^2 & 0 \\ 0 & 0 & 1+(\partial_2\mathcal{T})^2 \\ \end{array} \right) \ , \end{equation} where $\partial_1\equiv \partial_{x^1} \ , \partial_2\equiv \partial_{x^2}$. Then the components of the stress energy tensor are equal to \begin{eqnarray} T_{00}=\tau_pV(T,2)V(\mathcal{T},k) \sqrt{(1+(\partial_2\mathcal{T})^2) (1+(\partial_1T)^2)} \ , T_{0i}=0 \ , i=1,\dots, p \ , \nonumber \\ T_{x^1x^1}=-\tau_pV(\mathcal{T},k)\sqrt{1+ (\partial_2\mathcal{T})^2} \frac{V(T,2)}{\sqrt{1+(\partial_1T)^2}} \ , T_{x^1i}=0 \ , i=1,\dots,p-1 \nonumber \\ T_{x^2x^2}=-\tau_pV(T,2)\sqrt{1+(\partial_1T)^2} \frac{V(\mathcal{T},k)}{\sqrt{1+(\partial_2\mathcal{T})^2}} \ , T_{x^2i}=0 \ , i=1,\dots,p-2 \ , \nonumber \\ T_{ij}=-\delta_{ij}\tau_p V(T,2)V(\mathcal{T},k) \sqrt{(1+(\partial_2\mathcal{T})^2) (1+(\partial_1T)^2)} \ , i,j=1,2,\dots,p-2 \ . \nonumber \\ \end{eqnarray} Now the conservation of the stress energy tensor implies \begin{equation} \partial_{x^1}T_{x^1x^1}=0 \ , \partial_{x^2}T_{x^2x^2}=0 \ . \end{equation} In other words, $T_{x^1x^1}$ does not depend on $x^1$ and $T_{x^2x^2}$ does not depend on $x^2$.
Since $V(T,2)\rightarrow 0$ for $x^1\rightarrow \infty$, using the same arguments as in the case given above we get that $T_{x^1x^1}$ is equal to zero for all $x^1$. In the same way one can argue that $T_{x^2x^2}=0$ for all $x^2$. Then the ansatz (\ref{Tan}) has the following physical interpretation: The condensation of the tachyon field $T$ leads to the emergence of a BPS D(p-1)-brane localised at the point $x^1=0$ on the worldvolume of the non-BPS Dp-brane. Then the next condensation of the field $\mathcal{T}$ describes a D(p-1)-brane that for $x^2<0$ sits on the worldvolume of the NS5-branes at $y=0$, at the point $x^2=0$ wraps the transverse circle back to $y=2\pi R$, and then sits on the worldvolume of the NS5-branes for $x^2>0$. In other words, this tachyon condensation leads to a BPS D(p-1)-brane that wraps the transverse circle. As is well known, such a configuration is stable, as opposed to the case of the BPS D-brane whose worldvolume is parallel to the worldvolume of the fivebranes. In order to further support this picture we will calculate the stress energy tensor corresponding to this D(p-1)-brane \begin{eqnarray} T_{\alpha\beta}^{D(p-1)}= \int dx^1dx^2 T_{\alpha\beta}= -\eta_{\alpha\beta}\tau_p\int dx^1 V(T,2)\sqrt{ 1+T'^2}\int dx^2V(\mathcal{T},k)\sqrt{1+\mathcal{T}'^2}= \nonumber \\ =-\eta_{\alpha\beta}\tau_p\int V(y^1,2)dy^1 \int V(y^2,k)dy^2 \ , y^i=f_i(a_ix^i) \ , i=1,2 \ , \nonumber \\ \end{eqnarray} where $\alpha\ , \beta=0,1,\dots,p-2$.
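The two factorized integrals evaluate to $\int dy\,V(y,m)=\pi\sqrt{m}$, and combining this with $\tau_p=2T^{non}_pR/(\sqrt{k}g_s)$, $T^{non}_p=\sqrt{2}\,T^{BPS}_p$ and $T^{BPS}_{p-1}=2\pi T^{BPS}_p$ (in units $\alpha'=1$) gives the wrapped D(p-1)-brane tension. A numerical sketch (Python; the values of $R$, $g_s$, $k$, $T^{BPS}_p$ are purely illustrative):

```python
import math

R, g_s, k, T_bps_p = 0.7, 0.2, 3.0, 1.0            # illustrative values, alpha' = 1

def sech_int(m, L=40.0, n=80_000):                 # numerical int dy / cosh(y / sqrt(m))
    dx = 2.0 * L / n
    return sum(dx / math.cosh((-L + (i + 0.5) * dx) / math.sqrt(m)) for i in range(n))

T_non_p = math.sqrt(2.0) * T_bps_p                 # T^non_p = sqrt(2) T^BPS_p
tau_p = 2.0 * T_non_p * R / (math.sqrt(k) * g_s)

total = tau_p * sech_int(2.0) * sech_int(k)        # tau_p int V(y^1,2) dy^1 int V(y^2,k) dy^2
expected = (2.0 * math.pi * T_bps_p) * 2.0 * math.pi * R / g_s   # T^BPS_{p-1} 2 pi R / g_s
```

All factors of $\sqrt{2}$ and $\sqrt{k}$ cancel between $\tau_p$ and the two sech integrals, leaving exactly the tension of a BPS D(p-1)-brane wrapping a circle of radius $R$.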
Since in the limit $a_i\rightarrow \infty$ the tachyon potential vanishes almost everywhere except at the point $x^i=0$, we can write the resulting stress energy tensor as \begin{equation} T_{\alpha\beta}=-\eta_{\alpha\beta} \delta(x^1)\delta(x^2)T_{p-1} \end{equation} where \begin{eqnarray} T_{p-1}=\tau_p\int dy^1\frac{1}{\cosh\frac{y^1}{ \sqrt{2}}}\int dy^2\frac{1}{\cosh\frac{y^2}{\sqrt{k}}} =\frac{T_{p-1}^{BPS}}{g_s}2\pi R \nonumber \\ \end{eqnarray} which is exactly the energy of a BPS D(p-1)-brane wrapped around the circle of radius $R$. Now we would like to determine the effective action for the translation zero modes of this solution, following the analysis performed in \cite{Sen:2003tm}. For that reason we consider the ansatz for $T$ and $\mathcal{T}$ \begin{equation} T(x^1,\xi)=f_1(a_1(x^1-t^1(\xi))) \ , \mathcal{T}(x^2,\xi)=f_2(a_2(x^2-t^2(\xi))) \ , \end{equation} where we have denoted by $\xi^{\alpha} \ , \alpha=0,1,\dots,p-2$ the coordinates tangential to the kink worldvolume. For such a configuration we get \begin{eqnarray} \mathbf{A}_{x^1x^1}=1+a_1^2f_1'^2 \ , \mathbf{A}_{x^2x^2}=1+a_2^2f_2'^2 \ , \nonumber \\ \mathbf{A}_{\alpha x^1}=\mathbf{A}_{x^1\alpha}= -a^2_1f'^2_1\partial_{\alpha}t^1 \ , \mathbf{A}_{\alpha x^2}=\mathbf{A}_{x^2\alpha}= -a_2^2f'^2_2\partial_{\alpha}t^2 \ , \nonumber \\ \mathbf{A}_{\alpha\beta}=(a_1^2f'^2_1-1) \partial_{\alpha}t^1\partial_{\beta}t^1+ (a_2^2f'^2_2-1) \partial_{\alpha}t^2\partial_{\beta}t^2+ \mathbf{a}_{\alpha\beta} \ , \nonumber \\ \mathbf{a}_{\alpha\beta}= \eta_{\alpha\beta}+\partial_{\alpha}t^i \partial_{\beta}t^i \ .\nonumber \\ \end{eqnarray} Let us now define the following matrices \begin{eqnarray} \hat{\mathbf{A}}_{\mu\beta}= \mathbf{A}_{\mu\beta}+ \mathbf{A}_{\mu x^1}\partial_{\beta}t^1+ \mathbf{A}_{\mu x^2}\partial_{\beta}t^2, \ \hat{\mathbf{A}}_{\mu x^1}=\mathbf{A}_{\mu x^1}, \ \hat{\mathbf{A}}_{\mu x^2}=\mathbf{A}_{\mu x^2} \ , \nonumber \\ \tilde{\mathbf{A}}_{\alpha \nu}= \hat{\mathbf{A}}_{\alpha\nu}+
\hat{\mathbf{A}}_{x^1\nu} \partial_{\alpha}t^1+ \hat{\mathbf{A}}_{x^2\nu}\partial_{\alpha}t^2, \ \tilde{\mathbf{A}}_{x^1 \nu}=\hat{\mathbf{A}}_{x^1\nu}, \ \tilde{\mathbf{A}}_{x^2 \nu}=\hat{\mathbf{A}}_{x^2\nu} \ \nonumber \\ \end{eqnarray} that obey \begin{equation} \det\hat{\mathbf{A}}=\det\tilde{\mathbf{A}}= \det \mathbf{A} \ . \end{equation} On the other hand the explicit calculation gives \begin{eqnarray} \tilde{\mathbf{A}}_{\alpha\beta}= \mathbf{a}_{\alpha \beta} \ , \tilde{\mathbf{A}}_{x^1\alpha} =\tilde{\mathbf{A}}_{\alpha x^1}= \partial_{\alpha}t^1 \ , \tilde{\mathbf{A}}_{x^2\alpha}= \tilde{\mathbf{A}}_{\alpha x^2}= \partial_{\alpha}t^2 \ , \nonumber \\ \tilde{\mathbf{A}}_{x^1x^1}=1+a^2_1f'^2_1 \ , \tilde{\mathbf{A}}_{x^2x^2}=1+a^2_2f'^2_2 \ \nonumber \\ \end{eqnarray} and hence the determinant $\det \tilde{\mathbf{A}}$ for large $a_1, a_2$ takes the form \begin{equation} \det\tilde{\mathbf{A}}=a^2_1f_1'^2a^2_2f'^2_2 \left[\det \mathbf{a}_{\alpha\beta} +\mathcal{O}(\frac{1}{a_1^2}) +\mathcal{O}(\frac{1}{a_2^2})\right] \ . \end{equation} Substituting this expression into the effective action we get \begin{eqnarray} S=-\tau_p\int d^{p-1}\xi\sqrt{-\det \mathbf{a}}\int dx^1dx^2 V(f_1(a_1x^1),2)V(f_2(a_2x^2),k)a_1f'_1(a_1x^1) a_2f'_2(a_2x^2)=\nonumber \\ =-\frac{T^{BPS}_{p-1}2\pi R}{g_s} \int d^{p-1}\xi\sqrt{-\det \mathbf{a}} \nonumber \\ \end{eqnarray} which is the correct form of the action for the zero modes describing transverse fluctuations of a BPS D(p-1)-brane that wraps the circle of radius $R$. \section{Conclusion}\label{fourth} In this paper we have studied the dynamics of a non-BPS Dp-brane in the background of $k$ $NS5$-branes on transverse $R^3\times S^1$, following the paper \cite{Kutasov:2004ct}, where the dynamics of a BPS Dp-brane in this background was discussed.
The main motivation was to clarify the relation between the true tachyon mode on the worldvolume of a non-BPS Dp-brane, which expresses an instability of this object even in a flat spacetime background, and the new tachyon field that arises from the redefinition of the mode that describes the position of the non-BPS D-brane on a transverse circle $S^1$. We have found that a non-BPS Dp-brane in this background is invariant under the exchange of $T$ with $\mathcal{T}$ on condition that the numerical factors in the tachyon potentials are exchanged as well. This fact is rather puzzling since, while $k$ is the number of $NS5$-branes and hence has a clear physical meaning, the factor $2$ in the tachyon potential $V$ does not have such a clear physical interpretation. In the same way it is not clear whether the extended symmetry between $T,\mathcal{T}$ and the parameters in the tachyon potentials that was discussed in section (\ref{second}) has some physical meaning or whether it is a pure coincidence. On the other hand, we believe that the idea that the tachyon on the worldvolume of a non-BPS Dp-brane could have a geometrical origin is very intriguing and certainly deserves to be investigated further. Then in section (\ref{third}) we studied the solutions of the tachyon effective action in the background defined above. For a spatially dependent tachyon $T$ and a time dependent tachyon $\mathcal{T}$ the solution describes a collection of D(p-1)-branes and D(p-1)-antibranes that move towards the worldvolume of the fivebranes. On the other hand, for a time dependent $T$ and a spatially dependent $\mathcal{T}$ we have considered the solution that corresponds to the emergence of a non-BPS D(p-1)-brane that wraps the transverse circle and subsequently annihilates in the process of time dependent tachyon condensation. And finally, we have constructed solutions where both $T$ and $\mathcal{T}$ were spatially dependent.
We have then argued that this configuration describes a BPS D(p-1)-brane wrapped around the transverse circle $S^1$. We hope that this modest contribution to the study of the dynamics of BPS and non-BPS D-branes in the NS5-brane background could be helpful for further research on this very interesting subject and for a better understanding of the role of the tachyon in string theory. \\ \\ {\bf Acknowledgement} This work was supported by the Czech Ministry of Education under Contract No. 14310006.
\section{Introduction} \label{sec:intro} Since the development of the Princeton WordNet (PWN) and its successful application to computational linguistics and information retrieval \citep{Fellbaum:98}, there have been many efforts to extend it to other languages and improve its synset relations and sense associations. Doing this by hand is difficult and resource-intensive, making automated methods desirable. However, these are often tailored to a specific language structure or depend heavily on resource availability, complicating application to many languages. We develop an unsupervised approach for synset representation and word-sense induction and apply it to automated Wordnet construction for French and Russian. The method requires only an unannotated corpus in the target language and machine translation (MT) between that language and English. The basis of our work is the use of {\em word embeddings}: representations of words as vectors, typically real and low-dimensional \citep{Turney:10}. Although many vector representations of synsets have been proposed, most already depend on Wordnet, limiting their use for building it in new languages. We instead represent translated synset information directly using recent work on document representations \citep{sentence}. We also apply a method for linear algebraic word-sense induction (WSI) to develop a sense-clustering procedure that can be further used to improve Wordnet construction \citep{polysemy}. Our further contribution is the application of these representations to the {\em extend} approach for automated Wordnet construction \citep{Vossen:98}. This framework assumes that synset relations are invariant across languages and generates a set of candidate synsets for each word $w$ in the target language by using a set of English translations of $w$ to query PWN (we refer to this as MT+PWN). As the number of candidate synsets produced may be quite large, we need to select those synsets that are its appropriate senses. 
Here a simple word embedding approach is to use a cutoff on the average similarity between a word and the synset's lemmas. We find that using our synset representations improves greatly upon this baseline and outperforms other language-independent methods as well as language-specific approaches such as WOLF, the French Wordnet used by the Natural Language ToolKit \citep{wolf,omw,nltk}. A further contribution is two 600-word test sets in French and Russian that are the largest and most comprehensive available, containing 200 each of nouns, verbs, and adjectives. We construct them by presenting native speakers with all candidate synsets produced by MT+PWN and treating the senses picked as ``ground truth'' for measuring accuracy. Besides their size, our data sets also have the advantage of being separated by part-of-speech (POS), making evident differences in performance across POS. With these test sets, we hope to address the difficulties in evaluating non-English Wordnets that arise from the use of different and unreported data, incompatible metrics (e.g. matching synsets vs. retrieving synset lemmas), and differing cross-lingual dictionaries. \section{Related Work} \label{sec:related} Much past work on automated Wordnets has focused on language-specific approaches --- using resources or properties specific to a language or language family. Efforts for Korean \citep{Lee:00}, French \citep{wolf,wonef}, and Persian \citep{Montazery:10} have found success in using bilingual corpora, expert knowledge, or Wordnets in related languages on top of an MT+PWN step. We compare to the Wordnet Libre du Fran\c{c}ais (WOLF), which leverages multiple European Wordnets \citep{wolf}; in our evaluation an embedding method outperforms this approach in F-score while having far fewer resource requirements.
Wordnet du Fran\c{c}ais (WoNeF), an extension of WOLF that combined linguistic models via a voting scheme \citep{wonef}, was found to have performance generally below WOLF's, so we compare to the earlier database. There have also been recent vector approaches for Wordnet construction, specifically for an Arabic Wordnet \citep{AlTarouti:16} and a Bengali Wordnet \citep{Nasiruddin:14}. The small size of these Wordnets (below 1000 synsets for high-F-score versions) underscores the difficulty of extracting sense information from unsupervised representations. In particular, we found that stronger sense-induction methods than those presented in \cite{Nasiruddin:14}, specifically sparse coding, were needed to distinguish word-senses well. Another approach is to leverage and expand upon existing resources. Two multi-lingual Wordnets thus constructed are the Extended Open Multilingual Wordnet (OMW) \citep{omw}, which scraped Wiktionary, and the Universal Multilingual Wordnet (UWN) \citep{uwn}, which used multiple translations to match word-senses. Through evaluation we found that this approach leads to high-precision but low-recall Wordnets. This method is also used for BabelNet, which extends Wordnet and Wikipedia \citep{Navigli:12}. Existing ontologies are also frequently used for sense representations; these include efforts using Wordnet \citep{Rothe:15} and BabelNet \citep{Iacobacci:15}. This approach often uses unsupervised embeddings for initialization and attains state-of-the-art results on standard NLP tasks \citep{CamachoCollados:16}. However, such representations depend on existing ontologies and so are difficult to apply to Wordnet construction. We instead use unsupervised embeddings, shown empirically \citep{Mikolov:13} and under a generative model \citep{randwalk} to recover word similarity and analogies from word co-occurrences. We use the latter paper's Squared-Norm (SN) vectors, which are similar in form to GloVe \citep{Pennington:14}.
\section{Distributed Synset Representation} \label{sec:synsetrep} \begin{figure*}[ht!] \centering \includegraphics[width=105mm]{threshold-procedure.pdf} \caption{ The score-threshold procedure for French word $w=$ {\em dalle} (flagstone, slab). Candidate synsets generated by MT+PWN are given a score and matched to $w$ if the score is above a threshold $\alpha$. } \label{fig:threshold} \end{figure*} We introduce an unsupervised method for representing PWN synsets in non-English languages needing only a large corpus and machine translation. For a vocabulary $V$ of target language words with $d$-dimensional\footnote{$d\ll|V|$, e.g. $d=300$ for vocabulary size $|V|=50000$.} unit vectors $v_w\in\mathbb{R}^d$, the representation of a synset $S$ will also be a vector $u_S\in\mathbb{R}^d$. The construction of $u_S$ will be motivated by the following {\em score-threshold} procedure, illustrated in Figure \ref{fig:threshold}, for automated Wordnet construction. Given a target word $w$, we use a bilingual dictionary to get its translations in English and let its set of candidate synsets be all PWN senses of the translations (MT+PWN). We then assign a score $u_S\cdot v_w$ to each $S$ and accept as correct all synsets with score above a threshold $\alpha$; if no synset is above the cutoff we accept only the highest-scoring synset. Thus we want synset representations $u_S$ whose inner product with $v_w$ is high if $S$ is a matching synset of $w$ and low otherwise. We present a simple baseline representation and then a more involved approach using embeddings of glosses. \subsection{Baseline: Average Embedding} \label{subsec:baseline} Given a candidate synset $S$, define $T_S\subset V$ as the set of translations of its lemmas from English to the target language. 
Then represent $S$ as $$u_S=\frac{1}{|T_S|}\sum_{w\in T_S}v_w$$ In this case the synset score in the score-threshold procedure is equivalent to the average cosine similarity between $w$ and the translations of the lemmas of $S$. Although straightforward, this representation is quite noisy and does not use all synset information provided by PWN. \subsection{Synset Representation Method} \label{subsec:synsetrep} We now add synset relation and gloss information into the representation $u_S$. Recalling the set $T_S$ of translations of lemmas of synset $S$, define $R_S$ to be the union over all synsets $S'$ related to $S$ of the lemma-translation sets $T_{S'}$. Then the {\em lemma embedding} and {\em related-synset embedding} of $S$ are (before normalization) the element-wise sums $$v_{T_S}^{(SUM)}=\sum\limits_{w\in T_S}v_w\qquad\textrm{and}\qquad v_{R_S}^{(SUM)}=\sum\limits_{w\in R_S}v_w$$ While both gloss translations and the translated lemmas have mistakes from translation noise and polysemy, glosses also have irrelevant words (both stopwords and otherwise). As we would like to downweight these, we use the sentence embedding formula of \cite{sentence}, a {\em smooth inverse frequency} (SIF) weighted average. Given a list $L$ of words $w\in V$ with corpus frequency $f_w$, the SIF-embedding is $$v_L^{(SIF)}=\sum\limits_{w\in L}\frac{a}{a+f_w}v_w$$ where $a$ is a parameter (commonly set to $10^{-4}$). Note that the weight $\frac{a}{a+f_w}$ is low for high-frequency words and so is similar to TF-IDF \citep{Salton:88}. Through experiments with word-synset matching data, we found that simple sums work for representing the lemmas and related synsets of $S$ but SIF-embeddings are better for gloss representations.
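The SIF-weighted average can be sketched in a few lines of code. The toy vectors and frequencies below are illustrative stand-ins (not from the paper's corpus) for the SN embeddings and Wikipedia word frequencies actually used:

```python
import numpy as np

def sif_embedding(words, vectors, freq, a=1e-4):
    """Smooth inverse frequency (SIF) average: each word vector is
    weighted by a/(a + f_w), down-weighting frequent (stopword-like)
    words much as TF-IDF does."""
    dim = len(next(iter(vectors.values())))
    v = np.zeros(dim)
    for w in words:
        if w in vectors:  # out-of-vocabulary words are skipped
            v += (a / (a + freq[w])) * np.asarray(vectors[w])
    return v

# A frequent word ("the") contributes far less than a rare one ("flagstone").
vecs = {"the": [1.0, 0.0], "flagstone": [0.0, 1.0]}
freq = {"the": 5e-2, "flagstone": 1e-5}
g = sif_embedding(["the", "flagstone"], vecs, freq)
```

With these toy frequencies the rare word's weight is $10^{-4}/(10^{-4}+10^{-5})\approx 0.91$ while the stopword's is about $0.002$, so the gloss vector is dominated by content words.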
Defining $D_S$ to be the translated synset definition of $S$ and $\mathcal{E}_S$ to be the set of translated example sentences of $S$, we set the {\em definition embedding} of $S$ to be $\hat{v}_{D_S}^{(SIF)}$ and the {\em example-sentence embedding} to be $$\frac{1}{|\mathcal{E}_S|}\sum\limits_{E\in\mathcal{E}_S}\hat{v}_E^{(SIF)}$$ The representation $u_S$ of synset $S$ is then an average of all four (lemma, related-synset, definition, example-sentence\footnote{If $S$ has no example sentences this is not included in the average.}) of these embeddings. \section{Cluster-Based Sense Representation} \label{sec:senserep} The representations above work well for automated Wordnets but make no use of the polysemous structure found to be encoded in embeddings themselves by \cite{polysemy}. Here we describe their {\em Linear Word-Sense Induction} (Linear-WSI) model and introduce a {\em sense purification} procedure to represent each induced sense as a word-cluster. Finally, we discuss an application to PWN sense clustering. We again assume a vocabulary $V$ with each word $w$ represented by a unit vector $v_w\in\mathbb{R}^d$. \subsection{Summary of Linear-WSI Model} \label{subsec:wsi} \cite{polysemy} posit that a vector of a word can be linearly decomposed into vectors associated to its senses. Thus $w=$ {\em tie} --- which can be an article of clothing, a drawn match, and so on --- would be $v_w\approx av_{w\textrm{-clothing}}+bv_{w\textrm{-match}}+\dots$ for $a,b\in\mathbb{R}$. Learning such fine-grained sense-vectors is difficult\footnote{Indeed this is a standard criticism of unsupervised approaches to WSI.}, but one expects some words to have related sense-vectors, e.g. the vector $v_{\textrm{tie-clothing}}$ would be close to the vector $v_{\textrm{bow-clothing}}$. 
Thus Linear-WSI hypothesizes that for $k>d$ there exist unit basis vectors, or {\em atoms}, $a_1,\dots,a_k\in\mathbb{R}^d$ such that $\forall~w\in V$ \begin{equation} \label{eqn:wsi} v_w=\sum\limits_{i=1}^kR_{w,i}a_i+\eta_w \end{equation} where $\eta_w$ is a noise vector and at most $s$ coefficients $R_{w,i}$ are nonzero. $v_w$ is thus modeled as a sparse linear combination of $s$ vectors $a_i$, with the hope that the sense-vectors $v_{\textrm{tie-clothing}}$ and $v_{\textrm{bow-clothing}}$ are both close to a clothing-related atom $a_i$. (\ref{eqn:wsi}) is an instance of the classic signal-processing problem of {\em sparse coding}, whose solution can be approximated by K-SVD \citep{Aharon:06}. For $k=2000$ and $s=5$ \cite{polysemy} report that the solution represents English word-senses as well as a competent non-native speaker and significantly better than clustering methods for WSI. \subsection{Sense Purification} \label{subsec:purification} Though effective for WSI, the model produces comparatively few senses $a_i$ relative to the total number of synsets in WordNet; indeed, if $k$ is set to be more than a few thousand the senses become repetitive. For finer-grained representations we develop a {\em sense purification} procedure that views each sense as a pair $(w,a_i)$, where $a_i$ is an atom s.t. $R_{w,i}>0$, and represents it as a cluster of words $C\subset V$. For each word-sense pair $(w,a_i)$, sense purification finds a cluster $C$ of words whose embeddings are close to each other, to $v_w$, and to $a_i$. The hope is that these words are used in contexts of $w$ in which the sense used is $a_i$.
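The sparse-coding step of Equation (\ref{eqn:wsi}), recovering at most $s$ nonzero coefficients $R_{w,i}$ for a fixed set of atoms, can be approximated greedily by orthogonal matching pursuit; K-SVD alternates such a step with updates of the atoms themselves. A minimal numpy sketch, in which the identity dictionary is a toy stand-in for the $k=2000$ learned atoms:

```python
import numpy as np

def omp(v, atoms, s):
    """Greedy sparse coding (orthogonal matching pursuit): approximate v
    as a combination of at most s rows of `atoms`, as in the Linear-WSI
    decomposition v_w = sum_i R_{w,i} a_i + eta_w."""
    residual, support = v.astype(float).copy(), []
    coefs = np.zeros(0)
    for _ in range(s):
        # pick the atom most correlated with the current residual
        i = int(np.argmax(np.abs(atoms @ residual)))
        if i not in support:
            support.append(i)
        # re-fit coefficients on all selected atoms by least squares
        A = atoms[support].T
        coefs, *_ = np.linalg.lstsq(A, v, rcond=None)
        residual = v - A @ coefs
    R = np.zeros(len(atoms))
    R[support] = coefs
    return R

# A vector built from two atoms is recovered exactly with s = 2.
R = omp(np.array([0.7, 0.0, 0.3, 0.0]), np.eye(4), s=2)
```

The re-fitting step distinguishes orthogonal matching pursuit from plain matching pursuit: coefficients of previously chosen atoms are adjusted whenever a new atom enters the support.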
Explicitly, given a word $w$, one of its senses $a_i$, and a fixed set-size $n$, we find $C$ by solving: \begin{align} \label{eqn:objective} \begin{split} \maximize\limits_{C\subset V',C\ni w,|C|=n}&\qquad\gamma\\ \textrm{subject to}&\qquad\gamma\le\textrm{Median}\{v_x\cdot v_{w'}:w'\in C\backslash\{x\}\}~\forall~x\in C\\ &\qquad\gamma\le\textrm{Median}\{a_i\cdot v_{w'}:w'\in C\} \end{split} \end{align} The constraints on the objective ensure that in order to maximize it the words $w'\in C$ must have high average cosine similarity with each other, with $w$, and with $a_i$. For computational purposes we find $C$ approximately using a greedy algorithm that starts with $C=\{w\}$ and repeatedly adds the word $w'\in V'\backslash C$ that results in the highest objective value $\gamma$ for the new cluster. Processing time is further reduced by restricting the search-space $V'$ to the subset of words in $V$ whose embeddings have cosine similarity of at least $.2$ with both $v_w$ and $a_i$. A depiction of the senses recovered via sense-purification is shown in Figure \ref{fig:isomap}. Despite the difficulty of recovering small sense distinctions with distributional algorithms (partly due to Zipf's Law holding for word-senses), the algorithm is still able to distinguish very fine differences such as the TV station Fox News vs. the film corporation 20th Century Fox. \begin{figure*}[ht!] \centering \includegraphics[width=50mm]{brilliant-isomap5.pdf} \hspace{5mm} \includegraphics[width=50mm]{fox-isomap5.pdf} \includegraphics[width=50mm]{ru-isomap5.pdf} \caption{ Isometric mapping of sense-cluster vectors for $w=$ {\em brilliant}, {\em fox}, and {\em\selectlanguage{russian}лук\selectlanguage{english}} (bow, onion). $w$ is marked by a star and each sense $a_i$ of $w$, shown by a large marker, has an associated cluster of words with the same marker shape. Contours are densities of vectors close to $w$ and at least one sense $a_i$.
Note how correct senses are recovered across POS and languages and for both proper and common noun senses.} \label{fig:isomap} \end{figure*} \subsection{Synset Clusters and Sense Clustering} \label{subsec:clustering} Thus far our work with word-senses has been entirely unsupervised, based only upon the polysemous structure of word embeddings. We now consider an application of Linear-WSI and sense purification to the problem of {\em sense-clustering} --- reducing the granularity of Wordnet's sense distinctions by merging a word's closely related senses. This is a well-studied but difficult problem in NLP that is useful for applications requiring a much coarser set of senses for each word than that provided by PWN \citep{Agirre:03,Snow:07}. To define our approach, we first specify a cluster similarity metric and a method for finding synset-atoms/synset-clusters for each word-synset pair $w,S$ using the atoms in the sparse representation of $v_w$. This similarity condition (\ref{eqn:similar}) and the synset-atom/synset-cluster pairs $(a_S,C_S)$ will also be useful in improving Wordnet construction performance in the next section. First, we take any two word-clusters $C_1,C_2\subset V$ and define a {\em cluster similarity function} $$\rho(C_1,C_2)=\textrm{Median}\{v_x\cdot v_y:x\in C_1,y\in C_2\}$$ We then declare $C_1$ and $C_2$ to be {\em similar} if \begin{equation} \label{eqn:similar} \rho(C_1,C_2)\ge\min\{\rho(C_1,C_1),\rho(C_2,C_2)\} \end{equation} i.e. if their cluster similarity with each other is at least the smaller of their two self-similarities. Next, given a synset $S$ we define the set $V_S\subset V$ to be the union of all sets of translated lemmas of synsets related to $S$. Then for any word-synset pair $w,S$ we let their {\em synset-atom} be the sense $a_i$ from all $a_i$ s.t.
$R_{w,i}>0$ for which sense-purification using $V'=V_S$ as the search-space produces the {\em synset-cluster} $C_S$ with maximal objective value (see Equation \ref{eqn:objective}). This can be done by running purification on each atom and choosing the best resulting cluster. As formalized in Algorithm \ref{alg:clustering}, the sense-clustering algorithm merges synsets that share a sense $a_i$ in the sparse representation of $v_w$ and whose clusters share similar words. Here the atoms $a_i$ s.t. $R_{w,i}>0$ represent the coarse set of senses of $w$ and each synset $S$ of $w$ is assumed to be related to one of them; therefore merging synsets sharing an atom clusters those synsets together. \begin{algorithm}[H] \label{alg:clustering} \SetAlgoLined \SetAlgoNoEnd \KwData{$w\in V$, its PWN synsets $\mathcal{S}$, and atoms $a_i$ s.t. $R_{w,i}>0$} \For{candidate synset pairs $(S,S')\in\mathcal{S}\times\mathcal{S}$}{ compute synset-atoms $a_S,a_{S'}$ and synset-clusters $C_S,C_{S'}$\\ \If{$a_S=a_{S'}$ and $C_S,C_{S'}$ are similar (\ref{eqn:similar})}{merge the senses of $w$ associated with synsets $S$ and $S'$}} \caption{Sense Clustering} \end{algorithm} \section{Methods for Automated Wordnet Construction} \label{sec:wordnet} Our basis for automated Wordnet construction is the score-threshold procedure described in Section \ref{sec:synsetrep}, where a candidate synset $S$ is matched to a word $w$ if $u_S\cdot v_w\ge\alpha$ for synset representation $u_S$ and a threshold $\alpha$. The representation described in Section \ref{subsec:synsetrep} performs well compared to previous methods and our baseline; however, through examination we identified two cases in which the method performs poorly: \begin{enumerate} \item $w$ has no candidate synset $S$ with score $u_S\cdot v_w$ that clears the score-threshold $\alpha$. \item $w$ has multiple closely related synsets that are all correct matches but some have a much lower score than the others. 
\end{enumerate} In this section we discuss how to improve performance in these cases by addressing a cause of noise in representing synset $S$ in the target language --- that due to polysemy many translated lemmas of $S$ and related synsets are irrelevant. As seen before, sense-purification addresses a similar problem of Linear-WSI --- that each sense $a_i$ has too many related words --- by extracting a cluster of words related to both $w$ and $a_i$. Thus synset clusters produced via purification as in Section \ref{subsec:clustering} may also lead to more useful representations of synsets than simply $u_S$. Previously, given a word $w\in V$ we constructed a synset cluster $C_S$ and associated sense $a_S$ by using $V_S$, the union of the sets of all lemmas of synsets related to $S$, as the search-space $V'$ in sense-purification. Since we now want synset clusters in the target vocabulary, we simply replace $V_S$ with its translations. Then for each candidate synset $S$ of $w$ we obtain an associated sense $a_S$ and cluster $C_S$ as in Section \ref{subsec:clustering}. \begin{table*}[ht] \centering \begin{tabular}{@{}lll@{}} Synset & $a_S$ & $C_S$ \\ \toprule flag.n.01 & $a_{789}$ & poteau (goalpost), flèche (arrow), $\dots$ \\ flag.n.04 & $a_{892}$ & flamme (flame), fanion (pennant), $\dots$ \\ {\bf flag.n.06} & $a_{892}$ & dallage (paving), carrelage (tiling), $\dots$ \\ flag.n.07 & $a_{1556}$ & pan (section), empennage, queue, tail, $\dots$ \\ iris.n.01 & $a_{1556}$ & bœuf (beef), usine (factory), plante $\dots$ \\ masthead.n.01 & $a_{1556}$ & inscription, lettre (letter), $\dots$ \\ pin.n.08 & $a_{1556}$ & trou (hole), tertre (mound), marais $\dots$ \\ {\bf slab.n.01} & $a_{892}$ & carrelage (tiling), carreau (tile) $\dots$ \\ \bottomrule \end{tabular} \caption{\label{tbl:clusters} Synset-atoms $a_S$ and clusters $C_S$ of {\em dalle} (flagstone, slab).
The correct synsets (bold) have $C_S$ more related to their meaning.} \end{table*} \subsection{A Better Threshold Using the Purification Objective} \label{subsec:better} The first failure case of the score-threshold procedure --- no candidate synset scores above the cutoff $\alpha$ --- often occurs when synsets have little information in their glosses. Letting $f(C):2^V\to[0,1]$ be the objective function in Equation \ref{eqn:objective}, the synset-clusters $C_S$ obtained as above allow $f(C_S)$ to be used as another measure of the relevance of $S$ to $w$, as an incorrect candidate synset $S$ likely has fewer translated related lemmas sharing a context with $w$ to put in the search-space $V_S$ for sense-purification and thus a lower objective value. To exploit this, define $S^*=\arg\max f(C_S)$ as the synset whose cluster has maximal objective value. Then replace $\alpha$ by a new cutoff $\alpha_w=\min\{\alpha,u_{S^*}\cdot v_w\}$ and match all candidate synsets $S$ with score $u_S\cdot v_w\ge\alpha_w$. This ensures that if no synset's score is above $\alpha$, the synset $S^*$ with the best synset-cluster is matched to $w$ and, if $w$ is polysemous, so are any candidate synsets $S$ of $w$ with score $u_S\cdot v_w\ge u_{S^*}\cdot v_w$. \subsection{Recovering Similar Synsets Using Synset Clusters} \label{subsec:recover} The second failure case of the score-threshold procedure --- many similar candidate synsets of $w$ are correct but some have scores below the cutoff --- occurs for words with fine sense distinctions. Thus we address the issue similarly to the sense clustering algorithm in Section \ref{subsec:clustering}. \begin{algorithm}[H] \label{alg:recover} \SetAlgoLined \SetAlgoNoEnd \KwData{$w\in V$, its PWN synsets $\mathcal{S}$, and atoms $a_i$ s.t. $R_{w,i}>0$} $\forall~S\in\mathcal{S}$ compute a synset-atom $a_S$ and a synset-cluster $C_S$\\ \For{each atom $a_i$}{ let $M_i\subset\mathcal{S}$ be candidates $S$ s.t.
$a_S=a_i$ and score $u_S\cdot v_w\ge\alpha_w$\\ \For{each $S$ s.t. $a_S=a_i$ and $\beta\le u_S\cdot v_w<\alpha_w$}{ \If{$C_S$ and $C_{S'}$ are similar (\ref{eqn:similar}) $\forall~S'\in M_i$}{match synset $S$ to word $w$}}} \caption{Synset Recovery} \end{algorithm} Given a word $w$, first run the score-threshold procedure with the modified cutoff $\alpha_w$. Then for a fixed low cutoff $\beta\le\alpha$ run Algorithm \ref{alg:recover}, allowing a candidate $S$ unmatched by the score-threshold procedure to be compared to the lower cutoff $\beta$ if its synset-atom $a_S$ is the same as that of a matched synset $S'$ and their clusters $C_S,C_{S'}$ are similar. This exploits the fact that similar synsets are likely associated with the same sense $a_i$ of $w$ and have similar words from which to construct their synset-clusters. The final improvement of the score-threshold method with a $w$-dependent cutoff $\alpha_w$ and sense-recovery is outlined in Figure \ref{fig:wsi}. \begin{figure*}[ht!] \centering \includegraphics[width=105mm]{wsi-procedure.pdf} \caption{ Score-threshold and sense-recovery procedures for French word $w=$ {\em dalle} (flagstone, slab). Candidates are matched to $w$ if their score clears a cutoff $\alpha_w$. If an unmatched synset shares a sense $a_i$ with a matched one, it is compared to a lower cutoff $\beta$ (sense-recovery). } \label{fig:wsi} \end{figure*} \section{Evaluation of Automated Wordnet Construction} \label{sec:results} We evaluate our methods by constructing automated French and Russian Wordnets. For word embeddings we train 300-dimensional SN vectors on $|V|\approx 50000$ words, restricted to those with at least 1000 occurrences, or with candidate PWN synsets and at least 100 occurrences, in the lemmatized Wikipedia corpus \citep{randwalk}. We use sparsity $s=4$ and basis-size $k=2000$ for Linear-WSI and set-size $n=5$ for sense-purification.
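The greedy sense-purification search of Section \ref{subsec:purification} can be sketched as follows; the two-dimensional toy vectors are illustrative stand-ins for the 300-dimensional SN embeddings, and the vector atom stands in for a learned atom $a_i$:

```python
import numpy as np

def gamma(C, vecs, atom):
    """Objective of sense purification: the tightest of the median
    cosine-similarity constraints over cluster members and the atom."""
    vals = [np.median([vecs[x] @ vecs[y] for y in C if y != x]) for x in C]
    vals.append(np.median([atom @ vecs[y] for y in C]))
    return min(vals)

def purify(w, atom, vecs, n=5, floor=0.2):
    """Greedily grow a cluster around (w, atom): start from {w} and keep
    adding the search-space word that maximizes the objective gamma."""
    search = [x for x in vecs
              if vecs[x] @ vecs[w] >= floor and vecs[x] @ atom >= floor]
    C = [w]
    while len(C) < n:
        cands = [x for x in search if x not in C]
        if not cands:
            break
        C.append(max(cands, key=lambda x: gamma(C + [x], vecs, atom)))
    return C

# Toy unit vectors: "slab" and "tile" share the paving sense of "dalle",
# while "flag" (banner sense) points elsewhere and is filtered out.
unit = lambda v: np.asarray(v) / np.linalg.norm(v)
vecs = {"dalle": unit([1, 0.1]), "slab": unit([1, 0.2]),
        "tile": unit([1, 0.0]), "flag": unit([0, 1.0])}
C = purify("dalle", unit([1, 0.1]), vecs, n=3)
```

The floor of $0.2$ mirrors the cosine-similarity restriction on the search-space; in the toy example it removes the banner-sense word before the greedy growth even begins.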
Translation dictionaries were built from Google and Microsoft Translate and the dictionary of the translation company ECTACO; Microsoft was used for the sentence-length MT needed for translating synset glosses. \subsection{Test Sets for Word-Synset Matching} \label{subsec:testsets} A natural method of evaluation is to use a manually constructed Wordnet as a source of ``ground truth'' senses. However, the ELRA French Wordnet\footnote{\url{http://catalog.elra.info/product_info.php?products_id=550}} is private, while Russian Wordnets are either too small and unlinked with PWN\footnote{\url{http://project.phil.spbu.ru/RussNet/}} or obtained by direct translation from PWN\footnote{\url{http://wordnet.ru/}}. We instead construct and release test sets for each language by randomly choosing 200 each of adjectives, nouns, and verbs from words whose English translations appear in the synsets of the {\em Core WordNet}, a semi-automatically selected set of about 5000 most-used synsets in PWN \citep{Fellbaum:98}. Choosing words from Core synset lemmas makes the evaluation more difficult since common words are more polysemous, with more synsets to retrieve; this is reflected in the lower performance of WOLF relative to that reported in \citep[Table 4]{wolf}. The ``ground truth'' senses are picked by native speakers asked to match synsets to a word given the set of candidate synsets generated by MT+PWN. For example, the French word {\em foie} has one translation, {\em liver}, with four PWN synsets: 1-``glandular organ''; 2-``liver used as meat''; 3-``person with a special life style''; 4-``someone living in a place.'' Only the first two align with senses of {\em foie}, so the expert marks those as correct and the others as incorrect. Two native speakers of each language were trained by a conversant author with knowledge of WordNet; the latter also resolved discrepancies.
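The MT+PWN candidate-generation step that produces the annotation candidates can be sketched as follows. The bilingual dictionary and sense inventory below are toy stand-ins mirroring the {\em foie} example; in the paper these are the Google/Microsoft/ECTACO dictionaries and the Princeton WordNet (e.g. via NLTK):

```python
# Toy stand-ins for the bilingual dictionary and the PWN sense inventory;
# the four synset ids for "liver" mirror the foie example in the text.
bilingual = {"foie": ["liver"]}
pwn = {"liver": ["liver.n.01",   # glandular organ
                 "liver.n.02",   # liver used as meat
                 "liver.n.03",   # person with a special life style
                 "liver.n.04"]}  # someone living in a place

def candidate_synsets(word):
    """All PWN senses of all English translations of `word` (MT+PWN)."""
    cands = []
    for eng in bilingual.get(word, []):
        for syn in pwn.get(eng, []):
            if syn not in cands:
                cands.append(syn)
    return cands

cands = candidate_synsets("foie")  # annotators mark each candidate good/bad
```

For {\em foie} the annotators would keep only the first two of the four candidates, and that per-word decision is exactly what the score-threshold procedure must reproduce automatically.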
We get 600 words and $\sim 12000$ candidate word-synset pairs in each language, with adjectives and nouns having on average about 15 candidates and verbs having about 30. This comprises a very large data set compared to previous efforts. Accuracy compared to this ground truth estimates how well an algorithm does compared to humans. One property of this test set is its dependence on the translation we use to get candidate synsets, which can leave out correct synset matches if they are not in the bilingual dictionaries. However, providing both correct and incorrect candidates allows future work to focus on selecting senses and not worry about finding the best dictionary. This dictionary-independent evaluation is an important feature since translation systems used by many authors are often not provided in full. When comparing our performance to previous work, we do not penalize word-synset matches in which the synset is not among the candidates generated for that word, reducing the loss of precision incurred by other methods due to the use of different dictionaries. We also do not penalize other Wordnets for test words they do not contain. In addition to precision and recall, we report {\em coverage} as the proportion of synsets in the Core WordNet that are matched to. While an imperfect metric given different sense usage by language, the synsets are universal-enough for it to be a good indicator of usability. \subsection{Experimental Results} \label{subsec:results} \begin{table*}[ht] \centering \begin{threeparttable} \fontsize{7pt}{8.4pt}\selectfont \begin{tabular}{@{}cccccccr@{}} Method & POS & $F_{.5}^\ast$ & $\textrm{Prec.}^\ast$ & $\textrm{Rec.}^\ast$ & Coverage & Synsets \\ \toprule \multirow{4}*{\specialcell{Wordnet Libre\\du Fran\c{c}ais\\(WOLF)\\{\tiny \citep{wolf}}}} & Adj. 
& 66.3 & 78.1 & 53.4 & 84.8 & 6865 \\ & Noun & 68.6 & 83.2 & 51.5 & 95.0 & 36667 \\ & Verb & 60.8 & 81.0 & 39.6 & 88.2 & 7671 \\ & Total & 65.2 & 80.8 & 48.2 & 92.2 & $\textrm{52757}^\dagger$ \\ \midrule \multirow{4}*{\specialcell{Universal Wordnet\\(UWN)\\{\tiny \citep{uwn}}}} & Adj. & 64.5 & 88.3 & 42.3 & 69.2 & 7407 \\ & Noun & 67.5 & 94.1 & 40.8 & 75.9 & 24670 \\ & Verb & 55.4 & 88.0 & 28.5 & 76.2 & 5624 \\ & Total & 62.5 & 90.1 & 37.2 & 75.0 & $\textrm{39497}^\dagger$ \\ \midrule \multirow{4}*{\specialcell{Extended Open\\Multilingual Wordnet\\(OMW)\\{\tiny \citep{omw}}}} & Adj. & 58.4 & {\bf 90.9} & 28.4 & 54.7 & 2689 \\ & Noun & 61.3 & {\bf 96.5} & 31.7 & 66.6 & 14936 \\ & Verb & 47.8 & {\bf 95.9} & 18.6 & 57.7 & 2331 \\ & Total & 55.9& {\bf 94.5} & 26.2 & 63.2 & $\textrm{20449}^\dagger$ \\ \midrule \multirow{4}*{\specialcell{Baseline:\\Average Similarity\\(Section~\ref{subsec:baseline})}} & Adj. & 62.8 & 65.3 & {\bf 68.5} & 88.7 & 9687 \\ & Noun & 67.3 & 71.6 & {\bf 69.0} & 92.2 & 37970 \\ & Verb & 51.8 & 55.9 & {\bf 57.0} & 83.5 & 10037 \\ & Total & 60.6 & 64.3 & {\bf 64.9} & 90.0 & $\textrm{58962}^\dagger$ \\ \midrule \multirow{4}*{\specialcell{Method 1:\\Synset Representation\\(Section~\ref{subsec:synsetrep})}} & Adj. & 65.9 & 75.9 & 59.5 & 85.1 & 8512 \\ & Noun & 71.0 & 78.7 & 69.1 & \bf 96.7 & 35663 \\ & Verb & 61.6 & 78.7 & 49.8 & 89.9 & 8619 \\ & Total & 66.2 & 77.8 & 59.5 & \bf 93.7 & $\textrm{53852}^\dagger$ \\ \midrule \multirow{4}*{\specialcell{Method 2:\\Synset Representation\\+ Linear-WSI\\(Section~\ref{sec:wordnet})}} & Adj. & {\bf 67.7} & 76.9 & 62.6 & \bf 91.2 & 8912 \\ & Noun & {\bf 73.0} & 83.7 & 62.0 & 90.9 & 34001 \\ & Verb & {\bf 64.4} & 79.3 & 51.5 & \bf 93.6 & 9262 \\ & Total & {\bf 68.4} & 80.0 & 58.7 & 91.5 & $\textrm{53208}^\dagger$ \\ \bottomrule \end{tabular} \begin{tablenotes} \item[$\ast$] Parameters tuned on a random-selected half of the data; evaluation done on the other half. 
All percentages are accurate within .2 with 95\% confidence. \item[$\dagger$] Includes adverb synsets using same parameter values ($\alpha$ and $\beta$) as for adjectives. \end{tablenotes} \end{threeparttable} \caption{\label{tbl:fr}French Wordnet Results} \end{table*} We report results in Tables~\ref{tbl:fr}~\&~\ref{tbl:ru}. Parameters $\alpha$ and $\beta$ are tuned to maximize micro-average $F_{.5}$-score $\frac{1.25\cdot\textrm{Precision}\cdot\textrm{Recall}}{.25\cdot\textrm{Precision}+\textrm{Recall}}$, used instead of $F_1$ to prioritize precision (often more important for applications). Our synset representations (Section~\ref{subsec:synsetrep}) outperform the baseline by 6\% in $F_{.5}$-score for French and 10\% for Russian; in French it is competitive with (WOLF) and in both it exceeds both multi-lingual Wordnets. Linear-WSI heuristics further improve $F_{.5}$-score by 1\% in Russian and 2\% for French, exceeding WOLF in $F_{.5}$-score across POS while having similar coverage. Notably, OMW consistently achieves best precision, although it and UWN have low recall and coverage. \begin{table*}[ht] \centering \begin{threeparttable} \fontsize{7pt}{8.4pt}\selectfont \begin{tabular}{@{}ccccccccrcc@{}} Method & POS & $F_{.5}^\ast$ & $\textrm{Prec.}^\ast$ & $\textrm{Rec.}^\ast$ & Coverage & Synsets \\ \toprule \multirow{4}*{\specialcell{Universal Wordnet\\(UWN)\\{\tiny \citep{uwn}}}} & Adj. & 52.4 & 80.3 & 29.6 & 51.0 & 11412 \\ & Noun & 65.0 & 87.5 & 45.1 & 71.1 & 19564 \\ & Verb & 48.1 & 74.8 & 25.7 & 65.0 & 3981 \\ & Total & 55.1 & 80.8 & 33.4 & 67.1 & $\textrm{30015}^\dagger$ \\ \midrule \multirow{4}*{\specialcell{Extended Open\\Multilingual Wordnet\\(OMW)\\{\tiny \citep{omw}}}} & Adj. 
& 58.7 & {\bf 91.7} & 29.2 & 55.3 & 2419 \\ & Noun & 67.8 & {\bf 93.5} & 42.5 & 68.4 & 14968 \\ & Verb & 51.1 & {\bf 84.5} & 23.9 & 56.6 & 2218 \\ & Total & 59.2 & {\bf 89.9} & 31.9 & 64.2 & $\textrm{19983}^\dagger$ \\ \midrule \multirow{4}*{\specialcell{Baseline:\\Average Similarity\\(Section~\ref{subsec:baseline})}} & Adj. & 61.4 & 60.9 & {\bf 77.3} & 92.1 & 10293 \\ & Noun & 55.9 & 59.9 & 59.9 & 77.0 & 32919 \\ & Verb & 46.3 & 49.0 & {\bf 55.1} & 84.1 & 9749 \\ & Total & 54.5 & 56.6 & {\bf 64.1} & 80.5 & $\textrm{54372}^\dagger$ \\ \midrule \multirow{4}*{\specialcell{Method 1:\\Synset Representation\\(Section~\ref{subsec:synsetrep})}} & Adj. & 69.5 & 78.1 & 61.7 & 84.2 & 8393 \\ & Noun & 69.8 & 77.6 & 66.0 & 85.2 & 29076 \\ & Verb & 54.2 & 63.3 & 57.4 & 91.2 & 8303 \\ & Total & 64.5 & 73.0 & 61.7 & 86.3 & $\textrm{46911}^\dagger$ \\ \midrule \multirow{4}*{\specialcell{Method 2:\\Synset Representation\\+ Linear-WSI\\(Section~\ref{sec:wordnet})}} & Adj. & {\bf 69.7} & 77.3 & 63.6 & \bf 93.3 & 9359 \\ & Noun & {\bf 71.6} & 78.1 & {\bf 68.0} & \bf 91.0 & 31699 \\ & Verb & {\bf 54.4} & 64.9 & 52.6 & \bf 91.9 & 8582 \\ & Total & {\bf 65.2} & 73.4 & 61.4 & \bf 91.5 & $\textrm{50850}^\dagger$ \\ \bottomrule \end{tabular} \begin{tablenotes} \item[$\ast$] Parameters tuned on a random-selected half of the data; evaluation done on the other half. All percentages are accurate within .2 with 95\% confidence. \item[$\dagger$] Includes adverb synsets using same parameter values ($\alpha$ and $\beta$) as for adjectives. \end{tablenotes} \end{threeparttable} \caption{\label{tbl:ru}Russian Wordnet Results} \end{table*} Across POS, we do best on nouns and worst on verbs, a standard result likely exacerbated in this case due to the greater polysemy of verbs. Comparing between languages, we see slightly better performance on Russian adjectives, slightly worse performance on Russian nouns, and much worse performance on Russian verbs. 
The latter can be explained by a difference in treating the reflexive case and aspectual variants due to the grammatical complexity of Russian verbs. In French, making a verb reflexive requires adding a word while in Russian the verb itself changes, e.g. {\em to wash}$\to${\em to wash oneself} is {\em laver}$\to${\em se laver} in French but {\em\selectlanguage{russian}мыть\selectlanguage{english}}$\to${\em\selectlanguage{russian}мыться\selectlanguage{english}} in Russian. Thus we do not distinguish them for French as the token is the same but for Russian we do, so both {\em\selectlanguage{russian}мыть\selectlanguage{english}} and {\em\selectlanguage{russian}мыться\selectlanguage{english}} may appear and have distinct synset matches. Matching Russian verbs is thus harder as the reflexive usage is often contextually similar to the non-reflexive usage. Aspectual verb pairs are another complication; for Russian, {\em to do} has aspects {\em\selectlanguage{russian}(делать, сделать)\selectlanguage{english}} that are treated as distinct while in French these are just tenses of {\em faire}. Overall the word embedding method seems robust to the language's closeness to English, with similar noun and adjective performance and a verb-performance discrepancy stemming from an intrinsic quality rather than language dissimilarity. Such a claim can be further examined by constructing Wordnets for non-European languages. \section{Conclusion} We have introduced unsupervised synset and sense representations via word vectors that can be used to improve WordNet and extend it to other languages. These methods outperform language-specific and resource-heavy approaches, enabling the construction of automated Wordnets in low-resource languages. We also release two large POS-split test sets for automated Wordnets for French and Russian that give a more accurate picture of a method's strengths and weaknesses. 
In future work these methods may be improved upon by incorporating other language representation methods such as multi-lingual embeddings \citep{Faruqui:14}. Furthermore, the sense-purification procedure we introduce has direct applications to word-sense induction, clustering, and disambiguation. \bibliographystyle {cslipubs-natbib} \bibliography {lilt2017} \end{document}
\section{Introduction} The ability to prepare graphene (single graphite sheets) \cite{graphene_nat05} has spurred the study of electronic behavior of this unique system, in which a pair of Dirac points occur at the edge of the Brillouin Zone \cite{antonio}. Bands extend linearly (referred to as ``massless Dirac'') both to lower and higher energy from point Fermi surfaces. This unusual behavior requires special symmetry and non-bonding bands. In bilayer graphene \cite{graphene_bilayer}, the linearly dispersing bands become quadratic, while retaining many of the symmetry properties. Spin-orbit coupling in these systems should lead to a gapped insulator in the bulk. Gapless modes remain at the edge of the system, protected by topological properties and time-reversal symmetry. This new state of matter, insulating in the bulk and metallic at the edges, has been called a topological insulator \cite{qshe_graphene,fu}. Point Fermi surfaces also arise in gapless semiconductors in which the bands extend quadratically (``massively'') from a single point separating valence and conduction bands \cite{Tsidilkovski}. However, these systems are, generically, not topological insulators. It has been argued that in HgTe quantum wells, where $s$ and $p$ bands overlap each other at the $\Gamma$ point as a function of well thickness, Dirac-like spectra can also arise with exotic topological properties. Due to the enhanced spin-orbit coupling in these materials, a state of matter exhibiting quantum spin Hall effect, has been predicted \cite{bernevig} and observed \cite{konig}. Recent developments in the synthesis of controlled nanostructures, heterojunctions and interfaces of transition metal oxides represent one of the most promising areas of research in materials physics. 
While several recent studies of oxide interfaces have focused on the polarity discontinuity that can give rise to unexpected states between insulating bulk oxides, including conductivity \cite{ohtomo, hwang}, magnetism \cite{magn_IF}, orbital order \cite{pentcheva}, even superconductivity \cite{IF_sc}, unanticipated behavior unrelated to polarity can also arise. The VO$_2$/TiO$_2$ interface involves no polar discontinuity, but only an open-shell charge and local magnetic discontinuity, according to the change $d^1 \leftrightarrow d^0$ across the interface. It was recently discovered \cite{vo2_tio2} that a three-unit-cell slab of VO$_2$ confined within insulating TiO$_2$ possesses a unique band structure. It shows four symmetry-related point Fermi surfaces along the (1,1) directions in the 2D Brillouin zone, in this respect appearing to be an analog to graphene. The dispersion away from this point is however different and unanticipated: a gap opens linearly along the symmetry line, but opens quadratically along the perpendicular direction. The descriptive picture is that the associated (electron or hole) quasiparticles are relativistic along the diagonal with an associated ``speed of light'' $v_F$, as they are in graphene in both directions, but they are non-relativistic in the perpendicular direction, with an effective mass $m$. Seemingly the laws of physics (energy vs. momentum) are different along the two principal axes. The situation is neither conventional zero-gap semiconductor-like, nor graphene-like, but has in some sense aspects of both. This kind of spectrum was found to be robust under modest changes in the structure. Here, we develop a tight-binding model description of this semi-Dirac spectrum. We find that a three-band model is needed, which can be downfolded to two bands at low energies.
A variant of the model, with only two bands, gives rise to anisotropic Dirac spectra, where one has linearly dispersing modes around point Fermi surfaces, with very different ``speed of light'' along two perpendicular axes. A common feature of the various systems discussed above are point Fermi-surfaces. From a device point of view, for example in thinking of p-n or p-n-p junctions, these systems may share common qualitative features. The actual dispersion, which would give rise to different density of states, may control more quantitative differences. However, a more fundamental difference may be in their topological properties.\cite{fu,thouless,haldane,moore} We begin with a 3-band tight-binding model of spinless fermions [corresponding to the half-metallic VO$_2$ trilayer although the system could also be nonmagnetic (spin degenerate)], on a square-lattice, defined by the Hamiltonian \begin{eqnarray} {\cal H}=&\sum_{\alpha=1}^3 (\sum_{i}\epsilon_\alpha n_{i,\alpha}+ \sum_{<i,j>} t_\alpha (c_{i,\alpha}^\dagger c_{j,\alpha}+ h.c.)) \\ \nonumber &+\lambda_1 \sum_{<i>,\pm} (c_{i,1}^\dagger c_{i\pm \hat x,3} -c_{i,1}^\dagger c_{i\pm \hat y,3} + h.c.)\\ \nonumber &+\lambda_2 \sum_{<i>,\pm} (c_{i,2}^\dagger c_{i\pm \hat x,3} -c_{i,2}^\dagger c_{i\pm \hat y,3} + h.c.)\\ \nonumber \label{eq:tb-realspace} \end{eqnarray} with $\epsilon_3>>\epsilon_1,\epsilon_2$, so that we have two overlapping bands $1$ and $2$, with no coupling between them. Instead, they couple through the third band, by a coupling which changes sign under rotation by $90$ degrees. Such a coupling can be shown to arise by symmetry between $d$ and $s$ orbitals, for example. The important aspect is that the coupling vanishes along the symmetry line, allowing the bands to cross (they have different symmetries along the (1,1) line). Now, since the third band is far from the Fermi energy it can be taken as dispersionless. 
Furthermore, without affecting any essential physics, we take $t_1=-t_2=t$ and $\lambda_1=\lambda_2=t'$. Thus, in momentum space the Hamiltonian becomes a $3\times 3$ matrix: \begin{eqnarray} H = \begin{pmatrix} \widetilde{\varepsilon}_{1k} & 0 & V_k \\ 0 & \widetilde{\varepsilon}_{2k} & V_k \\ V_k & V_k & \varepsilon_3 \end{pmatrix} \label{eq:tightBindingH} \end{eqnarray} where the dispersions and coupling are given by \begin{eqnarray} \begin{tabular} {r c l} \(\widetilde{\varepsilon}_{1k} \) & \( = \) & \( \varepsilon_1 + 2t(\cos k_x + \cos k_y) \) \\ \nonumber \(\widetilde{\varepsilon}_{2k} \) & \( = \) & \( \varepsilon_2 - 2t(\cos k_x + \cos k_y) \) \\ \(V_k\) & \( = \) & \( 2t'(\cos k_x - \cos k_y) \) \nonumber \end{tabular} \label{eq:eev} \end{eqnarray} Using the fact that orbital 3 is distant in energy, the three-orbital problem can be downfolded to a renormalized two-orbital problem which becomes (neglecting a parallel shift of the two remaining bands) \begin{eqnarray} H = \begin{pmatrix} \widetilde{\varepsilon}_{1k} & \frac{V_k^2}{\varepsilon_3} \\ \frac{V_k^2}{\varepsilon_3} & \widetilde{\varepsilon}_{2k} \end{pmatrix} \label{eq:tightBindingH2} \end{eqnarray} The eigenvalues $E_{k\pm}$ of $H$ are given by \begin{eqnarray} E_{k\pm} = \frac{\widetilde{\varepsilon}_{1k}+\widetilde{\varepsilon}_{2k}}{2} \pm \frac{1}{2}\sqrt{(\widetilde{\varepsilon}_{1k}-\widetilde{\varepsilon}_{2k})^2 + 4[{\frac{V_k^2}{\varepsilon_3}}]^2} \label{eq:eigenValue} \end{eqnarray} With some (not very stringent) restrictions on $\varepsilon_1 - \varepsilon_2$ to ensure that the uncoupled bands actually overlap, the two bands touch only at the point $\vec k_{sd}$ along the (1,1) lines where $\widetilde{\varepsilon}_{1k}= \widetilde{\varepsilon}_{2k}$, otherwise the two bands lie on either side of the touching point (the Fermi energy).
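As a numerical sanity check of the downfolding (with arbitrary illustrative parameter values, not fitted to VO$_2$), the two low-lying eigenvalues of the full three-band Hamiltonian agree with those of the $2\times 2$ effective Hamiltonian once the neglected parallel shift $V_k^2/\varepsilon_3$ is restored:

```python
import numpy as np

# Arbitrary illustrative values with eps3 >> eps1, eps2
eps1, eps2, eps3, V = 0.3, -0.2, 40.0, 1.0

H3 = np.array([[eps1, 0.0,  V],
               [0.0,  eps2, V],
               [V,    V,    eps3]])
H2 = np.array([[eps1,         V**2 / eps3],
               [V**2 / eps3,  eps2       ]])

full_low   = np.sort(np.linalg.eigvalsh(H3))[:2]  # two bands near the Fermi level
downfolded = np.sort(np.linalg.eigvalsh(H2))
shift      = V**2 / eps3   # parallel shift neglected in the downfolding

# agreement is second order in 1/eps3 once the shift is restored
print(np.max(np.abs(downfolded - shift - full_low)))   # ~2e-4
```

Without restoring the shift, the two spectra differ by roughly $V^2/\varepsilon_3 = 0.025$, which is exactly the parallel shift dropped in the text.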
When the 2$\times$2 Hamiltonian is expanded around the semi-Dirac point $\vec k_{sd}$ it becomes \begin{eqnarray} H = \begin{pmatrix} \widetilde{\varepsilon}_{1k} & \frac{V_k^2}{\varepsilon_3} \\ \frac{V_k^2}{\varepsilon_3} & \widetilde{\varepsilon}_{2k} \end{pmatrix} \rightarrow \begin{pmatrix} v_F q_2 & q_1^2/2m \\ q_1^2/2m & -v_F q_2 \end{pmatrix} \label{eq:tightBindingH2b} \end{eqnarray} where $q_2$ and $q_1$ denote the distance from $\vec k_{sd}$ along the (1,1) symmetry direction, and the orthogonal (1,${\bar 1}$), respectively. The Fermi velocity $v_F$ and effective mass $m$ can be related explicitly to the tight binding model parameters, and also calculated by standard ab initio techniques. The dispersion relation is that found for the three layer slab of VO$_2$ trilayer in TiO$_2$ at low energy, \begin{eqnarray} E_{q\pm} \rightarrow \pm \sqrt{(q_1^2/2m)^2 + (v_F q_2)^2}. \end{eqnarray} For comparison, the graphene dispersion relation is $E^g_{q\pm}=\pm v_F \sqrt{q_1^2 + q_2^2}$. A plot of the low-energy dispersion of the model giving rise to a semi-Dirac point in the 2D Brillouin Zone is shown in Fig. \ref{2DE_sD}. For the VO$_2$ trilayer,\cite{vo2_tio2} this dispersion holds up to 10-30 meV in the valence and conduction bands. \begin{figure}[ht] \begin{center} \includegraphics[width=\columnwidth,draft=false]{fig1_condmat.eps} \caption{On the left, the plot shows the low energy band eigenvalues $E_{q\pm}$ in a region near $E_F$ for the semi-Dirac point. On the right is the same plot for the anisotropic Dirac point.} \label{2DE_sD} \end{center} \end{figure} A few observations can be made at this point. First, if the original bands 1 and 2 were simply coupled by the same anisotropic mixing $V_k$ (without any third band in the picture), then anisotropic Dirac points (rather than semi-Dirac points) occur along the (1,1) directions. This dispersion is also shown in Fig. \ref{2DE_sD}. 
This type of two-band situation should not be particularly unusual, hence Dirac points in 2D systems are probably not as unusual as supposed, i.e. they are not restricted to graphene nor are they restricted to high symmetry points. While the constant energy surfaces of our model may appear to be elliptical (the common situation; the Dirac point has circular FSs), they are actually quite distinct. As $E\rightarrow$0 the velocity is constant in one direction and is $\sqrt{2mE}$ in the other; the FSs vanish as needles with their long axis perpendicular to the (1,1) direction. This can be seen in Fig. \ref{contours}, which shows the constant energy surfaces for electron doping in our model, with the 4 semi-Dirac points in the tetragonal k$_x$-k$_y$ Brillouin zone. The density of states (DOS) n(E), which is constant for effective mass systems and goes as $|E|$ for graphene, is proportional to $\sqrt{|E|}$ at a semi-Dirac point. When doped, the density of carriers will follow $n(E_F) \propto |E_F|^{3/2}$ behavior. \begin{figure}[ht] \begin{center} \includegraphics[width=6cm,draft=false]{fig2_condmat.eps} \caption{Plot showing the Fermi surfaces for electron doping that derive from the low energy excitation spectra of the semi-Dirac point along the $\Gamma-M$ direction in the k$_x$-k$_y$ plane of the square Brillouin zone.} \label{contours} \end{center} \end{figure} Another observation is that the {\it same bands} $E_{q\pm}$ can be obtained from related but distinct low-energy models, such as \begin{eqnarray} H_2 = \begin{pmatrix} v_F q_2 & iq_1^2/2m \\ -i q_1^2/2m & -v_F q_2 \end{pmatrix} \label{eq:tightBindingH2c} \end{eqnarray} and \begin{eqnarray} H_3 = \begin{pmatrix} 0 & q_1^2/2m +i v_F q_2\\ q_1^2/2m -i v_F q_2 & 0 \end{pmatrix}.
\label{eq:tightBindingH2d} \end{eqnarray} Although the bands resulting from $H_2$ and $H_3$ are the same, the eigenfunctions are different and are intrinsically complex for $H_2$ and $H_3$ unlike for the specific semi-Dirac point we discuss. One of the issues of most interest to such systems is the behavior in a magnetic field. Making the usual substitution $\vec q \rightarrow \vec p +\frac{e}{c}\vec A$ with momentum operator $\vec p$ and vector potential $\vec A$, we find the Landau gauge $\vec A = B(-x_2,0,0)$ to be the most convenient here. First, however, we note that the characteristics of the two directions, the mass $m$ and velocity $v_F$, introduce a natural unit of momentum $p_o =m v_F$ and length $x_o = \hbar/p_o$, and of energy $e_o = m v_F^2/2$. Introducing the atomic unit of magnetic field $B_{\circ}$ such that $\mu_B B_{\circ}$ = 1 Ha, and the dimensionless field $b = B/B_{\circ}$, units can be scaled away from the Hamiltonian by defining for each coordinate $x$ \begin{eqnarray} x_2 = \left(\frac{1}{\gamma b}\right)^{2/3} x_{\circ}\tilde{x}_2, \end{eqnarray} and similarly for $x_1$. Here $\gamma$ is the dimensionless ratio of the two natural energy scales: $\gamma = \mu_B B_{\circ}/(mv_F^2/2)$. Under this scaling \begin{eqnarray} p_1 +\frac{e}{c}A_1 = p_1 -\frac{e}{c}B x_2 \rightarrow p_{\circ}(\gamma b)^{2/3} (\tilde{p}_1 - \tilde{x}_2) \end{eqnarray} where $\tilde{p}_1, \tilde{x}_1$ are conjugate dimensionless variables, etc. Thus all possible semi-Dirac points (all possible $m$ and $v_F$ combinations) scale to a {\it single unique semi-Dirac point}, with the materials parameters determining only the overall energy scale. There is no limiting case in which the semi-Dirac point becomes either a Dirac point or a conventional effective mass zero-gap semiconductor. For the case of trilayer VO$_2$, $\gamma$ does not differ greatly from unity \cite{vo2_tio2}.
Shifting $\tilde{x}_2$ to $u=\tilde{x}_2 - \tilde{p}_1$, with conjugate dimensionless momentum $p$, the Hamiltonian in a field becomes \begin{eqnarray} H& = &2 e_o ~(\gamma b)^{2/3} ~[p~ \sigma_z +\frac{1}{2}u^2 ~\sigma_x] \\ \nonumber & \equiv &2 e_o~(\gamma b)^{2/3}~h. \end{eqnarray} The energy scale is much larger than for conventional orbits though smaller than in graphene \cite{antonio}, so the VO$_2$ trilayer may display an integer quantum Hall effect at elevated temperature as does graphene \cite{roomtempQHE}. A scalar equation for the eigenvalues can be obtained from $h^2$. Introducing the operator $Q = p + i u^2/2$, the eigenvalues of $h^2$ are those of $Q^{\dag}Q$ and $QQ^{\dag}$, giving the mathematical problem \begin{eqnarray} Q^{\dag}Q \phi_n(u) \equiv \left(-\frac{d^2}{du^2} + \frac{1}{4}u^4 -u\right)\phi_n(u) =\varepsilon_n^2\phi_n(u). \end{eqnarray} The equation for $QQ^{\dag}$ has the opposite sign of the linear term, with identical eigenvalues and eigenfunctions related by inversion. Note that every eigenfunction of $h$ is also an eigenfunction of $h^2$, and that although the potential is negative in the interval (0,4$^{1/3}$), the eigenvalues $\varepsilon_n^2$ must be non-negative. \begin{figure}[htbp] \centering \includegraphics[width=\columnwidth,draft=false]{figureQuantizedEnergyModifiedV2.eps} \caption {Potential energy function for the one-dimensional Schr\"odinger equation and the resulting quantized energy levels $\varepsilon_n^2$ of $h^2$. The lowest numerical values of the three energy eigenvalues $\varepsilon_n=+\sqrt{\varepsilon_n^2}$ are provided.}\label{fig:quantizedEDia} \end{figure} We have obtained the eigenvalues both by precise numerical solution and by WKB approximation, finding that the latter is an excellent approximation.
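A minimal finite-difference sketch of such a numerical solution is straightforward (box width and grid size here are ad hoc choices; SciPy's tridiagonal eigensolver is assumed available):

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

# Discretize (-d^2/du^2 + u^4/4 - u) phi = eps_n^2 phi on a large box
# with Dirichlet walls; the box and grid are ad hoc choices.
N, L = 2000, 7.0
u = np.linspace(-L, L, N)
h = u[1] - u[0]

diag = 2.0 / h**2 + 0.25 * u**4 - u      # kinetic diagonal + potential
off  = np.full(N - 1, -1.0 / h**2)       # central-difference off-diagonal

eps_sq = eigh_tridiagonal(diag, off, eigvals_only=True)[:3]
print(np.sqrt(eps_sq))   # ground state near the 0.59 quoted in the text
```

All eigenvalues $\varepsilon_n^2$ come out positive, as they must for $Q^{\dag}Q$, even though the potential dips below zero.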
Initially neglecting the linear term in the potential, the WKB condition \cite{wkb} \begin{eqnarray} \int_{-\sqrt{2}\epsilon_{n}^{\frac{1}{2}}}^{\sqrt{2}\epsilon_{n}^{\frac{1}{2}}} \sqrt{\epsilon^2_{n}-\frac{1}{4}u^4} \ du=(n+\frac{1}{2})\pi \label{eq:WKBint} \end{eqnarray} can be solved to give the WKB eigenvalues for the quartic potential as \begin{eqnarray} \epsilon^2_{n}=\left[3 \sqrt{\frac{\pi}{2}} \frac{\Gamma(\frac{3}{4})}{\Gamma(\frac{1}{4})}\right]^{4/3} (n+\frac{1}{2})^{\frac{4}{3}} = 1.3765 (n+\frac{1}{2})^{\frac{4}{3}}. \label{eq:WKBEn} \end{eqnarray} The linear perturbation corrects the eigenvalues only to second order, which is significant only for the ground state ($\sim$0.74 versus the numerical solution of 0.59). The WKB error is less than 0.01 for the first excited state and gets successively smaller for higher eigenvalues. We see then that the semi-Dirac system has eigenvalues in a magnetic field which scale as $B^{2/3}$ and increase as $(n+\frac{1}{2})^{2/3}$ as $n$ gets large. Both aspects lie between the behaviors for conventional Landau levels (linear in $B$, proportional to $n+\frac{1}{2}$) and the Dirac point behavior (proportional to $\sqrt{B~n}$), as might have been anticipated. Some low-lying eigenvalues of $h^2$ are shown in Fig. \ref{fig:quantizedEDia} against the potential well. Note that there is no zero-energy solution as in the graphene problem. Another way in which Dirac spectra can arise on a square lattice can be motivated in terms of the model of Bernevig {\sl et al.} \cite{bernevig} for HgTe quantum wells. In their model the two bands crossing each other have $s$ and $p$ characters respectively. Thus the interband hopping term changes sign under reflection. This can lead to a ($\sin{k_x} + i \sin{k_y}$) coupling between the bands. Note that in this model, only a single Dirac point can occur and it must be at $k=0$, when the two bands touch each other at that point.
In contrast, in the models discussed here, there are four symmetry-related semi-Dirac (or anisotropic Dirac) points whose location can vary continuously along the symmetry axis (1,1), with changes in band parameters. A feature unique (so far) to the VO$_2$ trilayer system is that the point Fermi surface arises in a half-metallic ferromagnetic system where time-reversal symmetry is broken. The VO$_2$ trilayer and related semi-Dirac point systems may therefore offer unusual spintronics characteristics and applications. In conclusion, we have developed a tight-binding model description of the semi-Dirac and anisotropic Dirac spectra relevant to VO$_2$-TiO$_2$ multi-layer systems. Our tight-binding model contains nothing unconventional, indicating that semi-Dirac and anisotropic point systems are not as rare as has been assumed. The low energy characteristics of the semi-Dirac point are intermediate between those of zero-gap (massive) semiconductors and Dirac (massless) point systems. The study of such oxide nano-heterostructures has only just begun and they clearly promise a number of diverse electronic structures and novel phases of matter. This project was supported by DOE grant DE-FG02-04ER46111 and by the Predictive Capability for Strongly Correlated Systems team of the Computational Materials Science Network. V.P. acknowledges financial support from Xunta de Galicia (Human Resources Program).
\section{Introduction} Consider a degree $n$ polynomial $p(x)$ with roots $x_1, \ldots, x_n$: $$p(x) = x^n+a_1x^{n-1}+\ldots+a_{n-1}x+a_n = \prod_{i=1}^n (x-x_i).$$ A \emph{Tschirnhaus transformation} is a polynomial transformation of the roots of $p$. That is, given a polynomial $$T(x) = b_{n-1}x^{n-1}+b_{n-2}x^{n-2}+\ldots+b_1x+b_0,$$ and setting $$y = T(x)$$ we can form a new polynomial with roots $T(x_1), \ldots, T(x_n)$: $$q(y) = y^n+A_1y^{n-1}+\ldots+A_{n-1}y+A_{n}= \prod_{i=1}^n (y-T(x_i)).$$ Tschirnhaus's original idea was to transform $p$ to a solvable form -- specifically, to choose $T$ such that $q(y) = y^n+A_n$ -- as part of a proposed method for determining in radicals the roots of $p$. This method is not successful because in general it is not possible to determine in radicals the coefficients of the necessary $T$. On the other hand, a number of interesting results have since been proved involving the use of Tschirnhaus transformations to reduce families of polynomials to certain canonical forms; in particular, the problem of finding $T$ such that $q$ has the $(n-k)$-parameter form $$q(y) = y^n+A_{k+1}y^{n-k-1}+\ldots+A_{n-1}y+A_n$$ with $A_1 = A_2 = \ldots = A_k = 0$ has been widely considered. We provide a partial review of the historical development of this theory in Section \ref{background}. Recently progress has been made by casting the problem in terms of the geometric notion of \emph{resolvent degree}, as introduced by Farb and Wolfson.\cite{farb2020resolvent} We review the precise definitions in Section \ref{rd}, but roughly speaking the idea is as follows. We say a generically finite dominant map $X \to Y$ has \emph{essential dimension} at most $d$ if there is a pullback square \[ \begin{tikzcd} X \arrow[rr] \arrow[dd] & & W \arrow[dd] \\ & & \\ Y \arrow[rr] & & Z \end{tikzcd} \] with $\dim(Z) \leq d$. We then say $X \to Y$ has \emph{resolvent degree} at most $d$ if it factors as a composition of maps each of essential dimension at most $d$.
(This is somewhat stronger than what is required in the precise definition; for instance, it is sufficient for $X \to Y$ to be birationally equivalent to such a composition.) In particular, we are interested in the resolvent degree of the root cover of the family of degree $n$ polynomials. That is, let $\mathcal{P}_n$ be the space of monic degree $n$ polynomials, and $\widetilde{\mathcal{P}_n}$ the space of pairs $(p, \lambda)$ of polynomials $p \in \mathcal{P}_n$ with a choice of root $\lambda$. Then there is a generically finite dominant map $\widetilde{\mathcal{P}_n} \to \mathcal{P}_n$ by ``forgetting the root'', and we are interested in bounds on $$\mathrm{RD}(n) := \mathrm{RD}(\widetilde{\mathcal{P}_n} \to \mathcal{P}_n).$$ Intuitively, we have $\mathrm{RD}(n) \leq d$ if the roots of any $p \in \mathcal{P}_n$ can be determined by solving algebraic functions of at most $d$ variables. For example, for $n = 2$, the quadratic formula implies that the root cover $\widetilde{\mathcal{P}_2} \to \mathcal{P}_2$ is a pullback of the map \begin{align*} \mathbb{P}^1 &\to \mathbb{P}^1 \\ z &\mapsto z^2. \end{align*} Similarly, the existence of ``formulas in radicals'' for $n=3$ and $n = 4$ implies that $\widetilde{\mathcal{P}_3} \to \mathcal{P}_3$ and $\widetilde{\mathcal{P}_4} \to \mathcal{P}_4$ factor into towers of pullbacks of cyclic self-covers of $\mathbb{P}^1$, so that $\mathrm{RD}(3) = \mathrm{RD}(4) = 1$. Many classical results concerning Tschirnhaus transformations can be translated into statements about resolvent degree: if any $p \in \mathcal{P}_n$ can be reduced to an $(n-k)$-parameter form by means of a Tschirnhaus transformation $T$, we have $$\mathrm{RD}(n) \leq n-k,$$ provided that $T$ can \emph{itself} always be determined by solving algebraic functions of at most $n-k$ variables.
For example, a result of Bring shows that any degree $5$ polynomial can be put into the one-parameter form $y^5+A_5y+1$.\cite{chen2017erland} This is sufficient to imply $\mathrm{RD}(5) = 1$; in Section \ref{rd} we treat this example in detail. The bounds so-obtained are generally not sharp, as the improvements obtained by Wolfson and Sutherland have demonstrated.\cite{wolfson2021tschirnhaus, sutherland2021upper} On the other hand, this formulation of the problem implies different constraints on which Tschirnhaus transformations $T$ are permissible than were generally assumed by classical authors. For example, in considering the problem of transforming a general degree $n$ polynomial $p(x)$ into an $(n-k)$-parameter form by means of a Tschirnhaus transformation $T$, Sylvester only considers those transformations whose coefficients can be determined without solving any equation of degree higher than $k$. This is much more restrictive than necessary if one's goal is to show $$\mathrm{RD}(n) \leq n-k.$$ For example, for $k = 6$, Sylvester obtains the following: \begin{proposition}[Sylvester] For $n \geq 44$, a general polynomial of degree $n$ can be put into an $(n-6)$-parameter form by means of a Tschirnhaus transformation whose coefficients can be determined without solving any equation of degree higher than 5. \end{proposition} As a corollary, one obtains the resolvent degree bound $\mathrm{RD}(n) \leq n-6$ for $n \geq 44$; on the other hand, finding the necessary Tschirnhaus transformation is a resolvent degree $\mathrm{RD}(5) = 1$ problem, since only equations of degree at most 5 are involved. Sylvester's was nonetheless the best known bound for $k = 6$ prior to Wolfson's 2020 proof that $\mathrm{RD}(n) \leq n-6$ for $n \geq 41$. \cite{wolfson2021tschirnhaus} In Wolfson's argument, finding the necessary Tschirnhaus transformation is shown to be at worst a resolvent degree 35 problem. 
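As a minimal symbolic sanity check, the simplest such reduction (the $k=1$ case, a linear Tschirnhaus transformation eliminating $A_1$) can be verified with a computer algebra system; this is an illustrative sketch, not a construction from the works cited:

```python
import sympy as sp

x, y = sp.symbols("x y")
a1, a2, a3 = sp.symbols("a1 a2 a3")

# Degree-3 example: the linear Tschirnhaus transformation T(x) = x + a1/3
# yields q(y) = prod_i (y - T(x_i)) with vanishing y^2 coefficient.
p = x**3 + a1 * x**2 + a2 * x + a3
T = x + a1 / 3
q = sp.expand(sp.resultant(p, y - T, x))   # q(y) computed as a resultant in x

print(q.coeff(y, 2))   # 0: the transformed polynomial is "depressed"
```

For the concrete instance $(a_1,a_2,a_3)=(-3,1,-1)$ this gives $q(y)=y^3-2y-2$, as one can also check by substituting $x = y+1$ directly.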
It has been conjectured by Wiman and (separately) Chebotarev that $\mathrm{RD}(n) \leq n-6$ for $n \geq 21$, though their proposed arguments contain gaps. \cite{wiman1927anwendung, chebotarev1954problem} Recently, Sutherland has provided a rigorous proof of this bound using the classical theory of polarity.\cite{sutherland2021upper} In the current note we use ideas from Sylvester to provide an alternative proof of the bound $\mathrm{RD}(n) \leq n-6$ for $n \geq 21$. In particular, we show \begin{theorem*}[Main Theorem] For $n \geq 21$, a general polynomial of degree $n$ can be put into an $(n-6)$-parameter form by means of a Tschirnhaus transformation whose coefficients can be determined without solving any equation of degree higher than 20. \end{theorem*} This implies $\mathrm{RD}(n) \leq n-6$ for $n \geq 21$. We now give an outline of the remaining sections of the paper. In Section \ref{background} we give a brief historical overview of the classical theory of Tschirnhaus transformations and some of the important results therein. In Section \ref{sylvester} we turn to Sylvester's work in particular and give a description of his ``method of obliteration'': a technique for finding solutions of systems of polynomial equations when the number of variables is large relative to the number of equations, which lies at the heart of his work on Tschirnhaus transformations. In Section \ref{rd} we discuss resolvent degree and consider the $n = 5$ case in detail as an illustration of the relation between resolvent degree and the classical theory of Tschirnhaus transformations. Section \ref{linear} describes a key ingredient of our main result: the use of the ``method of obliteration'' to determine linear subspaces of hypersurfaces. Finally, in Section \ref{actual_proof} we give the proof of our main theorem. \section{Some Background on Tschirnhaus Transformations}\label{background} \subsection{Preliminaries} Let $p(x)$ be a polynomial over a (not necessarily closed) field $K$.
Then we have $$p(x) = x^n+a_1x^{n-1}+\ldots+a_{n-1}x+a_n = \prod_{i=1}^n (x-x_i)$$ for some coefficients $a_1, \ldots, a_n \in K$ and roots $x_1, \ldots, x_n \in \overline{K}$. Fixing some algebraic extension $L$ of $K$, a \emph{Tschirnhaus transformation} is a polynomial $T \in L[x]$, $$T(x) = b_0+b_1x+\ldots+b_{n-1}x^{n-1}$$ which we apply to the roots of $p$ to obtain a new polynomial $$q(y) = \prod_{i=1}^n (y-T(x_i)) = y^n+A_1y^{n-1}+\ldots+A_{n-1}y+A_n.$$ Note that the coefficients $A_1, \ldots, A_n$ are symmetric functions of the transformed roots $T(x_1), \ldots, T(x_n)$, and hence are symmetric in $x_1, \ldots, x_n$. The coefficients $a_1, \ldots, a_n$ generate the algebra of symmetric polynomials in $x_1, \ldots, x_n$, so we can determine the $A_i$ polynomially in terms of $a_1, \ldots, a_n$ and $b_0, \ldots, b_{n-1}$. In particular, this implies $q \in L[y]$. The precise constraints on the field $L$ from which the coefficients of $T$ may be taken vary from author to author but generally one considers towers of extensions of bounded degree. Tschirnhaus's original goal was to find a formula in radicals for the roots of $p$. More precisely, taking $L$ to be the solvable closure of $K$, his strategy was to determine a Tschirnhaus transformation $T \in L[x]$ which transforms $p$ into the solvable form $q(y) = y^n+A_n$. If $y_i$ is a root of $q$, we can find the corresponding root(s) of $p$ (i.e., those $x_j$ such that $T(x_j) = y_i$) as the roots of $$\mathrm{GCD}(p(x), T(x)-y_i).$$ If there are $d$ roots of $p$ which map to $y_i$, this will be a degree $d$ polynomial. In particular, if $q$ has distinct roots, then the roots of $p$ can be recovered rationally from the roots of $q$. Thus if $q$ is solvable and $T$ has coefficients in a solvable extension of $K$, then $p$ will in general be solvable. (And at worst, recovering the roots of $p$ will require the solution of an equation of degree $n-1$.)
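As a purely numerical illustration of the definition of $q$ (this sketch is ours, not part of the classical theory, and the helper names are illustrative): the transformed polynomial can be obtained by applying $T$ to the roots and expanding the product.

```python
# Illustrative sketch: compute q(y) = prod_i (y - T(x_i)) from the roots
# of p. Coefficient lists are lowest-degree-first; helper names are ours.

def expand_from_roots(roots):
    """Expand prod (y - r) over the given roots into coefficients."""
    coeffs = [1.0]  # the constant polynomial 1
    for r in roots:
        out = [0.0] * (len(coeffs) + 1)
        for i, c in enumerate(coeffs):
            out[i + 1] += c   # contribution of y * (c * y^i)
            out[i] -= r * c   # contribution of -r * (c * y^i)
        coeffs = out
    return coeffs

def tschirnhaus(roots, T):
    """Apply the Tschirnhaus transformation T to each root of p."""
    return expand_from_roots([T(x) for x in roots])

# p(x) = (x - 1)(x - 2); T(x) = x^2 sends the roots to 1 and 4,
# so q(y) = (y - 1)(y - 4) = y^2 - 5y + 4.
print(tschirnhaus([1.0, 2.0], lambda x: x**2))  # [4.0, -5.0, 1.0]
```

In the text, of course, the point is that the coefficients of $q$ are symmetric in the roots and hence computable directly from $a_1, \ldots, a_n$ and $b_0, \ldots, b_{n-1}$; the sketch above merely checks the definition on an example.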
This strategy fails because it is not generally possible to determine $T$ solvably such that all the intermediate terms $A_1, A_2, \ldots, A_{n-1}$ of $q$ vanish. On the other hand, there are weaker versions of Tschirnhaus's problem which are tractable. More precisely, we can ask: what conditions on $n$ and $L$ are sufficient for there to exist $T$ in $L[x]$ transforming the degree $n$ polynomial $p \in K[x]$ to a polynomial $q \in L[y]$ with $A_1 = \ldots = A_k = 0$? That is, we consider the problem of setting some (rather than all) of the intermediate coefficients to zero (thus reducing the number of parameters of the family of polynomials), and we relax the requirement that $L$ necessarily be a solvable extension. We now consider in more detail what is necessary to determine $T$ such that $A_1 = \ldots = A_k = 0$. As we remarked above, $A_i$ is a polynomial function of $a_1, \ldots, a_n$ and $b_0, \ldots, b_{n-1}$. Treating the $b_i$'s as unknowns to be determined, we further observe that $A_i$ is homogeneous of degree $i$ in the variables $b_0, \ldots, b_{n-1}$. For example, since \begin{align*} A_1 &= -\sum_{i=1}^n T(x_i)\\ &= -\sum_{i=1}^n (b_{n-1}x_i^{n-1}+b_{n-2}x_i^{n-2}+\ldots+b_1x_i+b_0)\\ &= \left(-\sum_{i=1}^n x_i^{n-1}\right)b_{n-1}+\left(-\sum_{i=1}^n x_i^{n-2}\right)b_{n-2}+\ldots+\left(-\sum_{i=1}^n x_i\right)b_{1}-nb_0 \end{align*} we can ensure $A_1 = 0$ as long as the coefficients $b_0, \ldots, b_{n-1}$ of $T$ are chosen to satisfy a single homogeneous linear equation. (The coefficients of this equation are symmetric in the $x_i$, so again can be expressed in terms of the coefficients $a_1, \ldots, a_n$ of $p$; it is not necessary to know the roots $x_i$.)
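To make the parenthetical remark concrete: the coefficients of the linear equation above are (up to sign) the power sums of the roots, and these can be computed from $a_1, \ldots, a_n$ alone via Newton's identities. A small illustrative sketch (our code, not drawn from the classical sources):

```python
def power_sums(a, kmax):
    """Power sums s_j = sum_i x_i^j of the roots of the monic polynomial
    x^n + a[0]*x^(n-1) + ... + a[n-1], computed from the coefficients
    alone via Newton's identities, using e_j = (-1)^j a_j."""
    n = len(a)
    e = [1] + [(-1) ** j * a[j - 1] for j in range(1, n + 1)]
    s = [n]  # s_0 = number of roots
    for k in range(1, kmax + 1):
        total = (-1) ** (k - 1) * k * e[k] if k <= n else 0
        for j in range(1, min(k, n + 1)):
            total += (-1) ** (j - 1) * e[j] * s[k - j]
        s.append(total)
    return s

# p(x) = x^2 - 3x + 2 has roots 1 and 2: s_1 = 3, s_2 = 5, s_3 = 9.
print(power_sums([-3, 2], 3))  # [2, 3, 5, 9]
```

With these in hand, the linear condition $A_1 = 0$ reads $s_{n-1}b_{n-1} + \ldots + s_1b_1 + nb_0 = 0$, entirely in terms of the coefficients of $p$.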
More generally, $A_i$ vanishes if and only if the $i$th symmetric function of $T(x_1), \ldots, T(x_n)$ vanishes, so to find a Tschirnhaus transformation $T$ such that $A_i = 0$ in the transformed polynomial we need to solve a degree $i$ polynomial in $b_0, \ldots, b_{n-1}$ whose coefficients are polynomials in $a_1, \ldots, a_n$. Thus, to determine $T$ we must find a point in $\mathbb{P}^{n-1}_{\mathbf{b}}$ on the intersection of the hypersurfaces $V(A_1), V(A_2), \ldots, V(A_k)$, which are of degrees $1, 2, \ldots, k$, respectively. \subsection{Some Results on Tschirnhaus Transformations from 1683 to Present} Tschirnhaus, in his 1683 paper introducing these transformations, claimed to be able to solve any degree $n$ polynomial by removing \emph{all} intermediate terms.\cite{von1683methodus} That is, he claimed one could find a transformation $T$ such that $$A_1 = A_2 = \ldots = A_{n-1} = 0$$ and the transformed polynomial has the solvable form $$y^n+A_n = 0.$$ The problem with this strategy is that finding the necessary $T$ requires the solution of a system of polynomial equations of degrees $1, 2, \ldots, n-1$. In general this leads to an equation of degree $(n-1)!$, so for $n > 3$ finding $T$ is a priori more complicated than finding the roots of the original degree $n$ polynomial. Tschirnhaus did show explicitly how to find $T$ in the $n = 3$ case, and his proof readily generalizes to removing the first two intermediate terms from any degree $n$ polynomial. Finding a Tschirnhaus transformation $T$ yielding $A_1 = A_2 = \ldots = A_k = 0$ requires solving a system of $k$ equations in $n$ variables, of degrees $1, 2, 3, \ldots, k$. In the worst case this leads to an equation of degree $k!$, but when the system is sufficiently underdetermined (i.e., $n$ much larger than $k$), a solution can be found without solving any polynomial of degree greater than $k$.
The first result of this kind was given in 1786 by Bring, who showed that when $n \geq 5$, we can find $T$ such that $A_1 = A_2 = A_3 = 0$ without solving any equation of degree higher than three.\cite{chen2017erland} In 1834, Jerrard independently recovered Bring's result that removal of three terms from a degree 5 polynomial was possible by means of a Tschirnhaus transformation.\cite{Jerrard} In fact, Jerrard went on to claim that his methods could yield a reduction of a general quintic to a \emph{solvable} form.\cite{jerrard1835xxiv} (Jerrard was aware of, but did not accept, Abel's 1824 work on the insolvability of the quintic.) Shortly thereafter Hamilton was commissioned by the British Association for the Advancement of Science to investigate the validity of Jerrard's methods, issuing his report in 1837.\cite{hamilton1836inquiry} He showed that Jerrard's reductions were in many cases ``illusory''. Jerrard allowed Tschirnhaus transformations of degree potentially as high as or higher than that of the original polynomial $p$, and Hamilton demonstrated that in the ``illusory'' cases the transformation $T$ found by Jerrard was a multiple of $p$. In this case the transformed polynomial is always $q(y) = y^n$ (the same as if the ``transformation'' $y = T(x) = 0$ were permitted) for any degree $n$ polynomial $p$, so there is no hope of determining the roots of $p$ by studying $q$. On the other hand, Hamilton showed that Jerrard's methods can be made to work -- and the ``illusory'' cases avoided -- in higher degrees. More precisely, for any fixed $k$, one can find a (nonzero, degree $\leq n-1$) Tschirnhaus transformation $T$ yielding $A_1 = A_2 = \ldots = A_k = 0$ without solving any equation of degree higher than $k$, provided the degree $n$ of the original polynomial $p$ is large enough relative to $k$. In particular, for $k = 4$, Hamilton showed $n \geq 11$ suffices, and for $k = 5$, $n \geq 47$.
In 1886, Sylvester published a geometric explanation of the Hamilton/Jerrard method and sharpened some of the bounds (finding, in particular, that $n \geq 10$ suffices for $k = 4$, and $n \geq 44$ for $k = 5$).\cite{sylvester1887so} Sylvester claimed that these sharpened bounds are optimal if no ``elevation of degree'' is allowed -- that is, if no equations of degree higher than $k$ are to be solved. This claim requires careful interpretation -- Sylvester does not consider the possibility of higher degree polynomials arising but factoring into lower degree terms. This is known to happen for $k = 3$: if one attempts to remove three terms from a degree $n$ polynomial, the system of equations of degrees 1, 2, and 3 leads in general to an equation of degree 6. Sylvester's method is able to avoid this elevation of degree only when $n \geq 5$. For $n = 4$, Sylvester's method is inapplicable, but Lagrange had shown (in 1771, more than 100 years prior) that the degree 6 equation that arises in this case always factors into a pair of cubics, so that the necessary Tschirnhaus transformation can in fact be determined without solving any equation of degree higher than 3.\cite{lagrange1771reflexions} Further reductions have been achieved, but all loosen or ignore the ``elevation of degree'' restriction in one way or another. Viewing the degree 9 polynomial $$p(x) = a_0x^9+a_1x^8+\ldots+a_8x+a_9$$ as a multi-valued ``algebraic function'' sending $(a_0, \ldots, a_9)$ to the set of roots of $p$, Hilbert asked whether this could be written as a composition of algebraic functions of at most 4 variables.\cite{hilbert1927gleichung} As part of his investigation, Hilbert sketched a method for finding a Tschirnhaus transformation such that $A_1 = A_2 = A_3 = A_4 = 0$ when $n \geq 9$. Hilbert's method requires finding a line on a cubic surface. This in turn requires the solution of an equation of degree 27.
However, by putting the equation of the cubic surface into a four-parameter canonical form (the ``pentahedral form'', originally suggested by Sylvester), Hilbert was able to show that the coefficients of the necessary line were themselves algebraic functions of four variables, so this was sufficient for his purposes. In 1945, B. Segre strengthened Hilbert's result, describing an algorithm for finding a Tschirnhaus transformation removing 4 terms from a degree $n \geq 9$ polynomial without solving any equation of degree higher than 5.\cite{segre1945algebraic} The same result was later proven by Dixmier using different methods.\cite{dixmier1993histoire} Also building on Hilbert's work were Wiman, who sketched an alternative proof of the $n = 9$ bound for $k = 4$, and G. Chebotarev, who extended Wiman's ideas to sketch a proof that $n = 21$ suffices for $k = 5$.\cite{chebotarev1954problem, wiman1927anwendung, sutherland2021gn} In 1975, Brauer showed that, when $n > k!$, a degree $n$ polynomial can be written as a composition of algebraic functions of $(n-k-1)$ variables; in this framing of the problem it is permissible to simply solve the degree $k!$ equation to find a Tschirnhaus transformation removing $k$ terms, since this will be an algebraic function of fewer variables than the original degree $n$ polynomial.\cite{brauer1975resolvent} For $k \geq 7$, this $n > k!$ bound is lower than the corresponding bounds for removal of $k$ terms found by Sylvester, though it should again be emphasized that this ignores the ``no elevation of degree'' constraint which is explicit in Sylvester, and implicit in pre-Sylvester work on the problem.
In 2020, Wolfson described a function $F(k)$ such that a degree $n$ polynomial can be put in an $(n-k-1)$-parameter form whenever $n \geq F(k)$, and which improved significantly on Brauer's bounds.\cite{wolfson2021tschirnhaus} Wolfson frames the problem in terms of the theory of \emph{resolvent degree}, which allows bounds on the number of necessary parameters for a family of algebraic functions to be computed based on the dimensions of certain spaces. For $k = 5$, Wolfson shows that $n \geq 41$ suffices, which also improves on Sylvester's bound of $n \geq 44$. To find the Tschirnhaus transformation accomplishing the reduction in this case requires, among other things, finding a 2-plane inside a cubic hypersurface in $\mathbb{P}^6$. Informally, since the dimension of the moduli space of cubic hypersurfaces in $\mathbb{P}^6$ is 35, finding a 2-plane inside such a hypersurface is at worst a 35-parameter problem, so is permissible when putting a degree 41 polynomial into 35-parameter form. This approach does not produce the degrees of the equations which must be solved to find the Tschirnhaus transformation (which may be higher than $n$, as in Hilbert's proof that $n = 9$ suffices for $k = 4$, where it was necessary to solve a degree 27 equation to find a line on a cubic surface). Most recently, Sutherland used a combination of ideas from the classical theory of polarity together with optimizations to the moduli-dimension-counting method to improve on the $F(k)$ bound for $k = 5, \ldots, 15$ and all $k \geq 18$.\cite{sutherland2021upper} In particular, for $k = 5$, Sutherland shows that $n = 21$ suffices, giving the first rigorous proof of the bound first claimed by Chebotarev. In the current note, we show that the $n = 21$ result can be recovered from Sylvester's method if partial elevation of degree is allowed.
More precisely, one can find a Tschirnhaus transformation removing 5 terms (i.e., such that the coefficients of the transformed polynomial satisfy $A_1 = A_2 = A_3 = A_4 = A_5 = 0$) from a degree $n$ polynomial whenever $n \geq 21$, by solving no equation of degree higher than 20. A key ingredient is that there exists an explicit algorithm to find a 2-plane on a cubic hypersurface in $\mathbb{P}^9$, without solving any equation of degree higher than 5. \section{Sylvester's Obliteration Formula}\label{sylvester} Though motivated by the problem of finding Tschirnhaus transformations, the procedure Sylvester describes in \cite{sylvester1887so} is remarkably general, allowing a solution to be found to any sufficiently underdetermined system, without elevation of degree. More precisely, given a system $S$ of polynomial equations in $N$ variables of degree at most $k$, with $n_i$ the number of equations of degree $i$, Sylvester shows there is a function $l(n_1, \ldots, n_k)$ such that, if $N > l(n_1, \ldots, n_k)$, then a solution to $S$ can be found without solving any equation of degree greater than $k$. The method is as follows: Pick any $f$ of maximal degree $k$ from the system $S$. We can find a solution to $S$ by first finding a line $L$ contained in the solution set of the subsystem $S' = S \setminus \{f\}$, then intersecting this line with the vanishing locus of $f$. To find $L$, first find a solution $Q = (q_0, \ldots, q_N)$ to $S'$. Then to get the required line it suffices to find a point $P = (p_0, \ldots, p_N)$ such that $P+\lambda Q$ is a solution to $S'$ for all $\lambda \in \mathbb{C}$. For any equation $g \in S'$, view $g(P+\lambda Q)$ as a polynomial in $\lambda$; if $\deg(g) = d$, the coefficients of $1, \lambda, \ldots, \lambda^{d-1}, \lambda^d$ must all vanish. In fact the coefficient of $\lambda^d$ is just $g(Q)$, so it vanishes since we chose $Q$ to be a solution of $S'$.
The vanishing of the remaining coefficients imposes polynomial conditions on $p_0, \ldots, p_N$ of degrees $1, 2, \ldots, d$. Ranging over all $g \in S'$, we see that $P$ must be a solution to a system $S''$ with $m_i$ equations of degree $i$, where \begin{align*} m_k &= n_k-1\\ m_{k-1} &= n_k+n_{k-1}\\ m_{k-2} &= n_k+n_{k-1}+n_{k-2}\\ &\ \vdots\\ m_1 &= n_k+n_{k-1}+\ldots+n_1. \end{align*} To find the needed point $P$, we proceed inductively: again hold aside some polynomial of highest degree (say $h$), find a linear solution to the subsystem $S'' \setminus \{h\}$, then intersect that line with $h$, and so on. The number of equations of maximal degree decreases by 1 with each step, so eventually we are left with a (possibly very large) system of linear equations; a line in the solution set of this system can be found provided the number of variables is (at least) one greater than the number of equations. Backtracking to a linear solution to the original system then requires only that equations of degree at most $k$ be solved one at a time. The core reduction is succinctly summarized by Sylvester's formula of obliteration, which we will refer to repeatedly: \begin{proposition}[Sylvester] Given $n_1, \ldots, n_k \in \mathbb{N},$ let $$[n_k, n_{k-1}, \ldots, n_2, n_1]$$ denote the minimum number of variables such that a linear solution to any system of equations with exactly $n_i$ equations of degree $i$ can be found without elevation of degree. Then $$[n_k, n_{k-1}, \ldots, n_2, n_1] \leq 1+[m_k, m_{k-1}, \ldots, m_2, m_1]$$ where $$m_i = \begin{cases} \sum_{j=i}^k n_j & i \neq k \\ n_k-1 & i = k.\end{cases}$$ \end{proposition} Note that applying the obliteration formula $n_k$ times yields a system with no equations of degree $k$, so all equations of the highest degree can be removed. This can be repeated until only (a large number of) linear equations remain, in which case the minimum number of variables needed is easy to determine.
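The bookkeeping in repeated applications of the obliteration formula is entirely mechanical, and can be sketched as a short recursion (our code; the base case $[m] = m+1$ encodes the linear case just described):

```python
def obliteration_bound(counts):
    """Upper bound on Sylvester's bracket [n_k, ..., n_1], where `counts`
    lists the number of equations of each degree, highest degree first."""
    # Discard leading degrees with no equations remaining.
    while len(counts) > 1 and counts[0] == 0:
        counts = counts[1:]
    if len(counts) == 1:        # only linear equations remain:
        return counts[0] + 1    # base case [m] = m + 1
    # One obliteration step: m_k = n_k - 1, and m_i = n_k + ... + n_i.
    new = [counts[0] - 1]
    running = counts[0]
    for n in counts[1:]:
        running += n
        new.append(running)
    return 1 + obliteration_bound(new)

print(obliteration_bound([2]))     # [2] = 3
print(obliteration_bound([1, 1]))  # [1, 1] <= 1 + [2] = 4
```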
For example, consider the problem of finding a line on a cubic hypersurface. We have a system of equations with 1 equation of degree 3, 0 equations of degree 2, and 0 of degree 1. By repeated application of the obliteration formula, $$[1, 0, 0] \leq 1+[1, 1] \leq 2+[2] = 2+3 = 5,$$ so it is possible to find a line solvably (in fact, by solving no equation of degree greater than 3) on a cubic hypersurface in $\mathbb{P}^5$. \section{Resolvent Degree}\label{rd} \subsection{Definitions} Let $k$ be an algebraically closed field and let $X \to Y$ be a generically finite dominant map of $k$-varieties. Following Buhler and Reichstein \cite{buhler1999tschirnhaus}, we define the \emph{essential dimension} of this map, $\mathrm{ed}(X \to Y)$, to be the minimal $d$ such that there exists a pullback square \[ \begin{tikzcd} X \arrow[rr] \arrow[dd] & & W \arrow[dd] \\ & & \\ Y \arrow[rr] & & Z \end{tikzcd} \] with $\dim Z \leq d$. The idea of resolvent degree is to extend this notion to allow towers of pullbacks; essentially, a map is of resolvent degree at most $d$ if it can be written as a composition of maps each of essential dimension at most $d$. More precisely, following Farb and Wolfson \cite{farb2020resolvent}, the \emph{resolvent degree}, $\mathrm{RD}(X \to Y)$, is defined to be the minimal $d$ such that there exists a tower \[ E_r \to \cdots \to E_1 \to E_0 = U, \] where $U$ is an open subset of $Y$, $E_r \to U$ factors through a dominant map $E_r \to X$, and $\mathrm{ed}(E_i \to E_{i-1}) \leq d$ for each $i$. It follows from these definitions that \[ \mathrm{RD}(X \to Y) \leq \mathrm{ed}(X \to Y) \leq \dim Y. \] It is convenient, in proving results on resolvent degree, to construct towers of maps of bounded resolvent degree (rather than essential dimension). Lemma 2.7 in Farb and Wolfson \cite{farb2020resolvent} shows that this is sufficient. The connection to polynomials and their roots is as follows. 
Let $\mathcal{P}_n$ denote the space of monic degree $n$ polynomials with coefficients in $k$ and let \[ \widetilde{\mathcal{P}_n} = \{(p, r) \in \mathcal{P}_n \times \mathbb{A}^1_k \mid p(r) = 0\}, \] the space of monic polynomials together with a choice of root. We then define \[ \mathrm{RD}(n) := \mathrm{RD}(\widetilde{\mathcal{P}_n} \to \mathcal{P}_n) \] where the map is ``forgetting the root''. $\mathrm{RD}(n)$ captures the complexity of the root cover in the sense that if there is a formula in terms of algebraic functions of at most $d$ variables for finding the roots of a degree $n$ polynomial in terms of its coefficients, then $\mathrm{RD}(n) \leq d$. \subsection{Resolvent Degree of a Dominant Map} It is necessary to extend the definition of resolvent degree to dominant maps of $k$-varieties which are not necessarily generically finite. Although we are principally interested in the root cover $\widetilde{\mathcal{P}_n} \to \mathcal{P}_n$, which is generically finite, a number of maps which arise naturally in the study of Tschirnhaus transformations are not. For example, a key step in Bring's proof that a general quintic can be reduced to a one-parameter form involves finding a line on a quadric surface in $\mathbb{P}^3$. Thus we would like to study $$\mathcal{H}_{2,3}^1 \to \mathcal{H}_{2,3}$$ where $\mathcal{H}_{2, 3}$ is the parameter space of quadric surfaces in $\mathbb{P}^3$ and $\mathcal{H}_{2,3}^1$ is the space of such quadrics together with a choice of line on the surface, and the map is ``forgetting the line''. Since any quadric surface contains an infinite family of lines, this map is not generically finite. We would like to extend the notion of resolvent degree so that $\mathrm{RD}(\mathcal{H}_{2,3}^1 \to \mathcal{H}_{2,3})$ captures the complexity of finding lines on quadric surfaces in the same way that $\mathrm{RD}(\widetilde{\mathcal{P}_n} \to \mathcal{P}_n)$ captures the complexity of finding roots.
This is done in \cite{wolfson2021tschirnhaus} in the following way: given a dominant map $\pi: X \to Y$, define the \emph{resolvent degree} $\mathrm{RD}(X \to Y)$ to be the minimum $d$ for which there exists a dense collection of subvarieties $\{U_\alpha \subset X\}$ with $$\pi|_{U_{\alpha}}: U_\alpha \to Y$$ a generically finite dominant map and $\mathrm{RD}(U_\alpha \to Y) \leq d$ for all $\alpha.$ (Such a subvariety $U_\alpha$ is called a \emph{rational multi-section} for $\pi$.) Requiring the collection of multi-sections to be dense in $X$ is necessary to ensure that resolvent degree behaves well with respect to composition of dominant maps: given $X \to Y$ and $Y \to Z$ dominant, for the composition $X \to Z$ we have $\mathrm{RD}(X \to Z) \leq \max\{\mathrm{RD}(X \to Y), \mathrm{RD}(Y \to Z)\}$ by \cite[Lemma~4.10]{wolfson2021tschirnhaus}. Having multi-sections $V \to Y$ for $X \to Y$ and $U \to Z$ for $Y \to Z$ is not generally enough to produce a multi-section for $X \to Z$ (the image of $V \to Y$ could entirely miss $U$). On the other hand, if $V \to Y$ is \emph{surjective}, then we do obtain a multi-section of $X \to Z$. Returning to the example of lines on quadrics, an algorithm for determining a line (or finite set of lines) on each quadric surface in terms of algebraic functions of degree $d$ gives a multi-section $U \to \mathcal{H}_{2,3}$ with $\mathrm{RD}(U \to \mathcal{H}_{2,3})\leq \mathrm{RD}(d)$, but this is somewhat less than what is required to show that $\mathrm{RD}(\mathcal{H}_{2,3}^1 \to \mathcal{H}_{2,3}) \leq \mathrm{RD}(d)$. On the other hand, as we shall see in the next section, determining any one line is sufficient to yield a Tschirnhaus transformation reducing the quintic to a one-parameter form, so that $\mathrm{RD}(5) = 1$.
\subsection{Resolvent Degree of the Quintic} As an illustrative example, we now give a detailed proof that (as observed in \cite{farb2020resolvent}) Bring's work on Tschirnhaus transformations of quintic polynomials implies $\mathrm{RD}(5) = 1$. (Recall that by definition, $\mathrm{RD}(5) = \mathrm{RD}(\widetilde{\mathcal{P}_5} \to \mathcal{P}_5)$, where $\mathcal{P}_5$ is the parameter space of monic degree 5 polynomials and $\widetilde{\mathcal{P}_5}$ is the space of such polynomials together with a choice of root.) Recall that given a polynomial $p(x) = x^5+a_1x^4+a_2x^3+a_3x^2+a_4x+a_5$ in $\mathcal{P}_5$ and a Tschirnhaus transformation of the form $T(x) = b_0x^4+b_1x^3+b_2x^2+b_3x+b_4$, we can set $y = T(x)$ and eliminate $x$ to obtain a polynomial $$q(y) = y^5+A_1y^4+A_2y^3+A_3y^2+A_4y+A_5,$$ where the coefficient $A_i$ is a degree $i$ homogeneous polynomial in $b_0, b_1, b_2, b_3, b_4$, whose coefficients are integral (i.e., polynomial) functions of $a_1, a_2, a_3, a_4, a_5$. Our strategy will be to construct a tower of maps $$X_5 \to X_4 \to X_3 \to X_2 \to X_1 \to \mathcal{P}_5$$ such that the resolvent degree of each map is 1 and there is a dominant rational map $X_5 \to \widetilde{\mathcal{P}_5}$. Informally, in $X_1$ there will be associated to each quintic polynomial $p$ the equation of a hyperplane determining the set of Tschirnhaus transformations such that $A_1 = 0$. In $X_2$, we will further have the information of a quadric $Q$ contained in this hyperplane, corresponding to $A_1 = A_2 = 0$, and the equation of a line $l$ contained in $Q$. In $X_3$ we will further have a choice of Tschirnhaus transformation $T$ which lies on $l$ while also satisfying $A_3 = 0$. Using this information we can construct a map from $X_3$ to the one-dimensional space of quintic polynomials of the form $z^5+Az+1$, and we will take $X_4$ to be the pullback of the root cover of this space, so that $X_4$ associates to $p$ a root of the polynomial obtained by applying $T$ to $p$.
Finally, in constructing $X_5$, we recover the roots of $p$ itself from the roots of the transformed polynomial. Each step of this process is of bounded resolvent degree. More precisely: viewing the $b_i$ as homogeneous coordinates on $\mathbb{P}^4$, there is associated to each polynomial $p \in \mathcal{P}_5$ a hyperplane $H \subset \mathbb{P}^4_{\mathbf{b}}$ consisting of all Tschirnhaus transformations that send $p$ to a polynomial $q(y)$ with $A_1 = 0$. Let $X_1$ denote the space of ordered pairs $(p, H)$ with $p$ and $H$ as just described. Then the map $X_1 \to \mathcal{P}_5$ is birational, hence has resolvent degree 1. Next, we consider the set of all Tschirnhaus transformations such that $A_1 = 0$ and $A_2 = 0$ in the transformed polynomial. The latter is a degree 2 homogeneous polynomial in $b_0, b_1, b_2, b_3, b_4$, so to each polynomial $p$ there is associated a quadric surface $Q \subset H \cong \mathbb{P}^3$ whose points are Tschirnhaus transformations satisfying both conditions. This defines a map $$X_1 \to \mathcal{H}_{2,3}$$ where $\mathcal{H}_{2,3}$ is the parameter space of quadric surfaces in $\mathbb{P}^3$. Letting $\mathcal{H}_{2,3}^1$ denote the space of such surfaces together with a choice of line, we consider the map $$\mathcal{H}_{2,3}^1 \to \mathcal{H}_{2,3}.$$ An algorithm exists for finding a line on a quadric surface that requires solving only a single quadratic equation, so there is a rational multi-section $U \to \mathcal{H}_{2,3}$ with $$\mathrm{RD}(U \to \mathcal{H}_{2,3}) \leq \mathrm{RD}(2) = 1.$$ (In fact there is a dense collection of such multi-sections, so $\mathrm{RD}(\mathcal{H}_{2,3}^1 \to \mathcal{H}_{2,3}) = 1$, but this is stronger than we require.) We define $X_2$ via the pullback square \[ \begin{tikzcd} X_2 \arrow[d] \arrow[r]& X_1\arrow[d] \\ U \arrow[r] & \mathcal{H}_{2,3} \end{tikzcd} \] so that $\mathrm{RD}(X_2 \to X_1) \leq \mathrm{RD}(U \to \mathcal{H}_{2,3}) = 1$. 
Points of $X_2$ are ordered tuples $(p, H, Q, l)$, where $p$ is a quintic polynomial, $H$ is the space of all Tschirnhaus transformations that send $p$ to a polynomial with $A_1 = 0$, $Q \subset H$ is the quadric surface whose points are Tschirnhaus transformations such that $A_1 = 0$ and $A_2 = 0$, and $l$ is a line contained in $Q$. Finally, to find a Tschirnhaus transformation $T$ satisfying $A_1 = 0$, $A_2 = 0$, and $A_3 = 0$ requires intersecting the cubic hypersurface defined by $A_3 = 0$ with the line $l$. To find the three points of intersection requires only the solution of a cubic equation. Defining $X_3$ to be the space of tuples $(p, H, Q, l, T)$, with $p, H, Q, l$ as above and $T$ a Tschirnhaus transformation yielding $A_1 = A_2 = A_3 = 0$, the projection map $$X_3 \to X_2$$ has resolvent degree $\mathrm{RD}(3) = 1$. Now let $\mathcal{P}_5'$ denote the space of quintic polynomials of the form $z^5+Az+1$. Given a point $(p, H, Q, l, T)$ in $X_3$, we can apply the transformation $T$ to $p$ to obtain a polynomial $$q(y) = y^5+A_4y+A_5.$$ Applying a change of variables $y = \sqrt[5]{A_5}z$ and dividing by $A_5$ yields $$z^5+\frac{A_4\sqrt[5]{A_5}}{A_5}z+1$$ so this procedure defines a map $X_3 \to \mathcal{P}_5'$. Now, the root cover $$\widetilde{\mathcal{P}_5'} \to \mathcal{P}_5'$$ has resolvent degree $$\mathrm{RD}(\widetilde{\mathcal{P}_5'} \to \mathcal{P}_5') \leq \dim\left(\mathcal{P}_5'\right) = 1,$$ so defining $X_4$ via the pullback square \[ \begin{tikzcd} X_4 \arrow[d] \arrow[r]& X_3\arrow[d] \\ \widetilde{\mathcal{P}_5'} \arrow[r] & \mathcal{P}_5' \end{tikzcd} \] we have $\mathrm{RD}(X_4 \to X_3) = 1$. Points of $X_4$ are of the form $(p, H, Q, l, T, \lambda)$, where $(p, H, Q, l, T) \in X_3$ and $\lambda$ is a root of the transformed polynomial $z^5+\frac{A_4\sqrt[5]{A_5}}{A_5}z+1$. Let $X_5$ be the space whose points are tuples $(p, H, Q, l, T, \mu)$, where $\mu$ is a root of $p$.
There is a map $$X_5 \to X_4$$ defined by $$(p, H, Q, l, T, \mu) \mapsto \left(p, H, Q, l, T, \frac{T(\mu)}{\sqrt[5]{A_5}}\right).$$ On the other hand, given a root $\lambda$ of the transformed polynomial, we can find the corresponding roots of $p$ by solving the (at worst degree 4) polynomial equation $$\mathrm{GCD}(p(x), T(x)-\sqrt[5]{A_5}\,\lambda) = 0,$$ so $\mathrm{RD}(X_5 \to X_4) \leq \mathrm{RD}(4) = 1$. In all, we have constructed a tower of maps $$X_5 \to X_4 \to X_3 \to X_2 \to X_1 \to \mathcal{P}_5$$ each of which has resolvent degree 1. Since the composite $X_5 \to \mathcal{P}_5$ factors through the dominant projection $X_5 \to \widetilde{\mathcal{P}_5}$, $(p, H, Q, l, T, \mu) \mapsto (p, \mu)$, we have $$\mathrm{RD}(5) = \mathrm{RD}(\widetilde{\mathcal{P}_5} \to \mathcal{P}_5) = 1.$$ \section{Linear subspaces of hypersurfaces}\label{linear} To prove $\mathrm{RD}(5) = 1$ we needed to show that one can always find a Tschirnhaus transformation which eliminates the first three intermediate terms of a quintic polynomial. This required finding a solution of a system of three equations of degrees 1, 2, and 3, respectively. A key geometric fact which made this tractable was that any quadric surface in $\mathbb{P}^3$ contains a line, and that an equation for such a line can be found by solving a quadratic equation; this allows the degree 2 equation (which determines the quadric surface) to be replaced by a degree 1 equation (which determines the line), so that to find a solution of the system requires only the solution of a cubic equation. Similarly, in Hilbert's proof that $\mathrm{RD}(9) \leq 4$, a Tschirnhaus transformation removing \emph{four} intermediate terms is required, so a system of equations of degrees 1, 2, 3, and 4 must be solved.
Informally, Hilbert's idea is to first find a 3-plane inside the hypersurface determined by the equation of degree 2 (which is possible as long as the ambient dimension is high enough), then to find a line on the cubic surface determined inside this 3-plane by the equation of degree 3. If this can be done, all that remains is to intersect the line with the remaining equation of degree 4 and a solution can be found. In general, one can consider the problem of finding a $k$-plane inside a degree $d$ hypersurface in $\mathbb{P}^N$. We can ask several questions: \begin{enumerate} \item[Q1:] In terms of $d$ and $k$, how large must the ambient dimension $N$ be to guarantee that any degree $d$ hypersurface contains a $k$-plane? \item[Q2:] What is the ``resolvent degree of finding the $k$-plane"? That is, if $\mathcal{M}_{d,N}$ is a moduli space of degree $d$ hypersurfaces in $\mathbb{P}^N$ and $\mathcal{M}_{d, N}^k$ is the space of such hypersurfaces together with a choice of $k$-plane, what is the resolvent degree of the map $$\mathcal{M}_{d,N}^k \to \mathcal{M}_{d, N}$$ which forgets the choice of plane? \item[Q3:] When is there a constructive algorithm to determine the $k$-plane? What are the degrees of the equations that must be solved? How large must $N$ be if no equation of degree higher than some given bound is to be permitted? \end{enumerate} As the $\mathrm{RD}(5) = 1$ and $\mathrm{RD}(9) = 4$ examples above illustrate, the answers to these questions have implications for the problem of finding Tschirnhaus transformations, since replacing a hypersurface with a linear subspace of that hypersurface allows us to replace a degree $d$ equation with one or more equations of degree 1, reducing the total degree of the system to be solved. 
An answer to Q1 is given in Debarre and Manivel \cite{debarre1998vari}: for $d > 3$, any degree $d$ hypersurface in $\mathbb{P}^N$ contains a $k$-plane if $$N \geq \frac{\binom{d+k}{k}}{k+1}+k.$$ In this case, the map $\mathcal{M}_{d, N}^k \to \mathcal{M}_{d, N}$ is surjective, and an upper bound on its resolvent degree is given by the dimension of the codomain: $$\mathrm{RD}\left(\mathcal{M}_{d, N}^k \to \mathcal{M}_{d, N}\right) \leq \dim\left(\mathcal{M}_{d, N}\right).$$ For example, $$\mathrm{RD}\left(\mathcal{M}_{3, 3}^1 \to \mathcal{M}_{3,3}\right) \leq \dim\left(\mathcal{M}_{3,3}\right) = 4,$$ corresponding to Hilbert's observation that finding a line on a cubic surface requires at most the solution of an algebraic function of four variables. These dimension-counting arguments do not provide enough information to address Q3. For this we return to Sylvester's ideas. First, for the problem of finding a line on a degree $d$ hypersurface, repeated application of the obliteration formula suffices to compute an ambient dimension $N$ such that the desired line may be found without solving any equation of degree higher than $d$. Sylvester's methods can be readily adapted to the problem of finding a $k$-plane on a degree $d$ hypersurface without solving any equation of degree higher than $d$. Note that, when $N$ is large enough for this to be possible, $$\mathrm{RD}\left(\mathcal{M}_{d,N}^k \to \mathcal{M}_{d, N}\right) \leq \mathrm{RD}(d)$$ giving a bound on resolvent degree which is often sharper than what one obtains from dimension-counting alone (though at the price of requiring a larger ambient dimension $N$ than that given by Waldron's theorem), so Sylvester's ideas are of interest even if one is only concerned with finding bounds on resolvent degree. In the remainder of this section we look at the $d = 3$ case in detail.
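To make the numbers concrete, here is a small illustrative computation (not from the paper; `dm_bound` is a hypothetical helper name) evaluating the smallest $N$ allowed by the inequality above for small $(d,k)$. The theorem is stated for $d > 3$, but for $d = 3$ the same arithmetic happens to agree with the classical facts quoted in this section:

```python
from math import comb

def dm_bound(d, k):
    # smallest integer N with N >= C(d+k, k) / (k+1) + k, computed exactly
    # (integer ceiling division avoids floating-point issues)
    return -(-comb(d + k, k) // (k + 1)) + k

assert dm_bound(3, 1) == 3   # consistent with: every cubic surface in P^3 contains a line
assert dm_bound(3, 2) == 6   # a cubic hypersurface in P^6 contains a 2-plane by this count
assert dm_bound(4, 1) == 4   # quartic hypersurfaces in P^4 (and higher) contain lines
```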
\subsection{Linear subspaces of cubic hypersurfaces} Let $\mathcal{M}_{3, N}$ be the moduli space of smooth cubic hypersurfaces in $\mathbb{P}^N$ and let $\mathcal{M}_{3,N}^r$ be the moduli space of pairs $(C, P)$ where $C$ is a smooth cubic hypersurface and $P$ is an $r$-plane contained in $C$. We can then consider the ``resolvent degree of finding an $r$-plane", i.e., \[ \mathrm{RD}(\mathcal{M}_{3,N}^r \to \mathcal{M}_{3,N}), \] where the map forgets the choice of plane. For example, for the problem of finding a line on a cubic surface, we have \[ \mathrm{RD}(\mathcal{M}_{3,3}^1 \to \mathcal{M}_{3,3}) \leq \dim(\mathcal{M}_{3,3}) = 4, \] so that finding a line on a cubic surface in $\mathbb{P}^3$ requires the solution of algebraic functions of at most 4 variables. Farb and Wolfson \cite{farb2020resolvent}, using an argument due to Klein, show this bound can be improved to $$\mathrm{RD}(\mathcal{M}_{3,3}^1 \to \mathcal{M}_{3,3}) \leq 3.$$ In higher dimensions it is easier to find a line. In particular, by Sylvester's obliteration method, as discussed in section 3, finding a line on a cubic hypersurface in $\mathbb{P}^5$ requires the solution of no equation of degree higher than 3, and so is a resolvent degree 1 problem. \begin{proposition}[Sylvester] Given a cubic hypersurface $V$ in $\mathbb{P}^n$, we can find a line contained in $V$ by solving equations of degree no higher than 3, provided $n \geq 5$. Hence $$\mathrm{RD}(\mathcal{M}_{3, 5}^1 \to \mathcal{M}_{3, 5}) \leq \mathrm{RD}(3) = 1.$$ \end{proposition} \begin{proof} We wish to find a linear solution to a system of equations consisting of 1 equation of degree $3$ and no equations of lower degree. By Sylvester's obliteration formula, the ambient dimension required is $$[1, 0, 0] \leq 1+[1, 1] \leq 2+[2] = 2+3 = 5.$$ \end{proof} Sylvester's method also extends to finding higher-dimensional linear subspaces of hypersurfaces, when the dimension of the ambient space is large enough.
For cubic hypersurfaces in higher-dimensional projective spaces, finding a 2-plane can also be done solvably. \begin{proposition}[Sylvester] Given a cubic hypersurface $V = V(f)$ in $\mathbb{P}^{n}$, we can find a 2-plane contained in $V$ by solving equations of degree no higher than 3, provided $n \geq 11$. Hence $$\mathrm{RD}(\mathcal{M}_{3, 11}^2 \to \mathcal{M}_{3, 11}) \leq \mathrm{RD}(3) = 1.$$ \end{proposition} \begin{proof} Since $n \geq 5$, we can find a line $l$ contained in $V \subset \mathbb{P}^n$ solvably. Let $q$ and $r$ be distinct points of $l$, so that $q+\lambda r \in V$ for all $\lambda \in \mathbb{C}$. To find a plane contained in $V$, we choose a linear subspace $\mathbb{P}^{n-2} \subset \mathbb{P}^n$ disjoint from $l$ and look for a point $$p \in \mathbb{P}^{n-2}$$ such that $f(p+\mu q+\lambda r) = 0$ for all $\mu, \lambda \in \mathbb{C}$. Expanding this as a polynomial in $\mu$ and $\lambda$, the coefficients of $1$, $\lambda$, $\mu$, $\lambda\mu$, $\lambda^2$, and $\mu^2$ must vanish identically (the remaining coefficients vanish automatically since $l \subset V$). This gives a system of equations in $n-2$ variables with 1 equation of degree 3, 2 equations of degree 2, and 3 equations of degree 1. To solve it, we look for a linear solution to the subsystem of equations of degree $<3$, then intersect this line with the remaining degree 3 equation. Using Sylvester's obliteration formula, we have $$[2, 3] \leq 1+[1, 5] \leq 2+[6] = 2+7 = 9,$$ so we can find the needed linear solution when $n-2 \geq 9$. Thus we can find a 2-plane on a cubic hypersurface in $\mathbb{P}^n$ when $n \geq 11$. \end{proof} The ambient dimension can be reduced slightly if some elevation of degree is allowed. From Segre [1945, pg 295, Sec 12], it is possible to determine a line on the intersection of two given quadrics by solving equations of degree no higher than 5, provided the ambient dimension is at least 4. Thus to find a linear solution of a system with 2 equations of degree 2, and 3 equations of degree 1, an ambient dimension of 7 is sufficient.
In light of the last paragraph of the proof above, we have \begin{proposition} Given a cubic hypersurface $V = V(f)$ in $\mathbb{P}^{n}$, we can find a 2-plane contained in $V$ by solving equations of degree no higher than 5, provided $n \geq 9$. Hence $$\mathrm{RD}(\mathcal{M}_{3, 9}^2 \to \mathcal{M}_{3, 9}) \leq \mathrm{RD}(5) = 1.$$ \end{proposition} \section{Removing 5 Terms from a Polynomial}\label{actual_proof} Given a degree $n$ polynomial $$x^n+a_1x^{n-1}+\ldots+a_{n-1}x+a_n$$ we wish to find a Tschirnhaus transformation $$T(x) = b_{n-1}x^{n-1}+b_{n-2}x^{n-2}+\ldots+b_1x+b_0$$ such that after setting $y = T(x)$ the transformed polynomial $$y^n+A_1y^{n-1}+\ldots+A_{n-1}y+A_{n}$$ satisfies $A_1 = A_2 = A_3 = A_4 = A_5 = 0$. To determine the coefficients $b_0, \ldots, b_{n-1}$ of $T$ then requires the solution of a system of equations of degrees 1, 2, 3, 4, and 5 in $\mathbb{P}^{n-1}$. In general, to find a solution to such a system requires solving a polynomial of degree $5! = 120$, but when $n$ is large enough this elevation of degree can be partially avoided. We first informally sketch the geometry underlying Wolfson's bound of $n = 41$. The general idea is that by finding linear subspaces of the hypersurfaces corresponding to the polynomials of our system, the total degree of the system can be reduced. In this case, to find the necessary Tschirnhaus transformation one first finds a $6$-plane inside a quadric hypersurface in $\mathbb{P}^{13}$ (this only requires solving degree 2 equations, so is resolvent degree 1). Then, intersecting the degree 3 equation with the 6-plane yields a cubic hypersurface in $\mathbb{P}^6$. If we can then find a 2-plane inside this cubic hypersurface, intersecting the degree 4 and 5 equations with this plane leaves a system of total degree 20 to solve.
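The degree bookkeeping in this sketch is simply Bézout-style multiplication of degrees; as a trivial illustrative check (not from the paper):

```python
from math import factorial, prod

# Degrees of the conditions A_1 = 0, ..., A_5 = 0:
degrees = [1, 2, 3, 4, 5]

# Naive elimination would require solving an equation whose degree is
# the product of the degrees of the system:
assert prod(degrees) == factorial(5) == 120

# After the degree-1, 2, 3 conditions are traded for a 2-plane,
# only the degree-4 and degree-5 conditions remain:
assert prod([4, 5]) == 20
```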
In summary, we have the chain $$V_4 \cap V_5 \subset \mathbb{P}^2 \subset V_3 \subset \mathbb{P}^{6} \subset V_2 \subset \mathbb{P}^{13} = V_1 \subset \mathbb{P}^{14}.$$ This gives an algorithm for finding the desired Tschirnhaus transformation provided one is able to find the necessary 2-plane inside the cubic hypersurface in $\mathbb{P}^6$. Wolfson uses a dimension count to show that \[\mathrm{RD}(\mathcal{M}_{3,6}^2 \to \mathcal{M}_{3,6}) \leq \dim(\mathcal{M}_{3,6}) = 35, \] and so is able to use this Tschirnhaus transformation to show $\mathrm{RD}(n) \leq n-6$ whenever $n-6 \geq 35$. By increasing the ambient dimension in which the cubic hypersurface lives, we can use Sylvester's ideas to find a plane inside a cubic hypersurface in $\mathbb{P}^9$ by solving only equations of degree 5 or less (i.e., in a resolvent degree 1 way). This allows the $n = 41$ bound for removing 5 terms from a degree $n$ polynomial to be reduced to $n = 21$, with simpler irrationalities involved in finding the necessary Tschirnhaus transformation. \begin{theorem*}[Main Theorem] Let $n \geq 21$. Given a degree $n$ polynomial $$x^n+a_1x^{n-1}+\ldots+a_{n-1}x+a_n$$ we can find a Tschirnhaus transformation $$T(x) = b_{n-1}x^{n-1}+b_{n-2}x^{n-2}+\ldots+b_1x+b_0$$ such that after setting $y = T(x)$ the transformed polynomial $$y^n+A_1y^{n-1}+\ldots+A_{n-1}y+A_{n}$$ satisfies $A_1 = A_2 = A_3 = A_4 = A_5 = 0$. The coefficients $b_0, \ldots, b_{n-1}$ of $T$ can be determined by solving equations of degree at most $20$. \end{theorem*} \begin{proof} The equations $A_1 = 0$, $A_2 = 0$, $A_3 = 0$, $A_4 = 0$, $A_5 = 0$ impose polynomial conditions of degree 1, 2, 3, 4, and 5, respectively, on the point $(b_0, \ldots, b_{n-1}) \in \mathbb{P}^{n-1}$ to be determined. Using the degree 1 equation we can eliminate one variable. The degree 2 equation then determines a quadric hypersurface in $\mathbb{P}^{n-2}$.
By the classical theory of linear subspaces of quadrics, we can find a $\mathbb{P}^9$ contained in this hypersurface provided $n-2 \geq 19$. Next, we consider the cubic hypersurface determined by this $\mathbb{P}^9$ and the degree 3 equation. By Proposition 5 of the previous section, we can find a $\mathbb{P}^2$ contained in this hypersurface by solving equations of degree at worst 5. Finally, the remaining equations of degree 4 and 5 determine two curves in this $\mathbb{P}^2$, whose points of intersection are governed by an equation of degree at most 20. Each such point then satisfies all 5 polynomial conditions, by construction, and so yields a Tschirnhaus transformation of the desired form. \end{proof} In the same way that the reduction of the quintic to one-parameter form can be translated into the language of resolvent degree to yield $\mathrm{RD}(5) = 1$, this theorem implies $\mathrm{RD}(n) \leq n-6$ for $n \geq 21$.
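The ambient-dimension bookkeeping behind the hypothesis $n \geq 21$ can be summarised in a small illustrative computation (relying on the classical fact, assumed here, that a smooth quadric in $\mathbb{P}^m$ contains a $k$-plane precisely when $m \geq 2k+1$; the helper name is hypothetical):

```python
def min_ambient_for_k_plane_in_quadric(k):
    # Classical fact (over C): a smooth quadric in P^m contains a
    # k-plane if and only if m >= 2k + 1.
    return 2 * k + 1

# The proof needs a P^9 inside the quadric hypersurface in P^(n-2):
m = min_ambient_for_k_plane_in_quadric(9)
assert m == 19
# so n - 2 >= 19, i.e. the theorem's hypothesis n >= 21:
assert m + 2 == 21
```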
\title[Quantitative Estimates on the Singular Set of Minimal Hypersurfaces]{Quantitative Estimates on the Singular Set of Minimal Hypersurfaces with Bounded Index} \author{Nicolau S. Aiex, Sean McCurdy, and Paul Minter} \date{\today} \address{Department of Mathematics, National Taiwan Normal University, Taipei, Taiwan} \email{nsarquis@math.ntnu.edu.tw} \address{Department of Mathematics, National Taiwan Normal University, Taipei, Taiwan} \email{smccurdy@ntnu.edu.tw} \address{Institute for Advanced Study (Fuld Hall) and Princeton University (Fine Hall), Princeton, New Jersey 08544, USA} \email{pm6978@princeton.edu\textnormal{,} pminter@ias.edu} \begin{document} \begin{abstract} We prove local measure bounds on the tubular neighbourhood of the singular set of codimension one stationary integral $n$-varifolds $V$ in Riemannian manifolds which have both: (i) finite index on their smoothly embedded part; and (ii) $\H^{n-1}$-null singular set. A direct consequence of such a bound is a bound on the upper Minkowski content of the singular set of such a varifold in terms of its total mass and its index. Such a result improves on known bounds, namely the corresponding bound on the $\H^{n-7}$-measure of the singular set established by A.~Song (\cite{antoine-song}), as well as the same bounds established by A.~Naber and D.~Valtorta (\cite{naber-valtorta}) for codimension one area minimising currents. Our results also provide more structural information on the singular set for the class of codimension one integral varifolds with finite index (on the regular part) and no classical singularities studied by N.~Wickramasekera (\cite{wickstable}).
\end{abstract} \maketitle \section{Introduction} The aim of the present paper is to prove quantitative estimates on the size of the singular set for a large class of codimension one stationary integral varifolds which have a smoothly embedded part of finite index (see Section \ref{sec:prelim} for precise definitions). In particular, we prove measure bounds on the size of the tubular neighbourhood of the singular set (and thus bound the upper Minkowski content of the singular set) in terms of the total (varifold) mass and the index. More precisely, our main result in the Euclidean setting can be stated as follows: \begin{thmx}\label{thm:A} Let $\Lambda>0$ and $n\geq 8$. Suppose $V$ is a stationary integral $n$-varifold in $B^{n+1}_2(0)$ such that: \textnormal{(i)} $\|V\|(B_2^{n+1}(0))\leq \Lambda$; \textnormal{(ii)} its smoothly embedded part has finite index, i.e., $\textnormal{index}(\reg(V))<\infty$; and \textnormal{(iii)} $\mathcal{H}^{n-1}(\sing(V))=0$. Then, $\sing(V)$ is countably $(n-7)$-rectifiable, and moreover we have, for any $0<r\leq 1/2$, $$\mathcal{H}^{n+1}\left(\eball_{r/8}(\sing(V))\cap \eball_{1/2}\right) \leq C_0\left(1+\textnormal{index}(\reg(V))\right)r^8;$$ $$\|V\|\left(\eball_{r/8}(\sing(V))\cap \eball_{1/2}\right)\leq C_0(1+\textnormal{index}(\reg(V)))r^7.$$ In particular, the upper Minkowski content of $\sing(V)$ obeys $$\mathcal{M}^{* n-7}\left(\sing(V)\cap \eball_{1/2}\right) \leq C_0(1+\textnormal{index}(\reg(V)))$$ which in turn implies that $\mathcal{H}^{n-7}\left(\sing(V)\cap\eball_{1/2}\right) \leq C_0(1+\textnormal{index}(\reg(V)))$; here, $C_0 = C_0(n,\Lambda)\in (0,\infty)$. \end{thmx} When our varifolds instead are defined on an ambient smooth Riemannian manifold, our main result is: \begin{thmx}\label{thm:B1} Let $\Lambda>0$, $n\geq 8$, $K>0$. 
Let $(N^{n+1},g)$ be a smooth Riemannian manifold with $0\in N$ and obeying $\left|\left.\textnormal{sec}\right|_{B^N_2(0)}\right|\leq K$ and $\left.\textnormal{inj}\right|_{B^N_2(0)}\geq K^{-1}$, where here $\textnormal{sec}$ is the sectional curvature of $N$ and $\textnormal{inj}$ is the injectivity radius. Suppose that $V$ is a stationary integral $n$-varifold in $B^N_2(0)$ which obeys: \textnormal{(i)} $\|V\|(B^{N}_2(0))\leq \Lambda$; \textnormal{(ii)} the smoothly embedded part of $V$ has finite index; and \textnormal{(iii)} $\H^{n-1}(\sing(V)) = 0$. Then, $\sing(V)$ is countably $(n-7)$-rectifiable, and moreover we have, for any $0<r\leq 1/2$, $$\H^{n+1}(B^N_{r/8}(\sing(V))\cap B^N_{1/2}(0))\leq C_0\left(1+\index(\reg(V))\right)r^8;$$ $$\|V\|(B^N_{r/8}(\sing(V))\cap B^N_{1/2}(0))\leq C_0\left(1+\index(\reg(V))\right)r^7.$$ In particular, we have $\mathcal{M}^{*n-7}(\sing(V)) \leq C_0\left(1+\index(\reg(V))\right)$; here, $C_0 = C_0(n,\Lambda,K)\in (0,\infty)$ and $B^N_r(A)$ denotes the $r$-neighbourhood in $N$ of a subset $A\subset N$. \end{thmx} We remark that from Theorem \ref{thm:B1}, a simple covering argument gives a global result in closed Riemannian manifolds, as there is a uniform bound on the sectional curvature and a uniform lower bound on the injectivity radius (depending on $(N^{n+1},g)$): \begin{thmx}\label{thm:B} Let $\Lambda> 0$, $n\geq 8$, and let $(N^{n+1},g)$ be a closed smooth Riemannian manifold. Suppose $V$ is a stationary integral $n$-varifold in $N$ obeying: \textnormal{(i)} $\|V\|(N)\leq \Lambda$; \textnormal{(ii)} the smoothly embedded part of $V$ has finite index; and \textnormal{(iii)} $\mathcal{H}^{n-1}(\sing(V)) = 0$.
Then, $\sing(V)$ is countably $(n-7)$-rectifiable, and moreover we have, for any $0<r\leq 1/2$, $$\mathcal{H}^{n+1}\left(B^N_{r/8}(\sing(V))\right)\leq C_0(1+\textnormal{index}(\reg(V)))r^8;$$ $$\|V\|\left(B^N_{r/8}(\sing(V))\right) \leq C_0(1+\textnormal{index}(\reg(V)))r^7.$$ In particular, we have $\mathcal{M}^{*n-7}(\sing(V))\leq C_0(1+\textnormal{index}(\reg(V)))$; here, $C_0 = C_0(\Lambda,N,g)$. \end{thmx} In the special case where $V$ is the varifold associated to a codimension one area minimising current $T$ (which in particular obeys $\textnormal{index}(\reg(V)) = 0$), we note that the corresponding results have already been established in \cite{naber-valtorta}*{Theorem 1.6}. Under the same assumptions, A.~Song (\cite{antoine-song}) established that the $\mathcal{H}^{n-7}$-measure of the singular set obeys, for each $n\geq 7$, $$\mathcal{H}^{n-7}(\sing(V)\cap \eball_{1/2}) \leq C^{\text{AS}}_0(1+\textnormal{index}(\reg(V)))^{7/n}$$ where $C^{\text{AS}}_0 = C^{\text{AS}}_0(\Lambda, N,g)$. In particular, in the special case when $n=7$ (and thus the singular set is $0$-dimensional, and in fact by \cite{wickstable} consists only of isolated points), this already implies measure bounds on the tubular neighbourhood of $\sing(V)$; indeed, it is for this reason that we are only interested in the case $n\geq 8$ (our argument only works for $n\geq 8$, much as the corresponding argument in \cite{antoine-song} for $n\geq 8$ also cannot be used to prove the $n=7$ case). However, when $n\geq 8$, the $\H^{n-7}$-measure bound is significantly weaker than control on the measure of the entire tubular neighbourhood. We also remark that, as the constants $C_0,C_0^{\text{AS}}$ depend on the total varifold mass, it is unclear which bound on the $\mathcal{H}^{n-7}$-measure of the singular set is optimal, as there are conjectured relationships between the area of a stationary integral varifold and the index of its embedded part.
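For the reader's convenience, we recall the definition of the upper Minkowski content used here (with one standard normalisation; the precise constant convention is our assumption and may differ from the authors' by a dimensional factor):
\[
\mathcal{M}^{*\,n-7}(A) := \limsup_{r\downarrow 0}\,\frac{\mathcal{H}^{n+1}\big(\eball_r(A)\big)}{\omega_8\, r^{8}}, \qquad \omega_8 := \mathcal{H}^8(\eball^8_1),
\]
so that a bound of the form $\mathcal{H}^{n+1}(\eball_r(\sing(V))) \leq C r^8$ for all small $r>0$ immediately yields $\mathcal{M}^{*\,n-7}(\sing(V)) \leq C/\omega_8$; the corresponding $\mathcal{H}^{n-7}$-measure bound then follows since Hausdorff measure is dominated by upper Minkowski content (up to a dimensional constant).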
It would be interesting to determine the dependence of the constant $C_0$ on the mass $\|V\|(B^{n+1}_2(0))$. As a further remark, since $\dim_{\mathcal{H}}(\sing(V))\leq n-7$, we know that $\reg(V)$ is a connected smooth submanifold, and as such by the constancy theorem occurs with some constant (integer) multiplicity. Thus, without loss of generality we can assume the multiplicity is 1 when proving the first estimate of our theorems as this does not change the value of the index. Under this assumption, $\|V\|(B^{n+1}_2(0)) = \mathcal{H}^{n}(\reg(V))$, and if we set $M:= \reg(V)$, we can work directly with $M$, viewing $\sing(V)\equiv \sing(M):= \overline{M}\backslash M$. Furthermore, this tells us that the constant $C_0$ in our main theorems, for the first inequality at least, only depends on $\H^n(\reg(V)\cap B^{n+1}_2(0))$, and not on $\Lambda$. It should be noted that, whilst the results of \cite{antoine-song} are stated under the a priori stronger assumption that $\dim_{\mathcal{H}}(\sing(V))\leq n-7$ (i.e., for each $\gamma>0$, $\mathcal{H}^{n-7+\gamma}(\sing(V)) = 0$), under the assumption that the smoothly embedded part of $V$ has finite index, the assumption that $\mathcal{H}^{n-1}(\sing(V)) = 0$ \textit{actually implies} $\dim_{\mathcal{H}}(\sing(V))\leq n-7$. This is due to the fact that when the index is finite, the smoothly embedded part of the varifold is locally stable, and thus one may apply the regularity theory for stable codimension one stationary integral varifolds established by N.~Wickramasekera (\cite{wickstable}). Consequently, $\sing(V) = \mathcal{S}_{n-7}$ is equal to the $(n-7)^{\text{th}}$ stratum, and thus is countably $(n-7)$-rectifiable by \cite{naber-valtorta}.
Moreover, it should be noted that \cite{wickstable} also gives that one may replace in our results the assumption that $\mathcal{H}^{n-1}(\sing(V)) = 0$ with the equivalent condition that $V$ does not contain any so-called \textit{classical singularities}, i.e. singularities locally about which $V$ can be written as a sum of at least 3 $C^{1,\alpha}$ submanifolds-with-boundary, for some $\alpha\in (0,1)$, which all have the same common boundary. \textbf{Acknowledgements:} The first author would like to thank Professor Ulrich Menne for suggesting the problem and putting this collaboration together. NSA was supported through grant No. MOST 108-2115-M-003-016-MY3 of the National Science and Technology Council. \section{Preliminaries}\label{sec:prelim} Throughout this article $n$ will always denote a positive integer unless its range is specified. Given $r>0$, we denote by $\eball^{n+1}_r(x)$ the Euclidean ball of radius $r$ centred at $x\in\real^{n+1}$, and as a shorthand we write $\eball^{n+1}_r(0)\equiv \eball^{n+1}_r$. If $A\subset\real^{n+1}$ is any set and $x\in\real^{n+1}$ then we write $d(x,A):=\inf\{d(x,a):a\in A\}$, $\eball_r(A):=\{x\in\real^{n+1}:d(x,A)<r\}$ and given $B\subset\real^{n+1}$ another subset, we define their distance as $d(A,B):=\inf\{d(a,b):a\in A,b\in B\}$. Given an open set $U\subset\real^{n+1}$ and a positive integer $k$ we will denote by $\ivarifolds_k(U)$ the space of integral $k$-varifolds in $\real^{n+1}$ with support in $U$. We endow $\ivarifolds_k(U)$ with the weak topology of measure-theoretic convergence, which is induced by the corresponding Fr\'echet structure, for which we write $\mathbf{d}$ for the corresponding metric which induces the topology. Given $V\in\ivarifolds_k(U)$ and $x\in\supp(V)$ we denote by $\Theta_V(x)$ its density at $x$, which is a positive integer $\|V\|$-almost everywhere.
If $R\subset U$ is a $\haus^k$-rectifiable set and $\theta:U\rightarrow\integer_{\geq 0}$ is a $\haus^k$-measurable function which is locally integrable on $U$, we denote the induced integral $k$-varifold of density $\theta$ by $\setv(R,\theta)$; in the special case when $R$ is the graph of a $C^2$ function $u$ and the multiplicity $\theta$ is identically 1, we write the shorthand $\setv(R,\theta)\equiv \setv(u)$. We will always assume that $1\leq k\leq n$ when providing general definitions since the extreme cases $k=0$ and $k=n+1$ are often trivial and irrelevant to usual applications. For $x_0\in \R^{n+1}$ and $r>0$, we define the homothetic rescaling by $r$ about $x_0$ to be the function $\eta_{x_0,r}:\R^{n+1}\to \R^{n+1}$ given by $\eta_{x_0,r}(x):= r^{-1}(x-x_0)$. Given $V\in \ivarifolds_k(U)$, $x\in\spt\|V\|$, and $r>0$, we write $\tangentcones_x(V)\subset\ivarifolds_k(\R^{n+1})$ for the set of tangent cones to $V$ at $x$, i.e. accumulation points of (convergent subsequences of) $(\eta_{x,r})_\#V$ as $r\downarrow 0$. Given $V\in\ivarifolds_k(U)$ we define its \textit{regular part} as \begin{equation*} \begin{aligned} \reg(V) := \{ & x\in\supp\|V\|: \supp\|V\|\cap\eball_\rho(x) \text{ is a properly smoothly embedded }\\ &\hspace{17em} k\text{-submanifold of }U \text{ for some }\rho>0 \} \end{aligned} \end{equation*} and its \textit{singular part} to be $\sing(V):=\supp\|V\|\backslash\reg(V)$. We note that, according to the definition above, even if a varifold $V$ is given by a smooth immersed submanifold, its points of self-intersection are in $\sing(V)$. Let $\csvf(U,\real^{n+1})$ be the space of compactly supported vector fields of class $C^1$ in $U$. 
If $V\in\ivarifolds_k(U)$, $X\in\csvf(U,\real^{n+1})$, and $\phi^X_t$ is the $C^2$ flow generated by $X$ defined for $t\in(-\varepsilon,\varepsilon)$, we denote the first and second variations of $V$ in the direction of $X$ by $\delta V(X)=\left.\frac{d}{dt}\right|_{t=0}\|(\phi^X_t)_\# V\|$ and $\delta^2 V(X,X)=\left.\frac{d^2}{dt^2}\right|_{t=0}\|(\phi^X_t)_\# V\|$ respectively. We say that $V$ is \emph{stationary} in $U$ if $\delta V(X)=0$ for all $X\in\csvf(U,\real^{n+1})$ and of \emph{bounded first variation} if there exists $H>0$ such that $\delta V(X)\leq H\int_U|X|d\|V\|$ for all $X\in\csvf(U,\real^{n+1})$. A direct computation shows (\cite{simon-gmt}): $$\delta V(X) = \int_{U\times G(k,n+1)}\textnormal{div}_S(X_x)\ dV(x,S)$$ and \begin{align*} \delta^2V(X,X) = \int_{U\times G(k,n+1)}&\left\{\divergence_S(Y_x) + \left(\divergence_S(X_x)\right)^2 + \sum^k_{i=1}\left|(\nabla_{\tau_i}X_x)^{\perp_S}\right|^2\right.\\ & \hspace{3em} \left. - \sum^k_{i,j=1}(\tau_j\cdot \nabla_{\tau_i}X_x)(\tau_i\cdot \nabla_{\tau_j}X_x)\right\}\ d V(x,S) \end{align*} where $Y_x:= \left.\frac{d^2}{dt^2}\right|_{t=0}\phi^X_t(x)$, $\nabla$ is the Euclidean connection, and $\{\tau_1,\dotsc,\tau_k\}$ is a choice of orthonormal basis for the subspace $S\in G(k,n+1)$, with $\perp_S$ denoting the corresponding orthogonal projection onto the orthogonal complement of $S$. \begin{definition}\label{definition bounded index} Let $\Omega\subset U\subset\real^{n+1}$ be open sets, $V\in\ivarifolds_k(U)$ a stationary integral varifold and $I\in\integer_{\geq 0}$ a non-negative integer. We say that the regular part of $V$ has index bounded by $I$ in $\Omega$ if and only if for all subspaces $P\subset\csvf(\Omega\backslash\sing(V),\R^{n+1})$ of dimension $I+1$ there exists $X\in P$ such that $X\neq 0$ and $\delta^2 V (X,X)\geq 0$. 
When the regular part of $V$ has bounded index in $\Omega$, we may define its corresponding index as \begin{equation*} \Index(\reg(V);\Omega):=\min\{I\in\integer_{\geq 0}: \reg(V)\text{ has index bounded by } I\text{ in }\Omega\} \end{equation*} and we write $\textnormal{index}(\reg(V)):= \textnormal{index}(\reg(V);U)$. \end{definition} The regular part of a varifold $V$ is said to be \emph{stable} in an open subset $\Omega\subset U$ if $\Index(\reg(V);\Omega)=0$. Otherwise we say that the regular part of $V$ is \textit{unstable} in $\Omega$. The following related notion of \emph{folding number} was introduced in \cite{antoine-song}: \begin{definition}\label{defn:folding} Let $U\subset\real^{n+1}$ be an open subset and $V\in\ivarifolds_k(U)$ be a stationary integral varifold. We define the folding number $\folding(V)$ of $V$ as follows: if $V$ is stable in $U$, then $\folding(V):= 0$; otherwise, we define $\folding(V)$ to be the largest size of a (possibly infinite) collection of disjoint unstable open subsets of $U$. \end{definition} The folding number simply quantifies the number of disjoint unstable subsets in $V$. However, each such subset may have index strictly larger than $1$ without the possibility of breaking into further disjoint unstable subsets. The core property of varifolds with bounded index is the fact that they cannot be unstable in too many disjoint open subsets. This fact is quantified by the folding number in the following lemma. \begin{lemma}\label{bounded index disjoint sets} Let $U\subset\real^{n+1}$ be an open set, $I\in\integer_{\geq 0}$ be a non-negative integer, and $V\in\ivarifolds_k(U)$ be a stationary integral varifold with $\Index(\reg(V))\leq I$. If $\Omega_1,\ldots,\Omega_{I+1}\subset U$ is a collection of $I+1$ open sets such that $\Index(\reg(V);\Omega_j)\geq 1$ for each $j=1,\ldots,I+1$ then there exist $j\neq j'$ such that $\Omega_{j}\cap\Omega_{j'}\neq\emptyset$. In particular $\folding(V)\leq I$.
\end{lemma} \begin{proof} Suppose the lemma is false. Then, we may find a pairwise disjoint collection of open subsets $\Omega_1,\dotsc,\Omega_{I+1}\subset U$ for which $\textnormal{index}(\reg(V);\Omega_j)\geq 1$ for each $j=1,\dotsc,I+1$. By definition, for each such $j$ we can then find $X_j\in C^1_c(\Omega_j\backslash\sing(V);\R^{n+1})$ such that $\delta^2 V(X_j,X_j)<0$. By extension by $0$ outside $\spt(X_j)$, we can view each $X_j$ as an element of $C^1_c(U\backslash\sing(V);\R^{n+1})$. Moreover, we know for $i\neq j$ that $\spt(X_i)\cap \spt(X_j) \subset \Omega_i\cap \Omega_j = \emptyset$, and thus if we set $P:= \textnormal{span}\{X_1,\dotsc,X_{I+1}\}$, $P$ is a subspace of dimension $I+1$ in $C^1_c(U\backslash\sing(V);\R^{n+1})$, and $\delta^2V(X,X)<0$ for all non-zero $X\in P$. This then implies that $\Index(\reg(V))\geq I+1$, which contradicts our assumption that $\Index(\reg(V))\leq I$. Thus the lemma must hold. \end{proof} Another quantity, somewhat dual to the folding number, is the \textit{stability radius}, defined analogously to that in \cite{antoine-song}: \begin{definition}\label{defn:stab-radius} Let $U\subset\R^{n+1}$ be an open subset and $V\in \ivarifolds_k(U)$ be a stationary integral varifold. Then the stability radius is the function $s_V:U\to [0,\infty]$ defined by: $$s_V(x):= \sup\{r\geq 0:\textnormal{index}(\reg(V);\eball_r(x)\cap U) = 0\}.$$ \end{definition} In this definition, we take $\textnormal{index}(\reg(V);\eball_r(x)) =0$ when $\spt\|V\|\cap\eball_r(x) = \emptyset$. Clearly, if $s_V(x)<\infty$ then $\reg(V)$ is unstable on $\eball_{\lambda s_V(x)}(x)$ for all $\lambda>1$. This immediately implies that if $s_V(x)<\infty$ for some $x\in U$, then $s_V(y)<\infty$ for all $y\in U$, and in particular $s_V(y) \leq s_V(x) + |x-y|$; indeed, if this inequality were not true, then we could find $\lambda>1$ for which $\reg(V)$ is stable in $\eball_{\lambda(s_V(x)+|x-y|)}(y)$.
But $\eball_{\tilde{\lambda}s_V(x)}(x)\subset \eball_{\lambda(s_V(x)+|x-y|)}(y)$ for $\tilde{\lambda}\in (1,\lambda)$ sufficiently close to $1$, and thus we would have that $\reg(V)$ is stable in $\eball_{\tilde{\lambda}s_V(x)}(x)$ (as the index is non-decreasing with respect to set-inclusion), a contradiction to the definition of $s_V(x)$. In particular, we have a dichotomy that either $s_V\equiv \infty$ (i.e. $\reg(V)$ is stable in $U$) or $s_V$ is finite everywhere. The above simple argument shows that the stability radius function is always Lipschitz with Lipschitz constant at most $1$ when $V$ is not stable in $U$; in particular, $s_V$ is continuous. \begin{lemma}\label{stability radius continuity} Let $U\subset\R^{n+1}$ be open and $V\in \ivarifolds_k(U)$ be a stationary integral varifold. Then, either $s_V\equiv \infty$, or we have for all $x,y\in U$, $|s_V(x)-s_V(y)|\leq |x-y|$. \end{lemma} \begin{proof} Suppose $s_V\not\equiv \infty$, and so $s_V<\infty$ pointwise. In the above discussion we saw that for any $x,y\in U$, we have $s_V(y) \leq s_V(x) + |x-y|$. Swapping $x$ and $y$ in this inequality and combining gives $|s_V(x) - s_V(y)| \leq |x-y|$, as desired. \end{proof} The next fact we need is that stationary integral varifolds with bounded index on their regular part are in fact locally stable about every point. This is a well-known fact about regular points, as indeed a smooth minimal submanifold is in fact locally area-minimising (see \cite{federer-2}*{Section 4}) and hence locally stable. \begin{lemma}\label{locally stable} Let $U\subset\R^{n+1}$ be open and let $V\in \ivarifolds_k(U)$ be a stationary integral varifold obeying $\Index(\reg(V))<\infty$. Then, for each $x\in \spt\|V\|$, there exists $\rho_x>0$ for which $\Index(\reg(V);\eball_{\rho_x}(x)) = 0$; in particular, $s_V(x)\geq \rho_x>0$. \end{lemma} \begin{proof} As discussed before the statement of the lemma, this result follows by \cite{federer-2}*{Section 4} whenever $x\in\reg(V)$. So let us assume that $x\in \sing(V)$.
We first claim that there exists $\rho_x>0$ for which $$\Index(\reg(V);\eball_{\rho_x}(x)\backslash\{x\}) = 0.$$ Indeed, suppose this were not true. Then, for each $\rho>0$, we would have $$\Index(\reg(V);\eball_{\rho}(x)\backslash\{x\})\geq 1.$$ Hence, we can find a (non-zero) vector field $X\in C^1_c(U\backslash\sing(V);\R^{n+1})$ for which $\delta^2V(X,X) <0$ and $\spt(X)\subset (B_\rho(x)\backslash\{x\})\cap \reg(V)$. In particular, there is a $\tau_\rho\in (0,\rho)$ for which $\spt(X)\subset B_\rho(x)\backslash B_{\tau_\rho}(x)$. Thus, we may find a sequence $\rho_i\downarrow 0$ with $\rho_{i+1}<\tau_{\rho_i}$ for all $i$ and, for each $i\geq 1$, $\Index(\reg(V);B_{\rho_i}(x)\backslash B_{\tau_{\rho_i}}(x))\geq 1$. But then as $(B_{\rho_i}(x)\backslash B_{\tau_{\rho_i}}(x))_{i=1}^\infty$ are pairwise disjoint open sets, this is a direct contradiction to the fact that $\Index(\reg(V))<\infty$ (the contradiction coming via Lemma \ref{bounded index disjoint sets}). Hence the claim holds. We now further claim that, with this radius $\rho_x$, $\Index(\reg(V);\eball_{\rho_x}(x)) = 0$. Indeed, as $x\in \sing(V)$, if this were not true then one may find a non-zero $X\in C^1_c(\eball_{\rho_x}(x)\backslash\sing(V);\R^{n+1})$ for which $\delta^2V(X,X)<0$. But as $x\in \sing(V)$, $\spt(X)\subset \eball_{\rho_x}(x)\backslash \{x\}$, and so there is a $\tau>0$ for which $\spt(X)\subset \eball_{\rho_x}(x)\backslash\eball_{\tau}(x)$, a direct contradiction to the above claim. This concludes the proof of the lemma. \end{proof} We now introduce the last general piece of terminology we need, namely the \textit{regularity scale} of a varifold (as in \cite{cheeger-naber2013}*{Section 5.3}). Given a $k$-dimensional subspace $P\subset \R^{n+1}$, let $\{\eta_1,\dotsc,\eta_{n+1-k}\}$ be a basis of $P^\perp$.
We can then represent functions $B_1\cap P\to P^\perp$ as functions $u:B_1\cap P\to \R^{n+1-k}$, with their graphs being given by $\graph(u):= \{p+\sum_{j}u^j(p)\eta_j: p\in B_1\cap P\}$, where $u = (u^1,\dotsc,u^{n+1-k})$. Now consider $V\in \ivarifolds_k(U)$, $x\in \spt\|V\|$, $\rho>0$, and $q\in \mathbb{Z}_{\geq 0}$. We say that $V\restrictv \eball_\rho(x)$ is a \textit{union of $q$ $C^2$ functions} if there exist $k$-dimensional subspaces $P_1,\dotsc,P_q\subset\R^{n+1}$, and $C^2$ functions $(u_i)_{i=1}^{q}$ with $u_i:B_1\cap P_i\to \R^{n+1-k}$ such that $$(\eta_{x,\rho})_\#V = \sum^q_{i=1} \mathbf{v}(u_i)\restrictv B_1.$$ \textbf{Note:} The functions $u_i$ need not be distinct, as they can, for example, coincide when $\reg(V)$ occurs with multiplicity $>1$. Moreover, $q$ need not be constant: two surfaces intersecting transversely are locally the graphs of $2$ distinct functions (with different domains) near any point of intersection, and locally the graph of a single function away from the intersection points. \begin{definition}\label{regularity scale} Let $U\subset\R^{n+1}$ be open, $V\in \ivarifolds_k(U)$, $x\in \spt\|V\|$, and $Q\in\integer_{>0}$. First define $$\regscale^Q_{0,V}(x):= \sup\{\rho>0: \eball_\rho(x)\subset U\text{ and }V\restrictv\eball_\rho(x)\textit{ is a union of at most }Q\ C^2\text{ functions}\}$$ where we set $\sup(\emptyset) := 0$. Now, if $x\in \spt\|V\|$ is such that $\regscale^Q_{0,V}(x) = \rho>0$, and $y\in \eball_\rho(x)\cap \spt\|V\|$, then there exists $q\leq Q$, $k$-dimensional subspaces $P_1,\dotsc,P_q\subset\R^{n+1}$, and $C^2$ functions $(u_i)_{i=1}^q$, $u_i:B_1\cap P_i\to \R^{n+1-k}$ (all depending on $y$) with $y\in \graph(u_i)$ for each $i=1,\dotsc,q$; let $q$ be maximal obeying this.
We then define the norm of the second fundamental form of $V$ at $y$ as: $$|A_V(y)|:= \sum^q_{i=1}|A_{\graph(u_i)}(y)|.$$ Then, the regularity scale of $V$ at $x\in \spt\|V\|$ is defined to be $$\regscale^Q_{V}(x):= \sup\left\{0<\rho\leq \regscale^Q_{0,V}(x): \sup_{\spt\|V\|\cap \eball_\rho(x)}\rho|A_V| \leq 1\right\}$$ where once again, if this set is empty (i.e. $\regscale^Q_{0,V}(x) = 0$) we set $\regscale^Q_{V}(x):= 0$. We then write $\badreg_r^Q(V):= \{x\in\spt\|V\|:\regscale^Q_V(x)\leq r\}$ and $\badreg_r(V):= \{x\in \spt\|V\|:\regscale^Q_V(x)\leq r\ \text{for some }Q\}$. \end{definition} \subsection{Codimension one integral varifolds in $\real^{n+1}$} In this section we give details of the relevant results and definitions for codimension one stationary integral varifolds that will be necessary for our work. \begin{definition}\label{s-alpha condition} Let $n\in\mathbb{Z}_{\geq 2}$, $I\in \mathbb{Z}_{\geq 0}$, and $V\in \ivarifolds_n(\eball_2^{n+1})$. We say that $V\in\salpha_I$ if it obeys the following conditions: \begin{enumerate} \item [\textnormal{(1)}] $V$ is stationary in $\eball_2^{n+1}$, i.e. $\delta V\equiv 0$; \item [\textnormal{(2)}] $\Index(\reg(V);\eball^{n+1}_2)\leq I$; \item [\textnormal{(3)}] $\H^{n-1}(\sing(V)) = 0$. \end{enumerate} Let us write $\salpha := \cup_{I\geq 0}\salpha_I$. \end{definition} \textbf{Remark:} From Lemma \ref{locally stable}, we know that any $V\in\salpha_I$ is locally stable and hence, from the regularity theory of Wickramasekera (\cite{wickstable}), obeys $\dim_\H(\sing(V))\leq n-7$\footnote{To be precise, when $n\leq 7$, by this we mean that for $2\leq n\leq 6$ we have $\sing(V) = \emptyset$ and for $n=7$ that $\sing(V)$ is discrete; we will write $\dim_\H(\sing(V))\leq n-7$ as a shorthand for this throughout. However, for our main results we will always have $n\geq 8$.}. In fact, it obeys $\sing(V) = \mathcal{S}_{n-7}$, i.e. 
it is equal to the $(n-7)^{\text{th}}$ stratum (see Definition \ref{defn:strata}), and so is actually countably $(n-7)$-rectifiable by \cite{naber-valtorta}. In particular, on any ball $\Omega\subset\eball^{n+1}_2$, we know that $\reg(V)$ is two-sided in $\Omega$, and thus the second variation is only non-zero for vector fields in the normal direction of $\reg(V)$, namely, if $X = \zeta \nu$, for $\nu$ a choice of unit normal of $\reg(V)$ in $\Omega$, and $\zeta\in C^1_c(\Omega\backslash\sing(V);\R)$, then we have $$\delta^2V(X,X) = \int_U \left\{|\nabla^V\zeta|^2 - \zeta^2|A|^2\right\}\ d\|V\|.$$ Thus, the bounded index condition in (2) is requiring that at most $I$ linearly independent $\zeta\in C^1_c(\eball^{n+1}_2\backslash\sing(V);\R)$ can obey \begin{equation}\label{E:index} \int_U |A|^2\zeta^2\ d\H^n > \int_U |\nabla^V\zeta|^2\ d\H^n \end{equation} (here, we have removed any multiplicity from $V$, as it is constant by connectedness of $\reg(V)\cap \Omega$ and the constancy theorem). In particular, our assumption on the index in condition (2) is equivalent to requiring the ``usual'' condition that $\Index(\reg(V);\Omega)\leq I$ for each ball $\Omega\subset\eball_2^{n+1}$ with $\dim_\H(\Omega\cap \sing(V))\leq n-7$. Furthermore, as $\sing(V)$ is countably $(n-7)$-rectifiable, a simple cut-off argument (based on the fact that the 2-capacity of $\sing(V)$ vanishes) gives that, when $\Index(\reg(V))<\infty$, we have $\Index(V) = \Index(\reg(V))$, where by $\Index(V)$ we mean the dimension of a maximal subspace of functions $\zeta\in C^1_c(B^{n+1}_2;\R)$ which obey \eqref{E:index}. \textbf{Note:} Condition (3) in Definition \ref{s-alpha condition} can be replaced by the equivalent condition that $V$ contains no classical singularities, in the sense of \cite{wickstable}; the fact that, given (1) and (2), these two conditions are equivalent follows from the local stability provided by Lemma \ref{locally stable} and the regularity theory of \cite{wickstable}.
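\textbf{Example:} As an illustration of Definition \ref{s-alpha condition} (a standard example, recorded here only for orientation), consider the Simons cone $$\mathbf{C}_S := \{(x,y)\in\R^4\times\R^4: |x| = |y|\},$$ regarded as a multiplicity one stationary integral varifold in $\eball^{8}_2$ (so $n=7$). It is area-minimising, and hence stable, so condition (2) holds with $I = 0$; moreover $\sing(\mathbf{C}_S) = \{0\}$, so that $\H^{n-1}(\sing(\mathbf{C}_S)) = \H^{6}(\{0\}) = 0$, and thus $\mathbf{C}_S\in\salpha_0$.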
Our first lemma is a characterisation of the singular set of $V\in\salpha$ as the zero set of the regularity scale: \begin{lemma}\label{regular implies positive regscale} Let $n\geq 2$ and $V\in \salpha$. Then, $x\in \reg(V)$ if and only if $\regscale^Q_V(x)>0$ for some positive integer $Q$. \end{lemma} \begin{proof} One direction is clear: indeed, if $x\in \reg(V)$, then there is a $\rho>0$ such that $\spt\|V\|\cap \eball_\rho(x)$ is an embedded hypersurface, and furthermore is expressible as a smooth graph over the unique tangent plane $T_x\spt\|V\|$. By the constancy theorem, $V\restrictv \eball_\rho(x)$ is then a constant integer multiple of this graph, which shows that $\regscale_V^Q(x)>0$, where $Q=\Theta_V(x)\in\mathbb{Z}_{\geq 1}$. Now suppose $x\in\spt\|V\|$ satisfies $\regscale^Q_V(x)>0$ for some $Q\in\mathbb{Z}_{\geq 1}$; thus, locally about $x$ we have that $V$ is a sum of $Q_*$ embedded (indeed graphical) $C^2$ hypersurfaces, for some $Q_*\leq Q$. In particular, as each such hypersurface has unique tangent cones at every point which are (multiplicity one) hyperplanes, this means that $V$ has a unique tangent cone $\mathbf{C}$ at $x$ which is supported on a union of hyperplanes. However, the minimal distance theorem of Wickramasekera (\cite{wickstable}*{Theorem 3.4}, which we can apply as $V$ is stable locally about $x$ by assumption of it being a union of codimension one graphs) implies that in fact $\mathbf{C}$ must be supported on a \textit{single} hyperplane (as if there were more than one, this would create a classical singularity in $\mathbf{C}$, contradicting the minimal distance theorem for $V$). But now we can apply Wickramasekera's sheeting theorem (\cite{wickstable}*{Theorem 3.3}) to see that in fact $x\in\reg(V)$, which completes the proof. \end{proof} \textbf{Remark:} All we needed for the above proof was stationarity of $V$ and $\H^{n-1}(\sing(V)) = 0$; we did not need the bounded index assumption.
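\textbf{Example:} The hypothesis $\H^{n-1}(\sing(V)) = 0$ cannot be dropped here (the following standard example is included only for illustration): if $P_1,P_2\subset\R^{n+1}$ are two distinct hyperplanes through the origin, then $V = |P_1| + |P_2|$ is stationary, and about any $x\in P_1\cap P_2$ it is a union of $2$ $C^2$ graphs with $|A_V|\equiv 0$, whence $\regscale^2_V(x)>0$; yet every such $x$ lies in $\sing(V)$. Of course, in this example $\sing(V) = P_1\cap P_2$ has positive $\H^{n-1}$-measure.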
Let us recall the main compactness theorem of Wickramasekera's regularity theory, which applies to the class $\salpha_0$: \begin{theorem}[\cite{wickstable}*{Theorem 3.1}]\label{compactness theorem} Let $n\in \mathbb{Z}_{\geq 2}$. Suppose $(V_i)_{i=1}^\infty\subset \salpha_0$ is a sequence satisfying $\sup_i\|V_i\|(\eball_2^{n+1})<\infty$. Then, there exists a subsequence $(V_{i_j})_{j=1}^\infty$ and $V\in \salpha_0$ such that $V_{i_j}\to V$ as varifolds in $\eball^{n+1}_2$ and smoothly (i.e. in the $C^k$ topology for each $k$) locally in $\eball^{n+1}_2\backslash\sing(V)$. \end{theorem} We also recall the so-called sheeting theorem of Wickramasekera, stated in terms of the regularity scale previously defined: \begin{theorem}[Sheeting Theorem, \cite{wickstable}*{Theorem 3.3}]\label{sheeting theorem} Let $n\in \mathbb{Z}_{\geq 2}$, $\Lambda>0$, and $\theta\in (0,1)$. Then, there exists $\varepsilon_0 = \varepsilon_0(n,\Lambda,\theta)\in (0,1)$ and $Q_0 = Q_0(n,\Lambda,\theta)\in \mathbb{Z}_{\geq 1}$ such that the following is true: whenever $V\in \salpha_0$ with $0\in \spt\|V\|$ satisfies: \begin{enumerate} \item [\textnormal{(a)}] $(2^n\omega_n)^{-1}\|V\|(\eball^{n+1}_2)\leq \Lambda$; \item [\textnormal{(b)}] $\textnormal{dist}_\H(\spt\|V\|\cap (B^n_1(0)\times\R), B^n_1(0))<\varepsilon_0$; \end{enumerate} then we have $\regscale^{Q_0}_V(0)\geq \theta$; here, $\omega_n := \H^n(B_1^n(0))$ and $\textnormal{dist}_\H$ denotes the Hausdorff distance. 
\end{theorem} \begin{proof} It follows from Schauder theory and the sheeting theorem of Wickramasekera (\cite{wickstable}*{Theorem 3.3}) that there exists $\varepsilon_0 = \varepsilon_0(n,\Lambda,\theta)\in (0,1)$ and $C_1 = C_1(n,\Lambda,\theta)\in (0,\infty)$ such that if the above assumptions hold, then there is a $Q\in\mathbb{Z}_{\geq 1}$ and smooth functions $u_i:B_\theta^n(0)\to \R$ for $i=1,\dotsc,Q$ such that $$V\restrictv (B^n_\theta(0)\times\R) = \sum^Q_{i=1}\mathbf{v}(u_i)$$ and moreover that $\|u_i\|_{C^4}\leq C_1\varepsilon_0$ for each $i=1,\dotsc,Q$. Clearly we have $\|\mathbf{v}(u_i)\|(B_\theta^{n}(0)\times\R) \geq \H^n(B_\theta^{n}(0))$ for each $i=1,\dotsc,Q$, which gives $Q\omega_n\theta^n \leq (2^n\omega_n)\Lambda$, i.e. $Q\leq Q_0$, where $Q_0 = Q_0(n,\Lambda,\theta)$. We can clearly choose $\varepsilon_0$ small enough to guarantee that $\graph(u_i)\cap \eball^{n+1}_\theta(0)\neq\emptyset$ for all $i=1,\dotsc,Q$, and thus we have $\regscale^Q_{0,V}(0)\geq \theta$ (as defined in Definition \ref{regularity scale}). From our estimate on $\|u_i\|_{C^4}$ for each $i=1,\dotsc,Q$, we clearly have for any $\rho\in (0,\theta)$, $$\sup_{\spt\|V\|\cap B_\rho^{n+1}(0)}\rho|A_V| \leq \theta Q_0 C_1\varepsilon_0$$ and so this is $\leq 1$ when $\theta Q_0C_1\varepsilon_0 \leq 1$; this can be guaranteed by taking $\varepsilon_0 = \varepsilon_0(n,\Lambda,\theta)$ smaller if necessary. Thus, this shows that $\regscale^Q_V(0)\geq \theta$, completing the proof, as $\regscale^{Q_0}_V(0)\geq \regscale^Q_V(0)$. \end{proof} \section{Quantitative Stratification}\label{sec:strata} In this section we recall the notion of \textit{strata} as well as the \textit{quantitative strata} in codimension one, and prove a relation (essentially an $\varepsilon$-regularity theorem) between the regularity scale and certain strata. \begin{definition} Let $\BC\in\ivarifolds_n(\R^{n+1})$. We say that $\BC$ is a cone if $(\eta_{0,r})_\#\BC = \BC$ for all $r>0$. 
\end{definition} Given a cone $\BC\in \ivarifolds_n(\R^{n+1})$, we define the \textit{spine} of $\BC$, denoted $S(\BC)$, to be the set of points along which $\BC$ is translation invariant, i.e. $$S(\BC):= \{x\in \R^{n+1}:(\eta_{x,1})_\#\BC = \BC\}.$$ It is simple to check that $S(\BC)\subset\R^{n+1}$ is a subspace; thus, $\dim(S(\BC))\leq n$. We then say that $\BC$ is $k$\textit{-symmetric} if $\dim(S(\BC)) \geq k$. Let us write $\kcone_k\subset\ivarifolds_n(\R^{n+1})$ for the set of (codimension one) $k$-symmetric cones. Note that when $S(\BC)\neq \spt\|\BC\|$ (i.e. $\BC$ is not supported on a hyperplane), then $S(\BC)\subset \sing(\BC)$. Next we define a quantitative notion of being \textit{almost conical} for varifolds, in the same way as seen in \cite{naber-valtorta}*{Definition 1.1} and \cite{cheeger-naber2013}*{Definition 5.3}. \begin{definition}\label{defn:conical} Fix $U\subset\R^{n+1}$ open, and let $\delta>0$, $r>0$, and $k\in \{0,1,\dotsc,n\}$. We say that $V\in\ivarifolds_n(U)$ is $(\delta,r,k)$-conical at a point $x\in \spt\|V\|$ if $\eball_r(x)\subset U$ and there exists a $k$-symmetric cone $\BC\in\kcone_k$ such that $$\mathbf{d}\left((\eta_{x,r})_\#V\restrictv \eball_1,\BC\restrictv\eball_1\right)\leq \delta;$$ here, we recall that $\mathbf{d}$ is a metric arising from the Fr\'echet structure on varifolds, which induces the varifold topology. \end{definition} We now define various stratifications of the singular set, as in \cite{naber-valtorta}*{Definition 1.2}: \begin{definition}\label{defn:strata} Fix $U\subset\R^{n+1}$ open and let $\delta>0$, $R\in (0,1]$, $r\in (0,R)$, and $V\in \ivarifolds_n(U)$ with bounded first variation.
Then for each $k\in\{0,\dotsc,n\}$, we define: \begin{enumerate} \item [\textnormal{(1)}] The $k^{\text{th}}$ $(\delta,r,R)$-stratification by: $$\strata^k_{\delta,r,R}(V):= \{x\in\spt\|V\|: V\text{ is not }(\delta,s,k+1)\text{-conical at }x\text{ for all }s \in [r,R)\};$$ \item [\textnormal{(2)}] The $k^{\text{th}}$ $\delta$-stratification by: $$\strata^k_\delta(V):= \bigcap_{0<r<1}\strata^k_{\delta,r,1}(V);$$ \item [\textnormal{(3)}] The $k^{\text{th}}$ stratification by: $$\strata^k(V):= \bigcup_{\delta>0}\strata^k_\delta(V).$$ \end{enumerate} \end{definition} \textbf{Note:} We have $\strata^k(V) \equiv \{x\in\spt\|V\|: \dim(S(\BC))\leq k\text{ for each }\BC\in\tangentcones_x(V)\}$. By definition, $\strata^0(V)\subset\cdots\subset\strata^{n-1}(V)\subset\strata^n(V)\equiv \spt\|V\|$, and $\strata^{n-1}(V)\subset\sing(V)$. We remark that it is well-known (see \cite{almgren} or \cite{simon-cylindrical}*{(1.10)}) that $\dim_\H(\strata^k(V))\leq k$ for each $k\in\{0,1,\dotsc,n\}$, and moreover that $\strata^k(V)$ is countably $k$-rectifiable (see \cite{naber-valtorta}). We have already remarked that in the above (codimension one) situation, for $V\in \salpha$ we have $\sing(V) = \strata^{n-7}(V)$ (which follows from the regularity theory \cite{wickstable} along with the local stability provided by Lemma \ref{locally stable}); in particular, this applies to codimension one area minimisers. We note the following immediate consequence of the definition: \begin{lemma}\label{rescaled strata} Let $U\subset\R^{n+1}$ be open, $\delta>0$, $0<r<R\leq 1$, $k\in\{0,1,\dotsc,n\}$ and $V\in\ivarifolds_n(U)$ with bounded first variation. 
Then for any $x\in \spt\|V\|$ and $\rho>0$ with $\eball_\rho(x)\subset U$, we have $$\eta_{x,\rho}\left(\strata^k_{\delta,r,R}(V)\right) = \strata^k_{\delta,r/\rho,R/\rho}\left((\eta_{x,\rho})_\#V\right).$$ \end{lemma} \begin{proof} This is immediate by definition, as $\eta_{y,s}\circ\eta_{x,\rho} = \eta_{x+\rho y,s\rho}$, and so $y\in \strata^k_{\delta,r/\rho,R/\rho}\left((\eta_{x,\rho})_\#V\right)$ if and only if $x+\rho y\in \strata^k_{\delta,r,R}(V)$. \end{proof} The following $\varepsilon$-regularity result is the corresponding version of \cite{cheeger-naber2013}*{Theorem 6.2} for locally stable varifolds instead of codimension one area minimising currents. \begin{theorem}[$\varepsilon$-Regularity Theorem]\label{epsilon regularity} Let $n\in\mathbb{Z}_{\geq 2}$, $\Lambda\in (0,\infty)$, and $K\subset\eball^{n+1}_2$ compact. Then, there exist constants $\varepsilon_0 = \varepsilon_0(n,\Lambda,K)\in (0,1)$ and $Q_0 = Q_0(n,\Lambda,K)\in \mathbb{Z}_{\geq 1}$ such that the following holds: if $V\in \ivarifolds_n(\eball^{n+1}_2)$ is a stationary integral varifold, $x\in \spt\|V\|\cap K$, and $\rho\in (0,d(x,\del\eball^{n+1}_2)]$ satisfy: \begin{enumerate} \item [\textnormal{(a)}] $\|V\|(\eball^{n+1}_2)\leq \Lambda$; \item [\textnormal{(b)}] $(\eta_{x,\rho/2})_\#V\in \salpha_0$; \item [\textnormal{(c)}] $V$ is $(\varepsilon_0,\rho/2,n-6)$-conical at $x$; \end{enumerate} then we have $\regscale^{Q_0}_V(x)\geq \rho/4$. \end{theorem} \begin{proof} We argue this by contradiction, so suppose the claim was false. Then, we can find $\Lambda>0$ and a compact set $K\subset\eball^{n+1}_2$ such that for each $k\in \mathbb{Z}_{\geq 1}$, we can find a varifold $V_k\in \ivarifolds_n(\eball^{n+1}_2)$, a point $x_k\in \spt\|V_k\|\cap K$, and $\rho_k \in (0,d(x_k,\del\eball^{n+1}_2)]$ such that $\|V_k\|(\eball^{n+1}_2)\leq \Lambda$, $(\eta_{x_k,\rho_k/2})_\#V_k \in \salpha_0$, with $V_k$ being $(1/k,\rho_k/2,n-6)$-conical at $x_k$, yet $\regscale^k_{V_k}(x_k)<\rho_k/4$.
Now set $W_k:= (\eta_{x_k,\rho_k/2})_\#V_k$. Then we have $W_k\in \salpha_0$, and \begin{align*} \|W_k\|(\eball_2^{n+1}) \equiv (\rho_k/2)^{-n}\|V_k\|(B_{\rho_k}(x_k)) & \leq 2^n\cdot d(x_k,\del\eball^{n+1}_2)^{-n}\|V_k\|\left(\eball_{d(x_k,\del\eball^{n+1}_2)}(x_k)\right)\\ & \leq 2^n\cdot d(K,\del\eball^{n+1}_2)^{-n}\Lambda \end{align*} where in the second inequality here we have used the monotonicity formula for stationary integral varifolds. Moreover, $W_k$ is $(1/k,1,n-6)$-conical at $0$, and $\regscale^k_{W_k}(0) = (\rho_k/2)^{-1}\regscale^k_{V_k}(x_k)<1/2$. In particular, $(W_k)_k$ satisfies the conditions of Theorem \ref{compactness theorem}, and so there is a subsequence $(W_{k_j})_j$ and $W\in \salpha_0$ such that $W_{k_j}\weakly W$ as varifolds in $\eball^{n+1}_2(0)$. However, as $W_k$ is $(1/k,1,n-6)$-conical for each $k$, this implies that $W$ must be a cone, with $\dim_\H(S(W))\geq n-6$. However, as $W\in \salpha_0$, we know $\dim_\H(\sing(W))\leq n-7$, and thus this implies that $W$ must be a hyperplane of some integer multiplicity. But then, as the $W_k$ are stationary and so varifold convergence implies local convergence of the supports in Hausdorff distance (see, e.g. \cite{mondino}*{Proposition 3.9}), it follows from Theorem \ref{sheeting theorem} (with $\theta=1/2$) that for all $j$ sufficiently large, we have $\regscale^{Q_0}_{W_{k_j}}(0) \geq 1/2$ for some $Q_0 = Q_0(n,\Lambda,K)$; this is a direct contradiction to the fact that $\regscale^{k_j}_{W_{k_j}}(0)<1/2$ for all $j$ sufficiently large (since $\regscale^{k_j}_{W_{k_j}}(0)\geq \regscale^{Q_0}_{W_{k_j}}(0)$ once $k_j\geq Q_0$). Hence, the proof is completed. \end{proof} \textbf{Remark:} The above proof is slightly different from that for codimension one area minimisers seen in \cite{cheeger-naber2013}, as it does not require estimates on the various stratifications.
We also note that the above proof shows that the dependence on the compact set $K\subset\eball^{n+1}_2$ of the constants $\varepsilon_0$ and $Q_0$ in Theorem \ref{epsilon regularity} is in fact only on a lower bound for $d(K,\del\eball^{n+1}_2(0))$ (the same is true for the next Corollary also). Finally, we remark the following corollary of Theorem \ref{epsilon regularity}, which is essentially a rephrasing of the result when $n\geq 7$ in terms of the quantitative strata. \begin{corollary}\label{corollary of sheeting theorem} Let $n\in \mathbb{Z}_{\geq 7}$, $\Lambda\in (0,\infty)$, and $K\subset\eball^{n+1}_2$ be a compact subset. Then, there exist constants $\varepsilon_0 = \varepsilon_0(n,\Lambda,K)\in (0,1)$ and $Q_0 = Q_0(n,\Lambda,K)\in \mathbb{Z}_{\geq 1}$ such that the following is true: if $V\in \ivarifolds_n(\eball^{n+1}_2)$ is a stationary integral varifold obeying: \begin{enumerate} \item [\textnormal{(a)}] $\|V\|(\eball^{n+1}_2)\leq \Lambda$; \item [\textnormal{(b)}] $(\eta_{x,d(K,\del\eball_2^{n+1})})_\#V\in \salpha_0$ for all $x\in K$; \end{enumerate} then we have $\badreg^{Q_0}_{\sigma/5}(V)\cap K\subset \strata^{n-7}_{\varepsilon_0,\sigma/2,d(K,\del\eball^{n+1}_2)}(V)\cap K$ for all $\sigma\in (0,d(K,\del\eball^{n+1}_2)]$. \end{corollary} \begin{proof} Let $\varepsilon_0 = \varepsilon_0(n,\Lambda,K)\in(0,1)$ and $Q_0 = Q_0(n,\Lambda,K)\in \mathbb{Z}_{\geq 1}$ be as in Theorem \ref{epsilon regularity}. If the result were not true with this choice of $\varepsilon_0$ and $Q_0$, then we could find $\sigma\in (0,d(K,\del\eball^{n+1}_2)]$ and $x\in \badreg^{Q_0}_{\sigma/5}(V)\cap K$ with $x\not\in \strata^{n-7}_{\varepsilon_0,\sigma/2,d(K,\del\eball^{n+1}_2)}(V)$, i.e. there exists some $s\in [\sigma,d(K,\del\eball^{n+1}_2))$ for which $V$ is $(\varepsilon_0,s/2,n-6)$-conical at $x$.
Hence, as $\eball_s(x)\subset \eball_{d(K,\del\eball^{n+1}_2)}(x)$, we may apply Theorem \ref{epsilon regularity} to see that $\regscale^{Q_0}_V(x)\geq {\sigma/4}$, which is a contradiction. \end{proof} \section{Singular set estimates}\label{sec:estimates} In this section we prove Theorem \ref{thm:A}, i.e. local estimates for the measure of tubular neighbourhoods of the singular set of codimension one stationary integral varifolds in Euclidean space whose regular part has bounded index. The following result is the main measure bound on the quantitative strata from \cite{naber-valtorta} (in the codimension one setting): \begin{theorem}[\cite{naber-valtorta}*{Theorems 1.3 and 1.4}]\label{quantitative estimates bounded variation} Let $n\in\mathbb{Z}_{\geq 2}$, $\Lambda\in (0,\infty)$, $H\in [0,\infty)$, and $\varepsilon\in (0,1)$. Then there exists a constant $C_\varepsilon = C_\varepsilon(n,\Lambda,H,\varepsilon)\in (0,\infty)$ such that the following is true: if $V\in\ivarifolds_n(\eball^{n+1}_2)$ has $\|V\|(\eball^{n+1}_2)\leq\Lambda$ and has first variation bounded by $H$ in $\eball^{n+1}_2(0)$, then for each $k\in \{0,1,\dotsc,n\}$ we have $$\H^{n+1}\left(\eball_r(\strata^k_{\varepsilon,r,1}(V))\cap \eball_1\right) \leq C_\varepsilon r^{n+1-k}\ \ \ \ \text{for all }r\in (0,1].$$ \end{theorem} We remark that Theorem \ref{quantitative estimates bounded variation} has an immediate rescaled version, which we state in the special case of \textit{stationary} integral varifolds for sake of simplicity (as it is all we will need for our results): \begin{corollary}\label{rescaled quantitative estimates} Let $n\in\mathbb{Z}_{\geq 2}$, $\Lambda\in (0,\infty)$, and $\varepsilon\in (0,1)$.
Then, there exists a constant $C_\varepsilon = C_\varepsilon(n,\Lambda,\varepsilon)\in (0,\infty)$ such that the following is true: for any $R\in (0,1/2]$ and any stationary integral varifold $V\in \ivarifolds_n(\eball^{n+1}_2)$ with $\|V\|(\eball_2^{n+1}) \leq \Lambda$, we have $$\H^{n+1}\left(\eball_r(\strata^k_{\varepsilon,r,R}(V))\cap \eball_R(x)\right)\leq C_\varepsilon r^{n+1-k}R^k\ \ \ \ \text{for all }r\in (0,R]\text{ and }x\in\eball_1^{n+1}.$$ \end{corollary} \begin{proof} Since $\eball^{n+1}_{2R}(x)\subset\eball_2^{n+1}$, $W:= (\eta_{x,R})_\#V$ is a stationary integral varifold in $\eball^{n+1}_2$, and moreover from the monotonicity formula we know that $$\|W\|(\eball_2^{n+1}) \equiv R^{-n}\|V\|(\eball^{n+1}_{2R}(x))\leq 2^n\|V\|(B_1(x))\leq 2^n\Lambda.$$ By Lemma \ref{rescaled strata} with $\rho = R$, we know that $\eta_{x,R}\left(\strata^k_{\varepsilon,r,R}(V)\right) = \strata^k_{\varepsilon,r/R,1}(W)$. Thus, if $C_\varepsilon = C_\varepsilon(n,2^n\Lambda,0,\varepsilon)\in (0,\infty)$ is the constant from Theorem \ref{quantitative estimates bounded variation} with these parameters, then we know from Theorem \ref{quantitative estimates bounded variation} that $$\H^{n+1}\left(\eball_\rho(\strata^k_{\varepsilon,\rho,1}(W))\cap\eball_1\right) \leq C_\varepsilon \rho^{n+1-k}\ \ \ \ \text{for all }\rho\in (0,1].$$ Hence, \begin{align*} \H^{n+1}\left(\eball_r(\strata^k_{\varepsilon,r,R}(V))\cap\eball_R(x)\right) & = \H^{n+1}\left[\eball_r\left(\eta^{-1}_{x,R}(\strata^k_{\varepsilon,r/R,1}(W))\right)\cap \eball_R(x)\right]\\ & = \H^{n+1}\left[\eta^{-1}_{x,R}\left(\eball_{r/R}(\strata^k_{\varepsilon,r/R,1}(W))\right)\cap \eball_R(x)\right]\\ & = \H^{n+1}\left[\eta^{-1}_{x,R}\left(\eball_{r/R}(\strata^k_{\varepsilon,r/R,1}(W))\cap \eball_1\right)\right]\\ & = R^{n+1}\H^{n+1}\left(\eball_{r/R}(\strata^k_{\varepsilon,r/R,1}(W))\cap \eball_1\right)\\ & \leq R^{n+1}\cdot C_\varepsilon(r/R)^{n+1-k}\\ & = C_\varepsilon r^{n+1-k}R^k \end{align*} as desired.
\end{proof} Our first estimate toward proving Theorem \ref{thm:A} is independent of the Naber--Valtorta measure estimates of Theorem \ref{rescaled quantitative estimates} and does not require any assumption on the size of the singular set. It applies to tubular neighbourhoods about the points where the stability radius is at most the size of the tubular radius; morally, this is when the stability radius is ``small''. We will see that in fact we can get a much better bound on this set: \begin{lemma}\label{low stability bound} Let $n\in\mathbb{Z}_{\geq 2}$. Then, there exists $C_0 = C_0(n)\in (0,\infty)$ such that for any stationary integral varifold $V\in \ivarifolds_n(\eball^{n+1}_2)$ with $\Index(\reg(V))<\infty$, we have $$\H^{n+1}\left(\eball_r(s_V^{-1}(0,r))\cap \eball_{1/2}\right) \leq C_0\folding(V)r^{n+1}\ \ \ \ \text{for all }r\in (0,1/2].$$ \end{lemma} \begin{proof} Set $A:= s_V^{-1}(0,r)\cap \eball^{n+1}_1$. Then as $\mathcal{A}:= \{\overline{\eball}_{s_V(a)}(a): a\in A\}$ is a Besicovitch cover of $A$, the Besicovitch covering theorem gives the existence of a constant $N_0 = N_0(n)$ and subcollections $A_1,\dotsc,A_N\subset A$, where $N\leq N_0$, for which each individual collection $\mathcal{A}_j:= \{\overline{\eball}_{s_V(a)}(a):a\in A_j\}$ consists of pairwise disjoint balls, and moreover we still have $$A\subset\bigcup^N_{j=1}\bigcup_{a\in A_j}\overline{\eball}_{s_V(a)}(a).$$ We now claim that $|A_j|\leq \folding(V)$ for each $j=1,\dotsc,N$ (recall that $\folding(V)\leq \Index(\reg(V);\eball_2^{n+1})$, so in particular $\folding(V)<\infty$ by assumption). Indeed, if this failed, we would be able to find distinct $a_1,\dotsc,a_{\folding(V)+1}$ in some $A_j$. But then, as $\overline{B}_{s_V(a_i)}(a_i)$ are pairwise disjoint for $i=1,\dotsc,\folding(V)+1$, we can find $\delta>0$ for which $\eball_{s_V(a_i)+\delta}(a_i)$ are pairwise disjoint for $i=1,\dotsc,\folding(V)+1$.
But by definition of $s_V$, $\reg(V)$ is unstable in each $\eball_{s_V(a_i)+\delta}(a_i)$, and so this would imply that $\folding(V)\geq \folding(V)+1$, which is clearly a contradiction as $\folding(V)<\infty$ (by Lemma \ref{bounded index disjoint sets}). Thus, $|A_j|\leq \folding(V)$ for each $j=1,\dotsc,N$. Now, as $s_V(a)<r$ for all $a\in A$, we then have for any $\varepsilon>0$, $$\eball_r(A)\subset\bigcup^N_{j=1}\bigcup_{a\in A_j}\eball_{2r+\varepsilon}(a)$$ and therefore: $$\H^{n+1}(\eball_r(A))\leq N\cdot\folding(V)\cdot\omega_{n+1}(2r+\varepsilon)^{n+1}.$$ Hence the result follows, with $C_0 := 2^{n+1}N_0\omega_{n+1}$, by taking $\varepsilon\downarrow 0$ and then noting that $\eball_r(s_V^{-1}(0,r))\cap \eball_{1/2}\subset \eball_r(A)$ for $r\in (0,1/2]$. \end{proof} The next lemma is the key covering argument we need to prove Theorem \ref{thm:A}. The idea is that control on the folding number provides control on the size of a covering by smaller stable balls. \begin{lemma}\label{covering lemma} Let $n\in \mathbb{Z}_{\geq 2}$. Then, there exists a constant $C_1 = C_1(n)\in (0,\infty)$ such that the following holds: if $V\in\ivarifolds_n(\eball^{n+1}_2)$ is a stationary integral varifold with $\folding(V)<\infty$, then for any $0<a<b<\infty$, $\gamma\in (0,1)$, and subset $A\subset s_V^{-1}([a,b])\cap \eball_1$, there exists a finite set $B\subset A$ such that $$A\subset\bigcup_{y\in B}\eball^{n+1}_{\gamma s_V(y)}(y)$$ and moreover we have the size bound $$|B|\leq C_1\folding(V)\cdot\left(\frac{b}{a}\right)^{n+1}\left(1+\gamma^{-1}\right)^{n+1}.$$ \end{lemma} \begin{proof} We first prove that for such $A$ we have $\H^{n+1}(B_r(A))\leq C\folding(V)\omega_{n+1}(b+r)^{n+1}$ for some $C = C(n)$; this will follow in much the same way it did in Lemma \ref{low stability bound}. Indeed, consider the Besicovitch covering of $A$ given by $\mathcal{A}:= \{\overline{\eball}_{s_V(x)}(x):x\in A\}$.
The Besicovitch covering theorem then gives the existence of a constant $N_0 = N_0(n)$ and (countable) subcollections $A_1,\dotsc,A_N\subset A$, where $N\leq N_0$ and, for each $i=1,\dotsc,N$, $\{\overline{\eball}_{s_V(x)}(x):x\in A_i\}$ is formed of pairwise disjoint balls, and moreover $$A\subset\bigcup^N_{i=1}\bigcup_{x\in A_i}\overline{\eball}_{s_V(x)}(x).$$ We then claim that in fact $|A_i|\leq \folding(V)$ for each $i=1,\dotsc,N$. Indeed, if not, then we can find $i$ (which without loss of generality we can assume to be $i=1$) such that $|A_1|\geq \folding(V)+1$. In particular, we can find points $x_1,\dotsc,x_{\folding(V)+1}\in A_1$, and for these points we can then find a $\delta>0$ for which $\eball_{s_V(x_1)+\delta}(x_1),\dotsc,\eball_{s_V(x_{\folding(V)+1})+\delta}(x_{\folding(V)+1})$ are disjoint. As $\reg(V)$ is unstable in each $\eball_{s_V(x_j)+\delta}(x_j)$ for $j=1,\dotsc,\folding(V)+1$, this contradicts the definition of $\folding(V)$ (as again, by assumption and Lemma \ref{bounded index disjoint sets} we know $\folding(V)<\infty$); hence this shows that $|A_i|\leq \folding(V)$ for each $i=1,\dotsc,N$. Hence, as we have that for any $r>0$ $$\eball_{r}(A)\subset\bigcup_{i=1}^N\bigcup_{x\in A_i}\overline{\eball}_{s_V(x)+r}(x),$$ this immediately gives \begin{equation}\label{E:cover-1} \H^{n+1}(\eball_r(A)) \leq N\cdot \folding(V)\cdot \omega_{n+1}(b+r)^{n+1} \end{equation} as here we have $s_V(x)\leq b$ for all $x\in A$. We now use \eqref{E:cover-1} to prove the lemma. Consider a new Besicovitch cover of $A$ given by $\mathcal{B}:= \{\overline{\eball}_{\frac{1}{2}\gamma s_V(x)}(x): x\in A\}$. 
Applying the Besicovitch covering theorem again, we again get the existence of $M\leq N_0$ and (countable) subcollections $B_1,\dotsc,B_M \subset A$ where, for $i=1,\dotsc,M$, $\{\overline{B}_{\frac{1}{2}\gamma s_V(x)}(x): x\in B_i\}$ consists of pairwise disjoint balls, and moreover \begin{equation}\label{E:cover-3} A\subset\bigcup^M_{i=1}\bigcup_{x\in B_i}\overline{\eball}_{\frac{1}{2}\gamma s_V(x)}(x). \end{equation} We now claim that each $B_i$ is a finite set, and moreover for each $i=1,\dotsc,M$ we have \begin{equation}\label{E:cover-2} |B_i|\leq C\folding(V)\left(\frac{b}{a}\right)^{n+1}(1+\gamma^{-1})^{n+1}. \end{equation} To see this, consider any finite subset $\{x_1,\dotsc,x_L\}\subset B_i$ (for some $i=1,\dotsc,M$). Then, we clearly have $$\bigcup^L_{j=1}\eball_{\frac{1}{2}\gamma s_V(x_j)}(x_j)\subset\bigcup^L_{j=1}\eball_{\frac{1}{2}\gamma b}(x_j)\subset \eball_{\gamma b}(A).$$ Thus, as the left-hand side consists of pairwise disjoint balls, \eqref{E:cover-1} with $r= b\gamma$ gives $$L\omega_{n+1} \left(\frac{1}{2}\gamma a\right)^{n+1} \leq C\folding(V)\omega_{n+1}(b + b\gamma)^{n+1}$$ i.e. $L \leq C\folding(V)\left(\frac{b}{a}\right)^{n+1}(1+\gamma^{-1})^{n+1}$, where $C = C(n)$; this proves \eqref{E:cover-2}. Thus, as $\overline{\eball}_{\frac{1}{2}\gamma s_V(x)}(x)\subset \eball_{\gamma s_V(x)}(x)$, the lemma follows from \eqref{E:cover-3} by taking $B:= \cup_{i=1}^M B_i$. \end{proof} We can now prove Theorem \ref{thm:A}. Since we know that any $V\in \salpha$ has countably $(n-7)$-rectifiable singular set (see the remark following Definition \ref{s-alpha condition}), all that remains is to prove the following: \begin{theorem}\label{thm:A-2} Let $n\in \mathbb{Z}_{\geq 8}$, $\Lambda>0$.
Then, there exists $C_0 = C_0(n,\Lambda)\in (0,\infty)$ such that for any $V\in \salpha$ with $\|V\|(\eball^{n+1}_2)\leq \Lambda$, we have for all $r\in (0,1/2]$ $$\H^{n+1}\left(\eball_{r/8}(\sing(V))\cap \eball_{1/2}\right)\leq C_0(1+\folding(V))r^8$$ $$\|V\|\left(\eball_{r/8}(\sing(V))\cap \eball_{1/2}\right)\leq C_0(1+\folding(V))r^7.$$ \end{theorem} \begin{proof} Fix $r\in (0,1/2]$, and observe that $\eball_{r/8}(\sing(V))\cap \eball_{1/2}\subset \eball_{r/8}(\sing(V)\cap \eball_1)$. Write $$\sing(V) = \left(s_V^{-1}(0,r)\cup s_V^{-1}([r,1])\cup s_V^{-1}((1,\infty])\right)\cap \sing(V)$$ and so \begin{align*} \H^{n+1}\left(\eball_{r/8}(\sing(V))\cap \eball_{1/2}\right) & \leq \H^{n+1}\left(\eball_{r/8}(s_V^{-1}(0,r)\cap \sing(V))\cap \eball_{1/2}\right)\\ & + \H^{n+1}\left(\eball_{r/8}(s_V^{-1}([r,1])\cap \sing(V)\cap \eball_{1})\right)\\ & + \H^{n+1}\left(\eball_{r/8}(s_V^{-1}((1,\infty])\cap \sing(V))\cap \eball_{1/2}\right). \end{align*} We will bound each term individually. Note firstly that Lemma \ref{low stability bound} gives $$\H^{n+1}\left(\eball_{r/8}(s_V^{-1}(0,r)\cap\sing(V))\cap \eball_{1/2}\right) \leq C_0\folding(V)r^{n+1}$$ where $C_0 = C_0(n)\in (0,\infty)$; this deals with the first term. For the third term, note that if $x\in s_V^{-1}((1,\infty])\cap \sing(V)\cap \eball_1$, then in particular $x\in \sing(V)$, and so Lemma \ref{regular implies positive regscale} gives $\regscale^Q_V(x)=0$ for some positive integer $Q$. Hence, if we take $K = \overline{s_V^{-1}((1,\infty])\cap \eball_1}$ in Corollary \ref{corollary of sheeting theorem}, it gives that $$s_V^{-1}((1,\infty])\cap \sing(V)\cap \eball_1 \subset \strata^{n-7}_{\varepsilon_0,r,1/2}(V)$$ for each $r\in (0,1/2]$, where $\varepsilon_0 = \varepsilon_0(n,\Lambda)$.
Hence, \begin{align*} \H^{n+1}\left(\eball_{r/8}(s_V^{-1}((1,\infty])\cap\sing(V))\cap \eball_{1/2}\right) & \leq \H^{n+1}(\eball_{r/8}(\strata^{n-7}_{\varepsilon_0,r,1/2})\cap \eball_{1/2})\\ & \leq C_{\varepsilon_0}r^{8}(1/2)^{n-7} \end{align*} where $C_{\varepsilon_0} = C_{\varepsilon_0}(n,\Lambda,\varepsilon_0)$ only depends on $n$ and $\Lambda$. Hence, all that remains to bound is the second term above, where the stability radius is in $[r,1]$. Set $k_0:= \min\{k\in\mathbb{Z}_{\geq 1}:2^{-k}\leq r\}$. Then for $j=1,\dotsc,k_0$, set $$A_j:= s_V^{-1}([2^{-j},2^{-j+1}])\cap\sing(V)\cap \eball_1.$$ Then note that $$s_V^{-1}([r,1])\cap\sing(V)\cap\eball_1\subset A_1\cup\cdots \cup A_{k_0}.$$ Now, applying Lemma \ref{covering lemma} with $a = 2^{-j}$, $b=2^{-j+1}$, $\gamma = 1/8$, we obtain for each $j=1,\dotsc,k_0$ a set $B_j\subset A_j$ with $|B_j|\leq 16^{n+1}C_1\folding(V)$, where $C_1 = C_1(n)\in (0,\infty)$, and $$A_j\subset \bigcup_{y\in B_j}\eball_{s_V(y)/8}(y).$$ Observe that, by continuity of the stability radius (Lemma \ref{stability radius continuity}), we know that $s_V\geq 2^{-j}$ on $\overline{A}_j$, and thus for each $x\in \overline{A}_j$ we know that $\reg(V)$ is stable in $\eball_{2^{-j}}(x)$. Hence, using Lemma \ref{regular implies positive regscale} and Corollary \ref{corollary of sheeting theorem} (with $K = \overline{A}_j$, which has $d(K,\del \eball^{n+1}_2)\geq 1$, $\sigma = 5r/8$) we have that $$A_j\subset \strata^{n-7}_{\varepsilon_0,r/8,2^{-j-1}}(V)$$ where here $\varepsilon_0 = \varepsilon_0(n,\Lambda)$ (note that $r/8 < (1/8)2^{-k_0+1} = 2^{-k_0-2}<2^{-j-1}$ for any $j=1,\dotsc,k_0$). 
Thus, in particular we have $$\eball_{r/8}(A_j)\subset\bigcup_{y\in B_j}\eball_{r/8}\left(\strata^{n-7}_{\varepsilon_0,r/8,2^{-j-1}}(V)\right)\cap \eball_{s_V(y)/8 + r/8}(y)$$ and as $s_V(y)\leq 2^{-j+1}$ in $A_j$, $$\eball_{r/8}(A_j)\subset\bigcup_{y\in B_j}\eball_{r/8}\left(\strata^{n-7}_{\varepsilon_0,r/8,2^{-j-1}}(V)\right)\cap \eball_{2^{-j-1}}(y).$$ Hence, by Corollary \ref{rescaled quantitative estimates}, we can find a constant $C^\prime = C^\prime(n,\Lambda)$ (for this choice of $\varepsilon_0 = \varepsilon_0(n,\Lambda)$) such that for each $y\in B_j$ we have $$\H^{n+1}\left(\eball_{r/8}\left(\strata^{n-7}_{\varepsilon_0,r/8,2^{-j-1}}(V)\right)\cap \eball_{2^{-j-1}}(y)\right) \leq C^\prime\cdot\left(r/8\right)^8\cdot (2^{-j-1})^{n-7}.$$ Hence, combining, we have $$\H^{n+1}(\eball_{r/8}(A_j)) \leq |B_j|\cdot C^\prime\cdot (r/8)^8 \cdot (2^{-j-1})^{n-7} \leq C_* \folding(V)r^8\cdot \left(2^{-(n-7)}\right)^{j+1}$$ and this is true for each $j=1,\dotsc,k_0$; here $C_* = C_*(n,\Lambda)$. Therefore, $$\H^{n+1}\left(\eball_{r/8}(A_1\cup\cdots\cup A_{k_0})\right) \leq C_*\folding(V)r^8\cdot\sum^{k_0}_{j=1}\left(\frac{1}{2^{n-7}}\right)^{j+1}.$$ Since $n\geq 8$, the sum on the right-hand side is bounded above by the finite number $\sum^\infty_{j=0} \left(\frac{1}{2^{n-7}}\right)^{j}$, which only depends on $n$; this completes the bound of the second term. So combining, we have $$\H^{n+1}\left(\eball_{r/8}(\sing(V))\cap \eball_{1/2}\right) \leq C_0\folding(V) r^{n+1} + C^*_1\folding(V)r^8 + C_2 r^{8} \leq \tilde{C}(1+\folding(V))r^8$$ for some constants $C_0,C^*_1,C_2,\tilde{C}$ only depending on $n$ and $\Lambda$; here we have used that $r^{n+1}<r^8$; this therefore completes the proof of the first claimed bound. To prove the second inequality, we shall use the first to prove a packing estimate.
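As a quick numerical sanity check (illustrative only, and not part of the argument), one can verify that the partial sums of the geometric series with ratio $2^{-(n-7)}$ appearing in the bound of the second term are uniformly bounded in $k_0$ once $n\geq 8$:

```python
def partial_sum(n, k0):
    """Partial sum sum_{j=1}^{k0} q^(j+1) with q = 2^(-(n-7)),
    as in the bound of the second term."""
    q = 2.0 ** (-(n - 7))
    return sum(q ** (j + 1) for j in range(1, k0 + 1))

def geometric_bound(n):
    """Closed form of the full geometric series sum_{j=0}^infty q^j,
    which is finite precisely because q < 1 when n >= 8."""
    q = 2.0 ** (-(n - 7))
    return 1.0 / (1.0 - q)
```

For example, for $n = 8$ the ratio is $1/2$ and every partial sum stays below the limiting value $2$; the key point is that the bound is independent of $k_0$, and hence of $r$.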
Indeed, we claim the following: for any $r\in (0,1/4]$, we can find a covering of $\sing(V)\cap B_{1/4}$ by balls $\{B_r(x_i)\}_{i=1}^N$ with $x_i\in\sing(V)$ and $N\leq C_3(1+\folding(V))r^{-(n-7)}$, where $C_3 = C_3(n,\Lambda)$. To see this, simply choose $x_i\in \sing(V)\cap B_{1/4}$ a maximal collection of points such that $\{B_{r/2}(x_i)\}_{i}$ is a pairwise disjoint collection. Then, by construction we have $\sing(V)\cap B_{1/4} \subset \bigcup_{i=1}^N B_r(x_i)$, and \begin{align*} N r^{n-7} = \sum_i r^{n-7} & \leq \omega_{n+1}^{-1}2^{n+1}\cdot r^{-8}\sum_i \H^{n+1}(B_{r/2}(x_i))\\ &\leq \omega_{n+1}^{-1}2^{n+1}\cdot r^{-8} \H^{n+1}(B_{r/2}(\sing(V)\cap B_{1/4}))\\ & \leq \omega_{n+1}^{-1}2^{n+1}\cdot r^{-8}\cdot C(n,\Lambda)(1+\folding(V))r^{8}\\ & \equiv C_3(1+\folding(V)) \end{align*} where $C_3 = \omega^{-1}_{n+1}2^{n+1}C$ and $C = C(n,\Lambda)$ is the constant from the first inequality we have already established; this proves the packing estimate claimed. Thus, using this cover we now have \begin{align*} \|V\|(B_{r/8}(\sing(V))\cap B_{1/8}) & \leq \|V\|(B_{r}(\sing(V)\cap B_{1/4}))\\ &\leq \|V\|\left(\bigcup_{i=1}^NB_{2r}(x_i)\right) \leq \sum_{i=1}^N\|V\|(B_{2r}(x_i))\\ & \hspace{8.5em}\leq \sum_{i=1}^N (2r)^n\|V\|(B_1(x_i)) \leq 2^n\Lambda r^n N \leq 2^n\Lambda C_3(1+\folding(V)) r^7 \end{align*} where in the fourth inequality we have used the monotonicity formula for stationary integral varifolds to get that, as $x_i\in \sing(V)\cap B_{1/4}$ and $0<2r<1$, $(2r)^{-n}\|V\|(B_{2r}(x_i)) \leq \|V\|(B_1(x_i))\ (\leq \Lambda)$. Covering $B_{1/2}$ by at most $C(n)$ balls of radius $1/8$, repeating the same argument in each ball, and summing the estimates completes the proof of the second estimate. \end{proof} We also note the following theorem, which generalises \cite{naber-valtorta}*{Theorem 1.8}: \begin{theorem} Let $n\in\mathbb{Z}_{\geq 8}$ and $\Lambda>0$.
Then, there exists a constant $C_0 = C_0(n,\Lambda)\in (0,\infty)$ such that for any $V\in \salpha$ and $r\in (0,1/2]$, $$\H^{n+1}(\{x\in\spt\|V\|: |A_V(x)|>r^{-1}\}\cap \eball_{1/2})\leq C_0(1+\folding(V))r^8;$$ $$\H^{n+1}(\badreg_{r}(V)\cap \eball_{1/2}) \leq C_0(1+\folding(V))r^8;$$ $$\|V\|(\badreg_r(V)\cap \eball_{1/2}) \leq C_0(1+\folding(V))r^7.$$ \end{theorem} \begin{proof} The second estimate follows from Corollary \ref{corollary of sheeting theorem} and Corollary \ref{rescaled quantitative estimates} in much the same way we have already seen in the proof of Theorem \ref{thm:A-2}. The first follows from the second along with the fact that $|A_V(x)|\leq \left[\regscale_V^Q(x)\right]^{-1}$ (for the appropriate choice of $Q = Q(x)$) and so this set is contained within the set of points where $\regscale_V^Q(x)< r$, i.e. $\badreg_r(V)$. The third inequality then follows from the second in the same way as the second inequality in Theorem \ref{thm:A-2} followed from the first in that setting. \end{proof} \section{Generalization to Riemannian Manifolds: Theorem \ref{thm:B1}} In this section we will detail how one can modify the proof of Theorem \ref{thm:A} seen in the previous sections to prove Theorem \ref{thm:B1}. So fix $(N^{n+1},g)$ a smooth Riemannian manifold. Write $\exp_x$ for the exponential map of $N$ at $x\in N$ and $\inj(x)\in (0,\infty]$ for the injectivity radius of $N$ at $x$. Let us first precisely define what we mean by the index of the regular part of a stationary integral varifold $V$ in $N$, adapting Definition \ref{definition bounded index} using \cite{wickstable}*{Section 18}. First, we define what it means for $V$ to have finite index on its regular part on a normal coordinate ball in $N$. So let $x\in \spt\|V\|$, and let $\mathcal{N}_{\rho}(x)$ be the normal coordinate ball of radius $\rho\in (0,\inj(x))$ around $x$, which we will assume also obeys $\dim_\H(\sing(V)\cap \mathcal{N}_\rho(x))\leq n-7$. 
Set $\tilde{V}:= \left(\exp^{-1}_x\right)_\#(V\restrictv \mathcal{N}_\rho(x))$, which is then an integral $n$-varifold on $B^{n+1}_\rho(0)\subset T_{x}N\cong \R^{n+1}$, which is stationary with respect to the functional $$\mathcal{F}_{x}(\tilde{V}):= \int_{B^{n+1}_\rho(0)\times G(n,n+1)}|\Lambda_n D\exp_x(y)\circ S|\ d\tilde{V}(y,S).$$ For $\psi\in C^1_c(B^{n+1}_\rho(0)\backslash\sing(\tilde{V});\R^{n+1})$, the second variation with respect to $\mathcal{F}_x$ is then given by (see \cite{SS}*{(1.8), (1.10), (1.12)}) $$\delta^2_{\mathcal{F}_x}\tilde{V}(\psi) = \int_{\reg(\tilde{V})}\left\{\sum^n_{i=1}|(D_{\tau_i}\psi)^\perp|^2 + (\divergence_{\reg(\tilde{V})}\psi)^2 - \sum^n_{i,j=1}(\tau_i\cdot D_{\tau_j}\psi)\cdot(\tau_j\cdot D_{\tau_i}\psi)\right\}\ d\H^n + R(\psi)$$ where $\{\tau_1,\dotsc,\tau_n\}$ is an orthonormal basis for the tangent space $T_y(\reg(\tilde{V}))$ of $\reg(\tilde{V})$ at $y$, $D_\tau\psi$ denotes the directional derivative of $\psi$ in the direction $\tau$, and $$|R(\psi)|\leq c\mu\int_{\reg(\tilde{V})}\left\{\tilde{c}\mu|\psi|^2 + |\psi||\nabla\psi| + |y||\nabla\psi|^2\right\}\ d\H^n(y)$$ where $c,\tilde{c}$ are absolute constants and $\mu$ is a constant depending only on the metric on $N$. 
Since $\reg(\tilde{V})$ is orientable on this ball (as the size of the singular set is sufficiently small and $\tilde{V}$ is codimension one), we may make a continuous choice of unit normal $\nu$ to $\reg(\tilde{V})$ and, for any $\zeta\in C^1_c(\reg(\tilde{V}))$, extend $\zeta\nu$ to a vector field in $C^1_c(B^{n+1}_\rho(0)\backslash\sing(\tilde{V});\R^{n+1})$ and take in the above $\psi = \zeta\nu$ to deduce that $$\delta^2_{\mathcal{F}_x}\tilde{V}(\zeta)\equiv\delta^2_{\mathcal{F}_x}\tilde{V}(\psi) = \int_{\reg(\tilde{V})}\left\{|\nabla\zeta|^2 - |A|^2\zeta^2 + H^2\zeta^2\right\}\ d\H^n + R(\psi)$$ where $A$ denotes the second fundamental form of $\reg(\tilde{V})$, $|A|$ the length of $A$, $H$ the mean curvature of $\reg(\tilde{V})$, and $$|R(\psi)| \leq c\mu\int_{\reg(\tilde{V})}\left\{\tilde{c}\mu|\zeta|^2 + |\zeta||\nabla\zeta| + \zeta^2|A| + |y||\nabla\zeta|^2 + |y|\zeta^2|A|^2\right\}\ d\H^n(y).$$ We then say that the \textit{index of the regular part of $V$ in the normal coordinate ball $\mathcal{N}_\rho(x)$} (which we stress was assumed to obey $\dim_\H(\sing(V)\cap \mathcal{N}_\rho(x))\leq n-7$), denoted $\index(\reg(V);\mathcal{N}_\rho(x))$, is the dimension of the largest subspace $P\subset C^1_c(\reg(\tilde{V}))$, where $\tilde{V}$ is as above, such that for all $\zeta\in P\backslash\{0\}$, $$\delta^2_{\mathcal{F}_x}\tilde{V}(\zeta) <0.$$ For a collection of disjoint normal coordinate balls $\mathcal{B}:= \{\mathcal{N}_{\rho_\beta}(x_\beta)\}_{\beta\in B}$ in $N$ such that $\dim_\H(\sing(V)\cap \mathcal{N}_{\rho_\beta}(x_\beta))\leq n-7$ for each $\beta\in B$, we say that the index of this collection, denoted $\index(\reg(V);\mathcal{B})$, is $$\index(\reg(V);\mathcal{B}) := \sum_{\beta\in B}\index(\reg(V);\mathcal{N}_{\rho_\beta}(x_\beta)).$$ Finally, we say that the \textit{index of the regular part of} $V$, denoted $\index(\reg(V))$, is $$\index(\reg(V)) := \sup_{\mathcal{B}}\index(\reg(V);\mathcal{B})$$ where this supremum is taken over all collections
$\mathcal{B}$ of disjoint normal coordinate balls in $N$ with small singular set as above. We can then define the index of the regular part of $V$ in a subset $A\subset N$, denoted $\index(\reg(V);A)$, via the index of the regular part of $V\restrictv A$. We say that the regular part of $V$ is \textit{stable} if $\index(\reg(V)) = 0$. One may then prove, in an analogous manner to that seen in Lemma \ref{locally stable}, that about each point $x\in \spt\|V\|$, there is a radius $\rho_x\in (0,\inj(x))$ such that $\index(\reg(V);B^N_{\rho_x}(x)) = 0$, i.e. $V$ is stable in $B^N_{\rho_x}(x)$; here, $B^N_\rho(x)$ is the usual Riemannian ball in $N$ centred at $x$ of radius $\rho$, defined to be the set of points $y$ in $N$ such that the infimum of the lengths of all paths connecting $x$ to $y$ is $<\rho$. In particular, if we assume that $\H^{n-1}(\sing(V)\cap N) = 0$, one can then invoke the regularity theory of \cite{wickstable}*{Theorem 18.1} to see that necessarily $\dim_\H(\sing(V)\cap N)\leq n-7$, and so in fact the above definition of index includes each normal coordinate ball $B^N_\rho(x)$ in $N$. We then define the \textit{stability radius} at each $x\in \spt\|V\|$ in the same manner as in Definition \ref{defn:stab-radius}, replacing Euclidean balls in the definition by the balls $B^N_\rho(x)$, i.e. $$s_V(x):= \sup\{r\geq 0:\index(\reg(V);B^N_r(x)) = 0\}.$$ The discussion above tells us that $s_V(x)>0$ at every point $x\in \spt\|V\|$. However, it is now only clear that $s_V$ is \textit{locally} Lipschitz, i.e. for each $x\in \spt\|V\|$, there is a radius $\tilde{\rho}_x>0$ such that $s_V$ is Lipschitz (with Lipschitz constant at most $2$) on the ball $B^N_{\tilde{\rho}_x}(x)$; this follows in an analogous manner to Lemma \ref{stability radius continuity} on sufficiently small balls in $N$ for which $(N^{n+1},g)$ is close to the Euclidean ball of dimension $n+1$. In particular, $s_V$ is still a continuous function (when it is finite, i.e. when $V$ is unstable).
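The Lipschitz property of such a ball-based radius is, at heart, the triangle inequality: if $\reg(V)$ is stable in $B_r(x)$ and $d(x,y)\leq\delta$, then it is stable in $B_{r-\delta}(y)$. A one-dimensional toy model illustrates this (the finite set of "unstable points" below is a hypothetical stand-in for the instability set and is not drawn from the paper); the analogous radius there is $1$-Lipschitz, which in the Riemannian setting degrades to the local Lipschitz constant of at most $2$ quoted above.

```python
def toy_radius(x, unstable_points):
    """Toy 'stability radius': the largest r such that the interval
    (x - r, x + r) avoids every unstable point, i.e. the distance
    from x to the finite set unstable_points."""
    return min(abs(x - b) for b in unstable_points)

def worst_lipschitz_ratio(unstable_points, grid):
    """Largest |s(x) - s(y)| / |x - y| over pairs of distinct grid
    points; for a distance function this never exceeds 1."""
    vals = [toy_radius(x, unstable_points) for x in grid]
    worst = 0.0
    for i in range(len(grid)):
        for j in range(i + 1, len(grid)):
            ratio = abs(vals[i] - vals[j]) / abs(grid[i] - grid[j])
            worst = max(worst, ratio)
    return worst
```

In the Euclidean setting of Lemma \ref{stability radius continuity} the same triangle-inequality argument applies verbatim; on a manifold it only applies once the metric is comparable to the Euclidean one, which is why the statement becomes local.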
We define the \textit{folding number} in the same manner as Definition \ref{defn:folding}, except now we restrict only to open subsets which are given by normal coordinate balls (this restriction is still sufficient for our later purposes). Thus, $\folding(V)$ is defined to be the largest size of a (possibly infinite) collection of disjoint normal coordinate balls which are unstable. Analogously to Lemma \ref{bounded index disjoint sets}, we see that if $\index(\reg(V))\leq I<\infty$, then $\folding(V)\leq I$. Now let us additionally assume that $0\in N$ and for some $K\in (0,\infty)$, we have $\left|\left.\textnormal{sec}\right|_{B^N_2(0)}\right| \leq K$ and $\left.\inj\right|_{B^N_2(0)}\geq K^{-1}$. Let us write $\salpha^*$ for the collection of all stationary integral varifolds $V$ in $B^N_2(0)\subset N$ which have $\H^{n-1}(\sing(V)\cap B^N_2(0)) = 0$ and whose regular part has finite index in the manner explained above. Let us also write $\tilde{\salpha}^*$ for the collection of all varifolds of the form $$\tilde{V}:= (\eta_{0,\rho})_\#(\exp^{-1}_x)_\#(V\restrictv \mathcal{N}_\rho(x))$$ where $V\in \salpha^*$, $x\in \spt\|V\|$, and $\rho\in (0,\inj(x))$. We write $\salpha^*_I$ and $\tilde{\salpha}^*_I$ for the corresponding subsets of $\salpha^*$ and $\tilde{\salpha}^*$ respectively when the varifolds have regular part of index at most $I$. For $\tilde{V}\in \tilde{\salpha}^*$, we may define the regularity scale $\regscale^Q_{\tilde{V}}$ analogously to that in Definition \ref{regularity scale}. We may use this to define the regularity scale of $V\in \salpha^*$ via $$\regscale^{*Q}_{V}(x):= \sup\{\rho\in (0,\inj(x)): \regscale^Q_{\tilde{V}_\rho}(0) = 1\}$$ where $\tilde{V}_\rho := (\eta_{0,\rho})_\#(\exp^{-1}_x)_\#(V\restrictv \mathcal{N}_\rho(x))\in \tilde{\salpha}^*$ and again we set $\sup(\emptyset):= 0$. 
One may now follow the same proof as in Lemma \ref{regular implies positive regscale}, using instead the Riemannian versions of Wickramasekera's regularity theorem (i.e. \cite{wickstable}*{Theorem 18.2 and Theorem 18.3}), to prove that, for $V\in \salpha^*$, $x\in \reg(V)$ if and only if $\regscale^{*Q}_V(x)>0$ for some positive integer $Q$. Wickramasekera's compactness theorem (i.e. Theorem \ref{compactness theorem}) for stable varifolds still holds in the Riemannian setting, appropriately modified. For the reader's convenience, we restate this result in the current setting: \begin{theorem}[\cite{wickstable}*{Theorem 18.1}] Let $(N^{n+1},g)$ be a smooth Riemannian manifold and $x\in N$. Suppose that $(V_i)_i\subset \salpha^*_0$ is a sequence with $x\in \spt\|V_i\|$ for each $i=1,2,\dotsc$ and with $\limsup_{i\to\infty}\|V_i\|(N)<\infty$. Then, there exists a subsequence $(V_{i_j})_j$ and $V\in \salpha^*_0$ with $x\in \spt\|V\|$ such that $V_{i_j}\to V$ as varifolds in $N$ and smoothly (i.e. in the $C^k$ topology for every $k$) locally in $N\backslash\sing(V)$. \end{theorem} To complete the recasting of all results in Section \ref{sec:prelim} to the Riemannian setting, we finally have the following modified version of Theorem \ref{sheeting theorem}, which follows from \cite{wickstable}*{Theorem 18.2} in the same manner as before: \begin{theorem}[Sheeting Theorem, \cite{wickstable}*{Theorem 18.2}]\label{thm:Riemann-sheeting} Let $n\geq 2$, $\Lambda>0$, and $K>0$. Let $(N^{n+1},g)$ be a smooth Riemannian manifold which obeys $0\in N$ and $\left.\inj\right|_{B^N_2(0)}\geq K^{-1}$.
Then, there exists $\varepsilon_0 = \varepsilon_0(n,\Lambda,K)\in (0,1/4)$ and $Q_0 = Q_0(n,\Lambda,K)\in \mathbb{Z}_{\geq 1}$ such that the following is true: whenever $\tilde{V}\in \tilde{\salpha}^*_0$ satisfies: \begin{enumerate} \item [\textnormal{(a)}] $\omega_n^{-1}\|\tilde{V}\|(B^{n+1}_1(0)) \leq \Lambda$; \item [\textnormal{(b)}] $\sigma^{-1}\dist_\H(\spt\|\tilde{V}\|\cap (\R\times B_\sigma(0)), \{0\}\times B_\sigma)< \varepsilon_0$, for some $\sigma<\varepsilon_0$; \item [\textnormal{(c)}] $(\sigma^n\omega_n)^{-1}\|\tilde{V}\|(B^{n+1}_\sigma(0))\leq \Lambda$; \end{enumerate} then we have $\regscale^{Q_0}_{\tilde{V}}(0)\geq \sigma/2$. \end{theorem} Now let us turn our attention to Section \ref{sec:strata}. The notion of a varifold $V\in \salpha^*$ being $(\delta,r,k)$-conical at a point $x\in N$, as originally defined in Definition \ref{defn:conical}, is modified for the Riemannian setting using the corresponding varifold in the varifold class $\tilde{\salpha}^*$ as done in \cite{naber-valtorta}*{Definition 1.1(2)}; in particular, this is only defined when $r<\inj(x)$. This allows us to define the various strata as in Definition \ref{defn:strata} in the same manner. One must be careful as we no longer necessarily have a suitable homothety map $\eta_{x,\rho}$ (as this would modify $N$ also), so Lemma \ref{rescaled strata} must be understood differently and instead as applying only to varifolds in $\tilde{\salpha}^*$. Using Theorem \ref{thm:Riemann-sheeting}, we may prove the following Riemannian variant of Theorem \ref{epsilon regularity}: \begin{theorem}[Riemannian $\varepsilon$-Regularity Theorem]\label{thm:Riemann-epsilon-reg} Let $n\geq 2$ and $\Lambda,K,d\in (0,\infty)$. Let $(N^{n+1},g)$ be a smooth Riemannian manifold which obeys $0\in N$ and $\left.\inj\right|_{B^N_2(0)}\geq K^{-1}$. Let $A\subset B^N_2(0)$ be a compact subset which obeys $d(A,\del B^N_2(0))\geq d$. 
Then, there exist constants $\varepsilon_0 = \varepsilon_0(n,\Lambda,K,d)$ and $Q_0 = Q_0(n,\Lambda,K,d)\in \mathbb{Z}_{\geq 1}$ such that the following holds: if $V\in \salpha^*$, $x\in \spt\|V\|\cap A$, and $\rho\in (0,d]$ satisfy: \begin{enumerate} \item [\textnormal{(a)}] $\|V\|(B^{N}_2(0))\leq \Lambda$; \item [\textnormal{(b)}] $V$ is stable in $B^N_{\rho/2}(x)$; \item [\textnormal{(c)}] $V$ is $(\varepsilon_0,\rho/2,n-6)$-conical at $x$; \end{enumerate} then we have $\regscale^{*Q_0}_V(x)\geq \varepsilon_0\rho$. \end{theorem} This is established in the same manner as Theorem \ref{epsilon regularity}; note that we are again applying the compactness theorem to the corresponding varifolds in $\tilde{\salpha}^*$, which does hold (either by \cite{wickstable}*{Section 18} or also \cite{SS}*{Theorem 2} as our varifolds have sufficiently small singular set here) and then applying Theorem \ref{thm:Riemann-sheeting} with $\sigma = \varepsilon_1/2$, for the choice of constant $\varepsilon_1 = \varepsilon_1(n,\Lambda,K)\in (0,1/4)$ in Theorem \ref{thm:Riemann-sheeting}. We then have from Theorem \ref{thm:Riemann-epsilon-reg} the corresponding Riemannian version of Corollary \ref{corollary of sheeting theorem}: \begin{corollary}\label{cor:Riemann-version} Let $n\geq 7$ and $\Lambda,K,d\in (0,\infty)$. Let $(N^{n+1},g)$ be a smooth Riemannian manifold which obeys $0\in N$ and $\left.\inj\right|_{B^N_2(0)}\geq K^{-1}$. Let $A\subset B^N_2(0)$ be a compact subset which obeys $d(A,\del B^N_2(0))\geq d$. Then, there exist constants $\varepsilon_0 = \varepsilon_0(n,\Lambda,K,d)$ and $Q_0 = Q_0(n,\Lambda,K,d)\in \mathbb{Z}_{\geq 1}$ such that the following holds: if $V\in \salpha^*$ obeys: \begin{enumerate} \item [\textnormal{(a)}] $\|V\|(B^N_2(0))\leq \Lambda$; \item [\textnormal{(b)}] $V$ is stable in $B^N_d(x)$ for all $x\in A$; \end{enumerate} then we have $\badreg^{Q_0}_{\varepsilon_0\sigma}(V)\cap A\subset \strata^{n-7}_{\varepsilon_0,\sigma/2,d}(V)\cap A$ for all $\sigma\in (0,d]$.
\end{corollary} Finally, we detail how one can modify the results of Section \ref{sec:estimates} to the Riemannian setting given the above. The first point to note is that the local estimates of Naber--Valtorta hold in the Riemannian setting as long as one assumes a lower bound on the injectivity radius as well as an absolute bound on the sectional curvature: \begin{theorem}[\cite{naber-valtorta}*{Theorem 1.3}] Let $n\geq 2$ and $\Lambda,K,H\in (0,\infty)$, and $\varepsilon\in (0,1)$. Then, there exists a constant $C_\varepsilon = C_\varepsilon(n,\Lambda,K,\varepsilon)\in (0,\infty)$ such that the following is true: let $(N^{n+1},g)$ be a smooth Riemannian manifold obeying $0\in N$, $\left|\left.\textnormal{sec}\right|_{B^N_2(0)}\right| \leq K$ and $\left.\inj\right|_{B^N_2(0)}\geq K^{-1}$. Suppose that $V$ is a stationary integral $n$-varifold in $B^N_2(0)$ obeying $\|V\|(B^{N}_2(0))\leq\Lambda$ which has first variation bounded by $H$. Then, for each $k\in \{0,1,\dotsc,n\}$ we have $$\H^{n+1}\left(B^N_r(\strata^k_{\varepsilon,r,1}(V))\cap B^N_1(0)\right) \leq C_\varepsilon r^{n+1-k}\ \ \ \ \text{for all }r\in (0,1].$$ \end{theorem} The rescaled version of this, corresponding to Corollary \ref{rescaled quantitative estimates}, also holds for radii $R$ at most the injectivity radius at the point, by passing to the corresponding varifolds in $\tilde{\salpha}^*$ via the exponential map and applying the rescaled estimates there (note that if we rescale $N$, one can still bound the sectional curvature and injectivity radius of the rescaled version of $N$ by $K$) and then pushing these back to the Riemannian manifold level; note that this will introduce some additional constant which depends on the metric $g$ (through the exponential map) but this can be controlled in terms of $K$, so the new constant has the same dependencies.
Thus, in order to rerun the proofs of Lemma \ref{low stability bound}, Lemma \ref{covering lemma}, and Theorem \ref{thm:A-2} using the above modified results for the Riemannian setting (as the proofs will be unchanged up to constant factors) we need a suitable version of the Besicovitch covering lemma for our setting. However, Riemannian manifolds with a lower bound on the sectional curvature and finite diameter (as is our situation here when working on $B^N_2(0)$) are necessarily \textit{directionally limited}, as defined in \cite{federer}*{Definition 2.8.9}. As such, a form of the Besicovitch covering theorem does hold in this setting, by \cite{federer}*{Theorem 2.8.14}, and moreover the corresponding Besicovitch constant only depends on $n$ and $K$. Hence, our arguments in Section \ref{sec:estimates} pass through to the current setting, and prove Theorem \ref{thm:B1}. To see Theorem \ref{thm:B}, note that closed Riemannian manifolds have the property that there is a $K = K(N,g)\in (0,\infty)$ such that $|\textnormal{sec}|\leq K$ and $\inj\geq K^{-1}$ (and indeed have finite diameter, so a Besicovitch theorem will hold on all of $N$), and as such the result follows from Theorem \ref{thm:B1} by a simple covering and compactness argument. \bibliographystyle{alpha}
\section{Introduction} Graphene nanoribbons have been extensively studied in the past years \cite{Han07,Lin07,Wan08,Mol10,Gal10,Ter11}, mainly due to their promise of an electronic band gap, making them interesting for electronic applications. Confinement of electrons in these nanoscaled structures is predicted to form a quasi one-dimensional system \cite{Lin08}, with its properties strongly depending on the configuration of the edges \cite{Son06,Yan07}. However, experimental and theoretical studies have revealed graphene nanoribbons to be extremely sensitive to small amounts of disorder, in particular to edge disorder~\cite{Wan11,Kos09}. In fact, the transport characteristics of nanostructured graphene ribbons are mainly dominated by statistical Coulomb blockade effects~\cite{Gal10,Sta09}. Improvements on the fabrication techniques allowing for cleaner edge configurations are therefore of great importance and may not only improve the transport properties~\cite{Tom11} but also enable the investigation of the unique vibrational properties of these graphene nanostructures~\cite{Sai10}. Despite theoretical work~\cite{Gil09,Gil10} there are - to our knowledge - only a few optical characterization studies of graphene nanoribbons~\cite{Bis11,Ryu11}. Raman spectroscopy of carbon materials, in general, has been identified as a powerful tool for determining the number of graphene layers \cite{Fer07,Mal09}, the local amount of strain and doping \cite{Lee12}, and for studying electron-phonon interactions \cite{Yan07,Che11,Sta07,Pis07} and therefore the electronic properties themselves. In this work, we report on Raman spectroscopy measurements on non-etched graphene ribbons of various widths (from $\approx$15 to 160~nm) resulting from peeling off a graphene flake on the boundary region of a hexagonal boron nitride (hBN) flake and its underlying SiO$_{\text{2}}$ substrate.
We show that the characteristic signatures of single-layer graphene are well preserved and that the configuration of the edges is more regular compared to previously studied graphene ribbons fabricated by state-of-the-art reactive ion etching (RIE) techniques~\cite{Bis11,Ryu11}. Moreover, the analysis of the full width at half maximum (FWHM) of the G- and 2D-line ($\Gamma_{G}$ and $\Gamma_{2D}$) as well as the frequency of the G- and 2D-line ($\omega_{G}$ and $\omega_{2D}$) provide strong indications of finite size and/or edge effects~\cite{Gil09,Gil10,Ryu11}. \section{Fabrication} \begin{figure}[t]% \includegraphics*[draft=false,keepaspectratio=true,clip,% width=1\linewidth% ,height=9.0cm]{Fig1.eps} \caption{% (color online) (a) Optical microscope image of a $\approx\!30\,$nm thin hBN flake (light blue color) on top of a Si/SiO$_{\text{2}}$ substrate (grey color). (b), (c) and (d) Scanning force microscope (SFM) images taken in the region highlighted by the black box in panel (a). (c) SFM close-up image of the white-dashed box in panel (b). In this region the ribbons are separated by a distance of around $1\,\mu$m, twice as large as the spot-size $d\!\approx\!500$ nm (white circle) of the linearly polarized laser with an angle $\theta$. (d) SFM close-ups of the ribbons (1), (2), (3) and (4), also displayed in panel (c). Ribbons (1) and (2) do not have a constant width, as highlighted in the two upper subpanels of panel (d). We show the wider and narrower ends of these ribbons. (e) Characteristic Raman spectra of bulk graphene on hBN [acquisition point B in panel (b)] and of ribbon (2) [acquisition point A, panel (c)]. } \label{onecolumnfigure} \end{figure} The fabrication of the graphene ribbons is based on purely mechanical exfoliation of graphite. We initially prepared Si/SiO$_{\text{2}}$ samples with deposited hBN flakes (Fig. 1a). The hBN flakes have been mechanically exfoliated from pure hBN crystals and deposited onto the Si/SiO$_{\text{2}}$ substrate. 
Thereafter, the samples were immersed in a piranha solution, a 3:1 mixture of sulfuric acid (H$_2$SO$_4$) and $30\%$ hydrogen peroxide (H$_2$O$_2$), for 3 minutes and later rinsed with ultrapure water. This cleaning procedure has a similar effect on the SiO$_{\text{2}}$ surface as a plasma etching step prior to deposition of the graphene flakes. Both methods are supposed to hydroxylate the SiO$_{\text{2}}$ surface~\cite{Tib13} and therefore increase the local adhesion of graphene to the surface. The Raman spectrum of graphene on such a treated SiO$_{\text{2}}$ substrate is characterized by a very slight increase of the FWHM of the 2D-line~\cite{Wan12}. The hBN flakes are known to be chemically inert and therefore not affected by the piranha solution at room temperature \cite{Alt07}. Interestingly, we nonetheless observe an increase in doping of graphene on hBN compared to graphene regions resting on SiO$_{\text{2}}$. \begin{figure}[b]% \includegraphics*[draft=false,keepaspectratio=true,clip,% width=1\linewidth% ,height=12.0cm]{Fig2.eps} \caption{% (color online) (a) Raman spectra (D- and G-line) of the ribbon (3) on SiO$_{\text{2}}$ as a function of the polarization angle $\theta$ (see Fig. 1c). The difference in polarization angle between subsequent traces is $22.5^{\circ}$. The Raman spectra are normalized to the G-line maximum height and shifted for clarity. (b) Polar plot of $I(D)/I(G)$ as a function of $\theta$ for ribbon (3) on both hBN (blue trace) and SiO$_{\text{2}}$ (red trace) substrates. (c) and (d) Raman spectra of ribbon (4), on the hBN substrate, at $\theta\!=\!0^{\circ}$ (c) and $90^{\circ}$ (d). The Lorentzian fits to the data are shown in blue. } \label{onecol} \end{figure} While the hBN flakes have been directly deposited on the SiO$_{\text{2}}$ substrate, the graphene flakes have been prepared on top of a $\approx$300\,nm-thick layer of polymethylmethacrylate (PMMA) previously spin-coated on a glass slide \cite{Zom11}.
Raman spectroscopy was used to identify and select individual single-layer graphene flakes \cite{Fer07,Gra07}. The resulting graphene-PMMA-glass stamp was then mounted in a mask-aligner in such a way that the graphene flake could be aligned on top of the hBN-SiO$_{\text{2}}$ piranha-treated chip \cite{Wan13}. Once on top of the hBN-SiO$_{\text{2}}$ target region, the two flakes were brought into contact. This process was repeated until some parts of the graphene flake stuck to the hBN-SiO$_{\text{2}}$ surface. This technique utilizes van der Waals adhesion to peel off the graphene ribbons (shown in Fig. 1a); the hBN substrate is therefore important for this fabrication process, since graphene adheres more strongly to hBN than to SiO$_{\text{2}}$ \cite{Wan13}. The yield of this fabrication process is nonetheless low and neither the position nor the width of the obtained graphene ribbons is controllable. Therefore, this fabrication method is -~in its present form~- irrelevant from a technological point of view, but it is extremely valuable since it allows the Raman (and potentially transport) investigation of non-etched, i.e. pristine, graphene ribbons. Moreover, we would like to emphasize that these graphene ribbons were never in contact with any spin-coated polymer resist typically involved in the fabrication of etched ribbons, nor with any solvents such as acetone, isopropanol or even water. An optical microscope image of a fabricated sample is shown in Fig.~1a. For simplicity, we grouped the ribbons of similar width and labeled them as (1)-(4) (shown in Fig.~1c). The widths were extracted from scanning force microscope (SFM) images (Figs.~1b, 1c and 1d), resulting in widths of W $\approx$ 160 and 120~nm for ribbons (4) and (3), respectively. The widths of the ribbons (1) and (2) differ significantly between left and right ribbon ends (see upper panels in Fig.~1d).
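For completeness, the width extraction from such SFM data can be phrased as a simple threshold criterion on a line scan across the ribbon. The sketch below is our own illustrative reconstruction and not the analysis code used for this work; it assumes an idealized step profile with a single-layer step height of $\approx$0.34 nm and estimates the width as the lateral extent over which the profile exceeds half that height.

```python
def ribbon_width(positions_nm, heights_nm, threshold_nm):
    """Width estimate: lateral extent over which the SFM height
    profile exceeds the given threshold (half-height criterion)."""
    above = [x for x, h in zip(positions_nm, heights_nm) if h > threshold_nm]
    return (max(above) - min(above)) if above else 0.0

def synthetic_profile(plateau_start, plateau_end, step_height, x_max):
    """Idealized step-like line scan of a ribbon: zero height on the
    substrate and `step_height` on the ribbon plateau (positions in nm)."""
    xs = list(range(x_max + 1))
    hs = [step_height if plateau_start <= x <= plateau_end else 0.0
          for x in xs]
    return xs, hs
```

For a plateau extending from 100 to 220 nm this returns a width of 120 nm, matching the nominal width of ribbon (3); real line scans would of course require noise filtering and averaging over several parallel scans.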
Specifically, ribbons (1) and (2) have a varying width from W $\approx$ 40 to 15~nm [ribbon (1)] and W $\approx$ 45 to 20~nm [ribbon (2)]. In the following, we will therefore refer to the average widths $W\approx$ 25, 35, 120 and 160~nm of the ribbons (1), (2), (3) and (4). The Raman data were recorded using a laser wavelength of 532~nm ($\hbar \omega_L\!=\!2.33\,eV$) through a single-mode optical fiber whose spot size is limited by diffraction. The measurement setup is a commercially available confocal Raman Microscope Alpha 300R from Witec, whose laser is linearly polarized. The sample was fixed to a high-precision rotation mount (model PRM-1 from Thorlabs) in order to manually adjust the laser polarization direction relative to the ribbon axis (see inset in Fig. 1c). A long working distance focusing lens with a numerical aperture of 0.85 is used to obtain a spot size of approximately 500 nm (circle in Fig. 1c). Characteristic Raman spectra of the narrow ribbon (2) and bulk graphene, both on the hBN substrate, are presented in Fig.~1e. The Raman spectra (labels A and B in Fig.~1b and 1c) show the prominent G-line ($\approx$1582 cm$^{-1}$) as well as the single Lorentzian-shaped 2D-line ($\approx$2675 cm$^{-1}$) as expected for graphene. No defect-induced D-line ($\approx$1340 cm$^{-1}$) or D'-line ($\approx$1620 cm$^{-1}$) is observed on the bulk graphene region (acquisition point B), which confirms that the fabrication method does not induce defects into the graphene flake. In both spectra, a third prominent sharp peak arises at $\approx$1365 cm$^{-1}$, which is attributed to the Raman-active LO phonon of hBN~\cite{Gei66}. \begin{figure}[t]% \includegraphics*[draft=false,keepaspectratio=true,clip,% width=1\linewidth% ,height=9.0cm]{Fig3.eps} \caption{% (color online) (a) Correlation between $\omega_{2D}$ and $\omega_{G}$ at T = 300 K. The description of the black and gray axes as well as the color code is given in the main text.
(b) and (c) False-colored Raman maps of I(hBN) and I(2D). (b) The boundary between the hBN and SiO$_{\text{2}}$ substrates is marked with a white dashed line. (c) The individual ribbons (1), (2), (3) and (4) are well differentiated from each other. (d) Map of the local 2D-line shifts due to strain $\omega_{2D,\varepsilon}$ obtained after projecting all the Raman data points onto the strain axis [solid black line in panel (a)] relative to its maximum value [$\omega_{2D,\varepsilon}^{max}$, green point in panel (a)]. The scale bar is $2\mu$m. } \label{onecol} \end{figure} \section{Characterization of the edges} In order to characterize the edges and in particular the edge roughness of the graphene ribbons, we performed polarization angle dependent Raman measurements. Fig. 2 shows the Raman spectra of the ribbons (3) (W $\approx$ 120 nm, Figs.~2a and 2b) and (4) (W $\approx$ 160 nm, Figs.~2c and 2d) as a function of the polarization angle $\theta$ of the incident light (see inset in Fig.~1c). For each ribbon and each polarization angle, a spectrum has been recorded and the G-, D- and hBN-lines were fitted with a single Lorentzian line shape (see examples in Figs.~2c and 2d). In agreement with previous work~\cite{Bis11,Can04,Gru03,Cas09}, the D-line intensity $I(D)$ appears to be strongest for polarization parallel to the edge and reaches a minimum for the perpendicular polarization $\theta\!=\!90^{\circ}$. This can be observed in Fig. 2a, where each Raman spectrum corresponds to a different polarization angle $\theta$, starting from $\theta\!=\!11.25^{\circ}$ to $\theta\!=\!348.75^{\circ}$ in steps of $22.5^{\circ}$. Every trace in this plot is normalized to the maximum intensity of the G-line and shifted along the intensity and frequency axes for clarity. For the rest of the analysis, we compare the ratio $I(D)/I(G)$ using the peak area of the fitted Lorentzian function as a measure of intensity. In Fig.
2b we show a corresponding polar plot which illustrates the expected mirror planes at $\theta\!=\!0^{\circ}$ and $\theta\!=\!90^{\circ}$~\cite{Can04,Gru03}. Raman spectra with Lorentzian fits for the directions of maximum and minimum D-line intensity ($\theta\!=\!0^{\circ}$ and $\theta\!=\!90^{\circ}$, respectively) of ribbon (4) are presented in Figs.~2c and 2d. According to Ref.~\cite{Cas09} and assuming that $I(G)$ does not depend on $\theta$, a lower bound for the edge disorder correlation length $\xi\!\approx\!2v_F/(\omega_Lb)$ can be estimated from the ratio $b\!=\!I(D)_{min}/I(D)_{max}$ between the lowest and highest normalized D-line intensities ($I(D)_{min}/I(G)$ and $I(D)_{max}/I(G)$, respectively). For ribbon (4) (Figs. 1c and 1d), we obtain the lowest intensity ratio of $b\,\approx\,0.055$ (Figs. 2c and 2d), which yields a correlation length of $\xi\,\approx\,10\,$nm. This value is significantly higher than the correlation length of $\xi\!\approx\!1\,$nm reported on plasma-etched graphene nanoribbons~\cite{Bis11} and therefore suggests that our graphene ribbons have a more regular crystallographic orientation of the edges. \section{Strain, doping and finite size effects} For a more detailed investigation of the dependence of the Raman spectra on the width of the graphene ribbons, we study spatially resolved Raman maps of the sample. In particular, we recorded a Raman map of the 6 $\mu$m by 10 $\mu$m sample region shown in Fig. 1b with a spatial oversampling of 200~nm and an integration time of 15~s (with a laser spot size of 500~nm and a laser power of $\approx$\,1~mW). The corresponding Raman maps of the hBN-line and the 2D-line intensities, $I(hBN)$ and $I(2D)$, are shown in Figs.~3b and 3c. One can identify the hBN and SiO$_{\text{2}}$ substrates and the graphene ribbons (1)-(4), partly crossing both substrates. In the lower right corner of Fig.~3c, bulk graphene is also observed.
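The correlation-length estimate of the previous section involves only the measured intensity ratio $b$ and the laser frequency. A minimal numerical check, assuming the standard graphene Fermi velocity $v_F \approx 10^6$ m/s (a textbook value, not quoted in the text):

```python
# Lower bound on the edge disorder correlation length, xi = 2*v_F/(omega_L*b),
# estimated from the D-line intensity ratio b = I(D)_min / I(D)_max (Ref. Cas09).
HBAR = 1.0545718e-34     # J*s
EV = 1.602176634e-19     # J

v_F = 1.0e6              # m/s, graphene Fermi velocity (assumed standard value)
E_laser_eV = 2.33        # hbar*omega_L for the 532 nm laser
b = 0.055                # measured intensity ratio for ribbon (4)

omega_L = E_laser_eV * EV / HBAR      # laser angular frequency in rad/s
xi = 2.0 * v_F / (omega_L * b)        # correlation length in m

print(f"xi = {xi * 1e9:.1f} nm")      # ~10 nm, consistent with the value in the text
```

The same one-liner with $b \approx 1$ reproduces the sub-nanometer correlation lengths reported for plasma-etched ribbons.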
By means of the so-called vector decomposition method introduced by Lee et al. \cite{Lee12}, we analyze the presence of strain and/or doping variations in our sample. Accordingly, in Fig.~3a we plot the 2D-line position ($\omega_{2D}$) against the G-line position ($\omega_{G}$), i.e. the frequencies, for all the Raman spectra recorded in the inspected area (Fig.~1b). The red points show the extracted Raman data from spectra recorded on bulk graphene and ribbons, both on SiO$_{\text{2}}$, whereas the light blue points are from graphene regions resting on hBN. The blue data points with error bars show the average values of $\omega_{G}$ and $\omega_{2D}$ obtained for every individual graphene ribbon (1)-(4) and bulk graphene (B) on the hBN substrate (see labels in Fig. 3a). \begin{figure}[t]% \includegraphics*[draft=false,keepaspectratio=true,clip,% width=1\linewidth% ,height=12.0cm]{Fig4.eps} \caption{% (color online) (a) and (b) Local distribution of $\omega_{G}$ and $\Gamma_{2D}$, respectively. The scale bars are 2$\mu$m. (c) Averaged $\omega_{G}$ and $\omega_{2D}$ for every individual graphene ribbon on hBN as a function of 1/$W$. (d) Averaged $\Gamma_{G}$ and $\Gamma_{2D}$ for the individual graphene ribbons on hBN as a function of 1/$W$. (e) Correlations between $\Gamma_{G}$ and $\omega_{G}$ for the ribbons and bulk graphene on the hBN substrate. The light red data points correspond to the narrowest ribbons (1) and (2) with their respective averages marked in red. The ribbons (3), (4) and bulk graphene (B) appear as gray data points with their average values in black. The error bars in all the panels are half the standard deviation. } \label{onecol} \end{figure} The solid and dashed lines indicate the slopes of the strain and large-scale doping axes according to Ref.~\cite{Lee12}. Please note that we do not know the exact origin of these two axes and, for simplicity, we marked the same origin as in Ref.~\cite{Lee12} (see green point in Fig. 3a).
Interestingly, the red cloud of data points clearly follows a slope of $\Delta \omega_{2D}/\Delta\,\omega_{G}\!=\!2.2$ (solid black line), characteristic of uniaxial strain, in good agreement with Lee et al. \cite{Lee12}. Both the red and the main light blue data clouds appear to be offset by $\approx\!2.2\,$cm$^{-1}$ along the $\omega_{G}$ axis, with a direction parallel to the strain axis. This offset is understood as a difference in induced doping \cite{Che11} between the SiO$_{\text{2}}$ and the hBN substrates (extracted doping difference: $\Delta n\approx 2\times10^{11}\,$cm$^{-2}$), most likely due to the treatment of the hBN surface with the piranha solution. More interestingly, Fig.~3a suggests that the narrowest ribbons [(1) and (2)] are subject to stronger doping compared to bulk graphene and the wider ribbons. This is noticeable from their mean positions [labeled (1) and (2) in Fig.~3a], which are at the very right of this plot (see right gray dashed line of slope 2.2). However, there is an inconsistency with the line width of the G-peak that will be discussed in the following section. Interestingly, the same ribbons [(1) and (2)] seem to have also different strain values compared to bulk graphene and the wider ribbons [(3) and (4)] (see lower dashed line). This finding is highlighted after projecting all ($\omega_{2D}$; $\omega_{G}$) points onto the strain axis (the obtained values are labeled as $\omega_{2D,\varepsilon}$). In Fig. 3d we show the corresponding spatial map of the difference $\omega_{2D,\varepsilon} - \omega_{2D,\varepsilon}^{max}$ relative to the maximum value $\omega_{2D,\varepsilon}^{max}$. The strongest deviations clearly occur for the two narrowest ribbons (see yellow and red regions in Fig. 3d). Please note that in bulk graphene, the values decrease close to the hBN edge and on bubbles (marked by white arrows in Figs.~1b and 3d), which is a further sign that this quantity is indeed related to strain.
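The projection onto the strain axis used above is plain linear algebra: each $(\omega_{G},\omega_{2D})$ point is decomposed along a unit vector of slope 2.2 through a chosen origin. A minimal sketch; the origin value below is a placeholder, since the actual origin is taken from Ref.~\cite{Lee12} and is not quoted numerically in the text:

```python
import math

# Project (omega_G, omega_2D) Raman data onto the uniaxial-strain axis of slope
# d(omega_2D)/d(omega_G) = 2.2 (Lee et al.), measured from a chosen origin.
ORIGIN = (1581.6, 2676.9)       # (omega_G0, omega_2D0) in cm^-1, hypothetical values
SLOPE_STRAIN = 2.2              # slope of the strain axis

_norm = math.hypot(1.0, SLOPE_STRAIN)
U = (1.0 / _norm, SLOPE_STRAIN / _norm)   # unit vector along the strain axis

def strain_projection(omega_g, omega_2d):
    """Signed displacement along the strain axis, in cm^-1 (omega_{2D,eps})."""
    dg, d2d = omega_g - ORIGIN[0], omega_2d - ORIGIN[1]
    return dg * U[0] + d2d * U[1]

# A point displaced purely along the strain axis projects onto its full length:
print(strain_projection(ORIGIN[0] + 1.0, ORIGIN[1] + SLOPE_STRAIN))
```

A pure doping shift (displacement orthogonal to this axis) would project to zero, which is how the map of Fig.~3d isolates the strain contribution.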
For a more quantitative analysis of the dependence of the Raman G- and 2D-modes on the ribbon width, we analyze the changes in frequency and broadening of the G-line as well as $\omega_{2D}$ and $\Gamma_{2D}$ as a function of the averaged ribbon width $W$. Apart from the aforementioned difference in doping between the hBN and SiO$_{\text{2}}$ substrates (Fig.~3a), the spatial representation of $\omega_{G}$ (Fig.~4a) reveals a stiffening of the G-line for the narrower ribbons (1) and (2), which is in agreement with Fig. 3a and earlier work~\cite{Ryu11}. Fig.~4c shows $\omega_{G}$ and $\omega_{2D}$ as a function of the inverse averaged width ($1/W$) for the different ribbons. Interestingly, we observe an increase of $\omega_{G}$ as a function of $1/W$ (see dashed line in Fig.~4c), meaning that the smaller the ribbon the stiffer the G-line. This is commonly attributed to edge doping and/or confinement effects~\cite{Ryu11}. The 2D-line frequency $\omega_{2D}$ does not show any substantial dependence on the width of the ribbons (see red data points in Fig.~4c). In Fig. 4d we show that also the G- and 2D-peak line widths ($\Gamma_{G}$ and $\Gamma_{2D}$) increase with decreasing ribbon width $W$. This width-dependent broadening might be an indication of finite size effects~\cite{Fer00}. In order to exclude doping effects as the origin of the increase of $\Gamma_{G}$, we show the dependence of $\omega_{G}$ as a function of $\Gamma_{G}$ (re-plotting the data shown in Figs.~4c and 4d) in Fig.~4e. We observe an increase of $\omega_{G}$ with increasing $\Gamma_{G}$, in complete disagreement with experimental results on bulk graphene~\cite{Yan07,Sta07,Pis07} and with theory on doping-dependent Landau damping~\cite{And06}, where exactly the opposite trend is found. Finally, from Figs.~3a and 3d we can estimate a maximum strain difference in the narrow ribbons.
Assuming uniaxial strain, we extract a maximum strain difference on the order of 0.23\%~\cite{Lee12}. It is important to note that, according to Ref.~\cite{Moh09}, these values cannot explain the observed maximum broadening of the G-line (Fig. 4e), making edge effects and/or finite size effects prime candidates to explain our experimental findings. \section{Conclusion} In summary, we discussed Raman spectroscopy measurements on lithography-free graphene ribbons fabricated by direct exfoliation of graphene on hBN flakes. Despite a prominent doping of the hBN substrate, most probably induced by the fabrication process, we were able to perform polarization dependent measurements that confirm a more regular crystallographic orientation of the ribbon edges. Analysis of the frequency and broadening of the G- and 2D-lines shows prominent differences between the narrowest ribbons ($\approx$ 15 and 20 nm) and the widest ones (bulk graphene included), suggesting the presence of confinement and/or edge effects in these narrow structures. The results of this work highlight that further developments in the fabrication process yielding cleaner graphene samples with regularly oriented edges may enhance both the vibrational and electronic characteristics of graphene devices. {Acknowledgment ---} We thank G. Rosselló, D. May and P. Nemes-Incze for valuable discussions. Support by the HNF, JARA Seed Fund, the DFG (SPP-1459 and FOR-912), the ERC (GA-Nr. 280140) and the EU project Graphene Flagship (contract no. NECT-ICT-604391) is gratefully acknowledged.
\section{Introduction} The experimental observation of the Fractional Quantum Hall (FQH) effect~\cite{FQHEexp} marked the discovery of quantum phases of matter with intrinsic topological order. Interacting electrons confined in two dimensions and subject to a large perpendicular magnetic field lead to the emergence of fractional excitations and a quantized Hall conductance. The theoretical understanding of the FQH effect has heavily relied on the study of trial wavefunctions (WFs)~\cite{FractionalStatLaughlin,ReviewCFTfqhe}. In a seminal paper~\cite{MooreReadCFTCorrelator}, Moore and Read introduced a procedure to express the bulk FQH WFs for the ground state and quasihole excitations as correlators of primary fields of a Conformal Field Theory (CFT). This CFT is chosen to match the one used to describe the gapless edge modes of the target state, making the correspondence between the bulk and edge properties transparent. Although this construction provides insights into the study of these topologically ordered phases~\cite{WenCFTnonAbelian,ReviewCFTfqhe}, many physical observables cannot be extracted analytically from these CFT conformal blocks. Their evaluation still relies on numerical studies. Finite size numerical investigations~\cite{NumericsTough1,NumericsTough2,NumericsTough3} are limited to rather small system sizes, as the many-body Hilbert space grows exponentially with the latter. Combining the CFT construction with Matrix Product State (MPS) algorithmic methods~\cite{ZaletelMongMPS,RegnaultConstructionMPS,DMRGhallmicroscopic} helps circumvent this bottleneck. The exact MPS description enables larger system sizes and hence new predictions on physical observables previously out of reach~\cite{RegnaultCorrelLength,RegnaultYangLeBraiding}.
New experimental and theoretical interest in the realization of non-Abelian excitations comes from their appearance when twist defects are added to more conventional Abelian FQH states~\cite{TwistDefect1,TwistDefect2,TwistDefect3,CecilecouplingSC}. They are motivated by advances in the experimental realization of Abelian multilayer/multicomponent systems~\cite{BilayerGraphene,BilayerBoseCondensation,onefourthbilayerpossibly} and the relative experimental simplicity~\cite{331observedpossibly} of these phases compared to non-Abelian ones~\cite{NonAbelianGaugeColdAtom}. MPS variational approaches have been applied to multicomponent FQHE states~\cite{DMRGmulticomponent}. In this article, we derive an \emph{exact} MPS formalism for the Abelian multicomponent Halperin series~\cite{HalperinOriginalPaper,HalperinSecondPaper} to gain some physical insight into their structure. We are able to make quantitative predictions on physical quantities which are only qualitatively understood in the plasma analogy~\cite{LaughlinPlasma2}, without assuming bulk screening. While we exemplify our derivation for the two-component Halperin states, it can be extended to the generic $p$-component case. Our method shows interesting features which might be useful for the study of the FQH states. First, it deals with the manipulation of multiple fields in an MPS formalism, which is for instance needed when considering quasi-electrons~\cite{QuasiholesMPS}. We also put forward a way to treat indistinguishability in the CFT formalism and its MPS implementation. The solution we find can readily be applied to other FQH states such as the Non-Abelian Spin Singlet series~\cite{NASSstates}. Finally, we are able to probe the topological features of multicomponent Abelian states through their long range entanglement properties~\cite{LongRangeEntanglement} and the structure of their gapless edge modes.
To make our discussion self-contained, we start with a detailed analysis of the MPS description of the Laughlin states~\cite{FractionalStatLaughlin} in Sec.~\ref{sec:MotivationCylinder}. This will also set the notations and give a comprehensive understanding of the methods used in this article. In Sec.~\ref{sec:halperin}, we present the general $\mathbf{K}$-matrix formalism~\cite{WenZeeKmatrix,WenTopoOrderPhase} to study the Abelian multicomponent FQH states and obtain a first MPS description of the Halperin WFs. We transform this MPS in Sec.~\ref{sec:OrbIndepMPS} to account for the translation invariance of the system. Our numerical results are presented in Sec.~\ref{sec:NumResults}. We characterize the topologically ordered phases under scrutiny with the numerical evaluation of the entanglement spectra~\cite{LiHaldaneEntanglementSpec} and the Topological Entanglement Entropy~\cite{TopoCorrectionKitaev,TopoCorrectionLevinWen} (TEE). We also compute the bulk correlation length, directly showing it is finite for several Halperin states. \section{Fractional Quantum Hall Effect on the Cylinder} \label{sec:MotivationCylinder} Although the trial WFs~\cite{FractionalStatLaughlin,MooreReadCFTCorrelator,ReviewCFTfqhe} might not describe the exact ground state of a system at filling factor $\nu$, it is believed that they are adiabatically connected to the latter. For instance, the Laughlin WF at filling $\nu=1/3$: \begin{equation} \Psi_\text{Lgh}^{(3)} (z_1,\cdots , z_{N_e}) = \prod_{1\leq i<j\leq N_e} \left(z_i-z_j\right)^3 \, , \label{eq:Laughlin} \end{equation} where $z_i$ denotes the position of the $i$-th electron, is the densest zero-energy state of a system with hollow-core interaction. For the sake of clarity, the Gaussian factors have been omitted in Eq.~\eqref{eq:Laughlin}; they are restored whenever necessary.
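For small particle numbers, the Jastrow factor of Eq.~\eqref{eq:Laughlin} can be expanded into monomials by brute force, which is a useful sanity check for the MPS coefficients derived later. A minimal sketch (the helper name `laughlin_coeffs` is ours, not from the text; exponents are stored as tuples, coefficients as integers):

```python
from itertools import combinations
from collections import defaultdict

def laughlin_coeffs(n_e, q):
    """Expand prod_{i<j} (z_i - z_j)^q as {exponent tuple: integer coefficient}."""
    poly = {(0,) * n_e: 1}                       # start from the constant polynomial 1
    for i, j in combinations(range(n_e), 2):
        for _ in range(q):                       # multiply by (z_i - z_j), q times
            new = defaultdict(int)
            for exps, c in poly.items():
                e = list(exps); e[i] += 1
                new[tuple(e)] += c               # +z_i term
                e = list(exps); e[j] += 1
                new[tuple(e)] -= c               # -z_j term
            poly = dict(new)
    return {e: c for e, c in poly.items() if c != 0}

# Two particles at q = 3: (z1 - z2)^3 = z1^3 - 3 z1^2 z2 + 3 z1 z2^2 - z2^3
print(laughlin_coeffs(2, 3))
```

The cost grows factorially with $N_e$, which is precisely the bottleneck the MPS representation below is designed to avoid.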
The plasma analogy enables analytic predictions on Eq.~\eqref{eq:Laughlin} such as the existence of quasi-particles with fractional electric charge $e/3$, which were indeed observed experimentally~\cite{FractionalChargeExperimentalEtienne,FractionalChargeExperimentalHeiblum,FractionalChargeExperimentalSu}. $\Psi_\text{Lgh}^{(3)}$ is no longer the exact ground state if we consider the Coulomb interaction. Numerical evidence \cite{AnyonicNumericalLeinaas,AnyonicNumericalPollman,AnyonicNumericalVidal}, however, strongly suggests that the Laughlin WF at filling $\nu=1/3$ still captures the universal behavior of such a system. The aim of this paper is to derive more economical representations of the Halperin model WFs (introduced in Sec.~\ref{sec:halperin}) in which computations can be performed at large system sizes. \subsection{Notations} In the symmetric gauge on the plane, the sphere or the cylinder, the Lowest Landau Level (LLL) orbitals are labeled by their angular momentum $j$. The one-body WF reads \begin{equation} \psi_j(z)=\mathcal{N}_j z^j \, , \label{eq:onebodyWF} \end{equation} where $\mathcal{N}_j$ is a geometry dependent coefficient. Considering $(N_\phi+1)$ orbitals in the system, the non-interacting basis for spinless particles is $\ket{m_{N_\phi} \cdots m_0}$ where $m_j$ is the occupation number of the $j$-th orbital. For fermions $m_j \in \{0,1\}$, while bosonic occupation numbers satisfy $m_j \in \mathbb{N}$. They sum to the number of particles $N_e=\sum_j m_j$.
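An occupation list can equivalently be stored as the ordered list of occupied orbitals, which is the encoding used throughout the rest of the section. A toy conversion between the two (our own helper names; the convention that the list starts with $m_{N_\phi}$ follows the ket notation above):

```python
def occupation_to_partition(occ):
    """Occupation list [m_{N_phi}, ..., m_0] -> ordered list of occupied orbitals."""
    n_phi = len(occ) - 1
    lam = []
    for pos, m in enumerate(occ):        # occ[0] corresponds to orbital N_phi
        lam.extend([n_phi - pos] * m)    # repeat orbital index m times (bosons)
    return tuple(lam)                    # weakly (bosons) / strictly (fermions) decreasing

def partition_to_occupation(lam, n_phi):
    """Inverse map: orbital list -> occupation numbers, highest orbital first."""
    occ = [0] * (n_phi + 1)
    for j in lam:
        occ[n_phi - j] += 1
    return occ

# The fermionic configuration |0100110010> sketched in Fig. (orbitals 9...0):
occ = [0, 1, 0, 0, 1, 1, 0, 0, 1, 0]
print(occupation_to_partition(occ))      # (8, 5, 4, 1)
```

For fermions the resulting list is strictly decreasing, matching the condition of Eq.~\eqref{eq:partitions} below.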
The many-body Hilbert space is equivalently described by ordered lists of occupied orbitals $\lambda=(\lambda_1,\cdots ,\lambda_{N_e})$: \begin{eqnarray} & N_\phi \geq \lambda_1 \geq \cdots \geq \lambda_{N_e} \geq 0 & \quad \text{for bosons,} \\ & N_\phi \geq \lambda_1 > \cdots > \lambda_{N_e} \geq 0 & \quad \text{for fermions.} \label{eq:partitions} \end{eqnarray} The partitions $\lambda$ provide an efficient mapping between occupation numbers and the monomial appearing in the expansion of the model WFs. More precisely, including the geometrical factor and the proper symmetrization (respectively anti-symmetrization) with respect to electronic positions, the basis state for bosons (respectively fermions) is: \begin{gather} \braket{z_1 \cdots z_{N_e}}{m_{N_\phi} \cdots m_0} = \left( \! \prod_{j=0}^{N_\phi} \mathcal{N}_j^{m_j} \! \! \right) \! m_\lambda(z_1 \cdots z_{N_e}) \\ m_\lambda(z_1 \cdots z_{N_e}) = \dfrac{1}{\sqrt{\prod_i m_i!}} \sum_{\sigma \in \mathfrak{S}_{N_e}} \dfrac{\varepsilon(\sigma)}{\sqrt{N_e!}} \prod_{i=1}^{N_e} z_{\sigma(i)}^{\lambda_i} \, , \label{eq:monomialLaughlin} \end{gather} where $\mathfrak{S}_{N_e}$ is the permutation group of $N_e$ elements and $\varepsilon(\sigma)$ is the signature of the permutation $\sigma$ for fermions and is equal to 1 for bosons. The expansion of polynomial model WFs such as Eq.~\eqref{eq:Laughlin} naturally involves the monomials Eq.~\eqref{eq:monomialLaughlin}, ensuring the correct symmetry (respectively anti-symmetry) of the bosonic (respectively fermionic) WFs. Of special interest for our construction is the cylinder geometry with perimeter $L$, whose LLL orbitals are sketched in Fig.~\ref{fig:orbitalcylinder}. In this geometry, $j\in \mathbb{Z}$ also labels the momentum over the compact dimension $k_j=2j \pi / L$, and \begin{equation} \mathcal{N}_j = \dfrac{1}{\sqrt{L \sqrt{\pi}}} \exp\left(-\gamma^2 \dfrac{j^2}{2}\right), \quad \gamma = \dfrac{2\pi}{L} . 
\label{eq:geometricfactorcylinders} \end{equation} \begin{figure} \centering \includegraphics[width=\columnwidth]{figures/OrbitalsCylinder.eps} \caption{\emph{Sketch of the LLL orbitals on a cylinder of perimeter $L$. They are centered at $x_j=(2\pi j \ell_B^2)/L$ for $j\in \mathbb{N}$ and their typical width $\ell_B$ is the magnetic length. Their label $j$ also quantizes the momentum along the compact dimension. The configuration $\ket{0100110010}$ is sketched; occupied fermionic orbitals are shown in red and empty ones in grey. A possible MPS representation of the weight of this configuration could use two matrices, for empty and filled orbitals.}} \label{fig:orbitalcylinder} \end{figure} \subsection{Methods} We would like to compute the coefficients of a model WF on the orbital basis of the cylinder $\ket{m_{N_\phi} \cdots m_0}$. They can be labeled with the corresponding partition: \begin{equation} \ket{\Psi}= \sum_\lambda c_\lambda \ket{m_{N_\phi} \cdots m_0} \, . \end{equation} We notice that any "orbital cut" along this cylinder, $\ket{m_{N_\phi} \cdots m_0} \rightarrow \ket{m_{N_\phi} \cdots m_{N_A+1}} \otimes \ket{m_{N_A} \cdots m_{0}}$ for some integer $N_A=0 , \cdots , (N_\phi-1)$, will have the same perimeter $L$. The area law thus ensures that any of these cuts leads to the same Entanglement Entropy (EE)~\cite{AreaLawRevMod}. Gapped one-dimensional systems exhibiting a constant EE are known to be efficiently described as MPS~\cite{StrengthMPS}. This is the economical representation of the state we were aiming at for our two-dimensional system. Note however that an orbital cut is not rigorously equivalent to a real-space cut perpendicular to the cylinder axis; the difference is investigated in Sec.~\ref{sec:EntanglementSpectra}. The correspondence becomes exact when the magnetic length is large with respect to $L$. In this limit, orbitals do not overlap and the problem can be treated classically.
In this thin cylinder limit, however, the system is effectively one-dimensional. The MPS description of FQH states was first obtained by Zaletel and Mong in Ref.~\cite{ZaletelMongMPS}. They also provided the explicit calculation of the matrices for the Laughlin and the Moore-Read states~\cite{MooreReadCFTCorrelator}. The derivation was later generalized to any spinless WF that can be written as a CFT correlator in Ref.~\cite{RegnaultConstructionMPS}. There, it was shown how to assign to each occupation number $m$ an operator $B^{(m)}$ in order to compute the coefficients $c_\lambda$ as a product of matrices: \begin{equation} c_\lambda \propto \langle B^{(m_{N_\phi})} \cdots B^{(m_0)} \rangle . \end{equation} In this article, we will extend this representation to some spinful WFs which can be written as CFT correlators. In order to fix some notations which will be useful thereafter, we first sketch how to find the MPS representation of the Laughlin WF at filling $\nu=1/q$, $q\in\mathbb{N}^*$: \begin{equation} \Psi_\text{Lgh}^{(q)} (z_1,\cdots , z_N) = \prod_{1\leq i<j\leq N} \left(z_i-z_j\right)^q \, . \label{eq:Laughlin_q} \end{equation} This WF describes fermionic statistics for $q$ odd and bosonic statistics for $q$ even. As in the $q=3$ case, the WF predicts the existence of quasi-particles with fractional electric charge $\pm e/q$, $-e/q$ being the quasi-hole and $+e/q$ the quasi-electron. \subsection{Compact Boson and Laughlin Wavefunction}\label{sec:laughlin} \subsubsection{Compact Boson} \label{subsub:compactboson} The underlying CFT describing the Laughlin WF~\cite{MooreReadCFTCorrelator} is a free massless chiral boson $\varphi(z)$ of central charge $c=1$, described in Ref.~\cite{YellowBook}. Its two-point correlation function is given by $\langle \varphi(z_1) \varphi(z_2) \rangle =-\log (z_1-z_2)$ and its mode expansion on the plane is: \begin{equation} \varphi (z) = \varphi_0 -i a_0 \log z + i \sum_{n\in \mathbb{Z}^*} \dfrac{1}{n} a_{n} z^{-n} .
\label{eq:freeboson} \end{equation} The $a_n$ satisfy a U(1) Kac-Moody algebra: $[a_n,a_m]=n \delta_{m+n,0}$. This U(1) symmetry implies the conservation of the current $J(z)=i\partial \varphi (z)$ and the U(1) charge is measured by the zero-mode $a_0$. The compactification radius $R=\sqrt{q}$ shapes the possible U(1) charges: $R a_0$ measures the charge in units of the quasi-electron charge which must be an integer. The zero point momentum $\varphi_0$ is the canonical conjugate of $a_0$, $[\varphi_0,a_0]=i$. As such, the operator $e^{-i \sqrt{\nu} \varphi_0}$ shifts the U(1) charge by one in units of quasi-electrons. Primary fields with respect to the U(1) Kac-Moody algebra are vertex operators of quantized charges: \begin{equation} \mathcal{V}_N (z) = : \exp \left(i \dfrac{N}{R} \varphi (z) \right): \quad \text{where } N\in \mathbb{Z} \, . \end{equation} They have a U(1) charge $N$ in unit of the quasi-electron charge, and a conformal dimension $N^2/(2R^2)=N^2/(2q)$. To each of these primary fields, we associate a primary state $\ket{N}=\mathcal{V}_N (0) \ket{0}$. The CFT Hilbert space is constructed by applying the bosonic creation operators $a_{-n}$ with $n\in\mathbb{N}^*$ to those primary fields. Partitions provide an elegant way to describe those states. Indeed, a generic state of the Hilbert space basis can be written as: \begin{equation} \ket{N, \mu} = \dfrac{1}{\sqrt{\Xi_\mu}} \prod_{i=1}^{\ell (\mu)} a_{-\mu_i} \ket{N} \, , \end{equation} where $\ell (\mu)$ is the length of the partition $\mu$ (\textit{i.e.} the number of non-zero elements), and the prefactor reads $\Xi_\mu = \prod_i i^{n_i} n_i!$ where $n_i$ is the multiplicity of the occupied mode $i$ in the partition $\mu$. We also define the size of the partition $|\mu |=\sum_i \mu_i$. The conformal dimension of $\ket{N,\mu}$ is measured by $L_0$, the $0^{\rm th}$ Virasoro mode. $L_0$ is proportional to the CFT Hamiltonian on the circle. 
We have $L_0 \ket{N,\mu}= \Delta_{N,\mu}\ket{N,\mu}$ with \begin{equation} \Delta_{N,\mu} = \dfrac{N^2}{2q} + |\mu| = \dfrac{N^2}{2q} + \sum_i \mu_i \, . \label{eq:conformaldimension1} \end{equation} \subsubsection{Laughlin Wavefunction} \label{subsub:Laughlin} We define the electronic operators as $\mathcal{V}_\text{el} (z) =\mathcal{V}_{N=q} (z)$. Note that the name "electronic operator" is improper for bosons, but we will nevertheless keep the same name for both statistics. The Operator Product Expansion (OPE) of two electronic operators $\mathcal{V}_\text{el} (z) \mathcal{V}_\text{el} (w) \sim (z-w)^q$ ensures the commutation (respectively anticommutation) of the electronic operators for bosons (respectively fermions), i.e. for $q$ even (respectively odd). The $N_e$-point correlator reproduces the Laughlin WF~\cite{MooreReadCFTCorrelator}: \begin{equation} \Psi_\text{Lgh}^{(q)} (z_1,\cdots ,z_{N_e}) = \braOket{0}{\mathcal{O}_\text{bc} \mathcal{V}_\text{el} (z_1) \cdots \mathcal{V}_\text{el} (z_{N_e}) }{0} \, . \end{equation} The operator $\mathcal{O}_\text{bc}= e^{-i \frac{N_e}{\sqrt{\nu}} \varphi_0}$ is the neutralizing background charge ensuring the overall conservation of the U(1) charge. The $n^\text{th}$ mode of the electronic operator describes its effect on the $n^\text{th}$ orbital, since it is linked to a factor $z^n$ on the plane \begin{equation} \mathcal{V}_\text{el} (z)=\sum_{n \in \mathbb{Z}} z^n \, V_{-n-h} \, , \end{equation} where $h=q/2$ is the conformal dimension of the electronic operators. The OPE of two electronic operators ensures that the electronic modes commute (respectively anticommute) if $q$ is even (respectively odd).
We can use this property to order the modes in the correlator, once the latter is expanded onto the occupation number basis: \begin{align} & \Psi_\text{Lgh}^{(q)} (z_1, \cdots , z_{N_e}) = \braOket{0}{\mathcal{O}_\text{bc} \mathcal{V}_\text{el} (z_1) \cdots \mathcal{V}_\text{el} (z_{N_e}) }{0} \\ &=\, \sum_{\lambda_1 \cdots \lambda_{N_e} } \!\!\! \langle 0| \mathcal{O}_\text{bc} V_{-\lambda_1-h} \cdots V_{-\lambda_{N_e} -h} |0 \rangle z_1^{\lambda_1} \cdots z_{N_e}^{\lambda_{N_e}} \\ &= \quad \sum_{\lambda } c_\lambda \left(\prod_{j=0}^{N_\phi} \mathcal{N}_j^{m_j} \right) m_\lambda (z_1, \cdots z_{N_e}) \, . \end{align} Here, after ordering, the sum runs over the ordered lists $\lambda$ (\textit{c.f.} Eq.~\eqref{eq:partitions}). The corresponding many-body coefficient $c_\lambda$ is expressed as an orbital-dependent MPS: \begin{gather} \dfrac{c_\lambda}{\sqrt{N_e!}} = \braOket{0}{\mathcal{O}_\text{bc} A^{(m_{N_\phi})}[N_\phi] \cdots A^{(m_0)}[0] }{0} \, , \\ A^{(m)}[j] = \dfrac{1}{\mathcal{N}_j^m \sqrt{m!}} \left( V_{-j-h} \right)^m \, . \label{eq:mpslaughlin1} \end{gather} Since the WFs considered in this article are not normalized, we will systematically drop the global factor $\sqrt{N_e!}$ and any other irrelevant factors. For completeness, we provide the explicit matrix coefficients of the vertex operators. They are given by the following formula \begin{equation*} \braOket{N',\mu'}{\!:\! e^{i \frac{Q}{R} \varphi(z)}\! :\!}{N, \mu} = z^{QN/R^2+|\mu'|-|\mu|} \Gamma_{\mu',\mu}^{(Q/R)} \delta_{N',N+Q} \end{equation*} where the non-trivial coefficient $\Gamma_{\mu',\mu}^{(Q/R)}$ is equal to \begin{equation} \prod_{j=1}^{\infty} \sum_{r,s} \delta_{m_j'+s,m_j+r} \dfrac{(-1)^s}{\sqrt{r!s!}} \Big(\dfrac{Q}{R\sqrt{j}}\Big)^{r+s} \sqrt{\binom{m_j'}{r}\binom{m_j}{s}} \, .
\label{eq:coefvertex}\end{equation} \subsubsection{Truncation Scheme and Orbital Independent MPS} \label{sec:laughlinorbitalindependant} The MPS form of Eq.~\eqref{eq:mpslaughlin1} is, however, not directly useful in practice. First, it is orbital-dependent, which is an issue when considering systems in the thermodynamic limit. This dependence is made explicit in Eq.~\eqref{eq:mpslaughlin1} through the geometrical factor $\mathcal{N}_j$ and the mode $V_{-j-h}$. Another reason to improve the MPS description of Eq.~\eqref{eq:mpslaughlin1} is that in practice a truncation should be applied. As depicted in Fig.~\ref{fig:chargespreadingLaughlin}(a), the U(1) charge can only grow along the cylinder. In other words, applying the matrices of Eq.~\eqref{eq:mpslaughlin1} one after the other increases the U(1) charge until we reach the background charge, which abruptly sets it to zero for neutrality. This requires keeping all primaries $\ket{N}$ of charge $N\leq q N_e$, which is impossible in the thermodynamic limit. To avoid such a situation, we will show here how to keep the U(1) charge controlled and encode the geometrical factors for the cylinder geometry in the MPS matrices. Irrespective of the geometry, we apply the following procedure: we find an invertible operator $U$ satisfying $U A^{(m)}[j] U^{-1} = A^{(m)}[j-1]$. The $U$ operator shifts the orbital number by one. Applied once to all MPS matrices, it is just a re-labeling of the orbitals. In order to obtain an orbital independent MPS, we use the identity $ A^{(m)}[j] = (U^{-1})^j A^{(m)}[0] U^j $ on each orbital. We get: \begin{equation} c_{\lambda}= \braOket{\alpha_L}{\big( A^{(m_{N_\phi})}[0] U \big) \cdots \big( A^{(m_{0})}[0] U \big)}{\alpha_R} \, , \label{eq:mpslaughlinorbitindep} \end{equation} where we have defined the states $\ket{\alpha_R} = U^{-1} \ket{0}$ and $\bra{\alpha_L} = \bra{0} \mathcal{O}_\text{bc} (U^{-1})^{N_\phi}$.
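The coefficient $\Gamma_{\mu',\mu}^{(Q/R)}$ of Eq.~\eqref{eq:coefvertex} can be transcribed directly into code, which is how the truncated MPS matrices are built in practice. A sketch that implements the formula as printed (partitions encoded by their parts, e.g. $[2,1,1]$; the function name is ours):

```python
import math
from collections import Counter

def gamma_vertex(mu_prime, mu, x):
    """Matrix element Gamma_{mu',mu}^{(x)} of the vertex operator, with x = Q/R.
    Partitions are given as lists of their positive parts, e.g. [2, 1, 1]."""
    mult_p, mult = Counter(mu_prime), Counter(mu)
    result = 1.0
    for j in set(mult_p) | set(mult):            # modes absent on both sides give 1
        mpj, mj = mult_p[j], mult[j]             # multiplicities m'_j and m_j
        term = 0.0
        for r in range(mpj + 1):                 # binom(m'_j, r) vanishes beyond m'_j
            s = mj - mpj + r                     # enforce delta_{m'_j + s, m_j + r}
            if 0 <= s <= mj:
                term += ((-1) ** s / math.sqrt(math.factorial(r) * math.factorial(s))
                         * (x / math.sqrt(j)) ** (r + s)
                         * math.sqrt(math.comb(mpj, r) * math.comb(mj, s)))
        result *= term
    return result

print(gamma_vertex([], [], 0.5))     # vacuum-to-vacuum element: 1.0
print(gamma_vertex([1], [], 0.5))    # single a_{-1} excitation created: x = 0.5
```

Only the finitely many modes present in $\mu$ or $\mu'$ contribute, so the formally infinite product over $j$ is finite for any pair of truncated states.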
Eq.~\eqref{eq:mpslaughlinorbitindep} is the desired orbital-independent MPS description. Its derivation relies only on the existence of the operator $U$, which we shall now write down explicitly. \begin{figure} \centering \includegraphics[width=\columnwidth]{figures/SpredingBackgroundLaughlin.eps} \caption{\emph{Sketch of the evolution of the U(1) charge along the cylinder. (a) The MPS representation built on matrices $A$ (see Eq.~\eqref{eq:mpslaughlin1}) involves large charges, leading to an explosion of the auxiliary space dimension. The charge grows by $q$ every time the occupation number is non-zero, and is abruptly set to zero at the end of the cylinder where the background charge sits. (b) The orbital-independent MPS built on the $B$-matrices of Eq.~\eqref{eq:Bmatrices_Laughlin} keeps the U(1)-charge controlled and can be used for numerical simulation. The neutralizing background charge is spread equally between orbitals and geometrical factors are accounted for.}} \label{fig:chargespreadingLaughlin} \end{figure} We first focus on the thin annulus limit to address the U(1) charge issue since in this geometry all orbitals have the same shape, $\mathcal{N}_j=1$. Controlling its growth is achieved by spreading the neutralizing background charge. One possible choice of $U$ for the procedure is: \begin{equation} U_\text{TA} = e^{-i \sqrt{\nu} \varphi_0} \, . \end{equation} The operators $B_\text{TA}^{(m)} = A^{(m)}[0] U_\text{TA}$ form a site-independent representation of the previous MPS. The site-independent nature of Eq.~\eqref{eq:mpslaughlinorbitindep} on the thin-annulus is quite natural and can be seen as taking off small parts of the background charge $\mathcal{O}_\text{bc} = U_\text{TA}^{N_\phi+1}$ and spreading it equally between orbitals with the factor $U_\text{TA}$. This amounts to inserting one quasihole per orbital. We have reached the situation depicted in Fig.~\ref{fig:chargespreadingLaughlin}(b) where the U(1) charge is controlled.
The spreading of the background charge should be repeated on the cylinder while accounting for the geometrical factors Eq.~\eqref{eq:geometricfactorcylinders}. The procedure still holds with a slightly different choice~\cite{ZaletelMongMPS,RegnaultConstructionMPS}: \begin{equation} U_\text{cyl} = e^{-\gamma^2 L_0-i \sqrt{\nu} \varphi_0} \, . \label{eq:UgeometryLaughlin} \end{equation} The part involving $L_0$ reproduces the exponential factors of Eq.~\eqref{eq:geometricfactorcylinders} appearing in the relation $U_\text{cyl} A^{(m)}[j] U_\text{cyl}^{-1} = A^{(m)}[j-1]$. Defining the operators $B^{(m)} = A^{(m)}[0] U_\text{cyl}$, the many-body coefficients can be computed in the cylinder geometry (as sketched in Fig.~\ref{fig:orbitalcylinder}) as: \begin{equation} c_\lambda= \braOket{\alpha_L}{B^{(m_{N_\phi})} \cdots B^{(m_0)}}{\alpha_R} \, . \label{eq:Bmatrices_Laughlin} \end{equation} The last problem that we still face is the infinite dimension of the MPS auxiliary space. Indeed, it is the Hilbert space of the underlying CFT since the $B^{(m)}$ are operators of this theory. The matrix $U_\text{cyl}$ (Eq.~\eqref{eq:UgeometryLaughlin}) shows that states become exponentially irrelevant with their conformal dimension given by Eq.~\eqref{eq:conformaldimension1} and measured by $L_0$. There are many ways of truncating with respect to the conformal dimension. We choose to truncate on its integer part, denoted as $\text{E}$: $\text{E}(\Delta_{N,\mu})\leq P_\text{max}$ with $P_\text{max} \in \mathbb{N}$. This choice has the advantage of allowing the root partition~\cite{RootPartitionBernevig,RootPartitionSpinBernevig,SqueezingRezayiHaldane} to be the only one with a non-vanishing coefficient at $P_\text{max}=0$~\cite{RegnaultConstructionMPS}. The truncated matrices $B^{(m)}$ can be computed with Eq.~\eqref{eq:coefvertex}. Their product gives the coefficients of the Laughlin WF on the occupation basis of the cylinder with Eq.~\eqref{eq:Bmatrices_Laughlin}.
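As a concrete illustration, the single-mode factor entering the infinite product of Eq.~\eqref{eq:coefvertex} can be evaluated directly; a minimal Python sketch, where the function name and arguments are ours and $x$ stands for $Q/(R\sqrt{j})$:

```python
from math import comb, factorial, sqrt

def mode_factor(mp, m, x):
    """Single-mode factor of Gamma^{(Q/R)}_{mu',mu} in the vertex-operator
    matrix element, for occupations mp (bra) and m (ket) of one bosonic mode;
    x plays the role of Q/(R sqrt(j))."""
    total = 0.0
    for s in range(m + 1):              # binomial C(m, s) vanishes beyond s = m
        r = mp + s - m                  # Kronecker delta m' + s = m + r
        if r < 0 or r > mp:
            continue
        total += ((-1) ** s / sqrt(factorial(r) * factorial(s))
                  * x ** (r + s) * sqrt(comb(mp, r) * comb(m, s)))
    return total

# A few hand-checkable values:
assert mode_factor(0, 0, 0.3) == 1.0                     # r = s = 0 only
assert abs(mode_factor(1, 0, 0.3) - 0.3) < 1e-12         # single creation: +x
assert abs(mode_factor(0, 1, 0.3) + 0.3) < 1e-12         # single annihilation: -x
assert abs(mode_factor(1, 1, 0.3) - (1 - 0.09)) < 1e-12  # 1 - x^2
```

The full coefficient $\Gamma_{\mu',\mu}^{(Q/R)}$ is the product of such factors over the finitely many modes $j$ on which $\mu$ or $\mu'$ has support.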
For a finite number of particles $N_e$, the MPS becomes exact for some $P_\text{max} \propto N_e^2$. \section{Halperin Wavefunctions} \label{sec:halperin} The spin degree of freedom of the electron is often neglected at first in the study of the FQHE since it is assumed to be quenched by the strong magnetic field applied. This picture is usually valid for low filling factors but breaks down for filling factors close to unity where inter-band crosstalk starts to play a role. Other situations require a multicomponent description and the use of a pseudo-spin as a good quantum number. This is the case of the valley degeneracy in graphene or in bilayer systems~\cite{BilayerGraphene}. In the rest of this article, we focus on the special case of an internal degree of freedom of dimension two. We will use the names ``spin up'' and ``spin down'' for the two possible values, even though we will not necessarily deal with actual spin. The case that we derive shows how the calculation should be performed and the potential caveats when deriving an MPS expression for the spinful FQH WFs. Our formalism and derivation can be easily extended to richer internal structures. Among the spinful trial WFs, the Halperin WFs~\cite{HalperinOriginalPaper,HalperinSecondPaper} are the simplest generalization of Laughlin WFs to the multicomponent case. For $N_\uparrow$ particles with spin up and $N_\downarrow$ particles with spin down, the Halperin WFs take three integer parameters $(m,m',n)$, with $m$ and $m'$ describing the intra-species interactions and $n$ the inter-species interaction. The WF itself is often introduced~\cite{HalperinHierarchy,HalperinWFExample} as: \begin{align} \label{eq:haplerinposition} & \Psi_{mm'n} (z_1 \cdots z_{N_\uparrow}, z_{[1]} \cdots z_{[N_\downarrow]} ) = \\ & \prod_{1 \leq i <j \leq N_\uparrow} \!\!\!\!\! (z_i - z_j)^m \!\!\!\!\! \prod_{1 \leq i <j \leq N_\downarrow} \!\!\!\!\! (z_{[i]} - z_{[j]})^{m'} \!\!\!\!\!
\prod_{\substack{1\leq i \leq N_\uparrow \\ 1 \leq j \leq N_\downarrow}} \!\!\!\!\! (z_i - z_{[j]})^n \, , \notag \end{align} where the index $[i]=N_\uparrow + i$ runs from $N_\uparrow+1$ to $N_e$. Here the particles are treated as distinguishable: this WF should be understood as the projection of the total many-body state onto the spin component $(\uparrow \cdots \uparrow \downarrow \cdots \downarrow)$, where the spin up are associated with the $z_i$ while the spin down are associated with the $z_{[i]}$. To compute expectation values of operators which do not couple to the spin, such as the electronic density, the expression of Eq.~\eqref{eq:haplerinposition} is enough~\cite{AntisymHalperin}. This is the main reason why the spin symmetrization is often discarded in the discussion of spinful FQHE states. In our case, we would like to describe the many-body WF in terms of an MPS in order to compute the expectation value of \textit{any} operator. We shall hence be more careful about the symmetrization issue in our derivation. For simplicity we focus on the case $m=m'$. The Halperin $(m,m,m)$ state describes a Laughlin state of parameter $q=m$ with indistinguishable spin states (compare Eq.~\eqref{eq:Laughlin_q} with Eq.~\eqref{eq:haplerinposition} in that case) and was already treated in Sec.~\ref{sec:MotivationCylinder} following the ideas of Refs.~\cite{ZaletelMongMPS,RegnaultConstructionMPS}. When $n>m$, the states are unstable and undergo a phase separation~\cite{HalperinWFExample}, so that no translationally invariant MPS can be hoped for. In the rest of this article, we thus focus on the case $m=m'$ and $n<m$. With these parameters, the Halperin $(m,m,n)$ WF describes an FQH droplet at filling $\nu=\frac{2}{m+n}$.
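For small systems, Eq.~\eqref{eq:haplerinposition} can be evaluated directly from its Jastrow factors, which is useful for benchmarking any MPS expansion; a minimal Python sketch (the function name is ours):

```python
from itertools import combinations

def halperin(m, mp, n, z_up, z_down):
    """Evaluate the (unsymmetrized) Halperin (m, m', n) polynomial at the
    given spin-up and spin-down complex coordinates."""
    val = 1.0 + 0.0j
    for zi, zj in combinations(z_up, 2):      # intra-species, spin up
        val *= (zi - zj) ** m
    for zi, zj in combinations(z_down, 2):    # intra-species, spin down
        val *= (zi - zj) ** mp
    for zi in z_up:                           # inter-species factor
        for zj in z_down:
            val *= (zi - zj) ** n
    return val

# One particle per spin species: only the inter-species factor survives.
assert halperin(2, 2, 1, [1.0], [0.5]) == 0.5
# For (m, m, m) every pair carries the same Laughlin exponent.
assert halperin(3, 3, 3, [2.0], [1.0]) == halperin(3, 3, 3, [1.0], [2.0]) * (-1) ** 3
```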
\subsection{CFT description of the Halperin Wavefunctions} \subsubsection{\textbf{K}-matrix Formalism} \label{sec:CFThalperin} We recall here the $\mathbf{K}$-matrix formalism~\cite{ReviewCFTfqhe,WenZeeKmatrix,WenTopoOrderPhase} and a recipe for finding the CFT for the multicomponent Abelian states, as a straightforward generalization of the Laughlin case. For a $p$-component WF, the symmetric and invertible $\mathbf{K}$-matrix gives a way to systematically create a vertex operator $\mathcal{V}^\alpha$ per layer ($\alpha=1\cdots p$) whose OPEs are: \begin{equation} \mathcal{V}^\alpha(z) \mathcal{V}^\beta(w) \sim (z-w)^{K_{\alpha \beta}} \, . \label{eq:OPEmulticomponentvertex} \end{equation} The multiparticle WF Eq.~\eqref{eq:haplerinposition} is made of such factors and thus the $\mathbf{K}$-matrix entirely defines a specific Halperin state. Such vertex operators can be built from any factorization of the form $\mathbf{K}=\mathbf{Q} \mathbf{Q}^T$ where $\mathbf{Q}$ is a matrix of size $p \times k$ with $k\geq p$. Note that such a factorization with a real matrix $\mathbf{Q}$ is only possible if $\mathbf{K}$ is positive semi-definite. We introduce $k$ independent free chiral bosons as described in Eq.~\eqref{eq:freeboson} which satisfy $\langle \varphi^\alpha (z)\varphi^\beta(w) \rangle = -\delta_{\alpha,\beta} \log (z-w)$. The vertex operators are then defined as: \begin{equation} \mathcal{V}^\alpha = :\exp \left( i \sum_\beta Q_{\alpha \beta} \, \varphi^\beta \right): \, . \end{equation} The Laughlin $\nu=1/q$ case is recovered by taking $\mathbf{K}=q$ to be scalar ($p=1$) such that $Q_{11}=\sqrt{q}$. Because the $\mathbf{K}$-matrix should be invertible, the Halperin $(m,m,m)$ case is also described by a scalar $\mathbf{K}=m$. However, there are two physical components and hence $\mathbf{Q}$ is a $1\times 2$ matrix: $\mathbf{Q}= (\sqrt{m/2} \, , \, \sqrt{m/2})$. In that case, the WF requires $k=2>p$.
For the two-component Halperin states $(m,m,n)$ with $n<m$ of interest, two independent bosons are enough to describe the physics: $p=k=2$~\cite{MooreReadCFTCorrelator}. We choose a symmetric factorization of the $\mathbf{K}$-matrix~\cite{ReviewCFTfqhe}: \begin{equation} \mathbf{K} = \matrix22{m}{n}{n}{m} = \matrix22{Q_c}{Q_s}{Q_c}{-Q_s} \cdot \matrix22{Q_c}{Q_c}{Q_s}{-Q_s} = \mathbf{Q} \mathbf{Q}^T \, ,\end{equation} with the following coefficients: \begin{equation} Q_c=\sqrt{\dfrac{m+n}{2}} \, , \quad Q_s=\sqrt{\dfrac{m-n}{2}} \, . \end{equation} The vertex operators can be written: \begin{align} \! \! \mathcal{V}^\uparrow (z) = : \! \exp \! \left( \! i \sqrt{\dfrac{m+n}{2}}\varphi^c(z) +i \sqrt{\dfrac{m-n}{2}}\varphi^s(z) \! \right) \! : \, , \\ \! \! \mathcal{V}^\downarrow (z) = : \! \exp \! \left( \! i \sqrt{\dfrac{m+n}{2}}\varphi^c(z) -i \sqrt{\dfrac{m-n}{2}}\varphi^s(z) \! \right) \! : . \label{eq:vertexoperators} \end{align} $\varphi^c$ is called the ``charge'' boson and $\varphi^s$ the ``spin'' boson since this factorization is reminiscent of the spin-charge separation in Luttinger liquids~\cite{ChargeSpinLuttinger,giamarchiQuantumOneD}. Both bosons should be compactified as in Sec.~\ref{subsub:compactboson}. $\varphi^c$ (respectively $\varphi^s$) should have a compactification radius $R_c=\sqrt{2(m+n)}$ (respectively $R_s=\sqrt{2(m-n)}$). Primary fields with respect to the charge and spin U(1) Kac-Moody algebras are vertex operators of the form \begin{equation} \mathcal{V}_{N_c,N_s} (z) = :e^{i \frac{N_c}{R_c}\varphi^c(z) +i\frac{N_s}{R_s}\varphi^s(z) } : \, , \label{eq:primary331} \end{equation} where the two integers $(N_c,N_s) \in \mathbb{Z}^2$ have the same parity. Notice that $\mathcal{V}^{\uparrow \downarrow}=\mathcal{V}_{m+n,\pm(m-n)}$.
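The factorization above is easy to verify numerically; a short Python sketch reconstructing $\mathbf{K}$ from $Q_c$ and $Q_s$ (function names ours):

```python
from math import sqrt

def q_matrix(m, n):
    """Symmetric factorization K = Q Q^T used in the text,
    with Qc = sqrt((m+n)/2) and Qs = sqrt((m-n)/2)."""
    qc, qs = sqrt((m + n) / 2), sqrt((m - n) / 2)
    return [[qc, qs], [qc, -qs]]

def k_from_q(Q):
    """Reconstruct K = Q Q^T from a 2 x 2 factor Q."""
    return [[sum(Q[a][i] * Q[b][i] for i in range(2)) for b in range(2)]
            for a in range(2)]

# Halperin 331: K must come back as [[3, 1], [1, 3]].
K = k_from_q(q_matrix(3, 1))
assert all(abs(K[a][b] - [[3, 1], [1, 3]][a][b]) < 1e-12
           for a in range(2) for b in range(2))
```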
Reproducing the reasoning of Sec.~\ref{subsub:compactboson}, we associate to each primary field a primary state $\ket{N_c,N_s}=\mathcal{V}_{N_c,N_s}(0)\ket{0}$ and span the Hilbert space through repeated action of the bosonic creation operators $a_{-n}^c$ and $a_{-n}^s$ on those primaries. This procedure generates the states $\{|N_c,\mu_c, N_s,\mu_s\rangle\} $, which form a basis for the CFT Hilbert space. This is the choice that we make throughout our article and in our numerical simulations. The conformal dimension of $|N_c,\mu_c, N_s,\mu_s\rangle $ is \begin{eqnarray} \Delta_{N_c,\mu_c,N_s,\mu_s} &=& \dfrac{N_c^2}{2R_c^2} +\dfrac{N_s^2}{2R_s^2} + |\mu_c| + |\mu_s| \\ &=& \dfrac{N_c^2}{4(m+n)} +\dfrac{N_s^2}{4(m-n)} + P \, , \end{eqnarray} where we will often write $P=|\mu_c| + |\mu_s| \in \mathbb{N}$. Fig.~\ref{fig:TopoSectors331} sketches a graphical construction of the Hilbert Space for the Halperin 331 case. Each point $(N_c,N_s)$ of the lattice embodies the primary state $\ket{N_c,N_s}$ and all its descendants $\{|N_c,\mu_c, N_s,\mu_s\rangle\} $. On this lattice, the operators Eq.~\eqref{eq:vertexoperators} act as vectors. They generate the whole lattice from a unit cell composed of $m^2-n^2$ inequivalent sites. Physically, these sites correspond to the ground state degeneracy of the Halperin $(m,m,n)$ WF on the torus, which is known to be $|\det \mathbf{K}|=m^2-n^2$~\cite{WenZeeKmatrix} since $m>n$. We thus label the topological sectors of the Halperin WFs with a pair of integers $(a,b)$ corresponding to the coordinates of the points within the unit cell in the $(N_c,N_s)$ plane. Note that the fact that $N_c$ and $N_s$ have the same parity plays an important role because the number of inequivalent sites in the unit cell reproduces the ground state degeneracy of the WF on the torus.
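This counting can be checked by brute force: enumerate the same-parity points $(N_c,N_s)$ modulo the lattice generated by the two vectors $(m+n,\pm(m-n))$ of Eq.~\eqref{eq:vertexoperators}. A Python sketch (the enumeration strategy is ours; it uses the fact that this lattice contains $(2(m+n),0)$ and $(0,2(m-n))$, with the remaining identification generated by a single shift):

```python
def topological_sectors(m, n):
    """Count inequivalent same-parity points (Nc, Ns) modulo the lattice
    generated by (m+n, m-n) and (m+n, -(m-n)), for m > n >= 0."""
    w, h = 2 * (m + n), 2 * (m - n)
    # Same-parity points inside the rectangle [0, w) x [0, h).
    pts = sorted((nc, ns) for nc in range(w) for ns in range(h)
                 if (nc - ns) % 2 == 0)
    seen, classes = set(), 0
    for p in pts:
        if p in seen:
            continue
        classes += 1
        q = p
        while True:                  # orbit of p under adding (m+n, m-n)
            q = ((q[0] + m + n) % w, (q[1] + m - n) % h)
            if q == p:
                break
            seen.add(q)
    return classes

assert topological_sectors(3, 1) == 3 ** 2 - 1 ** 2   # Halperin 331: 8 sectors
assert topological_sectors(3, 2) == 3 ** 2 - 2 ** 2   # Halperin 332: 5 sectors
assert topological_sectors(5, 1) == 24
```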
\begin{figure} \centering \includegraphics[width=\columnwidth]{figures/TopoSector331.eps} \caption{\emph{Graphical construction of the auxiliary space for the MPS representation of the Halperin 331 WF. Each point corresponds to a primary state $\ket{N_c,N_s}$ built from the vertex operators Eq.~\eqref{eq:primary331}. There are eight different topological sectors, represented in the shaded unit cell. They can be split into four pairs, each represented with a certain color (see Sec.~\ref{sec:CorrelLength}). Adding electrons does not change the topological sector as shown with the action of $\mathcal{W}^\uparrow$ (see Eq.~\eqref{eq:elecoperator}). A similar property holds true for $\mathcal{W}^\downarrow$. Spreading the background charge couples the different topological sectors as shown with the action of $U$ (see Eq.~\eqref{eq:Ugeometry}).}} \label{fig:TopoSectors331} \end{figure} \subsubsection{Electronic Operators and Symmetrization} In the case where $N_\uparrow=N_\downarrow=N_e/2$, only the charge boson needs a background charge such that the Halperin WF of Eq.~\eqref{eq:haplerinposition} is faithfully written as a correlator~\cite{MooreReadCFTCorrelator}: \begin{align} \label{eq:WFfromKmatrix} & \Psi_{mmn} (z_1 \cdots z_{N_\uparrow}, z_{[1]} \cdots z_{[N_\downarrow]} ) = \\ & \quad \braOket{0}{\mathcal{O}_\text{bc} \mathcal{V}^\uparrow (z_1) \cdots \mathcal{V}^\uparrow (z_{N_e/2}) \mathcal{V}^\downarrow (z_{[1]}) \cdots \mathcal{V}^\downarrow (z_{[N_e/2]}) }{0} , \notag \end{align} where the background charge reads $\mathcal{O}_\text{bc} = e^{-i \frac{N_e}{\sqrt{\nu}}\varphi_0^c} = e^{-i Q_c N_e\varphi_0^c}$. Note that we have not used the term ``electronic operator'' for the vertex operators as in the Laughlin case. Indeed, although they reproduce the unsymmetrized Halperin WF, they cannot in general describe electrons since their commutation relations are different. For instance, the Halperin 332 WF with $m=3$ and $n=2$ describes fermions since $m$ is odd.
However, $n$ is even and the OPE of Eq.~\eqref{eq:OPEmulticomponentvertex} implies that $\mathcal{V}^\uparrow$ and $\mathcal{V}^\downarrow$ commute. They cannot be taken as electronic operators as such and should be modified to create the true electronic operators, with mutual statistics between particles of opposite spins being identical to the statistics of the particles of identical spins. We define as $[\, . \, , \, . \,]_m$ the commutator (respectively anticommutator) assuming $m$ is even (respectively odd), $[A,B]_m=AB-(-1)^mBA$. Let us introduce the operator \begin{equation} \chi=e^{2i\pi Q_c a_0^c} =(-1)^{R_c a_0^c} \, , \end{equation} in order to build the electronic operator from the vertex operators $\mathcal{V}^{\sigma}$ with $\sigma \in \{\uparrow,\downarrow\}$. Notice that $\chi = \pm 1$ on the CFT basis of Sec.~\ref{sec:CFThalperin}. The zero-mode commutation relation of the free boson $[\varphi_0^c , a_0^c]=i$ implies $\chi \mathcal{V}^{\sigma}(z)=(-1)^{m+n} \mathcal{V}^{\sigma}(z) \chi$. We define the electronic operator as: \begin{equation} \mathcal{V}(z) = \mathcal{V}^\uparrow (z) \chi \ket{\uparrow} +\mathcal{V}^\downarrow (z) \ket{\downarrow} \, , \label{eq:elecoperator} \end{equation} and we shall refer to its spin components as the spin up (respectively down) electronic operators. For clarity, we write:\begin{equation} \mathcal{W}^\downarrow (z)=\mathcal{V}^\downarrow (z) \quad \text{and}\quad \mathcal{W}^\uparrow (z)=\mathcal{V}^\uparrow (z) \chi \, . \end{equation} The electronic operator Eq.~\eqref{eq:elecoperator} satisfies the correct commutation relation $[ \mathcal{V} (z) , \mathcal{V} (w)]_{m}=0$. This can be seen from the commutation relations of the spin up and down electronic operators.
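Explicitly, using $\chi \mathcal{V}^{\sigma}(z)=(-1)^{m+n} \mathcal{V}^{\sigma}(z) \chi$ together with the OPE Eq.~\eqref{eq:OPEmulticomponentvertex}, which yields $\mathcal{V}^\uparrow (z) \mathcal{V}^\downarrow (w) = (-1)^{n} \mathcal{V}^\downarrow (w) \mathcal{V}^\uparrow (z)$, one finds
\begin{align*}
\mathcal{W}^\uparrow (z)\, \mathcal{W}^\downarrow (w) = \mathcal{V}^\uparrow (z)\, \chi\, \mathcal{V}^\downarrow (w) &= (-1)^{m+n}\, \mathcal{V}^\uparrow (z)\, \mathcal{V}^\downarrow (w)\, \chi \\ &= (-1)^{m}\, \mathcal{V}^\downarrow (w)\, \mathcal{V}^\uparrow (z)\, \chi = (-1)^{m}\, \mathcal{W}^\downarrow (w)\, \mathcal{W}^\uparrow (z) \, ,
\end{align*}
i.e. $[\mathcal{W}^\downarrow (z) , \mathcal{W}^\uparrow (w)]_{m}=0$, while for identical spins the two $\chi$ insertions produce the same phase in either ordering, leaving the $(-1)^m$ statistics untouched.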
The transformation $\mathcal{V}^\sigma \rightarrow \mathcal{W}^\sigma$, $\sigma \in \{\uparrow,\downarrow\}$ does not change the statistics of particles with identical spins, and it enforces the required commutation relation $[ \mathcal{W}^\downarrow (z) , \mathcal{W}^\uparrow (w)]_{m}= 0$ between opposite spins. A similar phase operator can be found for the Halperin $(m,m',n)$ WF. Up to an irrelevant global phase factor, the WF obtained by using these electronic operators is still the Halperin state: \begin{align} & \Psi_{mmn} (z_1 \cdots z_{N_\uparrow}, z_{[1]} \cdots z_{[N_\downarrow]} ) = \\ & \, \braOket{0}{\mathcal{O}_\text{bc} \mathcal{W}^\uparrow (z_1) \cdots \mathcal{W}^\uparrow (z_{N_e/2}) \mathcal{W}^\downarrow (z_{[1]}) \cdots \mathcal{W}^\downarrow (z_{[N_e/2]}) }{0} . \notag \end{align} The newly derived electronic operators allow us to come back to the full antisymmetrization (respectively symmetrization) of the fermionic (respectively bosonic) Halperin WF. The complete many-body WF can be written as: \begin{equation} \ket{\Phi_{mmn}^\text{TOT}(z_1, \cdots z_{N_e} )} = \langle \mathcal{O}_\text{bc} \prod_{i=1}^{N_e} \mathcal{V}(z_i) \rangle \, . \label{eq:CFTantisym} \end{equation} The symmetry or antisymmetry of the complete WF follows from the commutation or anticommutation relation of the operators $\mathcal{V}(z_i)$. Notice that the absence of background charge for the spin boson in the correlator ensures that all configurations have the same number of spin up and spin down. The method may be once again generalized to an imbalanced number of spin-up and spin-down particles by adding a well-chosen spin U(1)-charge background. \begin{widetext} \subsection{Orbital Decomposition} The first quantized form of Eq.~\eqref{eq:CFTantisym} can be written with the help of the spin electronic operators: \begin{equation} \ket{\Phi_{mmn}^\text{TOT}(z_1, \cdots z_{N_e} )} = \Big(N_\uparrow! N_\downarrow!
\Big)^{-1} \mathcal{P} \Big( \langle \mathcal{O}_\text{bc} \mathcal{W}^\uparrow (z_{1}) \cdots \mathcal{W}^\uparrow (z_{{N_e/2}})\mathcal{W}^\downarrow (z_{[1]}) \cdots \mathcal{W}^\downarrow (z_{{[N_e/2]}}) \rangle \cdot \ket{\uparrow \cdots \uparrow \downarrow \cdots \downarrow} \Big) \, , \end{equation} where $\mathcal{P}$ stands for the full symmetrization for bosons or the full antisymmetrization for fermions. Once this first-quantized symmetrization or anti-symmetrization is set up, we may decompose the electronic operators onto the orbitals. This is achieved by decomposing the operators $\mathcal{V}^{\uparrow}$ and $\mathcal{V}^{\downarrow}$ in modes, following a similar prescription to the one in Sec.~\ref{subsub:Laughlin}. We write for convenience $\mathcal{W}_{-\lambda}^\downarrow = \mathcal{V}_{-\lambda-h}^\downarrow$ and $\mathcal{W}_{-\lambda}^\uparrow = \mathcal{V}_{-\lambda-h}^\uparrow \chi$, where $h=m/2$ is the conformal dimension of the vertex operators (cf. Eq.~\eqref{eq:vertexoperators}). $\ket{\Phi_{mmn}^\text{TOT}(z_1, \cdots z_{N_e} )}$ becomes: \begin{equation} \Big(N_\uparrow! N_\downarrow! \Big)^{-1} \mathcal{P} \Big( \hspace{-10pt} \sum_{\substack{\lambda_1 \cdots \lambda_{N_e/2} \\ \rho_1 \cdots \rho_{N_e/2}}} \hspace{-10pt} \langle \mathcal{O}_\text{bc} \mathcal{W}_{-\lambda_1}^\uparrow \cdots \mathcal{W}_{-\lambda_{N_e/2}}^\uparrow \mathcal{W}_{-\rho_1}^\downarrow \cdots \mathcal{W}_{-\rho_{N_e/2}}^\downarrow \rangle \prod_{i=1}^{N_e/2} z_{i}^{\lambda_i} z_{[i]}^{\rho_i} \cdot \ket{\uparrow \cdots \uparrow \downarrow \cdots \downarrow} \Big) \, . \end{equation} Given the OPE of the operators $\mathcal{V}^{\uparrow}$ and $\mathcal{V}^{\downarrow}$, we can see that all the modes in the above expression commute or anti-commute. In particular, we can always order all modes, both the $\mathcal{W}^\downarrow$ and the $\mathcal{W}^\uparrow$. 
\begin{equation} \ket{\Phi_{mmn}^\text{TOT}(z_1, \cdots z_{N_e} )} = \sum_{\lambda , \rho} \langle \mathcal{O}_\text{bc} \mathcal{W}_{-\lambda_{1}}^\uparrow \cdots \mathcal{W}_{-\lambda_{N_e/2}}^\uparrow \mathcal{W}_{-\rho_1}^\downarrow \cdots \mathcal{W}_{-\rho_{N_e/2}}^\downarrow \rangle \mathcal{P} \Big( \prod_{i=1}^{N_e/2} \dfrac{z_{i}^{\lambda_i} z_{[i]}^{\rho_i}}{m_i^\uparrow!\, m_i^\downarrow!} \cdot \ket{\uparrow \cdots \uparrow \downarrow \cdots \downarrow} \Big) \, . \label{eq:linkCFTantisymfirstQuantized}\end{equation} The sum now runs over the ordered lists $\lambda$ associated to the spin up and $\rho$ associated to the spin down as described in Eq.~\eqref{eq:partitions}. We recognize the elements of the occupation basis \begin{equation} \braket{z_1 \cdots z_{N_e}}{m_{N_\phi}^\uparrow \cdots m_0^\uparrow m_{N_\phi}^\downarrow \cdots m_0^\downarrow} = \dfrac{1}{\sqrt{N_e!}} \left(\prod_{j=0}^{N_\phi} \mathcal{N}_j^{m_j^\uparrow+m_j^\downarrow} \right) \mathcal{P} \Big( \prod_{i=1}^{N_e/2} \dfrac{z_{i}^{\lambda_i} z_{[i]}^{\rho_i}}{\sqrt{m_i^\uparrow!\, m_i^\downarrow!}} \cdot \ket{\uparrow \cdots \uparrow \downarrow \cdots \downarrow} \Big) \, . 
\label{eq:occupationbasis} \end{equation} Combining Eq.~\eqref{eq:linkCFTantisymfirstQuantized} and Eq.~\eqref{eq:occupationbasis}, we have derived an MPS representation for the many-body coefficients $\ket{\Phi_{mmn}^\text{TOT}} = \sum_{\lambda,\rho} c_{\lambda,\rho} \ket{m_{N_\phi}^\uparrow \cdots m_0^\uparrow m_{N_\phi}^\downarrow \cdots m_0^\downarrow}$: \begin{equation} \dfrac{c_{\lambda,\rho}}{\sqrt{N_e!}} = \braOket{0}{\mathcal{O}_\text{bc} M_\uparrow^{(m_{N_\phi}^\uparrow)}[N_\phi] \cdots M_\uparrow^{(m_{0}^\uparrow)}[0] M_\downarrow^{(m_{N_\phi}^\downarrow)}[N_\phi] \cdots M_\downarrow^{(m_{0}^\downarrow)}[0] }{0} \, , \label{eq:productoperatornotgood} \end{equation} with the following operators: \begin{equation} M_\downarrow^{(m)}[j] = \dfrac{1}{\sqrt{m!}} \left( \dfrac{1}{\mathcal{N}_j} \mathcal{V}_{-j-h}^\downarrow \right)^m \quad \text{and } \, M_\uparrow^{(m)}[j] = \dfrac{1}{\sqrt{m!}} \left( \dfrac{1}{\mathcal{N}_j} \mathcal{V}_{-j-h}^\uparrow \chi \right)^m \, . \label{eq:Mmatrices} \end{equation} \end{widetext} \section{Orbital-Independent Matrix Product State} \label{sec:OrbIndepMPS} A few remarks should be pointed out here. First, there is some arbitrariness in the choice of the reference spin configuration $(\uparrow \cdots \uparrow \downarrow \cdots \downarrow)$. Since the electronic modes have the same statistics as the particles, this choice is not relevant anymore: we can reorder in the same manner the occupation basis states and the product of operators of Eq.~\eqref{eq:productoperatornotgood}. This form was chosen to underline that we could index the sum of Eq.~\eqref{eq:linkCFTantisymfirstQuantized} using the two partitions $\lambda$ and $\rho$. Second, we face the same problem as in Sec.~\ref{sec:laughlinorbitalindependant}: the U(1)-charge of the charge boson is not controlled in this formalism, preventing us from exploring thermodynamic properties for now. Moreover, this MPS form runs over the system twice, once for each spin state.
After the first $N_\phi$ steps, the U(1)-charge of the spin boson will also be gigantic in the thermodynamic limit. This situation is depicted in Fig.~\ref{fig:chargespreading}(a). In the following we will consider both species on each orbital in order to keep the spin U(1)-charge under control, as depicted in Fig.~\ref{fig:chargespreading}(b). As for the Laughlin case, the U(1)-charge of the charge boson keeps increasing until it sees the background charge. We should spread the background charge along the cylinder as in Sec.~\ref{sec:laughlinorbitalindependant}. In the following section, we describe how to go from the MPS form of Eq.~\eqref{eq:productoperatornotgood} to the situation depicted in Fig.~\ref{fig:chargespreading}(c) where all U(1)-charges are under control. \begin{figure} \centering \includegraphics[width=\columnwidth]{figures/SpredingBackground.eps} \caption{\emph{Sketch of the evolution of the charge and spin U(1) charges along the cylinder. The red lines depict the U(1) charge of the charge boson, the blue ones are associated with the U(1) charge of the spin boson. (a) The MPS representations built on matrices $M$ of Eq.~\eqref{eq:Mmatrices} and (b) on matrices $A$ of Eq.~\eqref{eq:Amatrices} involve large charges, leading to an explosion of the auxiliary space dimension.
(c) Spreading the background charge, considering both spin species on each orbital as implemented in the $B$ matrices of Eq.~\eqref{eq:Bmatrices} keeps both U(1)-charges under control.}} \label{fig:chargespreading} \end{figure} \subsection{Orbital Independent MPS and Truncation}\label{sec:Truncation} Keeping in mind that we can reorder the operators in Eq.~\eqref{eq:productoperatornotgood} thanks to the commutation relation of the electronic operators, we find a way to control the spin U(1)-charge by reordering our occupation basis as \begin{equation} \ket{\Phi_{mmn}^\text{TOT}} = \sum_{\lambda,\rho} {c'}_{\lambda,\rho} \ket{m_{N_\phi}^\downarrow m_{N_\phi}^\uparrow \cdots m_0^\downarrow m_0^\uparrow}, \label{eq:defCprimeCoefficients} \end{equation} which translates as a reordering of our operators (see Fig.~\ref{fig:chargespreading}(b)):\begin{gather} {c'}_{\lambda,\rho} = \braOket{0}{\mathcal{O}_\text{bc} A^{(m_{N_\phi}^\downarrow m_{N_\phi}^\uparrow)}[N_\phi] \cdots A^{(m_{0}^\downarrow m_{0}^\uparrow)}[0]}{0} \, , \\ A^{(m^\downarrow m^\uparrow)} [j] = M_\downarrow^{(m^\downarrow)}[j] M_\uparrow^{(m^\uparrow)}[j] \, . \label{eq:Amatrices} \end{gather} Spreading the background charge of the charge boson is similar to what we have done in Sec.~\ref{sec:laughlinorbitalindependant}. We can apply the exact same procedure and look for an invertible operator satisfying $U A^{(m^\downarrow,m^\uparrow)}[j] U^{-1} = A^{(m^\downarrow,m^\uparrow)}[j-1]$ so that: \begin{equation} {c'}_{\lambda,\rho} = \braOket{\alpha_L}{\big( A^{(m_{N_\phi}^\downarrow , m_{N_\phi}^\uparrow)}[0] U \big) \cdots \big( A^{(m_{0}^\downarrow , m_{0}^\uparrow)}[0] U \big)}{\alpha_R} , \end{equation} where $\ket{\alpha_R} = U^{-1} \ket{0}$ and $\bra{\alpha_L} = \bra{0} \mathcal{O}_\text{bc} (U^{-1})^{N_\phi}$. 
The operators \begin{equation} B^{(m^\downarrow , m^\uparrow)} = A^{(m^\downarrow , m^\uparrow)}[0] U \, , \label{eq:DefBTensors} \end{equation} then form a site-independent representation of the previous MPS. This time, however, spreading the background charge amounts to the insertion of two quasiholes per orbital, one for each spin component. Since the operator $\chi$ commutes with both $U_\text{TA}$ and $U_\text{cyl}$, the exact same choice of $U$ works: \begin{align} & U_\text{TA} = e^{-\frac{i}{Q_c} \varphi_0^c}= e^{-i \frac{2}{R_c} \varphi_0^c} & & \text{on the thin annulus,} \\ & U_\text{cyl} = e^{-\gamma^2 L_0-\frac{i}{Q_c} \varphi_0^c} & & \text{on the cylinder.} \label{eq:Ugeometry} \end{align} We obtain a site-independent MPS formulation for the coefficients, in which the charge is under control to facilitate the truncation of the CFT Hilbert space (\textit{c.f.} Fig.~\ref{fig:chargespreading}(c)): \begin{equation} {c'}_{\lambda,\rho}= \braOket{\alpha_L}{B^{(m_{N_\phi}^\downarrow , m_{N_\phi}^\uparrow)} \cdots B^{(m_{0}^\downarrow , m_{0}^\uparrow)}}{\alpha_R} \, . \label{eq:Bmatrices} \end{equation} As for the Laughlin case of Sec.~\ref{sec:laughlinorbitalindependant}, basis states $\ket{N_c,\mu_c,N_s,\mu_s}$ introduced in Sec.~\ref{sec:CFThalperin} become exponentially irrelevant with their increasing conformal dimension on the cylinder (see Eq.~\eqref{eq:Ugeometry}). Here we choose to use a cutoff $P_\text{max} \in \mathbb{N}$ and we keep all states satisfying $\text{E}(\Delta_{N_c,\mu_c,N_s,\mu_s})\leq P_\text{max}$ where $\text{E}$ denotes the integer part. The coefficients of these matrices can be computed using Eq.~\eqref{eq:coefvertex}. Note that this truncation guarantees that the Halperin states $(m,m,m-1)$ remain spin singlets after truncation (see App.~\ref{app:su2}).
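The size of the truncated auxiliary space can be estimated by simple partition counting: at level $P$, the number of descendant pairs $(\mu_c,\mu_s)$ with $|\mu_c|+|\mu_s|=P$ is $\sum_k p(k)\,p(P-k)$, with $p$ the number of integer partitions. A Python sketch for the number of kept states in a given $(N_c,N_s)$ sector (function names ours):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def partitions(p):
    """Number of integer partitions of p, via Euler's pentagonal recurrence."""
    if p == 0:
        return 1
    total, k = 0, 1
    while True:
        for g in (k * (3 * k - 1) // 2, k * (3 * k + 1) // 2):
            if g > p:
                return total
            total += (-1) ** (k + 1) * partitions(p - g)
        k += 1

def sector_dim(Nc, Ns, m, n, P_max):
    """Number of kept states |Nc, mu_c, Ns, mu_s> whose conformal dimension
    has integer part at most P_max."""
    delta0 = Nc ** 2 / (4 * (m + n)) + Ns ** 2 / (4 * (m - n))
    count, P = 0, 0
    while int(delta0 + P) <= P_max:
        # descendant pairs (mu_c, mu_s) at total level P
        count += sum(partitions(k) * partitions(P - k) for k in range(P + 1))
        P += 1
    return count

assert partitions(4) == 5
# (0,0) sector of Halperin 331 at P_max = 2: levels P = 0, 1, 2
# contribute 1 + 2 + 5 = 8 states.
assert sector_dim(0, 0, 3, 1, 2) == 8
```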
\subsection{Transfer Matrix And Infinite Cylinder} \label{sec:transfermatrixformalism} We now introduce the transfer matrix formalism, particularly useful for numerical computations with infinite Matrix Product States. The transfer matrix $E$ is a linear operator on $\mathcal{H}_\text{CFT} \otimes \bar{\mathcal{H}}_\text{CFT}$ defined as \begin{equation} E= \sum_{m^\downarrow,m^\uparrow} B^{(m^\downarrow,m^\uparrow)} \otimes \big(B^{(m^\downarrow,m^\uparrow)} \big)^* \, , \label{eq:transfermatrixdefinition}\end{equation} and can be equivalently thought of as a superoperator on the space of matrices of $\mathcal{H}_\text{CFT}$ through the isomorphism $\ket{ \alpha, \beta^*} \to |\alpha \rangle \langle \beta |$: \begin{equation} \mathcal{E}(X)= \sum_{m^\downarrow,m^\uparrow} B^{(m^\downarrow,m^\uparrow)} X \big(B^{(m^\downarrow,m^\uparrow)} \big)^\dagger \, . \end{equation} The complex conjugation used to define $\ket{\beta^*}$ is implicitly taken with respect to the CFT Hilbert space basis of Sec.~\ref{sec:CFThalperin}. The transfer matrix is in general not Hermitian and might contain non-trivial Jordan blocks. It is however known~\cite{TransferMatrixAll} that its largest eigenvalue in modulus is real and positive, and that the corresponding right and left eigenvectors can be chosen to be positive matrices. Consider the states \begin{align} &\ket{ \Phi_{\alpha_R}^{\alpha_L}} = \sum_{\lambda,\rho} c_{\lambda,\rho}^{\alpha_R , \alpha_L} \ket{m_{N_\phi}^\downarrow,m_{N_\phi}^\uparrow \cdots m_0^\downarrow,m_0^\uparrow} \notag\\ & c_{\lambda,\rho}^{\alpha_R , \alpha_L}= \braOket{\alpha_L}{B^{(m_{N_\phi}^\downarrow,m_{N_\phi}^\uparrow)} \cdots B^{(m_0^\downarrow,m_0^\uparrow)}}{\alpha_R} \, , \label{eq:MPSstates} \end{align} for any pair of states $(\alpha_L,\alpha_R)$ belonging to the CFT Hilbert Space (this definition encompasses the Halperin WFs of Eq.~\eqref{eq:Bmatrices}).
The overlaps between any two of these MPS are given by \begin{equation} \braket{ \Phi_{\beta_R}^{\beta_L}}{ \Phi_{\alpha_R}^{\alpha_L}} = \braOket{\alpha_L, \beta_L^*}{E^{N_\phi +1}}{\alpha_R,\beta_R^*} \, . \label{eq:overlaptransfermatrix} \end{equation} Expectation values of operators having support on a finite number of orbitals may be computed in a similar way. Assuming the largest eigenvalue of $E$ has no degeneracy and that the gap of the transfer matrix remains finite in the thermodynamic limit $N_\phi \to \infty$, the overlaps given by Eq.~\eqref{eq:overlaptransfermatrix} on an infinite cylinder are dominated by the largest eigenvector of the transfer matrix. All other contributions vanish exponentially with the size of the system. In this limit, the overlaps of Eq.~\eqref{eq:overlaptransfermatrix} are thus the elements of the largest eigenvector. Note that the positivity of the largest eigenvector of $\mathcal{E}$ is consistent with its interpretation as an overlap matrix. The situation is more involved for topologically ordered phases of matter. The CFT Hilbert space splits into distinct topological sectors. This might lead to extra degeneracies in the transfer matrix eigenvalues, whose corresponding eigenvectors belong to different sectors. In the CFT Hilbert space introduced in Sec.~\ref{sec:CFThalperin}, there are $m^2-n^2$ topological sectors corresponding to the number of ground states for the Halperin $(m,m,n)$ WF on a torus or an infinite cylinder. They are characterized by a number of spin and charge quasiholes at the edge of the FQH droplet. Because we have spread the background charge, the $B^{(m^\downarrow,m^\uparrow)}$ matrices add charge quasiholes between orbitals and hence shift the topological sector (see Fig.~\ref{fig:TopoSectors331}). It is therefore more convenient for our calculation to consider the transfer matrix over $m+n$ orbitals.
We thus group together $m+n$ consecutive orbitals and define: \begin{equation} \mathcal{E}^{m+n}(X)= \sum_{\vec{m}^\downarrow,\vec{m}^\uparrow} \mathbf{B}^{(\vec{m}^\downarrow,\vec{m}^\uparrow)} X \big(\mathbf{B}^{(\vec{m}^\downarrow,\vec{m}^\uparrow)} \big)^\dagger \, , \end{equation} where $\mathbf{B}^{(\vec{m}^\downarrow,\vec{m}^\uparrow)} = B^{(m_{m+n}^\downarrow,m_{m+n}^\uparrow)} \cdots B^{(m_{1}^\downarrow,m_{1}^\uparrow)}$. The $\mathbf{B}$ matrices are block diagonal with respect to the topological sectors: \begin{equation} \mathbf{B}^{(\vec{m}^\downarrow,\vec{m}^\uparrow)} = \begin{bmatrix} \mathbf{B}_{(0,0)}^{(\vec{m}^\downarrow,\vec{m}^\uparrow)} & 0 & 0 & 0 \\ 0 & \ddots & 0&0 \\ 0 & 0 & \mathbf{B}_{(a,b)}^{(\vec{m}^\downarrow,\vec{m}^\uparrow)} &0 \\ 0&0&0& \ddots \end{bmatrix} . \end{equation} The transfer matrix is also block diagonal with respect to the right and left topological sector~\cite{RegnaultConstructionMPS}. Moreover, the sectors coupled with the background charge $U$ share the exact same block, leading to a degeneracy $m+n$ of the largest eigenvalue of the transfer matrix as defined in Eq.~\eqref{eq:transfermatrixdefinition}. We can make the right and left topological sectors explicit: \begin{equation} E^{m+n}=\sum\limits_{(a,b),(a',b')}E^{m+n}_{(a,b)(a',b')} \, , \end{equation} where we have defined \begin{equation} E^{m+n}_{(a,b)(a',b')} = \sum_{\vec{m}^\downarrow,\vec{m}^\uparrow} \mathbf{B}_{(a,b)}^{(\vec{m}^\downarrow,\vec{m}^\uparrow)} \otimes \big(\mathbf{B}_{(a',b')}^{(\vec{m}^\downarrow,\vec{m}^\uparrow)} \big)^* \, . \end{equation} These are the blocks of the transfer matrix we will refer to as diagonal if $(a,b)=(a',b')$ and off-diagonal otherwise. This block structure allows us to study the system in a given topological sector, where the degeneracy of the largest eigenvalue has disappeared. Thus, we may apply the standard methods of the transfer matrix formalism for MPS to compute overlaps or operator expectation values.
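To make these manipulations concrete, the following sketch builds a dense toy transfer matrix from small random matrices standing in for the truncated $B$ tensors (a purely illustrative assumption that ignores the actual topological block structure), checks that grouping $p$ consecutive orbitals reproduces the matrix power $E^p$, and extracts the dominant eigenvalue and eigenvector by power iteration on the superoperator $\mathcal{E}$:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
chi = 4                      # toy truncated auxiliary-space dimension
# Random matrices standing in for the B^{(m_down, m_up)} tensors (assumption).
Bs = [rng.standard_normal((chi, chi)) / np.sqrt(chi) for _ in range(3)]

# Dense transfer matrix E = sum_m B_m (x) conj(B_m).
E = sum(np.kron(B, B.conj()) for B in Bs)

# Grouping p consecutive orbitals: the products B_{k_p} ... B_{k_1} play the
# role of the bold-B matrices, and summing their Kronecker squares gives E**p.
p = 2
E_grouped = np.zeros_like(E)
for combo in product(range(len(Bs)), repeat=p):
    M = np.eye(chi)
    for k in combo:
        M = Bs[k] @ M        # M = B_{k_p} ... B_{k_1}
    E_grouped = E_grouped + np.kron(M, M.conj())

# Power iteration in superoperator form: E(X) = sum_m B_m X B_m^dagger is a
# completely positive map, so its largest-modulus eigenvalue is real and
# positive and the corresponding eigenvector is a positive matrix.
def apply_E(X):
    return sum(B @ X @ B.conj().T for B in Bs)

X, lam = np.eye(chi), 0.0
for _ in range(5000):
    X = apply_E(X)
    lam = np.linalg.norm(X)
    X /= lam
```

Starting the iteration from the identity keeps the iterate positive semi-definite, consistent with the positivity of the dominant eigenvector of $\mathcal{E}$ noted above.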
\section{Numerical Results}\label{sec:NumResults} As a direct application of the construction presented in Secs.~\ref{sec:halperin}~and~\ref{sec:OrbIndepMPS}, we now extract several quantitative characteristics of the Halperin states. \subsection{Correlation Length}\label{sec:CorrelLength} \begin{figure} \centering \includegraphics[width=\columnwidth]{figures/CorrelationLength.eps} \caption{\emph{Inverse of the correlation lengths for several $(m,m,m-1)$ Halperin spin-singlet states, $(m,m,m-2)$ Halperin states and the Laughlin states as a function of $(L/\ell_B)^{-1}$, the inverse of the cylinder perimeter. All correlation lengths are finite in the diagonal blocks of the transfer matrix for finite perimeters. We extract the thermodynamic values through a linear extrapolation (dotted lines). See App.~\ref{app:morenumerics} for more details. Note that we show both bosonic and fermionic states and that the data for Laughlin 1/3 and 1/5 were already given in Ref.~\cite{RegnaultCorrelLength}.}} \label{fig:InterpolCorrel} \end{figure} We start with the correlation length of the Halperin droplets, which is intimately related to the spectral gap of the transfer matrix~\cite{GapTransferMatrix}. On a cylinder with a finite perimeter $L$, the correlation function of a generic operator $\mathcal{O}(x)$ should vanish exponentially with distance according to \begin{equation} \langle \mathcal{O}(x)\mathcal{O}(0)\rangle - \langle \mathcal{O}(x) \rangle \langle \mathcal{O}(0) \rangle \propto e^{-|x|/\zeta(L)} \, , \label{eq:correlationfunctioncorrel} \end{equation} where the correlation length $\zeta(L)$ of the system can be written in terms of the two largest eigenvalues of the transfer matrix -- $\lambda_1(L)$ and $\lambda_2(L)$ -- and the magnetic length $\ell_B$ of the system: \begin{equation} \zeta(L)= \dfrac{2 \pi \ell_B^2}{L\log\big| \frac{\lambda_1(L)}{\lambda_2(L)}\big|} \, .
\end{equation} We remark that the diagonal blocks of the transfer matrix are always gapped for finite perimeters (\textit{i.e.} there is no degeneracy), which leads to a finite correlation length $\zeta(L)$. In order to extract the experimentally relevant correlation length, we extrapolate the thermodynamic value $\zeta(\infty)$ from the finite-perimeter results in these diagonal blocks. In Fig.~\ref{fig:InterpolCorrel}, we show the results for several Halperin $(m,m,m-1)$ spin-singlet states, Halperin $(m,m,m-2)$ and Laughlin states. There, all the considered Halperin WFs exhibit a finite correlation length in this limit (see App.~\ref{app:morenumerics} for a more detailed discussion). Moreover, the extrapolated correlation length of the diagonal blocks does not depend on the topological sector. Although the plasma analogy can be extended to any Abelian state described by a $\mathbf{K}$-matrix, there is to our knowledge no direct evidence of the gapped nature of the Halperin droplets or analytic computation of their correlation lengths. Our numerical analysis shows that these Halperin states indeed have a finite correlation length. For the Laughlin WFs, all off-diagonal blocks are Jordan blocks. Thus, the decay of the correlation function of operators mixing different topological sectors is faster than the exponential decay given in Eq.~\eqref{eq:correlationfunctioncorrel}. In the MPS language, off-diagonal blocks of the transfer matrix arise when computing two-quasihole correlation functions. The plasma analogy calculation, which assumes bulk screening as opposed to the MPS derivation, shows that the two-quasihole correlation function vanishes with a Gaussian, rather than exponential, falloff, as derived in Ref.~\cite{PlasmaMooreReadAndOthers} for the Laughlin state.
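As a minimal illustration of how $\zeta(L)$ is obtained from the transfer matrix spectrum, the helper below evaluates the formula above from the two largest-modulus eigenvalues of a dense matrix; the diagonal test matrix is an arbitrary toy example, not actual Halperin data:

```python
import numpy as np

def correlation_length(E, L, ell_B=1.0):
    """zeta(L) = 2*pi*ell_B^2 / (L * log|lambda_1/lambda_2|), where lambda_1
    and lambda_2 are the two largest-modulus eigenvalues of the matrix E."""
    mods = np.sort(np.abs(np.linalg.eigvals(E)))[::-1]
    return 2 * np.pi * ell_B**2 / (L * np.log(mods[0] / mods[1]))

# Toy check: for E = diag(2, 1) and L = 2*pi*ell_B, zeta = ell_B / ln 2.
zeta = correlation_length(np.diag([2.0, 1.0]), L=2 * np.pi)
```

In practice $E$ would be restricted to a diagonal topological block, where the gap between $\lambda_1$ and $\lambda_2$ is finite at finite perimeter.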
Numerically, we observe that for all the considered Halperin $(m,m,n)$ states, the transfer matrix has non-zero eigenvalues between the topological sectors $(a,b)$ and $(a,b+2k)$ with $k \in [\![1;m-n-1]\!]$. Let us exemplify this property with the Halperin 331 case at filling $\nu=1/2$, which is known to have similarities with the Moore-Read (MR) WF~\cite{Regnault331Pfaffian,Regnault331Pfaffian1}. Reminiscent of the MR case~\cite{SchoutensMRsectors}, we see that the eight topological sectors of the Halperin 331 state can be split into four pairs as depicted in Fig.~\ref{fig:TopoSectors331}. For example $(a,b)$ and $(a+4,b)$ belong to the same pair, \textit{i.e.} they have the same center of mass momentum on a finite torus and arise from the same largest eigenvalue (in magnitude) of the transfer matrix in finite size. Thus, any off-diagonal block of the transfer matrix involving two different pairs leads to a strict zero eigenvalue (either due to different momenta or different largest eigenvalues of the transfer matrix). Numerically, we indeed observe this strict zero eigenvalue as in the Laughlin case. Within a given pair, the finite-perimeter off-diagonal correlation length $\zeta_{\rm off}(L)$ for the Halperin 331 state is finite. But as opposed to the MR case, it goes to zero when the perimeter increases (see App.~\ref{app:morenumerics}). This is a striking difference between the Abelian Halperin 331 state and the non-Abelian MR state, for which the diagonal and off-diagonal correlation lengths are equal~\cite{RegnaultCorrelLength}. Overall, this shows that for all Halperin WFs considered, any correlation function involving off-diagonal blocks of the transfer matrix decays faster than the ones in the diagonal sectors. This observation is in agreement with the plasma analogy arguments presented in App.~\ref{app:PlasmaAnalogy} which extends the ideas discussed in Ref.~\cite{PlasmaMooreReadAndOthers} to the Halperin case.
\begin{figure} \centering \includegraphics[width=\columnwidth]{figures/fillingfraction.eps} \caption{\emph{Product of the correlation length and the filling factor as a function of the filling factor. For both the Laughlin and the Halperin $(m,m,m-1)$ series, the results lie around the same value when the filling factor goes to zero. This could be evidence that the screening of local fluctuations has the same microscopic cause in both fluids.}} \label{fig:fillingfractionscaling} \end{figure} Focusing back on the diagonal correlation length, all those exhibited in Fig.~\ref{fig:InterpolCorrel} increase with decreasing filling factor $\nu$. Though the plasma analogy holds, it is difficult to extract a closed form expression for the correlation lengths and to know their explicit dependence on the filling factor. We can however try to understand the trend with the following naive argument. The denser the FQH droplets, the faster local fluctuations are screened by the electronic gas. We follow this intuitive idea and multiply the computed correlation length by the filling factor, and hence by the density. The results presented in Fig.~\ref{fig:fillingfractionscaling} show qualitative agreement with this intuitive picture for sparse Hall droplets. Denser liquids with a filling factor close to one are expected not to follow this trend. Indeed, in the limit $\nu \to 1$ we should recover the Integer Quantum Hall Effect for which the correlation function of a generic operator decays with a Gaussian rather than exponential falloff. Although we can understand the asymptotic convergence of $\nu \zeta(\infty)$ to a finite value in the limit $\nu \to 0$, the limit is not universal. Consider for instance the Laughlin $\nu=1/m$ series and the Halperin $(m,m,0)$ series which share the same correlation length but have filling factors differing by a factor two.
It is hence surprising to see that the Laughlin series and the Halperin $(m,m,m-1)$ series seem to converge to the same value, as Fig.~\ref{fig:fillingfractionscaling} points out. Although our numerical limitations and uncertainties prevent a more rigorous statement, it may be interesting to see whether the screening processes have similarities for the Laughlin states and the spin-singlet Halperin states. Deep within the screening phase, the plasma analogy reduces to Debye-H\"uckel theory~\cite{DebyeHuckel} and gives results on the Debye screening length of the plasma~\cite{PlasmaMooreReadAndOthers}. This theory fails to reproduce some of the features we see, such as the dependence of the screening length on the filling factor. We compare the prediction of this model to our data in App.~\ref{app:morenumerics}. \subsection{Entanglement Spectra} \label{sec:EntanglementSpectra} The edge theory of FQH states is encoded in their entanglement spectrum, as first exhibited by Li and Haldane~\cite{LiHaldaneEntanglementSpec}. Consider a bipartition of the system described by the WF $\ket{\psi}$ into two parts $\cal A$ and $\cal B$. Performing a Schmidt decomposition gives: \begin{equation} \ket{\psi} = \sum_i e^{-\xi_i/2} \ket{\psi_i^{\cal B}} \otimes \ket{\psi_i^{\cal A}} \, , \label{eq:schmidtdecomposition} \end{equation} where $\langle \psi_j^{\cal A} | \psi_i^{\cal A} \rangle =\langle \psi_j^{\cal B} | \psi_i^{\cal B} \rangle = \delta_{j,i}$ and $\ket{ \psi_j^{\cal A} }$ and $\ket{ \psi_i^{\cal B} }$ have different support. The $\xi_i$ are called entanglement energies and form the entanglement spectrum relative to the bipartition ${\cal A}-{\cal B}$. In order to obtain the Schmidt decomposition Eq.~\eqref{eq:schmidtdecomposition}, we first write $\ket{\psi} = \sum_j \ket{\phi_j^{\cal B}} \otimes \ket{\phi_j^{\cal A}}$ with $\ket{\phi_j^{\cal A}}$ being non-zero only in part ${\cal A}$ and $\ket{\phi_j^{\cal B}}$ non-zero only in part ${\cal B}$.
Computing overlaps between states $\ket{\phi_j^{\cal A}}$ (respectively $\ket{\phi_j^{\cal B}}$) leads to the construction of the orthonormal basis $\ket{\psi_i^{\cal A}}$ (respectively $\ket{\psi_i^{\cal B}}$) and gives the entanglement energies. Different partitions lead to different spectra and probe different physics. Mainly three partitions are used in the study of the FQH effect: the orbital entanglement spectrum (OES)~\cite{OESforFCI}, the real space entanglement spectrum (RSES)~\cite{PESmanyparticles,RSESdubail,RSESsterdyniak} and the particle entanglement spectrum (PES)~\cite{PESlattice,PESmanyparticles}. In this section, we study the first two. \subsubsection{Orbital Entanglement Spectrum} \label{sec:OrbitalEntanglementSpectrum} We start with the orbital bipartition. It consists of a cut of the system after $\ell_{\cal A}$ orbitals: part ${\cal A}$ contains all orbitals on the right of the cut while ${\cal B}$ is made of those on the left. To benefit from the block structure of the transfer matrix, we choose $\ell_{\cal A}+1$ to be a multiple of $m+n$. The MPS representation yields a natural way to decompose a state into a sum of product states in ${\cal A}$ and ${\cal B}$.
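The overlap-based route to the entanglement energies can be condensed as follows: if $G_{\cal A}$ and $G_{\cal B}$ denote the Gram matrices of overlaps within the two families $\ket{\phi_j^{\cal A}}$ and $\ket{\phi_j^{\cal B}}$, the nonzero spectrum of the reduced density matrix equals that of $G_{\cal B}^T G_{\cal A}$ up to normalization. The sketch below verifies this against a direct Schmidt (SVD) decomposition, using random vectors as stand-ins for the MPS-generated states (an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(42)
d, n = 8, 5     # toy ambient dimension and number of product terms
A = rng.standard_normal((d, n))   # columns: non-orthogonal |phi_j^A>
B = rng.standard_normal((d, n))   # columns: non-orthogonal |phi_j^B>

GA = A.T @ A    # Gram matrices of overlaps <phi_j | phi_k>
GB = B.T @ B

# Nonzero spectrum of rho_A = Tr_B |psi><psi| equals spec(GB^T GA) up to norm.
w = np.linalg.eigvals(GB.T @ GA).real
prob = w / w.sum()                # normalize so that Tr rho_A = 1
xi = -np.log(prob)                # entanglement energies xi_i

# Cross-check via an explicit Schmidt decomposition: the coefficient matrix of
# |psi> = sum_j |phi_j^B> (x) |phi_j^A> in the ambient orthonormal basis is
# C = B A^T, whose squared singular values give the Schmidt probabilities.
s = np.linalg.svd(B @ A.T, compute_uv=False)
prob_svd = s**2 / np.sum(s**2)
```

For the actual MPS states, the Gram matrices are exactly the transfer-matrix overlaps of the equations below.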
Indeed, we can decompose any state of the occupation basis as $\ket{m_{N_\phi}^\downarrow m_{N_\phi}^\uparrow \cdots m_0^\downarrow m_0^\uparrow} = \ket{m_{N_\phi}^\downarrow m_{N_\phi}^\uparrow \cdots m_{\ell_{\cal A}+1}^\downarrow m_{\ell_{\cal A}+1}^\uparrow} \otimes \ket{m_{\ell_{\cal A}}^\downarrow m_{\ell_{\cal A}}^\uparrow \cdots m_0^\downarrow m_0^\uparrow} = \ket{\{m^{\cal B}\}} \otimes \ket{\{m^{\cal A}\}}$ and use a closure relation to get: \begin{equation} \ket{\Phi_{\alpha_R}^{\alpha_L}} = \sum_{\beta \in \mathcal{H}_{\rm CFT}} \ket{\phi_\beta^{\cal B}} \otimes \ket{\phi_\beta^{\cal A}} \, , \end{equation} where \begin{align} &\ket{\phi_\beta^{\cal B}} = \sum_{\{m^{\cal B}\}} c_{\{m^{\cal B}\}}^{\alpha_L,\beta} \ket{m_{N_\phi}^\downarrow,m_{N_\phi}^\uparrow \cdots m_{\ell_{\cal A}+1}^\downarrow,m_{\ell_{\cal A}+1}^\uparrow} \notag\\ & c_{\{m^{\cal B}\}}^{\alpha_L,\beta}=\braOket{\alpha_L}{B^{(m_{N_\phi}^\downarrow,m_{N_\phi}^\uparrow)} \cdots B^{(m_{\ell_{\cal A}+1}^\downarrow,m_{\ell_{\cal A}+1}^\uparrow)}}{\beta} \, , \end{align} and a similar expression holds for $\ket{\phi_\beta^{\cal A}}$. The Schmidt decomposition of the state $\ket{\Phi_{\alpha_R}^{\alpha_L}}$ can be computed once the overlaps between the states $\ket{\phi_\beta^{\cal A}}$ in ${\cal A}$ and $\ket{\phi_\beta^{\cal B}}$ in ${\cal B}$ are known. Interpreting these states as FQH droplets living on part ${\cal A}$ and ${\cal B}$, we can use Eq.~\eqref{eq:overlaptransfermatrix}: \begin{eqnarray} & \braket{\phi_\beta^{\cal A}}{\phi_{\beta'}^{\cal A}} = \braOket{\beta, \beta'^*}{E^{\ell_{\cal A}+1}}{\alpha_R,\alpha_R^*} , \\ & \braket{\phi_\beta^{\cal B}}{\phi_{\beta'}^{\cal B}} = \braOket{\alpha_L,\alpha_L^*}{E^{N_\phi - \ell_{\cal A}}}{\beta, \beta'^*} .
\end{eqnarray} As explained in Sec.~\ref{sec:transfermatrixformalism}, in the thermodynamic limit $N_\phi \to \infty$ and $N_\phi-\ell_{\cal A} \to \infty$, overlaps of ${\cal A}$ (respectively ${\cal B}$) are given by the right (respectively left) eigenvector of the transfer matrix. It is thus enough to compute the latter to perform the Schmidt decomposition of the state $\ket{\Phi_{\alpha_R}^{\alpha_L}}$. The spectrum $\xi_i$ obtained from this decomposition can be plotted as a function of the quantum numbers in part ${\cal A}$. It can be shown that the right and left largest eigenvectors of the transfer matrix in each topological sector exhibit a block structure with respect to both charge and spin U(1)-charges and to the conformal dimension~\cite{RegnaultConstructionMPS}. They are hence good quantum numbers and we can plot the entanglement energies as a function of the conformal dimension in restricted spin and charge sectors. In that case, the conformal dimension can be identified with the momentum along the cylinder perimeter up to a shift corresponding to the total charging energy. A representative example of the results is shown in Fig.~\ref{fig:orbitalentanglementspectrum} for the Halperin 443 state. \begin{figure} \centering \includegraphics[width=\columnwidth]{figures/OrbitalSpectrumSU2Singlet.eps} \caption{\emph{Orbital entanglement spectrum for the Halperin 443 state on a cylinder with a perimeter $L=20\ell_B$. The main picture depicts the entanglement energies of states having the U(1)-charges $N_c=0$ and $N_s=0$. The state counting is the one of two free bosons as expected from the theory presented in Sec.~\ref{sec:halperin}. The inset shows the perfect overlap between different sectors of $N_s$ corresponding to the spin projection along the quantization axis. This organization of the entanglement energies $\xi_i$ in multiplets is a consequence of the SU(2) symmetry of the Halperin $(m,m,m-1)$ states.
This symmetry is preserved at any level of truncation and is exact as shown in App.~\ref{app:su2}.}} \label{fig:orbitalentanglementspectrum} \end{figure} The character of the Halperin state is the product of two Laughlin characters since it contains two free bosons. This counting is faithfully reproduced by our MPS description but this comes as no surprise given the construction of Sec.~\ref{sec:OrbIndepMPS}. The Halperin $(m,m,m-1)$ states have additional symmetries, namely they are spin singlets. Although it has been known for a long time that these states possess an SU(2) symmetry, we find it interesting to rederive the same property from the underlying CFT. The details of the derivation can be found in App.~\ref{app:su2}; the main arguments are the identification of dimension 1 spin raising and lowering operators and the use of Ward identities. As a consequence, the entanglement spectrum of these states is organized in multiplets as can be seen in the inset of Fig.~\ref{fig:orbitalentanglementspectrum}. It should be pointed out here that the truncation with respect to the conformal dimension preserves this multiplet structure and that the SU(2) symmetry is exact at \textit{any} level of truncation (see App.~\ref{app:su2}). We can additionally perform an SVD compression, keeping only a certain number of multiplets. Such an SVD compression can be performed while preserving the SU(2) symmetry of the trial WF at any step in the algorithm (see Fig.~\ref{fig:SVDreductionSU2}). The counting of the Halperin 221 state can be seen in Fig.~\ref{fig:SVDreductionSU2}, where we show all spin sectors in the same charge sector. They reproduce the first terms in the non-trivial characters of SU(3)${}_1$, which is known to be the underlying CFT~\cite{SU3level1,LecheminantSU31}.
\begin{figure} \centering \includegraphics[width=\columnwidth]{figures/SVDreduction.eps} \caption{\emph{Orbital entanglement spectrum plotted as a function of the conformal dimension (or equivalently the momentum along the cylinder perimeter) for the Halperin 221 state before (stars) and after (pluses) an SVD compression of the state. Stars and pluses are shifted for the sake of clarity, but share the same conformal dimension at each level. The multiplet structure of the OES shows that both the truncation in conformal dimension presented in Sec.~\ref{sec:Truncation} and the SVD compression on multiplets preserve the SU(2) invariance of the trial WF. The black line indicates the truncation threshold for the SVD compression.}} \label{fig:SVDreductionSU2} \end{figure} \subsubsection{Real Space Entanglement Spectrum} \label{sec:RealSpaceEntanglementSpectrum} \begin{figure} \centering \includegraphics[width=\columnwidth]{figures/CompareOESvsRSES.eps} \caption{\emph{Real Space Entanglement Spectrum for the Halperin 221 state on a cylinder of perimeter $L=10\ell_B$; we show the U(1) sector $N_c=0$ and $N_s=0$. It corresponds to a sharp cut at $x=0$ perpendicular to the cylinder axis (top). The OES for the same state is shown for comparison. For readability, the two types of spectrum have been shifted but share the same conformal dimension.}} \label{fig:OESvsRSES} \end{figure} The orbital bipartition presented above is particularly suited to the MPS description of fractional quantum Hall states given in Secs.~\ref{sec:MotivationCylinder}~-~\ref{sec:halperin} and~\ref{sec:OrbIndepMPS}. Indeed, it is the natural cut used for the physical space of the MPS (\textit{i.e.} the orbital occupation). Whenever the partition ${\cal A}-{\cal B}$ mixes the physical indices of the MPS, computing the entanglement spectrum is more involved. This is the case when we want to perform a sharp cut in the real space to compute the RSES~\cite{PESmanyparticles,RSESsterdyniak,RSESdubail}.
We consider here a sharp cut in real space, perpendicular to the cylinder axis. We call $x=0$ the position of the cut. Part ${\cal A}$ contains all points $x>0$ on the right of the real space cut while ${\cal B}$ is made of the points on the left. The one-body WF $\psi_j$ (\textit{cf.} Eq.~\eqref{eq:onebodyWF}) corresponding to orbital $j$ has support on both ${\cal A}$ and ${\cal B}$. A particle in orbital $j$ belongs to ${\cal A}$ with probability $|g_{{\cal A},j}|^2$ and to ${\cal B}$ with the complementary probability $|g_{{\cal B},j}|^2=1-|g_{{\cal A},j}|^2$, where: \begin{equation} |g_{{\cal A},j}|^2 = \dfrac{\int_{x>0} {\rm d}^2\vec{r} |\psi_j(\vec{r})|^2 }{\int {\rm d}^2\vec{r} |\psi_j(\vec{r})|^2} \, . \label{eq:coefRSES} \end{equation} A transfer matrix description of such a real space partition is presented in App.~\ref{app:RSES} and is equivalent to the derivation obtained in Refs.~\cite{ZaletelMongMPS} or~\cite{RegnaultConstructionMPS}. The idea is to weight the transfer matrix components of Eq.~\eqref{eq:transfermatrixdefinition} with the $g_{{\cal I},j}$ for ${\cal I} \in \{{\cal A},{\cal B}\}$ and to introduce a transition region near the cut. A typical RSES is shown in Fig.~\ref{fig:OESvsRSES} for the Halperin 221 state on a cylinder of perimeter $L=10 \ell_B$. For comparison, we plot the RSES together with the OES computed for the same parameters. Although both spectra show the same counting, they differ drastically in the distribution of the entanglement energies. These differences were studied in detail in Ref.~\cite{ZaletelMongMPS}. \subsection{Topological Entanglement Entropy} \label{sec:TEE} For a cut of length $L$ in real space, the Von Neumann entanglement entropy $S_{\cal A}(L) = \sum_i \xi_i e^{-\xi_i}$ is of particular interest.
Noticing that it follows an area law (see Ref.~\cite{AreaLawRevMod} or Ref.~\cite{AreaLawReview2} for a review) supports the idea of an efficient MPS description of FQH states on the cylinder (see Sec.~\ref{sec:MotivationCylinder}). More importantly for topologically ordered ground states, the first correction to the area law is a constant which is known to characterize the topological order~\cite{TopoCorrectionLevinWen,TopoCorrectionKitaev}. It is referred to as the Topological Entanglement Entropy~\cite{TopoCorrectionKitaev} (TEE) and is denoted as $\gamma$. We have: \begin{equation} S_{\cal A}(L) = \alpha L - \gamma + \mathcal{O}(L^{-1}) \, , \label{eq:AreaLaw} \end{equation} where the constant $\alpha$ depends on the microscopic details of the system while $\gamma$ is the universal TEE. Because the Halperin $(m,m,n)$ state is Abelian, the TEE is independent of the topological sector and reads $\gamma = {\rm ln} \left(\sqrt{m^2-n^2} \right) $~\cite{TopoCorrectionLevinWen,WenZeeKmatrix}. We computed the entanglement entropy (EE) for different perimeters. The Von Neumann EE follows the area law as seen in the inset of Fig.~\ref{fig:TopoCorrection221}. Our numerical work does not make any assumption on the perimeter $L$ of the cylinder, so that we can numerically evaluate the derivative of the EE with finite differences. We locally remove this linear contribution to the EE by numerically computing the derivative $(\partial S_{\cal A}/ \partial L) $, and we plot $S_{\cal A}-(\partial S_{\cal A}/ \partial L) L$ to extract the sub-leading TEE with no fitting parameters. The results are presented in Fig.~\ref{fig:TopoCorrection221} for the Halperin 221 WF. When the perimeter is too small, finite size effects dominate. On the other hand, for large $L$, the truncation of the Hilbert space prevents the convergence of the TEE. In between these two regions, we see that the EE has the expected behavior Eq.~\eqref{eq:AreaLaw}.
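The fit-free procedure just described can be illustrated on synthetic data obeying the idealized area law of Eq.~\eqref{eq:AreaLaw}: subtracting the locally estimated linear term $(\partial S_{\cal A}/\partial L)\,L$, computed by finite differences, leaves a plateau at $-\gamma$. The slope and sampling below are arbitrary illustrative assumptions:

```python
import numpy as np

alpha = 0.31                       # non-universal area-law slope (arbitrary)
gamma = np.log(np.sqrt(3))         # TEE expected for the Halperin 221 state
L = np.linspace(8.0, 16.0, 81)     # cylinder perimeters (in units of ell_B)
S = alpha * L - gamma              # idealized entanglement entropy, area law

dSdL = np.gradient(S, L)           # finite-difference derivative of the EE
plateau = S - dSdL * L             # sits at -gamma, with no fitting parameter
```

On the actual data the plateau is only approximate: finite size effects curve it at small $L$ and the truncation bends it at large $L$, as described above.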
The TEE extracted from the plateau (see Eq.~\eqref{eq:AreaLaw}) gives $0.545(5)$ which agrees with the theoretical value of $\log \sqrt{3} \simeq 0.549306$. Moreover, we have checked that the TEE is indeed the same for the different topological sectors (see Fig.~\ref{fig:TopoCorrection221}), as expected for Abelian states. \begin{figure} \centering \includegraphics[width=\columnwidth]{figures/primertopocorrec.eps} \caption{\emph{Main picture: Entanglement entropy of the Halperin 221 state with the local contribution to the area law removed, $S_{\cal A}-(\partial S_{\cal A} / \partial L) L$, as a function of the cylinder perimeter for different truncation parameters $P_{\rm max}$ and different topological sectors. The topological entanglement entropy is extracted from the plateau and agrees with the theoretical prediction (dotted line). All sectors share the same TEE while the higher-order corrections seen in the finite size effects are clearly non-universal. Inset: The entanglement entropy indeed follows an area law. }} \label{fig:TopoCorrection221} \end{figure} To avoid finite size effects, we should consider perimeters significantly larger than the correlation length. Satisfying this condition while keeping a reasonable auxiliary space dimension is often impossible and limits the size of the plateau in Fig.~\ref{fig:TopoCorrection221}. This is also why we focused on the Halperin 221 state which has the smallest correlation length (see Fig.~\ref{fig:InterpolCorrel} and App.~\ref{app:morenumerics} for a similar analysis on the Halperin 332 state). Rigorously, the area law and its first universal correction Eq.~\eqref{eq:AreaLaw} only hold true for a real space cut. It is not clear whether other corrections appear for orbital cuts. Are the significant differences between the OES and RSES seen in Fig.~\ref{fig:OESvsRSES} a mere rearrangement of the states?
A first insight~\cite{RSESdubail} is that the orbital cut is non-local and might pick up other corrections in addition to the scaling Eq.~\eqref{eq:AreaLaw}. To investigate this further, we consider the orbital entanglement entropy $S_{\cal A}^{\rm orb} = \sum_i \xi_i e^{-\xi_i}$ where the $\xi_i$ are the entanglement energies of an orbital bipartition. The results for an orbital cut can be found in Fig.~\ref{fig:DifferenceTopoCorrection}. They are qualitatively equivalent to those obtained for a real space cut: finite size effects dominate for small perimeters and the truncation limits the range of perimeters for which the area law is satisfied. The convergence is found to be much easier for the orbital cut, and to be valid for larger perimeters. However, we are not able to extract the TEE from the plateau of $S_{\cal A}^{\rm orb}-(\partial S_{\cal A}^{\rm orb} / \partial L) L$. While the plateau seems to have a small finite slope, the second derivative of $S_{\cal A}^{\rm orb}$ is comparable to the one obtained for a real space cut. Indeed, we numerically get $\left| \dfrac{\partial^2 S_{\cal A}^{\rm orb}}{\partial L^2} \right| \leq 8 \cdot 10^{-3} \ell_B^{-2}$ for $13\leq L/\ell_B \leq 22$. The same calculation for the RSES gives $\left| \dfrac{\partial^2 S_{\cal A}}{\partial L^2} \right| \leq 10^{-2} \ell_B^{-2}$ for $10 \leq L/\ell_B \leq 13$. A similar analysis with the infinite Renyi entropy for an orbital cut did not give better results. Our method suggests that the TEE can only be extracted from a real space cut in the regime of accessible perimeters.
\begin{figure} \centering \includegraphics[width=\columnwidth]{figures/CorrectionTopologiqueEntropieIntrication.eps} \caption{\emph{Main picture: Orbital Von Neumann EE without the local contribution to the area law $S_{\cal A}^{\rm orb}-(\partial S_{\cal A}^{\rm orb} / \partial L) L$ as a function of the cylinder perimeter for different truncation parameters $P_{\rm max}$ and different topological sectors. It presents the same features as Fig.~\ref{fig:TopoCorrection221}, i.e. the finite size effects dominate for small perimeters and the entropy saturates for large $L$ because of the auxiliary Hilbert space truncation. Other corrections to the area law seem to prevent us from extracting the TEE from this dataset. The theoretical TEE is depicted by the dotted line. Inset: The orbital entanglement entropy indeed follows an area law.}} \label{fig:DifferenceTopoCorrection} \end{figure} \section{Conclusion} In this article, we derived an exact MPS representation for the Halperin $(m,m,n)$ series. The derivation deals with the possible caveats of indistinguishability in the CFT formalism coming from the use of multiple electronic operators. We emphasize that our MPS has an exact SU(2) symmetry for any finite truncation parameter $P_{\rm max}$ when $n=m-1$. While our efforts were focused on two-component fluids, the core of the derivation may be extended to any richer internal structure. As an application, we have computed the bulk correlation lengths of several Halperin WFs, thus establishing that they describe gapped phases. We compared our results to predictions made with the plasma analogy and checked the conjecture made in Ref.~\cite{PlasmaMooreReadAndOthers} about the Gaussian falloff of two-quasihole correlation functions. We were able to characterize the topological content of the Halperin WFs with the unambiguous extraction of the TEE. All topological sectors share the same TEE, a signature of their Abelian nature.
With this platform in hand, future works will focus on attaining larger system sizes to quantitatively support recent experimentally oriented proposals~\cite{TwistDefect1,TwistDefect2,TwistDefect3}, which aim at realizing non-Abelian excitations by adding twist defects to more conventional Abelian FQH droplets. Large size numerical works are highly desirable to assess the feasibility of such proposals and to confirm the evidence of non-Abelian statistics already witnessed in finite size numerics~\cite{CecilecouplingSC}. \section*{Acknowledgement} We thank P. Bonderson for enlightening discussions about the plasma analogy. V.C., B.E. and N.R. were supported by the grant ANR TNSTRONG No. ANR-16-CE30-0025. BAB acknowledges support from Department of Energy DE-SC0016239, Simons Investigator Award, the Packard Foundation, and the Schmidt Fund for Innovative Research, NSF EAGER grant DMR-1643312, ONR - N00014-14-1-0330, ARO MURI W911NF-12-1-0461, NSF-MRSEC DMR-1420541.
\section{Introduction} \hspace*{.3in} Orbifold models \cite{DHVW1,DHVW2,NSV1,NSV2,IbaNilQue2,IbaNilQue}, describing the compactification of the heterotic string from ten dimensions down to four, have been extensively studied in the past due to the fact that these models are exactly solvable and that they can predict semi-realistic physics. \iffalse Although the general capacity of a relay channel and is still unknown until nowadays, many researchers have made essential progress towards the capacity bounds of the channels with relay presented. For instance, coding strategies and capacity bounds were extensively studied in for the Multiple Access Relay Channel (MARC) and in for the compound multiple access channel with a relay (cMACr). \fi Orbifold compactifications possess various continuous parameters, called moduli, corresponding to marginal deformations of the underlying conformal theory. They enter into the 4D $ {\cal N} = 1$ supersymmetric low-energy effective Lagrangian as chiral matter fields with flat potentials. Moduli take their values in a manifold called moduli space. For models yielding $ {\cal N} =1$ space-time supersymmetry this moduli space is, locally, a K\"{a}hlerian manifold. Its K\"{a}hler potential is one of the three functions \cite{CreJul,CreFer} which describe the coupling of moduli fields to $ {\cal N}=1$ supergravity in the 4D low-energy effective Lagrangian. It has been known for quite some time now that a certain class of moduli, called continuous Wilson lines, can occur whenever the gauge twist in the $E_8 \otimes E_8$ root lattice is realized not by a shift, but rather by a rotation \cite{IbaNilQue,IbaMasNilQue,Moh2}. Continuous Wilson lines are of great interest, not only because they enter into the 4D low-energy effective Lagrangian as described above, but also because turning them on leads to $(0,2)$ models with observable gauge groups smaller than the generic gauge group $E_6 \otimes H$ (where $H=SU(3),SU(2) \otimes U(1),U(1)^2$) occurring in $(2,2)$ symmetric $Z_N$ orbifold compactifications. This gauge symmetry breaking is related to the stringy Higgs effect \cite{IbaLerLu}. It was only recently \cite{Moh2}, however, that a first step towards a complete classification of the untwisted moduli space of $Z_N$ orbifold theories with continuous Wilson lines was taken.
After reviewing some of the relevant facts about toroidal and orbifold compactifications in section 2, we will in section 3 derive the local structure of the untwisted moduli space of asymmetric $Z_N$ orbifolds with continuous Wilson lines. We find that its local structure is given by a direct product of $\frac{SU(n,m)}{SU(n) \otimes SU(m) \otimes U(1)}$ and $\frac{SO(r,p)}{SO(r) \otimes SO(p)}$ cosets and that it is entirely determined by the eigenvalues of the twist $\Theta$ on the underlying Narain lattice. We then proceed with a general discussion of target space modular symmetries in asymmetric $Z_N$ orbifolds with continuous Wilson lines. This is presented in section 4. Target space duality symmetries consist of discrete reparametrisations of the moduli fields which change the geometry of the internal space but leave the underlying conformal field theory invariant. This implies that certain points in the moduli space of orbifold models have to be identified. Thus, the moduli space of the underlying conformal field theory is an orbifold and not a smooth manifold. In section 4, we also introduce two sets of standard coordinates on the $\frac{SO(r,p)}{SO(r) \otimes SO(p)}$ cosets, namely real homogeneous and real projective coordinates. This is useful because the group of modular transformations acts in a simple way on these coordinates. Next, we specialise to the case of $(0,2)$ $Z_N$ orbifold compactifications with continuous Wilson lines yielding $ {\cal N}=1$ space-time supersymmetry. We then proceed, in section 5, to explicitly construct the K\"{a}hler potentials for some of these moduli spaces. Namely, we will focus on the $Z_N$ orbifold compactifications for which the internal 6-torus $T_6$ can be decomposed into a direct sum $T_4 \oplus T_2$. Then, by using well-known techniques \cite{PorZwi,FerPor,FerKouLuZw}, we will derive the K\"{a}hler potentials for the moduli spaces associated with the underlying 2-torus $T_2$.
For the case when the twist operating on the internal $T_2$ torus has eigenvalues $-1$ we find that, in the presence of $r-2$ complex Wilson lines, the associated coset $\frac{SO(r,2)}{SO(r) \otimes SO(2)}$ no longer factorises into two submanifolds, in contrast to the $\frac{SO(2,2)}{SO(2) \otimes SO(2)}$ case with no Wilson lines turned on. Moreover, we find that the associated K\"{a}hler potential contains a holomorphic term describing the mixing of complex Wilson lines. Such a term is precisely of the type which has recently been shown \cite{LouKap,Anto} to induce a mass term for Higgs particles of the same order as the gravitino mass once supergravity is spontaneously broken at low energies. In section 6 we proceed to explicitly discuss target space modular symmetries \cite{FLST} of the K\"{a}hler potentials constructed in section 5. We show that, for the K\"{a}hler potentials we have explicitly constructed, these discrete reparametrisations induce particular K\"{a}hler transformations on the K\"{a}hler potentials. Hence, these target space duality transformations are symmetries of the 4D $ {\cal N}=1$ tree level low-energy Lagrangian \cite{FLST}. These target-space duality symmetries also manifest themselves in the string threshold corrections that are of importance for the unification of gauge couplings \cite{Kaplu,Dix,AntNar,AntGav,DFKZ,AELN,ILR,IbaLu,May,BLST1,BLST2}. We point out that, for the case where the twist operating on the internal $T_2$ has eigenvalues of $-1$, the associated $T$ and $U$ moduli mix under target space duality transformations due to the presence of the mixing terms between complex Wilson lines in the K\"{a}hler potential. We present our conclusions in section 7. \setcounter{equation}{0} \section{Toroidal and Orbifold Compactifications} \hspace*{.3in}Let us first briefly recall some of the relevant facts about toroidal compactifications with general constant background fields \cite{Nar,NSW}.
If one compactifies the ten--dimensional heterotic $E_{8} \otimes E_{8}$ string on a $d$--dimensional torus, \begin{equation} {\bf T}^{d} = \frac{ {\bf R}^{d} }{\Lambda} \end{equation} (where $\Lambda$ is a $d$--dimensional lattice) then the moduli dependent degrees of freedom can be parametrized by $16 + 2 d$ charge-like integer quantum numbers, namely the winding numbers $n^{i}$, the internal momentum numbers $m_{i}$ and the charges $q^{A}$ of the leftmoving current algebra which is generated by the sixteen extra leftmoving coordinates ($i=1,\ldots,d$, $A=1,\ldots, 16$). The moduli dependence of the untwisted states is encoded in the $16 + 2d$ dimensional Narain lattice $\Gamma$. If one expands the Narain vector ${\bf P}$ of an untwisted state in terms of a standard lattice basis \cite{Gin} $k^{i}, \overline{k}_{i}, l_{A}$, then the quantum numbers appear as components, whereas the moduli dependence is absorbed into the geometry of the lattice: \begin{equation} {\bf P} = q^{A} l_{A} + n^{i} \overline{k}_{i} + m_{i} k^{i} \in \Gamma \end{equation} The moduli are usually grouped into a symmetric matrix $G_{ij}$, which denotes the lattice metric of the $d$--dimensional lattice $\Lambda$, an antisymmetric matrix $B_{ij}$ and $d$ sixteen dimensional vectors ${\bf A}_{i}$, called Wilson lines.
The standard basis of $\Gamma$ can then be constructed in terms of a basis ${\bf e}_{A}$ of the $E_{8} \otimes E_{8}$ lattice and of bases ${\bf e}_{i}$, ${\bf e}^{i}$ of $\Lambda$ and $\Lambda^{*}$ (the dual of $\Lambda$) as a function of the moduli $G_{ij}, B_{ij}, {\bf A}_{i}$ \cite{Gin}: \begin{equation} k^{i} = \left( {\bf 0}, \frac{1}{2} {\bf e}^{i};\frac{1}{2} {\bf e}^{i} \right) \end{equation} \begin{equation} \overline{k}_{i} = \left( {\bf A}_{i}, (G_{ij} + B_{ij} - \frac{1}{4} ({\bf A}_{i} \cdot {\bf A}_{j}) ) {\bf e}^{j}; (-G_{ij} + B_{ij} - \frac{1}{4} ({\bf A}_{i} \cdot {\bf A}_{j}) ) {\bf e}^{j} \right) \end{equation} \begin{equation} l_{A} = \left( {\bf e}_{A}, -\frac{1}{2} ({\bf e}_{A} \cdot {\bf A}_{i}) {\bf e}^{i}; -\frac{1}{2} ({\bf e}_{A} \cdot {\bf A}_{i}) {\bf e}^{i} \right) \label{lA} \end{equation} Defining the $16 \times d$ matrix $A_{Ai}$ as \begin{equation} A_{Ai} = {\bf e}_{A} \cdot {\bf A}_{i} \label{DefA} \end{equation} yields \begin{equation} {\bf A}_i \cdot {\bf A}_j = C^{AB} A_{Ai} A_{Bj} \label{AiAj} \end{equation} where the metric $C_{AB}= {\bf e}_A \cdot {\bf e}_B$ for lowering and raising $A$-indices denotes the Cartan metric of the $E_8 \otimes E_8$ lattice. Another parametrization of the moduli, which will turn out to be quite useful later on, is given by a $d \times d$ matrix $D_{ij}$ defined by \begin{equation} D_{ij} = 2(B_{ij} - G_{ij} - \frac{1}{4}({\bf A}_{i} \cdot {\bf A}_{j})) \label{DefD} \end{equation} At those points in the moduli space, where one of the matrices (\ref{DefA}) and (\ref{DefD}) becomes integer valued, both the symmetry of the Narain lattice and the gauge symmetry of the model are enhanced \cite{Moh1}. 
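As an illustrative cross-check (not part of the original derivation), the moduli independence of the Gram matrix of this standard basis can be verified numerically. The sketch below shrinks the sixteen gauge dimensions to three and replaces the $E_8 \otimes E_8$ Cartan metric by an orthonormal toy basis, purely for readability; what is being tested is that the inner products of $k^i$, $\overline{k}_i$, $l_A$ come out constant for random moduli $G$, $B$, ${\bf A}_i$.

```python
import numpy as np

rng = np.random.default_rng(0)
d, g = 2, 3          # torus dimension and a reduced "gauge" dimension (16 in the text)

# Random moduli: metric G (symmetric positive definite), antisymmetric B,
# and Wilson-line vectors A_i (the columns of A, in an orthonormal gauge basis).
S = rng.normal(size=(d, d)); G = S @ S.T + d * np.eye(d)
R = rng.normal(size=(d, d)); B = R - R.T
A = rng.normal(size=(g, d))

L = np.linalg.cholesky(G)            # rows of L realize e_i with e_i . e_j = G_ij
Estar = np.linalg.inv(L).T           # dual basis e^i with e^i . e_j = delta^i_j
C = A.T @ A                          # C_ij = A_i . A_j
Mm = G + B - 0.25 * C                # coefficients of the left part of kbar_i
Nm = -G + B - 0.25 * C               # coefficients of the right part of kbar_i

vec = lambda gauge, left, right: np.concatenate([gauge, left, right])
eta = np.diag([1.0] * (g + d) + [-1.0] * d)    # signature (+)^{g+d} (-)^d

k    = [vec(np.zeros(g), 0.5 * Estar[i], 0.5 * Estar[i]) for i in range(d)]
kbar = [vec(A[:, i], Mm[i] @ Estar, Nm[i] @ Estar) for i in range(d)]
l    = [vec(np.eye(g)[a], -0.5 * A[a] @ Estar, -0.5 * A[a] @ Estar) for a in range(g)]

basis = l + kbar + k
H = np.array([[x @ eta @ y for y in basis] for x in basis])

# All moduli dependence drops out: the Gram matrix is block diag(C_AB) plus unit
# blocks pairing winding with momentum (here C_AB = 1 for the toy gauge basis).
H_expected = np.zeros((g + 2 * d, g + 2 * d))
H_expected[:g, :g] = np.eye(g)
H_expected[g:g + d, g + d:] = np.eye(d)
H_expected[g + d:, g:g + d] = np.eye(d)
assert np.allclose(H, H_expected)
```

That the assertion passes for random $G$, $B$, ${\bf A}_i$ reflects the fact that the moduli deform the embedding of the basis vectors, not their mutual inner products.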
Another useful representation of the Narain vector is to specify its components with respect to an orthonormal frame, which allows one to separate the $16 + d$ leftmoving from the $d$ rightmoving degrees of freedom \begin{equation} {\bf P} = ( {\bf P}_{L}; {\bf P}_{R} ) \end{equation} In terms of this decomposition the condition $(L_{0} - \tilde{L}_{0}) | \Phi \rangle = 0$ for physical states reads \begin{equation} \frac{1}{2} ( {\bf P}_{L}^{2} - {\bf P}_{R}^{2} ) + N + \widetilde{N} - 1 = 0 \end{equation} Since the (moduli independent) contribution of the number operators $N$ and $\widetilde{N}$ is an integer\footnote{This is true after applying the GSO condition and after absorbing the normal ordering constant of the NS sector into the definition of the rightmoving number operator.} it is evident that the Narain lattice must be an even lattice with respect to the indefinite bilinear form of type $(+)^{16 + d} (-)^{d}$. As shown by Narain \cite{Nar} modular invariance implies that the lattice $\Gamma$ must also be self-dual. Since even self-dual Lorentzian lattices are unique up to isometries, this then implies that the allowed deformations of such a lattice $\Gamma$ form a group isomorphic to $O(16 + d,\; d)$. The moduli dependent contribution to the mass $M$ of an untwisted state is given by \begin{equation} \alpha' M^{2} = ({\bf P}_{L}^{2} + {\bf P}_{R}^{2}) + \cdots \label{m15} \end{equation} Since not only the spectrum but also the interactions are invariant under the subgroup $O(16+d) \otimes O(d)$ of $O(16+d, \; d)$, the moduli space of toroidal compactifications is locally given by the coset space \cite{Nar} \begin{equation} {\cal M}_{T} \simeq \frac{ O(16+d, \; d) }{ O(16 + d) \otimes O(d) } \label{ModuliSpaceT} \end{equation} In order to get the global geometry one has to take into account further discrete identifications due to duality (also called modular) symmetries of the target space, which will be discussed later.
Toroidal compactifications are, however, not of great phenomenological interest, because they all yield models with an extended ${\cal N} = 4$ space--time supersymmetry, which does not admit chiral matter multiplets, and with gauge groups of rank $16 + d$ \cite{Nar}. They are, nevertheless, the natural starting point for the construction of more realistic models, namely orbifold models. It is well known from the work of Dixon, Harvey, Vafa and Witten \cite {DHVW1,DHVW2} that by modding out rotations both the number of space--time supersymmetries and the rank of the gauge group can be reduced. If one starts with a toroidal compactification these rotations must be automorphisms of finite order of the corresponding Narain lattice $\Gamma$ \cite{NSV1}. We will study the case in which the point twist group ${\cal P}$ defining the orbifold is a cyclic group \begin{equation} {\cal P} = \langle \Theta \rangle = \{ \Theta, \Theta^{2}, \ldots, \Theta^{N} = {\bf 1} \} \end{equation} generated by a single twist $\Theta$ satisfying \begin{equation} \Theta \in \mbox{AUT}(\Gamma),\;\;\; \Theta^{N} = {\bf 1} \end{equation} As shown by Narain, Sarmadi and Vafa \cite{NSV2} holomorphic factorization and modular invariance imply that the twist must not mix left- and rightmoving degrees of freedom. It must therefore be a rotation\footnote{ The asymmetric orbifold construction given in \cite{NSV1} is slightly more general since it also allows for the modding out of a rotation followed by a translation. Note that modding out by translations is much simpler and better understood as it is equivalent to imposing different toroidal boundary conditions.
In order to learn more about the effect of modding out rotations, we will keep the situation as simple as possible and, in the following, only consider pure rotations.} (not just a pseudo--rotation) \begin{equation} \Theta = \Theta_{L} \otimes \Theta_{R} \in O(16 + d) \otimes O(d) \label{m1} \end{equation} We will, in the next section, determine the local structure of the moduli spaces for orbifolds defined by a twist as given in (\ref{m1}). Since most of the work on orbifolds has, up to now, focused on more special constructions we will, however, first have to recall some more facts and results. From the beginning, particular attention has been paid to orbifold models that can be interpreted as compactifications on a six--dimensional orbifold \cite{DHVW1,DHVW2}. In these cases the twist $\Theta$ of the Narain lattice $\Gamma$ must act in a left-right symmetric way, to be specified below, so as to have well-defined coordinates on the internal $d$--dimensional orbifold. More precisely, the twist $\Theta$ must be given in terms of a $d$--dimensional twist $\theta$ which defines this orbifold and an additional gauge twist $\theta'$ which is an automorphism of the $E_{8} \otimes E_{8}$ root lattice. That is, if one decomposes the Narain vector as \begin{equation} {\bf P} = (p^{A}, p^{i}_{L}; p_{R}^{i} ) \end{equation} then the twist $\Theta$ must be given as \begin{equation} \Theta = \theta' \otimes \theta \otimes \theta \in O(16) \otimes (O(d) \otimes O(d) )_{\mbox{diag}} \label{m2} \end{equation} We will refer to all compactifications, for which the twist $\Theta$ is given by (\ref{m2}), as {\em orbifold compactifications}. Note that (\ref{m2}) is a special case of (\ref{m1}). One further restriction that is often used is to consider only Narain lattices of the special form $\Gamma_{16} \oplus \Gamma_{6;6}$ where $\Gamma_{16}$ denotes the root lattice of $E_{8} \otimes E_{8}$.
This means that most of the deformation parameters, namely the $16 \cdot d$ parameters corresponding to the Wilson lines $A_{Ai}$, are set to zero. One can then replace the gauge twist $\theta'$ by an equivalent shift (i.e. by a translation) which is much easier to handle. However, the price of this simplification is quite high, as the rank of the gauge group is then at least\footnote{ The rank of the gauge group cannot be reduced by shifts but only by rotations. More precisely, the rank of the gauge group of an asymmetric orbifold is greater than or equal to the number of nontrivial eigenvalues of $\Theta_{L}$, because for each eigenvalue 1 there is a twist invariant leftmoving oscillator and, therefore, an unbroken $U(1)$.} 16. Although it is possible to have nonvanishing Wilson lines when using the shift realization, they are then constrained to a discrete set of values and, hence, are not moduli of the orbifold model \cite{IbaNilQue2,IbaMasNilQue}. Since discrete Wilson lines act like additional shifts, they also cannot reduce the rank of the gauge group but only break (or extend) the gauge group. On the other hand, it was pointed out in \cite{IbaNilQue,IbaMasNilQue} that, if one realizes the gauge twist by a rotation, some of the components of the Wilson lines are still moduli and that they can be used to reduce the rank of the gauge group below 16. Thus, it is important to keep the continuous Wilson lines in the game and we will do so in the following. Clearly, a deformation of the Narain lattice can only lead to a modulus of an orbifold model if the twist $\Theta$ is still an automorphism of the deformed lattice. This was used in \cite{IbaMasNilQue} to derive a set of equations for the moduli which, in principle, allow one to decide which of the toroidal moduli are still moduli of the orbifold model and which are frozen to discrete values.
In \cite{ErlJunLau,Jun} it was shown how these equations can be explicitly solved for the moduli in the case of bosonic or heterotic orbifold compactifications without Wilson lines. For the number of surviving $G_{ij}$ and $B_{ij}$ moduli closed formulas were derived. This was later \cite{Moh2} generalized to heterotic orbifold compactifications with continuous Wilson lines. One drawback of the approach used in \cite{Moh2} is that one can derive the number of moduli, but the expected coset structure of the moduli space remains obscure. In the case of vanishing Wilson lines this coset structure was derived in \cite{FKP,CveLouOvr} for all the ${\bf Z}_{N}$ orbifold compactifications with ${\cal N} = 1$ and ${\cal N} = 2$ space--time supersymmetry. In that approach one uses symmetries of the world sheet action to constrain the K\"ahler potential appearing in the 4D effective action. The associated coset is then obtained from the explicit expression of the K\"ahler potential. In the next section we will use a different method for determining the local structure of the moduli space of asymmetric $Z_N$ orbifolds. We will not make use of the effective action, but rather of the compatibility equation between the Narain twist $\Theta$ and the moduli. Note that we will be dealing with orbifolds defined by a twist $\Theta$ as given in (\ref{m1}). \setcounter{equation}{0} \section{The coset structure of asymmetric ${\bf Z}_{N}$ orbifolds} \hspace*{.3in} Consider the coset (\ref{ModuliSpaceT}) which parametrizes the moduli space of toroidal compactifications locally. The simplest way of arriving at the untwisted moduli space of a general ${\bf Z}_{N}$ orbifold (locally) is to find the subspace of this coset that is compatible with the action of the twist $\Theta$ on the underlying Narain lattice $\Gamma=$ $\Gamma_{16 + d;d}$. Suppose now that $\Gamma$ is a lattice on which $\Theta$ acts as an automorphism.
A deformation ${\cal T} \in O(16+d,\;d)$ of $\Gamma$ is compatible with $\Theta$ if and only if $\Theta$ is also an automorphism of the deformed lattice $\Gamma' = {\cal T} (\Gamma)$. But this is, by inspection, equivalent to ${\cal T}^{-1} \Theta {\cal T}$ being in the point group $\cal{P}$ of $\Gamma$. Since we are taking\footnote{There may, of course, exist lattices which are more symmetric than required when modding out by $\Theta$ and, therefore, have bigger point groups. It is possible to define orbifolds by modding out these bigger groups. The number of allowed deformations will then be different. For our purpose these models are just a subset of models with extended symmetry because we want to find all lattices whose point {\em symmetry} group contains the cyclic group generated by $\Theta$, which then is chosen to be the point {\em twist} group.} the point group $\cal P$ to be the cyclic group generated by $\Theta$, this then means that \begin{equation} {\cal T}^{-1} \Theta {\cal T} = \Theta^{k} \;\;, 1 \leq k < N \label{m13} \end{equation} for some $k$. That is, $\cal T$ is in the {\em normalizer} $\cal{N}$ of the point group $\cal P$ in $O(16+d,\;d)$ \begin{equation} {\cal T} \in {\cal N} ( {\cal P}, O(16+d,\; d) ) \label{m14} \end{equation} Statement (\ref{m14}), namely that $\cal T$ has to be in the normalizer $\cal N$ of the point group, also holds for bigger (abelian or non-abelian) point groups $\cal P$ which have more than one generator. It is, though, intuitively clear that a twist $\Theta$ and a deformation $\cal T$ do not have to strictly commute, but that they have to commute on orbits, that is up to point transformations as in (\ref{m13}). Of course, only those deformations $\cal T$ with $k=1$ can be continuously connected to the identity, whereas the others will describe nontrivial, discrete deformations. This corresponds to the appearance of discrete background fields in the standard approach \cite{Erl}.
On the other hand, any special solution of equation (\ref{m13}) with $k \not= 1$ can be continuously deformed by any solution to (\ref{m13}) with $k = 1$. This means that, in order to identify the moduli, one has to find the general solution to (\ref{m13}) with $k=1$. We will, therefore, in the following only deal with the (most general) case of purely continuous background fields and set $k=1$. After introducing matrices with respect to an orthonormal basis of ${\bf R}^{16+d, \; d}$ we have to solve the homogeneous matrix equation \begin{equation} [\Theta, {\cal T}] = \mbox{\bf 0} \label{comequ} \end{equation} for\footnote{ We use the same characters for the maps $\Theta$ and $\cal T$ themselves and for the matrices representing them.} $\cal T$. We proceed to show that the solution space of this equation depends only on the eigenvalues of $\Theta$. The method used in the following is a modification of the method used in \cite{FerFreSor} for ${\bf Z}_{3}$ orbifold compactifications without Wilson lines. First recall that $\Theta$ must be a proper rotation (\ref{m1}), that is, it must be an element of $O(16+d) \otimes O(d)$. The eigenvalues of the twist are $N$--th roots of unity. Those which are real must be equal to $\pm 1$, whereas the complex ones come in pairs of complex conjugate numbers of unit modulus. Let us denote the number of eigenvalues $\pm1$ in the left (right) part of $\Theta$ by $d_{\pm 1}^{(L)}$ ($d_{\pm 1}^{(R)}$) and the total number by $d_{\pm 1}$ $=$ $d_{\pm 1}^{(L)} + d_{\pm 1}^{(R)}$. Analogously the number of complex pairs $\exp(\pm i \phi_{k})$ of eigenvalues of the left (right) part is denoted by $p_{k}$ ($q_{k}$).
By relabeling the orthonormal basis of $\mbox{\bf R}^{16+d,\;d}$, the matrix of the twist $\Theta$ can be brought to the form \begin{equation} \left( \begin{array}{cccccc} \begin{array}{cc} R_{1} & 0 \\ 0 & R_{1}' \\ \end{array} & {\bf 0} & \cdots & \cdots & \cdots & {\bf 0} \\ {\bf 0} & \ddots & & & & \vdots \\ \vdots & & \begin{array}{cc} R_{i} & 0 \\ 0 & R_{i}' \\ \end{array} & & & \vdots \\ \vdots & & & \ddots & & \vdots \\ \vdots & & & & \begin{array}{cc} - {\bf 1}_{d_{-1}^{(L)} } & 0 \\ 0 & - {\bf 1}_{d_{-1}^{(R)}} \\ \end{array} & {\bf 0} \\ {\bf 0} & \cdots & \cdots & \cdots & {\bf 0} & \begin{array}{cc} {\bf 1}_{d_{1}^{(L)} } & 0 \\ 0 & {\bf 1}_{d_{1}^{(R)}} \\ \end{array} \\ \end{array} \right) \end{equation} where \begin{equation} R_{i} = \left( \begin{array}{cc} c_{i} {\bf 1}_{p_{i}} & -s_{i} {\bf 1}_{p_{i}} \\ s_{i} {\bf 1}_{p_{i}} & c_{i} {\bf 1}_{p_{i}} \\ \end{array} \right) \;\;,\;\; R_{i}' = \left( \begin{array}{cc} c_{i} {\bf 1}_{q_{i}} & -s_{i} {\bf 1}_{q_{i}} \\ s_{i} {\bf 1}_{q_{i}} & c_{i} {\bf 1}_{q_{i}} \\ \end{array} \right) \end{equation} with \begin{equation} c_{i} = \cos(\phi_{i}) \;\;,\;\; s_{i} = \sin(\phi_{i}) \end{equation} Then the matrix of an admissible deformation $\cal T$ with respect to the same basis has the blockdiagonal form \begin{equation} \left( \begin{array}{cccccc} T_{1} & {\bf 0} & \cdots & \cdots & \cdots & {\bf 0} \\ {\bf 0} & \ddots & & & & \vdots \\ \vdots & & T_{i} & & & \vdots \\ \vdots & & & \ddots & & \vdots \\ \vdots & & & & P & {\bf 0} \\ {\bf 0} & \cdots & \cdots & \cdots & {\bf 0} & Q \\ \end{array} \right).
\end{equation} Since $\cal T$ $\in$ $O(16 + d,\; d)$ we get that \begin{equation} T_{i} \in O(2p_{i},\; 2q_{i} ), \;\;\; P \in O(d_{-1}^{(L)},\; d_{-1}^{(R)}), \;\;\; Q \in O(d_{1}^{(L)},\; d_{1}^{(R)} ) \end{equation} Whereas the matrices $P$ and $Q$ are not further constrained by the commutator equation (\ref{comequ}), the $T_{i}$ must commute with the twist matrix restricted to the $i$--th complex eigenspace. Decomposing $T_{i}$ into suitable blocks as \begin{equation} T_{i} = \left( \begin{array}{cc} A & B \\ C & D \end{array} \right) \end{equation} yields (\ref{comequ}) in the form \begin{equation} \left[ \left( \begin{array}{cc} R_{i} & 0 \\ 0 & R_{i}' \end{array} \right), \left( \begin{array}{cc} A & B \\ C & D \end{array} \right) \right] = \mbox{\bf 0} \label{comeq} \end{equation} The blocks $A,B,C,D$ depend of course on the index $i$, but since the different eigenspaces decouple it is possible and convenient to suppress this label. Equation (\ref{comeq}) implies that $R_{i} A = A R_{i}$ for the $2p_{i} \times 2p_{i}$ block $A$. Similar equations hold for $B, C$ and $D$. These equations can now again be analyzed blockwise. In the case of $A$, (\ref{comeq}) gives \begin{equation} [A, R_{i}] = \left[ \left( \begin{array}{cc} A_{1} & A_{2} \\ A_{3} & A_{4} \\ \end{array} \right), \left( \begin{array}{cc} c_{i} \mbox{\bf 1}_{p_{i}} & -s_{i} \mbox{\bf 1}_{p_{i}} \\ s_{i} \mbox{\bf 1}_{p_{i}} & c_{i} \mbox{\bf 1}_{p_{i}} \\ \end{array} \right) \right] = \mbox{\bf 0} \end{equation} implying that only two of the four $p_{i} \times p_{i}$ blocks $A_{i}$ are independent, namely \begin{equation} A = \left( \begin{array}{cc} A'& -A'' \\ A''& A' \\ \end{array} \right) \end{equation} where $A' = A_{1} = A_{4}, \;\; A''= -A_{2} = A_{3}$. The other blocks $B, C, D$ of $T_{i}$ have the same structure. The off--diagonal blocks $B$ and $C$ are, however, in general not square.
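The block structure just derived lends itself to a quick numerical spot-check (ours, for illustration only): any matrix built from two arbitrary blocks $A'$, $A''$ commutes with $R_i$, and a dimension count of the full commutant confirms that these $2p_i^2$ real parameters exhaust the solutions.

```python
import numpy as np

p, phi = 2, 0.7                      # a generic rotation angle, p copies
c, s = np.cos(phi), np.sin(phi)
R = np.block([[c * np.eye(p), -s * np.eye(p)],
              [s * np.eye(p),  c * np.eye(p)]])

# Every matrix of the claimed form (A', A'' arbitrary) commutes with R ...
rng = np.random.default_rng(1)
A1, A2 = rng.normal(size=(p, p)), rng.normal(size=(p, p))
A = np.block([[A1, -A2], [A2, A1]])
assert np.allclose(R @ A, A @ R)

# ... and counting all solutions of [R, X] = 0 shows there are no others:
# vec([R, X]) = (1 (x) R - R^T (x) 1) vec(X), so the commutant is a null space.
K = np.kron(np.eye(2 * p), R) - np.kron(R.T, np.eye(2 * p))
commutant_dim = 4 * p * p - np.linalg.matrix_rank(K)
assert commutant_dim == 2 * p * p    # exactly the 2 p^2 parameters A', A''
```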
The matrices $T_{i}$ form a group called the centralizer of $\Theta$ (restricted to the $i$--th eigenspace) in $O(2p_{i}, \; 2q_{i})$. The special structure found for these matrices resembles the one appearing in the standard isomorphism between $GL(n, \mbox{\bf C})$ and a $2n^{2}$ dimensional subgroup of $GL(2n, \mbox{\bf R})$ given by \begin{equation} GL(n, \mbox{\bf C}) \ni m = m' + i m'' \longleftrightarrow M = \left( \begin{array}{cc} m' & -m'' \\ m'' & m' \\ \end{array} \right) \in GL(2n, \mbox{\bf R}) \label{standaut} \end{equation} The only modification needed is a permutation of certain rows and columns in $T_{i}$, in order to reposition some of the blocks. Since such a permutation is an automorphism of $GL(2p_{i} + 2 q_{i})$ the composition with (\ref{standaut}) yields again an isomorphism \begin{equation} T_{i} = \left( \begin{array}{cccc} A' & -A'' & B' & - B''\\ A'' & A' &B'' & B'\\ C' & - C'' & D' & - D'' \\ C'' & C' & D'' & D' \\ \end{array} \right) \longleftrightarrow \left( \begin{array}{cccc} A'& B' &-A'' & -B'' \\ C'& D' &-C'' & -D'' \\ A''& B'' & A'& B' \\ C''&D'' & C'& D' \\ \end{array} \right) \longleftrightarrow \end{equation} \begin{equation} \longleftrightarrow \left( \begin{array}{cc} A' + i A'' & B' + i B''\\ C' + i C'' & D' + i D'' \\ \end{array} \right) = \left( \begin{array}{cc} a & b \\ c & d \\ \end{array} \right) = t_{i} \end{equation} Since the $T_{i}$ must not only commute with the twist but also be in $O(2p_{i},\; 2q_{i})$, we finally have to translate this into a condition for the $t_{i}$. 
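That the embedding (\ref{standaut}) respects products and inverses, and hence is a group isomorphism onto its image, is easy to confirm numerically; the following sketch is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3

def to_real(m):
    """The standard embedding GL(n, C) -> GL(2n, R): m' + i m'' -> [[m', -m''], [m'', m']]."""
    return np.block([[m.real, -m.imag], [m.imag, m.real]])

m1 = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
m2 = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

# The embedding is multiplicative and maps inverses to inverses,
# so it identifies GL(n, C) with a subgroup of GL(2n, R).
assert np.allclose(to_real(m1) @ to_real(m2), to_real(m1 @ m2))
assert np.allclose(np.linalg.inv(to_real(m1)), to_real(np.linalg.inv(m1)))
```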
The pseudo--orthogonality\footnote{The condition on the determinant does not lead to a relation between the matrix elements of $T_{i}$, because the determinant of a pseudo--orthogonal matrix can only take discrete values.} of $T_{i}$ can be expressed in terms of the blocks $A,B,C,D$ as \begin{equation} \begin{array}{ll} A^{T}A - C^{T}C = \mbox{\bf 1},& A^{T}B - C^{T}D = \mbox{\bf 0},\\ B^{T}A - D^{T}C = \mbox{\bf 0},& B^{T}B - D^{T}D= -\mbox{\bf 1} \\ \end{array} \end{equation} These relations imply that the blocks of $t_{i}$ satisfy \begin{equation} \begin{array}{ll} a^{+} a - c^{+}c = \mbox{\bf 1},& a^{+}b - c^{+}d = \mbox{\bf 0},\\ b^{+}a - d^{+}c= \mbox{\bf 0},& b^{+}b - d^{+}d= -\mbox{\bf 1}\\ \end{array} \end{equation} This means that $t_{i}$ is pseudo--unitary, $t_{i} \in$ $U(p_{i},\; q_{i})$. Therefore the group of those deformations in the $i$--th eigenspace that commute with the twist is (at least locally) isomorphic to $U(p_{i},\; q_{i})$. Combining the above results for all the blocks in the decomposition of a general $\cal T$ $\in$ $O(16 +d,\; d)$ we have shown that those deformations $\cal T$ commuting with the twist $\Theta$ form a subgroup isomorphic to \begin{equation} \bigotimes_{i=1}^{K} U(p_{i},\; q_{i}) \otimes O(d_{-1}^{(L)},\;d_{-1}^{(R)}) \otimes O(d_{1}^{(L)},\;d_{1}^{(R)}) \end{equation} where $K$ is the total number of distinct pairs of complex eigenvalues. However, those deformations $\cal T$ which are pure rotations, $\cal T$ $\in$ $O(16+d) \otimes O(d)$, do not change the physical content of a model. To get the (untwisted) moduli space (up to modular transformations) we have to divide out this subgroup of rotations.
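The passage from the pseudo-orthogonal $T_i$ to the pseudo-unitary $t_i$ can also be tested numerically. In the sketch below (illustrative; the construction of a generic $U(p,q)$ element from two block-unitaries and a boost is our own device, with $p=2$, $q=1$) a pseudo-unitary $t$ is mapped to its real image $T$, which is then checked to lie in $O(2p,\,2q)$ and to commute with the twist restricted to the eigenspace.

```python
import numpy as np

rng = np.random.default_rng(3)
p, q, r = 2, 1, 0.8
eta = np.diag([1.0] * p + [-1.0] * q)

def unitary(n):
    Q, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
    return Q

# A generic element of U(p, q): block-unitary x boost x block-unitary.
boost = np.eye(p + q, dtype=complex)
boost[p - 1, p - 1] = boost[p, p] = np.cosh(r)
boost[p - 1, p] = boost[p, p - 1] = np.sinh(r)
blk = lambda u, v: np.block([[u, np.zeros((p, q))], [np.zeros((q, p)), v]])
t = blk(unitary(p), unitary(q)) @ boost @ blk(unitary(p), unitary(q))
assert np.allclose(t.conj().T @ eta @ t, eta)          # t is in U(p, q)

# Real image: each complex block m goes to [[m', -m''], [m'', m']].
re = lambda m: np.block([[m.real, -m.imag], [m.imag, m.real]])
a, b = t[:p, :p], t[:p, p:]
c, d = t[p:, :p], t[p:, p:]
T = np.block([[re(a), re(b)], [re(c), re(d)]])

eta_R = np.diag([1.0] * (2 * p) + [-1.0] * (2 * q))
assert np.allclose(T.T @ eta_R @ T, eta_R)             # T is in O(2p, 2q)

# T also commutes with the twist restricted to this eigenspace.
phi = 2 * np.pi / 3
rot = lambda n: np.block([[np.cos(phi) * np.eye(n), -np.sin(phi) * np.eye(n)],
                          [np.sin(phi) * np.eye(n),  np.cos(phi) * np.eye(n)]])
Theta_i = np.block([[rot(p), np.zeros((2 * p, 2 * q))],
                    [np.zeros((2 * q, 2 * p)), rot(q)]])
assert np.allclose(Theta_i @ T, T @ Theta_i)
```

Note that the last two assertions hold for an arbitrary pseudo-unitary $t$: both properties follow from the block relations alone, which is the content of the correspondence in the text.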
The resulting coset space is given by \begin{equation} {\cal M}_{O}(\Theta) \simeq \bigotimes_{i=1}^{K} \frac{ SU(p_{i},\; q_{i}) }{SU(p_{i}) \otimes SU( q_{i}) \otimes U(1)} \otimes \frac{SO(d_{- 1}^{(L)},\;d_{- 1}^{(R)})} {SO( d_{- 1}^{(L)} ) \otimes SO( d_{- 1}^{(R)} )} \otimes \frac{SO(d_{1}^{(L)},\;d_{1}^{(R)})} {SO( d_{1}^{(L)} ) \otimes SO( d_{1}^{(R)} )}. \label{ModuliSpaceO} \end{equation} Note that we have made use of the local isomorphisms $U(p,\;q)$ $\simeq$ $SU(p,\;q) \otimes U(1)$ and $O(p,\;q) \simeq SO(p,\;q)$ to bring our result into the form usually used in the supergravity literature. As claimed above, the local structure of the untwisted moduli space is completely determined by the eigenvalues of the twist $\Theta$. The dimension of the moduli space is \begin{equation} \mbox{dim} ( {\cal M}_{O}(\Theta) ) = 2 \sum_{i=1}^{K} p_{i} q_{i} + d_{-1}^{(L)} d_{-1}^{(R)} + d_{1}^{(L)} d_{1}^{(R)} \label{DimMO} \end{equation} It depends only on the multiplicities of the eigenvalues of $\Theta$. Moduli exist only if an eigenvalue appears both in the left and in the right part of the twist. We can now compare our result (\ref{ModuliSpaceO}) with the coset spaces found in \cite{FKP,CveLouOvr} for the ${\bf Z}_{N}$ orbifold compactifications without Wilson lines yielding ${\cal N} = 1$ and ${\cal N} = 2$ space-time supersymmetry. In order to do so, we simply have to restrict ourselves to an $O(d,d)$ subsector and to set $d=6$ as well as $\Theta_{L} = \Theta_{R}$ $=\theta$. Then, by plugging into (\ref{ModuliSpaceO}) the well-known eigenvalues of the symmetric ${\bf Z}_{N}$ twists leading to ${\cal N} =1,2$ space-time supersymmetry \cite{DHVW2,ErlKle}, we find that all the results agree\footnote{In fact, it is easily seen that the form of the world sheet action which is crucial in the approach of \cite{CveLouOvr} depends only on the eigenvalues of $\theta$ and their multiplicities.}, as expected.
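The dimension formula (\ref{DimMO}) can be confirmed by brute-force linear algebra in a small example (ours, purely illustrative): take one pair of complex eigenvalues with $p=2$, $q=1$ and a single eigenvalue $-1$ on each side, count the solutions of the commutator and pseudo-antisymmetry constraints at the Lie-algebra level, and subtract the compact directions.

```python
import numpy as np

phi, p, q = 2 * np.pi / 3, 2, 1      # one pair of complex eigenvalues exp(+-i phi)
dLm, dRm = 1, 1                      # multiplicities of the eigenvalue -1 (left/right)

def rot(n):
    c, s = np.cos(phi), np.sin(phi)
    return np.block([[c * np.eye(n), -s * np.eye(n)],
                     [s * np.eye(n),  c * np.eye(n)]])

def block_diag(*bs):
    n = sum(b.shape[0] for b in bs)
    out, i = np.zeros((n, n)), 0
    for b in bs:
        m = b.shape[0]; out[i:i + m, i:i + m] = b; i += m
    return out

Theta = block_diag(rot(p), rot(q), -np.eye(dLm), -np.eye(dRm))
eta = np.diag([1.0] * (2 * p) + [-1.0] * (2 * q) + [1.0] * dLm + [-1.0] * dRm)
n = Theta.shape[0]

def solution_dim(compact):
    """dim of {X : [Theta, X] = 0, X^T eta + eta X = 0 (and X antisymmetric if compact)}."""
    cols = []
    for j in range(n * n):
        X = np.zeros(n * n); X[j] = 1.0; X = X.reshape(n, n)
        cons = [Theta @ X - X @ Theta, X.T @ eta + eta @ X]
        if compact:
            cons.append(X + X.T)     # intersect with the compact subalgebra
        cols.append(np.concatenate([c.ravel() for c in cons]))
    return n * n - np.linalg.matrix_rank(np.array(cols).T)

dim_full    = solution_dim(False)    # centralizer of Theta in the pseudo-orthogonal algebra
dim_compact = solution_dim(True)     # its intersection with the rotations
assert dim_full - dim_compact == 2 * p * q + dLm * dRm   # coset dimension: 2*2*1 + 1 = 5
```

Here the centralizer has dimension $9 + 1 = 10$ (a $u(2,1)$ and an $o(1,1)$ factor), its compact part dimension $4 + 1 = 5$, and the difference reproduces $2pq + d_{-1}^{(L)} d_{-1}^{(R)} = 5$.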
As a straightforward generalization we can now similarly write down the cosets for all these models with continuous Wilson lines turned on. The result will, of course, now also depend on the gauge twist $\theta'$ and its eigenvalues. Since the choice of a gauge twist is also constrained by world sheet modular invariance one has to proceed as follows. First, one has to find all $E_{8} \otimes E_{8}$ Weyl twists $\theta'$ which have the required order and lead to modular invariant twists $\Theta$. Then one has to calculate their eigenvalues in order to get the coset. To carry out this program will require some work because there are, in general, many gauge twists satisfying the constraints from modular invariance, especially for higher $N$. Based on \cite{KKKOT} one can estimate that there will be roughly 500 models. This will, therefore, be the subject of a later publication \cite{CarLuMoh}. However, to give an explicit example, we will list the cosets for all modular invariant ${\bf Z}_{3}$ orbifold compactifications with ${\cal N}=1$ space-time supersymmetry. This is easy to do, since both $\theta$ and $\theta'$ consist of several copies of the $A_{2}$ Coxeter twist\footnote{ $A_{2}$ is the complex form of $su(3)$.}. This is a rotation by 120 degrees and therefore has the eigenvalues $\exp(\pm 2 \pi i/3)$. More precisely, $\theta$ contains three copies of this twist and the gauge twist $\theta'$ is constrained by modular invariance to contain 0, 3 or 6 further copies. This leads to five inequivalent models \cite{IbaNilQue2}. In table (\ref{Z3}) we list these twists together with the corresponding moduli spaces and the maximal and minimal gauge group. The maximal gauge group is the one of the model with vanishing Wilson lines and can be found in \cite{IbaNilQue2} or in the table of $E_{8}$ automorphisms given in \cite{HolMyh}.
The minimal gauge group is the one for generic (purely continuous) Wilson lines and can be calculated using the method introduced in \cite{Moh2}. \begin{table} \[ \begin{array}{|c|c|c|c|} \hline \mbox{Gauge Twist} & \mbox{Coset} & \mbox{Max. Gauge Group} & \mbox{Min. Gauge Group} \\ \hline \hline \emptyset \otimes \emptyset & \frac{ SU(3,\;3)}{ SU(3) \otimes SU(3) \otimes U(1) } & E_{8} \otimes E_{8}\,' & E_{8} \otimes E_{8}\,' \\ \hline A_{2}^{2} \otimes A_{2} & \frac{ SU(6,\;3)}{ SU(6) \otimes SU(3) \otimes U(1) } & (SO(14) \otimes U(1) ) \otimes (E_{7} \otimes U(1))\,' & (SU(3) \otimes SU(3) ) \otimes E_{6}\,' \\ \hline A_{2}^{3} \otimes \emptyset & \frac{ SU(6,\;3)}{ SU(6) \otimes SU(3) \otimes U(1) } & (E_{6} \otimes SU(3)) \otimes E_{8}\,' & SU(3) \otimes E_{8}\,' \\ \hline A_{2}^{3} \otimes A_{2}^{3} & \frac{ SU(9,\;3)}{ SU(9) \otimes SU(3) \otimes U(1) } & (E_{6} \otimes SU(3)) \otimes (E_{6} \otimes SU(3))\,' & SU(3) \otimes SU(3)\,' \\ \hline A_{2}^{4} \otimes A_{2}^{2} & \frac{ SU(9,\;3)}{ SU(9) \otimes SU(3) \otimes U(1) } & SU(9) \otimes (SO(14) \otimes U(1) )\,' & (SU(3)\otimes SU(3))\,' \\ \hline \end{array} \] \caption{Table of all ${\bf Z}_{3}$ orbifold compactifications with ${\cal N} =1$ space-time supersymmetry.} \label{Z3} \end{table} Let us conclude with two further remarks. The first is that the formula (\ref{DimMO}) coincides with the one derived in \cite{Moh2} for orbifold compactifications with continuous Wilson lines and a special choice of the gauge twist. Conversely the results of this section indicate that it should be possible to generalize the results of \cite{Moh2} to the general asymmetric case. The final remark concerns the K\"{a}hlerian structure of the cosets given in (\ref{ModuliSpaceO}). 
Whereas the $SU$ cosets are K\"ahlerian for any value of the parameters, the $SO$ cosets are K\"ahlerian if they are isomorphic to $SU$ cosets by some accidental isomorphism of (low dimensional) Lie groups or if one of the parameters equals 2, that is, for cosets \cite{Gil} \begin{equation} \frac{SO(p,\; 2)}{SO(p) \otimes SO(2)} \end{equation} For ${\cal N}=1$ supersymmetric orbifold compactifications it is well known that the eigenvalue $+1$ does not appear, whereas $-1$ only appears with multiplicities 0 or 2 \cite{DHVW2,ErlKle}. Therefore, the moduli spaces of ${\cal N}=1$ ${\bf Z}_{N}$ orbifold compactifications with continuous Wilson lines found here are all K\"ahlerian. For general asymmetric ${\bf Z}_{N}$ orbifolds the situation is less clear, since the necessary and sufficient condition for ${\cal N} =1$ space-time supersymmetry is not known. Our result suggests that the only real eigenvalue $\Theta_{L}$ may have is $-1$, with multiplicity 0 or 2 as in the compactification case. Note, however, that our investigations have been restricted to the untwisted sector and that it has been pointed out recently \cite{Sas} that space-time supercharges may also appear from twisted sectors in asymmetric orbifolds. \setcounter{equation}{0} \section{Modular symmetries} \hspace*{.3in} The fact that the naive moduli spaces will contain several copies of the same model is clear from the beginning since one can easily imagine that there will be large deformations ${\cal T} \in O(16+d,\,d)$ which are at the same time automorphisms of the Narain lattice and therefore do not lead to a different model. Therefore all transformations of the type \begin{equation} {\cal T} \in O(16+d,\,d) \cap \mbox{AUT}(\Gamma) \end{equation} are symmetries of the toroidal moduli space ${\cal M}_{T}$. 
Those which also fulfill the normalizer constraint for a twist $\Theta$ \begin{equation} {\cal T} \in {\cal N}(\langle \Theta \rangle, O(16+d,\;d) \cap \mbox{AUT}(\Gamma)) \end{equation} are then symmetries of the orbifold moduli space ${\cal M}_{O}(\Theta)$. In the following we will recall and extend the analysis performed by Spalinski \cite{Spa}. In this approach one first finds the action of modular transformations on the quantum numbers and then derives the induced action on the moduli themselves. To do so, one first writes down the indefinite bilinear form in the lattice basis (in matrix notation) as \begin{equation} {\bf P}_{L}^{2} - {\bf P}_{R}^{2} = v^{T} H v \end{equation} Here $v$ is a vector consisting of the $16 + 2d$ integer quantum numbers which label a state, \begin{equation} v^{T} = ( q^{A}, n^{i}, m_{i} ) \in {\bf Z}^{16 + 2d}, \;\;\; A= 1,\ldots, 16, \; i = 1,\ldots, d, \end{equation} and $H$ is the lorentzian lattice metric of $\Gamma$ given as \begin{equation} H = \left( \begin{array}{ccc} (l_{A}, l_{B}) & (l_{A}, \overline{k}_{j}) & (l_{A}, k^{n}) \\ (\overline{k}_{i}, l_{B})& (\overline{k}_{i}, \overline{k}_{j}) & (\overline{k}_{i}, k^{n}) \\ (k^{m}, l_{B}) & (k^{m}, \overline{k}_{j})& (k^{m}, k^{n}) \\ \end{array} \right) = \left( \begin{array}{ccc} C & 0 & 0 \\ 0 & 0 & I \\ 0 & I & 0 \\ \end{array} \right) \label{M5} \end{equation} where $C$ is the Cartan matrix of $E_{8} \otimes E_{8}$ and $I$ is the $d \times d$ unit matrix. Modular symmetry transformations $\cal T$ can now also be described in terms of their matrices $\Omega$ with respect to the lattice basis, which act as \begin{equation} v \rightarrow v' = \Omega^{-1} v \end{equation} on the quantum numbers. 
To be a symmetry, the matrix $\Omega$ must be integer valued, \begin{equation} \Omega \in GL(16 + 2d, {\bf Z}) \Leftrightarrow {\cal T} \in \mbox{AUT}(\Gamma) \end{equation} and it must leave the indefinite bilinear form invariant \begin{equation} \Omega^{T} H \Omega = H \Leftrightarrow {\cal T} \in O(16 +d,d) \label{M8} \end{equation} These two conditions combined define a matrix group, \begin{equation} G_{T} = \{ \Omega \in GL(16 + 2d,{\bf Z}) | \Omega^{T} H \Omega = H \} \label{M9} \end{equation} which is called the modular invariance group of toroidal compactifications. It is usually denoted by $O(16 + d,d; {\bf Z})$ for obvious reasons, although it is not a group of pseudo--orthogonal matrices. To get the modular invariance group for an orbifold one simply has to add the normalizer constraint. Then \begin{equation} G_{O} = {\cal N}( \langle \Theta \rangle, G_{T} ) = \{ \Omega \in GL(16 +2d, {\bf Z}) | \Omega^{T} H \Omega = H, \;\;\; \Omega^{-1} \Theta \Omega = \Theta^{k}, \exists k:\; 1 \leq k < N \} \label{M10} \end{equation} This last formula was given by Spalinski \cite{Spa} but without mentioning its group-theoretical interpretation. The calculation of such normalizers is in general a difficult task which must be done case by case. Nevertheless, several examples have been discussed in the literature \cite{FerFreSor,Spa,ErlSpa,Erl,BLST1,BLST2} for models with no or with discrete Wilson lines. As pointed out in \cite{ErlSpa} a factorization of the modular invariance group into factors corresponding to different eigenvalues of the twist can only be expected if the underlying lattice itself decomposes into an orthogonal direct sum, which is not the case generically. Therefore the local decomposition of orbifold moduli spaces into a product of coset spaces does not imply a corresponding decomposition of $G_{O}$. The second step is to deduce the action of modular symmetry transformations on the moduli. This can be done through the mass formula (\ref{m15}). 
Therefore we express the euclidean bilinear form in lattice coordinates as \begin{equation} {\bf P}_{L}^{2} + {\bf P}_{R}^{2} = v^{T} \Sigma v \label{M11} \end{equation} The mass matrix $\Sigma$ which encodes the complete moduli dependence of the whole spectrum is the euclidean lattice metric \begin{equation} \Sigma = \left( \begin{array}{ccc} l_{A} \cdot l_{B} & l_{A} \cdot \overline{k}_{j} & l_{A} \cdot k^{n} \\ \overline{k}_{i} \cdot l_{B} & \overline{k}_{i} \cdot \overline{k}_{j} & \overline{k}_{i} \cdot k^{n} \\ k^{m} \cdot l_{B} & k^{m} \cdot \overline{k}_{j}& k^{m} \cdot k^{n} \\ \end{array} \right) \label{M12} \end{equation} Introducing matrix notation and working out the scalar products one gets \begin{equation} \Sigma = \left( \begin{array}{ccc} C + \frac{1}{2} A G^{-1} A^{T} & - \frac{1}{2} A G^{-1} D^{T} & - \frac{1}{2} A G^{-1} \\ - \frac{1}{2} D G^{-1} A^{T} & \frac{1}{2} D G^{-1} D^{T} & I + \frac{1}{2} D G^{-1} \\ - \frac{1}{2} G^{-1} A^{T} & I + \frac{1}{2} G^{-1} D^{T} & \frac{1}{2} G^{-1} \\ \end{array} \right) \label{Sigma} \label{M13} \end{equation} Here $G = (G_{ij})$ and $G^{-1} = (G^{ij})$ are the lattice metrics\footnote{Note that some authors choose the lattice metric of $\Lambda$ to be 2$G$. With this convention, some of the matrix elements differ by a factor 2.} of the compactification lattice $\Lambda$ and its dual $\Lambda^{*}$. The matrices $A=(A_{Ai})$ and $D = (D_{ij})$ were defined in (\ref{DefA}) and (\ref{DefD}), respectively. The action of a modular symmetry transformation on the euclidean bilinear form is given by \begin{equation} v^{T} \Sigma v \rightarrow v^{T} \Omega^{T,-1} \Sigma' \Omega^{-1} v \stackrel{!}{=} v^{T} \Sigma v \label{M14} \end{equation} Note that the moduli dependent matrix $\Sigma$ will in general also transform, if the deformation $\Omega$ is not a pure rotation of the Narain lattice. 
The fact that the deformations we consider are symmetry transformations and therefore must leave the euclidean bilinear form invariant fixes the transformation law of $\Sigma$ to be \begin{equation} \Sigma \rightarrow \Sigma' = \Omega^{T} \Sigma \Omega \label{M15} \end{equation} with $\Omega \in G_{T}, G_{O}$ respectively \cite{Spa}. Since the functional dependence of $\Sigma$ on the moduli is known via (\ref{Sigma}), this allows one, in principle, to calculate the transformation law of the moduli. But as the dependence is quite complicated and nonlinear, this is tedious to do in practice. Since $\Sigma$ is symmetric it is tempting to try to factorize it in the form \begin{equation} \Sigma = \phi^{T} \phi \label{Fac} \end{equation} hoping that the moduli dependence of $\phi$ might be simpler. In the case of bosonic strings the construction given by Giveon, Porrati and Rabinovici \cite{GPR} does precisely this. But in order to apply their bosonic result to heterotic strings one has to embed the heterotic string into the bosonic one and calculations are still complicated. We will, therefore, use a different approach where one directly works with the heterotic string. It is also motivated by the question of how the moduli $G_{ij}, B_{ij}, {\bf A}_{i}$ are related to standard (homogeneous and projective) coordinates on cosets, which are known to have a simple transformation under the group action. To explain why these two questions are closely related let us write down the euclidean bilinear form in matrix notation, but now with respect to an orthonormal frame, and then apply a deformation (not a modular transformation) ${\cal T} \in O(16+d,d;{\bf R})$. Then \begin{equation} {\bf P}^2_L + {\bf P}^2_R = u^{T} u \rightarrow u^{T} {\cal T}^{T} {\cal T} u \label{DeformONB} \end{equation} Note that we have expressed the deformed bilinear form in terms of the old coordinates $u$. The moduli dependence is now completely given by the symmetric matrix ${\cal T}^{T} {\cal T}$. 
The same deformation can be described with respect to lattice coordinates $v$: \begin{equation} {\bf P}_{L}^{2} + {\bf P}_{R}^{2} = v^{T} \Sigma_{0} v \rightarrow v^{T} \Sigma v \label{DeformLB} \end{equation} We consider $\Sigma_{0}$ as a fixed reference background and $\Sigma$ as a function of the moduli which describe the continuous deformations of $\Sigma_{0}$. If $N$ is the basis transformation connecting the $u$ and the $v$ frame by $u = Nv$ then, by combining (\ref{DeformONB}) and (\ref{DeformLB}), we get a decomposition of $\Sigma$ which has the desired form (\ref{Fac}) \begin{equation} \Sigma = N^{T} {\cal T}^{T} {\cal T} N \label{NT} \end{equation} As is well known from the case of the Lorentz group, elements of pseudo orthogonal groups can be decomposed into a rotation $R \in O(16+d) \otimes O(d)$ and a "boost" $B$. The latter can be used as a coset representative. In the case of (\ref{NT}) the rotational part always cancels out \begin{equation} {\cal T}= R B \Rightarrow {\cal T}^{T} {\cal T} = B^{T} R^{T} R B = B^{T} B \end{equation} which again shows that the spectrum only depends on the coset. We can therefore expect that it is possible to factorize $\Sigma$ in terms of a matrix $\phi$ which is a product of a coset representative $B \simeq R B$ and a basis transformation $N$. While this consideration has told us what $\phi$ should be, it didn't say how to construct it. There is however one obvious way to factorize $\Sigma$ as in (\ref{Fac}). Namely, we introduce an orthonormal basis \begin{equation} \widehat{\bf e}_{a} = \left( {\bf e}_{a}, {\bf 0}; {\bf 0} \right), \;\; \widehat{\bf e}_{\mu}^{(L)} = \left( {\bf 0}, {\bf e}_{\mu}; {\bf 0} \right), \;\; \widehat{\bf e}_{\mu}^{(R)} = \left( {\bf 0}, {\bf 0}; {\bf e}_{\mu} \right) \label{Orthbas} \end{equation} ($a = 1, \ldots, 16$, $\mu = 1,\dots,d$), and expand all the vectors appearing in (\ref{M12}) in terms of it. 
This yields: \begin{equation} \Sigma = \left( \begin{array}{ccc} l_{A} \cdot {\bf e}_{a} & l_{A} \cdot {\bf e}_{\mu}^{(L)} & l_{A} \cdot {\bf e}_{\nu}^{(R)} \\ \overline{k}_{i} \cdot {\bf e}_{a} & \overline{k}_{i}\cdot {\bf e}_{\mu}^{(L)} & \overline{k}_{i}\cdot {\bf e}_{\nu}^{(R)} \\ k^{m} \cdot {\bf e}_{a} & k^{m}\cdot {\bf e}_{\mu}^{(L)} & k^{m}\cdot {\bf e}_{\nu}^{(R)} \\ \end{array} \right) \left( \begin{array}{ccc} {\bf e}_{a}\cdot l_{B} & {\bf e}_{a}\cdot \overline{k}_{j} & {\bf e}_{a}\cdot k^{n} \\ {\bf e}_{\mu}^{(L)} \cdot l_{B} & {\bf e}_{\mu}^{(L)} \cdot \overline{k}_{j} & {\bf e}_{\mu}^{(L)} \cdot k^{n} \\ {\bf e}_{\nu}^{(R)} \cdot l_{B} & {\bf e}_{\nu}^{(R)} \cdot \overline{k}_{j} & {\bf e}_{\nu}^{(R)} \cdot k^{n} \\ \end{array} \right) = \phi^{T}\phi \end{equation} Working out the scalar products we get \begin{equation} \phi^{T} = \left( \begin{array}{ccc} {\cal E} & -\frac{1}{2} A E^{*} & - \frac{1}{2} A E^{*} \\ A^T {\cal E}^{*} & (2 G + \frac{1}{2} D) E^{*} & \frac{1}{2} D E^{*} \\ 0 & \frac{1}{2} E^{*} & \frac{1}{2} E^{*} \\ \end{array} \right) \label{cosetrep} \end{equation} The ambiguity in the factorization of $\Sigma$ is reflected by the appearance of the $n$--bein matrices ${\cal E} = ({\bf e}_{A} \cdot {\bf e}_{a})$, $E = ({\bf e}_{i} \cdot {\bf e}_{\mu})$ and their duals ${\cal E}^{*}={\cal E}^{T, -1}$, $E^{*} = E^{T,-1}$. Since we expect $\phi^{T}$ to be a coset representative we now compare (\ref{cosetrep}) to the well known standard form of such an object \cite{Gil}. Although (\ref{cosetrep}) is not in the standard form of a coset representative, its structure is similar enough to allow for the construction of analogues of standard homogeneous and projective coordinates. 
Let us first introduce \begin{equation} X = \left( \begin{array}{c} - \frac{1}{2} A E^{*} \\ \frac{1}{2} D E^{*} \\ \end{array} \right),\;\; \;\;\; Y = \left( \frac{1}{2} E^{*} \right) \end{equation} Note that all the moduli appear in $X$, so that the whole information is encoded in it. But $X$ by itself does not transform in a simple way under the group action. However, following the standard procedure described in \cite{Gil}, the $(16 + 2d) \times d$ matrix \begin{equation} \left( \begin{array}{c} X \\ Y \\ \end{array} \right) = \left( \begin{array}{c} X_1 \\ X_2 \\ Y \\ \end{array} \right)= \left( \begin{array}{c} - \frac{1}{2} A E^{*} \\ \frac{1}{2} D E^{*} \\ \frac{1}{2} E^{*} \\ \end{array} \right) \label{M22} \end{equation} which consists of the last $d$ columns of $\phi^{T}$, should be a homogeneous coordinate. This means that it must transform linearly under the left action of the group modulo rotations acting from the right. In our case the group action is given by \begin{equation} \phi^{T} \rightarrow \Omega^{T} \phi^{T} \end{equation} where \begin{equation} \Omega \in \{ M \in GL(16 + 2d; {\bf R}) | M^{T} H M = H \} \simeq O(16 + d, d; {\bf R}) \end{equation} Decomposing $\Omega^{T}$ into appropriate blocks \begin{equation} \Omega^{T} = \left( \begin{array}{cc} a & b \\ c & d \\ \end{array} \right) \label{M25} \end{equation} we see that our coordinate indeed transforms linearly as \begin{equation} \left( \begin{array}{c} X \\ Y \\ \end{array} \right) \rightarrow \left( \begin{array}{c} a X + b Y \\ c X + d Y \\ \end{array} \right) \end{equation} This linear transformation law is (as usual for homogeneous coordinates) achieved by treating $X$ and $Y$ as being independent of each other. It can be checked, however, that $X$ and $Y$ are really constrained to satisfy \begin{equation} X_1^T C^{-1} X_1 + X_2^T Y + Y^T X_2 = - I \end{equation} where $C^{-1}$ is the inverse of the Cartan matrix of $E_{8} \otimes E_{8}$. 
These are, in fact, the constraint equations for a $\frac{ O(16 + d, d) }{O(16 + d) \otimes O(d)}$ coset as is well known from both the mathematical \cite{Gil} and the supergravity literature \cite{PorZwi}. Such equations are crucial for the construction of supergravity actions and especially of the associated K\"ahler potential. As an example, we will in section 5 discuss the case of an $\frac{SO(4,2)}{SO(4) \otimes SO(2)}$ coset in great detail. From the homogeneous coordinate it is easy to construct a projective one, that is, a coordinate that transforms under the group action by fractional linear transformations \cite{Gil}. Defining \begin{equation} Z = X Y^{-1} = \left( \begin{array}{c} - A \\ D \end{array} \right) \label{Pro} \end{equation} we see that the group acts in fact on $Z$ by fractional linear transformations \begin{equation} Z \rightarrow (aZ + b) (cZ + d)^{-1} \label{Tra} \end{equation} Note also that the dependence on rotations from the right that the homogeneous coordinate still had, has completely cancelled out in (\ref{Pro}). This is manifest since the $d$--bein variable $E^{*}$ has disappeared. This is the second typical feature of a projective coset coordinate \cite{Gil}. One useful application of the projective coordinate is that the transformation properties of the moduli can be deduced very simply from it. To pass from the deformation group $O(16 + d,\, d)$ to the modular invariance group $O(16 + d,\,d; {\bf Z})$ one simply has to restrict to the subgroup of integer valued matrices. As an example let us calculate the transformation of the moduli under the duality transformation which generalizes the well known $R \rightarrow \frac{1}{2R}$ duality known from compactification on the circle. On the level of quantum numbers the generalized duality transformation exchanges winding and momentum numbers while leaving the charges invariant. 
The corresponding matrix in $O(16 + d,d;{\bf Z})$ is obviously \begin{equation} \Omega^{T} =\left( \begin{array}{c|c} a & b \\ \hline c & d\\ \end{array} \right) = \left( \begin{array}{cc|c} I & 0 & 0 \\ 0 & 0 & I \\ \hline 0 & I & 0 \\ \end{array} \right) \end{equation} By acting with $\Omega^{T}$ on $Z$ as in (\ref{Tra}) one immediately gets that \begin{equation} Z = \left( \begin{array}{c} -A \\ D \end{array} \right) \rightarrow \left( \begin{array}{c} - A D^{-1} \\ D^{-1} \\ \end{array} \right) \label{M31} \end{equation} a result which is much harder to derive by other methods. In the orbifold case the transformation law of the real moduli remains, of course, the same but one has to check that the matrix acting on the quantum numbers fulfills the normalizer condition (\ref{m13}). There is, however, another problem (for models with ${\cal N} = 1$ supersymmetry) since one would like to use a complex parametrization of the moduli. This will be discussed in the next section. As a second application of the projective coordinate (\ref{Pro}) let us rederive the global structure of the toroidal moduli space. It is well known that the action of the modular invariance group on the moduli is not faithful \cite{Spa}. Therefore the true modular symmetry group ${\cal G}_{T}$ is a factor group of $G_{T}$ by those transformations which act trivially on the toroidal moduli. It is known that \cite{GPR} \begin{equation} {\cal G}_{T} = PO(16 + d,d; {\bf Z}) = O(16 + d,d; {\bf Z}) / \{ \pm I \} \end{equation} Whereas it is obvious that $-I$ acts trivially on the moduli, it is less obvious that there are no further trivial transformations. However, by using the projective coordinate $Z$ it becomes clear that, if one requires \begin{equation} (aZ + b) (cZ + d)^{-1} = Z \end{equation} for all $Z$, then it follows that $b=0$, $c=0$ and that either $a=I$, $d=I$ or $a=-I$, $d=-I$. In the orbifold case the connection between $G_{O}$ and ${\cal G}_{O}$ is an open problem \cite{Spa}. 
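The duality example above lends itself to a quick numerical cross-check. The following sketch (our notation throughout; toy dimensions $n=4$, $d=2$ and a generic symmetric integer matrix standing in for the $E_{8} \otimes E_{8}$ Cartan matrix $C$) verifies both the group condition (\ref{M9}) and the fractional linear action (\ref{Tra}) on the projective coordinate:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 4, 2                                   # toy stand-ins for 16 and d
C = rng.integers(-2, 3, size=(n, n))
C = C + C.T + 6 * np.eye(n, dtype=int)        # generic symmetric stand-in for the Cartan matrix

# Lattice metric H in the block form of (M5).
H = np.block([[C, np.zeros((n, d)), np.zeros((n, d))],
              [np.zeros((d, n)), np.zeros((d, d)), np.eye(d)],
              [np.zeros((d, n)), np.eye(d), np.zeros((d, d))]])

# Omega^T of the generalized duality: exchanges winding and momentum numbers.
OmT = np.block([[np.eye(n), np.zeros((n, d)), np.zeros((n, d))],
                [np.zeros((d, n)), np.zeros((d, d)), np.eye(d)],
                [np.zeros((d, n)), np.eye(d), np.zeros((d, d))]])
Om = OmT.T
assert np.allclose(Om.T @ H @ Om, H)          # membership in the modular group

# Fractional linear action on the projective coordinate Z = (-A; D).
A = rng.normal(size=(n, d))
D = rng.normal(size=(d, d)) + 3 * np.eye(d)   # generic invertible moduli block
Z = np.vstack([-A, D])
a, b = OmT[:n + d, :n + d], OmT[:n + d, n + d:]
c, dd = OmT[n + d:, :n + d], OmT[n + d:, n + d:]
Zp = (a @ Z + b) @ np.linalg.inv(c @ Z + dd)
assert np.allclose(Zp, np.vstack([-A @ np.linalg.inv(D), np.linalg.inv(D)]))
```

The final assertion reproduces the transformed coordinate of (\ref{M31}); since the charge block of $\Omega$ is the identity, the result is insensitive to the choice of $C$.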
Clearly the group generated by the twist $\Theta$ acts trivially, by construction, but this is all one knows in general. Of course, one is ultimately interested in ${\cal G}_{O}$ because it is this group which acts on the moduli and, therefore, decides both about the global geometry of moduli space and the form of effective supergravity actions. At least in the cases where $G_{O}$ factorizes into groups acting on the cosets in the decomposition (\ref{ModuliSpaceO}) it should be possible to make a general statement about the connection between $G_{O}$ and ${\cal G}_{O}$, but we will not try to do so in this paper. We will, in the following, rather focus on (\ref{ModuliSpaceO}) and deal with the problem of its complexification and the derivation of the associated K\"ahler potentials. \setcounter{equation}{0} \section{K\"{a}hler potentials for $Z_N$ orbifold compactifications with continuous Wilson lines} \hspace*{.3in} As we have seen above, the untwisted moduli space of $Z_N$ orbifold compactifications with continuous Wilson lines preserving ${\cal N}=1$ space-time supersymmetry is locally given by a direct product of $\frac{SU(n,m)}{SU(n) \otimes SU(m) \otimes U(1)}$ and $\frac{SO(r,2)}{SO(r) \otimes SO(2)}$ cosets. Each of these cosets is a K\"{a}hlerian manifold, that is, its metric $g_{\phi \bar{\phi}}$ is locally expressible as $g_{\phi \bar{\phi}} = \partial_{\phi} \partial_{\bar{\phi}} K({\phi},{\bar{\phi}})$ in some complex coordinate system $( {\phi}, {\bar{\phi}})$. $K$ denotes its K\"{a}hler potential. The K\"{a}hler structure of the untwisted moduli space is then determined by the full K\"{a}hler potential given by the sum of the K\"{a}hler potentials of the individual cosets. The K\"{a}hler potential, on the other hand, is also one of the three fundamental functions which describe the tree-level couplings of generic matter multiplets to $4D,{\cal N}=1$ supergravity, as is well known \cite{CreJul,CreFer}. 
Thus, in order to determine the tree-level low-energy Lagrangian describing the coupling of the $Z_N$ orbifold moduli fields to supergravity, knowledge of the associated K\"{a}hler potential is crucial. This section is devoted to explicitly constructing the K\"{a}hler potentials for some of the moduli spaces discussed in the previous section. Namely, we will focus on the $Z_N$ orbifold compactifications for which the internal 6-torus $T_6$ can be decomposed into a direct sum $T_4 \oplus T_2$. All such cases are given in table (\ref{T6}) \cite{ErlKle}. Then, by using well known techniques \cite{PorZwi,FerPor,FerKouLuZw}, we will derive the K\"{a}hler potentials for the moduli spaces associated with the underlying 2-torus $T_2$. We will first focus on the $\frac{SO(r,2)}{SO(r) \otimes SO(2)}$ cosets. As stated earlier, they arise when the twist operating on $T_2$ has two eigenvalues of $-1$. It is well known \cite{GPR} that, in the case when no continuous Wilson lines are turned on, the resulting $\frac{SO(2,2)}{SO(2) \otimes SO(2)}$ coset factorises into $\frac{SO(2,2)}{SO(2) \otimes SO(2)} = \left( {\frac{ SU(1,1)}{U(1)}} \right)_T \otimes \left( {\frac{ SU(1,1)}{U(1)}}\right)_U$, each of them being coordinatised by one complex modulus, $T$ and $U$, respectively. The associated K\"{a}hler potential is then simply given by the sum of two individual K\"{a}hler potentials. When turning on Wilson lines, however, the resulting coset $\frac{SO(r,2)}{SO(r) \otimes SO(2)}$ will in general not factorise anymore, and the resulting K\"{a}hler potential will be much more complicated. Nevertheless, one still expects to find one modified $T$ and one modified $U$ modulus among the complex coordinates of the $\frac{SO(r,2)}{SO(r) \otimes SO(2)}$ coset. This is in fact the case, as we shall see below. We will, for concreteness, discuss the $\frac{SO(4,2)}{SO(4) \otimes SO(2)}$ and the $\frac{SO(3,2)}{SO(3) \otimes SO(2)}$ cosets in great detail. 
They are the simplest non-trivial ones occurring when turning on the two Wilson lines ${\bf A}_i$ associated with the two directions of the underlying 2-torus $T_2$. Any other $\frac{SO(r,2)}{SO(r) \otimes SO(2)}$ coset, however, can in principle be analysed along very similar lines, although arriving at explicit results for the K\"{a}hler potential might be quite tedious. Finally, at the end of this section, we will discuss the K\"{a}hler potential for the $\frac{SU(n,1)}{SU(n) \otimes U(1)}$ cosets with Wilson lines turned on. They occur whenever the twist acting on the underlying 2-torus $T_2$ does not have eigenvalues $-1$. We will, for concreteness, discuss the $\frac{SU(2,1)}{SU(2) \otimes U(1)}$ coset in great detail. Its discussion, however, can be generalised to any other $\frac{SU(n,1)}{SU(n) \otimes U(1)}$ coset in a straightforward manner. \begin{table} \[ \begin{array}{|c|c|c|} \hline \mbox{Case} & \mbox{Twist} & \mbox{Lattice} \\ \hline \hline 2 \;\;\;\;\; Z_4 & (Z_2^{(1)},Z_2^{(1)},Z_4^{(2)},Z_4^{(2)}) & A_1 \times A_1 \times B_2 \times B_2\\ \hline 5 \;\;\;\;\; Z_6 & (Z_3^{(2)},Z_6^{(2)},Z_6^{(2)})& A_2 \times G_2 \times G_2 \\ \hline 7 \;\;\;\;\; Z_6' & (Z_2^{(1)},Z_2^{(1)},Z_3^{(2)},Z_6^{(2)}) & A_1 \times A_1 \times A_2 \times G_2 \\ \hline 12 \;\;\;\;\; Z_8 & (Z_4^{(2)},Z_8^{(4)})& B_2 \times B_4\\ \hline 14 \;\;\;\;\; Z_8' & (Z_2^{(1)},Z_2^{(1)},Z_8^{(4)}) & A_1 \times A_1 \times B_4 \\ \hline 16 \;\;\;\;\; Z_{12} & (Z_3^{(2)},Z_{12}^{(4)}) & A_2 \times F_4 \\ \hline 18 \;\;\;\;\; Z_{12}' & (Z_2^{(1)},Z_2^{(1)},Z_{12}^{(4)}) & A_1 \times A_1 \times F_4 \\ \hline \end{array} \] \caption{$Z_N$ orbifold compactifications with ${\cal N} =1$ space-time supersymmetry and $T_6=T_2 \oplus T_4$, as given in the classification of \cite{ErlKle}.} \label{T6} \end{table} Let us begin by explicitly constructing a suitable set of complex coordinates for a $\frac{SO(r,2)}{SO(r) \otimes SO(2)}$ coset along the lines of \cite{PorZwi,FerPor,FerKouLuZw}. 
A set of real coordinates $\Upsilon_{\mu}\,^I$ for a general coset space $\frac{SO(r,2)}{SO(r) \otimes SO(2)}$ was found in (\ref{M22}) and is given by \begin{eqnarray} (\Upsilon_{\mu}\,^I) = \left( \begin{array}{c} {-\cal{L}}_{\mu a} \\ - (L + \tilde{L})_{\mu}\,^i \\ \frac{1}{2} (L - \tilde{L})_{\mu}\,^i \\ \end{array} \right) = \left( \begin{array}{c} - \frac{1}{2} {\cal{A}}_{ak} E_{\mu}\,^k \\ \frac{1}{2} D_{ij} E_{\mu}\,^j \\ \frac{1}{2} E_{\mu}\,^i \end{array} \right) \label{K1} \end{eqnarray} The real moduli matrix $D_{ij}$, associated with the underlying 2-torus $T_2$, is given in (\ref{DefD}). $E^{*}=({\bf e}_{\mu} \cdot {\bf e}^k)= (E_{\mu}\,^k)$ denotes the inverse zweibein. Note, too, that we have, for later convenience, introduced component Wilson lines ${\cal A}_{ai}$ as \begin{equation} {\cal A}_{ai}= {\bf e}_a \cdot {\bf A}_i , \;\;\;\; a=1,...,16 \label{K2} \end{equation} They are defined relative to the orthonormal basis ${\bf e}_a$ introduced in (\ref{Orthbas}). The ${\cal A}_{ai}$ are not to be confused with the $A_{Ai}$ introduced in (\ref{DefA}). The latter ones are defined relative to the lattice basis ${\bf e}_A$. Note that (\ref{AiAj}) can be equivalently expressed as \begin{equation} {\bf A}_i \cdot {\bf A}_j = {\cal A}^a\,_i {\cal A}_{aj} \end{equation} Also note that $r$ can take any value between $2 \leq r \leq 18$. 
It can be readily checked that the $\Upsilon_{\mu}\,^I$ satisfy the relation \begin{equation} {\theta}_{IJ} \Upsilon_{\mu}\,^I \Upsilon_{\nu}\,^J = - {\delta}_{\mu \nu} \label{K3} \end{equation} where the metric tensor ${\theta}_{IJ}$ for pulling down indices is given by \begin{eqnarray} ({\theta}_{IJ})= \left( \begin{array}{ccc} {\delta}^{ab} & 0 & 0 \\ 0 & 0 &{\delta}^j \,_i \\ 0 & \delta_i \,^j & 0 \end{array} \right) \label{K4} \end{eqnarray} Disentangling the coordinates $L$ and $\tilde{L}$ \cite{PorZwi} yields a new set of real coordinates $\Upsilon_{\mu}\,^I$ given by \begin{eqnarray} (\Upsilon_{\mu}\,^I) = \left( \begin{array}{c} {\cal{L}}_{\mu a} \\ {\tilde{L}}_{\mu}\,^i \\ L_{\mu}\,^i \\ \end{array} \right) = \left( \begin{array}{c} \frac{1}{2} {\cal{A}}_{ak} E_{\mu}\,^k \\ \frac{1}{2}(-E_{\mu}\,^i + E_{\mu i} + E_{\mu}\,^j B_{ji} + \frac{1}{4} E_{\mu}\,^j ({ {\cal A}}^a\,_j { {\cal A}}_{ai})) \\ \frac{1}{2}(E_{\mu}\,^i + E_{\mu i} + E_{\mu}\,^j B_{ji} + \frac{1}{4} E_{\mu}\,^j ({ {\cal A}}^a\,_j { {\cal A}}_{ai})) \end{array} \right) \label{K5} \end{eqnarray} and satisfying the $\frac{SO(r,2)}{SO(r) \otimes SO(2)}$ coset relation \begin{equation} {\theta}_{IJ} \Upsilon_{\mu}\,^I \Upsilon_{\nu}\,^J = - {\delta}_{\mu \nu} \label{K6} \end{equation} where the metric tensor $\theta_{IJ}$ is now diagonal and given by \begin{eqnarray} ({\theta}_{IJ})= \left( \begin{array}{ccc} {\delta}^{ab} & 0 & 0 \\ 0 & {\delta}_{ij} & 0 \\ 0 & 0 & - \delta_{ij} \end{array} \right) \label{K7} \end{eqnarray} Inserting the metric tensor (\ref{K7}) into (\ref{K6}) yields \begin{eqnarray} {\delta}_{ij} L_{\mu}\,^i L_{\nu}\,^j = {\delta}_{\mu \nu} + {\delta}_{ij} {\tilde{L}}_{\mu}\,^i {\tilde{L}}_{\nu}\,^j + {\delta}^{ab} {\cal{L}}_{\mu a} {\cal{L}}_{\nu b} \label{K8} \end{eqnarray} Next, let us introduce a set of complex coordinates as follows \cite{Gil}. The real matrix ${\Upsilon}_{\mu}\,^I$ has two columns and r+2 rows. 
By combining the two real entries in each row into a single complex variable \begin{eqnarray} \phi_I = {\Upsilon}_1\,^I + i {\Upsilon}_2\,^I \label{K9} \end{eqnarray} one arrives at a set of $r+2$ complex coordinates. In terms of these new complex variables the orthogonality relation (\ref{K6}) now reads \begin{eqnarray} - {\phi}^{\dagger} I_{r,2} \phi &=& 2 \nonumber\\ {\phi}^T I_{r,2} {\phi} &=& 0 \label{K10} \end{eqnarray} where $\phi$ is a complex column vector with complex entries ${\phi}_I$ and where $I_{r,2}$ denotes a diagonal matrix with entries $I_{r,2} = {\rm diag}(1,\ldots,1,-1,-1)$. This set of $r+2$ complex coordinates is not yet a suitable one for constructing K\"{a}hler potentials, since they are not unconstrained but rather satisfy the orthogonality properties (\ref{K10}). The next step then consists in identifying a particular set of $r$ unconstrained complex coordinates which the K\"{a}hler potential is going to depend on. It is known \cite{FerKouLuZw} how to find such a set of analytic coordinates for a general $\frac{SO(r,2)}{SO(r) \otimes SO(2)}$ coset. For concreteness and in order to keep the formulae as simple as possible, we will in the following focus on the $\frac{SO(4,2)}{SO(4) \otimes SO(2)}$ coset and explicitly construct an analytic set of coordinates for it. 
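Before turning to the explicit example, note that the real coset relation (\ref{K8}) can be verified numerically from the disentangled parametrisation (\ref{K5}). The following sketch (variable names are ours: \texttt{E}, \texttt{F}, \texttt{B}, \texttt{calA} stand for the zweibein $E_{\mu i}$, the inverse zweibein $E_{\mu}{}^{i}$, the antisymmetric $B$-field and the component Wilson lines ${\cal A}_{ai}$) uses generic random moduli:

```python
import numpy as np

rng = np.random.default_rng(2)
nA, d = 16, 2
E = rng.normal(size=(d, d)) + 2 * np.eye(d)   # zweibein E_{mu i}, so that G = E^T E
F = np.linalg.inv(E).T                        # inverse zweibein E_mu^i
B12 = rng.normal()
B = np.array([[0.0, B12], [-B12, 0.0]])       # antisymmetric B-field
calA = rng.normal(size=(nA, d))               # component Wilson lines A_{a i}
W = calA.T @ calA                             # W_{ij} = A^a_i A_{a j}

# Real coordinates of (K5):
X = E + F @ B + 0.25 * F @ W
L = 0.5 * (F + X)                             # L_mu^i
Ltil = 0.5 * (-F + X)                         # \tilde L_mu^i
calL = 0.5 * (F @ calA.T)                     # {\cal L}_{mu a}

# Coset relation (K8): L L^T = I + \tilde L \tilde L^T + {\cal L} {\cal L}^T
assert np.allclose(L @ L.T, np.eye(d) + Ltil @ Ltil.T + calL @ calL.T)
```

The assertion holds for arbitrary $G_{ij}$, $B_{ij}$ and ${\cal A}_{ai}$, illustrating that (\ref{K5}) indeed parametrises the coset.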
To proceed, we first introduce an explicit parametrisation of the metric $G_{ij}$ of the underlying $T_2$ torus as follows \begin{eqnarray} G_{ij} = \left( \begin{array}{cc} R_1^2 & R_1R_2 \cos{\theta} \\ R_1R_2 \cos{\theta} & R_2^2 \end{array} \right) \label{K11} \end{eqnarray} The associated zweibein $E_{\mu i}$ satisfying $G_{ij} = {\delta}^{\mu \nu} E_{\mu i} E_{\nu j}$ and its inverse are then given by \begin{eqnarray} E_{\mu i} = \left( \begin{array}{cc} R_1 & R_2 \cos{\theta} \\ 0 & R_2 \sin{\theta} \end{array} \right), \,\,\, E_{\mu}\,^ i = \left( \begin{array}{cc} \frac{1}{R_1} & \frac{-1}{R_1} \frac{\cos{\theta}}{ \sin{\theta}} \\ 0& \frac{1}{R_2} \frac{1}{\sin{\theta}} \end{array} \right) \label{K12} \end{eqnarray} Then, by inserting parametrisation (\ref{K11}) and (\ref{K12}) into (\ref{K9}), the six complex coordinates $\phi_I$ can be readily expressed in terms of the real moduli fields $G_{ij}, B_{ij}$ and ${\cal{A}}_{ai}$ as \begin{eqnarray} \phi_1 &=& \frac{1}{\sqrt{Y}} \left( { {\cal A}}_{11} \frac{\sqrt{G}}{G_{11}} +i(-{ {\cal A}}_{11} \frac{G_{12}}{G_{11}} + {\cal{A}}_{12}) \right) \nonumber\\ \phi_2 &=& \frac{1}{\sqrt{Y}} \left( { {\cal A}}_{21} \frac{\sqrt{G}}{G_{11}} +i(-{ {\cal A}}_{21} \frac{G_{12}}{G_{11}} + {\cal{A}}_{22}) \right) \nonumber \\ \phi_3 &=& \frac{1}{\sqrt{Y}} \left( \sqrt{G} (1 - \frac{1}{G_{11}} + \frac{1}{4} \frac{ {\cal{A}}^a\,_1 {\cal{A}}_{a1}}{G_{11}}) +i(-B_{12} + \frac{G_{12}}{G_{11}} (1- \frac{1}{4} {\cal{A}}^a\,_1 {\cal{A}}_{a1}) + \frac{1}{4} {\cal{A}}^a\,_1 {\cal{A}}_{a2}) \right) \nonumber\\ \phi_4 &=& \frac{1}{\sqrt{Y}} \left( \frac{\sqrt{G}}{G_{11}}(G_{12} + B_{12} + \frac{1}{4} { {\cal A}}^a\,_1 { {\cal A}}_{a2}) \right.\nonumber\\ && \left. 
+i(-1+ \frac{G}{G_{11}} - B_{12} \frac{G_{12}}{G_{11}} -\frac{1}{4} { {\cal A}}^a\,_1 { {\cal A}}_{a2} \frac{G_{12}}{G_{11}}+ \frac{1}{4} { {\cal A}}^a\,_2 { {\cal A}}_{a2}) \right) \nonumber \\ \phi_5 &=& \frac{1}{\sqrt{Y}} \left( \sqrt{G} (1 + \frac{1}{G_{11}} + \frac{1}{4} \frac{ {\cal{A}}^a\,_1 {\cal{A}}_{a1}}{G_{11}}) +i(-B_{12} - \frac{G_{12}}{G_{11}} (1+ \frac{1}{4} {\cal{A}}^a\,_1 {\cal{A}}_{a1})+ \frac{1}{4} {\cal{A}}^a\,_1 {\cal{A}}_{a2}) \right) \nonumber\\ \phi_6 &=& \frac{1}{\sqrt{Y}} \left( \frac{\sqrt{G}}{G_{11}}(G_{12} + B_{12} + \frac{1}{4} { {\cal A}}^a\,_1 { {\cal A}}_{a2}) \right. \nonumber\\ && \left. +i(1+ \frac{G}{G_{11}} - B_{12} \frac{G_{12}}{G_{11}} -\frac{1}{4} { {\cal A}}^a\,_1 { {\cal A}}_{a2} \frac{G_{12}}{G_{11}} + \frac{1}{4} { {\cal A}}^a\,_2 { {\cal A}}_{a2}) \right) \label{K13} \end{eqnarray} where $G=\det{G_{ij}}$ and \begin{equation} Y= 4 \frac{G}{G_{11}} \label{K14} \end{equation} Note that there is an overall factor $\frac{1}{\sqrt{Y}}$ appearing in front of each of the six $\phi_I$. It is therefore convenient \cite{FerKouLuZw} to introduce rescaled coordinates \begin{eqnarray} y_I = \sqrt{Y} \phi_I \label{K15} \end{eqnarray} satisfying the rescaled constraints \begin{eqnarray} - y^{\dagger} I_{4,2} \, y &=& 2 Y \nonumber\\ y^T I_{4,2} \, y &=& 0 \label{K16} \end{eqnarray} The importance of the overall factor lies in the fact that it determines the K\"{a}hler potential $K$ as \begin{eqnarray} K=-\ln Y \label{K17} \end{eqnarray} provided one chooses a solution to the constraint equations (\ref{K16}) possessing an $SO(1,3)$ symmetry (more generally an $SO(1,r-1)$ symmetry in the case of an $\frac{SO(r,2)}{SO(r) \otimes SO(2)}$ coset) \cite{FerKouLuZw}. That is, by choosing four of the $y_I$ as unconstrained coordinates one then seeks a solution of the constraint equations (\ref{K16}) for the two remaining $y$-coordinates as well as for $Y$ which exhibits an $SO(1,3)$ symmetry. 
We will, for concreteness, choose $y_1,y_2,y_3$ and $y_5$ as unconstrained variables. Then, it can be checked that $Y$, given in (\ref{K14}), can be expressed in terms of $y_1,y_2,y_3$ and $y_5$ as \begin{eqnarray} Y &=& \frac{1}{4} \left((y_5 + {\bar{y_5}})^2 - (y_1 + {\bar{y_1}})^2 - (y_2 + {\bar{y_2}})^2 - (y_3 + {\bar{y_3}})^2 \right) \label{K18} \end{eqnarray} Note that (\ref{K18}) exhibits an $SO(1,3)$ symmetry. Inserting (\ref{K18}) into (\ref{K16}) yields \begin{eqnarray} y_4^2 - y_6^2 &=& y_5^2 - y_1^2 - y_2^2 - y_3^2 \nonumber\\ {\mid y_4 \mid}^2 - {\mid y_6 \mid}^2 &=& - \frac{1}{2} ( y_5^2 - y_1^2 - y_2^2 - y_3^2 + {\bar y}_5^2 - {\bar y}_1^2 - {\bar y}_2^2 - {\bar y}_3^2 ) \label{K19} \end{eqnarray} From (\ref{K13}), on the other hand, it follows that \begin{equation} y_6 - y_4 = 2i \label{K20} \end{equation} Then, it can be checked that the following \cite{FerKouLuZw} solves (\ref{K19}) subject to (\ref{K20}) \begin{eqnarray} y_4 &=& -i \left( 1 - \frac{1}{4} ( y_5^2 - y_1^2 - y_2^2 - y_3^2) \right) \nonumber\\ y_6 &=& i \left(1 + \frac{1}{4}( y_5^2 - y_1^2 - y_2^2 - y_3^2) \right) \label{K21} \end{eqnarray} Note that the solution (\ref{K21}) also exhibits an $SO(1,3)$ symmetry. Thus, (\ref{K18}) and (\ref{K21}) are a solution to the orthogonality relations (\ref{K16}) with manifest $SO(1,3)$ symmetry. 
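These statements can also be confirmed numerically. The sketch below (illustrative only) draws random complex values for the unconstrained $y_1,y_2,y_3,y_5$, builds $Y$, $y_4$ and $y_6$ from (\ref{K18}) and (\ref{K21}), and checks (\ref{K19}), (\ref{K20}) and the normalisation $-y^{\dagger} I_{4,2}\, y = 2Y$, taking $I_{4,2} = {\rm diag}(1,1,1,1,-1,-1)$, as fixed by the form of (\ref{K19}):

```python
import numpy as np

rng = np.random.default_rng(0)

# random complex values for the unconstrained coordinates y1, y2, y3, y5
y1, y2, y3, y5 = rng.normal(size=4) + 1j * rng.normal(size=4)

# Y as in (K18)
Y = ((y5 + y5.conjugate())**2 - (y1 + y1.conjugate())**2
     - (y2 + y2.conjugate())**2 - (y3 + y3.conjugate())**2) / 4

# y4, y6 as in (K21); s is the holomorphic combination y5^2-y1^2-y2^2-y3^2
s = y5**2 - y1**2 - y2**2 - y3**2
y4 = -1j * (1 - s / 4)
y6 = 1j * (1 + s / 4)

# the condition (K20)
assert abs((y6 - y4) - 2j) < 1e-12

# holomorphic constraint, first line of (K19)
assert abs((y4**2 - y6**2) - s) < 1e-12

# real constraint, second line of (K19)
assert abs((abs(y4)**2 - abs(y6)**2) + (s + s.conjugate()).real / 2) < 1e-12

# normalisation -y^dagger I_{4,2} y = 2Y with I_{4,2} = diag(1,1,1,1,-1,-1)
norm = -(abs(y1)**2 + abs(y2)**2 + abs(y3)**2 + abs(y4)**2
         - abs(y5)**2 - abs(y6)**2)
assert abs(norm - 2 * Y) < 1e-12
```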
The analytic structure of the K\"{a}hler potential $K=-lnY$ can be made manifest \cite{FerKouLuZw} by introducing four complex fields $M_{ij}$ as \begin{eqnarray} (M_{ij})= \left( \begin{array}{cc} y_5 + y_3& y_1 - i y_2 \\ y_1 + i y_2 & y_5 - y_3 \end{array} \right) \label{K22} \end{eqnarray} Then, $Y$ is given by \begin{eqnarray} Y=\frac{1}{4} \det{(M_{ij}+{\bar{M}}_{ij})} \label{K23} \end{eqnarray} Finally, introducing the linear combinations \begin{eqnarray} T&=&y_5+y_3 \nonumber\\ 2U&=&y_5-y_3 \nonumber\\ B&=&y_1 - iy_2 \nonumber\\ C&=&y_1 + iy_2 \label{K24} \end{eqnarray} yields \begin{eqnarray} (M_{ij})= \left( \begin{array}{cc} T & B \\ C & 2U \end{array} \right) \label{K25} \end{eqnarray} It follows from (\ref{K13}) that the complex $T,U,B$ and $C$ moduli fields are expressed in terms of the real ones as \begin{eqnarray} T&=& 2\left(\sqrt{G} (1+ \frac{1}{4} \frac{ { {\cal A}}^a_1 { {\cal A}}_{a1}}{G_{11}}) -i(B_{12}+\frac{1}{4} { {\cal A}}^a_1 { {\cal A}}_{a1} \frac{G_{12}}{G_{11}} - \frac{1}{4} {\cal{A}}^a_1 {\cal{A}}_{a2} ) \right) \nonumber\\ U&=& \frac{1}{G_{11}} (\sqrt{G} - i G_{12} )\nonumber\\ B&=& \frac{1}{G_{11}} \left( {\cal{A}}_{11} \sqrt{G} - { {\cal A}}_{21} G_{12} + { {\cal A}}_{22} G_{11}+ i (-{ {\cal A}}_{11} G_{12} + { {\cal A}}_{12} G_{11} - { {\cal A}}_{21} \sqrt{G})\right) \nonumber\\ C&=&\frac{1}{G_{11}} \left( {\cal{A}}_{11} \sqrt{G} + { {\cal A}}_{21} G_{12} - { {\cal A}}_{22} G_{11} + i (-{ {\cal A}}_{11} G_{12} + { {\cal A}}_{12} G_{11} +{ {\cal A}}_{21} \sqrt{G}) \right) \label{K26} \end{eqnarray} The $T$ and the $U$ modulus are related to the geometrical data of the two-dimensional torus $T_2$. The $T$ modulus is associated with deformations of the K\"{a}hler class. It reduces to the well-known expression \cite{Ver} when turning off the real Wilson lines $ {\cal A}_{ai}$. The $U$ modulus is associated with deformations of the complex structure. 
Note that it does not receive admixtures of the real Wilson lines $ {\cal A}_{ai}$, that is, it remains given by the well-known expression \cite{Ver} even when Wilson lines are turned on. Finally, the complex $B$ and $C$ moduli are linear expressions in the real Wilson lines $ {\cal A}_{ai}$. They vanish when turning off the real Wilson lines $ {\cal A}_{ai}$. Thus, they qualify to be called complex Wilson lines. The K\"{a}hler potential reads \begin{eqnarray} K &=& -\ln Y =- \ln\left(\frac{1}{4} \det{(M_{ij}+{\bar{M}}_{ij})}\right) \nonumber\\ &=&- \ln \left( (T + {\bar{T}})(U + {\bar{U}})- \frac{1}{2} (B + {\bar{C}})(C+\bar{B}) \right) + const \label{K27} \end{eqnarray} A few remarks are in order. First note that in the absence of Wilson lines ($B=C=0$) the K\"{a}hler potential splits into the sum $K=K(T, {\bar{T}}) + K(U, \bar{U})$, which is the well-known K\"{a}hler potential for the coset $\frac{SO(2,2)}{SO(2) \otimes SO(2)} = \frac{ SU(1,1)}{U(1)} \otimes \frac{ SU(1,1)}{U(1)}$. On the other hand, turning on Wilson lines leads to the K\"{a}hler potential (\ref{K27}), which no longer splits into two pieces. This is because the $\frac{SO(4,2)}{SO(4) \otimes SO(2)}$ coset no longer factorises into two submanifolds. Also note that the complex Wilson lines $B$ and $C$ do not just give rise to ${\bar{B}}B$ and ${\bar{C}}C$ terms in the K\"{a}hler potential but also to holomorphic $BC$ and antiholomorphic ${\bar{B}}{\bar{C}}$ pieces. This will have important consequences for the target space duality symmetries of the K\"{a}hler potential, as discussed in the next section. Finally, let us point out that, even in the presence of Wilson lines, the K\"{a}hler potential, expressed in terms of the real moduli, is still given by $K=-\ln \frac{4G}{G_{11}}$ and is thus still proportional to the volume of the internal manifold. We proceed with a discussion of the $\frac{SO(3,2)}{SO(3) \otimes SO(2)}$ coset. 
Inspection of (\ref{K5}) shows that a $\frac{SO(3,2)}{SO(3) \otimes SO(2)}$ coset occurs when only retaining the first components ${ {\cal A}}_{11}$ and ${ {\cal A}}_{12}$ and setting all the other components of the two Wilson lines ${\bf A}_1$ and ${\bf A}_2$ to zero. Then, it follows from (\ref{K13}) that $\phi_2 = 0$ and, hence, $y_2 = 0$. The corresponding solutions (\ref{K18}) and (\ref{K21}) then exhibit an $SO(1,2)$ symmetry and the K\"{a}hler potential is thus given again by $K=-\ln Y$. The complex moduli can be read off from (\ref{K24}), where now $B=C=y_1$. The complex $T, U$ and $B$ moduli are expressed in terms of the real ones as in (\ref{K26}) with ${ {\cal A}}_{21} = { {\cal A}}_{22} = 0$. It follows from (\ref{K27}) that the K\"{a}hler potential for the $\frac{SO(3,2)}{SO(3) \otimes SO(2)}$ coset reads \begin{eqnarray} K &=& - \ln \left( (T + {\bar{T}})(U + {\bar{U}})- \frac{1}{2} (B + {\bar{B}})^2 \right) + const \label{KK28} \end{eqnarray} This concludes the discussion of the $\frac{SO(4,2)}{SO(4) \otimes SO(2)}$ and $\frac{SO(3,2)}{SO(3) \otimes SO(2)}$ cosets. Let us point out again that any other $\frac{SO(r,2)}{SO(r) \otimes SO(2)}$ coset can be analysed along very similar lines. We now discuss the situation when the twist ${\theta}=({\theta}_i\,^j)$ acting on the internal 2-torus $T_2$ has eigenvalues different from $-1$. Introducing ${\theta}^{T,-1}=({\theta}^j\,_k)$ and analysing the consistency conditions \cite{Jun,Moh2} \begin{eqnarray} G_{ij}\, {\theta}^j\,_k &=& {\theta}_i\,^j \, G_{jk} \nonumber\\ ({\bf A}_i \cdot {\bf A}_j) \, {\theta}^j\,_k &=& {\theta}_i\,^j \, ({\bf A}_j \cdot {\bf A}_k) \label{K28} \end{eqnarray} for the internal metric $G_{ij}$ and for the matrix ${\bf A}_i \cdot {\bf A}_j$ shows that both $G_{ij}$ and ${\bf A}_i \cdot {\bf A}_j$ have only one independent entry each. 
Denoting these independent entries by $G_{11}$ and ${\bf A}_1 \cdot {\bf A}_1$, respectively, yields \begin{eqnarray} (G_{ij})= \frac{G_{11}}{2} \left( \begin{array}{cc} 2 & \alpha \\ \alpha & \beta \end{array} \right),\;\;\;\; ({\bf A}_i \cdot {\bf A}_j)= \frac{{\bf A}_1 \cdot {\bf A}_1}{2} \left( \begin{array}{cc} 2 & \alpha \\ \alpha & \beta \end{array} \right) \label{K29} \end{eqnarray} where $\alpha$ and $\beta$ are some twist-dependent constants. It then follows from (\ref{K26}) that the $U$-field takes a constant value given by \begin{eqnarray} U = \frac{1}{2} \left( \sqrt{2 \beta - \alpha^2} - i \alpha \right) \label{K30} \end{eqnarray} The $T$-field survives as a modulus and is given by \begin{eqnarray} T=2 \left( \frac{\sqrt{2 \beta - \alpha^2}}{2} ( G_{11} + \frac{1}{4} { {\cal A}}^a\,_1 { {\cal A}}_{a1} ) - i B_{12} \right) \label{K31} \end{eqnarray} Note that only the first real Wilson line enters in this expression. In fact, the second real Wilson line is not independent anymore, but rather fixed in terms of the first real Wilson line \cite{IbaNilQue}. This follows directly from the consistency condition \cite{Moh2} \begin{eqnarray} \theta' {\bf A}_i = A^A\,_i \, {\theta'}_A\,^B \, {\bf e}_B = {\theta}_i\,^j \, {\bf A}_j \label{K32} \end{eqnarray} for the continuous Wilson lines. Then, (\ref{K32}) yields \begin{eqnarray} {\bf A}_1 \rightarrow {\bf A}_2 = {\theta}_1\,^j \, {\bf A}_j = {\theta'} \, {\bf A}_1 = A^A\,_1 \, {\theta'}_A\,^B \, {\bf e}_B \label{K33} \end{eqnarray} which, indeed, expresses ${\bf A}_2$ in terms of ${\bf A}_1$. 
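The constant value of the $U$ modulus can be obtained directly by evaluating the $U$-field of (\ref{K26}) on the constrained metric (\ref{K29}); a short symbolic sketch (illustrative only, assuming $2\beta > \alpha^2$ so that the metric is positive definite):

```python
import sympy as sp

G11, alpha, beta = sp.symbols('G11 alpha beta', positive=True)

# constrained metric (K29); positivity requires 2*beta > alpha**2
G = (G11 / 2) * sp.Matrix([[2, alpha], [alpha, beta]])

detG = sp.factor(G.det())      # = G11**2 * (2*beta - alpha**2) / 4
sqrtG = sp.sqrt(detG)

# U modulus of (K26): U = (sqrt(G) - i*G12) / G11
U = sp.simplify((sqrtG - sp.I * G[0, 1]) / G11)

# U is a twist-dependent constant, (sqrt(2*beta - alpha**2) - i*alpha)/2
assert sp.simplify(U - (sp.sqrt(2*beta - alpha**2) - sp.I*alpha) / 2) == 0
```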
It is useful to introduce a complex Wilson line as \begin{eqnarray} {\cal A} = { {\cal A}}_{11} + i { {\cal A}}_{21} \label{K34} \end{eqnarray} Then, it follows from (\ref{K26}) that the combination $B+ \bar{C}$ can be written as \begin{equation} B + \bar{C} = \sqrt{ 2 \beta - \alpha^2} \bar{ {\cal A}} \label{K35} \end{equation} and the K\"{a}hler potential (\ref{K27}) as \begin{eqnarray} K = - \ln \left( T + \bar{T} - \frac{\sqrt{2 \beta - \alpha^2}}{2} \bar{ {\cal A}} {\cal A} \right) + const \label{K36} \end{eqnarray} This is the K\"{a}hler potential for the $\frac{SU(2,1)}{SU(2) \otimes U(1)}$ coset. More generally, an $\frac{SU(n,1)}{SU(n) \otimes U(1)}$ coset will be parametrised by one complex $T$ modulus and $n-1$ complex Wilson lines $ {\cal A}_l$ given by \begin{eqnarray} T &=& 2 \left( \frac{\sqrt{2 \beta - \alpha^2}}{2} ( G_{11} + \frac{1}{4} { {\cal A}}^a\,_l { {\cal A}}_{al} ) - i B_{12}\right) \nonumber \\ {\cal A}_l &=& { {\cal A}}_{1l}+i{ {\cal A}}_{2l} \label{K37} \end{eqnarray} The corresponding K\"{a}hler potential reads \begin{eqnarray} K = - \ln \left( T + \bar{T} - \frac{\sqrt{2 \beta - \alpha^2}}{2} \bar{ {\cal A}}_l { {\cal A}}_l \right) + const \label{K38} \end{eqnarray} Finally, let us point out that additional untwisted matter fields, charged or uncharged under the generic gauge group which survives when turning on Wilson lines, enter into the K\"{a}hler potentials (\ref{K27}) and (\ref{K38}) in precisely the same way as the complex Wilson lines $B$, $C$, ${\cal A}_l$. Hence, the modular symmetry properties of these K\"{a}hler potentials are preserved by the inclusion of untwisted matter fields. \setcounter{equation}{0} \section{Examples of Target Space Duality Symmetries} \hspace*{.3in} We are now poised to discuss the symmetry properties of the K\"{a}hler potentials we constructed in the previous section. We will, in particular, be concerned with target space duality symmetries, also referred to as modular symmetries. 
As stated in section 4, the spectrum of untwisted states of an orbifold theory is invariant under certain discrete transformations of the winding and momentum numbers accompanied by discrete transformations of the moduli fields. These transformations of the moduli fields induce particular K\"{a}hler transformations of the K\"{a}hler potential and, thus, are symmetries of the tree-level low-energy Lagrangian describing the coupling of moduli fields to supergravity. As explained in section 4, modular transformations act on the vector $v^T=(q^A,n^i,m_i)$ of quantum numbers as integer valued transformations $\Omega$ \begin{eqnarray} v^{\prime} = \Omega^{-1} v \label{E1} \end{eqnarray} As discussed earlier, $\Omega$ must satisfy (\ref{M8}). Modular transformations (\ref{E1}) act on the real moduli matrix $\Sigma$ given in (\ref{M13}) as \begin{eqnarray} \Sigma \rightarrow \Omega^T \Sigma \Omega \label{E2} \end{eqnarray} We begin by discussing modular symmetries of the $\frac{SO(r,2)}{SO(r) \otimes SO(2)}$ cosets. For concreteness, we will again focus on the $\frac{SO(4,2)}{SO(4) \otimes SO(2)}$ coset. The associated modular group $G_{O}$ will be called $O(4,2;{\bf Z})$. It is crucial at this point to notice that some care is required in order to specify this group. If the six-dimensional sublattice $\Gamma_{4;2}$ of the Narain lattice $\Gamma_{22;6}$, on which $G_{O}$ acts, happens to factorize (that is, if there is an orthogonal direct decomposition of $\Gamma_{22;6}$ into $\Gamma_{4;2}$ and its complement), then $G_{O}$ will be the group given by \begin{equation} \{ M \in Gl(6; {\bf Z}) | M^{T} H M = H \} \end{equation} where $H$ is the lattice metric of $\Gamma_{4;2}$. But such a decomposition will in general not exist \cite{ErlSpa} and therefore one has the further constraint that the elements of $G_{O}$ must also act crystallographically on the full lattice $\Gamma_{22;6}$. 
The resulting constraints should be similar to those found in the case where the internal six--dimensional torus does not factorize \cite{BLST1}. For definiteness and simplicity, we will only consider the simplest case. As already explained in the last section, we demand that the internal torus decomposes as $T_{4} \oplus T_{2}$, where the twist $\theta$ acts on $T_{2}$ as $-I$. The corresponding directions are labeled by $i=1,2$. The modular group $G_{O}$ then contains the well-known group \begin{equation} O(2,2;{\bf Z}) = \{ M \in Gl(4, {\bf Z}) | M^{T} \eta M = \eta \} \end{equation} as a subgroup, where \[ \eta = \left( \begin{array}{cc} 0 & I \\ I & 0 \\ \end{array} \right) \] A complete set of generators for $O(2,2,Z)$ can be found in \cite{GPR}. $O(2,2,Z)$ acts non-trivially on the components $n^{i}$, $m_{i}$ ($i = 1,2$) corresponding to the four basis vectors $\overline{k}_{i}$, $k^{i}$ of the Narain lattice. In order to find the full group $G_{O}$ we must identify those quantum numbers $q^{A}$ which will transform non-trivially under it. Since, by definition, $G_{O}$ is the group of modular transformations in the $-1$ eigenspace of the twist $\Theta$, this means that we must identify all basis vectors $l_{A}$ of the Narain lattice which transform non-trivially under the projection of the twist to the $-1$ eigenspace. In \cite{Moh2} it was shown that, in the absence of discrete background fields, the $l_A$ transform under the twist $\Theta$ with the same integer valued matrix as the $E_{8} \otimes E_{8}$ basis vectors ${\bf e}_{A}$. Note furthermore that, due to the explicit form (\ref{lA}) of the $l_{A}$, modular transformations of the $q^{A}$ among themselves and automorphisms of $E_{8} \otimes E_{8}$ are in one-to-one correspondence. Starting from these observations we can find choices for the twist which are quite close to the situation where the corresponding lattice factorizes. 
Namely, we will choose the gauge twist $\theta'$ such that its two eigenvalues $-1$ come from a Coxeter twist in an $A_{1} \otimes A_{1}$ sublattice. (Note, however, that this is not the most general situation. A general Coxeter twist of a subalgebra of $E_{8} \otimes E_{8}$, which may be used to define an $E_{8} \otimes E_{8}$ automorphism, will have several different eigenvalues.) In order to construct the modular group $G_{O}$ we proceed in two steps. First consider the six-dimensional sublattice $\Gamma_{4;2}$ of $\Gamma_{22;6}$, which is spanned by $l_{i}$, $\overline{k}_{i}$, $k^{i}$ ($i=1,2$), where $l_{1}$ and $l_{2}$ correspond to the $A_{1} \otimes A_{1}$ sublattice. The group of pseudo--orthogonal automorphisms of $\Gamma_{4;2}$ is then given by \begin{equation} \{ M \in Gl(6; {\bf Z}) | M^{T} H M = H \} \label{MHM} \end{equation} with \begin{equation} H = \left( \begin{array}{ccc} C & 0 & 0 \\ 0 & 0 & I \\ 0 & I & 0 \\ \end{array} \right) \end{equation} where $C = 2\, {\rm diag}(1,1)$ is the Cartan matrix of $A_{1} \otimes A_{1}$. In a second step we should then identify all elements of this group which also act crystallographically on the full lattice. There are three classes of elements which automatically fulfill this condition, namely all elements of the subgroup $O(2,2;{\bf Z})$, all Weyl automorphisms of $A_{1} \otimes A_{1}$ and shifts of the $q^{i}$ by multiples of $n^{j}$, $m_{j}$ with $i,j =1,2$. To be able to say something about other elements would, however, require a more detailed analysis. Therefore we will, in the following, only consider various particularly interesting elements belonging to these three classes, and we will work out the corresponding transformation properties of the real and complex moduli as well as of the K\"ahler potential. There are, actually, three ways of deriving the action of the modular group $G_O$ on the real moduli fields $G_{ij}, B_{ij}$ and $A_{Ai}$. We will, in the following, make use of all three of them. 
They are as follows. The transformation law of the real moduli can, in principle, be obtained from (\ref{E2}). This, however, can prove to be quite cumbersome. An alternative way of obtaining the transformation law of the real moduli fields is given by looking at the transformation law of the projective coset coordinate $Z$ given in (\ref{Pro}). Modular transformations (\ref{E1}) act on $Z$ by fractional linear transformations (\ref{Tra}). Yet another way for deriving the transformation law of the real moduli fields is to look at the background field matrix $\varepsilon$ \cite{Ven,GPR} given by \begin{eqnarray} \varepsilon = \left( \begin{array}{cc} E & F \\ 0 & X \\ \end{array} \right) = \left( \begin{array}{cc} 2 ( G+B+\frac{1}{4} { {\cal A}}^a { {\cal A}}_a )_{ij} & A_{Ai}\\ 0 & (G + B)_{AB} \\ \end{array} \right) \label{E3} \end{eqnarray} where the $(G+B)_{AB}$-data on the $E_8 \otimes E_8$ root lattice are given by \begin{equation} (G+B)_{AB}=C_{AB},\,\,\, A > B,\,\,\, (G+B)_{AA}=\frac{1}{2}C_{AA},\,\,\, (G+B)_{AB}=0,\, A < B \end{equation} Consider an element $\hat{g} \in O(4,4,Z)$ and the bilinear form ${\hat{\eta}}$ \begin{eqnarray} \hat{g} = \left( \begin{array}{cc} \hat{a} & \hat{b}\\ \hat{c} & \hat{d} \\ \end{array} \right) \;\;\;,\;\;\; \hat{{\eta}} = \left( \begin{array}{cc} 0 & I\\ I & 0 \\ \end{array} \right) \label{E4} \end{eqnarray} where $\hat{a}, \hat{b}, \hat{c}, \hat{d}, I$ are $4 \times 4$-dimensional matrices. $\hat{g}$ satisfies $\hat{g}^T \hat{\eta} \hat{g} = \hat{\eta}$. The action of $O(4,4,Z)$ on $\varepsilon$ is given as \cite{GivRo,GPR} \begin{eqnarray} \varepsilon^{\prime} = \hat{g}( \varepsilon )= ( \hat{a} \varepsilon + \hat{b}) (\hat{c} \varepsilon + \hat{d})^{-1} \label{E5} \end{eqnarray} Then, the modular group $O(4,2,Z)$ is the subgroup of $O(4,4,Z)$ that preserves the heterotic structure of $\varepsilon$ in (\ref{E3}) while acting on $\varepsilon$ by fractional linear transformations (\ref{E5}). 
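Since the fractional linear action (\ref{E5}) is used repeatedly below, it is worth noting that it composes correctly, i.e. it defines a group action on the background $\varepsilon$. A minimal numerical sketch (illustrative only; the specific $O(4,4,Z)$ elements chosen here are hypothetical examples built from the standard generators, a $Gl(4,Z)$ basis change and an antisymmetric integer shift):

```python
import numpy as np

n = 4
I = np.eye(n, dtype=int)
Z = np.zeros((n, n), dtype=int)
eta = np.block([[Z, I], [I, Z]])

def act(g, eps):
    """Fractional linear action (E5): eps -> (a eps + b)(c eps + d)^(-1)."""
    a, b, c, d = g[:n, :n], g[:n, n:], g[n:, :n], g[n:, n:]
    return (a @ eps + b) @ np.linalg.inv(c @ eps + d)

# a Gl(4,Z) basis change and an antisymmetric integer shift (both eta-preserving)
A = np.array([[1, 1, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])
g1 = np.block([[A, Z], [Z, np.round(np.linalg.inv(A.T)).astype(int)]])
Theta = np.array([[0, 2, 0, 0], [-2, 0, 0, 0], [0, 0, 0, 1], [0, 0, -1, 0]])
g2 = np.block([[I, Theta], [Z, I]])

for g in (g1, g2):
    assert (g.T @ eta @ g == eta).all()   # g^T eta g = eta

# the action (E5) composes correctly: g1(g2(eps)) = (g1 g2)(eps)
rng = np.random.default_rng(1)
eps = rng.normal(size=(n, n)) + 5 * np.eye(n)   # generic invertible background
assert np.allclose(act(g1, act(g2, eps)), act(g1 @ g2, eps))
```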
The modular group $O(4,2,Z)$ contains an $O(2,2,Z)$ subgroup. There is a natural embedding \cite{GivRo,GPR} of $O(2,2,Z)$ into $O(4,2,Z)$ given as follows. Consider an element $g \in O(2,2,Z)$ and the bilinear form $\eta$ \begin{eqnarray} g = \left( \begin{array}{cc} a & b \\ c & d \\ \end{array} \right) \;\;\;,\;\;\; \eta = \left( \begin{array}{cc} 0 & I\\ I & 0 \\ \end{array} \right) \label{E6} \end{eqnarray} where $a,b,c,d,I$ are $2 \times 2$-dimensional matrices. $g$ satisfies $g^T \eta g = \eta$. Then, the embedding of $O(2,2,Z)$ into $O(4,2,Z)$ is given as \begin{eqnarray} \hat{a} = \left( \begin{array}{cc} a & 0\\ 0 & I \\ \end{array} \right),\, \,\, \hat{b} = \left( \begin{array}{cc} b & 0\\ 0 & 0 \\ \end{array} \right),\, \,\, \hat{c} = \left( \begin{array}{cc} c & 0\\ 0 & 0 \\ \end{array} \right),\, \,\, \hat{d} = \left( \begin{array}{cc} d & 0 \\ 0 & I \\ \end{array} \right) \label{E7} \end{eqnarray} The action of $O(2,2,Z)$ on $\varepsilon$ yields \begin{eqnarray} {\varepsilon}^{\prime} = \hat{g}(\varepsilon)= \left( \begin{array}{cc} E^{\prime} & (a-E^{\prime}c)F \\ 0 & X \\ \end{array} \right) \label{E8} \end{eqnarray} where \begin{eqnarray} E^{\prime}=(aE+b)(cE+d)^{-1} \label{E9} \end{eqnarray} Let us now look at specific $O(2,2,Z)$ modular transformations and derive the transformation laws for the real moduli fields; a complete set of generators for $O(2,2,Z)$ can be found in the literature \cite{GPR}. Consider the inverse duality transformation given by \begin{eqnarray} \Omega = \left( \begin{array}{cc} I & 0 \\ 0 & g^T \\ \end{array} \right) ,\,\,\,\,\, g= \left( \begin{array}{cc} 0 & I \\ I & 0 \\ \end{array} \right) \label{E10} \end{eqnarray} It acts on the quantum numbers as in (\ref{E1}), thus interchanging the winding numbers $n^i$ with the momentum numbers $m_i$. 
From (\ref{E2}) one finds that \cite{Ven} \begin{eqnarray} G_{ij} &\rightarrow& \frac{1}{4} \left( (G+B+\frac{1}{4} { {\cal A}}^a { {\cal A}}_a)^{-1} G (G-B+\frac{1}{4} { {\cal A}}^a { {\cal A}}_a)^{-1} \right)_{ij} \nonumber\\ B_{ij} &\rightarrow& - \frac{1}{4} \left( (G+B+\frac{1}{4} { {\cal A}}^a { {\cal A}}_a)^{-1} B (G-B+\frac{1}{4} { {\cal A}}^a { {\cal A}}_a)^{-1} \right)_{ij} \nonumber\\ { {\cal A}}^a\,_i { {\cal A}}_{aj} &\rightarrow& \frac{1}{4} \left( (G+B+\frac{1}{4} { {\cal A}}^a { {\cal A}}_a)^{-1} { {\cal A}}^b { {\cal A}}_b (G-B+\frac{1}{4} { {\cal A}}^a { {\cal A}}_a)^{-1} \right)_{ij} \label{E11} \end{eqnarray} From (\ref{E11}) it then follows that \begin{equation} E \rightarrow \frac{1}{4} E^{-1} \label{E12} \end{equation} which is in agreement with what one obtains from (\ref{E9}). The transformation law of the $A_{Ai}$ can, alternatively, be obtained from (\ref{M31}) in a straightforward way. It is consistent with the transformation property of the ${\cal A}^a\,_i {\cal A}_{aj}$ given in (\ref{E11}). We proceed to show that inverse duality is a symmetry transformation of the K\"{a}hler potential. To do so, one has to compute the transformation laws of the complex moduli fields $T,U,B$ and $C$ given in (\ref{K26}). We will, in the following, only list a few of the lengthy expressions arising when working out (\ref{E11}) and (\ref{M31}) explicitly. For instance, it can be checked that \begin{eqnarray} G_{11} &\rightarrow& \frac{1}{4} \frac{ G_{22}}{(\det (G+B+\frac{1}{4} { {\cal A}}^a { {\cal A}}_a )_{ij})^2} \left( (B_{12} + \frac{1}{4} { {\cal A}}^a\,_1 { {\cal A}}_{a2} - { {\cal A}}^a\,_2 { {\cal A}}_{a2} \frac{G_{12}}{G_{22}})^2 \right. \nonumber\\ &+& G (1 + \frac{1}{4} \left. 
\frac{{ {\cal A}}^a\,_2 { {\cal A}}_{a2}}{G_{22}} )^2 \right) \label{E13} \end{eqnarray} and that \begin{eqnarray} G &\rightarrow& \frac{1}{16} \frac{G}{(\det (G+B+\frac{1}{4} { {\cal A}}^a { {\cal A}}_a )_{ij})^2} \label{E14} \end{eqnarray} It can also be verified that (\ref{E13}) can be rewritten as \begin{eqnarray} G_{11} \rightarrow \frac{1}{16} G_{11} \frac{1}{(\det (G+B+\frac{1}{4} { {\cal A}}^a { {\cal A}}_a)_{ij})^2} \mid U \mid^2 \mid T - \frac{1}{2} \frac{BC}{U} \mid^2 \label{E15} \end{eqnarray} Similarly, one finds from (\ref{M31}) that \begin{eqnarray} A_{11} &\rightarrow& - \frac{1}{2} \frac{1}{({\det}(G-B+ \frac{1}{4} {\cal A}^a {\cal A}_a)_{ij})} \left(A_{11} (G_{22}+\frac{1}{4} {\cal A}^a\,_2 {\cal A}_{a2}) \right. \nonumber\\ && \left. - A_{12} (G_{12}+B_{12}+\frac{1}{4} {\cal A}^a\,_1 {\cal A}_{a2}) \right) \label{E16} \end{eqnarray} Inserting all these expressions into (\ref{K26}) yields \begin{eqnarray} U &\rightarrow&- \frac{T}{- UT + \frac{1}{2} BC} \nonumber\\ T &\rightarrow&- \frac{U}{- UT + \frac{1}{2} BC} \nonumber\\ B &\rightarrow& \frac{B}{- UT + \frac{1}{2} BC} \nonumber\\ C &\rightarrow& \frac{C}{- UT + \frac{1}{2} BC} \label{E17} \end{eqnarray} Note that, in the presence of the complex Wilson lines $B$ and $C$, the $T$ and $U$ moduli now mix under inverse duality. When switching off the complex Wilson lines $B$ and $C$, no mixing occurs and one obtains the familiar transformation law for the $T$ and $U$ moduli \cite{GPR}. Finally, inserting (\ref{E17}) into (\ref{K27}) yields \begin{eqnarray} K \rightarrow K+F+\bar{F} \label{E18} \end{eqnarray} with the holomorphic $F$ given by \begin{eqnarray} F = \ln \left( UT (1-\frac{1}{2} \frac{BC}{UT}) \right) \label{E19} \end{eqnarray} A useful check on (\ref{E19}) is to look at how $Y=\frac{4G}{G_{11}}$ transforms under (\ref{E11}). 
It follows from (\ref{E14}) and (\ref{E15}) that \begin{eqnarray} Y \rightarrow Y \frac{1}{|U|^2} \frac{1}{\mid T - \frac{1}{2} \frac{BC}{U}\mid^2} \label{E20} \end{eqnarray} Thus, $K= - \ln \frac{4G}{G_{11}}$ transforms indeed as in (\ref{E18}) and (\ref{E19}). Next, let us look at the subgroup of $O(2,2,Z)$ transformations which acts as $SL(2,Z)_U$ transformations on the $U$ modulus. Let \begin{eqnarray} \Omega = \left( \begin{array}{cc} I & 0 \\ 0 & g^T \\ \end{array} \right),\,\, \,\, g^T = \left( \begin{array}{cc} A& \\ & A^{T,-1} \\ \end{array} \right) \label{E21} \end{eqnarray} where $A \in SL(2,Z)$. $SL(2,Z)$ is generated by two elements \cite{GPR} \begin{eqnarray} T= \left( \begin{array}{cc} 1 & 1 \\ 0 & 1 \\ \end{array} \right),\,\, \,\, S= \left( \begin{array}{cc} 0 & 1 \\ -1 & 0\\ \end{array} \right) \end{eqnarray} Consider the case where $A=T^p$ with $p \in Z$. Then, the transformation law of the real moduli fields is readily obtained from (\ref{E9}) and reads \begin{eqnarray} (G_{ij}) &\rightarrow& (G_{ij}) + \left( \begin{array}{cc} 0 & pG_{11} \\ pG_{11} & p^2 G_{11}+2 p G_{12} \\ \end{array} \right) \nonumber \\ B_{12} &\rightarrow& B_{12} \nonumber\\ { {\cal A}}_{a1} &\rightarrow& { {\cal A}}_{a1} \nonumber\\ { {\cal A}}_{a2} &\rightarrow& { {\cal A}}_{a2} + p{ {\cal A}}_{a1} \label{E23} \end{eqnarray} Inserting (\ref{E23}) into (\ref{K26}) yields \begin{eqnarray} U &\rightarrow& U -i p \nonumber\\ T &\rightarrow& T \nonumber\\ B &\rightarrow& B \nonumber\\ C &\rightarrow& C \end{eqnarray} and, hence, \begin{eqnarray} K \rightarrow K \end{eqnarray} Now, consider the case where $A=S$. 
Then, it follows from (\ref{E9}) that \begin{eqnarray} G_{11} &\leftrightarrow& G_{22} \nonumber\\ G_{12} &\rightarrow& - G_{12} \nonumber\\ B_{12} &\rightarrow& B_{12} \nonumber \\ { {\cal A}}^a\,_1 &\rightarrow& - { {\cal A}}^a\,_2 \nonumber\\ { {\cal A}}^a\,_2 &\rightarrow& { {\cal A}}^a\,_1 \label{E26} \end{eqnarray} Inserting (\ref{E26}) into (\ref{K26}) yields \begin{eqnarray} U &\rightarrow& \frac{1}{U} \nonumber\\ T &\rightarrow& T - \frac{1}{2} \frac{BC}{U} \nonumber\\ B &\rightarrow& - \frac{B}{i U} \nonumber\\ C &\rightarrow& - \frac{C}{i U} \label{E27} \end{eqnarray} Note the peculiar Wilson line admixture in the transformation law of the $T$ modulus. Its presence is necessary to make the K\"{a}hler potential transform properly. Indeed, inserting (\ref{E27}) into (\ref{K27}) yields \begin{eqnarray} K &\rightarrow& K + F(U) + \bar{F} (\bar{U}) \end{eqnarray} where \begin{equation} F(U) = \ln U \end{equation} Under more general $SL(2,Z)_U$ transformations \cite{BLST2} \begin{eqnarray} A = \left( \begin{array}{cc} \delta & \beta \\ \gamma & \alpha \\ \end{array} \right),\,\,\,\, \alpha \delta - \beta \gamma = 1 \end{eqnarray} it can be checked that the following holds \begin{eqnarray} U &\rightarrow& \frac{\alpha U -i \beta}{i \gamma U + \delta} \nonumber\\ T &\rightarrow& T - \frac{i \gamma}{2} \frac{BC}{i \gamma U + \delta} \nonumber\\ B &\rightarrow& \frac{B}{i \gamma U + \delta} \nonumber\\ C &\rightarrow& \frac{C}{i \gamma U + \delta} \label{E31} \end{eqnarray} and that \begin{equation} F(U) = \ln(i \gamma U + \delta) \label{FU} \end{equation} As is well known \cite{GPR}, there is also a subgroup of $O(2,2,Z)$ transformations which acts as $SL(2,Z)_T$ transformations on the $T$ modulus. 
Let \cite{BLST2} \begin{eqnarray} \Omega= \left( \begin{array}{cc} I & \\ & g^T \\ \end{array} \right),\,\,\,\, g^T= \left( \begin{array}{cc} \alpha I & \gamma J \\ - \beta J& \delta I \\ \end{array} \right), \,\,\,\, \alpha \delta -\beta \gamma = 1 \label{E32} \end{eqnarray} Then, similarly, it can be shown that \begin{eqnarray} T &\rightarrow& \frac{\alpha T -i \beta}{i \gamma T + \delta} \nonumber\\ U &\rightarrow& U - \frac{i \gamma}{2} \frac{BC}{ i \gamma T + \delta} \nonumber\\ B &\rightarrow& \frac{B}{i \gamma T + \delta} \nonumber\\ C &\rightarrow& \frac{C}{i \gamma T + \delta} \label{E34} \end{eqnarray} and \begin{equation} K \rightarrow K + F(T) + \bar{F} (\bar{T}) \label{E35} \end{equation} where \begin{equation} F(T) = \ln(i\gamma T + \delta) \label{E36} \end{equation} Next, let us look at elements of $O(4,2,Z)$ which are not in the $O(2,2,Z)$ subgroup. The largest additional subgroup which commutes with $O(2,2,Z)$ is given by the group of automorphisms of the $A_1 \otimes A_1$ sublattice of the $E_8 \otimes E_8$ root lattice. Since this subgroup acts trivially on the winding numbers $n^i$ and on the momentum numbers $m_i$, we will not be interested in it. Consider, however, the following additional generator of $O(4,2,Z)$, whose action on the quantum numbers is non-trivial \cite{Lauer} \begin{eqnarray} W_L= \left( \begin{array}{cccccc} 1 & 0 & 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ -2& 0 & -1 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \\ \end{array} \right) \end{eqnarray} It satisfies (\ref{MHM}). Now, look at the action of the group element \begin{equation} \Omega = (W_L)^{p}\,, \,\,\,\, p \in Z \end{equation} on the quantum numbers as given in (\ref{E1}). 
It produces a shift in the first component of $q$ (the momentum vector on the $E_8 \otimes E_8$ lattice) \begin{eqnarray} q^1 \rightarrow q^1 - p\, n^1 \label{E50} \end{eqnarray} by $p$ units of the winding number $n^1$. The corresponding transformation of the real moduli fields can be read off from the transformation properties (\ref{M25}) and (\ref{Tra}) of the projective coordinate $Z$ given in (\ref{Pro}). One finds that \begin{eqnarray} G_{ij} &\rightarrow& G_{ij} \nonumber\\ B_{12} &\rightarrow& B_{12} - \frac{1}{4}\, p A_{12} \nonumber\\ A^1\,_1 &\rightarrow& A^1\,_1 + p \nonumber\\ A^2\,_1 &\rightarrow& A^2\,_1 \nonumber\\ A^1\,_2 &\rightarrow& A^1\,_2 \nonumber\\ A^2\,_2 &\rightarrow& A^2\,_2 \label{E51} \end{eqnarray} Thus, the shift given in (\ref{E50}) is accompanied by a shift in the first component of the first Wilson line. Inserting (\ref{E51}) into (\ref{K26}) yields \begin{eqnarray} T &\rightarrow& T + \frac{p}{2} C_{11} \,U + \frac{p}{2} \sqrt{C_{11}} (B + C) \nonumber\\ U &\rightarrow& U \nonumber\\ B &\rightarrow& B + p \sqrt{C_{11}} \,U \nonumber\\ C &\rightarrow& C + p \sqrt{C_{11}} \,U \label{E61} \end{eqnarray} and, from (\ref{K27}), \begin{equation} K \rightarrow K \label{E100} \end{equation} We would like to point out that the associated group element $\hat{g} \in O(4,2,Z)$ reproducing (\ref{E51}) via (\ref{E5}) can be constructed and that it is given by (\ref{E4}) with \begin{eqnarray} \hat{a}&=& \left( \begin{array}{cc} \left( \begin{array}{cc} 1 & 0 \\ 0 & 1 \\ \end{array}\right) & p \left( \begin{array}{cc} 1 & 0 \\ 0 & 0 \\ \end{array}\right) \\ \left( \begin{array}{cc} 0 & 0 \\ 0 & 0 \\ \end{array}\right) & \left( \begin{array}{cc} 1 & 0 \\ 0 & 1 \\ \end{array}\right) \\ \end{array} \right) \;\;,\;\; \hat{b}= \left( \begin{array}{cc} -\frac{1}{2} p^2 C_{11} \left( \begin{array}{cc} 1&0 \\ 0&0 \\ \end{array}\right) & \frac{1}{2} p C_{11} \left( \begin{array}{cc} 1 & 0 \\ 0 & 0 \\ \end{array}\right) \\ -\frac{1}{2} p C_{11} \left( 
\begin{array}{cc} 1&0 \\ 0&0 \\ \end{array}\right) & \left( \begin{array}{cc} 0&0 \\ 0&0 \\ \end{array}\right) \\ \end{array} \right) \nonumber\\ \hat{c}&=& 0 \;\;,\;\; \hat{d} = \hat{a}^{T,-1} \end{eqnarray} Note that, since $C_{11}$ is an even integer, $\hat{b}$ is also integer valued. We now briefly discuss the target-space duality symmetries of the K\"{a}hler potential (\ref{KK28}) for a $\frac{SO(3,2)}{SO(3) \otimes SO(2)}$ coset. The associated modular group $G_O$ is given by $O(3,2,Z)$. Under $O(3,2,Z)$ the complex moduli $T,U$ and $B$ transform as in (\ref{E17}), (\ref{E31}), (\ref{E34}) and (\ref{E61}) with $C=B$. The associated K\"{a}hler potential (\ref{KK28}) then transforms as in (\ref{E19}), (\ref{FU}), (\ref{E36}) and (\ref{E100}), where again $C=B$. Finally, let us turn to the target-space duality symmetries of the K\"{a}hler potential (\ref{K38}) for an $\frac{SU(n,1)}{SU(n) \otimes U(1)}$ coset. It possesses an $SL(2,Z)_T$ symmetry as in (\ref{E32}) with the $T$ modulus and the complex Wilson lines ${ {\cal A}}_l$ transforming as \begin{eqnarray} T &\rightarrow& \frac{\alpha T -i \beta}{i \gamma T + \delta} \nonumber\\ {\cal A}_l &\rightarrow& \frac{ {\cal A}_l}{i \gamma T + \delta} \label{last}\end{eqnarray} and the K\"{a}hler potential transforming as in (\ref{E35}) and (\ref{E36}). Let us finally remark that the modular transformation rules (\ref{E31}), (\ref{E34}) and (\ref{last}) agree with previous results \cite{FLT,IbaLu,IbaLu2} for untwisted matter fields. \section{Conclusion} \hspace{.3in} In this paper we showed that the local structure of the untwisted moduli space of asymmetric $Z_N$ orbifolds is given by a product of $\frac{SU(n,m)}{SU(n) \otimes SU(m) \otimes U(1)}$ and $\frac{SO(r,p)}{SO(r) \otimes SO(p)}$ cosets. We then specialised to the case of $(0,2)$ symmetric orbifold compactifications with continuous Wilson lines. 
For the case where the underlying 6-torus $T_6$ is given by a direct sum $T_4 \oplus T_2$, we showed that, interestingly, when the twist on the internal torus lattice has eigenvalues $-1$, there are holomorphic terms in the associated K\"{a}hler potential describing the mixing of complex Wilson lines. These terms deserve further study, since they were recently shown \cite{LouKap,Anto} to be of the type which induces a mass term for Higgs particles of the order of the gravitino mass once supergravity is spontaneously broken. We proceeded to identify the associated target-space duality symmetry groups and explicitly checked that they induce particular K\"{a}hler transformations of the K\"{a}hler potentials. In the case where the twist on the internal torus lattice has eigenvalues of $-1$, the associated $T$ and $U$ moduli were shown to mix under target-space duality transformations due to the presence of the holomorphic mixing terms in the K\"{a}hler potential. In more general terms, the discussed orbifold examples clearly show that for (0,2) compactifications the moduli spaces of the moduli corresponding to the deformations of the K\"{a}hler class and of the complex structure, respectively, do not in general factorize as in the (2,2) compactifications, and that they get mixed by target-space modular transformations. Having thus checked that these target-space symmetries are indeed symmetries of the 4D tree-level low-energy Lagrangian, it would be very interesting to know how these modular symmetries manifest themselves in the string loop threshold corrections \cite{Dix,AntNar,AntGav,May}. There, one expects to find that Wilson lines break some of the duality symmetries, and it would be interesting to find out to what subgroups they are broken down. Also, when turning on continuous Wilson lines, one generically expects to find smaller gauge groups than the ones present in $(2,2)$ symmetric orbifold compactifications \cite{IbaLerLu}.
Let us point out that it would be interesting to determine the generic gauge groups occurring at generic points in the moduli space of the $(0,2)$ models discussed in this paper. Work along these lines is in progress \cite{CarLuMoh}. Finally, it would also be of interest to extend the above investigations to the twisted sector. On the one hand, twisted moduli are important, because orbifolds can be smoothed out into Calabi-Yau manifolds by assigning non-zero vevs to twisted moduli. On the other hand, it has recently been pointed out \cite{Sas} that twisted sectors in asymmetric orbifolds may give rise to additional space-time supercharges.
\section{Cognitive Computing}\label{sec:cognitivecomputing} IBM is certainly one of the major companies that pushed the development of modern computers from the very beginning. With respect to the development of intelligent machines, IBM twice succeeded in setting a milestone: In 1997 the chess-playing computer \textit{Deep Blue} managed to beat the world champion Garry Kasparov. After this match there was a discussion about whether the IBM team had been cheating during the tournament. Kasparov demanded a rematch, which was refused; moreover, Deep Blue was dismantled. In 2011 the IBM computer system \textit{Watson} beat two former winners in the quiz show Jeopardy. In Jeopardy, the players have to understand natural language questions from various domains and give quick answers. This kind of question answering and reasoning is called deep question answering. The Watson system used many different sources of knowledge. Not being connected to the internet, Watson had access to databases, dictionaries, encyclopedias and formal ontologies, but also to literary works and newspaper articles. Very differently from Deep Blue, after this effective public event the Watson system was developed further and also tailored to various application domains \cite{Watson2013}. It is now applied in eHealth, cancer research and finance, and the list is steadily increasing. There is even a version of Watson which acts as a chef, creating truly extraordinary dishes, e.g. a Vietnamese Apple Kebab \cite{chef2014}. The keyword which turns the Jeopardy-winning system into the basis of a business plan is \textit{cognitive computing system}. Such a system is designed to learn and to interact with people in a way whose results could not be achieved either by humans or by machines on their own. Of course, mastering \textit{Big Data} also plays an important role -- IBM's marketing slogan is ``Artificial Intelligence meets Business Intelligence''.
Such a cognitive computing system has the following properties: \begin{enumerate}[(a)] \item Multiple knowledge formats have to be processed: formal knowledge, like ontologies, but also a broad variety of natural language sources, like textbooks, encyclopedias, newspapers and literary works.\label{a} \item The different formats of knowledge also entail the necessity to work with different reasoning mechanisms, including information retrieval, automated deduction in formal logic and probabilistic reasoning. \label{b} \item The different parts and modules have to interact and cooperate very closely.\label{c} \item The entire processing is time critical because of the interaction with humans.\label{d} \item The system must be aware of its own state and accuracy in order to rank its outcome.\label{e} \end{enumerate} Natural language question answering is obviously one example of cognitive computing as depicted above. There are one or several huge text corpora together with other background knowledge, which can be given in various formats. The user interaction is rather simple: the user asks a natural language question and the system answers in natural language. In the following, natural language question answering is briefly introduced. \subsection{Authors} Prof. Dr. Ulrich Furbach is Professor of Artificial Intelligence at the University of Koblenz-Landau. His research areas include automated reasoning, knowledge representation, question answering systems and cognition research. \medskip\noindent Dipl.-Inform. Claudia Schon is a research associate in the Artificial Intelligence Research Group at the University of Koblenz-Landau and works in the research project RatioLog. Her research interests include artificial intelligence, cognition and logic, with a main focus on description logics. \medskip\noindent Prof. Dr.
Frieder Stolzenburg is Professor of Knowledge-Based Systems at the Harz University of Applied Sciences in Wernigerode and heads the Mobile Systems Laboratory at the Department of Automation and Computer Science. His research interests include artificial intelligence, logic, cognition and mobile robotics. \subsection{Contact} Universität Koblenz-Landau\\ Universitätsstr. 1\\ 56070 Koblenz\\[\medskipamount] Tel.: +49\,261\,/\,287-2728\\ URL: http://www.uni-koblenz-landau.de/koblenz/fb4/ifi/AGKI \begin{abstract}This paper briefly characterizes the field of cognitive computing. As an exemplification, the field of natural language question answering is introduced together with its specific challenges. A possibility to master these challenges is illustrated by a detailed presentation of the LogAnswer system, which is a successful representative of the field of natural language question answering. \emph{Keywords:} cognitive computing; natural language processing; question answering systems; theorem proving. \end{abstract} \noindent Human computer interaction is a discipline of increasing importance. Many people spend a lot of time with computers playing games and watching movies but, of course, also solving problems during their professional activities. This becomes all the more important the more data and information have to be taken into account. Indeed, this amount is increasing every day. Big data and open data are keywords that relate to fields of computer science where exactly these aspects are tackled. This paper briefly describes the term \textit{cognitive computing} and demonstrates that natural language question answering is an example of this new computing paradigm. In the next section, cognitive computing is discussed. After this, a brief overview of natural language question answering is given. Then the LogAnswer system is described, and finally we conclude with current extensions and future work.
\input{cognitive_computing} \section{Natural Language Question-Answering} \label{sec:nlp_qa} \input{nlp_qa.tex} \section{The LogAnswer System} \label{sec:loganswer} \input{loganswer.tex} \section{Conclusions} In this paper, the state of the art in cognitive computing systems and in natural language question answering is discussed. As a prototypical example, the LogAnswer system is described in detail and its properties are checked against criteria for cognitive computing systems. Currently the LogAnswer system is being extended in the follow-up project RatioLog, aiming at the inclusion of rational and human-like reasoning components. In \cite{kik} the use of deontic logic for modeling human reasoning and its automation with the logical machinery from LogAnswer is demonstrated. Another extension concerns defeasible reasoning \cite{WS14}, which is helpful to determine the best answer from the possibly contradicting answer candidates. \bigskip \noindent\emph{This contribution was created within the project RatioLog -- Rational Extensions of Logical Reasoning, which is funded by the German Research Foundation (DFG) under grants FU~263/15-1 and STO~421/5-1.} \renewcommand{\sc}{} \bibliographystyle{acm} \section{The LogAnswer System} LogAnswer \cite{Furbach:Gloeckner:Helbig:Pelzer:LogicBasedQA:2009} is an open domain question answering system. It is accessible via a web interface (\url{http://www.loganswer.de}) similar to that of a search engine, see Fig.~\ref{fig:loganswer}. The user enters a question into the text box and LogAnswer presents the three best answers, which are highlighted in the relevant textual sources to provide a context. While many systems for natural language question answering focus on shallow linguistic methods, LogAnswer uses an automated theorem prover (ATP) to compute the replies. \begin{figure}[t] \centering \includegraphics[width=\textwidth]{screenshot_la} \caption{Screenshot of the LogAnswer System.
\label{fig:loganswer}} \end{figure} The system was developed in the LogAnswer project, which was a cooperation between the IICS (Intelligent Information and Communication Systems) at the Fernuniversit\"at in Hagen and the Artificial Intelligence Research Group (AGKI) at the University of Koblenz-Landau. The project was funded by the German Research Foundation DFG (Deutsche Forschungsgemeinschaft) and aimed at the development of efficient and robust methods for logic-based question answering. The IICS is experienced in computational linguistics and knowledge engineering. Within the LogAnswer project the IICS handled the natural language aspects and provided the knowledge base. As an expert in automated theorem proving, the AGKI was responsible for the deductive aspects of the LogAnswer project. As indicated in (\ref{c}) in the list of properties of cognitive computing systems, it is important to take care that the different modules interact and cooperate closely. When combining NLP and automated reasoning as in the LogAnswer system, it is important to pay attention to the conflicting aims of the two fields. Since NLP methods are often confronted with flawed textual data, they strive toward robustness and speed. However, they lack the ability to perform complex inferences. In contrast to that, a theorem prover uses a sound calculus to derive precise complex proofs. However, even minor flaws or omissions in the data can lead to a failure of the derivation process. Furthermore, refutationally complete theorem provers can have problems when dealing with large amounts of data, because they can easily get stuck performing redundant inferences. In the LogAnswer system NLP is used to filter the input for the theorem prover down to a fraction of the knowledge available to LogAnswer, and the prover is embedded into a relaxation mechanism which can lessen the proof requirements for imperfect input data \cite{DBLP:journals/aicom/FurbachGP10}.
As claimed in (\ref{a}) in the list of properties, the LogAnswer system uses multiple knowledge formats. One part of the knowledge is provided by a snapshot of the German Wikipedia, which has been translated into a semantic network representation in the MultiNet (Multilayered Extended Semantic Networks) formalism \cite{DBLP:series/cogtech/Helbig2006}. To make the semantic networks accessible to modern theorem provers, LogAnswer is also equipped with a representation of the MultiNet knowledge base in first-order logic (FOL). See \cite{DBLP:journals/aicom/FurbachGP10} for details on the translation of the MultiNet knowledge base into a first-order logic knowledge base. All in all, 29.1 million natural language sentences have been translated. In addition to that, background knowledge consisting of 12,000 logical rules and facts is used. This background knowledge provides general knowledge which is advantageous in the setting of question answering. Automated reasoning enables the integration of this background knowledge. \begin{figure}[t] \centering \includegraphics[width=\textwidth]{lauebersicht} \caption{Question Processing of the LogAnswer System.} \label{fig:qa} \end{figure} Figure~\ref{fig:qa} depicts how LogAnswer processes a question. Since it is a web-based question answering system, users expect the system to respond quickly. This aspect of time criticality corresponds to (\ref{d}) in the list of properties and seriously restricts the time available for the LogAnswer system to process a question. In such a restricted time, a question cannot be answered directly using the whole knowledge base. Therefore, several different techniques such as natural language processing, information retrieval, machine learning and automated deduction are employed. This corresponds to claim (\ref{b}) in the list of properties.
After translating the question into the MultiNet and FOL representations, the Wikipedia content is matched against the given query using retrieval and shallow linguistic criteria. By this, lists of features such as the number of matching lexemes between a passage and the question or the occurrences of proper names in the passage are computed. Afterwards a machine learning algorithm ranks the text passages using these features. Then up to 200 text passages are extracted from the knowledge base according to this ranking. These so-called \emph{answer candidates} have a high probability of containing the answer and can be computed rapidly. The computation of feature lists is implemented robustly, which makes it possible to handle documents containing syntactic errors and thus to extract answers from text passages which cannot be parsed completely. In the next step the theorem prover Hyper \cite{WernhardPelzer} is used. The Hyper theorem prover is an implementation of the hypertableaux calculus \cite{DBLP:conf/jelia/BaumgartnerFN96} extended with equality. It has been shown to be very suitable for the type of reasoning problems occurring in the question answering setting, which are characterized by their large number of irrelevant axioms. With the help of Hyper the answer candidates are tested consecutively. For each of these tests, the logical representation of both the query and an answer candidate together with the background knowledge are fed into Hyper. A successful proof provides an answer by giving an instantiation of the variables of the logical representation of the query. If no proof can be found in time, query relaxation techniques are applied. These techniques allow certain subgoals of the query to be weakened or dropped in order to enable the prover to find a proof in a short time. Query relaxation increases the likelihood of finding an answer even if the knowledge at hand is incomplete.
However, the drawback of this technique is that it decreases the probability that the answer found is relevant to the query. As claimed in (\ref{e}) in the list of properties, the LogAnswer system is aware of its own accuracy, because all proofs are ranked by machine learning algorithms. The three proofs with the highest rank are translated back into natural language answers and are presented to the user.
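The candidate-testing loop with query relaxation described above can be illustrated by a small toy sketch. Here a ``proof'' is merely a check of ground subgoals against a fact set, and all names (`prove`, `relax_and_prove`, the example facts) are illustrative placeholders, not the actual LogAnswer API, which feeds FOL representations to the Hyper prover:

```python
# Toy sketch of query relaxation (hypothetical names, not the LogAnswer API).
def prove(subgoals, facts):
    # Stand-in for a prover call: all ground subgoals must be known facts.
    return all(g in facts for g in subgoals)

def relax_and_prove(subgoals, facts, max_drops=2):
    """Drop subgoals one at a time until a proof succeeds or the
    relaxation budget is exhausted; return (success, #dropped)."""
    goals = list(subgoals)
    for dropped in range(max_drops + 1):
        if prove(goals, facts):
            return True, dropped
        if not goals:
            break
        goals.pop()  # weaken the query by dropping one subgoal
    return False, max_drops + 1

facts = {("capital", "berlin", "germany"), ("city", "berlin")}
query = [("capital", "berlin", "germany"),
         ("city", "berlin"),
         ("population_over", "berlin", "4M")]  # unsupported subgoal
ok, dropped = relax_and_prove(query, facts)   # succeeds after one drop
```

Dropping subgoals mirrors the relaxation step: each drop increases the chance of finding a proof but lowers the expected relevance of the answer, which is why the number of dropped subgoals can feed into the final ranking.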
\section{Introduction} The introduction is organized as follows. The first section is devoted to the main definitions and the statement of the Grozman Theorem. In the second section, our result is stated. In the last section, the main ideas of the proof are explained. \subsection{Grozman's Theorem:}\label{sect_0.1} Let $X$ be a manifold of dimension $n$, let $W_X$ be the Lie algebra of vector fields over $X$ and let $M,\,N$ and $P$ be three tensor density modules over $X$. The precise meaning of tensor density module will be clarified later on, and the geometric context (differential geometry, algebraic geometry, $\dots$) has not yet been specified. In a famous paper \cite{G}, Yu. Grozman classified all bilinear differential operators $\pi:M\times N\rightarrow P$ which are $W_X$-equivariant. Since differential operators are local \cite{P}, it is enough to consider the case of formal geometry, namely $X=\mathrm{Spec}\, \mathbb{C}[[z_1,\dots,z_n]]$. The most interesting and difficult part of Grozman's theorem involves the case where $\mathrm{dim} X=1$; indeed, the general case follows from this case. Therefore, we will now assume that $X=\mathrm{Spec}\, \mathbb{C}[[z]]$. For this manifold, the tensor density modules are the modules $\Omega^\delta$, where the parameter $\delta$ runs over $\mathbb{C}$. As a $\mathbb{C}[[z]]$-module, $\Omega^\delta$ is a rank one free module whose generator is denoted by $(dz)^\delta$. The structure of $W_X$-module on $\Omega^\delta$ is described by the following formula: $$\xi.[f.(dz)^\delta] =(\xi.f+\delta f\div(\xi)).(dz)^\delta$$ \noindent for any $f\in \mathbb{C}[[z]]$ and $\xi\in W_X$, where, as usual, $\xi.f=gf'$, $\div( \xi)=g'$ whenever $\xi=g \dfrac{d}{dz}$ for some $g\in \mathbb{C}[[z]]$. When $\delta$ is a non-negative integer, $\Omega^\delta$ is the space $(\Omega^1_X)^{\otimes\delta}$, where $\Omega^1_X$ is the space of K\"{a}hler differentials of $X$.
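As a consistency check, the action formula above can be verified symbolically: for $\xi_i=g_i\,\frac{d}{dz}$, the operators $f\mapsto g f'+\delta g' f$ must satisfy $[\rho(g_1),\rho(g_2)]=\rho(g_1g_2'-g_1'g_2)$, matching the bracket of vector fields. A minimal sketch (sympy assumed available; not part of the paper):

```python
import sympy as sp

z, delta = sp.symbols('z delta')
f, g1, g2 = (sp.Function(s)(z) for s in ('f', 'g1', 'g2'))

def rho(g, h):
    # action of xi = g d/dz on h (dz)^delta: xi.h = g h' + delta g' h
    return g * sp.diff(h, z) + delta * sp.diff(g, z) * h

commutator = rho(g1, rho(g2, f)) - rho(g2, rho(g1, f))
bracket = g1 * sp.diff(g2, z) - sp.diff(g1, z) * g2  # [g1 d/dz, g2 d/dz]
assert sp.simplify(sp.expand(commutator - rho(bracket, f))) == 0
```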
The space $\oplus_{\delta}\,\Omega^\delta$ can be realized as the space of symbols of twisted pseudo-differential operators on the circle (see e.g. \cite{IM}; {\it twisted} means that complex powers of $\dfrac{d}{dz}$ are allowed) and therefore it carries a structure of Poisson algebra. The Poisson structure (a commutative product $P$ and a Lie bracket $B$) induces two series of $W_X$-equivariant bilinear maps, namely the maps $P^{\delta_1,\delta_2}:\Omega^{\delta_1}\times \Omega^{\delta_2} \rightarrow \Omega^{\delta_1+\delta_2}$ and the maps $B^{\delta_1,\delta_2}:\Omega^{\delta_1}\times \Omega^{\delta_2} \rightarrow \Omega^{\delta_1+\delta_2 +1}$. These operators are explicitly defined by: $P^{\delta_1,\delta_2}(f_1.(dz)^{\delta_1},f_2.(dz)^{\delta_2})=f_1f_2 (dz)^{\delta_1+\delta_2}$ $B^{\delta_1,\delta_2}(f_1.(dz)^{\delta_1},f_2.(dz)^{\delta_2})=( \delta_2f_1'f_2-\delta_1f_1f_2') (dz)^{\delta_1+\delta_2+1}$ Moreover, the de Rham operator is a $W_X$-equivariant map $\d:\Omega^0\rightarrow\Omega^1$. So we can obtain additional $W_X$-equivariant bilinear maps between tensor density modules by various compositions of $B^{\delta_1,\delta_2}$ and $P^{\delta_1,\delta_2}$ with $\d$. An example is provided by the map $B^{1,\delta}\circ (\d\times id):\Omega^0\times \Omega^\delta \rightarrow \Omega^{\delta+2}$. Following Grozman, the {\it classical} $W_X$-equivariant bilinear maps are (the linear combinations of) the maps $B^{\delta_1,\delta_2}, P^{\delta_1,\delta_2}$, and those obtained by various compositions with $\d$. Grozman discovered one additional $W_X$-equivariant bilinear map, namely Grozman's operator $G: \Omega^{-2/3}\times \Omega^{-2/3} \rightarrow \Omega^{5/3}$ defined by the formula: $G(f_1.(dz)^{-2/3},f_2.(dz)^{-2/3})= [2(f_1'''f_2-f_2'''f_1)+3(f_1''f_2'-f_1'f_2'')](dz)^{5/3}. 
$ With this, one can state Grozman's result: \bigskip {\bf Grozman Theorem.} {\it Any differential $W_X$-equivariant bilinear map $\pi:\Omega^{\delta_1}\times \Omega^{\delta_2} \rightarrow \Omega^{\gamma}$ between tensor density modules is either classical, or it is a scalar multiple of the Grozman operator.} \bigskip \subsection{The result of the present paper:}\label{sect_0.2} In this paper, a similar question is investigated, namely the determination of all $W_X$-equivariant bilinear maps $\pi:M\times N\rightarrow P$ between tensor density modules, without the hypothesis that $\pi$ is a differential operator. Since differential operators are local, we will establish a global (=non-local) version of Grozman's Theorem. For this purpose, we will make new hypotheses. From now on, the context is that of algebraic geometry, and the manifold $X$ under investigation is the {\it circle}, namely $\mathbb{C}^\ast=\mathrm{Spec} \;\mathbb{C}[z,z^{-1}]$. Set $\mathbf{W}=W_X$. Fix two parameters $\delta, s\in \mathbb{C}$ and set $\rho_{\delta,s}(\xi)= \xi+\delta\,\div\xi + i_{\xi}\alpha_s$ for any $\xi\in \mathbf{W}$, where $\alpha_s=s z^{-1} \d z$. By definition, $\Omega^\delta_s$ is the $\mathbf{W}$-module whose underlying space is $\mathbb{C}[z,z^{-1}]$ and the action is given by $\rho_{\delta,s}$. To describe the action $\rho_{\delta,s}$ more naturally, it is convenient to denote by the symbol $z^s (\d z)^\delta$ the generator of this module, and therefore the expressions $(z^{n+s} (\d z)^\delta)_{n\in\mathbb{Z}}$ form a basis of $\Omega^\delta_s$. It follows easily that $\Omega^\delta_s$ and $\Omega^\delta_u$ are equal if $s-u$ is an integer. Therefore, we will consider the parameter $s$ as an element of $\mathbb{C}/\mathbb{Z}$. We will not provide a rigorous and general definition of the {\it tensor density modules} (see e.g. \cite{M2}).
Let us just say that the {\it tensor density modules} considered here are the $\mathbf{W}$-modules $\Omega^\delta_{u}$, where $(\delta,u)$ runs over $\mathbb{C}\times \mathbb{C}/\mathbb{Z}$. As before, there are $\mathbf{W}$-equivariant bilinear maps $P^{\delta_1,\delta_2}_{u_1,u_2}:\Omega^{\delta_1}_{u_1}\times \Omega^{\delta_2}_{u_2} \rightarrow \Omega^{\delta_1+\delta_2}_{u_1+u_2}$ and $B^{\delta_1,\delta_2}_{u_1,u_2}:\Omega^{\delta_1}_{u_1}\times \Omega^{\delta_2}_{u_2} \rightarrow \Omega^{\delta_1+\delta_2 +1}_{u_1+u_2}$, as well as the de Rham differential $\d:\Omega^{0}_u\rightarrow\Omega^{1}_u$. There is also a map $\rho:\Omega^{1}_u\rightarrow\Omega^{0}_u$, which is defined as follows. For $u\not\equiv 0\, \mod \mathbb{Z}$, the operator $\d$ is invertible, and we set $\rho=\d^{-1}$. For $u \equiv 0\, \mod \mathbb{Z}$, denote by $\rho:\Omega^{1}_u\rightarrow\Omega^{0}_u$ the composite of the residue map $\mathrm{Res}:\Omega^{1}_0\rightarrow \mathbb{C}$ and the natural map $\mathbb{C} \rightarrow\Omega^{0}_0 =\mathbb{C} [z,z^{-1}]$. By definition, a {\it classical} bilinear map between tensor density modules over the circle is any linear combination of the operators $B^{\delta_1,\delta_2}_{u_1,u_2}$, $P^{\delta_1,\delta_2}_{u_1,u_2}$ and those obtained by composition with $\d$ and $\rho$. An example of a classical operator is $\rho\circ P: \Omega^{\delta}_{u_1}\times \Omega^{1-\delta}_{u_2} \rightarrow \Omega^{0}_{u_1+u_2}$. Of course, the Grozman operator provides a family of non-classical operators $G_{u,v}: \Omega^{-2/3}_{u}\times \Omega^{-2/3}_{v} \rightarrow \Omega^{5/3}_{u+v}$. A {\it trivial operator} is a scalar multiple of the bilinear map $\Omega^1_0\times\Omega^1_0 \rightarrow \Omega^0_0, (\alpha,\beta)\mapsto\mathrm{Res}(\alpha)\mathrm{Res}(\beta)$.
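The equivariance of Grozman's operator stated in Section \ref{sect_0.1} can be verified symbolically: with the action $h\mapsto g h'+\delta g' h$ of $\xi=g\,\frac{d}{dz}$ on $\Omega^\delta$, one checks that $\xi.G(f_1,f_2)=G(\xi.f_1,f_2)+G(f_1,\xi.f_2)$ holds with input weight $-2/3$ and output weight $5/3$. A minimal sketch (sympy assumed available; not part of the paper):

```python
import sympy as sp

z = sp.symbols('z')
f1, f2, g = (sp.Function(s)(z) for s in ('f1', 'f2', 'g'))
d = lambda h, k=1: sp.diff(h, z, k)

def rho(delta, h):
    # action of xi = g d/dz on a density h (dz)^delta
    return g * d(h) + delta * d(g) * h

def G(a, b):
    # Grozman's operator on Omega^{-2/3} x Omega^{-2/3}
    return 2 * (d(a, 3) * b - d(b, 3) * a) + 3 * (d(a, 2) * d(b) - d(a) * d(b, 2))

lhs = rho(sp.Rational(5, 3), G(f1, f2))
rhs = G(rho(sp.Rational(-2, 3), f1), f2) + G(f1, rho(sp.Rational(-2, 3), f2))
assert sp.simplify(sp.expand(lhs - rhs)) == 0
```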
There is also another non-classical $\mathbf{W}$-equivariant operator $\Theta_{\infty}:\Omega^1_0\times \Omega^1_0\rightarrow \Omega^0_0$ which satisfies: $\d \Theta_{\infty}(\alpha,\beta)=\mathrm{Res}(\alpha)\beta-\mathrm{Res}(\beta)\alpha$ \noindent for any $\alpha,\beta\in \Omega^1_0$. Indeed $\Theta_{\infty}$ is unique modulo a trivial operator. Our result is the following: \bigskip {\bf Theorem:} {\it (restricted version) Let $X$ be the circle. Any $\mathbf{W}$-equi\-va\-riant bilinear map between tensor density modules is either classical, or it is a scalar multiple of $G_{u,v}$ or of $\Theta_{\infty}$ (modulo a trivial operator).} \bigskip In the paper, a more general version, which also involves deformations of tensor density modules, is proved. Set $L_0=z\dfrac{\d}{\d z}$. For a $\mathbf{W}$-module $M$ and $s\in \mathbb{C}$, set $M_s=\mathrm{Ker} (L_0-s)$. Let $\mathcal{S}$ be the class of all $\mathbf{W}$-modules $M$ which satisfy the following condition: there exists $u\in \mathbb{C}/\mathbb{Z}$ such that \centerline{$M=\oplus_{s\in u}\,M_s$ and $\mathrm{dim} M_s=1$ for all $s\in u$.} \noindent The $\mathbb{Z}$-coset $u$ is called the {\it support of $M$}, and it is denoted by $\mathrm{Supp}\,\,M$. It turns out that all modules of the class $\mathcal{S}$ have been classified by Kaplansky and Santharoubane \cite{KS} and, except for deformations of $\Omega^0_0$ and $\Omega^1_0$, all modules of the class $\mathcal{S}$ are tensor density modules. Our full result is the classification of all $\mathbf{W}$-equi\-va\-riant bilinear maps between modules of the class $\mathcal{S}$. \subsection{About the proofs:}\label{sect_0.3} In order to describe the proof and the organization of the paper, it is necessary to introduce the notion of germs of bilinear maps. For any three vector spaces $M,\,N$ and $P$, denote by $\mathbf{B}(M\times N,P)$ the space of bilinear maps $\pi:M\times N\rightarrow P$.
Assume now that $M,\,N$ and $P$ are $\mathbf{W}$-modules of the class $\mathcal{S}$. For $x\in{\bf R}$, set $M_{\geq x}=\oplus_{\Re s\geq x}\, M_s$ and $N_{\geq x}=\oplus_{\Re s\geq x}\, N_s$. By definition, a {\it germ} of bilinear map from $M\times N$ to $P$ is an element of the space $\mathcal{G}(M\times N,P):=\lim\limits_{x\rightarrow +\infty} \,\mathbf{B}(M_{\geq x}\times N_{\geq x},P)$. It turns out that $\mathcal{G}(M\times N,P)$ is a $\mathbf{W}$-module. Denote by $\mathbf{B}_{\mathbf{W}}(M\times N,P)$ the space of $\mathbf{W}$-equivariant bilinear maps $\pi:M\times N\rightarrow P$, by $\mathbf{B}^0_{\mathbf{W}}(M\times N,P)$ the subspace of all $\pi\in \mathbf{B}_{\mathbf{W}}(M\times N,P)$ whose germ is zero and by $\mathcal{G}_{\mathbf{W}}(M\times N,P)$ the space of $\mathbf{W}$-equivariant germs of bilinear maps from $M\times N$ to $P$. There is a short exact sequence: \centerline{$0\rightarrow \mathbf{B}^0_{\mathbf{W}}(M\times N,P) \rightarrow \mathbf{B}_{\mathbf{W}}(M\times N,P) \rightarrow \mathcal{G}_{\mathbf{W}}(M\times N,P)$.} \noindent The paper contains three parts: Part 1 determines $\mathbf{B}^0_{\mathbf{W}}(M\times N,P)$ (see Theorem 1), Part 2 determines $\mathcal{G}_{\mathbf{W}}(M\times N,P)$ (see Theorem 2), and Part 3 determines the image of the map \\ \centerline{$\mathbf{B}_{\mathbf{W}}(M\times N,P) \rightarrow \mathcal{G}_{\mathbf{W}}(M\times N,P)$,} \noindent see Theorem 3. Part 1 is discussed in Section \ref{sect_Thm1}. The map $\Theta_{\infty}$ is an example of a degenerate map. Part 2 is the main difficulty of the paper. One checks that $\mathrm{dim} \mathcal{G}_{\mathbf{W}}(M\times N,P)\leq 2$. So it is enough to determine when $\mathcal{G}_{\mathbf{W}}(M\times N,P)$ is non-zero and when $\mathrm{dim} \mathcal{G}_{\mathbf{W}}(M\times N,P)= 2$. We will now explain our approach. The {\it degree} of the modules of the class $\mathcal{S}$ is a multivalued function defined as follows.
If $M=\Omega^{\delta}_s$ for some $\delta\neq 0$ or $1$, set $\deg M=\delta$. Otherwise, set $\deg M=\{0,1\}$. Next, let $M, \, N$ and $P\in\mathcal{S}$ with $\delta_1\in\deg M$, $\delta_2\in \deg N$ and $\gamma\in \deg P$. We can assume that $\mathrm{Supp}\,\,P=\mathrm{Supp}\,\,M+\mathrm{Supp}\,\,N$, since otherwise $\mathcal{G}_{\mathbf{W}}(M\times N,P)$ would be obviously zero. We introduce a $6$ by $6$ matrix $\mathbf{M}=(m_{i,j}(\delta_1,\delta_2,\gamma,x,y))_{1\leq i,j\leq 6}$ whose entries are quadratic polynomials in the five variables $\delta_1,\delta_2,\gamma,x,y$ and which satisfies the following property: \centerline{$\mathrm{det} \mathbf{M}=0$ for all $x, y$ if $\mathcal{G}_{\mathbf{W}}(M\times N,P)\neq 0$.} Set $\mathrm{det} \mathbf{M}=\sum_{i,j}\,p_{i,j}(\delta_1,\delta_2,\gamma) x^iy^j$ and let $\mathfrak{Z}$ be the common set of zeroes of all polynomials $p_{i,j}$. Half of the entries of $\mathbf{M}$ are zero and only $16$ of the $720$ diagonals of $\mathbf{M}$ give a non-zero contribution to $\mathrm{det}\mathbf{M}$. However, a computation by hand looks too complicated, because each non-zero entry of $\mathbf{M}$ is a linear combination of $9$ or $10$ distinct monomials. The computation of the polynomials $p_{i,j}$ has been done with MAPLE. As expected, $p_{1,3}$ and $p_{3,1}$ are degree eight polynomials. It turns out that each of them is a product of $6$ degree one factors and one quadratic factor. Indeed, four degree-one factors are obvious, and the rest of the factorization looks miraculous. Moreover the two (suitably normalized) quadratic factors differ by a linear term. It follows that the common zero set of $p_{1,3}$ and $p_{3,1}$ is a union of affine planes, affine lines and some planar quadrics. This allows us to solve the equations $p_{i,j}=0$ explicitly. Since only the polynomials $p_{1,3}, p_{3,1}$ and $p_{2,2}$ are needed, the other polynomials $p_{i,j}$ are listed in Appendix A.
It turns out that $\mathfrak{Z}$ decomposes into four planes, eight lines and four points. Using an additional trick, we determine when $\mathcal{G}_{\mathbf{W}}(M\times N,P)\neq 0$, and when $\mathrm{dim} \mathcal{G}_{\mathbf{W}}(M\times N,P)=2$. Although its proof is the main difficulty of the paper, the statement of Theorem 2 is very simple. Indeed $\mathcal{G}_{\mathbf{W}}(M\times N,P)$ is non-zero exactly when $(\delta_1,\delta_2,\gamma)$ belongs to an explicit algebraic subset $\mathfrak{z}$ of $\mathfrak{Z}$ consisting of two planes, six lines and five points. Moreover, it has dimension two iff $\{\delta_1,\delta_2,\gamma\} \subset\{0,1\}$. Theorem 3 determines which germs in $\mathcal{G}_{\mathbf{W}}(M\times N,P)$ can be lifted to a $\mathbf{W}$-equivariant bilinear map. Each particular case is easy, but the list is very long. Therefore Theorem 3 has been split into Theorem 3.1 and Theorem 3.2, corresponding to the cases where $\mathcal{G}_{\mathbf{W}}(M\times N,P)$ has dimension one or two. It should be noted that all $\mathbf{W}$-modules of the class $\mathcal{S}$ are indecomposable except one, namely $\overline A\oplus \mathbb{C}$, where $\overline A=\mathbb{C}[z,z^{-1}]/\mathbb{C}$. In most statements about bilinear maps $\pi:M\times N\rightarrow P$, we assume that $M$, $N$ and $P$ are indecomposable. Indeed the case where some modules are decomposable follows easily. The indecomposability hypothesis removes many less interesting cases. This is helpful since some statements already contain many particular cases, e.g. Theorem 3.1 contains 16 of them. \begin{ack} We would like to thank our colleague J\'er\^ome Germoni for his computation of the determinant $\mathrm{det} \mathbf{M}$ with the aid of MAPLE. \end{ack} \section{The Kaplansky-Santharoubane Theorem}\label{KS_Theorem} The {\it Witt algebra} $\mathbf{W}$ is the Lie algebra of derivations of the Laurent polynomial ring $A =\mathbb{C}[z,z^{-1}]$.
Clearly the elements $L_n=z^{n+1}\dfrac{d}{dz}$, where $n$ runs over $\mathbb{Z}$, form a basis of $\mathbf{W}$ and we have \noindent\centerline{$ [L_m, L_n]=(n-m)L_{m+n}$.} \noindent Throughout this paper, $\mathfrak{sl}(2)$ refers to its subalgebra \noindent\centerline{$\mathbb{C} L_{-1}\oplus\mathbb{C} L_0\oplus\mathbb{C} L_1$.} \subsection{Statement of the theorem}\label{sect_KS} For a $\mathbf{W}$-module $M$, set $M_z=\{m \in M \vert L_0.m=zm\}$ for any $z \in \mathbb{C}$ and define its {\it support} as the set $\mathrm{Supp}\, M=\{z \in \mathbb{C} \vert M_z\neq 0\}$. Let $\mathcal{S}$ be the class of all $\mathbf{W}$-modules $M$ such that (i) $M=\oplus_{z \in \mathbb{C}} M_z$, (ii) $\mathrm{Supp}\, M$ is exactly one $\mathbb{Z}$-coset, and (iii) $\mathrm{dim} M_z=1$ for all $z \in \mathrm{Supp}\, (M)$. \noindent Here are three families of modules of the class $\mathcal{S}$: \begin{enumerate} \item The family of {\it tensor density modules} $\Omega_{u}^\delta$, where $(\delta,u)$ runs over $\mathbb{C}\times \mathbb{C}/\mathbb{Z}$. Here $\Omega_{u}^\delta$ is the $\mathbf{W}$-module with basis $(e_z^\delta)_{z\in u}$ and action given by the formula: \hskip3cm $ L_m.e_z^\delta=(m\delta+z)e_{z+m}^\delta.$ \item The {\it $A$-family} $(A_{a,b})_{(a,b)\in\mathbb{C}^2}$. Here $A_{a,b}$ is the $\mathbf{W}$-module with basis $(e_n^A)_{n\in\mathbb{Z}}$ and action given by the formula: \hskip3cm$ L_m.e_n^A=\begin{cases} (m+n)e_{m+n}^A \qquad & n \neq 0, \\ (am^2+bm)e_m^A \qquad &n=0. \end{cases} $ \item The {\it $B$-family} $(B_{a,b})_{(a,b)\in\mathbb{C}^2}$. Here $B_{a,b}$ is the $\mathbf{W}$-module with basis $(e_n^B)_{n\in\mathbb{Z}}$ and action given by the formula: \hskip3cm $L_m.e_n^B=\begin{cases} ne_{m+n}^B \qquad &n+m \neq 0, \\ (am^2+bm)e_0^B \qquad & n+m=0. \end{cases}$ \end{enumerate} Set $\overline{A}:=A/\mathbb{C}$.
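The action of the tensor density family above can be checked against the Witt relation $[L_m,L_n]=(n-m)L_{m+n}$ numerically; a minimal sketch over exact rationals (not part of the paper):

```python
from fractions import Fraction

def act(m, delta, vec):
    # L_m acting on a vector {weight z: coefficient} in Omega_u^delta,
    # via L_m.e_z = (m*delta + z) e_{z+m}
    out = {}
    for zz, c in vec.items():
        out[zz + m] = out.get(zz + m, 0) + c * (m * delta + zz)
    return out

def sub(u, v):
    keys = set(u) | set(v)
    return {k: u.get(k, 0) - v.get(k, 0) for k in keys}

delta = Fraction(-2, 3)
for m in range(-2, 3):
    for n in range(-2, 3):
        for z0 in [Fraction(1, 5), Fraction(2), Fraction(-7, 3)]:
            e = {z0: Fraction(1)}
            comm = sub(act(m, delta, act(n, delta, e)),
                       act(n, delta, act(m, delta, e)))
            expect = {k: (n - m) * v for k, v in act(m + n, delta, e).items()}
            assert {k: v for k, v in comm.items() if v} == \
                   {k: v for k, v in expect.items() if v}
```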
There are two exact sequences: \noindent\centerline{ $0 \longrightarrow \overline{A} \longrightarrow A_{a,b} {\longrightarrow} \mathbb{C} \longrightarrow 0$, and} \noindent\centerline{ $0 \longrightarrow \mathbb{C} { \longrightarrow} B_{a,b} \longrightarrow \overline{A} \longrightarrow 0$,} \noindent and we denote by $\mathrm{Res}: A_{a,b}\rightarrow\mathbb{C}$ the map defined by $\mathrm{Res}\, e_0^A=1$ and $\mathrm{Res}\, e_n^A=0$ if $n\neq 0$. These exact sequences do not split, except for $(a,b)=(0,0)$. Therefore the $A$-family is a deformation of $\Omega^1_0\simeq A_{0,1}$ and the $B$-family is a deformation of $\Omega^0_0\simeq B_{0,1}$. Except for the previous two isomorphisms and the obvious $A_{0,0}\cong B_{0,0} \cong \overline{A} \oplus \mathbb{C}$, there are some repetitions in the previous list due to the following isomorphisms: \begin{enumerate} \item the de Rham differential $d:\Omega_{u}^0 \rightarrow \Omega_{u}^1$, if $u\not\equiv 0$ mod $\mathbb{Z}$, \item $A_{\lambda a, \lambda b} \cong A_{a,b}$ and $B_{\lambda a, \lambda b} \cong B_{a,b}$ for $\lambda \in \mathbb{C}^\ast$. \end{enumerate} \noindent There are no other isomorphisms in the class $\mathcal{S}$ besides those previously indicated. From now on, we will consider the couples $(a,b)\neq (0,0)$ as projective coordinates, and the indecomposable modules in the $AB$-families are now parametrized by $\mathbb{P}^1$. Set $\infty=(0,1)$ and $\mathbb{A}^1=\mathbb{P}^1\setminus \infty$. Therefore the indecomposable $\mathbf{W}$-modules in the previous list, which are not tensor density modules, are the two $\mathbb{A}^1$-parametrized families $(A_\xi)_{\xi\in\mathbb{A}^1}$ and $(B_\xi)_{\xi\in\mathbb{A}^1}$, as in \cite{MP}. The classification of the $\mathbf{W}$-modules of the class $\mathcal{S}$ was given by I. Kaplansky and L. J.
Santharoubane \cite{KS}, \cite{K} (with a minor correction in \cite{MP} concerning the parametrization of the $AB$-families): \bigskip {\bf Kaplansky-Santharoubane Theorem.} {\it Let $M$ be a $\mathbf{W}$-module of the class $\mathcal{S}$. \begin{enumerate} \item If $M$ is irreducible, then there exists $(u, \delta) \in \mathbb{C}/\mathbb{Z} \times \mathbb{C}$, with $(u,\delta)\neq (0,0)$ and $(u,\delta)\neq (0,1)$, such that $M \simeq\Omega_{u}^\delta$. \item If $M$ is reducible and indecomposable, then $M$ is isomorphic to either $A_{\xi}$ or $B_{\xi}$ for some $\xi \in \mathbb{P}^1$. \item Otherwise, $M$ is isomorphic to $\overline{A}\oplus \mathbb{C}$. \end{enumerate} } \subsection{Degree of the modules in the class $\mathcal{S}$} It follows from the previous remark that one can define {\it the degree} $\deg M$ for any $M \in \mathcal{S}$ as follows: \begin{enumerate} \item $\deg M=\delta$ if $M \cong \Omega_u^\delta$ for some $\delta \in \mathbb{C} \setminus \{0,1\}$, and \item $\deg M=\{0,1\}$ otherwise. \end{enumerate} By definition, the degree is a multivalued function. We also define {\it a degree} of $M$ as a value $\delta \in \deg M$. Let $\mathcal{S}^\ast$ be the class of all pairs $(M,\delta)$, where $M\in \mathcal{S}$ and $\delta \in \deg M$. A pair $(M,\delta) \in \mathcal{S}^\ast$ will often be written simply as $M$, and we set $\deg M:=\delta$. So, the degree function is a single valued function on $\mathcal{S}^\ast$. Usually, we consider $\Omega_u^\delta$ as the element $(\Omega_u^\delta,\delta)$ of $\mathcal{S}^\ast$ for any $\delta$. For $M\in\mathcal{S}$, let $M^\ast$ be its restricted dual, namely $M^\ast=\oplus_{x \in \mathbb{C}} M_x^\ast$. By definition, the class $\mathcal{S}$ is stable under restricted duality and we have: \begin{lemma}\label{lemma_dual-density} $(\Omega_u^\delta)^\ast \cong \Omega_{-u}^{1-\delta}$ and $(A_{\xi})^\ast \cong B_{\xi}$. \end{lemma} In particular, it follows that $\deg M^\ast=1-\deg M$ for any $M\in \mathcal{S}$.
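The first isomorphism of Lemma \ref{lemma_dual-density} can be checked by a routine computation, which we spell out for convenience. Let $(f_{-z})_{z\in u}$ be the basis of $(\Omega_u^\delta)^\ast$ dual to $(e_z^\delta)_{z\in u}$, so that $f_{-z}$ has weight $-z$. Since $(L_m.f_{-z})(e_w^\delta)=-f_{-z}(L_m.e_w^\delta)$, only $w=z-m$ contributes, and we get \noindent\centerline{$L_m.f_{-z}=-(m\delta+z-m)\,f_{-z+m}=\big(m(1-\delta)+(-z)\big)\,f_{(-z)+m}$,} \noindent which is the defining formula of $\Omega_{-u}^{1-\delta}$.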
\section{Germs and bilinear maps} \subsection{On the terminology `$\mathfrak{g}$-equivariant'} Throughout the whole paper, we will use the following convention. Let $\mathfrak{g}$ be a Lie algebra and let $E$ be a $\mathfrak{g}$-module. When $E$ is a space of maps, a $\mathfrak{g}$-invariant element of $E$ will be called {\it $\mathfrak{g}$-equivariant}. We will use the same convention for spaces of germs of maps (see the definition below). When $\mathfrak{g}$ is the one-dimensional Lie algebra $\mathbb{C}.L$, we will use the terminology {\it $L$-invariant} and {\it $L$-equivariant} instead of $\mathfrak{g}$-invariant and $\mathfrak{g}$-equivariant. \subsection{Weight modules and the $\mathfrak{S}_3$-symmetry} A {\it $\mathbb{C}$-graded vector space} is a vector space $M$ endowed with a decomposition $M=\oplus_{z\in\mathbb{C}} M_z$ such that $\mathrm{dim} M_z<\infty$ for all $z\in \mathbb{C}$. Denote by $\mathcal{H}$ the category of all $\mathbb{C}$-graded vector spaces. It is convenient to denote by $L_0$ the degree operator, which acts as $z$ on $M_z$. Given $M,N\in{\cal H}$, we denote by ${\mathrm {Hom}}_{L_0}(M,N)$ the space of $L_0$-equivariant linear maps $\phi:M\rightarrow N$. Equivalently ${\mathrm {Hom}}_{L_0}(M,N)$ is the space of maps in the category ${\cal H}$. By definition, a {\it Lie $L_0$-algebra} is a pair $(\mathfrak{g},L_0)$, where $\mathfrak{g}=\oplus_{z\in\mathbb{C}}\,\mathfrak{g}_z$ is a $\mathbb{C}$-graded Lie algebra and $L_0$ is an element of $\mathfrak{g}_0$ such that $\mathrm{ad}(L_0)$ acts as the degree operator. A {\it weight} $\mathfrak{g}$-module is a $\mathbb{C}$-graded vector space $M$ endowed with a structure of $\mathfrak{g}$-module. Of course it is required that $L_0$ acts as the degree operator on $M$ and therefore we have $\mathfrak{g}_y.M_z\subset M_{y+z}$ for all $y,\,z\in\mathbb{C}$. Let $\mathcal{H}_{\mathfrak{g}}$ be the category of weight $\mathfrak{g}$-modules.
For $M$ and $N$ in $\mathcal{H}_{\mathfrak{g}}$, denote by ${\mathrm {Hom}}_{\mathfrak{g}}(M,N)$ the space of $\mathfrak{g}$-equivariant linear maps from $M$ to $N$. Given $M,\,N$ and $P$ in $\mathcal{H}$, denote by $\mathbf{B}(M\times N,P)$ the space of all bilinear maps $\pi:M\times N\rightarrow P$ and by $\mathbf{B}_{L_0}(M\times N,P)$ the subspace of $L_0$-equivariant bilinear maps. Similarly if $M,\,N$ and $P$ are in $\mathcal{H}_{\mathfrak{g}}$, denote by $\mathbf{B}_{\mathfrak{g}}(M\times N,P)$ the space of $\mathfrak{g}$-equivariant bilinear maps. For $P\in\mathcal{H}$, denote by $P^*$ its restricted dual. By definition, we have $P^*=\oplus_{z\in \mathbb{C}}\, P^*_z$, where $P^*_z=(P_{-z})^*$. \begin{lemma}\label{lemma_symmetry-bilinear} Let $M,\,N$ and $P$ be in ${\mathcal{H}}_{\mathfrak{g}}$. We have: \noindent \centerline{$\mathbf{B}_{\mathfrak{g}}(M \times N, P^\ast) \simeq \mathbf{B}_{\mathfrak{g}}(M \times P, N^\ast)$.} \end{lemma} \begin{proof} The lemma follows easily from the fact that \centerline{$\mathbf{B}_{L_0}(M\times N,P^*)=\prod_{u+v+w=0}\,M^*_u \otimes N^*_v\otimes P^*_w$.} \end{proof} It follows that $\mathbf{B}_{\mathfrak{g}}(M\times N,P^\ast)$ is fully symmetric in $M,N$ and $P$. This fact will be referred to as the {\it $\mathfrak{S}_3$-symmetry}. The obvious symmetry $\mathbf{B}_{\mathfrak{g}}(M\times N,P)\simeq \mathbf{B}_{\mathfrak{g}}(N\times M,P)$ will be called the {\it $\mathfrak{S}_2$-symmetry}. \subsection{Definition of germs} For $M\in\mathcal{H}$ and $x\in \mathbb{R}$, set $M_{\geq x}=\oplus_{\Re z\geq x}\,M_z$ and $M_{\leq x}=\oplus_{\Re z\leq x}\,M_z$, where $\Re z$ denotes the real part of $z$. Given another object $N\in\mathcal{H}$, let ${\mathrm {Hom}}^0(M,N)$ be the space of all linear maps $\phi:M\rightarrow N$ such that $\phi(M_{\geq x})=0$ for some $x\in\mathbb{R}$. Set $\mathcal{G}(M,N)={\mathrm {Hom}}(M,N)/{\mathrm {Hom}}^0(M,N)$.
The image in $\mathcal{G}(M,N)$ of some $\phi\in {\mathrm {Hom}}(M,N)$, which is denoted by $\mathcal{G}(\phi)$, is called its {\it germ}. The space $\mathcal{G}(M,N)$ is called the {\it space of germs of maps} from $M$ to $N$. Let $\mathfrak{g}$ be a Lie $L_0$-algebra and let $M,\,N\in{\mathcal{H}}_{\mathfrak{g}}$. It is clear that ${\mathrm {Hom}}^0(M,N)$ is a $\mathfrak{g}$-submodule of ${\mathrm {Hom}}(M,N)$ and thus $\mathfrak{g}$ acts on $\mathcal{G}(M,N)$. Denote by $\mathcal{G}_{\mathfrak{g}}(M,N)$ the space of $\mathfrak{g}$-equivariant germs. We will often use the following obvious fact: any $\psi\in\mathcal{G}_{L_0}(M,N)$ is the germ of an $L_0$-equivariant map $\phi:M\rightarrow N$, but in general a $\mathfrak{g}$-equivariant germ $\psi$ is not the germ of a $\mathfrak{g}$-equivariant map $\phi$. Let $M,\,N\in\mathcal{H}$. A linear map $\phi:M\rightarrow N$ is called {\it continuous} if for any $x\in\mathbb{R}$ there exists $y\in\mathbb{R}$ such that $\phi(M_{\geq y})\subset N_{\geq x}$. The germ of a continuous map $\phi$ is called a {\it continuous germ} of a map. It is not possible to compose arbitrary germs of maps. However, let $\phi,\psi$ be two morphisms of $\mathcal{H}$ such that the composition $\psi\circ\phi$ is defined. It is easy to show that $\mathcal{G}(\psi\circ\phi)$ depends only on $\mathcal{G}(\phi)$ and $\mathcal{G}(\psi)$ whenever $\phi$ is continuous. Thus, it is possible to compose continuous germs. Since the $L_0$-equivariant germs of maps are continuous, the $\mathfrak{g}$-equivariant germs can be composed. Therefore we can define the category $\mathcal{G}(\mathcal{H}_\mathfrak{g})$ of {\it germs of weight $\mathfrak{g}$-modules} as follows. Its objects are weight $\mathfrak{g}$-modules, and for $M,\,N\in\mathcal{H}_\mathfrak{g}$, the space of $\mathcal{G}(\mathcal{H}_\mathfrak{g})$-morphisms from $M$ to $N$ is $\mathcal{G}_{\mathfrak{g}}(M,N)$.
Viewed as an object of the category $\mathcal{G}(\mathcal{H}_\mathfrak{g})$, an object $M\in\mathcal{H}_\mathfrak{g}$ is called a {\it germ of a weight $\mathfrak{g}$-module} and it is denoted by $\mathcal{G}_{\mathfrak{g}}(M)$. When $\mathfrak{g}_{\geq 0}$ and $\mathfrak{g}_{\leq 0}$ are finitely generated as Lie algebras, there is a concrete characterization of the $\mathfrak{g}$-equivariant germs of maps. Indeed let $M,N\in\mathcal{H}_{\mathfrak{g}}$ and let $\phi:M\rightarrow N$ be an $L_0$-equivariant map. Then $\mathcal{G}(\phi)$ is $\mathfrak{g}$-equivariant iff: (i) the restriction $\phi:M_{\geq x}\rightarrow N_{\geq x}$ is $\mathfrak{g}_{\geq 0}$-equivariant, and (ii) the induced map $\phi:M/M_{\leq x}\rightarrow N/N_{\leq x}$ is $\mathfrak{g}_{\leq 0}$-equivariant, \noindent for any $x>>0$. \subsection{Germs of modules of the class $\mathcal{S}$} It is easy to compute the germs of the $\mathbf{W}$-modules of the class $\mathcal{S}$. \begin{lemma}\label{W-germs} For any $\xi_1,\xi_2\in \mathbb{P}^1$, we have $\mathcal{G}_{\mathbf{W}}(A_{\xi_1})\simeq\mathcal{G}_{\mathbf{W}}(B_{\xi_2})$. Thus for any $M\in\mathcal{S}$, we have ${\mathcal{G}}_{\mathbf{W}}(M)\simeq{\mathcal{G}}_{\mathbf{W}}(\Omega^\delta_{u})$ for some $\delta\in\mathbb{C}$ and $u\in\mathbb{C}/\mathbb{Z}$. \end{lemma} The proof of the lemma follows easily from the definition. Recall that $\mathfrak{sl}(2)$ is the Lie subalgebra $\mathbb{C} L_{-1}\oplus \mathbb{C} L_{0}\oplus \mathbb{C} L_{1}$ of $\mathbf{W}$. In what follows, it is useful to compare $\mathfrak{sl}(2)$-germs and $\mathbf{W}$-germs. \begin{lemma} \label{sl(2)-germs} Let $\delta\in\mathbb{C}$ and $u\in\mathbb{C}/\mathbb{Z}$. (i) We have $\mathcal{G}_{\mathfrak{sl}(2)}(\Omega^{\delta}_u)\simeq \mathcal{G}_{\mathfrak{sl}(2)}(\Omega^{1-\delta}_u)$. (ii) If $\mathcal{G}_{\mathbf{W}_{\geq -1}}(\Omega^{\delta}_u)\simeq \mathcal{G}_{\mathbf{W}_{\geq -1}}(\Omega^{\gamma}_u)$ for some $\delta\neq \gamma$, then $\{\delta,\gamma\}=\{0,1\}$.
\end{lemma} \begin{proof} {\it Proof of the first assertion:} Choose a function $f:\mathbb{C}\rightarrow\mathbb{C}^*$ such that $f(z)=zf(z-1)$ whenever $\Re z>1$. Define $\psi:\Omega^{\delta}_u\rightarrow\Omega^{1-\delta}_u$ by the formula: \noindent\centerline{$\psi(e^\delta_z)=f(z-\delta)/f(z+\delta-1)\, e^{1-\delta}_z$,} \noindent for any $z\in u$. It is easy to check that $L_{\pm1}.\psi(e^\delta_z)=\psi(L_{\pm1}.e^\delta_z)$ whenever $z\in u$ and $\Re (z\pm\delta)>1$. Therefore the germs of $L_{\pm1}.\psi$ are zero, which means that $\mathcal{G}(\psi)$ is a $\mathfrak{sl}(2)$-equivariant isomorphism. {\it Proof of the second assertion:} Assume that the $\mathbf{W}_{\geq -1}$-germs of $\Omega^{\delta}_u$ and $\Omega^{\gamma}_u$ are isomorphic. Then there exists an $L_0$-equivariant isomorphism $\psi:\Omega^{\delta}_u\rightarrow\Omega^{\gamma}_u$ whose germ is $\mathbf{W}_{\geq -1}$-equivariant. It follows that $\psi(L_1^2. e^\delta_z)=L_1^2.\psi(e^\delta_z)$ and $\psi(L_2. e^\delta_z)=L_2.\psi(e^\delta_z)$, \noindent for any $z\in u$ with $\Re z>>0$. Set $\psi(e^\delta_{z})=ae^{\gamma}_{z}$ and $\psi(e^\delta_{z+2})=be^{\gamma}_{z+2}$. It follows that: $(z+\delta)(z+1+\delta)b =(z+\gamma)(z+1+\gamma)a$, and $(z+2\delta) b=(z+2\gamma)a$. Therefore we get: $(z+\delta)(z+1+\delta)(z+2\gamma)= (z+\gamma)(z+1+\gamma)(z+2\delta)$. \noindent Since this identity holds for any $z\in u$ with $\Re z>>0$, it is valid for any $z$. Comparing the coefficients of $z$ on both sides gives $(\delta-\gamma)(\delta+\gamma-1)=0$, and comparing the constant terms gives $\delta\gamma(\delta-\gamma)=0$. Since $\delta\neq\gamma$, we get $\delta+\gamma=1$ and $\delta\gamma=0$, hence $\{\delta,\gamma\}=\{0,1\}$. \end{proof} It follows from Lemma \ref{sl(2)-germs}(ii) that the degree of modules of the class $\mathcal{S}$ is indeed an invariant of their $\mathbf{W}$-germs. \subsection{Germs of bilinear maps} For $M,N$ and $P\in\mathcal{H}$, recall that $\mathbf{B}(M\times N,P)$ denotes the space of bilinear maps from $M\times N$ to $P$. Also denote by $\mathbf{B}^0(M\times N,P)$ the space of all $\pi\in \mathbf{B}(M\times N,P)$ such that $\pi(M_{\geq x}\times N_{\geq x})=0$ for any $x>>0$.
Set \centerline{$\mathcal{G}(M\times N,P)=\mathbf{B}(M\times N,P)/\mathbf{B}^0(M\times N,P)$.} The image of a bilinear map $\pi\in \mathbf{B}(M\times N,P)$ in $\mathcal{G}(M\times N,P)$ is called its {\it germ} and it is denoted by ${\mathcal{G}}(\pi)$. The set $\mathcal{G}(M\times N,P)$ is called the {\it space of germs of bilinear maps} from $M\times N$ to $P$. Let $\mathfrak{g}$ be a Lie $L_0$-algebra and let $M,\,N$ and $P$ be weight $\mathfrak{g}$-modules. As before, $\mathcal{G}(M\times N,P)$ is a $\mathfrak{g}$-module in a natural way and we denote by $\mathcal{G}_{\mathfrak{g}}(M\times N,P)$ the space of $\mathfrak{g}$-equivariant germs of bilinear maps. As before, the composition of $\mathfrak{g}$-equivariant germs of bilinear maps with $\mathfrak{g}$-equivariant germs of linear maps is well defined. Thus we obtain: \begin{lemma}\label{lemma_germ-gen} The space $\mathcal{G}_{\mathfrak{g}}(M\times N,P)$ depends functorially on the germs of the weight $\mathfrak{g}$-modules $M$, $N$ and $P$. \end{lemma} Let $M,N,P$ and $Q$ be in $\mathcal{S}$ and let $\phi\in {\mathrm {Hom}}_{L_0}(P,Q)$. Assume that $\mathcal{G}(\phi)$ is a $\mathfrak{sl}(2)$-equivariant isomorphism. The composition with $\phi$ induces a map \noindent\centerline{$\mathcal{G}(\phi)_*:\mathcal{G}_{_{\mathfrak{sl}(2)}}(M\times N,P)\rightarrow \mathcal{G}_{_{\mathfrak{sl}(2)}}(M\times N,Q)$.} \begin{lemma}\label{zero_intersection} Assume that $\mathcal{G}(\phi)$ is not $\mathbf{W}_{\geq -1}$-equivariant. Then the two subspaces $\mathcal{G}_{_\mathbf{W}}(M\times N,Q)$ and $\mathcal{G}(\phi)_*\mathcal{G}_{\mathbf{W}}(M\times N,P)$ of $\mathcal{G}_{_{\mathfrak{sl}(2)}}(M\times N,Q)$ have a zero intersection. \end{lemma} \begin{proof} The lemma is equivalent to the following statement: for any $L_0$-equivariant bilinear map $\pi:M\times N\rightarrow P$ whose germ is $\mathbf{W}$-equivariant and non-zero, $\mathcal{G}(\phi\circ\pi)$ is not $\mathbf{W}$-equivariant. So, we prove this statement.
Set $\mu=L_2.(\phi\circ\pi)$. We claim that $\mathcal{G}(\mu)\neq 0$, i.e. for any $r\in\mathbb{R}$ there are scalars $x,y$ with $\Re x>r$ and $\Re y>r$ such that $\mu(M_x\times N_y)\neq 0$. Indeed, if $r$ is big enough we have (i) the restriction $\pi_{\geq r}: M_{\geq r}\times N_{\geq r}\rightarrow P_{\geq 2r}$ of $\pi$ is ${\mathbf{W}}_{\geq 0}$-equivariant, and (ii) $L_1.P_z=P_{z+1}$ for any $z$ with $\Re z>2r$. \noindent Since $\mathcal{G}(\pi)\neq 0$, there exists $(x_0,y_0)\in\mathrm{Supp}\, M\times\mathrm{Supp}\, N$ with $\Re x_0>r,\, \Re y_0>r$ and $\pi(M_{x_0}\times N_{y_0})=P_{z_0}$, where $z_0=x_0+y_0$. By hypothesis $\mathcal{G}(\phi)$ is $\mathfrak{sl}(2)$-equivariant but not $\mathbf{W}_{\geq -1}$-equivariant. Since $\mathbf{W}_{\geq -1}$ is generated by $\mathfrak{sl}(2)$ and $L_2$, we have $\mathcal{G}(L_2.\phi)\neq 0$. Hence there exists $k\in\mathbb{Z}_{\geq 0}$ such that $(L_2.\phi)(P_{k+z_0})\neq 0$. By assumption, the linear span of $\cup_{m,n\geq 0}\,\pi(M_{x_0+m}\times N_{y_0+n})$ is a $\mathbf{W}_{\geq 0}$-module and the $\mathbf{W}_{\geq 0}$-module $P_{\geq \Re z_0}$ is generated by $P_{z_0}=\pi(M_{x_0}\times N_{y_0})$. Thus there are $m,n\in\mathbb{Z}_{\geq 0}$ with $m+n=k$ such that $\pi(M_{x_0+m}\times N_{y_0+n})=P_{z_0+k}$. Since $L_2.(\phi\circ\pi_{\geq r}) =(L_2.\phi)\circ\pi_{\geq r}$, it follows that $\mu(M_{x_0+m}\times N_{y_0+n})\neq 0$, which proves the claim. \end{proof} \section{Degenerate and non-degenerate \\ bilinear maps}\label{sect_(non)deg-map} In this section, we define the notions of {\it degenerate} and {\it non-degenerate} bilinear maps and similar notions for germs of bilinear maps. We show that a $\mathbf{W}$-equivariant bilinear map $\pi$ between modules of the class $\mathcal{S}$ is degenerate if and only if $\mathcal{G}(\pi)=0$. Moreover, $\mathcal{G}(\pi)$ is non-degenerate if $\mathcal{G}(\pi)\neq 0$. Let $M,N$ and $P$ be in $\mathcal{H}$.
For $\pi\in \mathbf{B}(M\times N,P)$, the set $\mathrm{Supp}\,\,\pi=\{(x,y)\vert\,\pi(M_x\times N_y)\neq 0\}$ is called the {\it support} of $\pi$. The bilinear map $\pi$ is called {\it non-degenerate} if $\mathrm{Supp}\,\,\pi$ is Zariski dense in $\mathbb{C}^2$. Otherwise, it is called {\it degenerate}. Any germ $\tau \in \mathcal{G}(M\times N,P)$ is represented by a bilinear map $\pi \in \mathbf{B}(M\times N,P)$; let $\pi_{\geq x}$ denote its restriction to $M_{\geq x} \times N_{\geq x}$. The germ $\tau$ is called {\it non-degenerate} if $\pi_{\geq x}$ is non-degenerate for any $x>>0$. From now on, assume that $M$, $N$ and $P$ are $\mathbf{W}$-modules of the class $\mathcal{S}$. For $\pi\in \mathbf{B}_{\mathbf{W}}(M\times N,P)$, set $M_\pi=\{m\in M\vert\,\pi(m\times N_{\geq x})=0\, \text{for}\; x>>0\}$ and $N_\pi=\{n\in N\vert\,\pi(M_{\geq x}\times n)=0\, \text{for}\; x>>0\}$. It is clear that $M_\pi$ and $N_\pi$ are $\mathbf{W}$-submodules. \begin{lemma}\label{lemma5} Let $\pi\in \mathbf{B}_{\mathbf{W}}^0(M\times N, P)$. Then we have: \begin{enumerate} \item[(i)] $\pi(M_\pi\times N_\pi)\subset P^{\mathbf{W}}$ and therefore $\pi(M_\pi\times N_\pi)$ is isomorphic to 0 or $\mathbb{C}$. \item[(ii)] $M/M_\pi$ is isomorphic to 0 or $\mathbb{C}$. \item[(iii)] $N/N_\pi$ is isomorphic to 0 or $\mathbb{C}$. \end{enumerate} \end{lemma} \begin{proof} It follows from the explicit description of all modules $X\in\mathcal{S}$ (see Section \ref{KS_Theorem}) that \begin{enumerate} \item if $Y$ is a $\mathbf{W}$-submodule with $L_0.Y\neq 0$, then $X/Y$ is isomorphic to $\mathbb{C}$ or $0$, and \item if $x \in X$ satisfies $L_k.x=0$ for $k>>0$, then $x$ is $\mathbf{W}$-invariant. \end{enumerate} Since $\mathcal{G}(\pi)$ is zero, $M_\pi$ contains $M_{\geq x}$ for any $x>>0$. Therefore, $L_0.M_\pi\neq 0$ and Assertions (ii) and (iii) follow. Moreover for any $(m,n)\in M_\pi\times N_\pi$, we have $L_k.\pi(m,n)=\pi(L_k.m,n)+\pi(m,L_k.n)=0$ for $k>>0$.
Thus $\pi(m,n)$ is $\mathbf{W}$-invariant, which proves the first assertion. \end{proof} Let $\pi\in \mathbf{B}_{\mathbf{W}}(M\times N, P)$ with $\mathrm{Supp}\,\,\pi\subset \{(0,0)\}$. Obviously there are $\mathbf{W}$-equivariant maps $a:M\rightarrow \mathbb{C}$, $b:N\rightarrow \mathbb{C}$ and $c:\mathbb{C}\rightarrow P$ such that $\pi(m,n)=c(a(m)b(n))$. Since it comes from a bilinear map between trivial modules, such a bilinear map $\pi$ will be called {\it trivial}. Note that non-zero trivial maps only occur when $M$ and $N$ are in the $A$-family and $P$ is in the $B$-family. For a subset $Z$ of $\mathbb{C}^2$, denote by $\overline{Z}$ its Zariski closure. Also define the three lines $H$, $V$ and $D$ of $\mathbb{C}^2$ by: \noindent\centerline{$H=\mathbb{C} \times \{0\}$, $V=\{0\} \times \mathbb{C}$ and $D=\{(z,-z)\vert\, z\in \mathbb{C} \}$.} \begin{lemma}\label{lemma_support} Let $\pi\in \mathbf{B}_{\mathbf{W}}^0(M\times N, P)$ and set $S=\overline{\mathrm{Supp}\, \,\pi}$. Assume that $\pi$ is not trivial. Then we have: \begin{enumerate} \item[(i)] $S$ is a union of lines and $\mathrm{Supp}\,\,\pi\subset H\cup D\cup V$. \item[(ii)] $\pi(M_\pi\times N_\pi)\neq 0$ iff $D\subset S$. \item[(iii)] $M/M_\pi\neq 0$ iff $V\subset S$. \item[(iv)] $N/N_\pi\neq 0$ iff $H\subset S$. \end{enumerate} \end{lemma} \begin{proof} By Lemma \ref{lemma5}, we have $\pi(M_\pi\times N_\pi)=0$ or $\mathbb{C}$. Note that $\pi$ induces the two bilinear maps $\eta: M_\pi\times N_\pi\rightarrow \pi(M_\pi\times N_\pi)$ and $\theta: M\times N\rightarrow P/\pi(M_\pi\times N_\pi)$. {\it Step 1:} We claim that $\eta=0$ or the bilinear map $\eta$ has infinite rank. Assume otherwise. By Lemma \ref{lemma5}, the image of $\eta$ is $\mathbb{C}$. Since $\eta$ factors through finite dimensional modules, it follows that $M_\pi$ has a finite dimensional quotient. By Lemma \ref{lemma5} (ii), $M_\pi$ is infinite dimensional. Hence $M_\pi$ is reducible, which implies that $M=M_\pi$. Similarly, $N=N_\pi$.
It follows easily that $\pi$ is a trivial bilinear map, which contradicts the hypothesis. It follows that $\eta=0$ or $\overline{\mathrm{Supp}\,\, \eta}=D$. {\it Step 2:} We claim that $\overline{\mathrm{Supp}\,\,\pi}= \overline{\mathrm{Supp}\,\, \theta}\cup \overline{\mathrm{Supp}\,\, \eta}$. Since $\pi(M_\pi\times N_\pi)\simeq \mathbb{C}$ or $0$, we have: $\overline{\mathrm{Supp}\,\, \theta}\cup \overline{\mathrm{Supp}\,\, \eta} \subset \overline{\mathrm{Supp}\,\,\pi}\subset \overline{\mathrm{Supp}\,\, \theta}\cup D$. Therefore the claim follows from the previous step. {\it Step 3:} Using the short exact sequence \begin{align*} 0 \longrightarrow (M\otimes N)/(M_\pi\otimes N_\pi) \longrightarrow &\; M\otimes (N/N_\pi)\oplus (M/M_\pi)\otimes N \\ & \longrightarrow (M/M_\pi) \otimes (N/N_\pi) \longrightarrow 0, \end{align*} it follows that, up to the point $(0,0)$, the sets $\mathrm{Supp}\,\, \theta$ and $\big(\mathrm{Supp}\,\, M\times \mathrm{Supp}\,\,(N/N_\pi)\big) \cup \big(\mathrm{Supp}\,\,(M/M_\pi)\times\mathrm{Supp}\,\, N\big)$ coincide, which proves the lemma. \end{proof} \begin{lemma}\label{germ-deg} A bilinear map $\pi\in \mathbf{B}_{\mathbf{W}}(M\times N, P)$ is degenerate iff its germ is zero. Moreover, if $\mathcal{G}(\pi)\neq 0$, the germ $\mathcal{G}(\pi)$ is non-degenerate. \end{lemma} \begin{proof} Let $\pi \in \mathbf{B}_{\mathbf{W}}(M\times N, P)$. For $x\in\mathbb{R}$, denote by $\pi_{\geq x}$ the restriction of $\pi$ to $M_{\geq x}\times N_{\geq x}$. It is enough to prove the second assertion. {\it First Step:} We claim that if $(s,t)$ belongs to $\mathrm{Supp}\,\,\pi_{\geq 1}$, then $(s,t)+H$ or $(s,t)+V$ lies in $\overline{\mathrm{Supp}\,\, \pi_{\geq 1}}$. By hypothesis, $\Re (s+t)>0$ and it follows from the explicit description of modules of the class $\mathcal{S}$ that $L_k.P_{s+t}\neq 0$ for any $k>>0$.
So we get \noindent\centerline{$\pi(L_k.M_s\times N_t)\neq 0$ or $\pi(M_s\times L_k.N_t)\neq 0$ for $k>>0$.} Hence $(s+k,t)$ or $(s,t+k)$ is in $\mathrm{Supp}\,\,\pi_{\geq 1}$ for infinitely many $k>0$, and the claim follows. {\it Second Step:} Assume that $\mathcal{G}(\pi)$ is non-zero; we prove that it is non-degenerate. Let $x\geq 1$ be an arbitrary real number, and let $(s,t)\in\mathrm{Supp}\,\, M_{\geq x}\times\mathrm{Supp}\,\, N_{\geq x}$. By definition there exist two increasing sequences of integers $0\leq a_1<a_2\dots$ and $0 \leq b_1<b_2\dots$ such that $(s+a_k, t+b_k)$ belongs to $\mathrm{Supp}\,\,\pi_{\geq x}$ for all $k$. Since all lines $(s+a_k, t+b_k)+V$, $(s+a_k, t+b_k)+H$ are distinct, $\overline{\mathrm{Supp}\,\,\pi_{\geq x}}$ contains infinitely many lines, and therefore $\overline{\mathrm{Supp}\,\,\pi_{\geq x}}=\mathbb{C}^2$. \end{proof} \section{Examples of $\mathbf{W}$-equivariant bilinear maps}\label{sect_example} This section provides a list of $\mathbf{W}$-equivariant bilinear maps between modules of the class $\mathcal{S}$. The goal of this paper is to prove that this list generates all $\mathbf{W}$-equivariant bilinear maps between modules of the class $\mathcal{S}$. More precisely, if one allows the following operations: the $\mathfrak{S}_3$-symmetry, the composition with morphisms between modules in $\mathcal{S}$, and linear combinations, then one obtains all $\mathbf{W}$-equivariant bilinear maps between modules of the class $\mathcal{S}$. \subsection{The Poisson algebra $\mathcal{P}$ of symbols of twisted pseudo-differential operators} To be brief, we will not give the definition of the algebra $\mathcal{D}$ of twisted pseudo-differential operators on the circle, see e.g. \cite{IM}. Let us just say that the term {\it twisted} refers to the fact that complex powers of $z$ and $\dfrac{d}{dz}$ are allowed. As usual, $\mathcal{D}$ is an associative filtered algebra whose associated graded space ${\mathcal{P}}$ is a Poisson algebra. Indeed ${\mathcal{P}}$ is explicitly defined as follows.
As a vector space, ${\mathcal{P}}$ has basis the family $(z^s\partial^\delta)$ where $s$ and $\delta$ run over $\mathbb{C}$ (here $\partial$ stands for the symbol of $\dfrac{d}{dz}$). The commutative associative product on ${\mathcal{P}}$ is denoted by $.$ and the Lie bracket is denoted by $\{,\}$. These products are explicitly defined on the basis elements by \noindent\centerline{$(z^s\partial^\delta).(z^{s'}\partial^{\delta'})= (z^{s+s'}\partial^{\delta+\delta'})$, } \noindent\centerline{$\{(z^s\partial^\delta),(z^{s'}\partial^{\delta'})\} =(\delta s'-\delta' s)\,z^{s+s'-1}\partial^{\delta+\delta'-1}$.} \noindent It is clear that $\oplus_{n\in{\mathbb{Z}}}\,\mathbb{C}\, z^{n+1}\partial$ is a Lie subalgebra naturally isomorphic to $\mathbf{W}$. As a $\mathbf{W}$-module, there is a decomposition of $\mathcal{P}$ as \noindent\centerline{ $\mathcal{P}=\oplus_{(\delta,u)\in \mathbb{C}\times\mathbb{C}/\mathbb{Z}}\,\,\Omega_u^{\delta}$, } \noindent where $\Omega_u^{\delta}=\oplus_{s\in u}\,\mathbb{C}\,z^{s-\delta}\partial^{-\delta}$. We have \noindent\centerline{ $\Omega_u^{\delta_1}.\Omega_v^{\delta_2}\subset \Omega_{u+v}^{\delta_1+\delta_2}$, } \noindent\centerline{ $\{\Omega_u^{\delta_1},\Omega_v^{\delta_2}\}\subset \Omega_{u+v}^{\delta_1+\delta_2+1}$} \noindent for all $\delta_1,\delta_2\in\mathbb{C}$ and $u,v\in\mathbb{C}/\mathbb{Z}$. Therefore the Poisson structure induces two families of $\mathbf{W}$-equivariant bilinear maps: \noindent\centerline{ $P^{\delta_1,\delta_2}_{u,v}:\Omega_u^{\delta_1}\times \Omega_v^{\delta_2}\rightarrow \Omega_{u+v}^{\delta_1+\delta_2}, (m,n)\mapsto m.n$, } \noindent\centerline{ $B^{\delta_1,\delta_2}_{u,v}:\Omega_u^{\delta_1}\times \Omega_v^{\delta_2} \rightarrow \Omega_{u+v}^{\delta_1+\delta_2+1}, (m,n)\mapsto \{m,n\}$.} \noindent It is clear that all these maps are non-degenerate, except $B^{0,0}_{u,v}$. Indeed we have $B^{0,0}_{u,v}=0$, for all $u,v\in\mathbb{C}/\mathbb{Z}$.
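In coordinates, the components of these maps are easy to write down; we record the formulas for later reference. Identifying $e_s^{\delta}$ with $z^{s-\delta}\partial^{-\delta}$, the defining formulas of ${\mathcal{P}}$ give, for $s\in u$ and $t\in v$: \noindent\centerline{$P^{\delta_1,\delta_2}_{u,v}(e_s^{\delta_1},e_t^{\delta_2})=e_{s+t}^{\delta_1+\delta_2}$ \quad and \quad $B^{\delta_1,\delta_2}_{u,v}(e_s^{\delta_1},e_t^{\delta_2})=(\delta_2 s-\delta_1 t)\,e_{s+t}^{\delta_1+\delta_2+1}$.} \noindent In particular, the vanishing of $B^{0,0}_{u,v}$ is immediate from the coefficient $\delta_2 s-\delta_1 t$.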
\subsection{The extended Lie algebra $\mathcal{P}_{\xi}$} Recall Kac's construction of an extended Lie algebra \cite{Kac}. Start with a triple $(\mathfrak{g},\kappa,\delta)$, where $\mathfrak{g}$ is a Lie algebra, $\delta:\mathfrak{g}\rightarrow\mathfrak{g}, x\mapsto\delta.x$ is a derivation and $\kappa:\mathfrak{g}\times\mathfrak{g}\rightarrow\mathbb{C}$ is a symmetric $\mathfrak{g}$-equivariant and $\delta$-equivariant bilinear form. Then the extended Lie algebra is the vector space $\mathfrak{g}_{e}=\mathfrak{g}\oplus \mathbb{C}\,\delta\oplus \mathbb{C}\,c$ and its Lie bracket $[,]_e$ is defined by the following relations: $[x,y]_e=[x,y]+\kappa(x,\delta.y)c$, $[\delta,x]_e=\delta.x$, $[c,\mathfrak{g}_e]_e=0$, \noindent for any $x,y\in\mathfrak{g}$ and where $[x,y]$ is the Lie bracket in $\mathfrak{g}$. We will apply this construction to the Lie algebra ${\mathcal{P}}$. The residue map $\mathrm{Res}:{\cal P}\rightarrow {\mathbb{C}}$ is defined as follows: $\mathrm{Res}(\Omega_u^{\delta})=0$ for $(\delta,u)\neq (1,0)$ and the restriction of $\mathrm{Res}$ to $\Omega^1_0$ is the usual residue. Thus set $\kappa(x,y)=\mathrm{Res}\,xy$ for any $x$, $y\in\mathcal{P}$. Let $(a,b)$ be projective coordinates of $\xi\in\mathbb{P}^1$. Define the derivation $\delta_\xi$ of $\mathcal{P}$ by $\delta_\xi x=z^{b-a}\partial^{-a}\{z^{a-b}\partial^{a},x\}$ for any $x\in\mathcal{P}$. Informally, we have $\delta_\xi x=\{\log( z^{a-b}\partial^{a}),x\}$. It is easy to check that $\kappa$ is equivariant under $\mathrm{ad}(\mathcal{P})$ and under $\delta_\xi$. The two-cocycle $x,y\in\mathcal{P}\mapsto \mathrm{Res}(x\delta_\xi y)$ is the Khesin-Kravchenko cocycle \cite {KK}. The corresponding extended Lie algebra will be denoted by ${\mathcal{P}}_\xi$. Thus we have \noindent\centerline{ ${\mathcal{P}}_\xi={\mathcal{P}}\oplus\mathbb{C}\, \delta_\xi\oplus\mathbb{C}\, c$.} Since ${\mathcal{P}}_\xi$ is not a Poisson algebra, its Lie bracket will be denoted by $[\,,]$. 
Set ${\mathcal{P}}^+_\xi=[{\mathcal{P}}_\xi,{\mathcal{P}}_\xi]$ and ${\mathcal{P}}^{-}_\xi={\mathcal{P}}_\xi/Z({\mathcal{P}}_\xi)$, where $Z({\mathcal{P}}_\xi)$ is the center of ${\mathcal{P}}_\xi$. As before $\mathbf{W}$ is a Lie subalgebra of ${\mathcal{P}}_\xi$, and the $\mathbf{W}$-modules ${\mathcal{P}}^{\pm}_\xi$ decompose as follows \noindent\centerline{ $\mathcal{P}^{\pm}_\xi=\oplus_{(\delta,u)\in \mathbb{C}\times\mathbb{C}/\mathbb{Z}}\,\,\Omega_u^{\delta}(\xi,\pm)$, } \noindent where $\Omega_u^{\delta}(\xi,\pm)=\Omega_u^{\delta}$ for all $(\delta,u)\in \mathbb{C}\times\mathbb{C}/\mathbb{Z}$ except that $\Omega_0^{0}(\xi,-)\simeq A_\xi$ and $\Omega_0^{1}(\xi,+)\simeq B_\xi$. The Lie bracket of $\mathcal{P}_\xi$ induces a bilinear map \noindent\centerline{ $B(\xi):\mathcal{P}^{-}_\xi\times \mathcal{P}^{-}_\xi\rightarrow \mathcal{P}^{+}_\xi, \; (m \bmod Z(\mathcal{P}_\xi),\, n \bmod Z(\mathcal{P}_\xi))\mapsto [m,n]$. } \noindent As before, the components of $B(\xi)$ provide the following $\mathbf{W}$-equivariant bilinear maps \noindent\centerline{ $B^{\delta_1,\delta_2}_{u,v}(\xi):\Omega_u^{\delta_1}(\xi,-)\times \Omega_v^{\delta_2}(\xi,-) \rightarrow \Omega_{u+v}^{\delta_1+\delta_2+1}(\xi,+), (m,n)\mapsto [m,n]$.} \noindent It should be noted that if $\delta_1\delta_2(\delta_1+\delta_2)\neq 0$, then we have $B^{\delta_1,\delta_2}_{u,v}(\xi)=B^{\delta_1,\delta_2}_{u,v}$. \subsection{Other $\mathbf{W}$-equivariant bilinear maps.} {\it The Grozman operator:} Among the $\mathbf{W}$-equivariant bilinear maps between modules of the class $\mathcal{S}$, the most surprising is the Grozman operator.
It is the bilinear map $G_{u,v}:\Omega_{u}^{-2/3}\times \Omega_{v}^{-2/3}\rightarrow \Omega_{u+v}^{5/3}$, defined by the following formula \noindent \centerline{$G_{u,v}(e^{-2/3}_x,e^{-2/3}_y)=(x-y)(2x+y)(x+2y)e^{5/3}_{x+y}$.} \smallskip \noindent {\it The bilinear map $\Theta_{\infty}$:} Let $\xi\in\mathbb{P}^1$ with projective coordinates $(a,b)$. Define $\Theta_{\xi}: A_{a,b}\times A_{a,b}\rightarrow B_{a,b}$ by the following requirements: $\Theta_{\xi}(e^A_m,e^A_n)=0$ if $mn(m+n)\neq 0$ or if $m=n=0$; $\Theta_{\xi}(e^A_0,e^A_m)=-\Theta_{\xi}(e^A_m,e^A_0)=1/m\,e^B_m$ if $m\neq 0$; $\Theta_{\xi}(e^A_{-m},e^A_m)=1/m\,e^B_0$ if $m\neq 0$. It is easy to see that $a\Theta_{\xi}$ is identical to the bracket $-B^{0,0}_{0,0}(a,b)$. In particular, $\Theta_{\xi}$ is $\mathbf{W}$-equivariant (for $a=0$, this follows by extension of polynomial identities). So $\Theta_{\infty}$ is the only new bilinear map, since, for $\xi\neq \infty$, $\Theta_\xi$ is essentially the bracket of $\mathcal{P}_\xi$. \smallskip \noindent {\it The bilinear map $\eta(\xi_1,\xi_2,\xi_3)$:} Let $\xi_1,\xi_2,\xi_3$ be points in $\mathbb{P}^1$ which are not all equal, with projective coordinates $(a_1,b_1),\,(a_2,b_2),\,(a_3,b_3)$. Choose a non-zero triple $(x,y,z)$ such that $z(a_3,b_3)=x(a_1,b_1)+y(a_2,b_2)$. Recall that all $A_\xi$ have the same underlying vector space; therefore the map \noindent\centerline{$y\,\mathrm{Res}\times id +x\, id\times \mathrm{Res}: A_{a_1,b_1}\times A_{a_2,b_2} \rightarrow A_{a_3,b_3}$} \noindent is well defined. This defines a map, up to a scalar multiple, \noindent\centerline{ $\eta(\xi_1,\xi_2,\xi_3):A_{\xi_1}\times A_{\xi_2}\rightarrow A_{\xi_3}$,} \noindent which is clearly $\mathbf{W}$-equivariant. \smallskip \noindent {\it The obvious map $P^M$:} Also for each $M\in\mathcal{S}$ denote by $P^M$ the obvious map $((a+x),m)\in (\overline A\oplus\mathbb{C})\times M\mapsto xm\in M$.
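For the Grozman operator, the $\mathbf{W}$-equivariance can be verified by hand; we record the computation for convenience. Setting $g(x,y)=(x-y)(2x+y)(x+2y)$, the equivariance under $L_m$ amounts to the identity \noindent\centerline{$\big(x-\tfrac{2m}{3}\big)g(x+m,y)+\big(y-\tfrac{2m}{3}\big)g(x,y+m)=\big(x+y+\tfrac{5m}{3}\big)g(x,y)$,} \noindent which holds as an identity of polynomials in $x,y$ for each $m\in\mathbb{Z}$.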
\subsection{Primitive bilinear maps} Let $\mathcal{N}$ be the class of all non-zero $\mathbf{W}$-equivariant maps $\phi:M\rightarrow N$, where $M,N$ are non-isomorphic modules of the class $\mathcal{S}$. Up to conjugacy, there are only two possibilities: (i) $M$ is in the $A$-family, $N$ is in the $B$-family and $\phi$ is the morphism $M\twoheadrightarrow\mathbb{C}\hookrightarrow N$, or (ii) $M$ is in the $B$-family, $N$ is in the $A$-family and $\phi$ is the morphism $M\twoheadrightarrow\overline A\hookrightarrow N$. \noindent Here, the condition $M \not\simeq N$ means that $M$ and $N$ are not simultaneously isomorphic to $\overline A\oplus \mathbb{C}$. Let $M,M',N,N',P$ and $P'$ be in $\mathcal{S}$. Let $\phi:M'\rightarrow M$, $\psi:N'\rightarrow N$ and $\theta:P\rightarrow P'$ be $\mathbf{W}$-equivariant maps, and let $\pi\in \mathbf{B}_{\mathbf{W}}(M\times N, P)$ be a $\mathbf{W}$-equivariant bilinear map. Set $\pi'=\theta\circ\pi\circ(\phi\times\psi)$. If at least one of the three morphisms $\phi$, $\psi$ or $\theta$ is of the class $\mathcal{N}$, then $\pi'$ is called an {\it imprimitive form} of $\pi$. A $\mathbf{W}$-equivariant bilinear map between modules of the class $\mathcal{S}$ is called {\it primitive} if it is not a linear combination of imprimitive forms. The composition of any three composable morphisms of the class $\mathcal{N}$ is zero. It follows easily that any $\mathbf{W}$-equivariant bilinear map between modules of the class $\mathcal{S}$ is either primitive, or a linear combination of imprimitive bilinear forms. Thus the classification of all $\mathbf{W}$-equivariant bilinear maps between modules of the class $\mathcal{S}$ reduces to the classification of primitive ones. \begin{lemma} \label{primitivity} Let $M,N$ and $P$ be $\mathbf{W}$-modules of the class $\mathcal{S}$, and let $\pi\in\mathbf{B}_{\mathbf{W}}(M\times N,P)$.
Assume that one of the following conditions holds: (i) $M$ and $N$ are irreducible and $P$ is the linear span of $\pi(M\times N)$, or (ii) $N$ and $P$ are irreducible and the left kernel of $\pi$ is zero. Then $\pi$ is primitive. \end{lemma} This obvious lemma makes it easy to check that certain bilinear maps are primitive. Some of the bilinear forms defined in this section are not primitive. It will be proved in Corollary \ref{cor1} of Section \ref{sect_conclusion} that, up to $\mathfrak{S}_3$-symmetry, all primitive bilinear forms between modules of the class $\mathcal{S}$ have been defined in this section. \subsection{Examples of $\mathbf{W}$-equivariant germs}\label{sect_example-germ} In what follows, the elements of $\mathbb{C}^3$ will be written as triples $(\delta_1,\delta_2,\gamma)$. Let $\sigma$ be the involution defined by $(\delta_1,\delta_2,\gamma)^\sigma=(\delta_2,\delta_1,\gamma)$. Let $\mathfrak{z}$ be the union of the two affine planes $H_i$, the six affine lines $D_i$ and the five points $P_i$ defined as follows. For $i=0,1$, the plane $H_i$ is defined by the equation $\gamma=\delta_1+\delta_2+i$. The six lines $D_i$ are parametrized as follows: $D_1=\{(0,\delta,\delta+2)\vert \delta\in\mathbb{C}\}$ and $D_2= D_1^\sigma$, $D_3=\{(\delta,1,\delta)\vert \delta\in\mathbb{C}\}$ and $D_4=D_3^\sigma$, $D_5=\{(\delta,-(1+\delta),1)\vert \delta\in\mathbb{C}\}$ and $D_6=\{(\delta,1-\delta,0)\vert \delta\in\mathbb{C}\}$. \noindent The five points are $P_1=(0,0,3)$, $P_2=(0,-2,1)$, $P_3=P_2^\sigma$, $P_4=(1,1,0)$ and $P_5=(-2/3,-2/3,5/3)$. Also let $\mathfrak{z}^\ast$ be the set of all $(\delta_1,\delta_2,\gamma)\in\mathfrak{z}$ such that $\{\delta_1,\delta_2,\gamma\}\not\subset\{0,1\}$. In what follows, we will consider $\Omega_u^\delta$ as a module of the class ${\mathcal{S}}^\ast$.
Thus we can define without ambiguity the degree of any $\pi\in\mathcal{G}_{\mathbf{W}}(\Omega_u^{\delta_1}\times\Omega_v^{\delta_2}, \Omega^\gamma_{u+v})$ as the scalar $\gamma-\delta_1-\delta_2$. The following table provides a list of germs $\pi$ in $\mathcal{G}_{\mathbf{W}}(\Omega_u^{\delta_1}\times\Omega_v^{\delta_2}, \Omega^\gamma_{u+v})$, when $(\delta_1,\delta_2,\gamma)$ runs over $\mathfrak{z}^\ast$. In the table, we omit the symbol $\mathcal{G}$. For example, $d^{-1}$ stands for $\mathcal{G}(d)^{-1}$, which is well-defined even for $u\equiv 0 \mod \mathbb{Z}$. \begin{table}[h] \caption{\textbf{List of $\pi\in\mathcal{G}_{\mathbf{W}}(\Omega_u^{\delta_1}\times\Omega_v^{\delta_2}, \Omega^\gamma_{u+v})$, where $(\delta_1,\delta_2,\gamma)$ runs over $\mathfrak{z}^*$}} \label{table1} \begin{center} \begin{tabular}{|c|c|c||c|} \hline &$\deg \pi$ & $(\delta_1,\delta_2,\gamma)$&$\pi$ \\ \hline 1. & $3$ & $(-\frac{2}{3}, -\frac{2}{3}, \frac{5}{3})$ & $G_{u,v}$\\ \hline 2.& $3$ & $(0,0,3)$ & $B^{1,1}_{u,v}\circ(d\times d)$\\ \hline 3.& $3$ & $(0,-2,1)$ & $d\circ B^{1,-2}_{u,v}\circ (d\times id)$ \\ \hline 4.& $3$ & $(-2,0,1)$ & $d\circ B^{-2,1}_{u,v}\circ (id\times d)$ \\ \hline 5.& $2$ & $(0,\delta, \delta+2)$ & $B^{1,\delta}_{u,v}\circ(d\times id)$ \\ \hline 6.& $2$ & $(\delta,0, \delta+2)$ & $B^{\delta,1}_{u,v}\circ(id\times d)$ \\ \hline 7.& $2$ & $(\delta, -\delta-1,1)$ & $d\circ B^{\delta,-\delta-1}_{u,v}$\\ \hline 8.& $1$ & $(\delta_1,\delta_2, \delta_1+\delta_2+1)$ & $B^{\delta_1,\delta_2}_{u,v}$ \\ \hline 9.& $0$ & $(\delta_1,\delta_2, \delta_1+\delta_2)$ & $P^{\delta_1,\delta_2}_{u,v}$ \\ \hline 10.& $-1$ & $(1,\delta,\delta)$ & $P^{0,\delta}_{u,v}\circ (d^{-1}\times id)$\\ \hline 11.& $-1$ & $(\delta,1,\delta)$ &$P^{\delta,0}_{u,v}\circ(id\times d^{-1})$\\ \hline 12.& $-1$ & $(\delta,1-\delta,0)$ & $d^{-1}\circ P^{\delta,1-\delta}_{u,v}$ \\ \hline \end{tabular} \end{center} {\small The condition $(\delta_1,\delta_2,\gamma)\in\mathfrak{z}^*$ implies that
$(\delta_1,\delta_2)\neq (0,0)$ in line 8, $(\delta_1,\delta_2)\neq (0,0), (0,1)$ or $(1,0)$ in line 9, and $\delta\neq 0$ or $1$ in lines 10-12.} \end{table} Let $M,\,N$ and $P$ be $\mathbf{W}$-modules of the class $\mathcal{S}$. Set $u=\mathrm{Supp}\,\,M$, $v=\mathrm{Supp}\,\,N$ and assume that $\mathrm{Supp}\, \,P=u+v$. Let $\delta_1\in\deg M$, $\delta_2\in\deg N$ and $\gamma\in\deg P$. It follows from Lemma \ref{W-germs} that \noindent \centerline{$\mathcal{G}(M)=\mathcal{G}(\Omega_u^{\delta_1})$, $\mathcal{G}(N)=\mathcal{G}(\Omega_v^{\delta_2})$ and $\mathcal{G}(P)=\mathcal{G}(\Omega_{u+v}^\gamma)$.} \begin{lemma}\label{lower} Assume that $(\delta_1,\delta_2,\gamma)\in\mathfrak{z}$. Then $\mathcal{G}_\mathbf{W}(M\times N,P)$ is not zero, and moreover we have $\mathrm{dim}\,\mathcal{G}_\mathbf{W}(M\times N,P)\geq 2$ if $\{\delta_1,\delta_2,\gamma\}\subset\{0,1\}$. More precisely we have: (i) for $(\delta_1,\delta_2,\gamma)\in\mathfrak{z}^*$, Table \ref{table1} provides a non-zero $\pi\in\mathcal{G}_\mathbf{W}(M\times N,P)$, (ii) if $\{\delta_1,\delta_2,\gamma\}\subset\{0,1\}$, then we have $\mathcal{G}(M)=\mathcal{G}(\Omega_u^{0})$, $\mathcal{G}(N)=\mathcal{G}(\Omega_v^{0})$ and $\mathcal{G}(P)=\mathcal{G}(\Omega_{u+v}^1)$ and the maps $\pi_1:=P^{0,1}_{u,v}\circ(id\times d)$ and $\pi_2:=P^{1,0}_{u,v}\circ(d\times id)$ are non-proportional elements of $\mathcal{G}_\mathbf{W}(M\times N,P)$. \end{lemma} Theorem 2, proved in Section 7, states that the maps listed in the previous lemma provide a basis of $\mathcal{G}_{\mathbf{W}}(M\times N,P)$. It also states that $\mathcal{G}_\mathbf{W}(M\times N,P)=0$ if $(\delta_1,\delta_2,\gamma)\not\in\mathfrak{z}$. \section{Classification of $\mathbf{W}$-equivariant degenerate bilinear maps}\label{sect_Thm1} Let $M, N$ and $P$ be in the class $\mathcal{S}$. The goal of the section is the classification of all $\mathbf{W}$-equivariant degenerate bilinear maps $\pi:M\times N\rightarrow P$.
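The non-degenerate germs of Table \ref{table1} admit a similar hedged numerical check. The sketch below tests line 8 under two assumptions not stated in this section: the basis action $L_m.e_x^{\delta}=(x+\delta m)e_{x+m}^{\delta}$, and the representation of the degree-one germ $B^{\delta_1,\delta_2}_{u,v}$, up to a scalar, by the coefficient function $X(x,y)=\delta_2 x-\delta_1 y$ with values in $\Omega^{\delta_1+\delta_2+1}_{u+v}$ (the map $B^{\delta_1,\delta_2}_{u,v}$ itself is defined via the bracket of $\mathcal{P}_\xi$, not by this formula).

```python
from fractions import Fraction as F

# Hedged sketch: assume L_m . e_x^d = (x + d*m) e_{x+m}^d, and represent
# the degree-one germ B^{d1,d2} (up to a scalar) by X(x, y) = d2*x - d1*y.
# Equivariance is then the polynomial identity
#   (x + y + (d1+d2+1)*m) X(x, y)
#     = (x + d1*m) X(x+m, y) + (y + d2*m) X(x, y+m).

def defect(d1, d2, x, y, m):
    X = lambda a, b: d2*a - d1*b
    gamma = d1 + d2 + 1
    return ((x + y + gamma*m) * X(x, y)
            - (x + d1*m) * X(x + m, y)
            - (y + d2*m) * X(x, y + m))

# Each defect is a polynomial of total degree <= 2 in (x, y, m), so
# vanishing on a 4 x 4 x 4 grid of distinct rational points proves the
# identity for the sampled weights.  (d1, d2) = (-1, -1) is the case of
# the Witt bracket itself.
pts = [F(k, 5) for k in range(1, 5)]
for d1, d2 in [(F(-1), F(-1)), (F(1, 2), F(3)), (F(0), F(-2))]:
    assert all(defect(d1, d2, x, y, m) == 0
               for x in pts for y in pts for m in pts)
print("degree-one germ is equivariant for the sampled weights")
```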
In order to simplify the statements, we will always assume that $M, N$ and $P$ are indecomposable. Assume that $\pi\neq 0$ and set $S=\overline{\mathrm{Supp}\,\, \pi}$. By Lemma \ref{lemma_support}, $S\subset H\cup D\cup V$ is a union of lines. Thus there are four cases, of increasing complexity: \begin{enumerate} \item[(i)] $S=0$, \item[(ii)] $S$ consists of one line $H,\, D$ or $V$, \item[(iii)] $S$ consists of two lines among $H,\, D$ and $V$, or \item[(iv)] $S=H\cup D\cup V$. \end{enumerate} Since the three lines $H$, $V$ and $D$ are exchanged by the $\mathfrak{S}_3$-symmetry, we can reduce the full classification to the following four cases: \begin{enumerate} \item[(i)]$S=0$, \item[(ii)] $S=V$, \item[(iii)] $S=H\cup V$, \item[(iv)]$S=H\cup V\cup D$. \end{enumerate} In the following lemmas, it is assumed that $\pi:M\times N\rightarrow P$ is a non-zero degenerate $\mathbf{W}$-equivariant bilinear map. \begin{lemma} If $S=0$, then $\pi$ is trivial. \end{lemma} The lemma is obvious. A $\mathbf{W}$-equivariant map $\psi: N\rightarrow P$ is called an {\it almost-isomorphism} if its kernel has dimension $\leq 1$. The only almost-isomorphisms which are not isomorphisms between modules of the class $\mathcal{S}$ are the maps from $B_{\xi}$ to $A_{\eta}$ obtained as the composition of $B_{\xi}\twoheadrightarrow\overline{A}$ and $\overline{A}\hookrightarrow A_{\eta}$. \begin{lemma}\label{lemma_deg-V} Assume that $S=V$. Then there are a surjective map $\phi:M\rightarrow \mathbb{C}$ and an almost-isomorphism $\psi: N\rightarrow P$ such that \centerline{$\pi(x,y)=\phi(x)\psi(y)$,} \noindent for any $(x,y)\in M\times N$. \end{lemma} Lemma \ref{lemma_deg-V} is obvious. In order to investigate the case where $S$ contains two lines, it is necessary to state a diagram-chasing lemma. Let $\mathfrak{g}$ be a Lie algebra and let $X$ be a $\mathfrak{g}$-module.
For $\xi\in H^1(\mathfrak{g},X)$, denote by $X_\xi$ the corresponding extension: \centerline{$0\rightarrow X\rightarrow X_\xi\rightarrow \mathbb{C} \rightarrow 0$.} \begin{lemma} \label{chasing} Assume that $\mathrm{End}_{\mathfrak{g}}(X)=\mathbb{C}$ and that $X=\mathfrak{g}.X$. Let $\xi_1,\xi_2,\xi_3\in H^1(\mathfrak{g},X)$ and set $Z=X_{\xi_1}\otimes X_{\xi_2}/X\otimes X$. We have \centerline{$\mathrm{dim} {\mathrm {Hom}}_{\mathfrak{g}}(Z,X_{\xi_3})=3-r$,} \centerline{$\mathrm{dim} {\mathrm {Hom}}_{\mathfrak{g}}(Z,X)=2-s$,} \noindent where $r$ is the rank of $\{\xi_1,\xi_2,\xi_3\}$ and $s$ is the rank of $\{\xi_1,\xi_2\}$ in $H^1(\mathfrak{g},X)$. \end{lemma} \begin{proof} For $i=1$ to $3$, there are elements $\delta_i\in X_{\xi_i}\setminus X$ such that the map $x\in\mathfrak{g}\rightarrow x.\delta_i\in X$ is a cocycle of the class $\xi_i$. Since $X_{\xi_i}=X\oplus\mathbb{C} \delta_i$, we have \noindent\centerline{$Z= [\delta_1\otimes X]\oplus [X\otimes\delta_2] \oplus \mathbb{C}(\delta_1\otimes\delta_2)$.} Let $\pi\in {\mathrm {Hom}}_{\mathfrak{g}}(Z,X_{\xi_3})$. Note that $\delta_1\otimes X$ is a submodule of $Z$ isomorphic with $X$. Since $\mathfrak{g}.X=X$, we have $\pi(\delta_1\otimes X)\subset X$. Thus there exists $\lambda\in\mathbb{C}$ such that $\pi(\delta_1\otimes x)=\lambda x$ for all $x\in X$. Similarly, there exists $\mu\in\mathbb{C}$ such that $\pi(x\otimes\delta_2)=\mu x$ for all $x\in X$. By definition, we have $\pi(\delta_1\otimes\delta_2)=\nu\delta_3 +x_0$ for some $\nu\in\mathbb{C}$ and some $x_0$ in $X$.
The $\mathfrak{g}$-equivariance of $\pi$ is equivalent to the equation: \noindent\centerline{ $\mu g.\delta_1+\lambda g.\delta_2 =\nu g.\delta_3 +g.x_0$ for all $g\in \mathfrak{g}$.} \noindent Thus $\mathrm{dim}{\mathrm {Hom}}_{\mathfrak{g}}(Z,X_{\xi_3})$ is exactly the dimension of the space of triples $(\lambda,\mu,\nu)\in \mathbb{C}^3$ such that \centerline{$\lambda\delta_2 +\mu \delta_1 -\nu\delta_3\equiv 0$ in $H^1(\mathfrak{g},X)$,} \noindent and the first assertion follows. The second assertion is similar. \end{proof} For $\xi\in\mathbb{P}^1$ and $t\in\mathbb{C}$, define $\eta^{t}_{\xi}: A_{\xi}\times A_{\xi}\rightarrow A_{\xi}$ by the formula \noindent\centerline{$\eta^{t}_{\xi}(m,n)= \mathrm{Res}(m) n+ t \mathrm{Res}(n) m$,} \noindent and recall that $\eta(\xi_1,\xi_2,\xi_3)$ is defined in Section $4.3$. \begin{lemma} Assume that $S=V\cup H$. Then $\pi$ is conjugate to one of the following: (i) $\eta(\xi_1,\xi_2,\xi_3)$, for some $\xi_1,\xi_2,\xi_3\in\mathbb{P}^1$ with $\xi_3\notin\{\xi_1,\xi_2\}$, or (ii) $\eta^{t}_{\xi}$ for some $t\neq 0$ and $\xi\in\mathbb{P}^1$. \end{lemma} \begin{proof} By Lemma \ref{lemma_support} we have $\pi(M_\pi\times N_\pi)=0$, $M/M_\pi=\mathbb{C}$ and $N/N_\pi=\mathbb{C}$. It follows that $M\simeq A_{\xi_1}$ and $N\simeq A_{\xi_2}$ for some $\xi_1,\xi_2\in\mathbb{P}^1$. Thus $(M/M_{\pi})\otimes N_{\pi}$ is isomorphic to $\overline {A}$. Since $\pi$ induces a non-zero map $(M/M_{\pi})\otimes N_{\pi}\rightarrow P$, the $\mathbf{W}$-module $P$ contains $\overline{A}$, hence $P$ is isomorphic to $A_{\xi_3}$ for some $\xi_3\in\mathbb{P}^1$. Let $B=\{\mu\in \mathbf{B}_{\mathbf{W}}(M\times N,P)\vert \mu(M_\pi\times N_\pi)=0\}$. It follows from the Kaplansky-Santharoubane Theorem that $\mathrm{dim} H^1(\mathbf{W},\overline A)=2$. Thus if $\xi_1,\xi_2,\xi_3$ are not all equal, it follows from Lemma \ref{chasing} that $B=\mathbb{C}\, \eta(\xi_1,\xi_2,\xi_3)$.
However $\mathrm{Supp}\,\,\eta(\xi_1,\xi_2,\xi_3)$ lies inside $H$ or $V$ if $\xi_1=\xi_3$ or $\xi_2=\xi_3$. Hence we have $\xi_3\notin\{\xi_1,\xi_2\}$ and $\pi$ is conjugate to $\eta(\xi_1,\xi_2,\xi_3)$. Similarly, if all $\xi_i$ are equal to some $\xi\in \mathbb{P}^1$, then $B$ is the two-dimensional vector space generated by the affine line $\{\eta^{t}_{\xi}\vert\,t\in\mathbb{C}\}$. Thus $\pi$ is conjugate to some $\eta^t_\xi$. Moreover the hypothesis $\mathrm{Supp}\,\,\pi= H\cup V$ implies that $t\neq 0$. \end{proof} \begin{lemma}\label{lemma_class-deg2} Assume that $S=V \cup H \cup D$. Then, $\pi$ is conjugate to $\Theta_{\xi}$ for some $\xi\in \mathbb{P}^1$, modulo a trivial map. \end{lemma} \begin{proof} It follows from Lemma \ref{lemma_support} that we have $\pi(M_\pi \times N_\pi) \cong \mathbb{C}, M/M_\pi \cong \mathbb{C}$ and $N/N_\pi \cong \mathbb{C}$. Thus $M=A_{\xi_1}, N=A_{\xi_2}$ and $P=B_{\xi_3}$ for some $\xi_i \in \mathbb{P}^1$. Set $Z=A_{\xi_1}\otimes A_{\xi_2}/\overline{A}\otimes \overline{A}$. By Lemma \ref{lemma_support}, the composition of any $\mu\in \mathbf{B}^0_{\mathbf{W}}(A_{\xi_1}\times A_{\xi_2}, B_{\xi_3})$ with the map $B_{\xi_3}\rightarrow\overline{A}$ provides a linear map ${\overline\mu}: Z\rightarrow \overline A$. Since $\overline{\mu}$ is not zero, Lemma \ref{chasing} implies that $\xi_1=\xi_2$. By the $\mathfrak{S}_3$-symmetry, $B_{\xi_3}$ is the restricted dual of $A_{\xi_2}$, so we have $\xi_3=\xi_1$. Set $\xi:=\xi_1=\xi_2=\xi_3$ and consider the following exact sequence \noindent\centerline {$0 \rightarrow \mathbf{B}_{\mathbf{W}}(A_\xi\times A_\xi, \mathbb{C}) \rightarrow \mathbf{B}_{\mathbf{W}}^0(A_\xi\times A_\xi, B_{\xi}) \rightarrow {\mathrm {Hom}}_{\mathbf{W}}(Z,B_{\xi}/\mathbb{C})$,} \noindent where the last arrow is the map $\mu\mapsto\overline{\mu}$.
It is clear that the subspace $\mathbf{B}_{\mathbf{W}}(A_{\xi}\times A_{\xi}, \mathbb{C})$ of $\mathbf{B}_{\mathbf{W}}^{0}(A_{\xi}\times A_{\xi}, B_{\xi})$ is the space of trivial bilinear maps. By the previous lemma, ${\mathrm {Hom}}_{\mathbf{W}}(Z,B_{\xi}/\mathbb{C})$ has dimension one. It follows from its definition that $\mathrm{Supp}\, \Theta_\xi=H\cup V\cup D$. Hence we have \noindent\centerline{$\mathbf{B}_{\mathbf{W}}^{0}(A_{\xi}\times A_{\xi}, B_{\xi})= \mathbf{B}_{\mathbf{W}}(A_{\xi}\times A_{\xi}, \mathbb{C})\oplus \mathbb{C} \Theta_\xi$.} \noindent Therefore $\pi$ is conjugate to $\Theta_{\xi}$ modulo a trivial map. \end{proof} \begin{thm} Let $M,N$ and $P$ be indecomposable modules of the class $\mathcal{S}$ and let $\pi:M\times N\rightarrow P$ be a $\mathbf{W}$-equivariant degenerate bilinear map. Up to the $\mathfrak{S}_3$-symmetry, $\pi$ is conjugate to one of the following: (i) a trivial bilinear map $\pi:A_{\xi_1}\times A_{\xi_2}\rightarrow B_{\xi_3}$, (ii) the map $\pi:A_{\xi}\times N\rightarrow P$, $(m,n)\mapsto \mathrm{Res}(m)\psi(n)$ where $\xi\in\mathbb{P}^1$ and where $\psi:N\rightarrow P$ is an almost-isomorphism, (iii) the map $\eta(\xi_1,\xi_2,\xi_3)$ for some $\xi_1,\xi_2$ and $\xi_3\in \mathbb{P}^1$ with $\xi_3\notin\{\xi_1,\xi_2\}$, or the map $\eta^{t}_{\xi}$ for some $\xi\in \mathbb{P}^1$ and $t\neq 0$, (iv) $\Theta_{\xi}+\tau$, where $\xi\in\mathbb{P}^1$ and $\tau$ is a trivial map. \end{thm} The following table is another presentation of Theorem 1. To limit the number of cases, the list is given up to the $\mathfrak{S}_3$-symmetry. That is why the datum in the third column is $P^*$ and not $P$.
\begin{table}[h] \caption{\textbf{List, up to the $\mathfrak{S}_3$-symmetry, of possible $S$ and $d$, where $d=\mathrm{dim} \mathbf{B}_{\mathbf{W}}^0(M\times N, P)$ and $S=\overline{\mathrm{Supp}\,\,\pi}$ for $\pi\in \mathbf{B}_{\mathbf{W}}^0(M\times N, P)$.}} \label{table2} \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline &$M\times N$ & $P^*$& $S$&d & Restrictions \\ \hline 1.& $A_{\xi_1}\times A_{\xi_2}$ & $A_{\xi_3}$ & $0$& $1$ &$\mathrm{Card}\, \{\xi_1,\xi_2,\xi_3\}\geq 2$\\ \hline 2.& $A_{\xi}\times X$ & $X^*$ & $V$& $1$ &$X\not\simeq A_\xi$ or $B_\xi$ \\ \hline 3.& $A_{\xi_1}\times B_{\xi_2}$ & $B_{\xi_3}$ & $V$& $1$& \\ \hline 4.& $A_{\xi_1}\times A_{\xi_2}$ & $B_{\xi_3}$ & $H\cup V$ & $1$ &$\xi_3\notin\{\xi_1,\xi_2\}$ \\ \hline 5.& $A_{\xi}\times A_{\xi}$ & $B_{\xi}$& $H$, $V$ or $H\cup V$&$2$ & \\ \hline 6.& $A_{\xi}\times A_{\xi}$ & $A_{\xi}$& $0$ or $H\cup V\cup D$&$2$ & \\ \hline \end{tabular} \end{center} \small{ Except for the indicated restrictions, $X\in\mathcal{S}$ is arbitrary and $\xi,\xi_1,\xi_2$ and $\xi_3$ are arbitrary.} \end{table} \section{Bounds for the dimension of the spaces of germs of bilinear maps} Let $M,N$ and $P$ be $\mathbf{W}$-modules of the class $\mathcal{S}$. In this section we introduce a space $\tilde{\mathcal{G}}_{\mathfrak{sl}(2)}(M\times N, P)$ which is a good approximation of $\mathcal{G}_{\mathbf{W}}(M\times N, P)$. Indeed we have: \noindent\centerline {$\mathcal{G}_{\mathbf{W}}(M\times N, P)\subset \tilde{\mathcal{G}}_{\mathfrak{sl}(2)}(M\times N, P) \subset \mathcal{G}_{\mathfrak{sl}(2)}(M\times N, P)$.} \subsection{On $\mathrm{dim} \mathcal{G}_{\mathfrak{sl}(2)}(M\times N,P)$} For an element $\gamma\in\mathbb{C}^2$, set $L.\gamma=\gamma+(1,0)$ and $R.\gamma=\gamma+(0,1)$. Similarly, for a pair $\{\alpha,\beta\}$ of elements of $\mathbb{C}^2$, set $L.\{\alpha,\beta\}=\{L.\alpha,L.\beta\}$ and $R.\{\alpha,\beta\}=\{R.\alpha,R.\beta\}$.
The pair $\{\alpha,\beta\}$ is called {\it adjacent} if $\beta=\alpha+(1,-1)$ or $\alpha=\beta+(1,-1)$. Thus $L.\{\alpha,\beta\}$ and $R.\{\alpha,\beta\}$ are adjacent whenever $\{\alpha,\beta\}$ is adjacent. Given $(x,y)\in\mathbb{C}^2$, let $C(x,y)$ be the set of all elements of $\mathbb{C}^2$ of the form $(x+m, y+n)$ with $m,n\in\mathbb{Z}_{\geq 0}$ and $(m,n)\neq (0,0)$. Let $\delta_1,\,\delta_2$ and $\gamma$ be scalars, and let $u$ and $v$ be $\mathbb{Z}$-cosets. Let $\pi:\Omega^{\delta_1}_{u}\times \Omega^{\delta_2}_{v}\rightarrow \Omega^{\gamma}_{u+v}$ be an $L_0$-equivariant bilinear map. Let $(e_x^{\delta_1})_{x\in{u}}$ be the basis of $\Omega^{\delta_1}_{u}$ defined in Section 1. Similarly denote by $(e_y^{\delta_2})_{y\in{v}}$ and $(e_z^{\gamma})_{z\in{u+v}}$ the corresponding bases of $\Omega^{\delta_2}_{v}$ and $\Omega^{\gamma}_{u+v}$. Since $\pi$ is $L_0$-equivariant, there exists a function $X: u\times v\rightarrow\mathbb{C}$ defined by the identity: \noindent\centerline{$\pi(e_x^{\delta_1},e_y^{\delta_2})=X(x,y) e_{x+y}^{\gamma}$.} \begin{lemma}\label{tech} Assume there exists $(x,y)\in u\times v$ such that: (i) $\mathrm{Supp}\, (L_{\pm1}.\pi) \cap C(x,y)=\emptyset$, (ii) $\Re x>\pm\Re\delta_1$, $\Re y>\pm\Re\delta_2$, $\Re (x+y)>\pm\Re\gamma$, and (iii) $X(x+1,y)=X(x,y+1)=0$. Then $\mathcal{G}(\pi)=0$. \end{lemma} \begin{proof} {\it First step:} We claim that if an adjacent pair $\{\alpha,\beta\}$ in $C(x,y)$ satisfies $X(\alpha)=X(\beta)=0$, then $X$ vanishes on $R^k.L^l.\{\alpha,\beta\}$ for any $k,l\in\mathbb{Z}_{\geq 0}$. We first prove that $X$ vanishes on $L.\{\alpha,\beta\}$. Indeed we can assume that $\alpha=\beta+(1,-1)$, and therefore we have $\alpha=(x'+1,y')$, $\beta=(x',y'+1)$ for some $(x',y')\in \mathbb{C}^2$.
Since $L_{-1}.\pi(e_{x'+1}^{\delta_1},e_{y'+1}^{\delta_2})$ is a linear combination of $\pi(e_{x'}^{\delta_1},e_{y'+1}^{\delta_2})$ and \noindent $\pi(e_{x'+1}^{\delta_1},e_{y'}^{\delta_2})$, we get \noindent\centerline{$0= L_{-1}.\pi(e_{x'+1}^{\delta_1},e_{y'+1}^{\delta_2})= X(x'+1,y'+1)L_{-1}. e_{x'+y'+2}^{\gamma}$.} \noindent Since $\Re (x'+y'+2-\gamma)>0$, we get $L_{-1}. e_{x'+y'+2}^{\gamma}\neq 0$ and therefore $X(x'+1,y'+1)=0$. Moreover we have \noindent\centerline{$0=L_{1}.\pi(e_{x'+1}^{\delta_1},e_{y'}^{\delta_2}) =\pi(L_1.e_{x'+1}^{\delta_1},e_{y'}^{\delta_2}) +\pi(e_{x'+1}^{\delta_1},L_1.e_{y'}^{\delta_2})$.} \noindent Using that $X(x'+1,y'+1)=0$, it follows that $\pi(L_1.e_{x'+1}^{\delta_1},e_{y'}^{\delta_2})=0$. Since $\Re(x'+1+\delta_1)>0$, we have $L_1.e_{x'+1}^{\delta_1}\neq 0$ and therefore $X(x'+2,y')=0$. Hence $X$ vanishes on $L.\{\alpha,\beta\}$. Similarly, $X$ vanishes on $R.\{\alpha,\beta\}$. It follows by induction that $X$ vanishes on $R^k.L^l.\{\alpha,\beta\}$ for any $k,l\in\mathbb{Z}_{\geq 0}$. {\it Second step:} Set $\alpha=(x+1,y)$ and $\beta=(x,y+1)$. We have $C(x,y)=\cup_{k,l\in\mathbb{Z}_{\geq 0}} \,R^k.L^l.\{\alpha,\beta\}$. It follows that $X$ vanishes on $C(x,y)$. Hence $\mathcal{G}(\pi)=0$. \end{proof} Let $\pi\in \mathbf{B}_{L_0}(\Omega^{\delta_1}_{u}\times \Omega^{\delta_2}_{v}, \Omega^{\gamma}_{u+v})$. As before, set \noindent\centerline{$\pi(e_x^{\delta_1},e_y^{\delta_2})=X(x,y) e_{x+y}^{\gamma}$.} \begin{lemma}\label{consecutive} Assume that $\mathcal{G}(\pi)$ is $\mathfrak{sl}(2)$-equivariant and non-zero. Then there exists $(x_0,y_0)\in u\times v$ such that \noindent\centerline{$X(\alpha)\neq 0$ or $X(\beta)\neq 0$,} \noindent for any adjacent pair $\{\alpha,\beta\}$ in $C(x_0,y_0)$. 
\end{lemma} \begin{proof} Since $\mathcal{G}(\pi)$ is $\mathfrak{sl}(2)$-equivariant, we can choose $(x_0,y_0)\in u\times v$ such that: (i) $\mathrm{Supp}\, (L_{\pm1}.\pi)\cap C(x_0,y_0)=\emptyset$. \noindent Moreover we can assume that $\Re x_0$ and $\Re y_0$ are big enough so that: (ii) $\Re x_0>\pm\Re\delta_1$, $\Re y_0>\pm\Re\delta_2$, $\Re (x_0+y_0)>\pm\Re\gamma$. Let $\{(x+1,y),(x,y+1)\}$ be an adjacent pair in $C(x_0,y_0)$. The pair $(x,y)$ satisfies the conditions (i) and (ii) of the previous lemma. Since $\mathcal{G}(\pi)\neq 0$, it follows that $X(x+1,y)\neq 0$ or $X(x,y+1)\neq 0$. \end{proof} Let $M,N$ and $P$ be $\mathbf{W}$-modules of the class $\mathcal{S}$. \begin{lemma}\label{Gsl(2)} We have $\mathrm{dim}\,\mathcal{G}_{\mathfrak{sl}(2)}(M\times N,P)\leq 2$. \end{lemma} \begin{proof} Let $\pi_1,\pi_2,\pi_3$ be any elements in $\mathbf{B}_{L_0}(M\times N,P)$ such that $\mathcal{G}(\pi_i)\in \mathcal{G}_{\mathfrak{sl}(2)}(M\times N,P)$. Since the space $\mathcal{G}(M\times N,P)$ only depends on the germs of $M$, $N$ and $P$, we can assume that $M= \Omega^{\delta_1}_{u}$, $N=\Omega^{\delta_2}_{v}$ and $P=\Omega^{\gamma}_{w}$ for some scalars $\delta_1,\delta_2$ and $\gamma$ and some $\mathbb{Z}$-cosets $u$, $v$ and $w$. Moreover, we can assume $w= u+v$, since otherwise it is obvious that $\mathcal{G}_{\mathfrak{sl}(2)}(M\times N,P)=0$. There is $(x,y)\in u\times v$ such that $\mathrm{Supp}\, (L_{\pm1}.\pi_i)\cap C(x,y)=\emptyset$ for $i=1$ to $3$. Adding some positive integers to $x$ and $y$ if necessary, we can assume that $\Re x>\pm\Re\delta_1$, $\Re y>\pm\Re\delta_2$ and $\Re (x+y)>\pm\Re\gamma$. There is a non-zero triple $(a,b,c)$ of scalars with: \noindent\centerline{$[a\pi_1+b\pi_2+c\pi_3] (e_{x+1}^{\delta_1},e_{y}^{\delta_2})= [a\pi_1+b\pi_2+c\pi_3] (e_{x}^{\delta_1},e_{y+1}^{\delta_2})=0$.} \noindent It follows from Lemma \ref{tech} that $a\mathcal{G}(\pi_1)+b\mathcal{G}(\pi_2)+c\mathcal{G}(\pi_3)=0$.
Since any three elements $\mathcal{G}(\pi_1),\mathcal{G}(\pi_2)$ and $\mathcal{G}(\pi_3)$ of $\mathcal{G}_{\mathfrak{sl}(2)}(M\times N,P)$ are linearly dependent, it follows that $\mathrm{dim}\,\mathcal{G}_{\mathfrak{sl}(2)}(M\times N,P)\leq 2$. \end{proof} \subsection{The recurrence relations} Let $M,\,N$ and $P$ be three $\mathbf{W}$-modules of the class $\mathcal{S}$. Let $\pi\in \mathbf{B}(M\times N,P)$. Set \begin{align*} \tilde{\pi}(m,n)=L_{-2}L_2.\pi(m,n)&- \pi(L_{-2}L_2.m,n)-\pi(m, L_{-2}L_2.n)\\ &- \pi(L_{-2}.m, L_2.n)- \pi(L_{2}.m, L_{-2}.n), \end{align*} \noindent for all $(m, n)\in M\times N$. Note that we have $\tilde{\pi} =L_{-2}\circ(L_{2}.\pi)+(L_{-2}.\pi)\circ (L_{2}\times \mathrm{id})+ (L_{-2}.\pi)\circ (\mathrm{id} \times L_{2})$. \noindent Similarly, for a germ $\mu\in \mathcal{G}(M\times N,P)$, set \noindent\centerline{$\tilde{\mu} =L_{-2}\circ(L_{2}.\mu) +(L_{-2}.\mu)\circ( L_{2}\times \mathrm{id}) + (L_{-2}.\mu)\circ (\mathrm{id}\times L_{2})$.} \noindent Set $\tilde{\mathcal{G}}_{{\mathfrak{sl}}(2)}(M\times N,P)=\{ \mu\in {\mathcal{G}}_{{\mathfrak{sl}}(2)}(M\times N,P)\vert \,\tilde{\mu}=0\}$. It follows from the definitions that we have \noindent \centerline{ ${\mathcal{G}}_{{\mathbf{W}}}(M\times N,P)\subset \tilde{\mathcal{G}}_{{\mathfrak{sl}}(2)}(M\times N,P)\subset {\mathcal{G}}_{{\mathfrak{sl}}(2)}(M\times N,P)$. } Let $\delta_1,\delta_2$ and $\gamma$ be three scalars. For $k=1,\,2$, set \begin{align*} &a_k(x,y)=(x+k\delta_1)(y-k\delta_2), \\ &b_k(x,y)=k^2(\delta_1+\delta_2-\gamma-\delta_1^2-\delta_2^2+\gamma^2)-2xy, \\ &c_k(x,y)=(x-k\delta_1)(y+k\delta_2). \end{align*} \noindent Let $u$ and $v$ be two $\mathbb{Z}$-cosets and let $\pi:\Omega^{\delta_1}_{u}\times \Omega^{\delta_2}_{v}\rightarrow \Omega^{\gamma}_{u+v}$ be an $L_0$-equivariant bilinear map.
As before, define the function $X$ by the identity: \noindent\centerline{$\pi(e_x^{\delta_1},e_y^{\delta_2})=X(x,y) e_{x+y}^{\gamma}$.} \begin{lemma}\label{Casimir} Assume that $\mathcal{G}(\pi)$ belongs to $\tilde{\mathcal{G}}_{{\mathfrak{sl}}(2)} (\Omega^{\delta_1}_{u}\times \Omega^{\delta_2}_{v}, \Omega^{\gamma}_{u+v})$. Then there exists $(x_0,y_0)\in u\times v$ such that: \noindent\centerline{ $a_k(x,y)X(x+k,y-k)+b_k(x,y)X(x,y)+c_k(x,y)X(x-k,y+k)=0$,} \noindent for $k=1,2$ and all $(x,y)\in C(x_0,y_0)$. \end{lemma} \begin{proof} For $k=1,2$, set $\pi_k=L_{-k}\circ (L_k. \pi)+ (L_{-k}.\pi)\circ (L_{k}\times id)+ (L_{-k}.\pi)\circ (id\times L_k)$. Since the germ of $\pi$ is $\mathfrak{sl}(2)$-equivariant, we have $\mathcal{G}(\pi_1)=0$, and since $\pi_2=\tilde\pi$ we also have $\mathcal{G}(\pi_2)=0$. Therefore there exists $(x_0,y_0)\in u\times v$ such that $\mathrm{Supp}\, \pi_1$ and $\mathrm{Supp}\, \pi_2$ do not intersect $C(x_0,y_0)$. This condition is equivalent to the recurrence relations of the lemma. \end{proof} \subsection{The case $\mathrm{dim} \tilde{\mathcal{G}}_{{\mathfrak{sl}}(2)}(M \times N,P)=2$} \begin{lemma}\label{dim_tilde_cG} Let $M,N$ and $P$ be $\mathbf{W}$-modules of the class $\mathcal{S}$. If \noindent\centerline {$\mathrm{dim} \tilde{\mathcal{G}}_{{\mathfrak{sl}}(2)}(M \times N,P)=2$,} \noindent then one of the following assertions holds: (i) $\deg M=\deg N= \deg P =\{0,1\}$, (ii) $\deg M=-1/2$, $\deg N =\{0,1\}$ and $\deg P\in\{-1/2,3/2\}$, (iii) $\deg M =\{0,1\}$, $\deg N=-1/2$, and $\deg P\in\{-1/2,3/2\}$. \end{lemma} \begin{proof} Assume that $\mathrm{dim} \tilde{\mathcal{G}}_{{\mathfrak{sl}}(2)}(M \times N,P)=2$. We can assume that $M=\Omega^{\delta_1}_{u}, \, N=\Omega^{\delta_2}_{v}$ and $P=\Omega^{\gamma}_{u+v}$ for some $\delta_1,\,\delta_2$ and $\gamma$ in $\mathbb{C}$ and some $u,\,v\in {\mathbb{C}/\mathbb{Z}}$.
Choose $\pi_1,\pi_2\in\mathbf{B}_{L_0}(\Omega^{\delta_1}_{u} \times\Omega^{\delta_2}_{v}, \Omega^{\gamma}_{u+v})$ whose germs form a basis of $\tilde{\mathcal{G}}_{{\mathfrak{sl}}(2)} (\Omega^{\delta_1}_{u} \times\Omega^{\delta_2}_{v}, \Omega^{\gamma}_{u+v})$. For $i=1,\,2$ define the functions $X_i(x,y)$ by the identity \noindent\centerline{$\pi_i(e_x^{\delta_1},e_y^{\delta_2})=X_i(x,y) e_{x+y}^{\gamma}$.} By Lemma \ref{Casimir}, there exists $(x_0,y_0)\in u\times v$ such that \noindent (1)\hskip6mm $a_1(x,y)X_i(x+1,y-1)+b_1(x,y)X_i(x,y)+c_1(x,y)X_i(x-1,y+1)=0$, \noindent (2)\hskip6mm $a_2(x,y)X_i(x+2,y-2)+b_2(x,y)X_i(x,y)+c_2(x,y)X_i(x-2,y+2)=0$, \noindent for $i=1,2$ and all $(x,y)\in C(x_0,y_0)$. Moreover by Lemma \ref{tech}, we can assume that the vectors $(X_1(x+1,y),X_1(x,y+1))$ and $(X_2(x+1,y),X_2(x,y+1))$ are linearly independent, for all $(x,y)\in C(x_0,y_0)$. {\it First step:} We claim that \noindent $a_2(x,y)b_1(x+1,y-1)c_1(x-1,y+1)c_1(x,y)=$ \hskip3cm $a_1(x,y)a_1(x+1,y-1)b_1(x-1,y+1)c_2(x,y)$, \noindent for all $(x,y)\in\mathbb{C}^2$, where the functions $a_i,\,b_i$ and $c_i$ are defined in the previous subsection. From now on, we assume that $(x,y)$ belongs to $C(x_0+1,y_0+1)$. To simplify the expressions in the proof, we set $A_{il}=a_i(x+l,y-l)$, $B_{il}=b_i(x+l,y-l)$ and $C_{il}=c_i(x+l,y-l)$, for any $l\in \{-1,0,1\}$. Thus Identity (2) can be written as: \noindent \centerline{ $A_{2,0}X_i(x+2,y-2)+*X_i(x,y)+C_{2,0}X_i(x-2,y+2)=0$.} \noindent Here and in what follows, $*$ stands for a certain constant whose explicit value is not important at this stage. Multiplying by $A_{1,1}C_{1,-1}$, we obtain \noindent (3)\hskip6mm $A_{2,0}C_{1,-1}[A_{1,1}X_i(x+2,y-2)]+*X_i(x,y)+ A_{1,1}C_{2,0}[C_{1,-1}X_i(x-2,y+2)]=0$. \noindent Note that $(x+1,y-1)$ and $(x-1,y+1)$ belong to $C(x_0,y_0)$.
Using Relation (1) we have \noindent (4)\hskip6mm $A_{1,1}X_i(x+2,y-2)+B_{1,1}X_i(x+1,y-1)+*X_i(x,y)=0$, \noindent (5)\hskip6mm $*X_i(x,y)+B_{1,-1}X_i(x-1,y+1)+C_{1,-1}X_i(x-2,y+2)=0$. \noindent With Relations (4) and (5) we can eliminate the terms $[A_{1,1}X_i(x+2,y-2)]$ and $[C_{1,-1}X_i(x-2,y+2)]$ in Relation (3). We obtain \noindent (6)\hskip6mm $A_{2,0}B_{1,1}C_{1,-1}X_i(x+1,y-1)+* X_i(x,y)+ A_{1,1}B_{1,-1}C_{2,0}X_i(x-1,y+1)=0$. Moreover Relation (1) can be written as \noindent (7)\hskip6mm $A_{1,0}X_i(x+1,y-1)+*X_i(x,y)+C_{1,0}X_i(x-1,y+1)=0$. \noindent Thus Relations (6) and (7) provide two linear equations connecting $X_i(x+1,y-1)$, $X_i(x,y)$ and $X_i(x-1,y+1)$. Since $\{(x,y), (x+1,y-1)\}$ is an adjacent pair, it follows that the two triples $(X_1(x+1,y-1),X_1(x,y),X_1(x-1,y+1))$ and $(X_2(x+1,y-1),X_2(x,y),X_2(x-1,y+1))$ are linearly independent. So the linear relations (6) and (7) are proportional, which implies that \noindent\centerline{$A_{2,0}B_{1,1}C_{1,-1}C_{1,0}=A_{1,0}A_{1,1}B_{1,-1}C_{2,0}$,} \noindent or equivalently (8) $a_2(x,y)b_1(x+1,y-1)c_1(x-1,y+1)c_1(x,y)=$ \hskip3cm $a_1(x,y)a_1(x+1,y-1)b_1(x-1,y+1)c_2(x,y).$ \noindent This identity holds for all $(x,y)\in C(x_0+1,y_0+1)$. Since $C(x_0+1,y_0+1)$ is Zariski dense in $\mathbb{C}^2$, Identity (8) holds for any $(x,y)\in\mathbb{C}^2$. {\it Second step:} We claim that \centerline{$\delta_1+\delta_2-\gamma-\delta_1^2-\delta_2^2+\gamma^2=0$.} Assume otherwise and set $\tau=\delta_1+\delta_2-\gamma-\delta_1^2-\delta_2^2+\gamma^2$. We have $b_1(x,y)=\tau-2xy$, therefore the polynomial $b_1$ is irreducible. Observe that all irreducible factors of the left side of (8) are degree 1 polynomials, except $b_1(x+1,y-1)$, and all irreducible factors of the right side of (8) are degree 1 polynomials, except $b_1(x-1,y+1)$. Hence the irreducible factors of both sides do not coincide, which proves the claim. {\it Third step:} We claim that $\delta_1$ and $\delta_2$ belong to $\{-1/2,0,1\}$.
Using that $\delta_1+\delta_2-\gamma-\delta_1^2-\delta_2^2+\gamma^2=0$, Identity (8) takes the form $(x+2\delta_1) (x+1)(x-1-\delta_1)(x-\delta_1) f(y)$ \hskip3cm $=(x+\delta_1)(x+1+\delta_1)(x-1)(x-2\delta_1) g(y)$ \noindent where $f(y)$ and $g(y)$ are some functions of $y$. Since $x+1$ is a factor of the left side of the identity, it follows that $\delta_1=1,\,0$ or $-1/2$. The proof that $\delta_2=1,\,0$ or $-1/2$ is identical. {\it Fourth step:} We claim that the case $\delta_1=\delta_2=-1/2$ is impossible. The Equations (6) and (7) can be written as \noindent (6) \centerline{$aX_i(x+1,y-1)+b X_i(x,y)+ *X_i(x-1,y+1)=0$,} \noindent (7)\centerline{$cX_i(x+1,y-1)+dX_i(x,y)+*X_i(x-1,y+1)=0$,} \noindent where $a,\,b,\,c$ and $d$ are explicit functions of $x$ and $y$ (as before, $*$ denotes some functions which are irrelevant for the present computation). Using that $\delta_1+\delta_2-\gamma-\delta_1^2-\delta_2^2+\gamma^2=0$ and a brute-force computation, we obtain \noindent \centerline{$ad-bc=9/32(1+2y)(2x-1)(2xy-1)$.} \noindent So the Equations (6) and (7) are not proportional, which contradicts the fact that $(X_1(x+1,y-1), X_1(x,y))$ and $(X_2(x+1,y-1), X_2(x,y))$ are independent. {\it Final step:} If $\delta_1$ and $\delta_2$ belong to $\{0,1\}$, then $\gamma^2-\gamma=0$, i.e.\ $\gamma=0$ or $1$, and the triple $(\delta_1,\delta_2, \gamma)$ satisfies Assertion (i). If $\delta_1=-1/2$, then $\delta_2=0$ or $1$ and $\gamma^2-\gamma=3/4$, i.e.\ $\gamma=-1/2$ or $3/2$, and the triple $(\delta_1,\delta_2, \gamma)$ satisfies Assertion (ii). Similarly if $\delta_2=-1/2$, the triple $(\delta_1,\delta_2, \gamma)$ satisfies Assertion (iii). \end{proof} \subsection{The case $\mathrm{dim} \mathcal{G}_{\mathbf{W}}(M \times N,P)=2$} Let $M,N,P\in{\mathcal{S}}$ with $\mathrm{Supp}\, P= \mathrm{Supp}\, M+\mathrm{Supp}\, N$. \begin{lemma}\label{germ_dim=2} We have $\mathrm{dim} \mathcal{G}_{\mathbf{W}}(M \times N,P)=2$ iff $\deg M=\deg N=\deg P=\{0,1\}$.
\end{lemma} \begin{proof} Set $u=\mathrm{Supp}\, M$ and $v=\mathrm{Supp}\, N$. By Lemma \ref{Gsl(2)}, we have $\mathrm{dim} \mathcal{G}_{\mathfrak{sl}(2)}(M \times N,P)\leq 2$ and therefore $\mathrm{dim} \mathcal{G}_{\mathbf{W}}(M \times N,P)\leq 2$. \noindent {\it First step:} Assume that $\deg M=\deg N=\deg P=\{0,1\}$. By Lemma \ref{lower} we have $\mathrm{dim} \mathcal{G}_{\mathbf{W}}(M \times N,P)\geq 2$. Thus $\mathrm{dim} \mathcal{G}_{\mathbf{W}}(M \times N,P)=2$. \noindent{\it Second step:} Set $d^-=\mathrm{dim}\mathcal{G}_{\mathbf{W}}(\Omega^0_u\times\Omega^{-1/2}_v,\Omega^{-1/2}_{u+v})$ and $d^+=\mathrm{dim}\mathcal{G}_{\mathbf{W}}(\Omega^0_u\times\Omega^{-1/2}_v,\Omega^{3/2}_{u+v})$. We claim that $d^+=d^-=1$. By Lemma \ref{sl(2)-germs}, there is a $\mathfrak{sl}(2)$-equivariant isomorphism $\phi:\mathcal{G}(\Omega^{-1/2})\rightarrow \mathcal{G}(\Omega^{3/2})$. By Lemma \ref{zero_intersection}, $\phi_*\,\mathcal{G}_{\mathbf{W}}(\Omega^0_u\times\Omega^{-1/2}_v,\Omega^{-1/2}_{u+v})$ and $\mathcal{G}_{\mathbf{W}}(\Omega^0_u\times\Omega^{-1/2}_v,\Omega^{3/2}_{u+v})$ are two subspaces of $\mathcal{G}_{\mathfrak{sl}(2)}(\Omega^0_u\times\Omega^{-1/2}_v,\Omega^{3/2}_{u+v})$ with trivial intersection. Thus we have $d^++d^-\leq 2$. However by Lemma \ref{lower}, we have $d^+\geq 1$ and $d^-\geq 1$. It follows that $d^+=d^-=1$. \noindent{\it Third step:} Conversely, assume that $\mathrm{dim} \mathcal{G}_{\mathbf{W}}(M \times N,P)=2$. It follows that $\mathrm{dim} \tilde{\mathcal{G}}_{\mathfrak{sl}(2)}(M \times N,P)=2$. By the previous step, the case (ii) of the assertion of Lemma \ref{dim_tilde_cG} cannot occur. Using the $\mathfrak{S}_2$-symmetry, the case (iii) is excluded as well. It follows that $\deg M=\deg N=\deg P=\{0,1\}$. \end{proof} \section{Determination of $\mathcal{G}_{\mathbf{W}}(M\times N,P)$} \label{sect_matrix-M} Let $M,N$ and $P\in\mathcal{S}$. In this section, we will compute the space $\mathcal{G}_{\mathbf{W}}(M\times N,P)$. 
We will always assume that $\mathrm{Supp}\, P=\mathrm{Supp}\, M+\mathrm{Supp}\, N$, otherwise it is obvious that $\mathcal{G}_{\mathbf{W}}(M\times N,P)=0$. In the previous section, it has been shown that $\mathrm{dim} \mathcal{G}_{\mathbf{W}}(M\times N,P)\leq 2$, and the case $\mathrm{dim} \mathcal{G}_{\mathbf{W}}(M\times N,P)=2$ has been determined. So it remains to decide when $\mathcal{G}_{\mathbf{W}}(M\times N,P)$ is zero or not. The final result is very simple to state, because $\mathrm{dim} \mathcal{G}_{\mathbf{W}}(M\times N,P)$ only depends on $\deg M,\deg N$ and $\deg P$. \subsection {Upper bound for $\mathrm{dim} \mathcal{G}_{\mathbf{W}}(M\times N,P)$} \begin{lemma} Let $M,N,P$ and $Q\in \mathcal{S}$ and let $\phi\in\mathcal{G}_{\mathfrak{sl}(2)}(P,Q)$. We have \noindent\centerline{$\phi_*\tilde{\mathcal{G}}_{\mathfrak{sl}(2)}(M\times N,P) \subset \tilde{\mathcal{G}}_{\mathfrak{sl}(2)}(M\times N,Q)$} \end{lemma} \begin{proof} {\it Step 1:} Set $\Omega_1=L_0^2+L_0-L_{-1}L_1$ and $\Omega_2=L_0^2+2L_0-L_{-2}L_2$. Indeed $\Omega_1$ is the Casimir element of $U(\mathfrak{sl}(2))$ and it acts as some scalar $c(X)$ on any $\mathbf{W}$-module $X \in \mathcal{S}$. It turns out that $\Omega_2$ acts on $X$ as $4c(X)$. {\it Step 2:} In order to prove the lemma, we can assume that $\phi\neq 0$. Therefore $c(P)=c(Q)$ and $\Omega_2$ acts by the same scalar on $P$ and on $Q$. Thus we get \noindent\centerline{$\Omega_2\circ\phi=\phi\circ\Omega_2$.} \noindent Since $\phi$ commutes with $L_0$, we get \noindent\centerline{$(L_{-2}L_2)\circ\phi=\phi\circ(L_{-2}L_2)$.} \noindent from which the lemma follows. \end{proof} \begin{lemma}\label{inequality} Let $\delta_1,\delta_2$ and $\gamma\in\mathbb{C}$ and let $u$ and $v$ be $\mathbb{Z}$-cosets. Assume that none of the following conditions is satisfied (i) $\gamma= 0,1/2$ or $1$, (ii) $\delta_1=-1/2$, $\delta_2\in\{0,1\}$ and $\gamma\in\{-1/2,3/2\}$, (iii) $\delta_1\in\{0,1\}$, $\delta_2=-1/2$ and $\gamma\in\{-1/2,3/2\}$. 
Then we have: \noindent\centerline {$\mathrm{dim} {\mathcal{G}}_{\mathbf{W}}(\Omega^{\delta_1}_u\times\Omega^{\delta_2}_v, \Omega^{\gamma}_{u+v})+ \mathrm{dim} {\mathcal{G}}_{\mathbf{W}}(\Omega^{\delta_1}_u\times\Omega^{\delta_2}_v, \Omega^{1-\gamma}_{u+v})\leq 1$.} \end{lemma} \begin{proof} By Lemma \ref{sl(2)-germs}, there exists an isomorphism $\phi:\mathcal{G}_{\mathfrak{sl}(2)}(\Omega^{1-\gamma}_{u+v})\rightarrow \mathcal{G}_{\mathfrak{sl}(2)}(\Omega^{\gamma}_{u+v})$. Obviously we have ${\mathcal{G}}_{\mathbf{W}}(\Omega^{\delta_1}_u\times\Omega^{\delta_2}_v, \Omega^{\gamma}_{u+v})\subset \tilde{\mathcal{G}}_{\mathfrak{sl}(2)}(\Omega^{\delta_1}_u\times\Omega^{\delta_2}_v, \Omega^{\gamma}_{u+v})$ and by the previous lemma we also have $\phi_*{\mathcal{G}}_{\mathbf{W}}(\Omega^{\delta_1}_u\times\Omega^{\delta_2}_v, \Omega^{1-\gamma}_{u+v})\subset \tilde{\mathcal{G}}_{\mathfrak{sl}(2)}(\Omega^{\delta_1}_u\times\Omega^{\delta_2}_v, \Omega^{\gamma}_{u+v})$. By condition (i) and Lemma \ref{zero_intersection}, the two subspaces ${\mathcal{G}}_{\mathbf{W}}(\Omega^{\delta_1}_u\times\Omega^{\delta_2}_v, \Omega^{\gamma}_{u+v})$ and $\phi_*{\mathcal{G}}_{\mathbf{W}}(\Omega^{\delta_1}_u\times\Omega^{\delta_2}_v, \Omega^{1-\gamma}_{u+v})$ intersect trivially, thus we have \noindent $\mathrm{dim} {\mathcal{G}}_{\mathbf{W}}(\Omega^{\delta_1}_u\times\Omega^{\delta_2}_v, \Omega^{\gamma}_{u+v})+ \mathrm{dim} {\mathcal{G}}_{\mathbf{W}}(\Omega^{\delta_1}_u\times\Omega^{\delta_2}_v, \Omega^{1-\gamma}_{u+v})\leq$ \hskip7cm$\mathrm{dim} \tilde{\mathcal{G}}_{\mathfrak{sl}(2)}(\Omega^{\delta_1}_u\times\Omega^{\delta_2}_v, \Omega^{\gamma}_{u+v})$. By conditions (i), (ii) and (iii) and Lemma \ref{dim_tilde_cG}, we have $\mathrm{dim} \tilde{\mathcal{G}}_{\mathfrak{sl}(2)}(\Omega^{\delta_1}_u\times\Omega^{\delta_2}_v, \Omega^{\gamma}_{u+v})\leq 1$, which proves the lemma.
\end{proof} \subsection {Necessary condition for $\tilde{\mathcal{G}}_{\mathfrak{sl}(2)}(\Omega^{\delta_1}_u\times\Omega^{\delta_2}_v, \Omega^{\gamma}_{u+v})\neq 0$}\label{subsect_det} Recall the notations of the previous section. For $k=1,\,2$, set \begin{align*} &a_k(x,y)=(x+k\delta_1)(y-k\delta_2), \\ &b_k(x,y)=k^2(\delta_1+\delta_2-\gamma-\delta_1^2-\delta_2^2+\gamma^2)-2xy, \\ &c_k(x,y)=(x-k\delta_1)(y+k\delta_2). \end{align*} Given an auxiliary integer $l$, set $A_{il}(x,y)=a_i(x+l,y-l)$, $B_{il}(x,y)=b_i(x+l,y-l)$ and $C_{il}(x,y)=c_i(x+l,y-l)$ and set \[ \mathbf{M}=\begin{pmatrix} A_{1,5}(x,y) & B_{1,5}(x,y) & C_{1,5}(x,y) & 0 & 0 & 0 \\ 0 & A_{1,4}(x,y) & B_{1,4}(x,y) & C_{1,4}(x,y) & 0 & 0 \\ 0 & 0 & A_{1,3}(x,y) & B_{1,3}(x,y) & C_{1,3}(x,y) & 0 \\ 0 & 0 & 0 & A_{1,2}(x,y) & B_{1,2}(x,y) & C_{1,2}(x,y) \\ A_{2,4}(x,y) & 0 & B_{2,4}(x,y) & 0 & C_{2,4}(x,y) & 0 \\ 0 & A_{2,3}(x,y) & 0 & B_{2,3}(x,y) & 0 & C_{2,3}(x,y) \end{pmatrix}. \] \noindent Moreover set \noindent\centerline{$\mathbf{D}_{\delta_1,\delta_2,\gamma}(x,y)=\mathrm{det}\mathbf{M}$.} In what follows, we will consider $\mathbf{D}_{\delta_1,\delta_2,\gamma}(x,y)$ as a polynomial in the variables $x$ and $y$, with parameters $\delta_1,\delta_2$ and $\gamma$. \begin{lemma}\label{vanishing_det} If $\tilde{\mathcal{G}}_{\mathfrak{sl}(2)}(\Omega^{\delta_1}_u\times\Omega^{\delta_2}_v, \Omega^{\gamma}_{u+v})\neq 0$ then $\mathbf{D}_{\delta_1,\delta_2,\gamma}(x,y)=0$, for all $(x,y)\in\mathbb{C}^2$. \end{lemma} \begin{proof} Assume that $\tilde{\mathcal{G}}_{\mathfrak{sl}(2)}(\Omega^{\delta_1}_u\times\Omega^{\delta_2}_v,\Omega^{\gamma}_{u+v})\neq 0$. Thus there exists an $L_0$-equivariant bilinear map $\pi: \Omega^{\delta_1}_u\times\Omega^{\delta_2}_v\rightarrow\Omega^{\gamma}_{u+v}$ whose germ is a non-zero element of $\tilde{\mathcal{G}}_{\mathfrak{sl}(2)}(\Omega^{\delta_1}_u\times\Omega^{\delta_2}_v, \Omega^{\gamma}_{u+v})$.
For $(x,y)\in u\times v$, define the scalar $X(x,y)$ by the identity \noindent\centerline{$\pi(e_x^{\delta_1},e_y^{\delta_2})=X(x,y) e_{x+y}^{\gamma}$.} Set $\mathbf{X}(x,y)=(X(x+6,y-6), X(x+5,y-5),\dots,X(x+1,y-1))$. Using Lemmas \ref{consecutive} and \ref{Casimir}, there exists $(x_0,y_0)$ such that (i) $(X(x+2,y-2),X(x+1,y-1))\neq 0$, and (ii) $\mathbf{M}.^t\mathbf{X}(x,y)=0$, \noindent for all $(x,y)\in C(x_0,y_0)$. The first assertion implies that $\mathbf{X}(x,y)\neq 0$ for any $(x,y)\in C(x_0,y_0)$. Thus $\mathrm{det}\mathbf{M}$ vanishes on $C(x_0,y_0)$. Since $C(x_0,y_0)$ is Zariski dense, $\mathbf{D}_{\delta_1,\delta_2,\gamma}(x,y)=0$, for all $(x,y)\in\mathbb{C}^2$. \end{proof} \subsection{Zeroes of the polynomials $p_{i,j}(\delta_1,\delta_2,\gamma)$} \label{sect_poly-pij} Define the polynomials $p_{i,j}(\delta_1,\delta_2,\gamma)$ by the identity \noindent\centerline{$\mathbf{D}_{\delta_1,\delta_2,\gamma}(x,y)= \sum_{i,j}p_{i,j}(\delta_1,\delta_2,\gamma) x^iy^j$.} \noindent Since the entries of the matrix $\mathbf{M}$ are quadratic polynomials in $x,y,\delta_1,\delta_2$ and $\gamma$, $\mathbf{D}_{\delta_1,\delta_2,\gamma}(x,y)$ is a polynomial of degree $\leq 12$. Set $C(\delta_1,\delta_2,\gamma)=(\delta_1+\delta_2+\gamma)(\delta_1+\delta_2-\gamma) (\delta_1+\delta_2+1-\gamma)(\delta_1+\delta_2-1+\gamma)$. \begin{lemma}\label{divisibility} (i) We have $p_{i,j}(\delta_1,\delta_2,\gamma)=p_{i,j}(\delta_1,\delta_2,1-\gamma)$, (ii) Each polynomial $p_{i,j}(\delta_1,\delta_2,\gamma)$ is divisible by $C(\delta_1,\delta_2,\gamma)$. \end{lemma} \begin{proof} All entries of the matrix $\mathbf{M}$ are invariant under the involution $\gamma\mapsto 1-\gamma$, so we have \noindent\centerline{$\mathbf{D}_{\delta_1,\delta_2,\gamma}(x,y)=\mathbf{D}_{\delta_1,\delta_2,1-\gamma}(x,y)$} \noindent which implies the first assertion.
It follows from Lemma \ref{lower} that $\mathcal{G}_{\mathbf{W}}(\Omega^{\delta_1}_u\times\Omega^{\delta_2}_v,\Omega^{\gamma}_{u+v})\neq 0$ whenever $\gamma=\delta_1+\delta_2$ or $\gamma=\delta_1+\delta_2+1$. Hence by Lemma \ref{vanishing_det}, as a polynomial in $\delta_1,\delta_2,\gamma,x$ and $y$, $\mathbf{D}_{\delta_1,\delta_2,\gamma}(x,y)$ is divisible by $D(\delta_1,\delta_2,\gamma)$, where $D(\delta_1,\delta_2,\gamma) =(\delta_1+\delta_2-\gamma) (\delta_1+\delta_2+1-\gamma)$. By the first assertion, it is also divisible by $D(\delta_1,\delta_2,1-\gamma)$. Since we have \noindent\centerline{$C(\delta_1,\delta_2,\gamma)= D(\delta_1,\delta_2,\gamma)D(\delta_1,\delta_2,1-\gamma)$} \noindent each $p_{i,j}$ is divisible by $C(\delta_1,\delta_2,\gamma)$. \end{proof} Denote by $\tau$ the involution $(\delta_1,\delta_2,\gamma)\mapsto(\delta_1,\delta_2,1-\gamma)$. Let $\mathfrak{Z}\subset \mathbb{C}^3$ be the following set $\mathfrak{Z}=(\bigcup_{0\leq i\leq 1} H_i\cup H_i^\tau) \bigcup (\bigcup_{1\leq i\leq 4} D_i\cup D_i^\tau) \bigcup (\bigcup_{1\leq i\leq 2} \{P_i, P_i^\tau\})$, \noindent where the planes $H_i$, the lines $D_i$ and the points $P_i$ are defined in Section \ref{sect_example-germ}. For a polynomial $f$, denote by $Z(f)$ its zero set. \begin{lemma}\label{zero_pij} We have $Z(p_{1,3})\cap Z(p_{3,1})\cap Z(p_{2,2})\subset\mathfrak{Z}$. \end{lemma} The proof requires the explicit computation of $\mathbf{D}_{\delta_1,\delta_2,\gamma}(x,y)=\mathrm{det}\mathbf{M}$, and we have used MAPLE for this purpose. The next proof contains the explicit expressions of $p_{1,3}, p_{3,1}$ and $p_{2,2}$. The whole expression for $\mathrm{det}\mathbf{M}$ is given in Appendix A.
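Independently of MAPLE, the vanishing of $\det\mathbf{M}$ used in Lemma \ref{divisibility} can be double-checked with any computer algebra system: at $\gamma=\delta_1+\delta_2$ the vector $(1,\dots,1)$ lies in the kernel of $\mathbf{M}$ (each row sum $a_k+b_k+c_k$ vanishes), and at $\gamma=\delta_1+\delta_2+1$ so does the vector with entries $\delta_1(y-l)-\delta_2(x+l)$, $l=6,\dots,1$. The following SymPy sketch (variable names are ours) verifies both kernel identities:

```python
import sympy as sp

x, y, d1, d2, g = sp.symbols('x y delta1 delta2 gamma')

# Entries a_k, b_k, c_k and their shifts A_{il}, B_{il}, C_{il}
def a(k, X, Y): return (X + k*d1)*(Y - k*d2)
def b(k, X, Y): return k**2*(d1 + d2 - g - d1**2 - d2**2 + g**2) - 2*X*Y
def c(k, X, Y): return (X - k*d1)*(Y + k*d2)

def A(i, l): return a(i, x + l, y - l)
def B(i, l): return b(i, x + l, y - l)
def C(i, l): return c(i, x + l, y - l)

M = sp.Matrix([
    [A(1, 5), B(1, 5), C(1, 5), 0, 0, 0],
    [0, A(1, 4), B(1, 4), C(1, 4), 0, 0],
    [0, 0, A(1, 3), B(1, 3), C(1, 3), 0],
    [0, 0, 0, A(1, 2), B(1, 2), C(1, 2)],
    [A(2, 4), 0, B(2, 4), 0, C(2, 4), 0],
    [0, A(2, 3), 0, B(2, 3), 0, C(2, 3)],
])

# gamma = delta1 + delta2: the germ of the product map gives X(x,y) = 1,
# so the all-ones vector must lie in the kernel of M.
ones = sp.Matrix([1]*6)
r1 = (M.subs(g, d1 + d2) * ones).applyfunc(sp.expand)
assert r1 == sp.zeros(6, 1)

# gamma = delta1 + delta2 + 1: X(s,t) = delta1*t - delta2*s is a solution,
# ordered as (X(x+6,y-6), ..., X(x+1,y-1)).
vec = sp.Matrix([d1*(y - l) - d2*(x + l) for l in range(6, 0, -1)])
r2 = (M.subs(g, d1 + d2 + 1) * vec).applyfunc(sp.expand)
assert r2 == sp.zeros(6, 1)
```

Since both kernel vectors are non-zero, $\det\mathbf{M}$ vanishes identically on the hyperplanes $\gamma=\delta_1+\delta_2$ and $\gamma=\delta_1+\delta_2+1$; combined with the $\gamma\mapsto 1-\gamma$ invariance, this recovers the divisibility by $C(\delta_1,\delta_2,\gamma)$.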
\begin{proof} {\it Step 1:} By Lemma \ref{divisibility}, there are polynomials $q_{ij}$ such that \noindent\centerline{$p_{ij}(\delta_1,\delta_2,\gamma)=C(\delta_1,\delta_2,\gamma) q_{ij}(\delta_1,\delta_2,\gamma)$.} \noindent Since $Z(C)$ is the union of the four planes $H_0, H_0^\tau, H_1, H_1^\tau$, it remains to prove that $Z(q_{1,3})\cap Z(q_{3,1})\cap Z(q_{2,2})\subset\mathfrak{Z}$. Using MAPLE, it turns out that \vspace{-0.1in} \begin{align*} q_{2,2}= &\gamma^2(1-\gamma)^2+2\gamma(1-\gamma)(\delta_1^2+\delta_2^2-2\delta_1 \delta_2-2(\delta_1+\delta_2)+4)\\ &+(\delta_1^4+\delta_2^4-4\delta_1\delta_2(\delta_1^2+ \delta_2^2)+38\delta_1^2\delta_2^2)-4(\delta_1+\delta_2)^3\\ &-(13(\delta_1^2+\delta_2^2)-6\delta_1\delta_2)+4(\delta_1+\delta_2)+12, \end{align*} \vspace{-0.3in} \begin{align*} -\frac{1}{8}q_{1,3}= &\delta_1(\delta_1-1)[\gamma(1-\gamma) +\delta_1^2+\delta_2^2-4\delta_1\delta_2+3(\delta_1-\delta_2)+2], \end{align*} \vspace{-0.3in} \begin{align*} -\frac{1}{8}q_{3,1}= &\delta_2(\delta_2-1)[\gamma(1-\gamma) +\delta_1^2+\delta_2^2-4\delta_1\delta_2-3(\delta_1-\delta_2)+2]. \end{align*} {\it Step 2:} The previous expressions provide (miraculous) factorizations \noindent\centerline{$-\frac{1}{8} q_{1,3}=L_1 L_2 Q$ and $-\frac{1}{8} q_{3,1}=L'_1 L'_2 Q'$} \noindent where $L_1, L_2,L'_1,$ and $L'_2$ are degree one polynomials and $Q$ and $Q'$ are quadratic polynomials. We have to prove that $Z(P)\cap Z(P')\cap Z(q_{2,2})\subset \mathfrak{Z}$ for any factor $P$ of $q_{1,3}$ and any factor $P'$ of $q_{3,1}$. This amounts to 9 cases, which will be treated separately. {\it Step 3: proof that $Z(L_i)\cap Z(L'_j)\cap Z(q_{2,2})\subset \mathfrak{Z}$, $\forall i,j\in\{1,2\}$.} We claim that, in each case, the intersection consists of $4$ points lying in $\mathfrak{Z}$. Since the four cases are similar, we will only consider the case where the first factor is $\delta_1$ and the second one is $\delta_2$.
For a point $(0,0,\gamma)\in Z(\delta_1)\cap Z(\delta_2)\cap Z(q_{2,2})$, we have \noindent\centerline{ $q_{2,2}(0,0,\gamma)=\gamma^2(1-\gamma)^2 + 8 \gamma(1-\gamma)+12=0$.} \noindent Thus we have $\gamma(1-\gamma)=-2$ or $-6$. It follows that $Z(\delta_1)\cap Z(\delta_2)\cap Z(q_{2,2})$ consists of the four points $(0,0,-2), (0,0,-1),(0,0,2),(0,0,3)$ which are all in $\mathfrak{Z}$. {\it Step 4: proof that $Z(L_i)\cap Z(Q')\cap Z(q_{2,2})\subset \mathfrak{Z}$, $\forall i\in\{1,2\}$.} More precisely, we claim that the planar quadric $Z(L_i)\cap Z(Q')$ consists of two lines which are both in $\mathfrak{Z}$. Since the two cases are similar, we will just treat the case where the factor $L_i$ is $\delta_1$. We have $Q'(0,\delta_2,\gamma)=\gamma(1-\gamma)+\delta_2^2+3\delta_2+2$ \noindent \hskip2.65cm$=-(\gamma+\delta_2+1)(\gamma-\delta_2-2),$ \noindent which proves the claim. It follows that $Z(L_i)\cap Z(Q')\cap Z(q_{2,2})\subset \mathfrak{Z}$. {\it Step 5: Proof that $Z(Q)\cap Z(L_j')\cap Z(q_{2,2})\subset \mathfrak{Z}$, $\forall j\in\{1,2\}$.} This case is identical to the previous one. {\it Step 6: Proof that $Z(Q)\cap Z(Q')\cap Z(q_{2,2})\subset \mathfrak{Z}$.} Indeed $Q'-Q$ is a scalar multiple of $\delta_1-\delta_2$ and therefore $Z(Q)\cap Z(Q')$ is (again miraculously) a planar quadric. We have $Q(\delta,\delta,\gamma)= \gamma(1-\gamma)-2\delta^2+2$, so $Z(Q)\cap Z(Q')$ is the set of all $(\delta,\delta,\gamma)\in\mathbb{C}^3$ such that $\gamma(1-\gamma)=2\delta^2-2$. Since $q_{2,2}(\delta,\delta,\gamma)$ is a polynomial in $\delta$ and $\gamma(1-\gamma)$, we can eliminate $\gamma(1-\gamma)$. We have \noindent\centerline{ $q_{2,2}(\delta,\delta,\gamma)=12\delta(3\delta+2)(\delta-1)^2$} \noindent for any $(\delta,\delta,\gamma)\in Z(Q)\cap Z(Q')$. It follows that $Z(Q)\cap Z(Q')\cap Z(q_{2,2})$ consists of the $6$ points $(1,1,0), (1,1,1), (0,0,-1),(0,0,2),(-2/3,-2/3,-2/3)$ and $(-2/3,-2/3,5/3)$. Since these are all in $\mathfrak{Z}$, the proof is complete.
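The eliminations in Steps 3, 4 and 6 are elementary and can be reproduced symbolically. A small SymPy sketch (our variable names; the expressions for $q_{2,2}$ and the quadratic factors $Q$, $Q'$ are those displayed in Step 1, with the bracket exponent read as $\delta_1^2$):

```python
import sympy as sp

d1, d2, g, d = sp.symbols('delta1 delta2 gamma delta')

# q_{2,2} as displayed in Step 1
q22 = (g**2*(1 - g)**2
       + 2*g*(1 - g)*(d1**2 + d2**2 - 2*d1*d2 - 2*(d1 + d2) + 4)
       + (d1**4 + d2**4 - 4*d1*d2*(d1**2 + d2**2) + 38*d1**2*d2**2)
       - 4*(d1 + d2)**3 - (13*(d1**2 + d2**2) - 6*d1*d2) + 4*(d1 + d2) + 12)

# Step 3: q22(0,0,gamma) vanishes exactly at gamma in {-2,-1,2,3}
roots = sp.solve(q22.subs({d1: 0, d2: 0}), g)
assert set(roots) == {-2, -1, 2, 3}

# Step 4: the quadratic factor Q' of q_{3,1} factors on the plane delta1 = 0
Qp = g*(1 - g) + d1**2 + d2**2 - 4*d1*d2 - 3*(d1 - d2) + 2
assert sp.expand(Qp.subs(d1, 0) + (g + d2 + 1)*(g - d2 - 2)) == 0

# Step 6: modulo gamma*(1-gamma) = 2*delta**2 - 2, i.e. gamma**2 - gamma + 2*delta**2 - 2 = 0,
# q22(delta,delta,gamma) reduces to 12*delta*(3*delta+2)*(delta-1)**2
red = sp.rem(q22.subs({d1: d, d2: d}), g**2 - g + 2*d**2 - 2, g)
assert sp.expand(red - 12*d*(3*d + 2)*(d - 1)**2) == 0
```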
\end{proof} With more care, it is easy to prove that $\bigcap Z(p_{i,j})$ is precisely $\mathfrak{Z}$ but this is not necessary for what follows. \subsection{Determination of $\mathcal{G}_{\mathbf{W}}(M\times N,P)$} Recall that $\mathfrak{z}^*$ is the set of all $(\delta_1,\delta_2,\gamma)\in\mathfrak{z}$ such that $\{\delta_1,\delta_2,\gamma\}\not\subset\{0,1\}$. Let $M$, $N$ and $P$ be in $\mathcal{S}$. In order to determine $\mathcal{G}_{\mathbf{W}}(M\times N,P)$ we will always assume that \noindent\centerline {$\mathrm{Supp}\, P=\mathrm{Supp}\, M+\mathrm{Supp}\, N$.} \noindent Otherwise $\mathcal{G}_{\mathbf{W}}(M\times N,P)$ would be obviously zero. Next let $\delta_1\in\deg M$, $\delta_2\in\deg N$ and $\gamma\in\deg P$. \begin{thm}\label{thm2} We have (i) $\mathrm{dim} \mathcal{G}_{\mathbf{W}}(M\times N,P)=2$ if $\{\delta_1,\delta_2,\gamma\}\subset\{0,1\}$, and the maps $\pi_1$, $\pi_2$ of Lemma \ref{lower} form a basis of this space, (ii) $\mathrm{dim} \mathcal{G}_{\mathbf{W}}(M\times N,P)=1$ if $(\delta_1,\delta_2,\gamma)\in \mathfrak{z}^*$ and the map $\pi$ of Table \ref{table1} provides a generator of this space, (iii) $\mathrm{dim} \mathcal{G}_{\mathbf{W}}(M\times N,P)=0$ otherwise. \end{thm} \begin{proof} Set $u=\mathrm{Supp}\, M$ and $v=\mathrm{Supp}\, N$. By Lemmas \ref{W-germs} and \ref{lemma_germ-gen}, we can assume that $M=\Omega_u^{\delta_1}$, $N=\Omega_v^{\delta_2}$ and $P=\Omega_{u+v}^{\gamma}$. {\it Step 1:} We claim that $(\delta_1,\delta_2,\gamma)$ belongs to $\mathfrak{z}$ if $\mathcal{G}_{\mathbf{W}}(M\times N,P)\neq 0$. Assume that $\mathcal{G}_{\mathbf{W}}(M\times N,P)\neq 0$. Since $\tilde{\mathcal{G}}_{\mathfrak{sl}(2)}(M\times N,P)\neq 0$ it follows from Lemmas \ref{vanishing_det} and \ref{zero_pij} that $(\delta_1,\delta_2,\gamma)$ belongs to $\mathfrak{Z}$. It is clear from its definition that $\mathfrak{Z}\subset \mathfrak{z}\cup\mathfrak{z}^\tau$.
Hence $(\delta_1,\delta_2,\gamma)$ or $(\delta_1,\delta_2,1-\gamma)$ belongs to $\mathfrak{z}$. When $(\delta_1,\delta_2,1-\gamma)\notin\mathfrak{z}$ the claim is proved. Moreover if $\gamma=0,1/2$ or $1$, we have $\mathcal{G}_{\mathbf{W}}(\Omega_{u+v}^{\gamma})=\mathcal{G}_{\mathbf{W}}(\Omega_{u+v}^{1-\gamma})$ and thus $\mathcal{G}_{\mathbf{W}}(\Omega_u^{\delta_1}\times \Omega_v^{\delta_2}, \Omega_{u+v}^{1-\gamma}) =\mathcal{G}_{\mathbf{W}}(\Omega_u^{\delta_1}\times \Omega_v^{\delta_2}, \Omega_{u+v}^{\gamma})$, which proves the claim in this case. Therefore, we can assume that $(\delta_1,\delta_2,1-\gamma)\in \mathfrak{z}$ and that $\gamma\notin\{0,1/2,1\}$. By Lemma \ref{lower}, we have $\mathcal{G}_{\mathbf{W}}(\Omega_u^{\delta_1}\times \Omega_v^{\delta_2}, \Omega_{u+v}^{1-\gamma})\neq 0$. Therefore it follows that \noindent\centerline{ $\mathrm{dim} \mathcal{G}_{\mathbf{W}}(\Omega_u^{\delta_1}\times \Omega_v^{\delta_2}, \Omega_{u+v}^{\gamma}) +\mathrm{dim} \mathcal{G}_{\mathbf{W}}(\Omega_u^{\delta_1}\times \Omega_v^{\delta_2}, \Omega_{u+v}^{1-\gamma}) \geq 2$.} \noindent By Lemma \ref{inequality} we have (i) $\delta_1=-1/2$, $\delta_2 \in \{0,1\}$ and $\gamma\in\{-1/2,3/2\}$, or (ii) $\delta_1 \in\{0,1\}$, $\delta_2=-1/2$, and $\gamma\in\{-1/2,3/2\}$. \noindent These $8$ possible triples for $(\delta_1,\delta_2,\gamma)$ belong to $\mathfrak{z}$ and therefore the claim is proved. {\it Step 2:} Assertion (i) follows from Lemma \ref{germ_dim=2}. From now on, we can assume that $\{\delta_1,\delta_2,\gamma\}\not\subset\{0,1\}$. It follows that $\mathrm{dim} \mathcal{G}_{\mathbf{W}}(\Omega_u^{\delta_1}\times \Omega_v^{\delta_2}, \Omega_{u+v}^{\gamma})=0$ or $1$. In particular Assertion (ii) and (iii) are equivalent and it is enough to prove the first one. 
If $(\delta_1,\delta_2,\gamma)\in \mathfrak{z}^*$ we have $\mathcal{G}_\mathbf{W}(\Omega_u^{\delta_1}\times \Omega_v^{\delta_2}, \Omega_{u+v}^{\gamma})\neq 0$ by Lemma \ref{lower} and therefore $\mathrm{dim} \mathcal{G}_\mathbf{W}(\Omega_u^{\delta_1}\times \Omega_v^{\delta_2}, \Omega_{u+v}^{\gamma})=1$. Conversely, if $\mathrm{dim} \mathcal{G}_\mathbf{W}(\Omega_u^{\delta_1}\times \Omega_v^{\delta_2}, \Omega_{u+v}^{\gamma})=1$, it follows from the previous step that $(\delta_1,\delta_2,\gamma)$ belongs to $\mathfrak{z}^*$. Thus assertion (ii) is proved. \end{proof} \section{On the map $\mathbf{B}_{\mathbf{W}}(M\times N,P)\rightarrow\mathcal{G}_{\mathbf{W}}(M\times N,P)$} Let $M,N$ and $P$ be in $\mathcal{S}$. The space $\mathcal{G}_{\mathbf{W}}(M\times N,P)$ has been determined by Theorem 2. In particular $\mathcal{G}_{\mathbf{W}}(M\times N,P)$ always has dimension $0$, $1$ or $2$. In Sections 8-10, we determine which germs $\mu\in \mathcal{G}_{\mathbf{W}}(M\times N,P)$ can be lifted to a $\mathbf{W}$-equivariant bilinear map $\pi:M\times N\rightarrow P$. Since the final result contains many particular cases, it has been split into two parts. Indeed Theorem 3.1 (in Section 9) involves the case where $\mathcal{G}_{\mathbf{W}}(M\times N,P)$ has dimension one, and Theorem 3.2 (in Section 10) involves the case where $\mathcal{G}_{\mathbf{W}}(M\times N,P)$ has dimension two. In this section, we recall general facts and conventions used in Sections 9 and 10. \subsection{Germs and $\mathfrak{S}_3$-symmetry} Let $M,N,P\in{\mathcal{S}}$. Recall the exact sequence: \noindent\centerline{ $0\rightarrow \mathbf{B}_{\mathbf{W}}^0(M\times N,P)\rightarrow \mathbf{B}_{\mathbf{W}}(M\times N,P)\rightarrow \mathcal{G}_{\mathbf{W}}(M\times N,P)$.} Determining the image of the map $\mathbf{B}_{\mathbf{W}}(M\times N,P)\rightarrow \mathcal{G}_{\mathbf{W}}(M\times N,P)$ is easy, but it requires a very long case-by-case analysis.
It would be pleasant to use the $\mathfrak{S}_3$-symmetry to reduce the number of cases. Unfortunately the definition of $\mathcal{G}_{\mathbf{W}}(M\times N,P)$ is not $\mathfrak{S}_3$-symmetric. However, set \noindent\centerline{ $\overline{\mathbf{B}}_{\mathbf{W}}(M\times N,P)= \mathbf{B}_{\mathbf{W}}(M\times N,P)/\mathbf{B}^{0}_{\mathbf{W}}(M\times N,P)$.} \begin{lemma} For any $M,N$ and $P\in\mathcal{S}$, we have \noindent\centerline{ $\overline{\mathbf{B}}_{\mathbf{W}}(M\times N,P^*)=\overline{\mathbf{B}}_{\mathbf{W}}(M\times P,N^*)$} \end{lemma} \begin{proof} By Lemma \ref{germ-deg}, the space $\mathbf{B}^{0}_{\mathbf{W}}(M\times N,P^*)$ is exactly the space of degenerate $\mathbf{W}$-equivariant maps from $M\times N$ to $P^\ast$. Hence $\mathbf{B}^{0}_{\mathbf{W}}(M\times N,P^*)$ and $\overline{\mathbf{B}}_{\mathbf{W}}(M\times N,P^*)$ are fully symmetric in $M,N$ and $P$. \end{proof} \subsection{List of cases for the proof of Theorem 3} We start with a general result: \begin{lemma}\label{lemma_simple} Let $M,N$ and $P\in\mathcal{S}$ be irreducible $\mathbf{W}$-modules. We have \noindent\centerline{$\mathbf{B}_\mathbf{W}(M\times N,P)\simeq\mathcal{G}_{\mathbf{W}}(M\times N,P)$.} \end{lemma} \begin{proof} Looking at Table \ref{table2}, it is clear that $\mathbf{B}^0_\mathbf{W}(M\times N,P)=0$ whenever $M,N$ and $P$ are irreducible. Moreover, it is clear from Table \ref{table1} that any germ can be lifted. \end{proof} Let $M,N$ and $P\in \mathcal{S}$. In order to determine $\overline{\mathbf{B}}_{\mathbf{W}}(M\times N,P)$, we will always tacitly assume that $\mathcal{G}_{\mathbf{W}}(M\times N,P)\neq 0$. By the previous lemma, we can assume that at least one module is reducible, i.e. in the $AB$-family. As usual, we will assume that all modules are indecomposable.
Using the $\mathfrak{S}_3$-symmetry, we can reduce to the following 6 cases: \begin{enumerate} \item $\deg M=\deg N=\{0,1\}$ and $\deg P=2$, \item $\deg M=\deg N=\{0,1\}$ and $\deg P=3$, \item $\deg M=\{0,1\}$, $\deg N=\delta$ and $\deg P=\delta$ with $\delta \in \mathbb{C} \setminus \{0,1\}$, \item $\deg M=\{0,1\}$, $\deg N=\delta$ and $\deg P=\delta+1$ with $\delta \in \mathbb{C} \setminus \{0,1\}$, \item $\deg M=\{0,1\}$, $\deg N=\delta$ and $\deg P=\delta+2$ with $\delta \in \mathbb{C} \setminus \{0,1\}$, \item $\deg M=\deg N=\deg P=\{0,1\}$. \end{enumerate} Cases 1-5 are treated in Section 9. In these cases, we have $\mathrm{dim}\mathcal{G}_\mathbf{W}(M\times N,P)=1$, so it is enough to decide if ${\overline\mathbf{B}}_\mathbf{W}(M\times N,P)$ is zero or not. Case 6 is treated in Section 10. In this case, $\mathcal{G}_\mathbf{W}(M\times N,P)$ is two dimensional and the analysis is more involved. \subsection{Typical argument for the proof of Theorem 3} Let $M \in \mathcal{S}$ and let $u$ be its support. In Sections 9 and 10, we will denote by $(e_x^M)_{x \in u}$ a basis of $M$ as in Section \ref{KS_Theorem}. The proofs of Theorems 3.1 and 3.2 are given by several lemmas and a repeated procedure, that we call an \textit{argument by restriction}, which is described as follows. For an integer $d \in \mathbb{Z}_{>1}$, the subalgebra $\mathbf{W}^{(d)}:=\bigoplus_{n} \mathbb{C} L_{dn}$ is isomorphic to $\mathbf{W}$. Let $M$ be a $\mathbf{W}$-module in the class $\mathcal{S}$ and let $x \in \mathrm{Supp}\,\,M$. The subspace \[ M_d(x):=\bigoplus_{\substack{y \in u \\ x-y \in d\mathbb{Z}}} M_y \] of $M$ is a $\mathbf{W}^{(d)}$-module. Moreover, when $x \not\in d\mathbb{Z}$, $M_d(x)$ is irreducible. Now, let $M,N$ and $P$ be $\mathbf{W}$-modules in the class $\mathcal{S}$, let $x \in \mathrm{Supp}\,\,M,~y \in \mathrm{Supp}\,\, N$ and let $\overline{\pi} \in\mathcal{G}_{\mathbf{W}}(M\times N,P)$.
Since $\mathcal{G}_{\mathbf{W}^{(d)}}(X\times Y,Z)\simeq \mathbf{B}_{\mathbf{W}^{(d)}}(X\times Y,Z)$ whenever $X,Y$ and $Z$ are irreducible $\mathbf{W}^{(d)}$-modules of the class $\mathcal{S}$, $\overline{\pi}$ has a unique lifting $\pi$ to $M_d(x)\times N_d(y)$ whenever $x,y,x+y \not\in d\mathbb{Z}$. Hence, varying $d$ and $x,y$, we see that $\pi(e_x^M,e_y^N)$ is uniquely determined by $\overline{\pi}$ whenever $x,y,x+y\neq 0$. \section{Computation of $\overline{\mathbf{B}}_{\mathbf{W}}(M\times N,P)$ when $\mathrm{dim}\mathcal{G}_{\mathbf{W}}(M\times N,P)=1$} \subsection{The Theorem 3.1} The following table provides a list of triples $(M,\,N,\,P)$ of $\mathbf{W}$-modules of the class $\mathcal{S}$ and one non-zero element $\pi\in {\overline\mathbf{B}}_{\mathbf{W}}(M\times N,P)$. Since for each entry $(M,\,N,\,P)$ of Table \ref{table4} we have $\deg M$ or $\deg N$ or $\deg P\neq \{0,1\}$, the corresponding $\pi$ is a basis of ${\overline\mathbf{B}}_{\mathbf{W}}(M\times N,P)$. In what follows, we denote by $d_\xi$ and $d^\xi$ the natural maps: \centerline{$d^\xi:\Omega^0_0\rightarrow A_\xi$ and $d_\xi: B_\xi\rightarrow \Omega^1_0$.} \begin{table}[h]\caption{\textbf{Non-zero ${\overline\mathbf{B}}_{\mathbf{W}}(M\times N,P)$, when $\deg M, \,\deg N$ or $\deg P\neq \{0,1\}$}} \label{table4} \begin{center} \begin{tabular}{|c|c|c||c|} \hline & $M\times N$ or $N\times M$ & $P$ & $\pi$ \\ \hline 1.& $\Omega^{\delta_1}_u\times\Omega^{\delta_2}_{v}$ & $\Omega^{\delta_1+\delta_2}_{u+v}$& $P^{\delta_1,\delta_2}_{u,v}$\\ \hline 2.& $\Omega^{\delta_1}_u\times\Omega^{\delta_2}_{v}$ & $\Omega^{\delta_1+\delta_2+1}_{u+v}$&$B^{\delta_1,\delta_2}_{u,v}$ \\ \hline 3.& $A_\xi\times\Omega^{\delta}_{u}$ & $\Omega^{\delta+1}_{u}$&$B^{0,\delta}_{0,u}(\xi)$ \\ \hline 4.& $\Omega^{\delta}_{u}\times \Omega^{-\delta}_{-u}$ & $B_\xi$&$B^{\delta,-\delta}_{u,-u}(\xi)$ \\ \hline 5.& $\Omega^{-2/3}_u\times \Omega^{-2/3}_{v}$ & $\Omega^{5/3}_{u+v}$&$G_{u,v}$ \\ \hline 6.&
$B_\xi\times\Omega^{\delta}_{u}$ & $\Omega^{\delta+1}_{u}$&$P^{1,\delta}_{0,u}\circ (d_\xi\times id)$ \\ \hline 7.& $\Omega^{\delta}_{u}\times \Omega^{-\delta}_{-u}$ & $A_\xi$&$d^\xi\circ P^{\delta,-\delta}_{u,-u}$ \\ \hline 8.& $B_\xi\times \Omega^{\delta}_{u}$ & $\Omega^{\delta+2}_{u}$&$B^{1,\delta}_{0,u}\circ (d_\xi\times id)$ \\ \hline 9.& $\Omega^{\delta}_{u}\times \Omega^{-\delta-1}_{-u}$ & $A_\xi$&$d^\xi\circ B^{\delta,-\delta-1}_{u,-u}$ \\ \hline 10.& $A_{\eta}\times B_{\xi}$ & $\Omega^{2}_{0}$&$B^{0,1}_{0,0}(\eta)\circ(id\times d_{\xi})$ \\ \hline 11.& $A_{\eta}\times \Omega^{-1}_0$ & $A_{\xi}$& $d^{\xi}\circ B^{0,-1}_{0,0}(\eta)$ \\ \hline 12. & $B_{\xi}\times \Omega^{-1}_0$ & $B_{\eta}$& $B^{1,-1}_{0,0}(\eta)\circ(d_{\xi}\times id)$ \\ \hline 13.& $B_{\eta}\times B_\xi$ & $\Omega^{2}_{0}$& $P^{1,1}_{0,0}\circ(d_{\eta}\times d_{\xi})$ \\ \hline 14.& $B_{\eta}\times \Omega^{-1}_0$& $A_{\xi}$& $d^{\xi}\circ P^{1,-1}_{0,0}\circ(d_{\eta}\times id)$ \\ \hline 15.&$B_{\eta}\times B_{\xi}$ & $\Omega^3_0$ & $B^{1,1}_{0,0}\circ(d_{\eta}\times d_{\xi})$\\ \hline 16.&$B_{\eta}\times \Omega^{-2}_0$ & $A_\xi$ & $d^\xi\circ B^{1,-2}_{0,0}\circ(d_{\eta}\times id)$\\ \hline \end{tabular} \end{center} \vbox{\small{ The degree condition implies the following restrictions: $\{\delta_1,\delta_2,\delta_1+\delta_2\}\not\subset\{0,1\}$ in line 1, $(\delta_1,\delta_2)\neq (0,0)$ in line 2, $\delta\neq 0$ in lines 3 and 4, and $\delta\neq 0,\,1$ in lines 6 and 7.}} \end{table} Conversely, we have \begin{thm}\hskip-1.7mm{\bf 1} Let $M, N$ and $P$ be $\mathbf{W}$-modules of the class $\mathcal{S}$ with $\deg M$ or $\deg N$ or $\deg P\neq \{0,1\}$. Then ${\overline\mathbf{B}}_{\mathbf{W}}(M\times N,P)$ has dimension one if the triple $(M,N,P)$ appears in Table \ref{table4}. Otherwise, we have ${\overline\mathbf{B}}_{\mathbf{W}}(M\times N,P)=0$.
\end{thm} \subsection{The case $\deg M=\deg N=\{0,1\}$ and $\deg P=2$} In this case, there can be five subcases as follows: \begin{enumerate} \item $(M,N,P)=(A_{\xi},\Omega_u^0,\Omega_u^2)$ with $u \not\equiv 0 \mod \mathbb{Z}$, \item $(M,N,P)=(B_{\xi},\Omega_u^0,\Omega_u^2)$ with $u \not\equiv 0 \mod \mathbb{Z}$, \item $(M,N,P)=(A_{\eta},A_{\xi},\Omega_{0}^2)$, \item $(M,N,P)=(A_{\eta},B_{\xi},\Omega_{0}^2)$, \item $(M,N,P)=(B_{\eta},B_{\xi},\Omega_{0}^2)$. \end{enumerate} \begin{lemma}\label{prop_002} Let $M,N,P$ be $\mathbf{W}$-modules of the class $\mathcal{S}$ as above. Then $\overline{\mathbf{B}}_{\mathbf{W}}(M\times N,P)$ is trivial iff $(M,N,P)=(A_{\eta},A_{\xi},\Omega_{0}^2)$ with $\xi\neq \infty$ and $\eta\neq\infty$. \end{lemma} \begin{proof} By Table \ref{table4}, except for the case 3 with $\eta=\infty$ or $\xi=\infty$, we have $\mathrm{dim} \overline{\mathbf{B}}_{\mathbf{W}}(M\times N,P)=1$. Hence, it suffices to treat the case $(M,N,P)=(A_\eta,A_\xi,\Omega_0^2)$. Let $(a,b)$ and $(c,d)$ be projective coordinates of $\eta, \xi \in \mathbb{P}^1$, respectively, and let $\{e_m^M\}, \{e_m^N\}$ and $\{e_m^P\}$ be bases of $A_{a,b}, A_{c,d}$ and $\Omega_0^2$, respectively, as in Section 1.1. Assume that $\overline{\mathbf{B}}_{\mathbf{W}}(M\times N,P)\neq 0$. By an argument by restriction, one sees that there exists $\pi \in \mathbf{B}_{\mathbf{W}}(M \times N,P)$ such that $\pi(e_m^M,e_n^N)=e_{m+n}^P$ whenever $m,n,m+n\neq 0$. It can be checked that this formula extends to the case $m,n \neq 0$. Set $\pi(e_m^M,e_0^N)=X_1(m)e_m^P$ and $\pi(e_0^M,e_m^N)=X_2(m)e_m^P$ for $m\neq 0$. Calculating $L_n.\pi(e_m^M,e_0^N)$ for $n=1,2$, one obtains \\ \centerline{$(m+2n)X_1(m)=(m+n)X_1(m+n)+cn^2+dn$} \noindent from which we have $X_1(m)=-cm+d$. Similarly, one also obtains $X_2(m)=-am+b$. By calculating $L_m.\pi(e_0^M,e_0^N)$, we obtain $ac=0$, i.e. $\eta=\infty$ or $\xi=\infty$, which proves the lemma.
\end{proof} \subsection{The case $\deg M=\deg N=\{0,1\}$ and $\deg P=3$} In this case, there can be five subcases as follows: \begin{enumerate} \item $(M,N,P)=(A_\xi,\Omega_u^0,\Omega_u^3)$ with $u \not\equiv 0 \mod \mathbb{Z}$, \item $(M,N,P)=(B_\xi,\Omega_u^0,\Omega_u^3)$ with $u \not\equiv 0 \mod \mathbb{Z}$, \item $(M,N,P)=(A_{\xi},A_{\eta},\Omega_0^3)$, \item $(M,N,P)=(A_{\xi},B_{\eta},\Omega_0^3)$, \item $(M,N,P)=(B_{\xi},B_{\eta},\Omega_0^3)$. \end{enumerate} \begin{lemma}\label{prop_003} Let $M,N,P$ be $\mathbf{W}$-modules as above. Then ${\overline \mathbf{B}}_{\mathbf{W}}(M\times N,P)$ is trivial iff $(M,N,P)$ is one of the following types: \begin{enumerate} \item $(M,N,P)=(A_\xi,\Omega_u^1,\Omega_u^3)$ with $\xi\neq\infty$ and $u \not\equiv 0 \mod \mathbb{Z}$, \item $(M,N,P)=(A_\xi,A_\eta,\Omega_0^3)$ with $(\xi,\eta) \neq (\infty,\infty)$, \item $(M,N,P)=(A_{\xi},B_\eta,\Omega_0^3)$ with $\xi\neq\infty$. \end{enumerate} \end{lemma} \begin{proof} By Table \ref{table4}, it is clear that ${\overline \mathbf{B}}_{\mathbf{W}}(M\times N,P)$ is trivial only if $(M,N,P)$ is one of the three cases in Lemma \ref{prop_003}. Hence, it is sufficient to show that, for these three cases, ${\overline \mathbf{B}}_{\mathbf{W}}(M\times N,P)$ is trivial. The proofs for the first and third cases are similar to that of Lemma \ref{prop_002} and are left to the reader. The second case can be proved as follows. Choose any $\tau\in\mathbb{P}^1$. Any non-degenerate $\pi \in \mathbf{B}_{\mathbf{W}}(A_{\eta}\times A_{\xi},\Omega_0^3)$ induces a non-degenerate bilinear map $\pi' \in \mathbf{B}_{\mathbf{W}}(A_\eta \times B_\tau,\Omega_0^3)$, by composing with the map $B_\tau \rightarrow \overline{A} \hookrightarrow A_\xi$. It follows from the third case that $\eta=\infty$. Similarly, we have $\xi=\infty$.
\end{proof} \subsection{The case $\deg M=\{0,1\}$, $\deg N=\delta$ and $\deg P=\delta+2$ with $\delta \in\mathbb{C} \setminus \{0,1\}$} In this case, there can be two subcases as follows: \begin{enumerate} \item $(M,N,P)=(A_\xi,\Omega_u^\delta,\Omega_u^{\delta+2})$, \item $(M,N,P)=(B_\xi,\Omega_u^\delta,\Omega_u^{\delta+2})$. \end{enumerate} \begin{lemma}\label{prop_0dd+2} Let $M,N,P$ be $\mathbf{W}$-modules as above. Then, $\overline{\mathbf{B}}_{\mathbf{W}}(M\times N,P)$ is trivial iff $M=A_\xi$ with $\xi\neq\infty$ and $\delta\neq -1$. \end{lemma} The proof is similar to that of Lemma \ref{prop_002}. \subsection{The case $\deg M=\{0,1\}$, $\deg N=\delta$ and $\deg P=\delta+1$ with $\delta \in \mathbb{C} \setminus \{0,1\}$} In this case, there can be two subcases as follows: \begin{enumerate} \item $(M,N,P)=(A_\xi,\Omega_u^\delta,\Omega_u^{\delta+1})$, \item $(M,N,P)=(B_\xi,\Omega_u^\delta,\Omega_u^{\delta+1})$. \end{enumerate} Looking at lines 3 and 6 of Table \ref{table4}, we obtain \begin{lemma}\label{prop_0dd+1} Let $M,N,P$ be $\mathbf{W}$-modules of the class $\mathcal{S}$ as above. Then $\mathrm{dim} \overline{\mathbf{B}}_{\mathbf{W}}(M\times N,P)=1$. \end{lemma} \subsection{The case $\deg M=\{0,1\}$, $\deg N=\delta$ and $\deg P=\delta$ with $\delta \in \mathbb{C} \setminus \{0,1\}$} In this case, there can be two subcases as follows: \begin{enumerate} \item $(M,N,P)=(A_{\xi},\Omega_u^\delta,\Omega_u^\delta)$, \item $(M,N,P)=(B_\xi,\Omega_u^\delta,\Omega_u^\delta)$. \end{enumerate} \begin{lemma}\label{prop_0dd} Let $M,N,P$ be $\mathbf{W}$-modules as above. Then, $\mathrm{dim} \overline{\mathbf{B}}_{\mathbf{W}}(M\times N,P)=1$ iff $M=B_\infty$. \end{lemma} The proof is similar to that of Lemma \ref{prop_002}.
\section{Computation of $\overline{\mathbf{B}}_{\mathbf{W}}(M\times N,P)$ when $\mathrm{dim}\mathcal{G}_{\mathbf{W}}(M\times N,P)=2$} \subsection{Theorem 3.2} The following table provides a list of triples $(M,\,N,\,P)$ of $\mathbf{W}$-modules of the class $\mathcal{S}$ and some elements $\pi\in \overline{\mathbf{B}}_{\mathbf{W}}(M\times N,P)$. For each entry $(M,\,N,\,P)$ we have $\deg M=\deg N=\deg P= \{0,1\}$. \begin{table}[h] \caption{\textbf{The non-zero $\overline{\mathbf{B}}_{\mathbf{W}}(M\times N,P)$, with $\deg M=\deg N=\deg P= \{0,1\}$}} \label{table5} \begin{center} \begin{tabular}{|c|c|c||c|} \hline & $M\times N$ or $N\times M$ &$P$& Elements of $\overline{\mathbf{B}}_{\mathbf{W}}(M\times N,P)$ \\ \hline 1. &$\Omega^0_u\times \Omega^0_{v}$ & $\Omega^1_{u+v}$& $P^{0,0}_{u,v}\circ(d\times id)$ and $P^{0,0}_{u,v}\circ(id\times d)$ \\ \hline 2.& $\Omega^0_u\times \Omega^0_{-u}$ & $\Omega^0_{0}$&$P^{0,0}_{u,-u}$ \\ \hline 3.& $\Omega^0_u\times \Omega^1_{0}$ & $\Omega^1_{u}$&$P^{0,1}_{u,0}$ \\ \hline 4.& $\Omega^0_u\times \Omega^0_{-u}$ & $A_{\xi}$ & $d_\xi\circ P^{0,0}_{u,-u}$ \\ \hline 5.& $B_\xi\times \Omega^0_{u}$ & $\Omega^1_u$ & $P^{1,0}_{0,u}\circ(d_\xi\times id)$ \\ \hline \end{tabular} \end{center} \vbox {\small{To avoid repetitions, one can assume that $\xi\neq\infty$ in lines 4 and 5.}} \end{table} From the table, it is clear that (i) if $(M,\,N,\,P)$ appears in line 1 of Table \ref{table5}, the two listed elements of $\overline{\mathbf{B}}_{\mathbf{W}}(M\times N,P)$ are linearly independent and therefore $\overline{\mathbf{B}}_{\mathbf{W}}(M\times N,P)$ has dimension 2, (ii) if $(M,\,N,\,P)$ appears in lines 2-5 of Table \ref{table5}, the listed element of $\overline{\mathbf{B}}_{\mathbf{W}}(M\times N,P)$ is not zero and therefore $\overline{\mathbf{B}}_{\mathbf{W}}(M\times N,P)$ has dimension $\geq 1$.
Indeed, we have \setcounter{thm}{2} \begin{thm}\hskip-1.7mm{\bf 2} Let $M, N$ and $P$ be $\mathbf{W}$-modules of the class $\mathcal{S}$ with $\deg M=\deg N=\deg P= \{0,1\}$. Then (i) if $(M,\,N,\,P)$ appears in line 1 of Table \ref{table5}, we have $\mathrm{dim} \overline{\mathbf{B}}_{\mathbf{W}}(M\times N,P)=2$, (ii) if $(M,\,N,\,P)$ appears in lines 2-5 of Table \ref{table5}, we have $\mathrm{dim} \overline{\mathbf{B}}_{\mathbf{W}}(M\times N,P)=1$, (iii) otherwise, we have $\overline{\mathbf{B}}_{\mathbf{W}}(M\times N,P)=0$. \end{thm} In order to prove Theorem 3.2, note that the case where $M$, $N$ and $P$ are irreducible is already treated in Lemma \ref{lemma_simple}. Thus, up to the $\mathfrak{S}_3$-symmetry, there are only three cases to consider: \begin{enumerate} \item $(M,N)=(A_{\xi_1}, A_{\xi_2})$ and $P=A_\xi$ or $P=B_\xi$, \item $(M,N)=(B_{\xi_1}, B_{\xi_2})$ and $P=A_{\xi_3}$ or $P=B_{\xi_3}$, \item $(M,N)=(\Omega_u^0, \Omega_{-u}^0)$ with $u \not\equiv 0 \mod \mathbb{Z}$ and $P=A_\xi$ or $P=B_\xi$. \end{enumerate} These cases will be treated in the next three subsections. \subsection{The case $(M,N)=(A_{\xi_1}, A_{\xi_2})$} \begin{lemma}\label{prop_AAA} Any bilinear map in $\mathbf{B}_{\mathbf{W}}(A_{\xi_1}\times A_{\xi_2}, P)$, where $P$ is in the $AB$-family, is degenerate. \end{lemma} \begin{proof} Any module $P$ in the $AB$-family admits an almost-isomorphism to a module of the $A$-family. So we can assume $P\simeq A_{\xi_3}$ for some $\xi_3$ in $\mathbb{P}^1$. Fix bases $\{e_m^M\}$, $\{e_m^N\}$ and $\{e_m^P\}$ of $M=A_{\xi_1}$, $N=A_{\xi_2}$ and $P=A_{\xi_3}$, respectively, as in Section \ref{sect_KS}, and let $\pi \in \mathbf{B}_{\mathbf{W}}(M\times N,P)$. We have $L_{-1}.e_0^M\neq 0$ or $L_{1}.e_0^M\neq 0$. Since both cases are similar, we can assume that $L_{-1}.e_0^M\neq 0$.
Using that $L_{-1}.e_1^N=L_{-1}.e_1^P= 0$, we obtain that $\pi(L_{-1}.e_0^M,e_1^N)=L_{-1}.\pi(e_0^M,e_1^N)- \pi(e_0^M,L_{-1}.e_1^N)=0$, hence we have \noindent\centerline{$\pi(e_{-1}^M,e_1^N)=0$.} Since $M_{\leq -1}$ (respectively $N_{\geq 1}$) is an irreducible Verma $\mathfrak{sl}(2)$-module (respectively the restricted dual of an irreducible Verma $\mathfrak{sl}(2)$-module), it follows that $M_{\leq -1}\otimes N_{\geq 1}$ is generated by $e_{-1}^M\otimes e_1^N$. Hence we get $\pi(M_{\leq -1}\times N_{\geq 1})=0$. Since $N_{\geq 1}$ is a $\mathbf{W}_{\geq 0}$-submodule and since $\overline A$ is the $\mathbf{W}_{\geq 0}$-submodule generated by $M_{\leq -1}$, we have \noindent\centerline{$\pi({\overline A}\times N_{\geq 1})=0$,} \noindent from which it follows that $\pi$ is degenerate. \end{proof} \subsection{The case $(M,N)=(B_{\xi_1}, B_{\xi_2})$} Recall that $\d_\xi\circ P_{0,0}^{0,0}$ is a non-degenerate bilinear map $B_\infty\times B_\infty\rightarrow A_\xi$, for any $\xi\in\mathbb{P}^1$. Let $\xi_1,\xi_2$ and $\xi_3\in\mathbb{P}^1$, and let $s$ be the number of indices $i$ such that $\xi_i=\infty$. By the $\mathfrak{S}_3$-symmetry, $\overline{\mathbf{B}}_{\mathbf{W}}(B_{\xi_1}\times B_{\xi_2},A_{\xi_3})$ is not zero if $s\geq 2$. More precisely, we have: \begin{lemma} We have (i) $\mathrm{dim} \overline{\mathbf{B}}_{\mathbf{W}}(B_{\xi_1}\times B_{\xi_2},A_{\xi_3})=s-1$ if $s\geq 2$, (ii) $\mathrm{dim} \overline{\mathbf{B}}_{\mathbf{W}}(B_{\xi_1}\times B_{\xi_2},B_{\xi_3})=1$ if $s=3$, (iii) $\mathrm{dim} \overline{\mathbf{B}}_{\mathbf{W}}(B_{\xi_1}\times B_{\xi_2},A_{\xi_3})= \mathrm{dim} \overline{\mathbf{B}}_{\mathbf{W}}(B_{\xi_1}\times B_{\xi_2},B_{\xi_3})=0$ otherwise. \end{lemma} \begin{proof} Let $(a,b), (c,d)$ and $(e,f)$ be projective coordinates of $\xi_1,\xi_2$ and $\xi_3 \in \mathbb{P}^1$, respectively, and fix bases $\{e_m^M\}$, $\{e_m^N\}$ of $M=B_{a,b}$ and $N=B_{c,d}$, respectively, as in Section \ref{sect_KS}.
First, we consider $\pi \in \mathbf{B}_{\mathbf{W}}(B_{a,b} \times B_{c,d}, P)$ with $P=B_{e,f}$. Let $\{e_m^P\}$ be a basis of $B_{e,f}$ as in $\S$ \ref{sect_KS}. By calculating $L_{-n}.\pi(e_n^M,e_0^N)$ and $L_{-n}.\pi(e_0^M,e_n^N)$, we see that $\xi_1=\xi_2=\xi_3 \in \mathbb{P}^1$. Hence, we may assume that $(a,b)=(c,d)=(e,f)$ without loss of generality. Similarly, by calculating $L_m.\pi(e_n^M,e_0^N)$ and $L_m.\pi(e_0^M,e_n^N)$, we see that there exists a constant $C \in \mathbb{C}$ satisfying $\pi(e_m^M,e_0^N)=Ce_m^P=\pi(e_0^M,e_m^N)$ for any $m$. It can be shown that such a $\mathbf{W}$-equivariant map exists only if $a=0$ or $C=0$. In the former case, $\pi$ is a scalar multiple of $P_{0,0}^{0,0}$. In the latter case, $\pi$ factors through $\overline{A}\times \overline{A}$ and one can apply a similar argument to the latter half of the proof of Lemma \ref{prop_AAA} to see that $\pi$ is degenerate. Second, we consider $\pi \in \mathbf{B}_{\mathbf{W}}(B_{a,b} \times B_{c,d}, P)$ with $P=A_{e,f}$. Let $\{e_m^P\}$ be a basis of $A_{e,f}$ as in $\S$ \ref{sect_KS}. By an argument by restriction, one sees that there exist constants $C_1, C_2 \in \mathbb{C}$ such that $\pi(e_m^M, e_n^N)=(C_1m+C_2n)e_{m+n}^P$ for $m,n, m+n\neq 0$. Set $\pi(e_m^M,e_0^N)=a(m)e_m^P, \pi(e_0^M, e_m^N)=b(m)e_m^P$ and $\pi(e_m^M, e_{-m}^N)=c(m)e_0^P$. It is clear that $a(0)=b(0)=c(0)=0$. By calculating $L_{-n}.\pi(e_m^M, e_n^N)$ with $m,n, m+n\neq 0$, one obtains that $a(m)=-d^{-1}C_1m$ if $c=0$ and that $a(m)=C_1=0$ otherwise. Similarly, one obtains that $b(m)=-b^{-1}C_2m$ if $a=0$ and that $b(m)=C_2=0$ otherwise. Finally, by calculating $L_n.\pi(e_m^M, e_{-m}^N)$ with $n\neq \pm m$, one obtains that $c(m)=-(C_1-C_2)f^{-1}m$ if $e=0$ and that $c(m)=0$ and $C_1=C_2$ otherwise. Hence, it follows that $\mathrm{dim} \overline{\mathbf{B}}_{\mathbf{W}}(B_{a,b} \times B_{c,d},A_{e,f})$ is equal to $0$ if $s\leq 1$ and is at most $s-1$ if $s\geq 2$.
Now, for $s\geq 2$, the result follows from Table \ref{table5}. \end{proof} \subsection{The case $(M,N)=(\Omega_u^0, \Omega_{-u}^0)$ with $u \not\equiv 0 \mod \mathbb{Z}$} The next lemma can be proved in a way similar to the proof of Lemma \ref{prop_002}. \begin{lemma} Let $\xi\in\mathbb{P}^1$. We have (i) $\mathrm{dim} \overline{\mathbf{B}}_{\mathbf{W}}(\Omega_u^0\times\Omega_{-u}^0,A_{\infty})=2$ and $\mathrm{dim} \overline{\mathbf{B}}_{\mathbf{W}}(\Omega_u^0\times\Omega_{-u}^0,A_{\xi})=1$ if $\xi\neq\infty$. (ii) $\mathrm{dim} \overline{\mathbf{B}}_{\mathbf{W}}(\Omega_u^0\times\Omega_{-u}^0,B_{\infty})=1$ and $\mathrm{dim} \overline{\mathbf{B}}_{\mathbf{W}}(\Omega_u^0\times\Omega_{-u}^0,B_{\xi})=0$ if $\xi\neq\infty$. \end{lemma} \section{Conclusion}\label{sect_conclusion} From the classification, we can derive the following corollaries, some of which are used in \cite{IM}. \begin{cor}\label{cor1} The primitive bilinear maps between modules of the class $\mathcal{S}$ are the following: (i) the Poisson products $P^{\delta_1,\delta_2}_{u,v}$, (ii) the Poisson brackets $B^{\delta_1,\delta_2}_{u,v}$ for $\delta_1.\delta_2.(\delta_1+\delta_2)\neq 0$, (iii) the Lie brackets $B^{\delta_1,\delta_2}_{u,v}(\xi)$ for $\delta_1.\delta_2.(\delta_1+\delta_2)= 0$ and $\xi\neq \infty$ if $\delta_1=\delta_2=0$, (iv) $\Theta_\infty$, (v) the Grozman operation $G_{u,v}$, (vi) $\eta(\xi_1,\xi_2,\xi_3)$ with $\xi_1,\xi_2$ and $\xi_3$ all distinct, and their $\mathfrak{S}_3$-symmetric counterparts, (vii) the obvious map $P^M$, and their $\mathfrak{S}_3$-symmetric counterparts. \end{cor} This follows easily by a close examination. \begin{cor} Let $M,\,N,\,P\in\mathcal{S}^*$ such that $\mathbf{B}^0_\mathbf{W}(M\times N,P)\neq 0$. Then the number of reducible modules among $M$, $N$ and $P$ is $1$ or $3$. \end{cor} \begin{proof} Let $\pi\in\mathbf{B}_{\mathbf{W}}^0(M\times N,P)$ be non-zero. It follows from Theorem 1 that the three modules are reducible whenever $\mathrm{Supp}\,\,\pi$ is not one line.
Otherwise, $\mathrm{Supp}\,\,\pi$ is one line and, by the $\mathfrak{S}_3$-symmetry, we can assume that $\mathrm{Supp}\,\,\pi=V$. Then $M$, which admits a trivial quotient, is reducible. Moreover, there is an almost-isomorphism $\phi:N\rightarrow P$, which proves that $N$ and $P$ are simultaneously reducible or irreducible. \end{proof} \begin{cor} Let $M,\,N,\,P\in\mathcal{S}^*$. If $\mathbf{B}_\mathbf{W}(M\times N,P)\neq 0$, then we have \noindent\centerline{ $(\deg\,M,\,\deg\,N,\,\deg\,P)\in\mathfrak{z}$.} \end{cor} \begin{proof} If $\mathcal{G}_\mathbf{W}(M\times N,P)\neq 0$, then the corollary follows from Theorem \ref{thm2}. Otherwise, we have $\mathbf{B}^0_\mathbf{W}(M\times N,P)\neq 0$. If $\{\deg\,M,\,\deg\,N,\,\deg\,P\}\subset\{0,1\}$, then $(\deg\,M,\,\deg\,N,\,\deg\,P)$ belongs to $\mathfrak{z}$. Otherwise, $\overline{\mathrm{Supp}\,\,\pi}$ is one line for any non-zero $\pi\in \mathbf{B}^0_\mathbf{W}(M\times N,P)$. By the $\mathfrak{S}_3$-symmetry, we can assume that $\overline{\mathrm{Supp}\,\,\pi}=V$. In such a case, $N$ is isomorphic to $P$ and we have $\deg\,M\in\{0,1\}$ and $\deg\,N=\deg\,P\not\in\{0,1\}$. It follows that $(\deg\,M,\,\deg\,N,\,\deg\,P)$ belongs to $\mathfrak{z}$ as well. \end{proof} Let $M,\,N,\,P\in\mathcal{S}$. The triple $(M,\,N,\,P)$ is called {\it mixing} if we have $\mathbf{B}^0_\mathbf{W}(M\times N,P)\neq 0$ and $\overline{\mathbf{B}}_\mathbf{W}(M\times N,P)\neq 0$.
Here is a table of examples of mixing triples: \begin{table}[h] \caption{\textbf{Examples of mixing triples $(M,N,P)$}} \label{table_mixing} \begin{center} \begin{tabular}{|c|c|c||c|} \hline $M\times N$ or $N\times M$ &$P$& A non-degenerate $\pi_1$ &A degenerate $\pi_2$ \\ \hline $\Omega^0_u\times \Omega^1_{0}$ & $\Omega^1_{u}$& $P^{0,1}_{u,0}$ & $(f,\alpha)\mapsto (\mathrm{Res}\,\alpha)\d f $ \\ \hline $\Omega^0_u\times \Omega^0_{-u}$ & $\Omega^0_{0}$&$P^{0,0}_{u,-u}$ & $(f,g)\mapsto \mathrm{Res}\,f \d g$\\ \hline \end{tabular} \end{center} \end{table} \begin{cor} Any mixing triple $(M,\,N,\,P)$ appears in Table \ref{table_mixing} and in each case $\pi_1$ and $\pi_2$ form a basis of $\mathbf{B}_\mathbf{W}(M\times N,P)$. \end{cor} The corollary follows immediately from Tables \ref{table2} and \ref{table4}. \begin{cor} For any triple $(M,\,N,\,P)$ of indecomposable $\mathbf{W}$-modules in the class $\mathcal{S}$, we have $\mathrm{dim} \mathbf{B}_{\mathbf{W}}(M\times N,P)\leq 2$. \end{cor} This follows easily from Theorem 1, Theorem 2 and the previous corollary. Note that the hypothesis that $M,N$ and $P$ are indecomposable is necessary. For example, we have $\mathrm{dim} \mathbf{B}_{\mathbf{W}}(X\times X,X)=4$ if $X=\overline{A}\oplus \mathbb{C}$.
\section{Introduction} \label{section:intro} In recent decades, the intrinsic study of a wide variety of topics in theoretical physics, control theory and applied mathematics has undergone a strong development, using methods of differential geometry. Thus, the intrinsic formulation of Lagrangian and Hamiltonian formalisms has been developed both for autonomous and non-autonomous systems. This study has been carried out mainly for first-order dynamical systems; that is, those whose Lagrangian or Hamiltonian functions depend on the generalized coordinates of position and velocity (or momentum). From the geometric point of view, this means that the phase space of the system is in most cases the tangent or cotangent bundle of the smooth manifold representing the configuration space. However, there are a significant number of relevant systems in which the dynamics have explicit dependence on accelerations or higher-order derivatives of the generalized coordinates of position. These systems, usually called {\sl higher-order dynamical systems}, can be modeled geometrically using higher-order tangent bundles \cite{book:DeLeon_Rodrigues85}. These models are typical of theoretical physics; for example, those describing the interaction of relativistic particles with spin, string theories from Polyakov and others, Hilbert's Lagrangian for gravitation or Podolsky's generalization of electromagnetism (see \cite{art:Batlle_Gomis_Pons_Roman88} and references cited there). They also appear in a natural way in numerical models arising from the discretization of first-order dynamical systems that preserve their inherent geometric structures \cite{art:DeLeon_Martin_Santamaria04}.
There are many works devoted to the development of the formalism of these kinds of theories and their application to many models in mechanics and field theory (see, for instance, \cite{art:Aldaya_Azcarraga78_2}, \cite{art:Aldaya_Azcarraga80}, \cite{art:Banerjee_Mukherjee_Paul10}, \cite{art:Barcelos_Natividade91}, \cite{art:Belvedere_Amaral_Lemos95}, \cite{art:Carinena_Lopez92}, \cite{proc:Garcia_Munoz83}, \cite{art:Krupkova94}, \cite{art:Kuznetsov_Plyushchay94}, \cite{art:Plyushchay91}, \cite{art:Saunders_Crampin90}, \cite{art:Schmidt97}). Furthermore, a generalization of the Lagrangian and Hamiltonian formalisms exists that compresses them into a single formalism. This is the so-called {\sl Lagrangian-Hamiltonian unified formalism}, or {\sl Skinner-Rusk formalism}, named after the authors of the original paper. It was originally developed for first-order autonomous mechanical systems \cite{art:Skinner_Rusk83}, and later generalized to non-autonomous dynamical systems \cite{art:Barbero_Echeverria_Martin_Munoz_Roman08,art:Cortes_Martinez_Cantrijn02}, control systems \cite{art:Barbero_Echeverria_Martin_Munoz_Roman07}, first-order classical field theories \cite{art:DeLeon_Marrero_Martin03,art:Echeverria_Lopez_Marin_Munoz_Roman04} and, more recently, to higher-order classical field theories \cite{art:Campos_DeLeon_Martin_Vankerschaver09,art:Vitagliano10}. Nevertheless, although the geometrization of both higher-order Lagrangian and Hamiltonian formalisms was already developed for autonomous mechanical systems \cite{proc:Cantrijn_Crampin_Sarlet86,book:DeLeon_Rodrigues85,art:Gracia_Pons_Roman91,book:Miron10}, a complete generalization of the Skinner-Rusk formalism for higher-order mechanical systems has yet to be carried out.
A first attempt was outlined in \cite{art:Colombo_Martin_Zuccalli10}, with the aim of providing a geometric model for studying optimal control of underactuated systems, although a deep analysis of the model and its relation with the standard Lagrangian and Hamiltonian formalisms was not performed. Thus, the aim of this work is to provide a detailed and complete description of the Lagrangian-Hamiltonian unified formalism for higher-order autonomous mechanical systems. Our approach is different from that given in \cite{art:Colombo_Martin_Zuccalli10} (these differences are commented on in Section \ref{section:outlook}). The paper is organized as follows: Section \ref{section:structures} consists of a review of the basic definitions and the geometric structures of higher-order tangent bundles, some of which are generalizations of the geometric structures of tangent bundles; namely, the {\sl canonical vector fields}, the {\sl almost-tangent structures} and {\sl semisprays}; whereas others such as the {\sl Tulczyjew derivation} are needed for developing the Lagrangian and Hamiltonian formalisms of higher-order mechanical systems, which are also described in this section. In particular, higher-order regular and singular systems are distinguished. The main contribution of the work is found in Section \ref{SkinnerRusk}, where the geometric formulation of the Lagrangian-Hamiltonian unified formalism for higher-order autonomous mechanical systems is described in detail, including the study of how the Lagrangian and Hamiltonian formalisms are recovered from that formalism. Finally, in Section \ref{section:examples}, two examples are analyzed in order to show the application of the formalism; the first is a regular system, the so-called {\sl Pais-Uhlenbeck oscillator}, while the second is a singular one, the {\sl second-order relativistic particle}. The paper concludes in Section \ref{section:outlook} with a summary of results, discussion and future research.
All the manifolds, the maps and the structures are smooth. In addition, all the dynamical systems considered are autonomous. Summation over crossed repeated indices is understood, although on some occasions the symbol of summation is written explicitly in order to avoid confusion. \section{Higher-order dynamical systems} \label{section:structures} \subsection{Geometric structures of higher-order tangent bundles} (See \cite{book:DeLeon_Rodrigues85,book:Saunders89,art:Gracia_Pons_Roman91,art:Gracia_Pons_Roman92,phd:Martinez} for details). \subsubsection{Higher-order tangent bundles} Let $Q$ be an $n$-dimensional differentiable manifold, and $k\in\mathbb{N}$. The {\sl $k$th-order tangent bundle} of $Q$, denoted by ${\rm T}^kQ$, is the $(k+1)n$-dimensional manifold made of the $k$-jets with source at $0 \in \mathbb{R}$ and target $Q$; that is, ${\rm T}^kQ = J_0^k(\mathbb{R},Q)$. It is a submanifold of $J^k(\mathbb{R},Q)$. We have the following canonical projections: if $r\leq k$, $$ \begin{array}{rcclcrccl} \rho^k_r \colon & {\rm T}^kQ & \longrightarrow & {\rm T}^rQ & , & \beta^k \colon & {\rm T}^kQ & \longrightarrow & Q \\ \\ \ & \tilde{\sigma}^k(0) & \longmapsto & \tilde{\sigma}^r(0) & , & \ & \tilde{\sigma}^k(0) & \longmapsto & \sigma(0) \ , \end{array} $$ where $\tilde{\sigma}^k(0)$ denotes a point in ${\rm T}^kQ$; that is, the equivalence class of a curve $\sigma \colon I \subset \mathbb{R} \to Q$ by the $k$-jet equivalence relation. Notice that $\rho^k_0 = \beta^k$, where ${\rm T}^0Q$ is canonically identified with $Q$, and $\rho^k_k = {\rm Id}_{{\rm T}^kQ}$.
If $\left(U,\varphi\right)$ is a local chart in $Q$, with $\varphi = \left(\varphi^A\right)$, $1\leq A \leq n$, and $\sigma \colon \mathbb{R} \to Q$ is a curve in $Q$ such that $\sigma(0) \in U$, then, writing $\sigma^A = \varphi^A \circ \sigma$, the $k$-jet $\tilde{\sigma}^k(0)$ is given in $\left(\beta^k\right)^{-1}(U) = {\rm T}^kU$ by $\left(q^A,q^A_{1},\ldots,q^A_{k}\right)$, where $q^A = \sigma^A(0)$ and $\displaystyle q_{i}^A = \frac{d^i\sigma^A}{dt^i}(0)$ ($1\leq i \leq k$). Usually we write $q_{0}^A$ instead of $q^A$, and so we have the local chart $\left(\beta^k\right)^{-1}(U)$ in ${\rm T}^kQ$ with local coordinates $\left(q_{0}^A,q_{1}^A,\ldots,q_{k}^A\right)$. Local coordinates in ${\rm T}({\rm T}^kQ)$ are denoted by $\left(q_0^A,q_1^A,\ldots,q_k^A;v_0^A,v_1^A,\ldots,v_k^A\right)$. Using these coordinates, the local expression of the canonical projections is $\rho^k_r\left(q_0^A,q_1^A,\ldots,q_k^A\right) = \left(q_0^A,q_1^A,\ldots,q_r^A\right)$, and then for the tangent maps ${\rm T}\rho^k_r \colon {\rm T}({\rm T}^kQ) \to {\rm T}({\rm T}^rQ)$, we have the local expression ${\rm T}\rho^k_r\left(q_0^A,q_1^A,\ldots,q_k^A,v_0^A,v_1^A,\ldots,v_k^A\right) = \left(q_0^A,q_1^A,\ldots,q_r^A,v_0^A,v_1^A,\ldots,v_r^A\right)$. If $\sigma \colon \mathbb{R} \to Q$ is a curve in $Q$, the {\sl canonical lifting} of $\sigma$ to ${\rm T}^kQ$ is the curve $\tilde\sigma^k\colon \mathbb{R}\to{\rm T}^kQ$ defined as $\tilde{\sigma}^k(t) = \tilde{\sigma}^k_t(0)$, where $\sigma_t(s) = \sigma(s+t)$ (that is, the $k$-jet lifting of $\sigma$). If $k=1$, we will write $\tilde{\sigma}^1 \equiv \tilde{\sigma}$. Let $V(\rho^k_{r-1})$ be the vertical sub-bundle of ${\rm T}^kQ$ in ${\rm T}^{r-1}Q$. In the above coordinates, for every $p \in {\rm T}^kQ$ and $u\in V_p(\rho^k_{r-1})$, the components of $u$ are $u = \left(0,\ldots,0,v_r^A,\ldots,v_k^A\right)$.
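As a simple illustration of the jet coordinates introduced above (an example of ours, not taken from the references): take $Q=\mathbb{R}$, $k=2$ and the curve $\sigma(t)=\sin t$. Then

```latex
% Q = R, k = 2, sigma(t) = sin t; the 2-jet \tilde{\sigma}^2(0) has coordinates
\left(q_0,q_1,q_2\right)
  = \left(\sigma(0),\frac{d\sigma}{dt}(0),\frac{d^2\sigma}{dt^2}(0)\right)
  = (0,1,0) \ ,
% and the canonical projections forget the highest-order coordinates:
\rho^2_1(q_0,q_1,q_2) = (q_0,q_1) = (0,1) \ , \qquad
\beta^2(q_0,q_1,q_2) = q_0 = 0 \ .
```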
Furthermore, if $i_{k-r+1} \colon V(\rho^k_{r-1}) \hookrightarrow {\rm T}({\rm T}^kQ)$ is the canonical embedding, then $$ i_{k-r+1}\left(q_0^A,\ldots,q_k^A,v_r^A,\ldots,v_k^A\right) = \left(q_0^A,\ldots,q_k^A,0,\ldots,0,v_r^A,\ldots,v_k^A\right) \ . $$ Consider now the induced bundle of $\tau_{{\rm T}^{r-1}Q} \colon {\rm T}({\rm T}^{r-1}Q) \to {\rm T}^{r-1}Q$ by the canonical projection $\rho^k_{r-1}$, denoted by ${\rm T}^kQ \times_{{\rm T}^{r-1}Q}{\rm T}({\rm T}^{r-1}Q)$, which is a vector bundle over ${\rm T}^kQ$. We have the following commutative diagrams $$ \xymatrix{ {\rm T}^kQ \times_{{\rm T}^{r-1}Q}{\rm T}({\rm T}^{r-1}Q) \ar@{-->}[r] \ar@{-->}[d] & {\rm T}({\rm T}^{r-1}Q) \ar[d]^{\tau_{{\rm T}^{r-1}Q}} \\ {\rm T}^kQ \ar[r]^{\rho_{r-1}^k} & {\rm T}^{r-1}Q } \quad , \quad \xymatrix{ {\rm T}({\rm T}^kQ) \ar[rr]^{{\rm T}\rho_{r-1}^k} \ar[d]^{\tau_{{\rm T}^kQ}} & \ & {\rm T}({\rm T}^{r-1}Q) \ar[d]^{\tau_{{\rm T}^{r-1}Q}} \\ {\rm T}^kQ \ar[rr]^{\rho^k_{r-1}} & \ & {\rm T}^{r-1}Q \ . } $$ Then, there exists a unique bundle morphism $s_{k-r+1} \colon {\rm T}({\rm T}^kQ) \to {\rm T}^kQ \times_{{\rm T}^{r-1}Q}{\rm T}({\rm T}^{r-1}Q)$ such that the following diagram is commutative: $$ \xymatrix{ {\rm T}({\rm T}^kQ) \ar@/_/[ddr]_{\tau_{{\rm T}^kQ}} \ar[dr]^{s_{k-r+1}} \ar@/^/[drr]^{{\rm T}\rho^k_{r-1}} & \ & \ \\ \ & {\rm T}^kQ \times_{{\rm T}^{r-1}Q}{\rm T}({\rm T}^{r-1}Q) \ar@{-->}[r] \ar@{-->}[d] & {\rm T}({\rm T}^{r-1}Q) \ar[d]^{\tau_{{\rm T}^{r-1}Q}} \\ \ & {\rm T}^kQ \ar[r]^{\rho^k_{r-1}} & {\rm T}^{r-1}Q \ . \\ } $$ It is defined by $s_{k-r+1}(u) = \left(\tau_{{\rm T}^kQ}(u), {\rm T}\rho^k_{r-1}(u)\right)$, for every $u \in {\rm T}({\rm T}^kQ)$. Its local expression is $$ s_{k-r+1}\left(q_0^A,\ldots,q_k^A,v_0^A,\ldots,v_k^A\right) = \left(q_0^A,\ldots,q_{r-1}^A,q_{r}^A,\ldots,q_{k}^A,v_0^A,\ldots,v_{r-1}^A\right)\ . 
$$ As $s_{k-r+1}$ is a surjective map and ${\rm Im}\,(i_{k-r+1}) = \ker\,(s_{k-r+1})$, we have the exact sequence $$ 0 \longrightarrow V(\rho^k_{r-1}) \stackrel{i_{k-r+1}}{\longrightarrow} {\rm T}({\rm T}^kQ) \stackrel{s_{k-r+1}}{\longrightarrow} {\rm T}^kQ \times_{{\rm T}^{r-1}Q}{\rm T}({\rm T}^{r-1}Q) \longrightarrow 0 \ , $$ which is called the {\sl $(k-r+1)$-fundamental exact sequence}. In local coordinates, it is given by \begin{align*} 0 \longmapsto & \left(q_0^A,\ldots,q_k^A,v_r^A,\ldots,v_k^A\right) \stackrel{i_{k-r+1}}{\longmapsto} \left(q_0^A,\ldots,q_k^A,0,\ldots,0,v_r^A,\ldots,v_k^A\right) \\ & \left(q_0^A,\ldots,q_k^A,v_0^A,\ldots,v_k^A\right) \stackrel{s_{k-r+1}}{\longmapsto} \left(q_0^A,\ldots,q_k^A;q_0^A,\ldots,q_{r-1}^A,v_0^A,\ldots,v_{r-1}^A\right) \longmapsto 0 \ . \end{align*} Thus, we have $k$ exact sequences \begin{align*} 1st \colon \ & 0 \longrightarrow V(\rho^k_{k-1}) \stackrel{i_1}{\longrightarrow} {\rm T}({\rm T}^kQ) \stackrel{s_1}{\longrightarrow} {\rm T}^kQ \times_{{\rm T}^{k-1}Q}{\rm T}({\rm T}^{k-1}Q) \longrightarrow 0 \\ & \vdots \\ rth \colon \ & 0 \longrightarrow V(\rho^k_{k-r}) \stackrel{i_r}{\longrightarrow} {\rm T}({\rm T}^kQ) \stackrel{s_r}{\longrightarrow} {\rm T}^kQ \times_{{\rm T}^{k-r}Q}{\rm T}({\rm T}^{k-r}Q) \longrightarrow 0 \\ & \vdots \\ kth \colon \ & 0 \longrightarrow V(\beta^k) \stackrel{i_{k}}{\longrightarrow} {\rm T}({\rm T}^kQ) \stackrel{s_k}{\longrightarrow} {\rm T}^kQ \times_{Q}{\rm T} Q \longrightarrow 0 \ , \end{align*} where $V(\beta^k) \equiv V(\rho^k_0)$ denotes the vertical subbundle of ${\rm T}^kQ$ on $Q$. These sequences can be connected by means of the connecting maps $$ h_{k-r+1} \colon {\rm T}^kQ \times_{{\rm T}^{k-r}Q}{\rm T}({\rm T}^{k-r}Q) \longrightarrow V(\rho^k_{r-1}) $$ locally defined as $$ h_{k-r+1}\left(q_0^A,\ldots,q_{k}^A,v_0^A,\ldots,v_{k-r}^A\right) = \left(q_0^A,\ldots,q_k^A, 0,\ldots,0,\frac{r!}{0!}v_0^A,\frac{(r+1)!}{1!}v_1^A,\ldots,\frac{k!}{(k-r)!}v_{k-r}^A\right) \ . 
$$ These maps are globally well-defined and are vector bundle isomorphisms over ${\rm T}^kQ$. Then we have the following connection between exact sequences: $$ \xymatrix{ 0 \ar[r] & V(\rho^k_{k-r}) \ar[r]^{i_r} & {\rm T}({\rm T}^kQ) \ar[r]^-{s_r} & {\rm T}^kQ \times_{{\rm T}^{k-r}Q} {\rm T}({\rm T}^{k-r}Q) \ar[dll]^<<{h_{k-r+1}}|(.5){\hole} \ar[r] & 0 \\ 0 \ar[r] & V(\rho^k_{r-1}) \ar[r]_{i_{k-r+1}} & {\rm T}({\rm T}^kQ) \ar[r]_-{s_{k-r+1}} & {\rm T}^kQ \times_{{\rm T}^{r-1}Q} {\rm T}({\rm T}^{r-1}Q) \ar[ull]_<<{h_r} \ar[r] & 0 \ . } $$ \subsubsection{Higher-order canonical vector fields. Vertical endomorphisms and almost-tangent structures} \label{sect:Cap02_LiouvilleVectField} The {\sl canonical injection} is the map \begin{equation} \label{eqn:Cap02_DefCanonicalImmersion} \begin{array}{rccl} j_r \colon & {\rm T}^kQ & \longrightarrow & {\rm T}({\rm T}^{r-1}Q) \\ \ & \tilde{\sigma}^k(0) & \longmapsto & \tilde{\gamma}(0) \end{array} \quad , \quad (1\leq r\leq k) \ , \end{equation} where $$ \begin{array}{rccl} \gamma \colon & \mathbb{R} & \longrightarrow & {\rm T}^{r-1}Q \\ \ & t & \longmapsto & \gamma(t) = \tilde{\sigma}_t^{r-1}(0) \ . \end{array} $$ In local coordinates \begin{equation} \label{eqn:Cap02_LocalCoordCanonicalImmersion} j_r\left(q_0^A,\ldots,q_k^A\right) = \left(q_0^A,\ldots,q_{r-1}^A;q_1^A,q_2^A,\ldots,q_r^A\right) \ . \end{equation} Then, the following composition allows us to define a vector field $\Delta_r\in{\mathfrak{X}} ({\rm T}^kQ)$, $$ \xymatrix{ {\rm T}^kQ \ar[rr]^-{{\rm Id}\times j_{k-r+1}} \ar@/_1.5pc/[rrrrrr]_{\Delta_r} & \ & {\rm T}^kQ \times_{{\rm T}^{k-r}Q} {\rm T}({\rm T}^{k-r}Q) \ar[rr]^-{h_{k-r+1}} & \ & V(\rho^k_{r-1}) \ar[rr]^-{i_{k-r+1}} & \ & {\rm T}({\rm T}^kQ) \ ; } $$ that is, $\Delta_r = i_{k-r+1} \circ h_{k-r+1} \circ \left({\rm Id} \times j_{k-r+1}\right)$. 
From the local expressions of $i_{k-r+1}$, $h_{k-r+1}$ and $j_{k-r+1}$ we obtain that $\Delta_r\left( q_0^A,\ldots,q_k^A \right) = \left( q_0^A,\ldots,q_k^A,0,\ldots,0,r!\,q_1^A,(r+1)!\,q_2^A,\ldots,\frac{k!}{(k-r)!}q_{k-r+1}^A \right)$; or, equivalently, $$ \Delta_r = \sum_{i=0}^{k-r} \frac{(r+i)!}{i!} q_{i+1}^A \derpar{}{q_{r+i}^A} = r!\,q_1^A\derpar{}{q_r^A} + (r+1)!\,q_2^A\derpar{}{q_{r+1}^A} + \ldots + \frac{k!}{(k-r)!}\,q_{k-r+1}^A\derpar{}{q_k^A} \ . $$ In particular, $$ \Delta_1 = \sum_{i=1}^{k} i q_i^A \derpar{}{q_{i}^A} = \sum_{i=0}^{k-1} (i+1) q_{i+1}^A \derpar{}{q_{i+1}^A} = q_1^A\derpar{}{q_1^A} + 2q_2^A\derpar{}{q_2^A} + \ldots + kq_{k}^A\derpar{}{q_k^A} \ . $$ \begin{definition} The vector field $\Delta_r$ is the {\rm $r$th-canonical vector field} in ${\rm T}^kQ$. In particular, $\Delta_1$ is called the {\rm Liouville vector field} in ${\rm T}^kQ$. \end{definition} Remember that, if $N$ is a $(k+1)n$-dimensional manifold, an {\sl almost-tangent structure of order $k$} in $N$ is an endomorphism $J$ in ${\rm T} N$ such that $J^{k+1} = 0$ and ${\rm rank}\,J = kn$. Then, ${\rm T}^kQ$ is endowed with a canonical almost-tangent structure. In fact: \begin{definition} For $1 \leq r \leq k$, let $i_{k-r+1}$, $h_{k-r+1}$, $s_r$ be the morphisms of the fundamental exact sequences introduced above. The map $$ J_r = i_{k-r+1} \circ h_{k-r+1} \circ s_r \colon {\rm T}({\rm T}^kQ) \longrightarrow {\rm T}({\rm T}^kQ) $$ defined by the composition $$ \xymatrix{ {\rm T}({\rm T}^kQ) \ar[rr]^-{s_r} \ar@/_1.5pc/[rrrrrr]_{J_r} & \ & {\rm T}^kQ \times_{{\rm T}^{k-r}Q} {\rm T}({\rm T}^{k-r}Q) \ar[rr]^-{h_{k-r+1}} & \ & V(\rho^k_{r-1}) \ar[rr]^-{i_{k-r+1}} & \ & {\rm T}({\rm T}^kQ) } $$ is called the {\rm $r$th-vertical endomorphism} of ${\rm T}({\rm T}^kQ)$.
\end{definition} From the local expressions of $s_r$, $h_{k-r+1}$, $i_{k-r+1}$ we obtain that $$ J_r\left(q_0^A,\ldots,q_k^A,v_0^A,\ldots,v_k^A\right) = \left(q_0^A,\ldots,q_k^A,0,\ldots,0,r!\,v_0^A,(r+1)!\,v_1^A,\ldots,\frac{k!}{(k-r)!}\,v_{k-r}^A\right) \ ; $$ that is, $\displaystyle J_r = \sum_{i=0}^{k-r} \frac{(r+i)!}{i!} \, dq_i^A \otimes \derpar{}{q_{r+i}^A}$. In particular, $\displaystyle J_1 = \sum_{i=0}^{k-1} (i+1) dq_i^A \otimes \derpar{}{q_{i+1}^A}$. The $r$th-vertical endomorphism $J_r$ has constant rank $(k-r+1)n$ and satisfies that $$ \left(J_r\right)^s = \begin{cases} 0 & \mbox{\rm if } rs \geqslant k+1 \\ J_{rs} & \mbox{\rm if } rs \leqslant k\end{cases} \ . $$ As a consequence, the $1$st-vertical endomorphism $J_1$ defines an almost-tangent structure of order $k$ in ${\rm T}^kQ$, which is called the {\sl canonical almost-tangent structure} of ${\rm T}^kQ$. Then, any other vertical endomorphism $J_r$ is obtained by composing $J_1$ with itself $r$ times. Furthermore, we have the following relation: $$ J_r\circ \Delta_s = \begin{cases} 0, & \mbox{\rm if } r+s\geqslant k+1 \\ \Delta_{r+s}, & \mbox{\rm if } r+s < k+1 \end{cases} $$ As a consequence, starting from the Liouville vector field and the vertical endomorphisms, we can recover all the canonical vector fields. However, as all the vertical endomorphisms are obtained from $J_1$, we conclude that all the canonical structures in ${\rm T}^kQ$ are obtained from the Liouville vector field and the canonical almost-tangent structure. Consider now the dual maps $J_r^*$ of $J_r$, $1 \leqslant r \leqslant k$; that is, the endomorphisms in ${\rm T}^*({\rm T}^kQ)$, and their natural extensions to the exterior algebra $\bigwedge({\rm T}^*({\rm T}^kQ))$ (also denoted by $J_r^*$).
Their action on the set of differential forms is given by \begin{equation*} J_r^*\omega(X_1,\ldots,X_p) = \omega(J_r(X_1),\ldots,J_r(X_p)) \ , \end{equation*} for $\omega\in{\mit\Omega}^p({\rm T}^kQ)$ and $X_1,\ldots,X_p \in {\mathfrak{X}}({\rm T}^kQ)$, and for every $f\in{\rm C}^\infty({\rm T}^kQ)$ we write $J_r^*(f) = f$. The endomorphism $J_r^* \colon {\mit\Omega}({\rm T}^kQ) \to {\mit\Omega}({\rm T}^kQ)$, $1\leq r \leq k$, is called the {\sl $r$th-vertical operator}, and it is locally given by \begin{align*} &J_r^*(f) = f \quad , \quad \mbox{\rm for every } \ f \in {\rm C}^\infty({\rm T}^kQ) \\ &J_r^*({\rm d} q_i^A) = \begin{cases} 0, & \mbox{if }i < r \\ \frac{i!}{(i-r)!}\, {\rm d} q_{i-r}^A, & \mbox{if }i\geq r \end{cases} \ . \end{align*} \subsubsection{Vertical derivations and differentials. Tulczyjew's derivation} The {\sl inner contraction} of the vertical endomorphisms $J_r$ with any differential $p$-form $\omega\in{\mit\Omega}^p({\rm T}^kQ)$ is the $p$-form $\mathop{i}\nolimits(J_r)\omega$ defined as follows: for every $X_1,\ldots,X_p\in{\mathfrak{X}}({\rm T}^kQ)$ $$ \mathop{i}\nolimits(J_r)\omega(X_1,\ldots,X_p) = \sum_{i=1}^{p} \omega(X_1,\ldots,J_r(X_i),\ldots,X_p)\ , $$ and taking $\mathop{i}\nolimits(J_r)f = 0$, for every $f\in{\rm C}^\infty({\rm T}^kQ)$, we can state: \begin{definition} The map $$ \begin{array}{rcl} {\mit\Omega}({\rm T}^kQ) & \longrightarrow & {\mit\Omega}({\rm T}^kQ) \\ \omega & \longmapsto & \mathop{i}\nolimits (J_r)\omega \end{array} $$ is a derivation of degree $0$ in ${\mit\Omega}({\rm T}^kQ)$, which is called the {\rm $r$th-vertical derivation} in ${\mit\Omega}({\rm T}^kQ)$. \end{definition} Its local expression is $$ \mathop{i}\nolimits(J_r)({\rm d} q_i^A) = \begin{cases} 0, & \mbox{\rm if }i<r \\ \frac{i!}{(i-r)!}\,{\rm d} q_{i-r}^A, & \mbox{\rm if }i\geq r \end{cases} \ . 
$$ \begin{definition} The operator $d_{J_r} = [\mathop{i}\nolimits(J_r),{\rm d}]$ is a skew-derivation of degree $1$, which is called the {\rm $r$th-vertical differential}. \end{definition} Its local expression is given by $$ \begin{array}{l} \displaystyle d_{J_r}(f) = \sum_{i=r}^k \frac{i!}{(i-r)!} \derpar{f}{q_i^A} {\rm d} q_{i-r}^A \quad , \quad \mbox{\rm for every $f\in{\rm C}^\infty({\rm T}^kQ)$} \\ d_{J_r}({\rm d} q_i^A) = 0 \end{array} \ . $$ For $1 \leq r \leq k$, we have that $ d_{J_r}{\rm d} = -{\rm d} d_{J_r}$. In the set $\oplus_{k\geqslant 0}{\mit\Omega}({\rm T}^kQ)$, we can define the following equivalence relation: for $\omega \in {\mit\Omega}({\rm T}^kQ)$ and $\lambda \in {\mit\Omega}({\rm T}^{k'}Q)$, $$ \omega \sim \lambda \Longleftrightarrow \begin{cases} \omega = (\rho^k_{k'})^*(\lambda), & \mbox{if }k'\leqslant k \\ \lambda = (\rho^{k'}_k)^*(\omega), & \mbox{if }k' \geqslant k \end{cases} \ . $$ Then we consider the quotient set $\displaystyle \mit\Omega = \bigoplus_{k\geqslant0}{\mit\Omega}({\rm T}^kQ)/ \sim$, which is a commutative graded algebra. In this set we can define the so-called {\sl Tulczyjew's derivation} \cite{art:Tulczyjew75_1,book:DeLeon_Rodrigues85}, denoted by $d_T$, as follows: for every $f \in {\rm C}^\infty({\rm T}^kQ)$ we construct the function $d_Tf \in {\rm C}^\infty({\rm T}^{k+1}Q)$ given by $$ (d_Tf)(\tilde{\sigma}^{k+1}(0)) = (d_{\tilde{\sigma}^k(0)}f)(j_{k+1}(\tilde{\sigma}^{k+1}(0))) $$ where $j_{k+1} \colon {\rm T}^{k+1}Q \to {\rm T}({\rm T}^kQ)$ is the canonical injection introduced in Section \ref{sect:Cap02_LiouvilleVectField}. From the coordinate expression for $j_{k+1}$, we obtain that $$ d_Tf\left(q_0^A,\ldots,q_{k+1}^A\right) = \sum_{i=0}^{k}q_{i+1}^A \derpar{f}{q_i^A}(q_0^A,\ldots,q_{k}^A) \ . $$ This map $d_T$ extends to a derivation of degree $0$ in $\mit\Omega$ and, as $d_T{\rm d} = {\rm d} d_T$, it is determined by its action on functions and by the property $d_T({\rm d} q_i^A) = {\rm d} q_{i+1}^A$.
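As a simple illustration of Tulczyjew's derivation, take $Q=\mathbb{R}$ with $k=1$ and the function $f = q_0q_1 \in {\rm C}^\infty({\rm T} Q)$. Then $$ d_Tf = q_1\derpar{f}{q_0} + q_2\derpar{f}{q_1} = (q_1)^2 + q_0q_2 \in {\rm C}^\infty({\rm T}^2Q) \ , $$ and, using the commutation rule $d_T{\rm d} = {\rm d} d_T$, $$ d_T({\rm d} f) = {\rm d}(d_Tf) = q_2\,{\rm d} q_0 + 2q_1\,{\rm d} q_1 + q_0\,{\rm d} q_2 \ . $$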
Furthermore, the maps $\mathop{i}\nolimits(J_s)$, $d_{J_s}$, $\mathop{i}\nolimits(\Delta_s)$ and $\mathop{\rm L}\nolimits(\Delta_s)$ extend to $\mit\Omega$ in a natural way. \subsubsection{Higher-order semisprays} \begin{definition} A vector field $X\in{\mathfrak{X}}({\rm T}^kQ)$ is a {\rm semispray of type $r$}, $1 \leq r \leq k$, if for every integral curve $\sigma$ of $X$, we have that, if $\gamma=\beta^k \circ \sigma$, then $\tilde\gamma^{k-r+1} = \rho^k_{k-r+1}\circ\sigma$ (where $\tilde\gamma^{k-r+1}$ is the canonical lifting of $\gamma$ to ${\rm T}^{k-r+1}Q$). $$ \xymatrix{ \ & \ & {\rm T}^kQ \ar[d]_{\rho^k_{k-r+1}} \ar@/^2.5pc/[ddd]^{\beta^k} \\ \mathbb{R} \ar@/^1.5pc/[urr]^{\sigma} \ar@/_1.5pc/[ddrr]_{\beta^k\circ\sigma} \ar[rr]^-{\rho^k_{k-r+1}\circ\sigma} \ar[drr]_{\widetilde{\gamma}^{k-r+1}} & \ & {\rm T}^{k-r+1}Q \ar[d]_{{\rm Id}} \\ \ & \ & {\rm T}^{k-r+1}Q \ar[d]_{\beta^{k-r+1}} \\ \ & \ & Q } $$ In particular, $X\in{\mathfrak{X}}({\rm T}^kQ)$ is a {\rm semispray of type $1$} if for every integral curve $\sigma$ of $X$, we have that, if $\gamma=\beta^k \circ \sigma$, then $\tilde\gamma^k=\sigma$. \end{definition} The local expression of a semispray of type $r$ is $$ X = q_1^A\derpar{}{q_0^A} + q_2^A\derpar{}{q_1^A} + \ldots + q_{k-r+1}^A\derpar{}{q_{k-r}^A} + X_{k-r+1}^A\derpar{}{q_{k-r+1}^A} + \ldots + X_k^A\derpar{}{q_k^A} \ . $$ \begin{prop} The following assertions are equivalent: \begin{enumerate} \item A vector field $X\in{\mathfrak{X}}({\rm T}^kQ)$ is a semispray of type $r$. \item ${\rm T}\rho^k_{k-r} \circ X = j_{k-r+1}$; that is, the following diagram commutes $$ \xymatrix{ {\rm T}({\rm T}^kQ) \ar[drr]^{{\rm T}\rho^k_{k-r}} \\ {\rm T}^kQ \ar[u]^X \ar[rr]^-{j_{k-r+1}} & \ & {\rm T}({\rm T}^{k-r}Q) \ . } $$ \item $J_r(X) = \Delta_r$. \end{enumerate} \end{prop} Obviously, every semispray of type $r$ is a semispray of type $s$, for $s\geq r$.
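For instance, for $k=2$ we have $J_1 = dq_0^A \otimes \derpar{}{q_1^A} + 2\,dq_1^A \otimes \derpar{}{q_2^A}$ and $J_2 = 2\,dq_0^A \otimes \derpar{}{q_2^A}$, and a semispray of type $2$ has the local expression $\displaystyle X = q_1^A\derpar{}{q_0^A} + X_1^A\derpar{}{q_1^A} + X_2^A\derpar{}{q_2^A}$. The third characterization of the proposition is verified at once: $$ J_2(X) = 2q_1^A\derpar{}{q_2^A} = \Delta_2 \ , $$ regardless of the component functions $X_1^A$, $X_2^A$; and if, in addition, $X_1^A = q_2^A$, then $$ J_1(X) = q_1^A\derpar{}{q_1^A} + 2q_2^A\derpar{}{q_2^A} = \Delta_1 \ , $$ so that $X$ is a semispray of type $1$.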
If $X\in{\mathfrak{X}}({\rm T}^kQ)$ is a semispray of type $r$, a curve $\sigma$ in $Q$ is said to be a {\sl path} or {\sl solution} of $X$ if $\tilde{\sigma}^k$ is an integral curve of $X$; that is, $\widetilde{\tilde{\sigma}^k} = X \circ \tilde{\sigma}^k$, where $\widetilde{\tilde{\sigma}^k}$ denotes the canonical lifting of $\tilde{\sigma}^k$ from ${\rm T}^kQ$ to ${\rm T}({\rm T}^kQ)$. Then, in coordinates, $\sigma$ verifies the following system of differential equations of order $k+1$: \begin{align*} \frac{d^{k-r+2}\sigma^A}{dt^{k-r+2}} &= X_{k-r+1}^A\left(\sigma,\frac{d\sigma}{dt},\ldots,\frac{d^k\sigma}{dt^k}\right)\\ & \; \vdots \\ \frac{d^{k+1}\sigma^A}{dt^{k+1}} &= X_k^A\left(\sigma,\frac{d\sigma}{dt},\ldots,\frac{d^k\sigma}{dt^k}\right) \end{align*} Observe that, taking $k=1$, then $r=1$ and $\rho^1_{1-1+1} = {\rm Id}_{{\rm T} Q}$, we recover the definition of the holonomic vector field ({\sc sode} in ${\rm T} Q$). So, semisprays of type $1$ in ${\rm T}^kQ$ are the analogues of the holonomic vector fields in ${\rm T} Q$; that is, they are the vector fields whose integral curves are the canonical liftings to ${\rm T}^kQ$ of curves on the base $Q$. Their local expressions are $$ X = q_1^A\derpar{}{q_0^A} + q_2^A\derpar{}{q_1^A} + \ldots + q_k^A\derpar{}{q_{k-1}^A} + X_k^A\derpar{}{q_k^A}\ . $$ \subsection{Lagrangian formalism} Let $Q$ be an $n$-dimensional differentiable manifold and ${\cal L} \in {\rm C}^\infty({\rm T}^kQ)$. We say that ${\cal L}$ is a Lagrangian function of order $k$. \begin{definition} The {\rm Lagrangian $1$-form} $\theta_{\cal L} \in {\mit\Omega}^1({\rm T}^{2k-1}Q)$, associated to ${\cal L}$ is defined as $$ \theta_{\cal L} = \sum_{r=1}^k (-1)^{r-1} \frac{1}{r!} d_T^{r-1} d_{J_r}{\cal L} \ . $$ Then, the {\rm Lagrangian $2$-form}, $\omega_{\cal L} \in {\mit\Omega}^2({\rm T}^{2k-1}Q)$, associated to ${\cal L}$ is $$ \omega_{\cal L} = -{\rm d} \theta_{\cal L} = \sum_{r=1}^k (-1)^r \frac{1}{r!} d_T^{r-1}{\rm d} d_{J_r}{\cal L} \ .
$$ \end{definition} Observe that the Lagrangian $1$-form is a semibasic form of type $k$ in ${\rm T}^{2k-1}Q$. We assume that $\omega_{\cal L}$ has constant rank (we refer to this fact by saying that ${\cal L}$ is a {\sl geometrically admissible Lagrangian}). \begin{definition} The {\rm Lagrangian energy}, $E_{\cal L} \in {\rm C}^\infty({\rm T}^{2k-1}Q)$, associated to ${\cal L}$ is defined as $$ E_{\cal L} = \left(\sum_{r=1}^k (-1)^{r-1} \frac{1}{r!} d_T^{r-1}(\Delta_r({\cal L}))\right) - (\rho_k^{2k-1})^*{\cal L} $$ \end{definition} It is usual to write ${\cal L}$ instead of $(\rho_k^{2k-1})^*{\cal L}$, and we will do this in the sequel. The coordinate expressions of these elements are \begin{eqnarray} \label{eqn:Cap03_LocalCoordLag1Form} \theta_{\cal L} &=& \sum_{r=1}^k \sum_{i=0}^{k-r}(-1)^i d_T^i\left(\derpar{{\cal L}}{q_{r+i}^A}\right) {\rm d} q_{r-1}^A \\ \omega_{\cal L} &=& \sum_{r=1}^k \sum_{i=0}^{k-r}(-1)^{i+1} d_T^i\,{\rm d}\left(\derpar{{\cal L}}{q_{r+i}^A}\right) \wedge {\rm d} q_{r-1}^A \nonumber \\ \label{eqn:Cap03_LocalCoordLagEnergy} E_{\cal L} &=& \sum_{r=1}^{k} q_{r}^A \sum_{i=0}^{k-r} (-1)^i d_T^i\left( \derpar{{\cal L}}{q_{r+i}^A} \right)- {\cal L} \ . \end{eqnarray} \begin{definition} A Lagrangian function ${\cal L} \in {\rm C}^\infty({\rm T}^kQ)$ is said to be {\rm regular} if $\omega_{\cal L}$ is a symplectic form. Otherwise ${\cal L}$ is a {\rm singular} Lagrangian. \end{definition} To say that ${\cal L}$ is a regular Lagrangian is locally equivalent to saying that the Hessian matrix $\displaystyle \left(\frac{\partial^2{\cal L}}{\partial q_k^B \partial q_k^A}\right)$ is regular at every point of ${\rm T}^kQ$. \begin{definition} A {\rm Lagrangian system of order $k$} is a pair $({\rm T}^{2k-1}Q,{\cal L})$, where $Q$ represents the configuration space and ${\cal L} \in {\rm C}^\infty({\rm T}^kQ)$ is the Lagrangian function. It is said to be a {\rm regular} (resp. {\rm singular}) Lagrangian system if the Lagrangian function ${\cal L}$ is regular (resp.
singular). \end{definition} Thus, in the Lagrangian formalism, ${\rm T}^{2k-1}Q$ represents the phase space of the system. The dynamical trajectories of the system are the integral curves of any vector field $X_{\cal L} \in {\mathfrak{X}}({\rm T}^{2k-1}Q)$ satisfying that: \begin{enumerate} \item It is a solution to the equation \begin{equation}\label{eqn:Cap03_IntrinsicLagEq} \mathop{i}\nolimits(X_{\cal L})\omega_{\cal L} = {\rm d} E_{\cal L} \end{equation} \item It is a semispray of type $1$ in ${\rm T}^{2k-1}Q$. \end{enumerate} Equation (\ref{eqn:Cap03_IntrinsicLagEq}) is the {\sl higher-order Lagrangian equation}, and a vector field $X_{\cal L}$ solution to (\ref{eqn:Cap03_IntrinsicLagEq}) (if it exists) is called a {\sl Lagrangian vector field of order $k$}. If, in addition, $X_{\cal L}$ satisfies condition 2, then it is called an {\sl Euler-Lagrange vector field of order $k$}, and its integral curves on the base are solutions to the {\sl higher-order Euler-Lagrange equations}. In natural coordinates of ${\rm T}^{2k-1}Q$, if $$ X_{\cal L} = \sum_{i=0}^{2k-1} f_i^A \derpar{}{q_i^A} = f_0^A\derpar{}{q_0^A} + f_1^A \derpar{}{q_1^A} + \ldots + f_{2k-1}^A\derpar{}{q_{2k-1}^A} \ , $$ as $$ {\rm d} E_{\cal L} = \sum_{r=1}^k \sum_{i=0}^{k-r}(-1)^i d_T^i \left(\derpar{{\cal L}}{q_{r+i}^A} \right){\rm d} q_r^A + \sum_{r=1}^k q_r^A \sum_{i=0}^{k-r} (-1)^i\sum_{j=0}^{k} d_T^i\left(\frac{\partial^2{\cal L}}{\partial q_j^B\partial q_{r+i}^A} {\rm d} q_j^B \right) - \sum_{r=0}^{k} \derpar{{\cal L}}{q_r^A} {\rm d} q_r^A \ , $$ from (\ref{eqn:Cap03_IntrinsicLagEq}) we obtain \begin{equation} \label{eqn:Cap03_LocalCoordLagEq} \begin{array}{l} \displaystyle \left(f_0^B-q_1^B\right) \frac{\partial^2{\cal L}}{\partial q_k^B\partial q_k^A} = 0 \\[10pt] \displaystyle \left(f_{1}^B - q_{2}^B\right)\frac{\partial^2{\cal L}}{\partial q_k^B\partial q_k^A} - \left(f_0^B-q_{1}^B \right)(\cdots\cdots) = 0 \\ \qquad \qquad \qquad \qquad \vdots \\ \displaystyle \left(f_{2k-2}^B - 
q_{2k-1}^B\right)\frac{\partial^2{\cal L}}{\partial q_k^B\partial q_k^A} - \sum_{i=0}^{2k-3} \left(f_{i}^B-q_{i+1}^B \right) (\cdots\cdots) = 0 \\ \displaystyle (-1)^k\left(f_{2k-1}^B - d_T\left(q_{2k-1}^B\right)\right) \frac{\partial^2{\cal L}}{\partial q_k^B\partial q_k^A} + \sum_{i=0}^{k} (-1)^id_T^i\left( \derpar{{\cal L}}{q_i^A} \right) - \sum_{i=0}^{2k-2} \left(f_{i}^B-q_{i+1}^B \right) (\cdots\cdots) = 0 \, \end{array} \end{equation} where the terms in brackets $(\cdots\cdots)$ contain relations involving partial derivatives of the Lagrangian and applications of $d_T$, which for simplicity are not written. These are the local expressions of the Lagrangian equations for $X_{\cal L}$. Now, if $\sigma \colon \mathbb{R} \to {\rm T}^{2k-1}Q$ is an integral curve of $X_{\cal L}$, from (\ref{eqn:Cap03_IntrinsicLagEq}) we obtain that $\sigma$ must satisfy the {\sl Euler-Lagrange equation} \begin{equation} \label{eqn:Cap03_IntrinsicLagEqCI} \mathop{i}\nolimits(\tilde{\sigma})(\omega_{\cal L} \circ \sigma) = {\rm d} E_{\cal L} \circ \sigma \ , \end{equation} where $\tilde{\sigma}$ denotes the canonical lifting of $\sigma$ to ${\rm T}({\rm T}^{2k-1}Q)$; and as $X_{\cal L}$ is a semispray of type $1$, we have that $\sigma$ is the canonical lifting of a curve $\gamma \colon \mathbb{R} \to Q$ to ${\rm T}^{2k-1}Q$; that is, $\sigma = \tilde{\gamma}^{2k-1}$. Now, if ${\cal L} \in {\rm C}^\infty({\rm T}^kQ)$ is a regular Lagrangian, then $\omega_{\cal L}$ is a symplectic form in ${\rm T}^{2k-1}Q$, and as a consequence we have that: \begin{teor} \label{prop:Cap03_LagVectFieldRegLag} Let $({\rm T}^{2k-1}Q,{\cal L})$ be a regular Lagrangian system of order $k$. \begin{enumerate} \item There exists a unique $X_{\cal L} \in {\mathfrak{X}}({\rm T}^{2k-1}Q)$ which is a solution to the Lagrangian equation (\ref{eqn:Cap03_IntrinsicLagEq}) and is a semispray of type $1$ in ${\rm T}^{2k-1}Q$.
\item If $\gamma \colon \mathbb{R} \to Q$ is a curve such that $\sigma=\tilde{\gamma}^{2k-1}$ is an integral curve of $X_{\cal L}$, then $\gamma$ is a solution to the {\rm Euler-Lagrange equations}: \begin{equation} \label{eqn:Cap03_EulerLagrangeEquations} \derpar{{\cal L}}{q_0^A} \circ \tilde{\gamma}^{2k-1} - \frac{d}{dt}\derpar{{\cal L}}{q_1^A} \circ \tilde{\gamma}^{2k-1} + \frac{d^2}{dt^2}\derpar{{\cal L}}{q_2^A} \circ\tilde{\gamma}^{2k-1}+ \ldots + (-1)^k\frac{d^k}{dt^k}\derpar{{\cal L}}{q_k^A} \circ \tilde{\gamma}^{2k-1} = 0 \ . \end{equation} \end{enumerate} \end{teor} If ${\cal L} \in {\rm C}^\infty({\rm T}^kQ)$ is a singular Lagrangian, then $\omega_{\cal L}$ is a presymplectic form, so the existence and uniqueness of solutions to the Lagrangian equation (\ref{eqn:Cap03_IntrinsicLagEq}) is not assured, except in special cases (for instance, when $\omega_{\cal L}$ is a {\sl presymplectic horizontal structure} \cite{book:DeLeon_Rodrigues85}). In general, in the most favourable cases, equation (\ref{eqn:Cap03_IntrinsicLagEq}) has solutions $X_{\cal L} \in {\mathfrak{X}}({\rm T}^{2k-1}Q)$ on some submanifold $S_f\hookrightarrow{\rm T}^{2k-1}Q$, to which these solution vector fields are tangent. This submanifold is obtained by applying the well-known constraint algorithms (see, for instance, \cite{art:Gotay_Nester_Hinds78,art:Gotay_Nester79,art:Munoz_Roman92}). Nevertheless, these solutions are not necessarily semisprays of type $1$ on $S_f$, but only on the points of another submanifold $M_f\hookrightarrow S_f\hookrightarrow {\rm T}^{2k-1}Q$ (see \cite{art:Gotay_Nester79,art:Munoz_Roman92}). On the points of this last submanifold, the integral curves of $X_{\cal L} \in {\mathfrak{X}}({\rm T}^{2k-1}Q)$ are solutions to the Euler-Lagrange equations (\ref{eqn:Cap03_EulerLagrangeEquations}). A detailed study of higher-order singular Lagrangian systems can be found in \cite{art:Gracia_Pons_Roman91,art:Gracia_Pons_Roman92}.
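We end this discussion with a simple regular example. Consider the second-order Lagrangian ${\cal L} = \frac{1}{2}\delta_{AB}q_2^Aq_2^B \in {\rm C}^\infty({\rm T}^2Q)$, with $Q = \mathbb{R}^n$, whose Hessian matrix $\left(\partial^2{\cal L}/\partial q_2^B\partial q_2^A\right) = (\delta_{AB})$ is everywhere regular. Since $\derpar{{\cal L}}{q_0^A} = \derpar{{\cal L}}{q_1^A} = 0$, the Euler-Lagrange equations (\ref{eqn:Cap03_EulerLagrangeEquations}) reduce to $$ \frac{d^2}{dt^2}\left(\derpar{{\cal L}}{q_2^A}\circ\tilde{\gamma}^{3}\right) = \delta_{AB}\frac{d^4\gamma^B}{dt^4} = 0 \ , $$ whose solutions are the curves $\gamma^A(t)$ that are polynomials of degree at most $3$ in $t$.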
\subsection{Hamiltonian formalism} \label{subsection:hamform} \begin{definition} Let $({\rm T}^{2k-1}Q,{\cal L})$ be a Lagrangian system. The {\rm Legendre-Ostrogradsky map} (or {\rm generalized Legendre map\/}) associated to ${\cal L}$ is the map ${\cal FL} \colon {\rm T}^{2k-1}Q \to {\rm T}^*({\rm T}^{k-1}Q)$ defined as follows: for every $u \in {\rm T}({\rm T}^{2k-1}Q)$, $$ \theta_{\cal L}(u) = \left\langle {\rm T} \rho_{k-1}^{2k-1}(u), {\cal FL}(\tau_{{\rm T}^{2k-1}Q}(u)) \right\rangle $$ \end{definition} This map verifies that $\pi_{{\rm T}^{k-1}Q} \circ {\cal FL} = \rho^{2k-1}_{k-1}$, where $\pi_{{\rm T}^{k-1}Q} \colon {\rm T}^*({\rm T}^{k-1}Q) \to {\rm T}^{k-1}Q$ is the natural projection. Furthermore, if $\theta_{k-1}\in{\mit\Omega}^1({\rm T}^*({\rm T}^{k-1}Q))$ and $\omega_{k-1}=-{\rm d}\theta_{k-1}\in{\mit\Omega}^2({\rm T}^*({\rm T}^{k-1}Q))$ are the canonical $1$ and $2$ forms of the cotangent bundle ${\rm T}^*({\rm T}^{k-1}Q)$, we have that $$ {\cal FL}^*\theta_{k-1} = \theta_{\cal L} \quad , \quad {\cal FL}^*\omega_{k-1} = \omega_{\cal L} \ . $$ Given a local natural chart in ${\rm T}^{2k-1}Q$, we can define the following local functions $$ \hat p^{r-1}_A = \sum_{i=0}^{k-r}(-1)^i d_T^i\left(\derpar{{\cal L}}{q_{r+i}^A}\right) \ . $$ Observe that \begin{align*} \hat p^{r-1}_A - \derpar{{\cal L}}{q_r^A} &= \sum_{i=0}^{k-r}(-1)^i d_T^i\left(\derpar{{\cal L}}{q_{r+i}^A}\right) - \derpar{{\cal L}}{q_r^A} = \sum_{i=1}^{k-r} (-1)^i d_T^i\left(\derpar{{\cal L}}{q_{r+i}^A}\right) \\ &= \sum_{i=0}^{k-r-1} (-1)^{i+1} d_T^{i+1} \left( \derpar{{\cal L}}{q_{r+i+1}^A}\right) = -d_T\left(\sum_{i=0}^{k-(r+1)} (-1)^{i}d_T^i \left( \derpar{{\cal L}}{q_{(r+1)+i}^A}\right) \right) = -d_T(\hat p^r_A) \end{align*} and hence \begin{equation} \label{eqn:Cap04_MomentumCoordRelation} \hat p^{r-1}_A = \derpar{{\cal L}}{q_r^A} - d_T(\hat p^r_A) \quad , \quad 1 \leq r \leq k-1 \ .
\end{equation} Thus, bearing in mind the local expression (\ref{eqn:Cap03_LocalCoordLag1Form}) of the form $\theta_{\cal L}$, we can write $\theta_{\cal L} = \sum_{r=1}^k \hat p^{r-1}_A{\rm d} q_{r-1}^A$, and we obtain that the expression in natural coordinates of the map ${\cal FL}$ is $$ {\cal FL}\left(q_0^A,q_1^A,\ldots,q_{2k-1}^A\right) = \left(q_0^A,q_1^A,\ldots,q_{k-1}^A,p^0_A,p^1_A,\ldots,p^{k-1}_A\right) \ , \ \mbox{\rm with $p^i_A\circ{\cal FL}=\hat p^i_A$} \ . $$ ${\cal L}$ is a regular Lagrangian if, and only if, ${\cal FL} \colon {\rm T}^{2k-1}Q \to {\rm T}^*({\rm T}^{k-1}Q)$ is a local diffeomorphism. As a consequence of this, we have that, if ${\cal L}$ is a regular Lagrangian, then the set $(q_i^A,\hat p^i_A)$, $0\leq i\leq k-1$, is a set of local coordinates in ${\rm T}^{2k-1}Q$, and $(\hat p^i_A)$ are called the {\sl Jacobi-Ostrogradsky momentum coordinates}. Observe that the relation (\ref{eqn:Cap04_MomentumCoordRelation}) means that we can recover all the Jacobi-Ostrogradsky momentum coordinates from the set $(\hat p^{k-1}_A)$. \begin{definition} ${\cal L} \in {\rm C}^\infty({\rm T}^{k}Q)$ is said to be a {\rm hyperregular Lagrangian} of order $k$ if ${\cal FL}$ is a global diffeomorphism. Then, $({\rm T}^{2k-1}Q,{\cal L})$ is a {\rm hyperregular Lagrangian system} of order $k$. \end{definition} As $\pi_{{\rm T}^{k-1}Q} \circ {\cal FL} = \rho^{2k-1}_{k-1}$, this condition is equivalent to demanding that the restriction of $\rho^{2k-1}_{k-1} \colon {\rm T}^{2k-1}Q \to {\rm T}^{k-1}Q$ to every fibre be one-to-one. In order to explain the construction of the canonical Hamiltonian formalism of a Lagrangian higher-order system, we first consider the case of hyperregular systems (the regular case is the same, but restricting to the suitable open submanifolds where ${\cal FL}$ is a local diffeomorphism).
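As an illustration, for the second-order Lagrangian ${\cal L} = \frac{1}{2}\delta_{AB}q_2^Aq_2^B \in {\rm C}^\infty({\rm T}^2Q)$, with $Q=\mathbb{R}^n$ and $k=2$, the Jacobi-Ostrogradsky momenta are $$ \hat p^1_A = \derpar{{\cal L}}{q_2^A} = \delta_{AB}q_2^B \quad , \quad \hat p^0_A = \derpar{{\cal L}}{q_1^A} - d_T(\hat p^1_A) = -\delta_{AB}q_3^B \ , $$ so that $$ {\cal FL}\left(q_0^A,q_1^A,q_2^A,q_3^A\right) = \left(q_0^A,q_1^A,-\delta_{AB}q_3^B,\delta_{AB}q_2^B\right) \ , $$ which is a global diffeomorphism from ${\rm T}^3Q$ to ${\rm T}^*({\rm T} Q)$; hence this Lagrangian is hyperregular. Moreover, by (\ref{eqn:Cap03_LocalCoordLagEnergy}), $E_{\cal L} = q_1^A\hat p^0_A + q_2^A\hat p^1_A - {\cal L} = -\delta_{AB}q_1^Aq_3^B + \frac{1}{2}\delta_{AB}q_2^Aq_2^B$.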
So, $({\rm T}^{2k-1}Q,{\cal L})$ being a hyperregular Lagrangian system, since ${\cal FL}$ is a diffeomorphism, there exists a unique function $h \in {\rm C}^\infty({\rm T}^*({\rm T}^{k-1}Q))$ such that ${\cal FL}^*h = E_{\cal L}$, which is called the {\sl Hamiltonian function} associated to this system. Then the triad $({\rm T}^*({\rm T}^{k-1}Q),\omega_{k-1},h)$ is called the {\sl canonical Hamiltonian system} associated to the hyperregular Lagrangian system $({\rm T}^{2k-1}Q,{\cal L})$. Thus, in the Hamiltonian formalism, ${\rm T}^*({\rm T}^{k-1}Q)$ represents the phase space of the system. The dynamical trajectories of the system are the integral curves of a vector field $X_h \in {\mathfrak{X}}({\rm T}^*({\rm T}^{k-1}Q))$ which is a solution to the {\sl Hamilton equation} \begin{equation} \label{eqn:Cap04_IntrinsicHamEq} \mathop{i}\nolimits(X_h)\omega_{k-1} = {\rm d} h \ . \end{equation} As $\omega_{k-1}$ is symplectic, there is a unique vector field $X_h$ solution to this equation, and it is called the {\sl Hamiltonian vector field}. In natural coordinates of ${\rm T}^*({\rm T}^{k-1}Q)$, $(q_i^A,p^i_A)$ (with $0\leq i\leq k-1$; $1\leq A\leq n$), taking $\displaystyle X_h = f_i^A \derpar{}{q_i^A} + g^i_A \derpar{}{p^i_A}$, as $\displaystyle {\rm d} h = \derpar{h}{q_i^A}{\rm d} q_i^A + \derpar{h}{p^i_A}{\rm d} p^i_A$, and $\omega_{k-1} ={\rm d} q_i^A \wedge{\rm d} p^i_A$, from (\ref{eqn:Cap04_IntrinsicHamEq}) we obtain that $$ f_i^A = \derpar{h}{p^i_A} \quad , \quad g^i_A = -\derpar{h}{q_i^A} \ . 
$$ Now, if $\sigma \colon \mathbb{R} \to{\rm T}^*({\rm T}^{k-1}Q)$ is an integral curve of $X_h$, we have that $\sigma$ must satisfy the {\sl Hamiltonian equation} $$ \mathop{i}\nolimits(\tilde{\sigma})(\omega_{k-1} \circ \sigma) = {\rm d} h \circ \sigma \ , $$ and, if $\sigma(t)=(q_i^A(t),p^i_A(t))$ in coordinates, it gives the classical expression of the Hamilton equations: $$ \frac{dq_i^A}{dt} = \derpar{h}{p^i_A} \circ \sigma \quad , \quad \frac{dp^i_A}{dt} = -\derpar{h}{q_i^A} \circ \sigma \ . $$ For the case of singular higher-order Lagrangian systems, in general there is no way to associate a canonical Hamiltonian formalism, unless some minimal regularity conditions are imposed \cite{art:Gracia_Pons_Roman91}. In particular: \begin{definition} A Lagrangian ${\cal L} \in {\rm C}^\infty({\rm T}^kQ)$ is said to be an {\rm almost-regular Lagrangian function} of order $k$ if: \begin{enumerate} \item ${\cal FL}({\rm T}^{2k-1}Q) = P_o$ is a closed submanifold of ${\rm T}^*({\rm T}^{k-1}Q)$. (We denote the natural embedding by $j_{P_o} \colon P_o \hookrightarrow {\rm T}^*({\rm T}^{k-1}Q)$). \item ${\cal FL}$ is a surjective submersion on its image. \item For every $p \in {\rm T}^{2k-1}Q$, the fibers ${\cal FL}^{-1}({\cal FL}(p))$ are connected submanifolds of ${\rm T}^{2k-1}Q$. \end{enumerate} Then $({\rm T}^{2k-1}Q,{\cal L})$ is an {\rm almost-regular Lagrangian system} of order $k$. \end{definition} Denoting by ${\cal FL}_o \colon {\rm T}^{2k-1}Q \to P_o$ the map defined by ${\cal FL} = j_{P_o} \circ {\cal FL}_o$, we have that the Lagrangian energy $E_{\cal L}$ is a ${\cal FL}_o$-projectable function, and then there is a unique function $h_o\in {\rm C}^\infty(P_o)$ such that ${\cal FL}_o^*h_o = E_{\cal L}$ (see \cite{art:Gracia_Pons_Roman91}).
This $h_o$ is the {\sl canonical Hamiltonian function} of the almost-regular Lagrangian system and, taking $\omega_o = j_{P_o}^*\omega_{k-1}$, the triad $(P_o,\omega_o,h_o)$ is the canonical Hamiltonian system associated to the almost-regular Lagrangian system $({\rm T}^{2k-1}Q,{\cal L})$. For this system we have the Hamilton equation \begin{equation} \label{sub0} \mathop{i}\nolimits(X_{h_o})\omega_o = {\rm d} h_o \quad , \quad X_{h_o}\in{\mathfrak{X}}(P_o) \ . \end{equation} As $\omega_o$ is, in general, a presymplectic form, in the best cases, this equation has some vector field $X_{h_o}$ solution only on the points of some submanifold $P_f\hookrightarrow P_o\hookrightarrow {\rm T}^*({\rm T}^{k-1}Q)$, to which $X_{h_o}$ is tangent. This vector field is not unique, in general. It can be proved that $P_f = {\cal FL}(S_f)$, where $S_f\hookrightarrow{\rm T}^{2k-1}Q$ is the submanifold where there are vector field solutions to the Lagrangian equation (\ref{eqn:Cap03_IntrinsicLagEq}) which are tangent to $S_f$ (see the above section). Furthermore, as ${\cal FL}_o$ is a submersion, for every vector field $X_{h_o} \in {\mathfrak{X}}(P_o)$ which is a solution to the Hamilton equation (\ref{sub0}) on $P_f$, and tangent to $P_f$, there exists some semispray of type $1$, $X_{\cal L} \in {\mathfrak{X}}({\rm T}^{2k-1}Q)$, which is a solution of the Euler-Lagrange equation on $S_f$, and tangent to $S_f$, such that ${{\cal FL}_o}_*X_{\cal L}= X_{h_o}$. This ${\cal FL}_o$-projectable semispray of type $1$ could be defined only on the points of another submanifold $M_f\hookrightarrow S_f$. (See \cite{art:Gracia_Pons_Roman91,art:Gracia_Pons_Roman92} for a detailed exposition of all these topics). \section{Skinner-Rusk unified formalism} \label{SkinnerRusk} \subsection{Unified phase space. Geometric and dynamical structures} Let ${\cal L} \in {\rm C}^\infty({\rm T}^{k}Q)$ be the Lagrangian function of order $k$ of the system.
First we construct the {\sl unified phase space} $$ {\cal W} = {\rm T}^{2k-1}Q \times_{{\rm T}^{k-1}Q} {\rm T}^*({\rm T}^{k-1}Q) $$ (the fiber product of the above bundles), which is endowed with the canonical projections $$ \operatorname{pr}_1 \colon {\rm T}^{2k-1}Q \times_{{\rm T}^{k-1}Q} {\rm T}^*({\rm T}^{k-1}Q) \to {\rm T}^{2k-1}Q \quad ; \quad \operatorname{pr}_2 \colon {\rm T}^{2k-1}Q \times_{{\rm T}^{k-1}Q} {\rm T}^*({\rm T}^{k-1}Q) \to {\rm T}^*({\rm T}^{k-1}Q) \ , $$ and also with the canonical projections onto ${\rm T}^{k-1}Q$. So we have the diagram: $$ \xymatrix{ \ & {\cal W} \ar[dl]_-{\operatorname{pr}_1} \ar[dr]^-{\operatorname{pr}_2} & \ \\ {\rm T}^{2k-1}Q \ar[dr]_-{\rho^{2k-1}_{k-1}} \ar@/_1.3pc/[ddr]_-{\beta^{2k-1}} & \ & {\rm T}^*({\rm T}^{k-1}Q) \ar[dl]^-{\pi_{{\rm T}^{k-1}Q}} \\ \ & {\rm T}^{k-1}Q \ar[d]_-{\beta^{k-1}} & \ \\ \ & Q & \ } $$ If $(U,q_0^A)$ is a local chart of coordinates in $Q$, denoting by $((\beta^{2k-1})^{-1}(U);q_0^A,q_1^A,\ldots,q_{2k-1}^A)$ and $((\beta^{k-1}\circ\pi_{{\rm T}^{k-1}Q})^{-1}(U);q_0^A,q_1^A,\ldots,q_{k-1}^A,p^0_A,p^1_A,\ldots,p^{k-1}_A)$ the induced charts in ${\rm T}^{2k-1}Q$ and in ${\rm T}^*({\rm T}^{k-1}Q)$, respectively, we have that $(q_0^A,\ldots,q_{k-1}^A;q_{k}^A,\ldots,q_{2k-1}^A;p^0_A,\ldots,p^{k-1}_A)$ are the natural coordinates in the suitable open domain $W\subset{\cal W}$. Note that $\dim({\cal W}) = 3kn$. The bundle ${\cal W}$ is endowed with some canonical geometric structures. First, let $\omega_{k-1}\in{\mit\Omega}^2({\rm T}^*({\rm T}^{k-1}Q))$ be the canonical symplectic form of ${\rm T}^*({\rm T}^{k-1}Q)$. Then we define $$ \Omega = \operatorname{pr}_2^*\omega_{k-1} \in {\mit\Omega}^2({\cal W}) \ , $$ which is a presymplectic form in ${\cal W}$, whose local expression is \begin{equation}\label{eqn:Cap06_LocalCoordOmega} \Omega = \operatorname{pr}_2^*\omega_{k-1} = \operatorname{pr}_2^*\left({\rm d} q_i^A \wedge {\rm d} p^i_A\right) = {\rm d} q_i^A \wedge {\rm d} p^i_A \ .
\end{equation} Observe that \begin{equation} \label{eqn:Cap06_LocalBasisKerOmega} \ker\,\Omega = \left\langle \derpar{}{q_k^A},\ldots,\derpar{}{q_{2k-1}^A} \right\rangle= {\mathfrak{X}}^{V(\operatorname{pr}_2)}({\cal W}) \ . \end{equation} The second relevant canonical structure in ${\cal W}$ is the following: \begin{definition} Let $p \in {\rm T}^{2k-1}Q$, its projection $q = \rho^{2k-1}_{k-1}(p)$ to ${\rm T}^{k-1}Q$, and a covector $\alpha_q \in {\rm T}_q^*({\rm T}^{k-1}Q)$. The {\rm coupling function} ${\cal C} \in {\rm C}^\infty({\cal W})$ is defined as follows: \begin{equation}\label{eqn:Cap06_DefCouplingFunc} \begin{array}{rcl} {\cal C} \colon {\rm T}^{2k-1}Q \times_{{\rm T}^{k-1}Q} {\rm T}^*({\rm T}^{k-1}Q) & \longrightarrow & \mathbb{R} \\ (p,\alpha_q) & \longmapsto & \langle \alpha_q \mid j_{k}(p)_q \rangle \end{array} \ , \end{equation} where $j_{k} \colon {\rm T}^{2k-1}Q \to {\rm T}({\rm T}^{k-1}Q)$ is the canonical injection introduced in (\ref{eqn:Cap02_DefCanonicalImmersion}), $j_{k}(p)_q$ is the corresponding tangent vector to ${\rm T}^{k-1}Q$ in $q$, and $\langle \alpha_q \mid j_{k}(p)_q \rangle \equiv \alpha_q(j_{k}(p)_q)$ denotes the canonical pairing between vectors of ${\rm T}_q({\rm T}^{k-1}Q)$ and covectors of ${\rm T}^*_q({\rm T}^{k-1}Q)$. \end{definition} Note that, in this case, $j_k \colon {\rm T}^{2k-1}Q \to {\rm T}({\rm T}^{k-1}Q)$ is a diffeomorphism. In local coordinates, if $p = (q_0^A,\ldots,q_{k-1}^A,q_k^A,\ldots,q_{2k-1}^A)$, then $q = \rho^{2k-1}_{k-1}(p) = (q_0^A,\ldots,q_{k-1}^A)$, and bearing in mind the local expression (\ref{eqn:Cap02_LocalCoordCanonicalImmersion}) of $j_{k}$, we have $j_{k}(p) = (q_0^A,\ldots,q_{k-1}^A,q_1^A,\ldots,q_k^A)$.
Therefore if $\displaystyle j_{k}(p)_q = q_{i+1}^A\restric{\derpar{}{q_i^A}}{q} \in {\rm T}_q({\rm T}^{k-1}Q)$, and if $\displaystyle \alpha_q = p^i_A \restric{{\rm d} q_i^A}{q}$ we obtain the following local expression for the coupling function ${\cal C}$ \begin{equation}\label{eqn:Cap06_LocalCoordCouplingFunc} {\cal C}(p,\alpha_q) = \langle \alpha_q \mid j_{k}(p)_q \rangle = \left\langle p^i_A \restric{dq_i^A}{q} \bigg| \, q_{i+1}^A \restric{\derpar{}{q_i^A}}{q} \right\rangle = \restric{p^i_Aq_{i+1}^A}{q} \ . \end{equation} Observe that, if $k=1$, the map $j_1 \colon {\rm T} Q \to {\rm T} Q$ is the identity on ${\rm T} Q$, and we recover the standard canonical coupling between vectors in ${\rm T}_pQ$ and covectors in ${\rm T}^*_pQ$. Using the coupling function, given a Lagrangian function ${\cal L}\in{\rm C}^\infty({\rm T}^kQ)$, we can define the {\sl Hamiltonian function} $H \in {\rm C}^\infty({\cal W})$ as \begin{equation}\label{eqn:Cap06_DefUnifiedHamiltFunc} H = {\cal C} - (\rho^{2k-1}_k\circ\operatorname{pr}_1)^*{\cal L} \ , \end{equation} whose coordinate expression is \begin{equation} \label{eqn:Cap06_LocalCoordUnifiedHamiltFunc} H = p^i_Aq_{i+1}^A - {\cal L}(q_0^A,\ldots,q_k^A) \ . \end{equation} Now, $({\cal W},\Omega,H)$ is a presymplectic Hamiltonian system. Finally, in order to give a complete description of the dynamics of higher-order Lagrangian systems, we need to introduce the following concept: \begin{definition} A vector field $X\in{\mathfrak{X}}({\cal W})$ is said to be a {\rm semispray of type $r$} in ${\cal W}$ if, for every integral curve $\sigma \colon I \subset\mathbb{R} \to {\cal W}$ of $X$, the curve $\sigma_1 = \operatorname{pr}_1 \circ \sigma \colon I \to {\rm T}^{2k-1}Q$ satisfies that, if $\gamma = \beta^{2k-1}\circ\sigma_1$, $\tilde{\gamma}^{2k-r} = \rho^{2k-1}_{2k-r}\circ\sigma_1$. In particular, $X\in{\mathfrak{X}}({\cal W})$ is a {\rm semispray of type $1$} if $\tilde{\gamma}^{2k-1} = \sigma_1$. 
\end{definition} The local expression of a semispray of type $r$ in ${\cal W}$ is $$ X = \sum_{i=0}^{2k-1-r}q_{i+1}^A\derpar{}{q_i^A} + \sum_{i=2k-r}^{2k-1}X_i^A\derpar{}{q_i^A} +\sum_{i=0}^{k-1}G^i_A\derpar{}{p^i_A} \ , $$ and, in particular, for a semispray of type $1$ in ${\cal W}$ we have $$ X = \sum_{i=0}^{2k-2}q_{i+1}^A\derpar{}{q_i^A} + X_{2k-1}^A\derpar{}{q_{2k-1}^A} +\sum_{i=0}^{k-1}G^i_A\derpar{}{p^i_A} \ . $$ \subsection{Dynamical vector fields} \subsubsection{Dynamics in ${\cal W} = {\rm T}^{2k-1}Q \times_{{\rm T}^{k-1}Q} {\rm T}^*({\rm T}^{k-1}Q)$} \label{section:DynamicsW} As we know, the dynamical equation of the presymplectic Hamiltonian system $({\cal W},\Omega,H)$ is geometrically written as \begin{equation}\label{eqn:Cap06_EqDinImp} \mathop{i}\nolimits(X)\Omega = {\rm d} H \quad ; \quad X \in {\mathfrak{X}}({\cal W}) \ . \end{equation} Then, according to \cite{art:Gotay_Nester_Hinds78} we have: \begin{prop} \label{prop:Cap06_ExistSolEqDin} Given the presymplectic Hamiltonian system $({\cal W}, \Omega,H)$, a solution $X \in {\mathfrak{X}}({\cal W})$ to equation (\ref{eqn:Cap06_EqDinImp}) exists only on the points of the submanifold ${\cal W}_c \hookrightarrow {\cal W}$ defined by \begin{equation} \label{W0} {\cal W}_c = \left\{ p \in {\cal W} \colon \xi(p) \equiv (\mathop{i}\nolimits(Y){\rm d} H)(p) = 0 \ , \ \forall \, Y \in \ker\,\Omega \right\} \ . \end{equation} \end{prop} We have the following result: \begin{prop} \label{prop:Cap06_W0GrafFL} The submanifold ${\cal W}_c \hookrightarrow {\cal W}$ contains a submanifold ${\cal W}_o \hookrightarrow {\cal W}_c$ which is the graph of the Legendre-Ostrogradsky map defined by ${\cal L}$; that is, ${\cal W}_o = {\rm graph}\,{\cal FL}$. \end{prop} \begin{proof} As ${\cal W}_c$ is defined by (\ref{W0}), it suffices to prove that the constraints defining ${\cal W}_c$ give rise to those defining the graph of the Legendre-Ostrogradsky map associated to ${\cal L}$. We make this calculation in coordinates. 
Taking the local expression (\ref{eqn:Cap06_LocalCoordUnifiedHamiltFunc}) of the Hamiltonian function $H \in {\rm C}^\infty({\cal W})$, we have $$ {\rm d} H = \sum_{i=0}^{k-1}(q_{i+1}^A{\rm d} p^i_A + p^i_A{\rm d} q_{i+1}^A) - \sum_{i=0}^{k} \derpar{{\cal L}}{q_i^A}{\rm d} q_i^A \ , $$ and using the local basis of $\ker\,\Omega$ given in (\ref{eqn:Cap06_LocalBasisKerOmega}), we obtain that the equations defining the submanifold ${\cal W}_c$ are $$ \mathop{i}\nolimits(Y){\rm d} H = 0 \Longleftrightarrow p^{k-1}_A - \derpar{{\cal L}}{q_k^A} = 0 \ . $$ Observe that these expressions relate the momentum coordinates $p^{k-1}_A$ with the Jacobi-Ostrogradsky functions $\displaystyle \hat p^{k-1}_A= \partial {\cal L} / \partial q_k^A$, and so we obtain the last group of equations of the Legendre-Ostrogradsky map. Furthermore, in Section \ref{subsection:hamform} we have seen that the other Jacobi-Ostrogradsky functions $\hat p^{r-1}_A$ ($1\leq r\leq k-1$) satisfy the relations (\ref{eqn:Cap04_MomentumCoordRelation}). Thus we can consider that ${\cal W}_c$ contains a submanifold ${\cal W}_o$ which can be identified with the graph of a map $$ \begin{array}{rcl} F \colon {\rm T}^{2k-1}Q & \longrightarrow & {\rm T}^*({\rm T}^{k-1}Q) \\ (q_i^A) & \longmapsto & (q_0^A,\ldots,q_{k-1}^A,p^0_A,\ldots,p^{k-1}_A) \end{array} $$ which we identify with the Legendre-Ostrogradsky map by making the identification $p^{r-1}_A=\hat p^{r-1}_A$. \end{proof} \textbf{Remark}: The submanifold ${\cal W}_o$ can be obtained from ${\cal W}_c$ using a constraint algorithm. Hence, ${\cal W}_o$ acts as the initial phase space of the system. We denote by $j_o \colon {\cal W}_o \hookrightarrow {\cal W}$ the natural embedding and by ${\mathfrak{X}}_{{\cal W}_o}({\cal W})$ the set of vector fields in ${\cal W}$ with support on ${\cal W}_o$.
Hence, we look for vector fields $X\in{\mathfrak{X}}_{{\cal W}_o}({\cal W})$ which are solutions to equation (\ref{eqn:Cap06_EqDinImp}) with support on ${\cal W}_o$; that is \begin{equation}\label{eqn:Cap06_EqDinSupW0} \restric{\left[\mathop{i}\nolimits(X)\Omega - {\rm d} H\right]}{{\cal W}_o} = 0 \ . \end{equation} In natural coordinates, a generic vector field in ${\mathfrak{X}}({\cal W})$ is $$ X = \sum_{i=0}^{k-1}f_i^A\derpar{}{q_i^A} + \sum_{i=k}^{2k-1}F_i^A\derpar{}{q_i^A} + \sum_{i=0}^{k-1}G^i_A\derpar{}{p^i_A} \ . $$ Bearing in mind the local expressions of $\Omega$ and ${\rm d} H$, from (\ref{eqn:Cap06_EqDinImp}) we obtain the following system of $(2k+1)n$ equations \begin{eqnarray} f_i^A = q_{i+1}^A \ , \label{eqn:Cap06_SODE}\\ G^0_A = \displaystyle \derpar{{\cal L}}{q_0^A} \quad , \quad G^i_A = \displaystyle \derpar{{\cal L}}{q_i^A} -p^{i-1}_A = d_T(p^i_A) \ , \label{eqn:Cap06_EqDin} \\ p^{k-1}_A - \derpar{{\cal L}}{q_k^A} = 0 \ , \label{eqn:Cap06_FL} \end{eqnarray} where $0 \leqslant i \leqslant k-1$ in (\ref{eqn:Cap06_SODE}) and $1 \leqslant i \leqslant k-1$ in (\ref{eqn:Cap06_EqDin}). Therefore \begin{equation} \label{Xcoor} X = \sum_{i=0}^{k-1}q_{i+1}^A\derpar{}{q_i^A} + \sum_{i=k}^{2k-1}F_i^A\derpar{}{q_i^A} + \derpar{{\cal L}}{q_0^A}\derpar{}{p^0_A} + \sum_{i=1}^{k-1}d_T(p^i_A)\derpar{}{p^i_A} \ . \end{equation} We can observe that equations (\ref{eqn:Cap06_FL}) are just a compatibility condition which, together with the other conditions for the momenta, shows that the vector fields $X$ exist only with support on the submanifold defined by the graph of the Legendre-Ostrogradsky map. So we recover, in coordinates, the result stated in Propositions \ref{prop:Cap06_ExistSolEqDin} and \ref{prop:Cap06_W0GrafFL}. Furthermore, this local expression shows that $X$ is a semispray of type $k$ in ${\cal W}$. The component functions $F_i^A$, $k \leqslant i \leqslant 2k-1$, are undetermined.
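For $k=1$ these equations collapse to the familiar first-order picture: the constraint (\ref{eqn:Cap06_FL}) is the ordinary Legendre map $p^0_A = \partial{\cal L}/\partial q_1^A$, and (\ref{eqn:Cap06_EqDin}) reduces to $G^0_A = \partial{\cal L}/\partial q_0^A$. The following symbolic sketch (in Python's sympy, added here only as an illustration; the harmonic-oscillator Lagrangian is an assumption, not part of the text) checks this specialization:

```python
import sympy as sp

# k = 1 case of equations (SODE), (EqDin), (FL): f_0 = q_1, G^0 = dL/dq_0,
# and the Legendre constraint p^0 = dL/dq_1. The harmonic-oscillator
# Lagrangian below is chosen purely for illustration.
t = sp.symbols('t')
m, omega = sp.symbols('m omega', positive=True)
q = sp.Function('q')(t)
q0, q1 = q, q.diff(t)          # coordinates q_0, q_1 modelled on a curve q(t)

L = sp.Rational(1, 2) * m * q1**2 - sp.Rational(1, 2) * m * omega**2 * q0**2

p0 = sp.diff(L, q1)            # constraint (FL): p^0 = dL/dq_1 = m q_1
G0 = sp.diff(L, q0)            # equation (EqDin): G^0 = dL/dq_0 = -m omega^2 q_0

# Along an integral curve, dot q_0 = q_1 and dot p^0 = G^0 combine into
# the Euler-Lagrange equation of L:
EL = sp.simplify(p0.diff(t) - G0)
```

Here sympy's `diff` with respect to `q(t)` and `q(t).diff(t)` treats the curve and its derivative as independent variables, which is exactly the variational convention used in the coordinate expressions above.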
Nevertheless, we must study the tangency of $X$ to the submanifold ${\cal W}_o$; that is, we have to impose that $\restric{\mathop{\rm L}\nolimits(X)\xi}{{\cal W}_o} \equiv \restric{X(\xi)}{{\cal W}_o} = 0$, for every constraint function $\xi$ defining ${\cal W}_o$. So, taking into account Prop. \ref{prop:Cap06_W0GrafFL}, these conditions lead to \begin{align*} &\left(\sum_{i=0}^{k-1}q_{i+1}^A\derpar{}{q_i^A} + \sum_{i=k}^{2k-1}F_i^A\derpar{}{q_i^A} + \derpar{{\cal L}}{q_0^A}\derpar{}{p^0_A} + \sum_{i=1}^{k-1}d_T(p^i_A)\derpar{}{p^i_A}\right) \left( p^{k-1}_A - \derpar{{\cal L}}{q_k^A} \right) = 0 \\ &\left(\sum_{i=0}^{k-1}q_{i+1}^A\derpar{}{q_i^A} + \sum_{i=k}^{2k-1}F_i^A\derpar{}{q_i^A} + \derpar{{\cal L}}{q_0^A}\derpar{}{p^0_A} + \sum_{i=1}^{k-1}d_T(p^i_A)\derpar{}{p^i_A}\right) \left( p^{k-2}_A - \sum_{i=0}^{1}(-1)^i d_T^i\left(\derpar{{\cal L}}{q_{k-1+i}^A}\right) \right) = 0 \\ &\qquad \qquad \qquad \qquad \vdots \\ &\left(\sum_{i=0}^{k-1}q_{i+1}^A\derpar{}{q_i^A} + \sum_{i=k}^{2k-1}F_i^A\derpar{}{q_i^A} + \derpar{{\cal L}}{q_0^A}\derpar{}{p^0_A} + \sum_{i=1}^{k-1}d_T(p^i_A)\derpar{}{p^i_A}\right) \left( p^{1}_A - \sum_{i=0}^{k-2}(-1)^i d_T^i\left(\derpar{{\cal L}}{q_{2+i}^A}\right) \right) = 0 \\ &\left(\sum_{i=0}^{k-1}q_{i+1}^A\derpar{}{q_i^A} + \sum_{i=k}^{2k-1}F_i^A\derpar{}{q_i^A} + \derpar{{\cal L}}{q_0^A}\derpar{}{p^0_A} + \sum_{i=1}^{k-1}d_T(p^i_A)\derpar{}{p^i_A}\right) \left( p^{0}_A - \sum_{i=0}^{k-1}(-1)^i d_T^i\left(\derpar{{\cal L}}{q_{1+i}^A}\right) \right) = 0 \ , \end{align*} and, from here, we obtain the following $kn$ equations \begin{equation} \label{eqn:Cap06_TanVectFieldX} \begin{array}{l} \displaystyle \left(F_k^B-q_{k+1}^B\right)\derpars{{\cal L}}{q_k^B}{q_k^A} = 0 \\[10pt] \displaystyle \left(F_{k+1}^B - q_{k+2}^B\right)\derpars{{\cal L}}{q_k^B}{q_k^A} - \left(F_k^B-q_{k+1}^B \right) d_T\left(\derpars{{\cal L}}{q_k^B}{q_k^A}\right) = 0 \\ \qquad \qquad \qquad \qquad \vdots \\ \displaystyle \left(F_{2k-2}^B - 
q_{2k-1}^B\right)\derpars{{\cal L}}{q_k^B}{q_k^A} - \sum_{i=0}^{k-3} \left(F_{k+i}^B-q_{k+i+1}^B \right) (\cdots\cdots) = 0 \\ \displaystyle (-1)^k\left(F_{2k-1}^B - d_T\left(q_{2k-1}^B\right)\right) \derpars{{\cal L}}{q_k^B}{q_k^A} + \sum_{i=0}^{k} (-1)^id_T^i\left( \derpar{{\cal L}}{q_i^A} \right) - \sum_{i=0}^{k-2} \left(F_{k+i}^B-q_{k+i+1}^B \right) (\cdots\cdots) = 0 \ , \end{array} \end{equation} where the terms in brackets $(\cdots\cdots)$ contain relations involving partial derivatives of the Lagrangian and applications of $d_T$ which for simplicity are not written. These are just the Lagrangian equations for the components of $X$, as we have seen in (\ref{eqn:Cap03_LocalCoordLagEq}). These equations can be compatible or not, and a sufficient condition to ensure compatibility is the regularity of the Lagrangian function. In particular, we have: \begin{prop} \label{prop:Cap06_RegLag} If ${\cal L} \in {\rm C}^\infty({\rm T}^kQ)$ is a regular Lagrangian function, then there exists a unique vector field $X \in {\mathfrak{X}}_{{\cal W}_o}({\cal W})$ which is a solution to equation (\ref{eqn:Cap06_EqDinSupW0}); it is tangent to ${\cal W}_o$, and is a semispray of type $1$ in ${\cal W}$. \end{prop} \begin{proof} As the Lagrangian function ${\cal L}$ is regular, the Hessian matrix $\displaystyle \left(\derpars{{\cal L}}{q_k^B}{q_k^A}\right)$ is regular at every point, and this allows us to solve the above $k$ systems of $n$ equations (\ref{eqn:Cap06_TanVectFieldX}) determining all the functions $F_i^A$ uniquely, as follows \begin{eqnarray} \label{eqn:Cap06_LagEqReg} F_i^A = q_{i+1}^A \quad , \quad (k \leqslant i \leqslant 2k-2) \\ (-1)^k\left(F_{2k-1}^B - d_T\left(q_{2k-1}^B\right)\right) \derpars{{\cal L}}{q_k^B}{q_k^A} + \sum_{i=0}^{k} (-1)^id_T^i\left( \derpar{{\cal L}}{q_i^A} \right) = 0 \ . \nonumber \end{eqnarray} In this way, the tangency condition holds for $X$ at every point on ${\cal W}_o$. 
Furthermore, the equalities (\ref{eqn:Cap06_LagEqReg}) show that $X$ is a semispray of type $1$ in ${\cal W}$. \end{proof} However, if ${\cal L}$ is not regular, the equations (\ref{eqn:Cap06_TanVectFieldX}) may or may not be compatible. In the most favourable cases, there is a submanifold ${\cal W}_f \hookrightarrow {\cal W}_o$ (it could be ${\cal W}_f = {\cal W}_o$) such that there exist vector fields $X\in{\mathfrak{X}}_{{\cal W}_o}({\cal W})$, tangent to ${\cal W}_f$, which are solutions to the equation \begin{equation} \label{eqn:Cap06_EqDinSupWf} \restric{\left[\mathop{i}\nolimits(X)\Omega - {\rm d} H\right]}{{\cal W}_f} = 0 \ . \end{equation} In this case, the equations (\ref{eqn:Cap06_TanVectFieldX}) are not compatible everywhere on ${\cal W}_o$, and the compatibility condition gives rise to the new constraints defining ${\cal W}_f$. \subsubsection{Dynamics in ${\rm T}^{2k-1}Q$} Now we study how to recover the Lagrangian dynamics from the dynamics in the unified formalism, using the dynamical vector fields. First we have the following results: \begin{prop} \label{prop:Cap06_pr1Difeo} The map $\overline{\operatorname{pr}}_1 = \operatorname{pr}_1 \circ j_o \colon {\cal W}_o \to {\rm T}^{2k-1} Q$ is a diffeomorphism. \end{prop} \begin{proof} As ${\cal W}_o = {\rm graph}\,{\cal FL}$, we have that ${\rm T}^{2k-1} Q \simeq {\cal W}_o$. Furthermore, $\overline{\operatorname{pr}}_1$ is a surjective submersion and, by the equality between dimensions, it is also an injective immersion and hence it is a diffeomorphism. \end{proof} \begin{lem} \label{lemma:Cap06_LagForm} If $\omega_{k-1} \in {\mit\Omega}^2({\rm T}^*({\rm T}^{k-1}Q))$ is the canonical symplectic $2$-form in ${\rm T}^*({\rm T}^{k-1}Q)$, and $\omega_{{\cal L}} = {\cal FL}^*\omega_{k-1}$ is the Lagrangian $2$-form, then $\Omega = \operatorname{pr}_1^*\omega_{\cal L}$.
\end{lem} \begin{proof} In fact, $$ \operatorname{pr}_1^*\omega_{{\cal L}} = \operatorname{pr}_1^*({\cal FL}^*\omega_{k-1})=({\cal FL} \circ \operatorname{pr}_1)^*\omega_{k-1}= \operatorname{pr}_2^*\omega_{k-1}= \Omega \ . $$ \end{proof} \begin{lem} \label{lemma:Cap06_LagEnergy} There exists a unique function $E_{\cal L} \in {\rm C}^\infty({\rm T}^{2k-1} Q)$ such that $\operatorname{pr}_1^*E_{\cal L}=H$. This function $E_{\cal L}$ is the Lagrangian energy. \end{lem} \begin{proof} As $\overline{\operatorname{pr}}_1$ is a diffeomorphism, we can define the function $E_{\cal L}=(j_o \circ \overline{\operatorname{pr}}_1^{-1})^*H \in {\rm C}^\infty({\rm T}^{2k-1} Q)$, which obviously verifies that $\operatorname{pr}_1^*E_{\cal L}=H$. In order to prove that $E_{\cal L}$ is the Lagrangian energy defined previously, we calculate its local expression in coordinates. Thus, from (\ref{eqn:Cap06_LocalCoordUnifiedHamiltFunc}) we obtain that $$ \overline{\operatorname{pr}}_1^*E_{\cal L} = H=\sum_{i=0}^{k-1} p^i_Aq_{i+1}^A - {\cal L}(q_0^A,\ldots,q_k^A) \ , $$ but ${\cal W}_o\hookrightarrow {\cal W}$ is the graph of the Legendre-Ostrogradsky map, and by Prop. \ref{prop:Cap06_W0GrafFL} we have $\displaystyle p^i_A = \sum_{j=0}^{k-i-1}(-1)^j d_T^j\left(\derpar{{\cal L}}{q_{i+1+j}^A}\right)$, and then \begin{align*} \overline{\operatorname{pr}}_1^*E_{\cal L} &= \sum_{i=0}^{k-1}\sum_{j=0}^{k-i-1} q_{i+1}^A(-1)^j d_T^j\left(\derpar{{\cal L}}{q_{i+1+j}^A}\right) - {\cal L}(q_0^A,\ldots,q_k^A) \\ &= \sum_{i=1}^k \sum_{j=0}^{k-i}q_i^A (-1)^jd_T^j\left(\derpar{{\cal L}}{q_{i+j}^A}\right) - {\cal L}(q_0^A,\ldots,q_k^A) \ .
\end{align*} Now, as $\overline{\operatorname{pr}}_1 = \operatorname{pr}_1 \circ j_o$ and $\operatorname{pr}_1^*q_i^A = q_i^A$, we obtain finally \begin{equation*} E_{\cal L} = \sum_{i=1}^k\sum_{j=0}^{k-i} q_i^A (-1)^jd_T^j\left(\derpar{{\cal L}}{q_{i+j}^A}\right) -{\cal L}(q_0^A,\ldots,q_k^A) \end{equation*} which is the local expression (\ref{eqn:Cap03_LocalCoordLagEnergy}) of the Lagrangian energy. \end{proof} Using these results, we can recover an Euler-Lagrange vector field in ${\rm T}^{2k-1}Q$ starting from a vector field $X \in {\mathfrak{X}}_{{\cal W}_0}({\cal W})$ tangent to ${\cal W}_o$, a solution to (\ref{eqn:Cap06_EqDinSupW0}). First we have: \begin{lem} \label{lemma:Cap06_LagVectField} Let $X \in {\mathfrak{X}}({\cal W})$ be a vector field tangent to ${\cal W}_o$. Then there exists a unique vector field $X_{\cal L}\in {\mathfrak{X}}({\rm T}^{2k-1}Q)$ such that $X_{\cal L} \circ \operatorname{pr}_1 \circ j_o = {\rm T}\operatorname{pr}_1 \circ X \circ j_o$. \end{lem} \begin{proof} As $X \in {\mathfrak{X}}({\cal W})$ is tangent to ${\cal W}_o$, there exists a vector field $X_o \in {\mathfrak{X}}({\cal W}_o)$ such that ${\rm T} j_o \circ X_o = X \circ j_o$. Furthermore, as $\overline{\operatorname{pr}}_1$ is a diffeomorphism, there is a unique vector field $X_{\cal L} \in {\mathfrak{X}}({\rm T}^{2k-1}Q)$ which is $\overline{\operatorname{pr}}_1$-related with $X_o$; that is, $X_{\cal L} \circ \overline{\operatorname{pr}}_1 = {\rm T}\overline{\operatorname{pr}}_1 \circ X_o$. 
Then $$ X_{\cal L} \circ \operatorname{pr}_1 \circ j_o = X_{\cal L} \circ \overline{\operatorname{pr}_1} = {\rm T}\overline{\operatorname{pr}}_1 \circ X_o = {\rm T}\operatorname{pr}_1 \circ {\rm T} j_o \circ X_o = {\rm T}\operatorname{pr}_1 \circ X \circ j_o $$ \end{proof} And as a consequence we obtain: \begin{teor} \label{thm:Cap06_CorrX-XL} Let $X \in {\mathfrak{X}}_{{\cal W}_o}({\cal W})$ be a vector field solution to equation (\ref{eqn:Cap06_EqDinSupW0}) and tangent to ${\cal W}_o$ (at least on the points of a submanifold ${\cal W}_f \hookrightarrow {\cal W}_o$). Then there exists a unique semispray of type $k$, $X_{\cal L} \in {\mathfrak{X}}({\rm T}^{2k-1} Q)$, which is a solution to the equation \begin{equation} \label{eqn:Cap06_EqLag} \mathop{i}\nolimits(X_{\cal L})\omega_{\cal L} - dE_{\cal L} = 0 \end{equation} (at least on the points of $S_f = \operatorname{pr}_1({\cal W}_f)$). In addition, if ${\cal L} \in {\rm C}^\infty({\rm T}^kQ)$ is a regular Lagrangian, then $X_{\cal L}$ is a semispray of type $1$, and hence it is the Euler-Lagrange vector field. \noindent Conversely, if $X_{\cal L} \in {\mathfrak{X}}({\rm T}^{2k-1}Q)$ is a semispray of type $k$ (resp., of type $1$), which is a solution to equation (\ref{eqn:Cap06_EqLag}) (at least on the points of a submanifold $S_f \hookrightarrow {\rm T}^{2k-1}Q$), then there exists a unique vector field $X \in {\mathfrak{X}}_{{\cal W}_o}({\cal W})$ which is a solution to equation (\ref{eqn:Cap06_EqDinSupW0}) (at least on ${\cal W}_f = \overline{\operatorname{pr}}_1^{-1}(S_f) \hookrightarrow {\cal W}_o \hookrightarrow {\cal W}$), and it is a semispray of type $k$ in ${\cal W}$ (resp., of type $1$). 
\end{teor} \begin{proof} Applying Lemmas \ref{lemma:Cap06_LagForm}, \ref{lemma:Cap06_LagEnergy}, and \ref{lemma:Cap06_LagVectField}, we have: $$ 0 = \restric{\left[\mathop{i}\nolimits(X)\Omega - {\rm d} H\right]}{{\cal W}_o}= \restric{\left[\mathop{i}\nolimits(X)\operatorname{pr}_1^*\omega_{\cal L} - {\rm d}\operatorname{pr}_1^*E_{\cal L}\right]}{{\cal W}_o} = \operatorname{pr}_1^*\restric{\left[\mathop{i}\nolimits(X_{\cal L})\omega_{\cal L} - {\rm d} E_{\cal L}\right]}{{\cal W}_o} \ , $$ but, as $\operatorname{pr}_1$ is a surjective submersion, this is equivalent to $$ 0=\restric{\left[\mathop{i}\nolimits(X_{\cal L})\omega_{\cal L} - {\rm d} E_{\cal L}\right]}{\operatorname{pr}_1({\cal W}_o)} = \restric{\left[\mathop{i}\nolimits(X_{\cal L})\omega_{\cal L} - {\rm d} E_{\cal L}\right]}{{\rm T}^{2k-1}Q} = 0 \ , $$ since $\operatorname{pr}_1({\cal W}_o) = {\rm T}^{2k-1}Q$. The converse is immediate, reversing this reasoning. In order to prove that $X_{\cal L}$ is a semispray of type $k$, we proceed in coordinates. From the local expression (\ref{Xcoor}) for the vector field $X$ (where the functions $F_i^A$ are the solutions of the equations (\ref{eqn:Cap06_TanVectFieldX})), and using Lemma \ref{lemma:Cap06_LagVectField}, we obtain that the local expression of $X_{\cal L} \in {\mathfrak{X}}({\rm T}^{2k-1}Q)$ is $$ X_{\cal L} = \sum_{i=0}^{k-1}q_{i+1}^A\derpar{}{q_i^A} + \sum_{i=k}^{2k-1}F_i^A\derpar{}{q_i^A}\ , $$ and then $$ J_k(X_{\cal L}) = \sum_{i=0}^{k-1}\frac{(k+i)!}{i!}q_{i+1}^A\derpar{}{q_{k+i}^A} = \Delta_k \ ; $$ so $X_{\cal L}$ is a semispray of type $k$ in ${\rm T}^{2k-1}Q$. Finally, if ${\cal L} \in {\rm C}^\infty({\rm T}^kQ)$ is a regular Lagrangian, equations (\ref{eqn:Cap06_TanVectFieldX}) become (\ref{eqn:Cap06_LagEqReg}), and hence the local expression of $X$ is $$ X = \sum_{i=0}^{2k-2}q_{i+1}^A\derpar{}{q_i^A} + F_{2k-1}^A\derpar{}{q_{2k-1}^A} + \derpar{{\cal L}}{q_0^A}\derpar{}{p^0_A} + \sum_{i=1}^{k-1}d_T(p^i_A)\derpar{}{p^i_A} \ . 
$$ Therefore $$ X_{\cal L} = \sum_{i=0}^{2k-2}q_{i+1}^A\derpar{}{q_i^A} + F_{2k-1}^A\derpar{}{q_{2k-1}^A} \ , $$ and then $\displaystyle J_1(X_{\cal L}) = \sum_{i=0}^{2k-2}(i+1)q_{i+1}^A\derpar{}{q_{i+1}^A} = \Delta_1$, which shows that $X_{\cal L}$ is a semispray of type $1$ in ${\rm T}^{2k-1}Q$. \end{proof} {\bf Remarks}: \begin{itemize} \item It is important to point out that, if ${\cal L}$ is not a regular Lagrangian, then $X$ is a semispray of type $k$ in ${\cal W}$, but not necessarily a semispray of type $1$. This means that $X_{\cal L}$ is a Lagrangian vector field, but it is not necessarily an Euler-Lagrange vector field (it is not a semispray of type $1$, but just a semispray of type $k$). Thus, for singular Lagrangians, this must be imposed as an additional condition in order that the integral curves of $X_{\cal L}$ verify the Euler-Lagrange equations. This is different from the case of first-order dynamical systems ($k=1$), where this condition ($X_{\cal L}$ is a semispray of type $1$; that is, a holonomic vector field) is obtained straightforwardly in the unified formalism. In general, only in the most favourable cases have we assured the existence of a submanifold ${\cal W}_f \hookrightarrow {\cal W}_o$ and vector fields $X \in {\mathfrak{X}}_{{\cal W}_o}({\cal W})$ tangent to ${\cal W}_f$ which are solutions to the equation (\ref{eqn:Cap06_EqDinSupWf}). Then, considering the submanifold $S_f=\operatorname{pr}_1({\cal W}_f)\hookrightarrow {\rm T}^{2k-1}Q$, in the best cases (see \cite{art:Batlle_Gomis_Pons_Roman88, art:Gracia_Pons_Roman91,art:Gracia_Pons_Roman92}), we have that those Euler-Lagrange vector fields $X_{\cal L}$ exist, perhaps on another submanifold $M_f\hookrightarrow S_f$ to which they are tangent, and are solutions to the equation \begin{equation} \restric{\left[i_{X_{\cal L}}\omega_{\cal L} - {\rm d} E_{\cal L}\right]}{M_f} = 0 \ .
\label{finaleq} \end{equation} \item Observe also that Theorem \ref{thm:Cap06_CorrX-XL} states that there is a one-to-one correspondence between vector fields $X \in {\mathfrak{X}}_{{\cal W}_o}({\cal W})$ which are solutions to equation (\ref{eqn:Cap06_EqDinSupW0}) and $X_{\cal L} \in {\mathfrak{X}}({\rm T}^{2k-1}Q)$ solutions to (\ref{eqn:Cap06_EqLag}), but it does not guarantee their uniqueness, unless ${\cal L}$ is regular. In fact: \end{itemize} \begin{corol} \label{corol:Cap06_RegLag} If ${\cal L} \in {\rm C}^\infty({\rm T}^kQ)$ is a regular Lagrangian, then there is a unique $X \in {\mathfrak{X}}_{{\cal W}_o}({\cal W})$ tangent to ${\cal W}_o$ which is a solution to equation (\ref{eqn:Cap06_EqDinSupW0}), and it is a semispray of type $1$. \end{corol} \begin{proof} As ${\cal L}$ is regular, by Proposition \ref{prop:Cap03_LagVectFieldRegLag} there is a unique semispray of type $1$, $X_{\cal L} \in {\mathfrak{X}}({\rm T}^{2k-1} Q)$ which is a solution to equation (\ref{eqn:Cap06_EqLag}) on ${\rm T}^{2k-1}Q$. Then, by Theorem \ref{thm:Cap06_CorrX-XL}, there is a unique $X \in {\mathfrak{X}}_{{\cal W}_o}({\cal W})$, tangent to ${\cal W}_o$, which is a solution to (\ref{eqn:Cap06_EqDinSupW0}) on ${\cal W}_o$. \end{proof} \subsubsection{Dynamics in ${\rm T}^*({\rm T}^{k-1}Q)$} \paragraph{Hyperregular and regular Lagrangians}\ In order to recover the Hamiltonian formalism, we distinguish between the regular and non-regular cases. We start with the regular case, although for simplicity we analyze the hyperregular case (the regular case is recovered from this by restriction to the corresponding open sets where the Legendre-Ostrogradsky map is a local diffeomorphism).
For this case we have the following commutative diagram $$ \xymatrix{ \ & {\rm T}{\cal W} \ar@/_/[ddl]_-{{\rm T}\operatorname{pr}_1} \ar@/^/[ddr]^-{{\rm T}\operatorname{pr}_2} & \ \\ \ & {\rm T}{\cal W}_o \ar[dl]_-{{\rm T}\overline{\operatorname{pr}}_1}|<<<<{\hole} \ar[dr]^-{{\rm T}\overline{\operatorname{pr}}_2} \ar@{^{(}->}[u]_-{{\rm T} j_o} & \ \\ {\rm T}({\rm T}^{2k-1} Q) & \ & {\rm T}({\rm T}^*({\rm T}^{k-1}Q)) \\ \ & {\cal W} \ar[ddl]_-{\operatorname{pr}_1} \ar[ddr]^-{\operatorname{pr}_2}|<<<<{\hole} \ar@/^1.95pc/[uuu]^(.35){X} & \ \\ \ & {\cal W}_o = {\rm graph}\,{{\cal FL}} \ar[dl]^-{\overline{\operatorname{pr}}_1} \ar[dr]_-{\overline{\operatorname{pr}}_2} \ar@{^{(}->}[u]^-{j_o} \ar@/_1.75pc/[uuu]_-{X_o} & \ \\ {\rm T}^{2k-1}Q \ar[dr]_-{\rho^{2k-1}_{k-1}} \ar[rr]^-{{\cal FL}} \ar[uuu]^-{X_{\cal L}} \ar@/_1.3pc/[ddr]_-{\beta^{2k-1}} & \ & {\rm T}^*({\rm T}^{k-1}Q) \ar[dl]^{\pi_{{\rm T}^{k-1}Q}} \ar[uuu]_-{X_h} \\ \ & {\rm T}^{k-1}Q \ar[d]_-{\beta^{k-1}} & \ \\ \ & Q & \ \\ } $$ \begin{teor} \label{thm:Cap06_CorrX-Xh} Let ${\cal L} \in {\rm C}^\infty({\rm T}^kQ)$ be a hyperregular Lagrangian, $h \in {\rm C}^\infty({\rm T}^*({\rm T}^{k-1}Q))$ the Hamiltonian function such that ${\cal FL}^*h = E_{\cal L}$, and $X \in {\mathfrak{X}}_{{\cal W}_o}({\cal W})$ the vector field solution to the equation (\ref{eqn:Cap06_EqDinSupW0}), tangent to ${\cal W}_o$. Then, there exists a unique vector field $X_h = {\cal FL}_*X_{\cal L} \in {\mathfrak{X}}({\rm T}^*({\rm T}^{k-1}Q))$ which is a solution to the equation \begin{equation} \label{eqn:Cap06_EqHam} \mathop{i}\nolimits(X_h)\omega_{k-1} - {\rm d} h = 0 \end{equation} Conversely, if $X_h \in {\mathfrak{X}}({\rm T}^*({\rm T}^{k-1}Q))$ is a solution to equation (\ref{eqn:Cap06_EqHam}), then there exists a unique vector field $X \in {\mathfrak{X}}_{{\cal W}_o}({\cal W})$, tangent to ${\cal W}_o$, which is a solution to equation (\ref{eqn:Cap06_EqDinSupW0}). 
\end{teor} \begin{proof} If ${\cal L}$ is hyperregular, then $\overline{\operatorname{pr}}_2 = {\cal FL} \circ \overline{\operatorname{pr}}_1$ is a diffeomorphism, since it is a composition of diffeomorphisms; then there exists a unique vector field $X_o \in {\mathfrak{X}}({\cal W}_o)$ such that ${\overline{\operatorname{pr}}_2}_*X_o = X_h$, and there is a unique $X \in {\mathfrak{X}}_{{\cal W}_o}({\cal W})$ such that ${j_o}_*X_o = \restric{X}{{\cal W}_o}$. Now, as ${\cal FL}^*h = E_{\cal L}$, by applying Lemma \ref{lemma:Cap06_LagEnergy} we have that $\operatorname{pr}_1^*({\cal FL}^*(h)) = \operatorname{pr}_1^*E_{\cal L} = H$; but ${\cal FL} \circ \operatorname{pr}_1 = \operatorname{pr}_2$, and then $\operatorname{pr}_2^*h = H$. Therefore, by the definition of $\Omega$, we have $$ 0 =\restric{\left[\mathop{i}\nolimits(X)\Omega - {\rm d} H\right]}{{\cal W}_o} = \restric{\left[\mathop{i}\nolimits(X)\operatorname{pr}_2^*\omega_{k-1} - {\rm d} \operatorname{pr}_2^*h\right]}{{\cal W}_o} = \operatorname{pr}_2^*\restric{\left[\mathop{i}\nolimits(X_h)\omega_{k-1} - {\rm d} h\right]}{{\cal W}_o} \ . $$ However, as $\operatorname{pr}_2$ is a surjective submersion and $\operatorname{pr}_2({\cal W}_o) = {\rm T}^*({\rm T}^{k-1}Q)$, we finally obtain that $$ 0 = \restric{\left[\mathop{i}\nolimits(X_h)\omega_{k-1} - {\rm d} h\right]}{\operatorname{pr}_2({\cal W}_o)}= \restric{\left[\mathop{i}\nolimits(X_h)\omega_{k-1} - {\rm d} h\right]}{{\rm T}^*({\rm T}^{k-1}Q)} \ . $$ \end{proof} \paragraph{Singular (almost-regular) Lagrangians} \ Remember that, for almost-regular Lagrangians, only in the most favourable cases have we assured the existence of a submanifold ${\cal W}_f \hookrightarrow {\cal W}_o$ and vector fields $X \in {\mathfrak{X}}_{{\cal W}_o}({\cal W})$ tangent to ${\cal W}_f$ which are solutions to equation (\ref{eqn:Cap06_EqDinSupWf}).
In this case, the dynamical vector fields in the Hamiltonian formalism cannot be obtained straightforwardly from the solutions in the unified formalism, but rather by passing through the Lagrangian formalism and using the Legendre-Ostrogradsky map. Thus, we can consider the submanifolds $S_f=\operatorname{pr}_1({\cal W}_f)\hookrightarrow {\rm T}^{2k-1}Q$ and $P_f=\operatorname{pr}_2({\cal W}_f)= {\cal FL}(S_f)\hookrightarrow {\rm T}^*({\rm T}^{k-1}Q)$. Then, using Theorem \ref{thm:Cap06_CorrX-XL}, from the vector fields $X \in {\mathfrak{X}}_{{\cal W}_o}({\cal W})$ we obtain the corresponding $X_{\cal L} \in {\mathfrak{X}}({\rm T}^{2k-1}Q)$, and from these the semisprays of type $1$ (if they exist) which are perhaps defined on a submanifold $M_f\hookrightarrow S_f$, are tangent to $M_f$ and are solutions to equation (\ref{finaleq}). So we have the following commutative diagram $$ \xymatrix{ \ & \ & {\cal W} \ar@/_1.25pc/[dddl]_-{\operatorname{pr}_1} \ar@/^1.25pc/[dddr]^-{\operatorname{pr}_2} & \ \\ \ & \ & \ & \ \\ \ & \ & {\cal W}_P = {\rm T}^{2k-1}Q \times_{{\rm T}^{k-1}Q} P_o \ar@{^{(}->}[uu]^-{j_{{\cal W}_P}} \ar[dl]_-{\operatorname{pr}_{1,{\cal W}_P}} \ar[dr]^-{\operatorname{pr}_{2,{\cal W}_P}} \ar[ddr]_-{\operatorname{pr}_{2,P_o}} & \ \\ \ & {\rm T}^{2k-1}Q & \ & {\rm T}^*({\rm T}^{k-1}Q) \\ \ & \ & {\cal W}_o = {\rm graph}\,({\cal FL}_o) \ar@{^{(}->}[uu]^-{j_o} \ar[ul]_-{\overline{\operatorname{pr}}_{1,P_o}} \ar[r]^-{\overline{\operatorname{pr}}_{2,P_o}} & P_o \ar@{^{(}->}[u]^{j_{P_o}} \\ \ & \ & {\cal W}_f \ar@{^{(}->}[u]^-{j_{{\cal W}_f}} \ar[dl] \ar[dr] & \ \\ M_f \ar@{^{(}->}[r] & S_f \ar@{^{(}->}[uuu]^-{j_{S_f}} & \ & P_f \ar@{^{(}->}@/_1.25pc/[uuu]_-{j_{P_f}} } $$ Now, it is proved (\cite{art:Gracia_Pons_Roman92}) that there are Euler-Lagrange vector fields (perhaps only on the points of another submanifold $\bar M_f\hookrightarrow M_f$), which are ${\cal FL}$-projectable on $P_f = {\cal FL}(S_f) \hookrightarrow P_o \hookrightarrow {\rm T}^*({\rm
T}^{k-1}Q)$. These vector fields $X_{h_o}={\cal FL}_*X_{{\cal L}} \in {\mathfrak{X}}({\rm T}^*({\rm T}^{k-1}Q))$ are tangent to $P_f$ and are solutions to the equation $$ \restric{\left[ i_{X_{h_o}}\omega_o - {\rm d} h_o \right]}{P_f} = 0 \ . $$ Conversely, as ${\cal FL}_o$ is a submersion, for every solution $X_{h_o} \in {\mathfrak{X}}({\rm T}^*({\rm T}^{k-1}Q))$ to the last equation, there is a semispray of type $1$, $X_{\cal L} \in {\mathfrak{X}}({\rm T}^{2k-1}Q)$, such that ${{\cal FL}_o}_*X_{\cal L} = X_{h_o}$, and we can recover solutions to equation (\ref{eqn:Cap06_EqDinSupWf}) using Theorem \ref{thm:Cap06_CorrX-XL}. \subsection{Integral curves} After studying the vector fields which are solutions to the dynamical equations, we analyze their integral curves, showing how to recover the Lagrangian and Hamiltonian dynamical trajectories from the dynamical trajectories in the unified formalism. Let $X \in {\mathfrak{X}}_{{\cal W}_o}({\cal W})$ be a vector field tangent to ${\cal W}_o$ which is a solution to equation (\ref{eqn:Cap06_EqDinSupW0}), and let $\sigma \colon I \subset \mathbb{R} \to {\cal W}$ be an integral curve of $X$, on ${\cal W}_o$. As $\tilde{\sigma} = X \circ \sigma$, this means that the following equation holds \begin{equation}\label{eqn:Cap06_EqDinImpCI} \mathop{i}\nolimits({\tilde{\sigma}})(\Omega \circ \sigma) = {\rm d} H \circ \sigma \ . \end{equation} Furthermore, if $\sigma_o \colon I \to {\cal W}_o$ is a curve on ${\cal W}_o$ such that $j_o \circ \sigma_o = \sigma$, we have that $\sigma_o$ is an integral curve of the vector field $X_o\in{\mathfrak{X}}({\cal W}_o)$ associated to $X$, and $\tilde{\sigma}_o = X_o \circ \sigma_o$. 
In local coordinates, if $\sigma(t) = (q_i^A(t),p^j_A(t))$, we have that \begin{align*} \dot{q}_i^A(t) = q_{i+1}^A\circ\sigma & \quad (0 \leqslant i \leqslant k-1) \quad ; \quad& \dot{q}_i^A(t) = F_i^A\circ\sigma \quad (k \leqslant i \leqslant 2k-1) \\ \dot{p}^0_A(t) = \derpar{{\cal L}}{q_0^A}\circ\sigma & \qquad \qquad\qquad\qquad\ ; \quad & \dot{p}^i_A(t) = d_T(p^i_A)\circ\sigma \quad (1 \leqslant i \leqslant k-1) \ , \end{align*} where $F_i^A$ are solutions to equations (\ref{eqn:Cap06_TanVectFieldX}). Now, for the Lagrangian dynamical trajectories we have the following result: \begin{prop} \label{prop:Cap06_XLCI} Let $\sigma \colon I \subset\mathbb{R} \to {\cal W}$ be an integral curve of a vector field $X$ solution to (\ref{eqn:Cap06_EqDinSupW0}), on ${\cal W}_o$. Then the curve $\sigma_{\cal L} = \operatorname{pr}_1 \circ \sigma \colon I \to {\rm T}^{2k-1}Q$ is an integral curve of $X_{\cal L}$. \end{prop} \begin{proof} As $\sigma = j_o \circ \sigma_o$, using that ${\rm T} j_o \circ X_o = X \circ j_o$ and that ${\rm T}\operatorname{pr}_1 \circ X = X_{\cal L} \circ \operatorname{pr}_1$, we have \begin{eqnarray*} \tilde{\sigma}_{\cal L} &=& \widetilde{\operatorname{pr}_1 \circ \sigma} = \widetilde{\operatorname{pr}_1 \circ j_o \circ \sigma_o}= {\rm T}\operatorname{pr}_1 \circ {\rm T} j_o \circ \tilde{\sigma}_o = {\rm T}\operatorname{pr}_1 \circ {\rm T} j_o \circ X_o \circ \sigma_o \\ &=& {\rm T} \operatorname{pr}_1 \circ X \circ j_o \circ \sigma_o = X_{\cal L} \circ \operatorname{pr}_1 \circ j_o \circ \sigma_o = X_{\cal L} \circ \sigma_{\cal L} \ . \end{eqnarray*} \end{proof} \begin{corol} If ${\cal L} \in {\rm C}^\infty({\rm T}^kQ)$ is a regular Lagrangian, then the curve $\sigma_{\cal L} = \operatorname{pr}_1 \circ \sigma \colon I \to {\rm T}^{2k-1}Q$ is the canonical lifting of a curve on $Q$; that is, there exists $\gamma \colon I \subset \mathbb{R} \to Q$ such that $\sigma_{\cal L} = \tilde{\gamma}^{2k-1}$. 
\end{corol} \begin{proof} It is a straightforward consequence of Proposition \ref{prop:Cap06_XLCI} and Theorem \ref{thm:Cap06_CorrX-XL}. \end{proof} And for the Hamiltonian trajectories, we have: \begin{prop} \label{prop:Cap06_XhCI} Let $\sigma \colon I \subset\mathbb{R} \to {\cal W}$ be an integral curve of a vector field $X$ solution to (\ref{eqn:Cap06_EqDinSupW0}), on ${\cal W}_o$. Then the curve $\sigma_h = {\cal FL} \circ \sigma_{\cal L} \colon I \to {\rm T}^*({\rm T}^{k-1}Q)$ is an integral curve of $X_h = {\cal FL}_*(X_{\cal L})$. \end{prop} \begin{proof} Since $\sigma_{\cal L}$ is an integral curve of $X_{\cal L}$ (Proposition \ref{prop:Cap06_XLCI}), using the definitions of $X_h$ and $\sigma_h$ we obtain $$ \tilde{\sigma}_h = \widetilde{{\cal FL} \circ \sigma_{\cal L}} = {\rm T}{\cal FL} \circ \tilde{\sigma}_{\cal L} = {\rm T}{\cal FL} \circ X_{\cal L} \circ \sigma_{\cal L} = X_h \circ {\cal FL} \circ \sigma_{\cal L} = X_h \circ \sigma_h \ . $$ Thus $\sigma_h$ is an integral curve of $X_h$. \end{proof} The relation among all these integral curves is summarized in the following diagram $$ \xymatrix{ \ & \ & {\cal W} \ar[ddll]_-{\operatorname{pr}_1} \ar[ddrr]^-{\operatorname{pr}_2} & \ & \ \\ \ & \ & {\cal W}_o \ar[dll]_(.45){\overline{\operatorname{pr}}_1}|(.19){\hole} \ar[drr]^-{\overline{\operatorname{pr}}_2} \ar@{^{(}->}[u]_-{j_o} & \ & \ \\ {\rm T}^{2k-1}Q \ar[ddrr]_-{\rho^{2k-1}_{k-1}} \ar@/_1.3pc/[dddrr]_-{\beta^{2k-1}} \ar[rrrr]^-{{\cal FL}}|(.39){\hole}|(.56){\hole} & \ & \ & \ & {\rm T}^*({\rm T}^{k-1}Q) \ar[ddll]^-{\pi_{{\rm T}^{k-1}Q}} \\ \ & \ & \mathbb{R} \ar@/^1.9pc/[dd]^-{\gamma}|(.31){\hole} \ar@/_1.3pc/[uu]_(.65){\sigma_o} \ar@/^1.65pc/[uuu]^(.45){\sigma} \ar[ull]_-{\sigma_{\cal L}} \ar[urr]^-{\sigma_h} & \ & \ \\ \ & \ & {\rm T}^{k-1}Q \ar[d]_-{\beta^{k-1}} & \ & \ \\ \ & \ & Q & \ & \ } $$ {\bf Remark}: Observe that in Propositions \ref{prop:Cap06_XLCI} and \ref{prop:Cap06_XhCI} we make no assumption on the regularity of the system.
The only considerations in the almost-regular case are that, in general, the curves are defined in some submanifolds which are determined by the constraint algorithm, and that $\sigma_{\cal L}$ is not necessarily the lifting of any curve in $Q$, so this condition must be imposed. In particular: \begin{itemize} \item If the Lagrangian is regular (or hyperregular), then ${\rm Im}\,(\sigma) \subset {\cal W}_o$, ${\rm Im}\,(\sigma_{\cal L}) \subset {\rm T}^{2k-1} Q$ and ${\rm Im}\, (\sigma_h) \subset {\rm T}^*({\rm T}^{k-1}Q)$. \item If the Lagrangian is almost-regular, then ${\rm Im}\,(\sigma) \subset {\cal W}_f \hookrightarrow {\cal W}_o$, ${\rm Im}\,(\sigma_{\cal L}) \subset S_f \hookrightarrow {\rm T}^{2k-1} Q$ and ${\rm Im}\,(\sigma_h) \subset P_f \hookrightarrow P_o \hookrightarrow {\rm T}^*({\rm T}^{k-1}Q)$. \end{itemize} \section{Examples} \label{section:examples} \subsection{The Pais-Uhlenbeck oscillator} The Pais-Uhlenbeck oscillator is one of the simplest (regular) systems that can be used to explore the features of higher-order dynamical systems, and has been analyzed in detail in many publications \cite{art:Pais_Uhlenbeck50,art:Martinez_Montemayor_Urrutia11}. Here we study it using the unified formalism. The configuration space for this system is a $1$-dimensional smooth manifold $Q$ with local coordinate $(q_0)$. Taking natural coordinates in the higher-order tangent bundles over $Q$, the second-order Lagrangian function ${\cal L} \in {\rm C}^\infty({\rm T}^2Q)$ for this system is locally given by $$ {\cal L}(q_0,q_1,q_2) = \frac{1}{2} \left( q_1^2 - \omega^2q_0^2 - \gamma q_2^2 \right) $$ where $\gamma$ is some nonzero real constant, and $\omega$ is a real constant. ${\cal L}$ is a regular Lagrangian function, since the Hessian matrix of ${\cal L}$ with respect to $q_2$ is $$ \left( \derpars{{\cal L}}{q_2}{q_2} \right) = -\gamma \ , $$ which has maximum rank, since we assume that $\gamma$ is nonzero.
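Since ${\cal L}$ is regular, the Legendre-Ostrogradsky map ${\cal FL} \colon {\rm T}^3Q \to {\rm T}^*({\rm T} Q)$ is a local diffeomorphism; for later comparison it is worth writing down its momenta explicitly, which follow by a direct computation from the momentum relations recalled above:
$$
\hat p^1 = \derpar{{\cal L}}{q_2} = -\gamma q_2
\quad , \quad
\hat p^0 = \derpar{{\cal L}}{q_1} - d_T\left(\derpar{{\cal L}}{q_2}\right) = q_1 + \gamma q_3 \ .
$$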
Notice that, if we take $\gamma = 0$, then ${\cal L}$ becomes a first-order regular Lagrangian function, and thus it makes no sense to study this system using the higher-order unified formalism. As this is a second-order dynamical system, the phase space that we consider is $$ \xymatrix{ \ & {\cal W} = {\rm T}^3Q \times_{{\rm T} Q} {\rm T}^*({\rm T} Q) \ar[dl]_-{\operatorname{pr}_1} \ar[dr]^-{\operatorname{pr}_2} & \ \\ {\rm T}^3Q \ar[dr]_-{\rho^{3}_{1}} & \ & {\rm T}^*({\rm T} Q) \ar[dl]^-{\pi_{{\rm T} Q}} \\ \ & {\rm T} Q & \ . } $$ Denoting the canonical symplectic form by $\omega_1 \in {\mit\Omega}^2({\rm T}^*({\rm T} Q))$, we define the presymplectic form $\Omega = \operatorname{pr}_2^*\omega_1 \in {\mit\Omega}^2({\cal W})$ with the local expression $$ \Omega = {\rm d} q_0 \wedge {\rm d} p^0 + {\rm d} q_1 \wedge {\rm d} p^1 \ . $$ The Hamiltonian function $H \in {\rm C}^\infty({\cal W})$ in the unified formalism is $H = {\cal C} - (\rho^3_2 \circ \operatorname{pr}_1)^*{\cal L}$, where ${\cal C}$ is the coupling function, whose local expression is $$ {\cal C}(q_0,q_1,q_2,q_3,p^0,p^1) = p^0q_1 + p^1q_2 \ , $$ and then the Hamiltonian function can be written locally as \begin{equation*} H(q_0,q_1,q_2,q_3,p^0,p^1) = p^0q_1 + p^1q_2 - \frac{1}{2} \left( q_1^2 - \omega^2q_0^2 - \gamma q_2^2 \right) \ . \end{equation*} As stated in the above sections, we can describe the dynamics for this system in terms of the integral curves of vector fields $X \in {\mathfrak{X}}({\cal W})$ which are solutions to equation (\ref{eqn:Cap06_EqDinImp}).
If we take a generic vector field $X$ in ${\cal W}$, given locally by $$ X = f_0 \derpar{}{q_0} + f_1 \derpar{}{q_1} + F_2 \derpar{}{q_2} + F_3 \derpar{}{q_3} + G^0\derpar{}{p^0} + G^1\derpar{}{p^1}, $$ taking into account that $$ {\rm d} H = \omega^2q_0{\rm d} q_0 + (p^0-q_1){\rm d} q_1 + (p^1 + \gamma q_2){\rm d} q_2 + q_1 {\rm d} p^0 + q_2 {\rm d} p^1 \ , $$ from the dynamical equation $\mathop{i}\nolimits(X)\Omega = {\rm d} H$, we obtain the following system of linear equations for the coefficients of $X$ \begin{align} & f_0 = q_1 \label{eqn:Example1_Semispray2_1} \\ & f_1 = q_2 \label{eqn:Example1_Semispray2_2} \\ & G^0 = - \omega^2 q_0 \label{eqn:Example1_VectorFieldG0} \\ & G^1 = q_1 - p^0 \label{eqn:Example1_VectorFieldG1} \\ & p^1 + \gamma q_2 = 0 \label{eqn:Example1_LegTransformation} \end{align} Equations (\ref{eqn:Example1_Semispray2_1}) and (\ref{eqn:Example1_Semispray2_2}) give us the condition of semispray of type $2$ for the vector field $X$. Furthermore, equation (\ref{eqn:Example1_LegTransformation}) is an algebraic relation stating that the vector field $X$ is defined along a submanifold ${\cal W}_o$ that can be identified with the graph of the Legendre-Ostrogradsky map, as we have seen in Propositions \ref{prop:Cap06_ExistSolEqDin} and \ref{prop:Cap06_W0GrafFL}. Thus, using (\ref{eqn:Example1_Semispray2_1}), (\ref{eqn:Example1_Semispray2_2}), (\ref{eqn:Example1_VectorFieldG0}) and (\ref{eqn:Example1_VectorFieldG1}), the vector field $X$ is given locally by \begin{equation} \label{eqn:Example1_VectorFieldX} X = q_1 \derpar{}{q_0} + q_2 \derpar{}{q_1} + F_2 \derpar{}{q_2} + F_3 \derpar{}{q_3} - \omega^2q_0\derpar{}{p^0} + \left(q_1 - p^0\right) \derpar{}{p^1} \ . \end{equation} As our goal is to recover the Lagrangian and Hamiltonian solutions from the vector field $X$, we must require $X$ to be a semispray of type $1$.
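Before imposing the tangency condition, it is worth recording the classical equation of motion hidden in these coefficient equations. The following is a worked check added here (not part of the original derivation), written in the notation of the text:

```latex
% Second-order Euler-Lagrange operator applied to the Pais-Uhlenbeck Lagrangian,
% evaluated along a holonomic curve (q_1 = \dot{q}_0, q_2 = \ddot{q}_0, ...):
\derpar{{\cal L}}{q_0} - \frac{d}{dt}\left(\derpar{{\cal L}}{q_1}\right)
  + \frac{d^2}{dt^2}\left(\derpar{{\cal L}}{q_2}\right)
  = -\omega^2 q_0 - \dot{q}_1 - \gamma\,\ddot{q}_2
  = -\left( \omega^2 q_0 + q_2 + \gamma\,\dot{q}_3 \right) = 0 \ ,
```

that is, the single fourth-order equation $\gamma q_0^{(4)} + \ddot{q}_0 + \omega^2 q_0 = 0$. This is exactly the relation that the tangency condition recovers below in (\ref{eqn:Example1_EulerLagrangeVectFieldEq}), with $F_3$ playing the role of $\dot{q}_3$.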
Nevertheless, as ${\cal L}$ is a regular Lagrangian function, this condition is naturally deduced from the formalism, as we have seen in (\ref{eqn:Cap06_TanVectFieldX}). Notice that the functions $F_2$ and $F_3$ in (\ref{eqn:Example1_VectorFieldX}) are not determined until the tangency of the vector field $X$ on ${\cal W}_o$ is required. Recall that the Legendre-Ostrogradsky transformation is the map ${\cal FL} \colon {\rm T}^3Q \longrightarrow {\rm T}^*({\rm T} Q)$ given in local coordinates by \begin{eqnarray*} {\cal FL}^*p^0 &=& \derpar{{\cal L}}{q_1} - d_T\left(\derpar{{\cal L}}{q_2}\right) \equiv \derpar{{\cal L}}{q_1} - d_T\left( p^1 \right) = q_1 + \gamma q_3 \\ {\cal FL}^*p^1 &=& \derpar{{\cal L}}{q_2} = - \gamma q_2 \end{eqnarray*} and, as $\gamma \neq 0$, ${\cal FL}$ is a (local) diffeomorphism, which confirms that ${\cal L}$ is a regular Lagrangian. Then, the submanifold ${\cal W}_o = {\rm graph}\,{\cal FL}$ is defined by $$ {\cal W}_o = \left\{ p \in {\cal W} \colon \xi_0(p) = \xi_1(p) = 0 \right\} \ , $$ where $\xi_r = p^r - {\cal FL}^*p^r$, $r=0,1$. The diagram for this situation is $$ \xymatrix{ \ & {\cal W} \ar@/_1.25pc/[dddl]_-{\operatorname{pr}_1} \ar@/^1.25pc/[dddr]^-{\operatorname{pr}_2} & \ \\ \ & \ & \ \\ \ & {\cal W}_o = {\rm graph}\,{\cal FL} \ar@{^{(}->}[uu]^-{j_o} \ar[dl]_- {\overline{\operatorname{pr}}_{1}} \ar[dr]^-{\overline{\operatorname{pr}}_{2}} & \ \\ {\rm T}^3Q \ar@{-->}[rr]^-{{\cal FL}} & \ & {\rm T}^*({\rm T} Q) \ . } $$ Next we compute the tangency condition for $X \in {\mathfrak{X}}({\cal W})$ given locally by (\ref{eqn:Example1_VectorFieldX}) on the submanifold ${\cal W}_o \hookrightarrow {\cal W}$, by checking if the following identities hold $$ \mathop{\rm L}\nolimits(X)\xi_0\vert_{{\cal W}_o} = 0 \quad , \quad \mathop{\rm L}\nolimits(X)\xi_1\vert_{{\cal W}_o} = 0 \ . 
$$ As we have seen in Section \ref{section:DynamicsW}, these equations give us the Lagrangian equations for the vector field $X$; that is, on the points of ${\cal W}_o$ we obtain \begin{align} \mathop{\rm L}\nolimits(X)\xi_0 = - \omega^2 q_0 - q_2 - \gamma F_3 = 0 \label{eqn:Example1_EulerLagrangeVectFieldEq} \\ \mathop{\rm L}\nolimits(X)\xi_1 = \gamma\left(F_2 - q_3\right) = 0 \ . \label{eqn:Example1_Semispray1} \end{align} Equation (\ref{eqn:Example1_Semispray1}) gives us the condition of semispray of type $1$ for the vector field $X$ (recall that $\gamma \neq 0$), and equation (\ref{eqn:Example1_EulerLagrangeVectFieldEq}) is the Euler-Lagrange equation for the vector field $X$. Notice that, as $\gamma$ is nonzero, these equations give us a unique solution for $F_2$ and $F_3$. Thus, there is a unique vector field $X \in {\mathfrak{X}}({\cal W})$ solution to the equation $\restric{\left[ \mathop{i}\nolimits(X)\Omega - {\rm d} H \right]}{{\cal W}_o} = 0$ which is tangent to the submanifold ${\cal W}_o \hookrightarrow {\cal W}$, and it is given locally by $$ X = q_1 \derpar{}{q_0} + q_2 \derpar{}{q_1} + q_3 \derpar{}{q_2} - \frac{1}{\gamma}\left(\omega^2q_0 + q_2\right) \derpar{}{q_3} - \omega^2q_0\derpar{}{p^0} + \left(q_1 - p^0\right) \derpar{}{p^1} \ . 
$$ Then, if $\sigma \colon \mathbb{R} \to {\cal W}$ is an integral curve of $X$ locally given by \begin{equation} \label{eqn:Example1_LocalCoordSigma} \sigma(t) = \left(q_0(t),q_1(t),q_2(t),q_3(t),p^0(t),p^1(t)\right) \ , \end{equation} then its component functions are solutions to the system \begin{align} & \dot{q}_0(t) = q_1(t); \label{eqn:Example1_IntCurveX_Semispray1_1} \\ & \dot{q}_1(t) = q_2(t); \label{eqn:Example1_IntCurveX_Semispray1_2} \\ & \dot{q}_2(t) = q_3(t); \label{eqn:Example1_IntCurveX_Semispray1_3} \\ & \dot{q}_3(t) = -\frac{1}{\gamma}\left(\omega^2q_0(t) + q_2(t)\right); \label{eqn:Example1_IntCurveX_EulerLagrange} \\ & \dot{p}^0(t) = - \omega^2q_0(t); \label{eqn:Example1_IntCurveX_Hamiltonian1} \\ & \dot{p}^1(t) = q_1(t) - p^0(t). \label{eqn:Example1_IntCurveX_Hamiltonian2} \end{align} Finally, we recover the Lagrangian and Hamiltonian solutions for this system. For the Lagrangian solutions, as we have shown in Lemma \ref{lemma:Cap06_LagVectField} and Theorem \ref{thm:Cap06_CorrX-XL}, the Euler-Lagrange vector field is the unique semispray of type $1$, $X_{\cal L} \in {\mathfrak{X}}({\rm T}^3Q)$, such that $X_{\cal L} \circ \operatorname{pr}_1 \circ j_o = {\rm T}\operatorname{pr}_1 \circ X \circ j_o$. Thus this vector field $X_{\cal L}$ is locally given by $$ X_{\cal L} = q_1 \derpar{}{q_0} + q_2 \derpar{}{q_1} + q_3 \derpar{}{q_2} - \frac{1}{\gamma}\left(\omega^2q_0 + q_2\right) \derpar{}{q_3} \ . $$ For the integral curves of $X_{\cal L}$ we know from Proposition \ref{prop:Cap06_XLCI} that if $\sigma \colon \mathbb{R} \to {\cal W}$ is an integral curve of $X$, then $\sigma_{\cal L} = \operatorname{pr}_1 \circ \sigma$ is an integral curve of $X_{\cal L}$. 
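The system (\ref{eqn:Example1_IntCurveX_Semispray1_1})--(\ref{eqn:Example1_IntCurveX_Hamiltonian2}) is straightforward to verify numerically. The following sketch (ours, not from the paper; the constants, initial data and step size are arbitrary test values) integrates it with a classical RK4 step and checks two claims of the text: a trajectory starting on ${\cal W}_o$ (that is, with $p^1 = -\gamma q_2$ and $p^0 = q_1 + \gamma q_3$) remains on ${\cal W}_o$, and the energy $p^0 q_1 + p^1 q_2 - {\cal L}$ is conserved along it:

```python
# Sanity check (not from the paper): integrate the integral-curve equations of X
# for the Pais-Uhlenbeck oscillator and verify tangency to W_o and energy
# conservation. All constants are arbitrary nonzero test values.
GAMMA, OMEGA = 0.1, 1.0
H_STEP, N_STEPS = 0.005, 1000

def vector_field(s):
    """Right-hand side of the system for (q0, q1, q2, q3, p0, p1)."""
    q0, q1, q2, q3, p0, p1 = s
    return [q1, q2, q3, -(OMEGA**2 * q0 + q2) / GAMMA, -OMEGA**2 * q0, q1 - p0]

def rk4_step(s, h):
    """One classical fourth-order Runge-Kutta step."""
    shift = lambda a, k, c: [x + c * y for x, y in zip(a, k)]
    k1 = vector_field(s)
    k2 = vector_field(shift(s, k1, h / 2))
    k3 = vector_field(shift(s, k2, h / 2))
    k4 = vector_field(shift(s, k3, h))
    return [x + h / 6 * (a + 2 * b + 2 * c + d)
            for x, a, b, c, d in zip(s, k1, k2, k3, k4)]

def energy(s):
    """Ostrogradsky-type energy E = p^0 q1 + p^1 q2 - L."""
    q0, q1, q2, q3, p0, p1 = s
    lagr = 0.5 * (q1**2 - OMEGA**2 * q0**2 - GAMMA * q2**2)
    return p0 * q1 + p1 * q2 - lagr

# Start on W_o: p^1 = -gamma*q2 and p^0 = q1 + gamma*q3 (graph of FL).
q0, q1, q2, q3 = 1.0, 0.5, -0.3, 0.2
state = [q0, q1, q2, q3, q1 + GAMMA * q3, -GAMMA * q2]
e0, defect = energy(state), 0.0
for _ in range(N_STEPS):
    state = rk4_step(state, H_STEP)
    defect = max(defect, abs(state[5] + GAMMA * state[2]),
                 abs(state[4] - state[1] - GAMMA * state[3]))
energy_drift = abs(energy(state) - e0)
```

Up to integration error, both `defect` and `energy_drift` stay negligibly small, which is the numerical counterpart of the tangency of $X$ to ${\cal W}_o$ and of the conservation of the Hamiltonian function.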
Thus, if $\sigma$ is given locally by (\ref{eqn:Example1_LocalCoordSigma}), then $\sigma_{\cal L}$ has the following local expression \begin{equation}\label{eqn:Example1_LocalCoordSigmaL} \sigma_{\cal L}(t) = \left(q_0(t),q_1(t),q_2(t),q_3(t)\right) \ , \end{equation} and its components satisfy equations (\ref{eqn:Example1_IntCurveX_Semispray1_1}), (\ref{eqn:Example1_IntCurveX_Semispray1_2}), (\ref{eqn:Example1_IntCurveX_Semispray1_3}) and (\ref{eqn:Example1_IntCurveX_EulerLagrange}). Notice that equations (\ref{eqn:Example1_IntCurveX_Semispray1_1}), (\ref{eqn:Example1_IntCurveX_Semispray1_2}) and (\ref{eqn:Example1_IntCurveX_Semispray1_3}) state that $\sigma_{\cal L}$ is the canonical lifting of a curve in the base, that is, there exists a curve $\gamma \colon \mathbb{R} \to Q$ such that $\tilde\gamma^3 = \sigma_{\cal L}$. Furthermore, equation (\ref{eqn:Example1_IntCurveX_EulerLagrange}) is the Euler-Lagrange equation for this system. Now, for the Hamiltonian solutions, as ${\cal L}$ is a regular Lagrangian, Theorem \ref{thm:Cap06_CorrX-Xh} states that there exists a unique vector field $X_h = {\cal FL}_*X_{\cal L} \in {\mathfrak{X}}({\rm T}^*({\rm T} Q))$ which is a solution to the Hamilton equation. Hence, it is given locally by \begin{equation*} X_h = q_1 \derpar{}{q_0} + q_2 \derpar{}{q_1} - \omega^2q_0\derpar{}{p^0} + \left(q_1 - p^0\right) \derpar{}{p^1} \ . \end{equation*} For the integral curves of $X_h$, Proposition \ref{prop:Cap06_XhCI} states that if $\sigma_{\cal L} \colon \mathbb{R} \to {\rm T}^3Q$ is an integral curve of $X_{\cal L}$ coming from an integral curve $\sigma$ of $X$, then $\sigma_h = {\cal FL} \circ \sigma_{\cal L}$ is an integral curve of the vector field $X_h$. 
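Although the text does not write it down, the Hamiltonian function on ${\rm T}^*({\rm T} Q)$ generating $X_h$ is easy to make explicit in this example (a computation added here): since ${\cal FL}$ gives $q_2 = -p^1/\gamma$, eliminating $q_2$ from $H$ yields

```latex
% Explicit Hamiltonian on T^*(TQ), obtained by eliminating q_2 = -p^1/\gamma:
h(q_0,q_1,p^0,p^1) = p^0 q_1 - \frac{(p^1)^2}{2\gamma}
  - \frac{1}{2}\,q_1^2 + \frac{1}{2}\,\omega^2 q_0^2 \ ,
% whose Hamilton equations reproduce the components of X_h:
\dot{q}_0 = \derpar{h}{p^0} = q_1 \quad , \quad
\dot{q}_1 = \derpar{h}{p^1} = -\frac{p^1}{\gamma} = q_2 \quad , \quad
\dot{p}^0 = -\derpar{h}{q_0} = -\omega^2 q_0 \quad , \quad
\dot{p}^1 = -\derpar{h}{q_1} = q_1 - p^0 \ .
```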
Therefore, if $\sigma$ is given locally by (\ref{eqn:Example1_LocalCoordSigma}), then $\sigma_{\cal L}$ is given by (\ref{eqn:Example1_LocalCoordSigmaL}) and so $\sigma_h$ can be locally written $$ \sigma_h(t) = \left(q_0(t),q_1(t),p^0(t),p^1(t)\right) \ , $$ and its components must satisfy equations (\ref{eqn:Example1_IntCurveX_Semispray1_1}), (\ref{eqn:Example1_IntCurveX_Semispray1_2}), (\ref{eqn:Example1_IntCurveX_Hamiltonian1}) and (\ref{eqn:Example1_IntCurveX_Hamiltonian2}). Notice that these equations are the standard Hamilton equations for this system. \subsection{The second-order relativistic particle} Let us consider a relativistic particle whose action is proportional to its extrinsic curvature. This system was analyzed in \cite{art:Plyushchay88,art:Pisarski86,art:Batlle_Gomis_Pons_Roman88,art:Nesterenko89}, and here we study it using the Lagrangian-Hamiltonian unified formalism. The configuration space is a $n$-dimensional smooth manifold $Q$ with local coordinates $(q_0^A)$, $1 \leqslant A \leqslant n$. Then, if we take the natural set of coordinates on the higher-order tangent bundles over $Q$, the second-order Lagrangian function for this system, ${\cal L} \in {\rm C}^\infty({\rm T}^2Q)$, can be written locally as \begin{equation} {\cal L}(q_0^i,q_1^i,q_2^i) = \frac{\alpha}{(q_1^i)^2} \left[ (q_1^i)^2(q_2^i)^2 - (q_1^iq_2^i)^2 \right]^{1/2} \equiv \frac{\alpha}{(q_1^i)^2} \sqrt{g} \ . \label{eqn:Example_Lagrangian} \end{equation} where $\alpha$ is some nonzero constant. It is a singular Lagrangian, as we can see by computing the Hessian matrix of ${\cal L}$ with respect to $q_2^A$, which is $$ \left( \frac{\partial^2{\cal L}}{\partial q_2^B\partial q_2^A} \right) = \begin{cases} \displaystyle \frac{\alpha}{2(q_1^i)^2\sqrt{g^3}} \left[ \left((q_1^iq_2^i)^2 - 2(q_1^i)^2(q_2^i)^2 \right)q_1^Bq_1^A \right. + (q_1^i)^2(q_1^iq_2^i)(q_2^Bq_1^A-q_1^Bq_2^A) & \\ \displaystyle \qquad\qquad\quad \left. 
- (q_1^i)^2(q_2^i)^2q_2^Bq_2^A \right] & \mbox{ if } B \neq A \\ \displaystyle \frac{\alpha}{\sqrt{g^3}}\left[ g - (q_2^i)^2q_1^Aq_1^A + 2(q_1^iq_2^i)q_1^Aq_2^A - (q_1^i)^2q_2^Aq_2^A \right] & \mbox{ if } B = A \ , \end{cases} $$ then after a long calculation we obtain that $\displaystyle \det\left( \frac{\partial^2{\cal L}}{\partial q_2^B\partial q_2^A} \right) = 0$. Moreover, ${\cal L}$ can be shown to be an almost-regular Lagrangian. As this is a second-order dynamical system, the phase space that we consider is $$ \xymatrix{ \ & {\cal W} = {\rm T}^3Q \times_{{\rm T} Q} {\rm T}^*({\rm T} Q) \ar[dl]_-{\operatorname{pr}_1} \ar[dr]^-{\operatorname{pr}_2} & \ \\ {\rm T}^3Q \ar[dr]_-{\rho^{3}_{1}} & \ & {\rm T}^*({\rm T} Q) \ar[dl]^-{\pi_{{\rm T} Q}} \\ \ & {\rm T} Q & \ . } $$ As ${\cal L}$ is almost-regular, the ``natural'' phase space for this system would be ${\rm T}^3Q \times_{{\rm T} Q} P_o$, where $P_o \hookrightarrow {\rm T}^*({\rm T} Q)$ denotes the image of the Legendre-Ostrogradsky map. However, as we have a set of natural coordinates defined in ${\cal W}$, it is easier to work in ${\cal W}$ and then to obtain the constraints as a consequence of the formalism. If $\omega_{1} \in {\mit\Omega}^2({\rm T}^*({\rm T} Q))$ is the canonical symplectic form, we define the presymplectic form $\Omega = \operatorname{pr}_2^*\omega_{1}\in {\mit\Omega}^2({\cal W})$, whose local expression is $$ \Omega = dq_0^i \wedge dp_i^0 + dq_1^i \wedge dp_i^1 \ . $$ The Hamiltonian function $H \in {\rm C}^\infty({\cal W})$ is $H = {\cal C} - (\rho^3_2 \circ \operatorname{pr}_1)^*{\cal L}$, where ${\cal C}$ is the coupling function, whose local expression is ${\cal C}\left(q_0^i,q_1^i,q_2^i,q_3^i,p_i^0,p_i^1\right) = p_i^0q_1^i + p_i^1q_2^i$, and then the Hamiltonian function can be written locally as $$ H\left(q_0^i,q_1^i,q_2^i,q_3^i,p_i^0,p_i^1\right) = p_i^0q_1^i + p_i^1q_2^i - \frac{\alpha}{(q_1^i)^2} \left[ (q_1^i)^2(q_2^i)^2 - (q_1^iq_2^i)^2 \right]^{1/2} \ . 
$$ The dynamics for this system are described as the integral curves of vector fields $X \in {\mathfrak{X}}({\cal W})$ which are solutions to equation (\ref{eqn:Cap06_EqDinImp}). If we take a generic vector field $X\in{\mathfrak{X}}({\cal W})$, given locally by $$ X = f_0^A \derpar{}{q_0^A} + f_1^A \derpar{}{q_1^A} + F_2^A \derpar{}{q_2^A} + F_3^A \derpar{}{q_3^A} + G_A^0 \derpar{}{p_A^0} + G_A^1 \derpar{}{p_A^1} \ , $$ taking into account that \begin{eqnarray*} {\rm d} H &=& \displaystyle q_1^A{\rm d} p_A^0 + q_2^A{\rm d} p_A^1 + \left[ p^0_A + \frac{\alpha}{((q_1^i)^2)^2\sqrt{g}}\left[ \left((q_1^i)^2(q_2^i)^2 - 2(q_1^iq_2^i)^2\right)q_1^A + (q_1^iq_2^i)(q_1^i)^2q_2^A \right] \right]{\rm d} q_1^A\\ & &+ \left[ p_A^1 - \frac{\alpha}{(q_1^i)^2\sqrt{g}}\left( (q^i_1)^2 q_2^A - (q_1^iq_2^i)q_1^A \right) \right]{\rm d} q_2^A \ ; \end{eqnarray*} from the dynamical equation we obtain the following linear systems for the coefficients of $X$ \begin{align} & f_0^A = q_1^A \label{eqn:Example_Semispray2_1} \\ & f_1^A = q_2^A \label{eqn:Example_Semispray2_2} \\ & G_A^0 = 0 \\ & G_A^1 = - p^0_A - \frac{\alpha}{((q_1^i)^2)^2\sqrt{g}}\left[ \left((q_1^i)^2(q_2^i)^2 - 2(q_1^iq_2^i)^2\right)q_1^A + (q_1^iq_2^i)(q_1^i)^2q_2^A \right] \label{eqn:Example_VectorFieldG1} \\ & p_A^1 - \frac{\alpha}{(q_1^i)^2\sqrt{g}}\left( (q^i_1)^2 q_2^A - (q_1^iq_2^i)q_1^A \right) = 0 \ . \label{eqn:Example_LegTransformation} \end{align} Note that from equations \eqref{eqn:Example_Semispray2_1} and \eqref{eqn:Example_Semispray2_2} we obtain the condition of semispray of type $2$ for $X$. Furthermore, equations \eqref{eqn:Example_LegTransformation} are algebraic relations between the coordinates in ${\cal W}$ stating that the vector field $X$ is defined along a submanifold ${\cal W}_o$ that is identified with the graph of the Legendre-Ostrogradsky map, as we stated in Propositions \ref{prop:Cap06_ExistSolEqDin} and \ref{prop:Cap06_W0GrafFL}. 
Thus, the vector field $X$ is given locally by \begin{equation} \label{eqn:Example_VectorFieldBeforeHolonomy} X = q_1^A \derpar{}{q_0^A} + q_2^A \derpar{}{q_1^A} + F_2^A \derpar{}{q_2^A} + F_3^A \derpar{}{q_3^A} + G_A^1 \derpar{}{p_A^1} \ , \end{equation} where the functions $G_A^1$ are determined by \eqref{eqn:Example_VectorFieldG1}. As we want to recover the Lagrangian solutions from the vector field $X$, we must require $X$ to be a semispray of type $1$. This condition reduces the set of vector fields $X \in {\mathfrak{X}}({\cal W})$ given by \eqref{eqn:Example_VectorFieldBeforeHolonomy} to the following ones \begin{equation} \label{eqn:Example_VectorFieldHolonomy} X = q_1^A \derpar{}{q_0^A} + q_2^A \derpar{}{q_1^A} + q_3^A \derpar{}{q_2^A} + F_3^A \derpar{}{q_3^A} + G_A^1\derpar{}{p_A^1} \ . \end{equation} Notice that the functions $F_3^A$ are not determined until the tangency of the vector field $X$ on ${\cal W}_o$ is required. Now, the Legendre-Ostrogradsky transformation is the map ${\cal FL} \colon {\rm T}^3Q \longrightarrow {\rm T}^*({\rm T} Q)$ locally given by \begin{eqnarray*} {\cal FL}^*(p^0_A) &=& \derpar{{\cal L}}{q_1^A} - d_T\left(\derpar{{\cal L}}{q_2^A}\right) \equiv \derpar{{\cal L}}{q^A_1} - d_T \left( p_A^1 \right) = \\ & & \frac{\alpha}{(q_1^i)^2\sqrt{g^3}} \left[ \left( (q_2^i)^2g + (q_1^i)^2(q_2^i)^2(q_1^iq_3^i) - (q_1^i)^2(q_1^iq_2^i)(q_2^iq_3^i) \right)q_1^A\right] + \\ & & \frac{\alpha}{(q_1^i)^2\sqrt{g^3}} \left[ \left( ((q_1^i)^2)^2(q_2^iq_3^i) - (q_1^i)^2(q_1^iq_2^i)(q_1^iq_3^i) - (q_1^iq_2^i)g \right)q_2^A - (q_1^i)^2gq_3^A\right] \\ & & \\ {\cal FL}^*(p^1_A) &=& \derpar{{\cal L}}{q_2^A} = \frac{\alpha}{(q^i_1)^2\sqrt{g}} \left[ (q^i_1)^2q_2^A - (q^i_1q^i_2)q^A_1 \right] \ , \end{eqnarray*} and, in fact, ${\cal L}$ is an almost-regular Lagrangian. 
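The two conditions cutting out ${\rm Im}\,{\cal FL}$ can be read off directly from the expression of ${\cal FL}^*(p^1_A)$ above. The following short computation is added here as a check (all repeated indices $i$ are summed, and $g = (q_1^i)^2(q_2^i)^2 - (q_1^iq_2^i)^2$ as in \eqref{eqn:Example_Lagrangian}):

```latex
% Contracting FL^*(p^1_A) with q_1^A makes the bracket vanish:
p^1_i q_1^i = \frac{\alpha}{(q_1^i)^2\sqrt{g}}
  \left[ (q_1^i)^2(q_1^iq_2^i) - (q_1^iq_2^i)(q_1^i)^2 \right] = 0 \ ,
% while squaring it reproduces g in the numerator:
(p^1_i)^2 = \frac{\alpha^2}{((q_1^i)^2)^2\,g}
  \left[ ((q_1^i)^2)^2(q_2^i)^2 - (q_1^i)^2(q_1^iq_2^i)^2 \right]
  = \frac{\alpha^2}{((q_1^i)^2)^2\,g}\,(q_1^i)^2\,g = \frac{\alpha^2}{(q_1^i)^2} \ .
```

These are precisely the two primary constraints $\phi^{(0)}_1$ and $\phi^{(0)}_2$ defining $P_o$.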
Thus, from the expression in local coordinates of the map ${\cal FL}$, we obtain the (primary) constraints that define the closed submanifold $P_o={\rm Im}\,{\cal FL}$, which are \begin{equation} \phi^{(0)}_1 \equiv p^1_iq_1^i = 0 \quad ; \quad \phi^{(0)}_2 \equiv (p_i^1)^2 - \frac{\alpha^2}{(q^i_1)^2} = 0 \ . \label{eqn:Example_Constraints0} \end{equation} Let ${\cal FL}_o \colon {\rm T}^{3}Q \to P_o$ be the restriction of ${\cal FL}$ to its image. Then, the submanifold ${\cal W}_o ={\rm graph}\,{\cal FL}_o$ is defined by $$ {\cal W}_o = \left\{ p \in {\cal W} \ \colon \ \xi^A_0(p) = \xi^A_1(p) = \phi^{(0)}_1(p) = \phi^{(0)}_2(p) = 0, \ 1 \leqslant A \leqslant \dim Q \right\} $$ where $\xi_r^A \equiv p_A^r - {\cal FL}^*p_A^r$, $r = 0,1$. The diagram for this situation is $$ \xymatrix{ \ & {\cal W} \ar@/_1.25pc/[dddl]_-{\operatorname{pr}_1} \ar@/^1.25pc/[dddr]^-{\operatorname{pr}_2} & \ \\ \ & \ & \ \\ \ & {\cal W}_{P_o} = {\rm T}^{3}Q \times_{{\rm T} Q} P_o \ar@{^{(}->}[uu]^-{j_{{\cal W}_{P_o}}} \ar[dl]_-{\operatorname{pr}_{1,{\cal W}_{P_o}}} \ar[dr]^-{\operatorname{pr}_{2,{\cal W}_{P_o}}} \ar[ddr]_-{\operatorname{pr}_{2,P_o}} & \ \\ {\rm T}^3Q & \ & {\rm T}^*({\rm T} Q) \\ \ & {\cal W}_o = {\rm graph}\,{\cal FL}_o \ar@{^{(}->}[uu]^-{j_o} \ar[ul]_- {\overline{\operatorname{pr}}_{1,P_o}} \ar[r]^-{\overline{\operatorname{pr}}_{2,P_o}} & P_o \ar@{^{(}->}[u]^{j_{P_o}} } $$ Notice that ${\cal W}_o$ is a submanifold of ${\rm T}^3Q \times_{{\rm T} Q} P_o$, and that ${\cal W}_o$ is the real phase space of the system, where the dynamics take place. 
Next we compute the tangency condition for $X \in {\mathfrak{X}}({\cal W})$ given locally by \eqref{eqn:Example_VectorFieldHolonomy} on the submanifold ${\cal W}_o \hookrightarrow {\cal W}_{P_o} \hookrightarrow {\cal W}$, by checking if the following identities hold \begin{eqnarray} \mathop{\rm L}\nolimits(X)\xi^A_0\vert_{{\cal W}_o} = 0 & \ , \ & \mathop{\rm L}\nolimits(X)\xi^A_1\vert_{{\cal W}_o} = 0 \label{eqn:Example_LagEquations1}\\ \mathop{\rm L}\nolimits(X)\phi^{(0)}_1\vert_{{\cal W}_o} = 0 & \ , \ & \mathop{\rm L}\nolimits(X)\phi^{(0)}_2\vert_{{\cal W}_o} = 0 \ . \label{eqn:Example_LieDerConstraints0} \end{eqnarray} As we have seen in Section \ref{section:DynamicsW}, equations (\ref{eqn:Example_LagEquations1}) give us the Lagrangian equations for the vector field $X$. However, equations (\ref{eqn:Example_LieDerConstraints0}) do not hold since $$ \mathop{\rm L}\nolimits(X)\phi^{(0)}_1 = \mathop{\rm L}\nolimits(X)(p^1_iq_1^i) = - p_i^0q_1^i \quad , \quad \mathop{\rm L}\nolimits(X)\phi^{(0)}_2 = \mathop{\rm L}\nolimits(X)((p^1_i)^2 - \alpha^2 / (q_1^i)^2) = - 2p_i^0p_i^1 \ , $$ and hence we obtain two first-generation secondary constraints \begin{equation} \phi^{(1)}_1 \equiv p_i^0 q_1^i = 0 \quad , \quad \phi^{(1)}_2 \equiv p_i^0p_i^1 = 0 \label{eqn:Example_Constraints1} \end{equation} that define a new submanifold ${\cal W}_1 \hookrightarrow {\cal W}_o$. Now, checking the tangency of the vector field $X$ to this new submanifold, we obtain $$ \mathop{\rm L}\nolimits(X)\phi^{(1)}_1 = \mathop{\rm L}\nolimits(X)(p_i^0q_1^i) = 0 \quad , \quad \mathop{\rm L}\nolimits(X)\phi^{(1)}_2 = \mathop{\rm L}\nolimits(X)(p_i^0p_i^1) = -(p^0_i)^2 \ , $$ and a second-generation secondary constraint appears \begin{equation} \phi^{(2)} \equiv (p^0_i)^2 = 0 \ , \label{eqn:Example_Constraints2} \end{equation} which defines a new submanifold ${\cal W}_2 \hookrightarrow {\cal W}_1$. 
Finally, the tangency of the vector field $X$ on this submanifold gives no new constraints, since $$ \mathop{\rm L}\nolimits(X)\phi^{(2)} = \mathop{\rm L}\nolimits(X)((p^0_i)^2) = 0 \ . $$ So we have two primary constraints \eqref{eqn:Example_Constraints0}, two first-generation secondary constraints \eqref{eqn:Example_Constraints1}, and a single second-generation secondary constraint \eqref{eqn:Example_Constraints2}. Notice that these five constraints only depend on $q_1^A$, $p^0_A$ and $p^1_A$, and so they are $\operatorname{pr}_2$-projectable. Thus, we have the following diagram $$ \xymatrix{ \ & \ & {\cal W} \ar@/_1.25pc/[dddll]_-{\operatorname{pr}_1} \ar@/^1.25pc/[dddrr]^-{\operatorname{pr}_2} & \ & \ \\ \ & \ & \ & \ & \ \\ \ & \ & {\cal W}_{P_o} \ar@{^{(}->}[uu]^-{j_{{\cal W}_{P_o}}} \ar[dll]_-{\operatorname{pr}_{1,{\cal W}_{P_o}}} \ar[drr]^-{\operatorname{pr}_{2,{\cal W}_{P_o}}} \ar[ddrr]_-{\operatorname{pr}_{2,P_o}} & \ & \ \\ {\rm T}^3Q & \ & \ & \ & {\rm T}^*({\rm T} Q) \\ S_1 \ar@{^{(}->}[u]^-{j_{S_1}} & \ & {\cal W}_o \ar@{^{(}->}[uu]^-{j_o} \ar[ull]_-{\overline{\operatorname{pr}}_{1,P_o}} \ar[rr]^-{\overline{\operatorname{pr}}_{2,P_o}} & \ & P_o \ar@{^{(}->}[u]^{j_{P_o}} \\ S_2 \ar@{^{(}->}[u]^-{j_{S_2}} & \ & {\cal W}_1 \ar@{^{(}->}[u]^-{j_1} \ar[ull]_-{\overline{\operatorname{pr}}_{1,P_1}} \ar[rr]^-{\overline{\operatorname{pr}}_{2,P_1}} & \ & P_1 \ar@{^{(}->}[u]^-{j_{P_1}} \\ \ & \ & {\cal W}_2 \ar@{^{(}->}[u]^-{j_2} \ar[ull]_-{\overline{\operatorname{pr}}_{1,P_2}} \ar[rr]^-{\overline{\operatorname{pr}}_{2,P_2}} & \ & P_2 \ar@{^{(}->}[u]^-{j_{P_2}} } $$ where \begin{align*} &P_1 = \left\{ p \in P_o \colon \phi^{(1)}_1(p) = \phi^{(1)}_2(p) = 0 \right\} =\operatorname{pr}_2({\cal W}_1) \\ &P_2 = \left\{ p \in P_o \colon \phi^{(2)}(p) = 0 \right\} =\operatorname{pr}_2({\cal W}_2) \\ &S_1 = {\cal FL}_o^{-1}(P_1) = \operatorname{pr}_1({\cal W}_1)\\ &S_2 = {\cal FL}_o^{-1}(P_2) = \operatorname{pr}_1({\cal W}_2) \ . 
\end{align*} Focusing only on the Legendre-Ostrogradsky map, and ignoring the unified part of the diagram, we have $$ \xymatrix{ {\rm T}^3 Q \ar[rr]^-{{\cal FL}} \ar[drr]^-{{\cal FL}_o} & \ & {\rm T}^*({\rm T} Q) \\ S_1 \ar@{^{(}->}[u]^-{j_{S_1}} \ar[drr]^-{{\cal FL}_o} & \ & P_o \ar@{^{(}->}[u]^-{j_{P_o}} \\ S_2 \ar@{^{(}->}[u]^-{j_{S_2}} \ar[drr]^-{{\cal FL}_o} & \ & P_1 \ar@{^{(}->}[u]^-{j_{P_1}} \\ \ & \ & P_2 \ar@{^{(}->}[u]^-{j_{P_2}} } $$ Notice that we still have to check \eqref{eqn:Example_LagEquations1}. As we have seen in Section \ref{section:DynamicsW}, we will obtain the following equations \begin{align} &\left(F_3^B - d_T\left(q_3^B\right)\right)\frac{\partial^2{\cal L}}{\partial q_2^B\partial q_2^A} + \derpar{{\cal L}}{q_0^A} - d_T\left(\derpar{{\cal L}}{q_1^A}\right) + d_T^2\left(\derpar{{\cal L}}{q_2^A}\right) + \left(F_2^B - q_3^B\right)d_T\left(\frac{\partial^2{\cal L}}{\partial q_2^B\partial q_2^A}\right) = 0 \label{eqn:Example_EulerLagrangeInitial} \\ &\left(F_2^B - q_3^B\right)\frac{\partial^2{\cal L}}{\partial q_2^B\partial q_2^A} = 0 \label{eqn:Example_Semispray1} \end{align} As we have already required the vector field $X$ to be a semispray of type $1$, equations \eqref{eqn:Example_Semispray1} are satisfied identically and equations \eqref{eqn:Example_EulerLagrangeInitial} become \begin{equation}\label{eqn:Example_EulerLagrangeFinal} \left(F_3^B - d_T\left(q_3^B\right)\right)\frac{\partial^2{\cal L}}{\partial q_2^B\partial q_2^A} + \derpar{{\cal L}}{q_0^A} - d_T\left(\derpar{{\cal L}}{q_1^A}\right) + d_T^2\left(\derpar{{\cal L}}{q_2^A}\right) = 0 \ . \end{equation} A long calculation shows that this equation is compatible and so no new constraints arise. Thus, we have no Lagrangian constraint appearing from the semispray condition. 
If some constraint had appeared, it would not be ${\cal FL}_o$-projectable (see \cite{art:Gracia_Pons_Roman92}). Thus, the vector fields $X \in {\mathfrak{X}}({\cal W})$ given locally by \eqref{eqn:Example_VectorFieldHolonomy} which are solutions to the equation $$ \restric{\left[\mathop{i}\nolimits(X)\Omega - {\rm d} H\right]}{{\cal W}_2} = 0 \ , $$ are tangent to the submanifold ${\cal W}_2 \hookrightarrow {\cal W}_o$. Therefore, taking the vector fields $X_o\in{\mathfrak{X}}({\cal W}_2)$ such that ${\rm T} j_2\circ X_o=X\circ j_2$, the form $\Omega_o = (j_{{\cal W}_{P_o}} \circ j_o \circ j_1 \circ j_2)^*\Omega$, and the canonical Hamiltonian function $H_o = (j_{{\cal W}_{P_o}} \circ j_o \circ j_1 \circ j_2)^*H$, the above equation leads to $$ \label{eqn:Example_DynamicalEquationsRestricted} \mathop{i}\nolimits(X_o)\Omega_o - {\rm d} H_o= 0 \ , $$ but a simple calculation in local coordinates shows that $H_o=0$, and thus the last equation becomes $\mathop{i}\nolimits(X_o)\Omega_o= 0$. One can easily check that, if the semispray condition is not required at the beginning and we perform all this procedure with the vector field given by \eqref{eqn:Example_VectorFieldBeforeHolonomy}, the final result is the same. This means that, in this case, the semispray condition does not give any additional constraint. As final results, we recover the Lagrangian and Hamiltonian vector fields from the vector field $X \in {\mathfrak{X}}({\cal W})$. For the Lagrangian vector field, by using Lemma \ref{lemma:Cap06_LagVectField} and Theorem \ref{thm:Cap06_CorrX-XL} we obtain a semispray of type $2$, $X_{\cal L} \in {\mathfrak{X}}({\rm T}^3Q)$, tangent to $S_2$. Thus, requiring the condition of semispray of type $1$ to be satisfied (perhaps on another submanifold $M_2 \hookrightarrow S_2$), the local expression for the vector field $X_{\cal L}$ is \begin{equation*} X_{\cal L} = q_1^A \derpar{}{q_0^A} + q_2^A \derpar{}{q_1^A} + q_3^A \derpar{}{q_2^A} + F_3^A \derpar{}{q_3^A} \ . 
\end{equation*} where the functions $F_3^A$ are determined by (\ref{eqn:Example_EulerLagrangeFinal}). For the Hamiltonian vector fields, recall that ${\cal L}$ is an almost-regular Lagrangian function. Thus, we know that there are Euler-Lagrange vector fields which are ${\cal FL}_o$-projectable on $P_2$, tangent to $P_2$ and solutions to the Hamilton equation. \section{Conclusions and outlook} \label{section:outlook} After introducing the natural geometric structures needed for describing higher-order autonomous dynamical systems, we review their Lagrangian and Hamiltonian formalisms, following the exposition made in \cite{book:DeLeon_Rodrigues85}. The main contribution of this work is that we develop the Lagrangian-Hamiltonian unified formalism for higher-order dynamical systems, following the ideas of the original article \cite{art:Skinner_Rusk83}. We pay special attention to showing how the Lagrangian and Hamiltonian dynamics are recovered from this, both for regular and singular systems. A first consideration is to discuss the fundamental differences between the first-order and the higher-order unified Lagrangian-Hamiltonian formalisms. In particular: \begin{itemize} \item As there is no canonical pairing between the elements of ${\rm T}^{2k-1}_qQ$ and of ${\rm T}^*_q({\rm T}^{k-1}Q)$, in order to define the higher-order coupling function ${\cal C}$ in an intrinsic way, we use the canonical injection that transforms a point in ${\rm T}^{2k-1}Q$ into a tangent vector along ${\rm T}^{k-1}Q$. 
\item When the equations that define the Legendre-Ostrogradsky map are recovered from the unified formalism (both in the characterization of the compatibility submanifold ${\cal W}_o$ as the graph of ${\cal FL}$, and in the equations in local coordinates of the vector field $X \in {\mathfrak{X}}({\cal W})$ solution to the dynamical equations), the only equations that are recovered are those that define the highest order momentum coordinates, and the remaining equations that define the map must be recovered using the relations between the momentum coordinates. \item The regularity of the Lagrangian function is more relevant in the higher-order case, because the condition of semispray of type $1$ (the holonomy condition) of the Lagrangian vector field cannot be deduced from the dynamical equations if the Lagrangian is singular, unlike the first-order case, where this holonomy condition is deduced straightforwardly from the equations independently of the regularity of the Lagrangian function. When the Lagrangian is singular, we can only ensure that the Lagrangian vector field is a semispray of type $k$. It is therefore necessary, in general, to require the condition of semispray of type $1$ as an additional condition. Then, for regular Lagrangian systems, when the tangency condition of the vector field $X \in {\mathfrak{X}}({\cal W})$ solution in the unified formalism along the submanifold ${\cal W}_o$ is required, we obtain not only the Euler-Lagrange equations for the vector field, but also the remaining $k-1$ systems of equations that the vector field must satisfy to be a semispray of type $1$. \end{itemize} As we point out in the introduction, a previous and quick presentation of a unified formalism for higher-order systems was outlined in \cite{art:Colombo_Martin_Zuccalli10}. 
Our formalism differs from this one, since in that article the authors take ${\rm T}^kQ \oplus_{{\rm T}^{k-1}Q} {\rm T}^*({\rm T}^{k-1}Q)$ as the phase space in the unified formalism, instead of ours, which is ${\rm T}^{2k-1}Q \oplus_{{\rm T}^{k-1}Q} {\rm T}^*({\rm T}^{k-1}Q)$. This is a significant difference, since when we want to recover the dynamical solutions of the Lagrangian formalism from the unified formalism, the Lagrangian phase space is ${\rm T}^{2k-1}Q$, instead of ${\rm T}^kQ$, which is the bundle where the Lagrangian function is defined. This fact makes it more natural to obtain the Lagrangian dynamics as well as the Hamiltonian dynamics, which in turn is obtained from the Lagrangian one using the Legendre-Ostrogradsky map. By using any suitable generalization of some of the several formalisms for first-order non-autonomous dynamical systems \cite{book:Abraham_Marsden78,art:Barbero_Echeverria_Martin_Munoz_Roman08,art:Echeverria_Munoz_Roman91}, a future avenue of research consists in generalizing this unified formalism for higher-order non-autonomous dynamical systems. This generalization should also be recovered as a particular case of the corresponding unified formalism for higher-order classical field theories. As regards this topic, a proposal for a unified formalism for higher-order classical field theories has recently been made \cite{art:Campos_DeLeon_Martin_Vankerschaver09,art:Vitagliano10}, which is based on the model presented in \cite{art:Colombo_Martin_Zuccalli10}. This formulation allows us to improve some previous models for describing the Lagrangian and Hamiltonian formalisms. Nevertheless, some ambiguities arise when considering the solutions of the field equations. We hope that a suitable extension of our formalism to field theories will enable these difficulties to be overcome and complete the model given in \cite{art:Campos_DeLeon_Martin_Vankerschaver09,art:Vitagliano10}. 
\section*{Acknowledgments} We acknowledge the financial support of the {\sl Ministerio de Ciencia e Innovaci\'on} (Spain), projects MTM2008-00689 and MTM2009-08166-E. We also thank Mr. Jeff Palmer for his assistance in preparing the English version of the manuscript.
\section{Introduction} In the century following their discovery by Victor Hess in 1912, cosmic-rays have been recognized as an important constituent of the Galaxy. With a total energy density somewhat larger than that of starlight (e.g.\ Draine 2011), cosmic-rays are the dominant source of hydrogen ionization for the cold neutral medium (CNM) within the Galactic ISM. In starless molecular cloud cores, they are also the dominant source of heating. Thus, cosmic-rays play a central role in astrochemistry by initiating a rich ion-neutral chemistry that operates within the CNM, and the cosmic-ray ionization rate (CRIR) is a key parameter in models of the chemistry of the ISM (Grenier et al.\ 2015, and references therein). Three related definitions of this parameter are widely used in the literature but must be distinguished. Here, we adopt the {\it primary} ionization rate per hydrogen atom, $\zeta_p({\rm H})$, as the fundamental parameter of interest, because it is most directly related to the density of cosmic-rays. The two other quantities of interest are the {\it total} rate of ionization per hydrogen atom, $\zeta_t({\rm H})$, which includes the secondary ionizations that are caused by the energetic electrons produced by primary ionizations, and the total ionization rate per hydrogen molecule, $\zeta_t({\rm H}_2).$ While the exact ratios of these three quantities depend upon the fractional ionization and molecular fraction (Dalgarno et al.\ 1999), the rough relationship is $\zeta_p({\rm H}) = \zeta_t({\rm H})/1.5 = \zeta_t({\rm H}_2)/2.3$ under typical conditions within the diffuse neutral ISM (Glassgold \& Langer 1974). While cosmic-rays of energies above $\sim 1$~GeV can be readily observed from the location of Earth's orbit, cosmic rays of lower energy have their flux modulated by the Sun's magnetic field and the solar wind. These are precisely the cosmic-rays that dominate the ionization and heating of the ISM.
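The rough relationship quoted above between the three ionization rates can be encoded in a few lines. This is a minimal sketch using the typical diffuse-ISM factors of Glassgold \& Langer (1974); the function names are ours, and the factors are approximations, not exact constants.

```python
# Approximate conversions between the three cosmic-ray ionization rates
# defined in the text, using the typical diffuse-ISM factors
# zeta_t(H) = 1.5 zeta_p(H) and zeta_t(H2) = 2.3 zeta_p(H).

def zeta_t_H(zeta_p_H):
    """Total ionization rate per H atom, including secondary ionizations."""
    return 1.5 * zeta_p_H

def zeta_t_H2(zeta_p_H):
    """Total ionization rate per H2 molecule."""
    return 2.3 * zeta_p_H

zp = 2.0e-16                       # a primary rate of 2e-16 s^-1 per H atom
print(zeta_t_H(zp), zeta_t_H2(zp))  # 3e-16 and 4.6e-16 s^-1
```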
Recent measurements performed by the {\it Voyager I} spacecraft, now located beyond the heliopause, have provided improved estimates of the flux of lower-energy cosmic-ray protons and electrons, down to energies as low as $\sim 3$~MeV (Cummings et al.\ 2016). Nevertheless, it remains unclear whether the cosmic-ray fluxes reach their unmodulated values even at the current location of {\it Voyager I}. Moreover, because the ionization cross-sections for H and H$_2$ peak at an energy $\sim 0.01~$MeV, an extrapolation to unobserved energies is still required to determine the implied CRIR, which remains quite uncertain even in the solar neighborhood. Estimates of the CRIR in interstellar gas clouds may be obtained through a careful astrochemical analysis of the observed abundances of specific molecules whose production is driven by cosmic-rays. In dense molecular clouds that are shielded from the interstellar UV radiation field, the abundances of H$^{13}$CO$^+$ and H$_3^+$ have been used to derive estimates of $\zeta_t({\rm H}_2)$ in the range $\sim 0.6$ to $6 \times 10^{-17}\, \rm s^{-1}$ (van der Tak \& van Dishoeck 2000; Kulesa 2002). For the cold diffuse interstellar medium, where the UV radiation field is less strongly attenuated, it is convenient to use the nomenclature adopted by Snow \& McCall (2006), who distinguished between diffuse {\it atomic} material, in which the molecular fraction $f_{\rm H2} = 2\,n({\rm H}_2)/[2\,n({\rm H}_2)+n({\rm H})]$ is smaller than 0.1, and diffuse {\it molecular} material, in which $f_{\rm H2}$ is larger than 0.1 but the UV radiation field is still sufficient to maintain C$^+$ as the dominant reservoir of gas-phase carbon nuclei. Diffuse {\it atomic} gas is found in clouds of typical visual extinction $A_{\rm V} \le 0.2$~mag and H nucleus density $n_{\rm H} = 10 - 100\, \rm cm^{-3},$ while diffuse {\it molecular} gas is found in clouds with $A_{\rm V} = 0.2 - 1$~mag and $n_{\rm H} = 100 - 500\, \rm cm^{-3}$ (Snow \& McCall 2006). 
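The Snow \& McCall (2006) nomenclature amounts to a threshold on the molecular fraction defined above; a minimal sketch (helper names are ours, and the classifier ignores the additional condition that C$^+$ remains the dominant carbon reservoir):

```python
def molecular_fraction(n_H2, n_H):
    """f_H2 = 2 n(H2) / [2 n(H2) + n(H)], with densities in cm^-3."""
    return 2.0 * n_H2 / (2.0 * n_H2 + n_H)

def diffuse_class(f_H2):
    """Snow & McCall (2006): f_H2 < 0.1 -> diffuse atomic,
    otherwise diffuse molecular (while C+ still dominates carbon)."""
    return "diffuse atomic" if f_H2 < 0.1 else "diffuse molecular"

f = molecular_fraction(n_H2=5.0, n_H=90.0)   # exactly the 0.1 boundary
print(f, diffuse_class(f))
```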
Clearly, the distinction here -- although useful -- is somewhat arbitrary, and we certainly expect a continuous distribution of $f_{\rm H2}$, $A_{\rm V}$ and $n_{\rm H}$. The CRIR within diffuse {\it molecular} gas can be inferred from measurements of H$_3^+$ (e.g.\ Indriolo et al.\ 2007) and HD (e.g.\ Liszt 2015). Such measurements have revealed that the CRIR within the diffuse molecular clouds is typically an order of magnitude larger than those inferred for dense molecular clouds, suggesting that the cosmic-ray fluxes are significantly attenuated within dense molecular clouds. These measurements generalize the surprising result obtained in the pioneering study of McCall et al.\ (2003), which combined astronomical observations with new laboratory measurements of the dissociative recombination rate for H$_3^+$ and derived a CRIR along the sight-line to $\zeta$~Per that was a factor $\sim 40$ larger than those typically inferred for dense molecular clouds. Similarly enhanced CRIRs were subsequently inferred from measurements of the OH$^+$ and H$_2$O$^+$ abundances in the diffuse ISM (e.g.\ Gerin et al.\ 2010; Neufeld et al.\ 2010); in this case, the molecular fraction indicated by the OH$^+$/H$_2$O$^+$ column density ratio is $\sim 2 - 10\,\%$ (Indriolo et al.\ 2015, hereafter I15), implying that the CRIR estimates obtained from measurements of OH$^+$ and H$_2$O$^+$ apply to diffuse {\it atomic} material. The past five years have seen the publication of two large surveys of relevance to the CRIR in the diffuse ISM: a near-infrared survey of H$_3^+$ in diffuse molecular clouds, obtained with ground-based telescopes (Indriolo \& McCall 2012; hereafter IM12); and a submillimeter survey of OH$^+$ and H$_2$O$^+$ in diffuse atomic clouds (I15), obtained using the {\it Herschel Space Observatory}.
As will be discussed in Sections 3 and 4 below, these studies used simple analytic expressions -- based upon an approximate treatment of the chemistry -- to estimate CRIRs from the observed abundance of H$_3^+$ or the observed abundances of OH$^+$ and H$_2$O$^+$. In this paper, we will present the results of detailed physical and chemical models for diffuse interstellar gas clouds, examine critically the approximations used by IM12 and I15, and present refined estimates for the CRIR in the diffuse ISM. The diffuse cloud model used in this study is described in \S 2, along with the $\rm H_3^+$ abundance predictions obtained from the model. In \S 3, we present estimates of the CRIR within diffuse molecular and diffuse atomic clouds in the Galactic disk. In \S 4 we discuss the comparison with previous estimates reported in the literature and present recommended values for the mean CRIR in the Galactic disk. \section{Diffuse cloud model, and predictions for $\rm H_3^+$ abundances} \subsection{Diffuse cloud model} Our thermal and chemical model for diffuse molecular clouds is based on that described by Hollenbach et al. (2012; hereafter H12), with the modifications discussed by Neufeld \& Wolfire (2016; hereafter Paper I). In this model, we treat an interstellar gas cloud as a two-sided slab that is illuminated isotropically by an ultraviolet radiation field with the spectrum given by Draine (1978). The strength of the radiation field is characterized by the quantity $\chi_{\rm UV}$, which is the ratio of the specific intensity to the mean interstellar value recommended by Draine (1978). The attenuation of the isotropic field was calculated as described in Wolfire et al.\ (2010), and the equilibrium gas temperature and steady-state chemical abundances were calculated as a function of depth into the cloud. As in Paper I, we included a network of chemical reactions for argon-containing species identical to that presented in Schilke et al.\ (2014; hereafter S14).
\subsection{Standard model grid} \begin{deluxetable}{lll} \tablewidth{0pt} \tablecaption{Grid of model parameters} \tablehead{Parameter & Number of values & Values} \startdata $\chi_{\rm UV}$ & 10 & 0.05, 0.1, 0.2, 0.3, 0.5, 1.0, 2.0, 3.0, 5.0, 10.0 \\ $\zeta_p({\rm H})/10^{-16}\,\rm s^{-1}$ & 9 & 0.006, 0.02, 0.06, 0.2, 0.6, 2.0, 6.0, 20, 60 \\ $A_{\rm V}({\rm tot})$/mag & 16 & 0.0003, 0.001, 0.003, 0.01, 0.03, 0.1, 0.2, \\ & & 0.3, 0.5, 0.8, 1.0, 1.5, 2.0, 3.0, 5.0, 8.0 \\ $Z/Z_{\rm std}$ & 2 & 1.0, 2.0 \\ $n_{\rm H}$ & 1 & 50$\,\rm cm^{-3}$ \\ \enddata \end{deluxetable} We have computed a grid of models for diffuse atomic clouds, and for diffuse and translucent molecular clouds, for all combinations of the four key parameters listed in Table 1: the normalized UV radiation field, $\chi_{\rm UV}$, the primary CRIR per H atom, $\zeta_p({\rm H})$, the total visual extinction across the slab, $A_{\rm V}({\rm tot})$, and the metallicity, $Z$. In our standard metallicity model, $Z=Z_{\rm std}$, the adopted elemental abundances were those most appropriate to the Galactic ISM at the solar circle, for which we assumed gas-phase carbon, oxygen and argon abundances of $1.6 \times 10^{-4}$ (Sofia et al. 2004; Gerin et al.\ 2015), $3.9 \times 10^{-4}$ (Cartledge et al. 2004), and $3.2 \times 10^{-6}$ (Asplund et al.\ 2009) relative to H nuclei, respectively. Given a primary CRIR per H atom, $\zeta_p({\rm H})$, we used the expressions given by Dalgarno et al.\ (1999) to determine the total CRIR per H atom (including the effects of secondary ionizations), $\zeta_t({\rm H})$, and the total ionization rate per H$_2$ molecule, $\zeta_t({\rm H}_2)$.
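As a quick sanity check, the parameter values in Table 1 multiply out to the model count quoted in the text; the lists below simply transcribe the table.

```python
from itertools import product

chi_uv  = [0.05, 0.1, 0.2, 0.3, 0.5, 1.0, 2.0, 3.0, 5.0, 10.0]
zeta_p  = [0.006, 0.02, 0.06, 0.2, 0.6, 2.0, 6.0, 20, 60]      # /1e-16 s^-1
av_tot  = [0.0003, 0.001, 0.003, 0.01, 0.03, 0.1, 0.2, 0.3,
           0.5, 0.8, 1.0, 1.5, 2.0, 3.0, 5.0, 8.0]             # mag
z_ratio = [1.0, 2.0]                                           # Z / Z_std

# Every model is run at the single density n_H = 50 cm^-3; other densities
# follow from the scaling in chi_UV/n_H and zeta_p(H)/n_H described below.
models = list(product(chi_uv, zeta_p, av_tot, z_ratio))
print(len(models))  # 10 x 9 x 16 x 2 = 2880 parameter combinations
```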
The typical conversion factors are in good agreement with those given by Glassgold \& Langer (1974): $\zeta_t({\rm H})= 1.5 \zeta_p({\rm H})$ and $\zeta_t({\rm H}_2)= 2.3 \zeta_p({\rm H})$. All the models were computed for a single H nucleus density, $n_{\rm H} = 50\,\rm cm^{-3}$; however, as explained in Paper~I, the cloud properties can be predicted for other values of $n_{\rm H}$ by means of a simple scaling, because the cloud properties are completely determined by $\chi_{\rm UV}/n_{\rm H}$, $\zeta_p({\rm H})/n_{\rm H}$, $A_{\rm V}({\rm tot})$, and $Z$. The selection of parameters listed in Table 1 extends the range of those considered in Paper I to smaller $\chi_{\rm UV}$, to smaller $\zeta_p({\rm H})$, and to larger $A_{\rm V}({\rm tot})$, resulting in a grid consisting of 2880 diffuse cloud models. \subsection{Predictions for the $\rm H_3^+$ abundance} Our treatment of the chemistry of $\rm OH^+$, $\rm H_2O^+$ and $\rm ArH^+$ has been presented in previous papers (H12, S14, Paper I) and will not be discussed further here. In this section, we confine our attention to the H$_3^+$ molecular ion. As has been described in many previous studies (e.g. IM12 and references therein), the formation of H$_3^+$ is initiated by the cosmic ray ionization of H$_2$ to form H$_2^+$, followed by proton transfer from H$_2^+$ to H$_2$: $$\rm H_2^+ + H_2 \rightarrow H_3^+ + H. \eqno(R1)$$ If the molecular fraction is small, charge transfer with H is a significant competing channel that limits the H$_3^+$ abundance: $$\rm H_2^+ + H \rightarrow H^+ + H_2.
\eqno(R2)$$ Dissociative recombination and photoionization are other loss processes for $\rm H_2^+$, but they are generally unimportant, so that the fraction of ${\rm H}_2$ ionizations that are followed by H$_3^+$ production, $\epsilon({\rm H}_3^+)$, is well-approximated by $$\epsilon({\rm H}_3^+)= {1 \over 1 + k_2 n({\rm H})/k_1 n({\rm H_2})}= {1 \over 1 + 0.3\, n({\rm H})/n({\rm H_2})},\eqno(1)$$ where $k_1$ and $k_2$ are the rate coefficients for reactions (R1) and (R2) respectively, for which we adopt values of $2.1 \times 10^{-9}$ (Theard \& Huntress 1974) and $6.4 \times 10^{-10}\, \rm cm^3\,s^{-1}$ (Karpas \& Huntress 1979). The H$_3^+$ production efficiency, $\epsilon({\rm H}_3^+)$, exceeds 50$\%$ whenever $n({\rm H})/n({\rm H_2})$ is smaller than $\sim 3$, or equivalently whenever the molecular fraction, $f({\rm H}_2) = 2n({\rm H}_2)/[n({\rm H})+2n({\rm H}_2)]$, exceeds $\sim 0.4$. Except in dense clouds, where the electron fraction $x_{\rm e}$ is very small, the destruction of H$_3^+$ is dominated by dissociative recombination: $$\rm H_3^+ + e \rightarrow H_2 + H\,\,\,\,or\,\,\,\,3H. \eqno(R3)$$ At small visual extinctions, carbon is fully ionized; $x_{\rm e}$ is therefore at least\footnote{For sufficiently large CRIRs, as discussed further below, the ionization of hydrogen can be a significant source (or even the dominant source) of electrons.} as large as the gas-phase abundance of carbon nuclei, $x_{\rm C}$, for which we adopt a value of $1.6 \times 10^{-4}$ (Sofia et al.\ 2004) in our standard metallicity models. As the extinction increases, the ionized fraction for carbon begins to drop, and the destruction rate decreases accordingly. Eventually, for sufficiently small electron fractions, proton transfer to neutral species (such as O or CO) becomes the dominant loss process and sets a floor on the destruction rate for H$_3^+$.
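Equation (1) and the quoted rate coefficients can be checked numerically; a minimal sketch (the 0.3 in eqn.\ 1 is just $k_2/k_1$ rounded, and the helper names are ours):

```python
K1 = 2.1e-9    # R1: H2+ + H2 -> H3+ + H  (Theard & Huntress 1974), cm^3 s^-1
K2 = 6.4e-10   # R2: H2+ + H  -> H+  + H2 (Karpas & Huntress 1979), cm^3 s^-1

def epsilon_H3p(n_H, n_H2):
    """Eqn. (1): fraction of H2 ionizations followed by H3+ production."""
    return 1.0 / (1.0 + (K2 / K1) * n_H / n_H2)

print(K2 / K1)                # ~0.30, the coefficient appearing in eqn. (1)
print(epsilon_H3p(3.0, 1.0))  # just above 0.5: n(H)/n(H2) ~ 3 is the 50% point
```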
In diffuse and translucent clouds, where dissociative recombination dominates the destruction of H$_3^+$, the equilibrium $n({\rm H_3^+})/n({\rm H_2})$ density ratio is therefore $${n({\rm H_3^+}) \over n({\rm H_2})} = {\epsilon({\rm H}_3^+) \, \zeta_t({\rm H}_2) \over k_3 x_e n_{\rm H}},\eqno(2)$$ where $k_3$ is the rate coefficient for reaction (R3) (including both reaction channels). Given the value of $k_3$ measured by McCall et al.\ (2004), which may be approximated by $1.2 \times 10^{-7} (T/100\,{\rm K})^{-0.5},$ we obtain $${n({\rm H_3^+})\over n({\rm H_2})} = 2.1 \times 10^{-7}\, {\epsilon({\rm H}_3^+)\, \zeta_t({\rm H}_2)_{-15}\,T_2^{0.5} \over (x_{\rm e}/x_{\rm C})\, (Z/Z_\odot)\, n_{250}}, \eqno(3)$$ where $\zeta_t({\rm H}_2)_{-15}=\zeta_t({\rm H}_2)/[10^{-15}\,\rm s^{-1}]$, $T_2=T/ [\rm 100\,K]$, $n_{250}=n_{\rm H}/[250\,\rm cm^{-3}],$ and $Z$ is the metallicity (with $Z/Z_\odot$ = 1 in the standard metallicity case and 2 in the enhanced metallicity case.) \begin{figure} \includegraphics[width=13 cm]{fig1.ps} \caption{Local abundances of H$_2$, electrons, H$_2^+$ and H$_3^+$ relative to H nuclei, and of H$_2^+$ and H$_3^+$ relative to H$_2$, as a function of depth into a cloud of total visual extinction $A_{\rm V}({\rm tot}) = 8$ (i.e.\ $A_{\rm V}({\rm tot}) = 4$ to the midplane) exposed to UV radiation with $\chi_{\rm UV}/n_{250} = 1.$ Results are shown for several values of the CRIR (see legend in top left panel). 
In the lower right panel, which shows $N({\rm H}_3^+)/N({\rm H}_2)$, the dashed lines indicate predictions of the analytic treatment adopted by IM12 (see our eqn.\ 4), for an assumed gas temperature of 70 K.} \end{figure} In Figure 1, we have plotted several profiles showing the dependence predicted by our model for various abundances, as a function of depth into a cloud of total visual extinction $A_{\rm V}({\rm tot}) = 8$ exposed to UV radiation with $\chi_{\rm UV}/n_{250} = 1.$ Results are shown for six different CRIRs: $\zeta_p({\rm H})_{-16}/n_{250} = 1$ (black), 3 (red), 10 (brown), 30 (green), 100 (blue), and 300 (magenta), where $\zeta_p({\rm H})_{-16} = \zeta_p({\rm H})/[10^{-16}\,\rm s^{-1}] \sim 4.3 \,\zeta_t({\rm H}_2)_{-15}$. The top left panel, which shows the abundance of H$_2$ relative to H nuclei, reveals a strong gradient resulting from the effect of self-shielding on the H$_2$ photodissociation rate. In the cloud interior, destruction of H$_2$ by cosmic-rays reduces the H$_2$ abundance by a factor greater than 2 if $\zeta_p({\rm H})_{-16}/n_{250}$ exceeds $\sim 50$ (blue and magenta curves.) The top right panel shows the electron abundance, $x_{\rm e} = n_{\rm e}/n_{\rm H}.$ As discussed previously, significant (i.e.\ greater than a factor 2) departures from $x_{\rm e} = x_{\rm C}$ are predicted either (1) if the CRIR $\zeta_p({\rm H})_{-16}/n_{250}$ exceeds $\sim 50$, in which case the ionization of H by cosmic-rays can significantly enhance the electron abundance {\it above} $x_{\rm C}$; or (2) once $A_{\rm V}$ exceeds $\sim 0.3\,\rm mag$, at which point the C$^+$ abundance starts to fall (and thus the electron abundance drops {\it below} $x_{\rm C}$ unless condition (1) also applies.) The middle panels show the H$_2^+$ and H$_3^+$ abundances relative to H nuclei, while the bottom panels show the H$_2^+$ and H$_3^+$ abundances relative to H$_2$ molecules.
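Returning to equations (2) and (3), the numerical coefficient of equation (3) can be verified directly from equation (2) with the McCall et al.\ (2004) rate and the adopted carbon abundance; a minimal sketch (the helper names are ours, and the default density corresponds to $n_{250}=1$):

```python
X_C = 1.6e-4          # gas-phase carbon abundance (Sofia et al. 2004)

def k3(T):
    """H3+ dissociative recombination rate coefficient,
    ~1.2e-7 (T/100 K)^-0.5 cm^3 s^-1 (McCall et al. 2004)."""
    return 1.2e-7 * (T / 100.0) ** -0.5

def h3p_to_h2_ratio(eps, zeta_t_H2, T, x_e_over_xC=1.0, Z_rel=1.0, n_H=250.0):
    """Eqn. (2): n(H3+)/n(H2) = eps * zeta_t(H2) / (k3 * x_e * n_H),
    with x_e = (x_e/x_C) * x_C * (Z/Z_sun)."""
    x_e = x_e_over_xC * X_C * Z_rel
    return eps * zeta_t_H2 / (k3(T) * x_e * n_H)

# eps = 1, zeta_t(H2) = 1e-15 s^-1, T = 100 K and n_250 = 1 recover the
# 2.1e-7 coefficient of eqn. (3):
print(h3p_to_h2_ratio(1.0, 1e-15, 100.0))  # ~2.1e-7
```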
The $n({\rm H}_2^+)/n({\rm H}_2)$ ratio (bottom left panel) is exactly proportional to the CRIR and shows only a weak dependence on $A_{\rm V}$: a small decrease in $n({\rm H}_2^+)/n({\rm H}_2)$ occurs as the gas becomes molecular, at $A_{\rm V} \sim 10^{-2}$, because the destruction rate in fully-molecular gas, $k_1 n({\rm H}_2) = k_1 n_{\rm H}/2$, is somewhat larger than that in fully-atomic gas, $k_2 n({\rm H}) = k_2 n_{\rm H}$. The $n({\rm H}_3^+)/n({\rm H}_2)$ ratio (bottom right panel) shows a more complicated behavior. In their derivation of CRIRs from the observed column densities of $\rm H_3^+$ and $\rm H_2$, IM12 made the simplifying assumptions $x_{\rm e} = x_{\rm C}$ and $\epsilon({\rm H}_3^+) = 1$. For a temperature of 70~K, the default value assumed by IM12 unless an alternative estimate was available, and with the approximation that $\zeta_p({\rm H})_{-16} = 4.3 \,\zeta_t({\rm H}_2)_{-15}$, these assumptions then imply $${n({\rm H_3^+}) \over n({\rm H_2})} = 4.1 \times 10^{-8}\, {\zeta_p({\rm H})_{-16} \over n_{250}}. \eqno(4)$$ That value is shown by the horizontal dashed lines in the bottom right panel of Figure 1 (with the same color-coding as the solid curves). Clearly, while equation (4) provides an adequate description over part of the relevant parameter space, significant deviations do result from shortcomings in the assumption that $x_{\rm C} = x_{\rm e}$ (described above), and from departures from $\epsilon({\rm H}_3^+) = 1$ that are important when the molecular fraction is small (see eqn.\ 1 above). \begin{figure} \includegraphics[width=15 cm]{fig2.ps} \caption{$N({\rm H}_3^+)/N({\rm H}_2)$ column density ratios predicted for diffuse and translucent molecular clouds exposed to UV radiation with $\chi_{\rm UV}/n_{250} = 1.$ Results are shown for several values of the total visual extinction through the cloud.
The blue dashed line indicates results obtained using the analytic treatment adopted by IM12 (see our eqn.\ 4), for an assumed gas temperature of 70 K.} \end{figure} Molecular ``abundances'' determined from astronomical observations are typically the ratios of {\it column densities}, not number densities. Accordingly, it is most useful to provide predictions for ${N({\rm H_3^+}) / N({\rm H_2})}$ along a sightline passing through an interstellar gas cloud. These are shown in Figure 2, where we have plotted ${N({\rm H_3^+}) / N({\rm H_2})}$ as a function of ${\zeta_p({\rm H})_{-16}/ n_{250}}$. Results are shown for several values of the total extinction, $A_{\rm V}({\rm tot})$, but they all apply to $\chi_{\rm UV}/n_{250} = 1.$ Here, we also show the approximate results obtained using equation (4) (blue dashed line), upon which the CRIR determinations of IM12 were based. As expected from equation (4), ${N({\rm H_3^+}) / N({\rm H_2})}$ initially shows a linear dependence upon the CRIR. However, once the electron abundance starts to rise above the gas-phase elemental abundance of carbon, the ${N({\rm H_3^+}) / N({\rm H_2})}$ ratio then flattens out. For the highest CRIRs that we considered, the ${N({\rm H_3^+}) / N({\rm H_2})}$ ratio eventually becomes a decreasing function of the CRIR, because the atomic hydrogen abundance increases sufficiently to compete with H$_2$ for H$_2^+$, reducing $\epsilon({\rm H}_3^+)$ even at the cloud center. As a result, the ${N({\rm H_3^+}) / N({\rm H_2})}$ ratio is a non-monotonic function of the CRIR. Figure 2 also shows that the ${N({\rm H_3^+}) / N({\rm H_2})}$ ratio is an increasing function of $A_{\rm V}({\rm tot})$ (even though expression (4) predicts no such dependence). This behavior occurs because the C$^+$ abundance in the cloud interior is smaller in clouds of larger $A_{\rm V}({\rm tot})$.
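The IM12-style estimate simply inverts equation (4); a sketch under the stated assumptions ($x_{\rm e}=x_{\rm C}$, $\epsilon({\rm H}_3^+)=1$, $T=70$ K), valid only on the low-rate linear branch of Figure 2, since at high rates the ratio flattens and even turns over, so the inversion is not unique there. The function name is ours.

```python
def zeta_p_im12(h3p_to_h2, n_250=1.0):
    """Invert eqn. (4): returns zeta_p(H) in units of 1e-16 s^-1 from an
    observed N(H3+)/N(H2) ratio, under the IM12 assumptions
    x_e = x_C, eps(H3+) = 1 and T = 70 K. Linear branch only."""
    return h3p_to_h2 * n_250 / 4.1e-8

# An observed N(H3+)/N(H2) of 4.1e-7 at n_250 = 1 implies
# zeta_p(H) ~ 10 x 1e-16 s^-1, i.e. 1e-15 s^-1:
print(round(zeta_p_im12(4.1e-7), 3))  # 10.0
```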
\section{Estimates of the CRIR} \subsection{Diffuse molecular clouds} \begin{figure} \includegraphics[width=14 cm]{fig3.ps} \caption{H$_2$ column densities and ${N({\rm H_3^+}) / N({\rm H_2})}$ column density ratios predicted for diffuse and translucent molecular clouds with $\chi_{\rm UV}/n_{250} = 1$, where $\chi_{\rm UV}$ is the incident radiation field in Draine (1978) units and $n_{\rm H} = 250\,n_{250}\,\rm cm^{-3}$ is the density of H nuclei. Results are shown in the plane of $N({\rm H_2})$ and ${N({\rm H_3^+}) / N({\rm H_2})}$, with contours of visual extinction, $A_{\rm V}({\rm tot})$, shown in red and contours of $\zeta_p({\rm H})/n_{250}$ shown in blue (where $\zeta_p({\rm H}) \sim \zeta_t({\rm H}_2)/2.3$ is the primary cosmic-ray ionization rate per H nucleus and $\zeta_t({\rm H}_2)$ is the total cosmic-ray ionization rate per H$_2$ molecule.) Blue contours are labeled with $\zeta_p({\rm H})/n_{250}$, in units of $10^{-16}\,\rm s^{-1}$, and red contours with $A_{\rm V}({\rm tot})$ in mag. Diamonds, with 1$\sigma$ error bars, indicate measurements reported by IM12 or Albertsson et al.\ (2014). Here, black diamonds denote measurements obtained from direct observations of H$_2$, with magenta diamonds showing cases in which the H$_2$ column densities have been inferred indirectly from observations of CH or $E(B-V)$.} \end{figure} For diffuse molecular clouds, IM12 have presented an extensive compilation of H$_3^+$ column densities derived from near-IR spectroscopy of stars. This compilation, presented in their Table 4, includes 21 sight-lines with H$_3^+$ detections, 10 of which had been reported previously, and 30 sight-lines with upper limits. Two of the sight-lines with H$_3^+$ detections exhibit two (velocity-resolved) absorption components, leading to a total of 23 clouds in which $N({\rm H}_3^+)$ has been measured. For six of these diffuse clouds, $N({\rm H}_2)$ has been measured directly by means of ultraviolet absorption line spectroscopy. 
For the remaining 17 clouds with ${\rm H}_3^+$ detections, direct measurements of H$_2$ were unavailable, and IM12 inferred $N({\rm H}_2)$ indirectly from measurements of the selective extinction, $E(B-V)$, or the CH column density. For these clouds without direct measurements of H$_2$, the inferred H$_2$ column densities were relatively inaccurate, with estimated uncertainties of a factor 2 (i.e.\ 0.30 dex, for those derived from $E(B-V)$) and 1.6 (i.e.\ 0.21 dex, for those derived from $N({\rm CH})).$ In the analysis presented below, we will focus primarily on ``gold-standard'' determinations in which $N({\rm H}_3^+)$ and $N({\rm H}_2)$ have been measured directly. Three more such determinations, reported by Albertsson et al.\ (2014, hereafter A14), may be added to the six cases presented by IM12, for a total of 9 clouds with direct measurements of $N({\rm H}_3^+)$ and $N({\rm H}_2).$ \begin{figure} \includegraphics[width=14 cm]{fig4.ps} \caption{Same as Figure 3, but with the column densities computed using the simple analytic approximations adopted by IM12. Here, we adopted a gas temperature of 70 K and assumed a molecular fraction of 1.0.} \end{figure} In Figure 3, we show the data presented by IM12 and A14 in the plane of observables, with the horizontal axis showing the H$_2$ column density and the vertical axis the column density ratio, $N({\rm H}_3^+)/N({\rm H}_2).$ Here, black diamonds with 1 $\sigma$ error bars refer to clouds with direct measurements of both $N({\rm H}_3^+)$ and $N({\rm H}_2),$ while magenta diamonds refer to clouds for which the H$_2$ column density was estimated from $N({\rm CH})$ or $E(B-V)$. Overplotted contours show the predictions of our diffuse cloud model, with contours of visual extinction shown in red, and contours of CRIR shown in blue. Blue contours are labeled with $\zeta_p({\rm H})/n_{250}$ in units of $10^{-16}\,\rm s^{-1}$, and red contours with $A_{\rm V}({\rm tot})$ in mag.
All the model predictions apply to a UV radiation field with $\chi_{\rm UV}/n_{250} = 1.$ The considerations discussed in Section 2.3 are illustrated clearly when this figure is compared with Figure 4, in which identical data are shown with model predictions from the simple analytic treatment of IM12. In Figure 4, the blue contours of constant CRIR are horizontal and evenly spaced, because the predicted $N({\rm H}_3^+)/N({\rm H}_2)$ ratio is simply proportional to $\zeta_p({\rm H})/n_{250}$. In Figure 3, by contrast, the blue contours curve upwards at large $N({\rm H}_2)$ because the abundance of electrons -- which destroy H$_3^+$ -- is smaller at larger visual extinctions where carbon is no longer fully ionized. For larger CRIRs, the spacing between the blue contours diminishes -- and the contours may even cross -- because cosmic-ray ionization of H enhances the electron abundance. Moreover, the red contours curve to the left near the top of Figure 3, owing to the destruction of H$_2$ by cosmic-rays. \begin{figure} \includegraphics[width=14 cm]{fig5.ps} \caption{ Same as Figure 3, but now with contours of the observed quantities $N({\rm H}_2)$ and $N({\rm H}_3^+)/N({\rm H}_2)$ in the plane of the model parameters $A_{\rm V}({\rm tot})$ and $\zeta_p({\rm H})/n_{250}$. Blue contours are labeled with ${\rm log}_{10}[N({\rm H}_3^+)/N({\rm H}_2)]$, and red contours are labeled with ${\rm log}_{10}[N({\rm H}_2)/{\rm cm}^{-2}]$. } \end{figure} \begin{figure} \includegraphics[width=14 cm]{fig6.ps} \caption{ Estimates of $\zeta_p({\rm H})$, derived from measurements of the $\rm H_2$ and $\rm H_3^+$ column densities, as a function of the measured $N({\rm H}_2)$ (upper panel) and of the derived $A_{\rm V}({\rm tot})$ (lower panel). Black diamonds: measurements obtained from direct observations of H$_2$. Magenta diamonds: measurements in which the H$_2$ column densities have been inferred indirectly from observations of CH or $E(B-V)$.
Dashed red lines: best fits to all the data. } \end{figure} In Figure 5, we have transformed the coordinate system adopted in Figures 3 and 4, plotting {\it contours of observable quantities in the plane of model parameters}. Here, the horizontal axis shows the visual extinction, $A_{\rm V}({\rm tot})$, and the vertical axis shows $\zeta_p({\rm H})/n_{250}$. Blue contours show the logarithm of the $N({\rm H}_3^+)/N({\rm H}_2)$ ratio, and red contours show the logarithm of $N({\rm H}_2)$ in cm$^{-2}$. The diamonds now represent the best-fit model parameters for each cloud, and the error bars represent 68$\%$ confidence limits. While the data plotted here reveal a clear tendency for $\zeta_p({\rm H})/n_{250}$ to decrease with $A_{\rm V}({\rm tot})$, it is not clear from Figure 5 whether this tendency occurs because the CRIR is a decreasing function of $A_{\rm V}({\rm tot})$ or because the density increases with $A_{\rm V}({\rm tot})$ (or both). Certainly, there is an expectation that the typical gas density will increase with $A_{\rm V}({\rm tot})$ once self-gravity becomes important. For seven of the nine clouds with direct measurements of both $N({\rm H}_3^+)$ and $N({\rm H}_2),$ and for five additional clouds with indirect determinations of $N({\rm H}_2),$ gas density estimates, $n_{\rm H}$, are also available (Sonnentrucker et al.\ 2007). These density estimates were inferred from a fit to the relative level populations of rotational states of the C$_2$ molecule, which had been obtained from absorption-line observations at visible wavelengths. For these clouds, we have multiplied the $\zeta_p({\rm H})/n_{250}$ estimates derived from the $\rm H_2$ and $\rm H_3^+$ column densities by the gas density estimates presented by Sonnentrucker et al.\ (2007), thereby obtaining estimates of the CRIR. The results are shown in Figure 6, as a function of the measured $N({\rm H}_2)$ (upper panel) and of the derived $A_{\rm V}({\rm tot})$ (lower panel).
All results were obtained for a UV radiation field $\chi_{\rm UV}/n_{250} = 1.$ As in Figures 4 and 5, black diamonds with 1$\sigma$ error bars refer to clouds with direct measurements of both $N({\rm H}_3^+)$ and $N({\rm H}_2),$ while magenta diamonds refer to clouds for which the H$_2$ column density was estimated from $N({\rm CH})$ or $E(B-V)$. Dashed red lines in Figure 6 represent the best linear fits to the dependence of ${\rm log}_{10}[\zeta_p({\rm H})]$ on the measured ${\rm log}_{10}[N({\rm H}_2)]$ and on the derived ${\rm log}_{10}[A_{\rm V}({\rm tot})]$. The best-fit slopes are $\sim -1$, but the differences from zero are only of marginal significance. \subsection{Diffuse atomic clouds} As discussed in H12, S14, and Paper I, the CRIR in diffuse {\it atomic} clouds may be probed using observations of $\rm OH^+$, $\rm H_2O^+$ and $\rm ArH^+$. Model predictions for $\rm ArH^+$ were presented previously in Paper I (their Figure 3), and those for $\rm OH^+$ and $\rm H_2O^+$ in H12 (their Figures 14 and 15). In the present study, our results for $\rm OH^+$ and $\rm H_2O^+$ reflect several changes to the chemistry described in Paper I, and have been computed on a finer grid than those presented in H12. Accordingly, we have shown the results of the current model in Figures 7 -- 9, in a manner analogous to that adopted for Figures 3 -- 5. In Figure 7, as in H12 and Figure 3 above, we show model predictions in the plane of two observable quantities, with $N({\rm OH^+})/N({\rm H_2O^+})$ plotted on the horizontal axis and $N({\rm OH^+})/N({\rm H})$ on the vertical axis. Once again, overplotted contours show the predictions of our diffuse cloud model, with contours of visual extinction shown in red, and contours of CRIR shown in blue. Blue contours are labeled with $\zeta_p({\rm H})/n_{50}$ in units of $10^{-16}\,\rm s^{-1}$, and red contours with $A_{\rm V}({\rm tot})$ in mag.
All the model predictions apply to a UV radiation field with $\chi_{\rm UV}/n_{50} = 1.$ Figure 8, like Figure 4, shows the corresponding predictions obtained from an analytic treatment, in this case that of Neufeld et al.\ (2010) and I15 (their equations 12 and 15). Here, several simplifying assumptions were made: (1) a constant fraction, $\epsilon=0.07$, of H ionizations leads to OH$^+$, with the value of $\epsilon$ ``calibrated'' by observations of a single source toward which H$_3^+$, OH$^+$, and $\rm H_2O^+$ are all observed (Indriolo et al.\ 2012); (2) $\rm H_2O^+$ is formed exclusively by reaction of OH$^+$ with H$_2$; (3) OH$^+$ and H$_2$O$^+$ are destroyed exclusively by dissociative recombination and reaction with H$_2$; (4) the electron abundance is equal to the gas-phase carbon abundance; and (5) the H$_2$ fraction in a given cloud (the quantity shown by the red contours in Figure 8) is constant throughout the zone in which OH$^+$ and $\rm H_2O^+$ are present. A comparison of Figures 7 and 8 indicates that the simple analytic treatment adopted in I15 significantly overestimates the $N({\rm OH^+})/N({\rm H})$ ratio when the CRIR is large. This behavior results from a breakdown of assumption (4) above; for large CRIRs, H ionization contributes significantly to the electron abundance, thereby increasing the OH$^+$ destruction rate. Finally, in Figure 9, we have transformed the coordinate system so that contours of observable quantities [$N({\rm OH^+})/N({\rm H_2O^+})$ and $N({\rm OH^+})/N({\rm H})$] are plotted in the plane of model parameters [$\zeta_p({\rm H})/n_{50}$ and $A_{\rm V}({\rm tot})$]. \begin{figure} \includegraphics[width=14 cm]{fig7.ps} \caption{ ${N({\rm OH^+}) / N({\rm H_2O^+})}$ and ${N({\rm OH^+}) / N({\rm H})}$ column density ratios predicted for diffuse and translucent molecular clouds with $\chi_{\rm UV}/n_{50} = 1$, where $\chi_{\rm UV}$ is the incident radiation field in Draine (1978) units and $n_{\rm H} = 50\,n_{50}\,\rm cm^{-3}$ is the density of H nuclei.
Results are shown in the plane of ${N({\rm OH^+}) / N({\rm H_2O^+})}$ and ${N({\rm OH^+}) / N({\rm H})}$, with contours of visual extinction, $A_{\rm V}({\rm tot})$, shown in red and contours of $\zeta_p({\rm H})/n_{50}$ shown in blue (where $\zeta_p({\rm H}) \sim \zeta_t({\rm H}_2)/2.3$ is the primary cosmic-ray ionization rate per H nucleus and $\zeta_t({\rm H}_2)$ is the total cosmic-ray ionization rate per H$_2$ molecule.) } \end{figure} \begin{figure} \includegraphics[width=14 cm]{fig8.ps} \caption{ Same as Figure 7, but with the column density ratio computed using the simple analytic approximations adopted by I15, and with the red contours being contours of molecular fraction. } \end{figure} \begin{figure} \includegraphics[width=14 cm]{fig9.ps} \caption{ Same as Figure 7, but now with contours of ${N({\rm OH^+}) / N({\rm H})}$ and ${N({\rm OH^+}) / N({\rm H_2O^+})}$ in the plane of the model parameters $A_{\rm V}({\rm tot})$ and $\zeta_p({\rm H})/n_{50}$. Blue contours are labeled with ${\rm log}_{10}[N({\rm OH^+}) / N({\rm H_2O^+})]$, and red contours are labeled with ${\rm log}_{10}[N({\rm OH^+}) / N({\rm H})]$.} \end{figure} For the diffuse atomic ISM, I15 have presented observations of $N({\rm OH}^+)$ and $N({\rm H_2O^+})$ along 20 Galactic sight-lines toward background sources of bright submillimeter continuum emission. Combined with HI 21 cm observations obtained by Winkel et al.\ (2017), these observations permit reliable absorption-line determinations of $N({\rm OH^+})/N({\rm H_2O^+})$ and $N({\rm OH^+})/N({\rm H})$ in 37 distinct velocity intervals arising in foreground diffuse atomic gas within the Galactic disk. For 15 of these velocity intervals, observations of ArH$^+$ absorption are also available (S14). 
The measured values of $N({\rm OH^+})/N({\rm H_2O^+})$ and $N({\rm OH^+})/N({\rm H})$ are represented by diamonds in Figures 7 -- 8, along with their 1 $\sigma$ error bars, and the corresponding cloud parameters are shown in Figure 9, along with their 68\% confidence intervals. One simplifying assumption adopted here is that the $N({\rm OH}^+)$ and $N({\rm H_2O^+})$ absorptions originate in the same gas as the HI 21 cm absorption. However, as shown in Paper I, an analysis of the $\rm OH^+$, $\rm H_2O^+$ and $\rm ArH^+$ column densities shows that a single population of clouds cannot account simultaneously for the observations. Instead, the measured column densities require at least two distinct populations of diffuse atomic clouds: (1) a population of smaller clouds, which are primarily responsible for the observed ArH$^+$ absorption, with a total visual extinction of at most 0.02 mag per cloud and a column-averaged molecular fraction in the range $10^{-5}$ to 10$^{-2}$; and (2) a population of somewhat larger clouds, primarily responsible for the observed OH$^+$ and H$_2$O$^+$ absorption, in which the column-averaged molecular fraction is $\sim 0.2$. Because part of the observed 21 cm absorption originates in population (1) above, the $N({\rm OH^+})/N({\rm H})$ ratio in population (2) can be larger than the measured ratio. This effect will be discussed further in \S 4.1 below. \section{Discussion} \subsection{Comparison with previous estimates of the CRIR in the diffuse ISM} \subsubsection{Estimates obtained from observations of molecular ions} In Figure 10, we present a comparison of the CRIRs derived in the present study with previous estimates obtained by IM12 and I15. The top row in Figure 10 shows the CRIRs, $\zeta_p({\rm H})/n_{50}$, derived for diffuse atomic gas in which the column densities of $\rm OH^+$, $\rm H_2O^+$, $\rm ArH^+$ and H have all been measured. 
Here, blue diamonds represent the values obtained previously using a simple analytic treatment of the chemistry, whereas red and magenta symbols indicate the results obtained using the detailed diffuse cloud models. The CRIRs indicated by the red diamonds were computed without any correction for the presence of small ArH$^+$-containing clouds that may contribute significantly to the HI column density but not the OH$^+$ and H$_2$O$^+$ column densities. The results shown by the magenta diamonds apply the necessary correction, using the methodology described in Paper I. Here, a simultaneous fit to the $N({\rm OH}^+)/N({\rm H})$, $N({\rm H_2O}^+)/N({\rm H})$, and $N({\rm ArH^+})/N({\rm H})$ ratios was obtained for a combination of the smaller and larger cloud types described in \S 3.2 above. In this analysis, we assumed the standard UV radiation field, $\chi_{\rm UV}/n_{50}=1$, and varied four free parameters: $\zeta_p({\rm H})/n_{50}$ (assumed to be the same in both cloud types); the fraction of atomic H in the population of smaller clouds, $f_{\rm S}$; the total visual extinction across an individual small cloud, $A_{\rm V}({\rm tot})_{\rm S}$; and the total visual extinction across an individual large cloud, $A_{\rm V}({\rm tot})_{\rm L}$. Because there are only three observables -- $N({\rm OH}^+)/N({\rm H})$, $N({\rm H_2O}^+)/N({\rm H})$, and $N({\rm ArH^+})/N({\rm H})$ -- the problem is underconstrained and thus there is a range of CRIRs that can satisfactorily match the data for any given velocity interval. This range is reflected in the error bars on the magenta diamonds. In deriving the range of acceptable values for the CRIR, we apply the additional constraint that $f_{\rm S} \le 0.5,$ i.e. that the population of larger clouds contains at least one-half of the observed HI. 
This constraint is motivated by the observational fact that the HI absorption spectra are typically more similar to the OH$^+$ and $\rm H_2O^+$ spectra than they are to the ArH$^+$ spectra (Neufeld et al.\ 2015). The labels underneath the plotted points indicate the background source for each CRIR determination and the velocity interval to which the determination applies (in km~s$^{-1}$ with respect to the local standard of rest). \begin{figure} \includegraphics[width=10 cm]{fig10.ps} \caption{Comparison of the CRIRs derived in the present study with previous estimates obtained by IM12 and I15. The labels underneath the plotted points indicate the background source for each CRIR determination and the velocity interval to which the determination applies. Top row: $\zeta_p({\rm H})/n_{50}$, derived for diffuse atomic gas in which the column densities of $\rm OH^+$, $\rm H_2O^+$, $\rm ArH^+$ and H have all been measured. Middle row: $\zeta_p({\rm H})/n_{50}$, derived for diffuse atomic gas in which ArH$^+$ observations are not available. Bottom row: CRIRs derived for diffuse molecular gas from observations of H$_3^+$. Bottom left: clouds in which H$_2$ is measured directly. Bottom right: clouds in which H$_2$ is only measured indirectly. Blue diamonds: values obtained previously using a simple analytic treatment of the chemistry (IM12 or I15). Red diamonds: values obtained using detailed diffuse cloud models, but without any correction for HI in the small diffuse atomic clouds responsible for the observed ArH$^+$ absorption. Magenta diamonds: values obtained using detailed diffuse cloud models with the inclusion of the aforementioned correction.} \end{figure} The middle row of Figure 10 shows the CRIRs, $\zeta_p({\rm H})/n_{50}$, derived for diffuse atomic gas in which ArH$^+$ observations are not available. Here, only the blue and red diamonds can be presented. 
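The correction for HI residing in the small ArH$^+$-bearing clouds amounts to a simple dilution argument: if a fraction $f_{\rm S}$ of the observed HI contributes negligible OH$^+$, the $N({\rm OH^+})/N({\rm H})$ ratio intrinsic to the larger clouds exceeds the sightline average by $1/(1-f_{\rm S})$. A minimal sketch of this dilution (illustrative numbers only; the actual analysis is the four-parameter simultaneous fit described above):

```python
# Effect of HI in small ArH+-bearing clouds on the inferred N(OH+)/N(H):
# if a fraction f_S of the observed HI resides in small clouds that contain
# negligible OH+, the ratio intrinsic to the larger clouds exceeds the
# measured sightline average by 1/(1 - f_S).  Illustrative values only.

def corrected_ratio(ratio_obs, f_S):
    """Return the N(OH+)/N(H) ratio of the larger-cloud population."""
    if not 0.0 <= f_S <= 0.5:        # constraint adopted in the text
        raise ValueError("f_S must lie in [0, 0.5]")
    return ratio_obs / (1.0 - f_S)

ratio_obs = 1.0e-8                   # hypothetical measured N(OH+)/N(H)
for f_S in (0.0, 0.25, 0.5):
    print(f_S, corrected_ratio(ratio_obs, f_S))
```

At the maximum allowed $f_{\rm S}=0.5$ the correction reaches a factor of 2 (0.3 dex); in this simplified picture, an average correction factor of 1.4 would correspond to $f_{\rm S} \approx 0.3$.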
Finally, the bottom row of Figure 10 shows the CRIRs derived for diffuse molecular gas from observations of H$_3^+$. The nine determinations on the left are the most reliable, because they apply to clouds in which H$_2$ is measured directly, while the remaining determinations -- with correspondingly larger uncertainties -- apply to clouds where only indirect H$_2$ measurements are available. For the bottom row of Figure 10, we have used the density estimates adopted by I15 to present values for $\zeta_p({\rm H})$ itself, rather than $\zeta_p({\rm H})/n_{\rm H}$. For each set of CRIR determinations plotted in Figure 10, the mean values are indicated by dotted horizontal lines with the same color-coding as the diamonds. For diffuse atomic clouds (top and middle rows), our detailed cloud models yield CRIR estimates that are systematically larger than those obtained with the simplifying assumptions used in previous studies, by an average factor of 1.7 (0.23 dex). Applying a correction for HI in small diffuse clouds, which can be implemented in gas where ArH$^+$ is observed (top row), increases our estimates of the CRIR by a further factor of 1.4 (0.15 dex). For diffuse molecular clouds probed by H$_3^+$ and direct measurements of H$_2$ (bottom left), the CRIRs derived from the detailed cloud models are in excellent agreement with the simple analytical treatment, with the average discrepancy being only $9\%$ (0.04 dex). For those diffuse molecular clouds without direct measurements of H$_2$, however, the CRIRs derived from the detailed models are, on average, a factor 3.8 (0.58 dex) lower than those derived using the simple analytic estimates. 
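The factor/dex pairs quoted above are straightforward to verify, since a difference of $d$ dex corresponds to a multiplicative factor of $10^d$:

```python
# Each (factor, dex) pair quoted in the text: factor ~= 10**dex.
pairs = [(1.7, 0.23), (1.4, 0.15), (1.09, 0.04), (3.8, 0.58)]
for factor, dex in pairs:
    print(f"10^{dex} = {10**dex:.2f}  (quoted factor {factor})")
```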
For these clouds, which are of larger $A_{\rm V}({\rm tot})$ than any cloud in which H$_2$ can be measured directly by UV absorption line observations, the electron abundance falls below the gas-phase carbon abundance; as a result, the H$_3^+$ destruction rate is overestimated in the simple analytic treatment of IM12, leading to an overestimate of the CRIR required to fit a given value of $N({\rm H}_3^+).$ \subsubsection{Estimates obtained from radio recombination line observations of atomic ions} As originally discussed by Shaver (1976) and Sorochenko \& Smirnov (1987; hereafter SS87), radio recombination lines (RRLs) from atomic ions provide an alternate probe of the CRIR in the diffuse neutral ISM. Here, H$^+$ is produced by cosmic-ray ionization, whereas C$^+$ is produced by photoionization. Thus the strength of hydrogen radio recombination lines (HRRLs) relative to that of carbon radio recombination lines (CRRLs) is an increasing function of $\zeta_p({\rm H})/n_{\rm H}$. To date, the sight-line to the Cas A supernova remnant is the best-studied case in which RRLs have been observed from the diffuse ISM. Oonk et al.\ (2017; hereafter O17) have recently reported new measurements of RRL strengths for this sight-line, derived from high-quality interferometric data obtained from the Low Frequency Array (LOFAR) and the Westerbork Synthesis Radio Telescope (WSRT). These included WSRT detections of H$n\alpha$ line emission -- with principal quantum number $n$ in the range 257 to 278 -- from a cold cloud located in the Perseus arm at a velocity of $-47\,\rm km \,s^{-1}$ relative to the Local Standard of Rest (LSR). 
With the use of a new model (Salgado et al.\ 2016) for the level populations of Rydberg states of C, together with an analysis of the observed line widths, O17 derived a CRRL emission measure for the $-47\,\rm km \,s^{-1}$ component of $$EM_{\rm C} = \int n({\rm C}^+) n_e dz = 0.056 \pm 0.014 \rm \, cm^{-6} \rm pc , \eqno(6)$$ an electron temperature of $85 \pm 5$~K, and an electron density of $0.040 \pm 0.005 \rm \,cm^{-3}$. O17's preferred model for the $-47\,\rm km \,s^{-1}$ component is a diffuse molecular cloud with a sheet-like geometry, observed at an oblique inclination, and with a density $n_{\rm H} \sim 2.9 \times 10^2 \rm \,cm^{-3}$ and line-of-sight column density $N_{\rm H} \sim 3 \times 10^{22} \rm \,cm^{-2}$. \begin{figure} \includegraphics[width=15 cm]{fig11.ps} \caption{Ratio of emission measures, $EM_{\rm H}/EM_{\rm C}$, predicted by our diffuse cloud models as a function of the CRIR. Results are shown for five values of the UV radiation field, $\chi_{\rm UV} /n_{250}$ = 0.25, 0.5, 1, 1.5, and 2.5; and for two values of the cloud extinction, $A_V({\rm tot})$ = 1 mag (dashed curves) and 5 mag (solid curves). Green curve: $EM_{\rm H}/EM_{\rm C}$ predictions obtained using the simple SS87 analysis (see text).} \end{figure} Given these values for electron temperature and density, along with the observed strengths of the HRRLs detected with WSRT, O17 derived an HRRL emission measure, $EM_{\rm H} = 0.0036 \rm \,cm^{-6}\,\rm pc$, implying $EM_{\rm H}/EM_{\rm C}=0.064.$ Using a simple analysis due to SS87, in which radiative recombination is assumed to dominate the destruction of H$^+$ and HI is assumed to be the dominant reservoir of H nuclei, O17 found that a total CRIR of $\zeta_t({\rm H}) = 2.5 \times 10^{-18}\rm \,s^{-1}$ was needed to match the observed $EM_{\rm H}/EM_{\rm C}$ ratio, a CRIR value much smaller than those derived from observations of molecular ions. 
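An SS87-style inversion of this kind can be sketched in a few lines. Under the stated assumptions (radiative recombination dominates H$^+$ destruction, HI dominates the H-nucleus budget, and $n_e = n({\rm C^+})$), ionization balance $\zeta_t({\rm H})\,n({\rm HI}) = \alpha_B(T)\,n({\rm H^+})\,n_e$ gives $\zeta_t({\rm H}) = (EM_{\rm H}/EM_{\rm C})\,x_{\rm C}\,\alpha_B(T)\,n_e$. The gas-phase carbon abundance $x_{\rm C}$ and the power-law fit to the case-B recombination coefficient used below are assumptions of this sketch, not values taken from O17:

```python
# SS87-style estimate of the CRIR from the HRRL/CRRL emission-measure ratio.
# zeta_t = (EM_H/EM_C) * x_C * alpha_B(T) * n_e, assuming n_e = n(C+) and
# n(HI) ~ n_H.  x_C and the alpha_B fit are assumptions of this sketch.

def alpha_B(T):
    """Approximate case-B radiative recombination coefficient (cm^3 s^-1)."""
    return 2.59e-13 * (T / 1e4) ** -0.7

em_ratio = 0.064   # EM_H/EM_C measured by O17
T = 85.0           # electron temperature (K), from O17
n_e = 0.040        # electron density (cm^-3), from O17
x_C = 1.6e-4       # assumed gas-phase carbon abundance

zeta_t = em_ratio * x_C * alpha_B(T) * n_e
print(f"zeta_t(H) ~ {zeta_t:.1e} s^-1")   # roughly 3e-18 s^-1
```

The result lands within about 20\% of O17's $2.5 \times 10^{-18}\,\rm s^{-1}$, with the residual difference attributable to the assumed $\alpha_B$ fit and carbon abundance.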
O17 noted, however, that a much larger CRIR could be required (e.g.\ Liszt 2003) if the neutralization of H$^+$ by small grains enhances the destruction of H$^+$; the above value is therefore a strict lower limit. In Figure 11, we plot the values of $EM_{\rm H}/EM_{\rm C}$ predicted by our diffuse cloud model as a function of the CRIR. Results are shown for five values of the UV radiation field, $\chi_{\rm UV} /n_{250}$ = 0.25, 0.5, 1.0, 1.5, and 2.5; and for two values of the cloud extinction, $A_V({\rm tot})$ = 1 mag (dashed) and 5 mag (solid). The $EM_{\rm H}/EM_{\rm C}$ ratio is an increasing function of $\chi_{\rm UV} /n_{250}$, because larger UV radiation fields enlarge the region within which H is primarily atomic. Moreover, $EM_{\rm H}/EM_{\rm C}$ is roughly independent of $A_V({\rm tot})$ for values typical of diffuse molecular clouds, because both the CRRL and HRRL emissions occur relatively close to cloud surfaces. For comparison, the green line shows the much larger $EM_{\rm H}/EM_{\rm C}$ values predicted using the simple SS87 analysis. In addition to the destruction of H$^+$ in reactions with neutral or negatively-charged PAHs, two further effects reduce the $EM_{\rm H}/EM_{\rm C}$ ratio below the SS87 predictions. First -- as discussed, for example, by Sorochenko \& Smirnov (2010) -- the H$_2$ fraction becomes significant within the CRRL-emitting region, so that the atomic hydrogen abundance and HRRL line emission are diminished accordingly. Second, charge transfer reactions of atomic oxygen dominate over radiative recombination as a destruction process for H$^+$, so that even in the absence of PAH-assisted recombination the H$^+$ abundance would be reduced sharply. The horizontal dashed line in Figure 11 shows the $EM_{\rm H}/EM_{\rm C}$ ratio obtained by O17 from a simultaneous fit to the CRRL and HRRL strengths measured for the $-47\,\rm km \,s^{-1}$ cloud using WSRT. 
With inclusion of the various factors that reduce the $EM_{\rm H}/EM_{\rm C}$ ratio far below the predictions of SS87, and given the density estimate of O17, our best-fit CRIR is $\zeta_p({\rm H}) = 2.9 \times 10^{-16}\rm \,s^{-1}$, for an assumed $\chi_{\rm UV}$ of 1. This value is roughly two orders of magnitude larger than the lower limit obtained by O17, $\zeta_t({\rm H}) = 1.5\, \zeta_p({\rm H}) = 2.5 \times 10^{-18}\rm \,s^{-1}$, and is very typical of the estimates derived from molecular abundances. One caveat applies to the discussion given above. In addition to detecting Hn$\alpha$ RRLs in the range $n=257$ to 278 using WSRT, O17 also obtained upper limits on the strengths of lower frequency H$n\alpha$ RRLs (with $n \sim 500$) using LOFAR. To reconcile the HRRL detections obtained with WSRT with these upper limits from LOFAR, O17 required a somewhat larger assumed electron density ($\sim 0.065-0.11\,\rm cm^{-3}$) and somewhat lower assumed gas temperature ($\sim 30 - 50$~K) than those derived from their analysis of the CRRL. Properly investigating this discrepancy will require calculations that are beyond the scope of the present study. Instead of simply computing the relative emission measures for C$^+$ and H$^+$, future models will need to integrate the individual RRL strengths over the cloud, taking account of the varying temperature and electron density to obtain predictions for the H$n\alpha$ and C$n\alpha$ line strengths as a function of $n$. \subsection{Mean and dispersion of the CRIR} With the aid of the detailed diffuse cloud models described in this paper, we obtain the estimates of the CRIR given in Table 2. Results are given here for four subsets of the data. 
From left to right, these are (1) diffuse molecular clouds in which $\rm H_3^+$ and H$_2$ are measured directly and gas density estimates are available from observations of C$_2$; (2) diffuse atomic gas in which OH$^+$, H$_2$O$^+$ and HI have been measured but not ArH$^+$; (3) diffuse atomic gas in which OH$^+$, H$_2$O$^+$, ArH$^+$ and HI have all been measured; (4) all diffuse atomic gas [i.e. the union of subsets (2) and (3)]. In obtaining our estimate of the average CRIR in subset (4), we have applied the mean correction for HI in small clouds obtained in subset (3) to subset (2) in which ArH$^+$ measurements are unavailable. For each of these subsets, we list the sample size, the mean of ${\rm log}_{10} \zeta_p({\rm H})$ or ${\rm log}_{10}[\zeta_p({\rm H})/n_{50}]$ and its standard error, the corresponding values of $\zeta_p({\rm H})$ or $\zeta_p({\rm H})/n_{50}$, and the dispersion, $\sigma_{\rm BE}$, of the best estimates of ${\rm log}_{10} \zeta_p({\rm H})$ or ${\rm log}_{10}[\zeta_p({\rm H})/n_{50}]$ plotted in Figure 10. Because our estimates of these quantities have known uncertainties, resulting from uncertainties in the column-density measurements upon which they are based, $\sigma_{\rm BE}$ is an upper limit on the true dispersion of ${\rm log}_{10} \zeta_p({\rm H})$ or ${\rm log}_{10}[\zeta_p({\rm H})/n_{50}]$. On the assumption that the errors in these quantities are Gaussian, and that the actual distribution of $\zeta_p({\rm H})$ or ${\rm log}_{10}[\zeta_p({\rm H})/n_{50}]$ is log normal, we may estimate the true dispersion, $\sigma_{\rm T}$, of either quantity from the equation $$\chi^2_{red} = {1 \over N-1}\sum_i \bigl[ (x_i - x_m)^2/(\sigma_{\rm T}^2 + \sigma_{i}^2)\bigr] = 1, \eqno(5)$$ where $N$ is the number of objects in the sample, $x_i$ is the best estimate of ${\rm log}_{10} \zeta_p({\rm H})$ or ${\rm log}_{10}[\zeta_p({\rm H})/n_{50}]$ for the {\it i}th object, $\sigma_i$ is the uncertainty in $x_i$, and $x_m$ is the mean of the $x_i$. 
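Equation (5) defines the true dispersion $\sigma_{\rm T}$ only implicitly; since $\chi^2_{red}$ decreases monotonically with $\sigma_{\rm T}$, it can be solved by bisection. A minimal sketch with made-up sample values (not the actual data of Table 2):

```python
def true_dispersion(x, sigma):
    """Solve chi^2_red(sigma_T) = 1 (equation 5) for sigma_T by bisection.
    x are best estimates of log10[zeta_p(H)] (or log10[zeta_p(H)/n50]);
    sigma are their 1-sigma measurement uncertainties (dex)."""
    n = len(x)
    xm = sum(x) / n

    def chi2_red(st):
        return sum((xi - xm) ** 2 / (st ** 2 + si ** 2)
                   for xi, si in zip(x, sigma)) / (n - 1)

    if chi2_red(0.0) <= 1.0:      # scatter explained by measurement error alone
        return 0.0
    lo, hi = 0.0, 10.0            # bracket sigma_T (dex)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if chi2_red(mid) > 1.0 else (lo, mid)
    return 0.5 * (lo + hi)

# Illustrative values only: excess dispersion is 0.15 dex for this example.
print(true_dispersion([-15.50, -15.30, -15.10, -15.40, -15.20], [0.05] * 5))
```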
Values of $\sigma_{\rm T}$ are given in Table 2. Entries in boldface represent the key results obtained from the present study. For diffuse molecular gas probed by H$_3^+$, we obtain $-15.63 \pm 0.09$ (standard error) for the mean of ${\rm log}_{10} \zeta_p({\rm H})$. This value is a factor 1.2 times as large as the previous estimate presented by IM12. We estimate the true dispersion of ${\rm log}_{10} \zeta_p({\rm H})$ as 0.09. For diffuse atomic clouds probed by OH$^+$, $\rm H_2O^+$ and ArH$^+$, we obtain $-15.34 \pm 0.05$ (standard error) for ${\rm log}_{10}[\zeta_p({\rm H})/n_{50}]$. This value is a factor 2.6 times as large as the previous estimate presented by I15. We estimate the true dispersion of ${\rm log}_{10}[\zeta_p({\rm H})/n_{50}]$ as 0.23. One striking feature of our CRIR estimates is their remarkably low dispersions. In the case of the diffuse molecular clouds, the value of 0.09 for $\sigma_{\rm T}$ corresponds to a cloud-to-cloud variation of only $\sim 25\%$. Moreover, our analysis did not include uncertainties in the density estimates derived from C$_2$ observations, so the value of 0.09 is really an upper limit. In the case of the diffuse atomic clouds, the value of 0.23 for $\sigma_{\rm T}$ corresponds to a variation by only a factor of 1.7. In this case, the dispersion of ${\rm log}_{10}[\zeta_p({\rm H})/n_{50}]$ includes both intrinsic variations in the CRIR and intrinsic variations in the density. Here again, 0.23 dex is a strict upper limit on any variations in the CRIR. One important caveat should be noted in the case of diffuse molecular clouds: the mean and dispersion of the CRIR given above apply specifically to a sample of stars towards which H$_3^+$ (and H$_2$) {\it have been detected}. The set of sight-lines discussed by IM12 also includes multiple cases with H$_3^+$ {\it non-detections}, and in some of these the upper limits inferred by IM12 for the CRIR are significantly smaller than the average value. 
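As a cross-check on the linear values quoted in Table 2, the logarithmic means and standard errors convert directly; the symmetrized error bar used here is an approximation to the asymmetric one:

```python
def lin_from_log(mean_log, se_log, unit=1e-16):
    """Convert a mean of log10(zeta) +/- standard error into a linear value
    (in units of `unit` s^-1) with a symmetrized error bar."""
    val = 10 ** mean_log / unit
    hi = 10 ** (mean_log + se_log) / unit
    lo = 10 ** (mean_log - se_log) / unit
    return val, 0.5 * (hi - lo)

# Diffuse molecular clouds (H3+): <log10 zeta_p(H)> = -15.63 +/- 0.09
v, e = lin_from_log(-15.63, 0.09)
print(f"{v:.1f} +/- {e:.1f}")   # ~2.3 +/- 0.5 (Table 2 quotes 2.3 +/- 0.6)

# Diffuse atomic clouds (all): <log10[zeta_p(H)/n50]> = -15.34 +/- 0.05
v, e = lin_from_log(-15.34, 0.05)
print(f"{v:.1f} +/- {e:.1f}")   # ~4.6 +/- 0.5, as in Table 2
```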
In particular, IM12 noted that the nearby Ophiuchus-Scorpius region appears to exhibit an abnormally low CRIR. \begin{deluxetable}{lcccc} \tabletypesize{\scriptsize} \tablewidth{0pt} \tablecaption{Estimates of the CRIR: mean values and dispersions} \tablehead{Method & H$_3^+$ & OH$^+$ and $\rm H_2O^+$ & OH$^+$ and $\rm H_2O^+$ & OH$^+$ and $\rm H_2O^+$ \\ & & without ArH$^+$ & with ArH$^+$ & (all)} \startdata Sample size & 7 & 22 & 15 & 37\\ \\ $< {\rm log}_{10} \zeta_p({\rm H}) > $ (Present work) & $\bf -15.63 \pm 0.09$ \\ $< {\rm log}_{10} \zeta_p({\rm H}) > $ (IM12) & $-15.73$ \\ $10^{< {\rm log}_{10} \zeta_p({\rm H}) >} /10^{-16}\,\rm s^{-1}$ (Present work) & $2.3 \pm 0.6$ \\ $10^{< {\rm log}_{10} \zeta_p({\rm H}) >} /10^{-16}\,\rm s^{-1}$ (IM12) & 1.9 \\ $\sigma_{\rm BE}[{\rm log}_{10} \zeta_p({\rm H})]$ (Present work) & 0.23 \\ $\sigma_{\rm BE}[{\rm log}_{10} \zeta_p({\rm H})]$ (IM12) & 0.24 \\ $\sigma_{\rm T}[{\rm log}_{10} \zeta_p({\rm H})]$ (Present work) & {\bf 0.09} \\ \\ $< {\rm log}_{10}[\zeta_p({\rm H})/n_{50}] > $ (Present work) && $-15.54 \pm 0.07$ & $-15.25 \pm 0.07$ & $\bf -15.34 \pm 0.05$ \\ $< {\rm log}_{10}[\zeta_p({\rm H})/n_{50}] > $ (I15) &&&& $-15.77$\\ $10^{< {\rm log}_{10} [\zeta_p({\rm H})/n_{50}] >}/10^{-16}\,\rm s^{-1} $ (Present work) && $2.9 \pm 0.5$ & $5.6 \pm 0.9$ & $\bf 4.6 \pm 0.5$ \\ $10^{< {\rm log}_{10} [\zeta_p({\rm H})/n_{50}] >} /10^{-16}\,\rm s^{-1} $ (I15) &&&& 1.8 \\ $\sigma_{\rm BE}[{\rm log}_{10}[\zeta_p({\rm H})/n_{50}]]$ (Present work) && 0.31 & 0.25 & 0.28 \\ $\sigma_{\rm T}[{\rm log}_{10}[\zeta_p({\rm H})/n_{50}]]$ (Present work) && 0.26 & {\bf 0.23} \\ \\ Notes & (a) & (b) & (c) & (d) \\ \enddata \tablenotetext{a}{Includes only sight-lines for which H$_2$ has been measured directly and H$_3^+$ has been detected, and assumes that $\chi_{UV}/n_{250} = 1$} \tablenotetext{b}{Assumes that 100 percent of the observed HI is present within the larger clouds responsible for OH$^+$ and $\rm H_2O^+$, 
\textcolor[rgb]{0,0,0}{and that $\chi_{UV}/n_{50} = 1$}} \tablenotetext{c}{Assumes that up to 50 percent of observed HI may be present within the smaller clouds responsible for ArH$^+$, \textcolor[rgb]{0,0,0}{and that $\chi_{UV}/n_{50} = 1$}} \tablenotetext{d}{In cases where ArH$^+$ is not observed, a correction for HI in smaller clouds is applied, using the average correction factor obtained when $N({\rm ArH}^+)$ is measured. A $\chi_{UV}/n_{50}$ value of 1 is assumed.} \end{deluxetable} \begin{figure} \includegraphics[width=15 cm]{fig12.ps} \caption{Dependence of the mean derived CRIRs upon the assumed values of $\chi_{\rm UV}/n_{\rm 50}$ for $Z=Z_{\rm std}$ (solid curves) and $Z=2\,Z_{\rm std}$ (dashed curves). Blue curves: mean CRIRs derived for diffuse molecular clouds (i.e.\ from H$_3^+$). Red curves: mean CRIRs derived for diffuse atomic clouds} \end{figure} \subsection{Dependence of our CRIR estimates on the assumed UV radiation field and gas metallicity} All the results presented in Figures 1 -- 10 were obtained under the assumptions that $\chi_{\rm UV}/n_{\rm 50} =1$ in diffuse atomic clouds, $\chi_{\rm UV}/n_{\rm 50} =0.2$ in diffuse molecular clouds, and that the abundances of the heavy elements have the standard values adopted for the gas-phase in diffuse interstellar material in the solar neighborhood, $Z=Z_{\rm std}$. To examine the dependence of the mean CRIRs we derive (Table 2) upon these assumptions, we have repeated the entire analysis described in \S4.2 for a range of assumed $\chi_{\rm UV}/n_{\rm 50}$ and for two assumed metallicities, $Z=Z_{\rm std}$ and $Z=2\,Z_{\rm std}$. In Figure 12, we show the dependence of the mean derived CRIRs upon the assumed values of $\chi_{\rm UV}/n_{\rm 50}$ for $Z=Z_{\rm std}$ (solid curves) and $Z=2\,Z_{\rm std}$ (dashed curves). 
The blue curves show the mean CRIRs derived for diffuse molecular clouds (i.e.\ from H$_3^+$), and the red curves show those derived for diffuse atomic clouds (i.e.\ from OH$^+$, $\rm H_2O^+$, and ArH$^+$). The former show that the CRIRs derived from H$_3^+$ observations are roughly proportional to $Z/Z_{\rm std}$; this behavior results because the abundance of electrons, which are primarily responsible for the destruction of H$_3^+$ given the typical CRIRs in the Galactic disk, is roughly equal to the gas-phase carbon abundance. The red curves show that the CRIRs derived from observations of OH$^+$, $\rm H_2O^+$, and ArH$^+$ are almost independent of $Z$ for the two cases we examined; here, the increased abundance of electrons is roughly balanced by the increased abundances of gas-phase O and Ar. The dependences of the mean derived CRIRs on the assumed value of $\chi_{\rm UV}/n_{\rm 50}$ reflect a complex interplay of factors: in the case of the mean CRIR in diffuse molecular clouds, the final dependence is quite weak, whereas the mean CRIR in diffuse atomic clouds decreases roughly as $(\chi_{\rm UV}/n_{\rm 50})^{-0.7}$ for the typical values in the Galactic disk. Also shown on Figure 12 are the mean values presented in Table 2, which correspond to $\chi_{\rm UV}/n_{\rm 50}=0.2$ -- or equivalently $\chi_{\rm UV}/n_{\rm 250}=1$ -- in diffuse molecular clouds, and $\chi_{\rm UV}/n_{\rm 50}=1$ in diffuse atomic clouds. \subsection{Sensitivity to the assumed reaction rates} The uncertainty estimates presented in this paper are statistical in nature and do not include systematic uncertainties inherent in the diffuse cloud models. Because the model is based upon a large number of assumptions about fundamental physical and chemical processes -- including the rate coefficients for a large number of chemical reactions -- a quantitative analysis of the systematic uncertainties is impractical. 
We can, however, identify several key reactions with rate coefficients that are important in determining what CRIR is needed to match the available data. These are listed in Table 3, along with the rate coefficients we adopted and the primary bibliographic references of relevance. In all cases, the values adopted are the same as those in H12.\footnote{We note here a typographic error in H12 Table 1, where the temperature dependence for reaction $k_7$ has a sign error in the exponent. (This error affected only Table 1, because the correct exponent was used in all the calculations performed by H12.)} Based upon the analytic treatment of $\rm OH^+$ and $\rm H_2O^+$ given by H12 (their Appendix B), the CRIR is expected to show the following dependences: \begin{deluxetable}{lll} \tablewidth{0pt} \tablecaption{Key reaction rates} \tablehead{Reaction & Adopted rate coefficient ($\rm cm^3 s^{-1}$) & Primary reference} \startdata $\rm H_3^+ + e \rightarrow products$ & $k_1=6.8 \times 10^{-8} (T/{\rm 300\, K})^{-0.5}$ & McCall et al.\ (2003)\\ $\rm H^+ + PAH \rightarrow H + PAH^+$ & $k_2=3.5 \times 10^{-8} $ & Draine \& Sutin (1987), H12 \\ $\rm O(^3P_2) + H^+ \rightarrow O^+ + H$ & $k_3$~(Note a)& Stancil et al.\ (1999) \\ $\rm O^+ + H \rightarrow O + H^+$ & $k_4=5.7 \times 10^{-10} (T/{\rm 300\, K})^{0.36}\, e^{8.6 {\rm K}/T}$ & Stancil et al.\ (1999) \\ $\rm O^+ + H_2 \rightarrow OH^+ + H$ & $k_5=1.7 \times 10^{-9}$ & Smith et al.\ (1978) \\ $\rm OH^+ + H_2 \rightarrow H_2O^+ + H$ & $k_6=1.0 \times 10^{-9}$ & Jones et al.\ (1981)\\ $\rm OH^+ + e \rightarrow O + H$ & $k_7=3.8 \times 10^{-8} (T/{\rm 300\, K})^{-0.5}$ & Mitchell (1990) \\ \enddata \tablenotetext{a}{The rate coefficient $k_3$ is given by a more complex fitting function: see Stancil et al.\ (1999) for the expression adopted} \end{deluxetable} \noindent (1) For diffuse molecular clouds, the CRIR needed to match the observed H$_3^+$ abundances is an increasing function of $k_1$, the rate coefficient for 
dissociative recombination of H$_3^+$. The dependence is linear in clouds where C is largely ionized. \noindent (2) For diffuse atomic clouds, the CRIR needed to match the observed OH$^+$ abundances is an increasing function of $k_6$ and $k_7$, the rate coefficients for the dominant OH$^+$-destroying reactions. \noindent (3) For diffuse atomic clouds, the CRIR needed to match the observed OH$^+$ abundances and HRRL line strengths is linearly proportional to $k_2$, the rate coefficient for destruction of H$^+$ via charge transfer to PAHs, and inversely proportional to $k_5$, the rate coefficient for the production of OH$^+$ by the reaction of O$^+$ with H$_2$. \noindent (4) For diffuse atomic clouds, the CRIR needed to match the observed OH$^+$ abundances is linearly proportional to $k_4/k_3$, i.e.\ the ratio of the rate coefficients for the destruction of O$^+$ via electron transfer from H to O$^+$ and for the formation of O$^+$ via electron transfer from O to H$^+$. In local thermodynamic equilibrium, $k_4/k_3$ is determined entirely by the principle of detailed balance and is a fixed function of temperature. However, at the low densities of the interstellar clouds of present interest, atomic oxygen is almost entirely in the lowest fine structure state ($\rm ^3P_2$), with a negligible population in the excited states $\rm ^3P_1$ and $\rm ^3P_0$. This departure from LTE could significantly affect the value of $k_4/k_3$ if the channel to O($\rm ^3P_2$) is strongly disfavored in the reaction of O$^+$ with H. The most widely adopted reaction rates for charge transfer involving O and H$^+$, those computed by Stancil et al.\ (1999; adopted in our study), do not show any such effect. 
However, a subsequent theoretical study by Spirko et al.\ (2003) has suggested that the channel to O($\rm ^3P_2$) may indeed be abnormally slow at the temperatures of relevance; these authors cautioned, however, that the calculated cross-section for the reaction of O($\rm ^3P_2$) with H$^+$ is strongly dependent on the exact details of the assumed potential energy surface, and showed that minor modifications to the adopted potential could lead to large increases in that cross-section. Both the Stancil et al.\ (1999) and Spirko et al.\ (2003) studies are consistent with laboratory measurements at 300~K that do not discriminate between O fine-structure states, so a definitive resolution of the issue must await future investigations. While we have favored the Stancil et al.\ (1999) rate coefficients in our diffuse cloud models, we have investigated the effects of using those of Spirko et al.\ (2003) instead. At 100~K and in the low-density limit (i.e.\ with all O in $\rm ^3P_2$), $k_4$ is decreased by a factor 1.3 relative to that of Stancil et al.\ (1999), while $k_3$ is decreased by a factor 5.8. As a result, the value of $k_4/k_3$ is increased by a factor $\sim 4$, as is the CRIR required to match the observations. If the Spirko et al.\ cross-sections are correct, then the CRIR estimated for diffuse atomic clouds becomes a factor $\sim 4$ larger than that for diffuse molecular clouds. \subsection{Variation of the CRIR with Galactocentric radius} \begin{figure} \includegraphics[width=15 cm]{fig13.ps} \caption{Dependence of CRIR on Galactocentric radius, $R_g$, for diffuse atomic material in the Galactic disk. Magenta diamonds: values of ${\rm log}_{10}[\zeta_p({\rm H})/n_{50}]$ determined from observations of OH$^+$, H$_2$O$^+$ and ArH$^+$. Red squares: values of ${\rm log}_{10}[\zeta_p({\rm H})/n_{50}]$ determined from measurements of OH$^+$ and H$_2$O$^+$ alone. 
All values are computed for an assumed $\chi_{\rm UV}/n_{50}$ of 1.} \end{figure} Submillimeter observations of OH$^+$, $\rm H_2O^+$ and ArH$^+$ allow the CRIR to be determined for diffuse atomic material at considerably larger distances than is possible for the diffuse molecular clouds (observed with near-IR spectroscopy of H$_3^+$ and UV spectroscopy of H$_2$). As a result, we have obtained CRIR estimates for material covering a significant range of Galactocentric distances, $R_g$, from roughly 4 to 9~kpc. Following I15, we have used kinematic estimates of $R_g$ to examine the dependence of the CRIR in diffuse atomic clouds upon the Galactocentric distance. In Figure 13, we have plotted ${\rm log}_{10}[\zeta_p({\rm H})/n_{50}]$ versus $R_g$, with magenta diamonds showing CRIRs determined from measurements of OH$^+$, H$_2$O$^+$ and ArH$^+$, and red squares showing those determined from measurements of OH$^+$ and H$_2$O$^+$ alone. All the estimates for $R_g$ are those given by I15. For the red points, we adopted the mean correction factor needed to account for HI within the population of small clouds responsible for ArH$^+$ absorption. Figure 13 indicates that there is no statistically-significant dependence of the derived values of $\zeta_p({\rm H})/n_{50}$ upon $R_g$. In particular, a linear fit to the (more reliable) magenta points yields a slope, $m$, of $0.01 \pm 0.05 \,{\rm kpc}^{-1}$; for the red points, the slope is $0.07 \pm 0.04 \,{\rm kpc}^{-1}.$ An important caveat here is that all the CRIRs plotted in Figure 13 were computed under the assumptions that $\chi_{\rm UV}/n_{50} = 1$ and $Z=Z_{\rm std}$. Although the $\zeta_p({\rm H})/n_{50}$ values that we derive under those assumptions show no dependence upon $R_g$, additional considerations allow a Galactocentric dependence of $\zeta_p({\rm H})$ to be inferred as described below. 
From the sensitivity analysis described in \S 4.3 above, we know that the derived values of $\zeta_p({\rm H})/n_{50}$ are roughly proportional to $[\chi_{\rm UV}/n_{50}]^{-0.7}$ and almost independent of $Z/Z_{\rm std}$. We may therefore estimate the true Galactocentric gradient in ${\rm log}_{10}[\zeta_p({\rm H})/n_{50}]$ as $${d {\rm log}_{10} [\zeta_p({\rm H})/n_{50}] \over dR_g} = m + 0.7\, {d {\rm log}_{10} n_{50} \over dR_g} - 0.7\,{d {\rm log}_{10} \chi_{\rm UV} \over dR_g},\eqno(7)$$ or equivalently, $$ {d {\rm log}_{10} \zeta_p({\rm H}) \over dR_g} = m + 1.7\, {d {\rm log}_{10} n_{50} \over dR_g} - 0.7\,{d {\rm log}_{10} \chi_{\rm UV} \over dR_g}. \eqno(8)$$ Wolfire et al.\ (2003) have presented a comprehensive model of the neutral ISM within the Galactic disk, in which the UV radiation field has a scale length $-(d {\rm ln} \chi_{\rm UV}/dR_g)^{-1} = 4.1\, {\rm kpc}$; this then implies a value of $-0.106\,{\rm kpc}^{-1}$ for $d{\rm log}_{10} \chi_{\rm UV}/dR_g.$ In this model, the mean density in the cold neutral medium has an average Galactocentric gradient $d{\rm log}_{10} n_{\rm 50}/dR_g = -0.110 \, {\rm kpc}^{-1}$ (fit to Wolfire et al.\ 2003, Table 3, for $R_g$ in the range 3 to 8.5~kpc). Thus, with the aid of equation (8), we obtain a best estimate of the true Galactocentric gradient in the CRIR as $$ {d {\rm log}_{10} \zeta_p({\rm H}) \over dR_g} = 0.01 - 1.7 \times 0.106 + 0.7 \times 0.111 = -0.093, \eqno(9)$$ corresponding to a scale length $R_{\zeta} = -(d {\rm ln} \zeta_p({\rm H})/dR_g)^{-1} = 4.7\,{\rm kpc}.$ In the Wolfire et al.\ (2003) model, the mean density in the cold neutral medium is $n_{\rm H} = 33 \rm \, cm^{-3}$ at the solar circle ($R_g = R_0 = 8.5$~kpc).
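As a quick arithmetic check (a sketch only, not part of the analysis), the combination printed in equation (9) and the resulting scale length can be evaluated directly. Note that the gradient is negative, i.e.\ $\zeta_p({\rm H})$ decreases outward, which is exactly what the positive $4.7$~kpc scale length in equation (10) encodes:

```python
import math

# Values exactly as they appear in equation (9), in dex per kpc
m = 0.01
grad = m - 1.7 * 0.106 + 0.7 * 0.111   # d log10 zeta_p(H) / dR_g
print(grad)  # about -0.0925: the CRIR falls with Galactocentric radius

# e-folding scale length, R_zeta = -(d ln zeta_p(H)/dR_g)^(-1)
scale_kpc = -1.0 / (grad * math.log(10.0))
print(round(scale_kpc, 1))  # 4.7, matching equation (10)
```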
Combining this density estimate with the mean CRIR listed in Table 2 and the Galactocentric radius dependence discussed above, we obtain the following estimate of the mean CRIR in diffuse atomic clouds $$\zeta_p({\rm H}) = (2.2 \pm 0.3) \exp [(R_0-R_g)/4.7\,\rm{kpc}] \times 10^{-16} \rm \, s^{-1}.\eqno(10)$$ This value is entirely consistent with the mean CRIR determined from H$_3^+$ measurements in diffuse {\it molecular} clouds near the solar circle, $\zeta_p({\rm H}) = (2.3 \pm 0.6) \times 10^{-16} \rm \, s^{-1}$, and provides no evidence for any difference between the CRIR in diffuse atomic material and in diffuse molecular clouds. By contrast, there is strong evidence (e.g.\ I15) that the CRIR is smaller by an order of magnitude or more in dense molecular clouds than it is in the diffuse ISM. \section{Summary} We have obtained estimates for the cosmic-ray ionization rate (CRIR) in the Galactic disk, using a detailed model for the physics and chemistry of diffuse interstellar gas clouds to interpret previously-published measurements of the abundance of four molecular ions: ArH$^+$, OH$^+$, $\rm H_2O^+$ and $\rm H_3^+$. \noindent (1) Within diffuse atomic clouds observed along the sightlines to bright submillimeter continuum sources, measurements of ArH$^+$, OH$^+$, $\rm H_2O^+$, and H column densities imply a mean logarithm of the CRIR of $ < \log_{10}[\zeta_{\rm p}({\rm H})/n_{\rm 50}] > \,\, = -15.34 \pm 0.05$, corresponding to a CRIR of $(4.6 \pm 0.5) \times 10^{-16}\, n_{\rm 50}\,\rm s^{-1}$, where $\zeta_{\rm p}({\rm H})\, {\rm s}^{-1} \sim [\zeta_{\rm t}({\rm H})/1.5] \, {\rm s}^{-1}$ is the primary ionization rate per H atom, $\zeta_{\rm t}({\rm H})\, {\rm s}^{-1}$ is the total ionization rate per H atom, $50\,n_{\rm 50}\, {\rm cm}^{-3}$ is the density of H nuclei, and the quoted errors are standard errors on the mean. 
These CRIR estimates were obtained under the assumption that $\chi_{UV} /n_{50} = 1$, where $\chi_{UV}$ is the adopted UV radiation field in units of the mean value at the solar circle; they scale roughly as $(\chi_{UV} /n_{50})^{-0.7}$. \noindent (2) Within diffuse atomic clouds, the intrinsic dispersion of $\log_{10}[\zeta_{\rm p}({\rm H})/n_{\rm 50}]$ is estimated as 0.23, corresponding to a factor 1.7. \noindent (3) Given existing models for the variation of mean gas density and UV radiation field with position within the Galactic disk, our analysis of the ArH$^+$, OH$^+$, and $\rm H_2O^+$ data leads to a recommended value of $\zeta_{\rm p}({\rm H})= (2.2 \pm 0.3) \exp [(R_0-R_g)/4.7\,\rm{kpc}] \times 10^{-16} \rm \, s^{-1}$ for the mean CRIR at Galactocentric distance $R_g$, where $R_0=8.5$~kpc. \noindent (4) Within diffuse molecular clouds observed toward stars in the solar neighborhood, measurements of $\rm H_3^+$ and $\rm H_2$ imply a mean logarithm of the CRIR of $ < \log_{10}\,\zeta_{\rm p}({\rm H}) > \,\, = -15.63 \pm 0.10$, corresponding to a CRIR of $(2.3 \pm 0.6) \times 10^{-16}\,\,\rm s^{-1}$ and a total ionization rate per H$_2$ molecule of $\zeta_{\rm t}({\rm H_2}) \sim 2.3 \,\zeta_{\rm p}({\rm H}) = (5.3 \pm 1.1) \times 10^{-16}\,\,\rm s^{-1},$ in good accord with previous estimates (IM12). \noindent (5) For diffuse molecular clouds in which H$_3^+$ is detected, the intrinsic dispersion of $\log_{10}\,\zeta_{\rm p}({\rm H})$ is estimated as 0.09, corresponding to a factor of only 1.23. \noindent (6) Our results show marginal evidence that the CRIR in diffuse molecular clouds decreases with cloud extinction, with a best-fit dependence $\propto A_{\rm V}({\rm tot})^{-1}$ for $A_{\rm V}({\rm tot}) \ge 0.5$.
\noindent (7) We have presented a rederivation of the CRIR implied by recent observations of carbon and hydrogen radio recombination lines along the sight-line to Cas A, which yields a best-fit estimate for the primary CRIR of $2.9 \times 10^{-16}\,\,\rm s^{-1}$ per H atom. \noindent (8) The uncertainty estimates presented in this paper are statistical in nature and do not include systematic uncertainties inherent in the diffuse cloud models. We have identified several key reactions with rate coefficients that are important in determining what CRIR is needed to match the astronomical data: these include the dissociative recombination of H$_3^+$ and OH$^+$, the H abstraction reactions of O$^+$ and OH$^+$ with H$_2$, and the charge transfer reactions of H with O$^+$ and of O($\rm ^3P_2$) with H$^+$. While our model adopts rate coefficients for these processes that are based upon the theoretical and experimental data currently available, we anticipate that new calculations and experiments may require them to be revised in the coming years; accordingly, we have discussed the dependences of the derived CRIRs upon the adopted rate coefficients for each of these processes. \begin{acknowledgements} We thank N.\ Indriolo and J.\ Black for several valuable comments about an earlier draft of this paper. We gratefully acknowledge the support of grant number 120364 from NASA's Astrophysical Data Analysis Program (ADAP; NNX15AM94G). \end{acknowledgements}
\section{Introduction} Let $K$ be a totally real number field and let $\OO_K$ be its ring of integers. In \cite{FS}, Freitas and Siksek study the Fermat equation $a^p+b^p+c^p=0$ with $a$, $b$, $c \in \OO_K$ and $p$ prime. For now let $S$ be the set of primes of $\OO_K$ above $2$ and let $\OO_S$ be the ring of $S$-integers and $\OO_S^*$ be the group of $S$-units. Freitas and Siksek give a criterion for the non-existence of solutions $a$, $b$, $c \in \OO_K$ with $abc \ne 0$ for $p$ sufficiently large in terms of the solutions to the $S$-unit equation $\lambda+\mu=1$. The proof uses modularity and level lowering arguments over totally real fields. It is natural to seek an extension of the work of Freitas and Siksek to generalized Fermat equations $A a^p+B b^p+ C c^p=0$, for given non-zero coefficients $A$, $B$, $C \in \OO_K$. In this paper we show that the results of Freitas and Siksek can indeed be extended to any choice of \emph{odd} coefficients $A$, $B$, $C$, provided the set $S$ is enlarged to contain the primes dividing $ABC$ as well as the primes dividing $2$. We now state our results precisely. As in \cite{FS}, our results will sometimes be conditional on the following standard conjecture. \begin{conj}[\lq\lq Eichler--Shimura\rq\rq]\label{conj:ES} Let $K$ be a totally real field. Let $\ff$ be a Hilbert newform of level $\cN$ and parallel weight $2$, and write $\Q_\ff$ for the field generated by its eigenvalues. Suppose that $\Q_\ff=\Q$. Then there is an elliptic curve $E_\ff/K$ with conductor $\cN$ having the same $\mathrm{L}$-function as $\ff$. \end{conj} Let $A$, $B$, $C$ be non-zero elements of $\OO_K$, and let $p$ be a prime. Consider the equation \begin{equation}\label{eqn:Fermat} Aa^p+Bb^p+Cc^p=0, \qquad a,b,c\in \OO_K; \end{equation} we shall refer to this as \emph{the generalized Fermat equation over $K$ with coefficients $A$, $B$, $C$ and exponent $p$}. A solution $(a,b,c)$ is called \textbf{trivial} if $abc=0$, otherwise \textbf{non-trivial}.
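To make the $S$-unit equation concrete, here is a small brute-force sketch over $K=\Q$ with $S=\{2\}$ (an illustration only: the exponent window is an artificial assumption of the search, and the paper of course works over totally real fields):

```python
from fractions import Fraction

# S-units of Q with S = {2} are +/- 2^k; search a window of exponents
units = {s * Fraction(2) ** k for s in (1, -1) for k in range(-10, 11)}

# Solutions (lambda, mu) of lambda + mu = 1 with both lambda, mu S-units
sols = sorted((lam, 1 - lam) for lam in units if 1 - lam in units)
print(sols)  # the single orbit {(2,-1), (1/2,1/2), (-1,2)} under the
             # symmetries lambda -> 1 - lambda and lambda -> 1/lambda
```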
The following notation shall be fixed throughout the paper. \begin{equation}\label{eqn:ST} \begin{gathered} R = \text{Rad}(ABC) = \prod_{\substack{ \fq \mid ABC \\ \fq \text{ prime in } K}} \fq \\ S =\{ \mP \; :\; \text{$\mP$ is a prime of $\OO_K$ such that $\mP \mid 2R$}\}\\ T =\{ \mP \; :\; \text{$\mP$ is a prime of $\OO_K$ above $2$}\}, \\ U=\{ \mP \in T \; : \; f(\mP/2) = 1 \}, \qquad V=\{ \mP \in T \; : \; 3 \nmid \ord_\mP(2) \} \end{gathered} \end{equation} Here $f(\mP/2)$ denotes the residual degree of $\mP$. As in \cite{FS}, we need an assumption which we refer to throughout the paper as (ES): \[ \label{ES} \text{\bf (ES)} \qquad \left\{ \begin{array}{lll} \text{either $[K:\Q]$ is odd;}\\ \text{or $U \ne \emptyset$;}\\ \text{or Conjecture~\ref{conj:ES} holds for $K$.} \end{array} \right. \] \begin{thm}\label{thm:FermatGen} Let $K$ be a totally real field satisfying (ES). Let $A$, $B$, $C \in \OO_K$, and suppose that $A$, $B$, $C$ are odd, in the sense that if $\mP \mid 2$ is a prime of $\OO_K$ then $\mP \nmid ABC$. Write $\OO_{S}^*$ for the set of $S$-units of $K$. Suppose that for every solution $(\lambda,\mu)$ to the $S$-unit equation \begin{equation}\label{eqn:sunit} \lambda+\mu=1, \qquad \lambda,\, \mu \in \OO_{S}^* \, , \end{equation} there is \begin{enumerate} \item[(A)] either some $\mP \in U$ that satisfies $\max\{ \lvert \ord_{\mP} (\lambda) \rvert, \lvert \ord_{\mP}(\mu) \rvert \} \le 4 \ord_{\mP}(2)$, \item[(B)] or some $\mP \in V$ that satisfies both $\max\{ \lvert \ord_{\mP} (\lambda) \rvert, \lvert \ord_{\mP}(\mu) \rvert \} \le 4 \ord_{\mP}(2)$, and $\ord_{\mP}(\lambda \mu) \equiv \ord_{\mP}(2) \pmod{3}$. \end{enumerate} Then there is some constant $\cB=\cB(K,A,B,C)$ such that the generalized Fermat equation~\eqref{eqn:Fermat} with exponent $p$ and coefficients $A$, $B$, $C$ does not have non-trivial solutions with $p>\cB$. 
\end{thm} Theorem~\ref{thm:FermatGen} gives a bound on the exponent of non-trivial solutions to the generalized Fermat equation \eqref{eqn:Fermat} provided certain hypotheses are satisfied. There are practical algorithms for determining the solutions to $S$-unit equations (e.g.\ \cite{Smart}), so these hypotheses can always be checked for specific $K$, $A$, $B$, $C$. The following theorem is an example where the $S$-unit equation can still be solved, even though the coefficients are not completely fixed. \begin{thm}\label{thm:d5mod8} Let $d \geq 13$ be squarefree, satisfying $d \equiv 5 \pmod{8}$ and let $q \geq 29$ be a prime such that $q \equiv 5 \pmod{8}$ and $\left( \frac{d}{q} \right) = -1$. Let $K=\Q(\sqrt{d})$ and assume Conjecture~\ref{conj:ES} for $K$. Then there is an effectively computable constant $\cB_{K,q}$ such that for all primes $p > \cB_{K,q}$, the Fermat equation $$x^p+y^p+q^rz^p=0$$ has no non-trivial solutions with exponent $p$. \end{thm} \section{Preliminaries} We shall need the theoretical machinery of modularity, irreducibility of Galois representations and level lowering. These tools and the way we use them are practically identical to \cite{FS}, to which we refer the reader for more details. \subsection{The Frey curve and its modularity} We shall need the following recently proved theorem \cite{FHS}. \begin{thm}[Freitas, Le Hung and Siksek] \label{thm:modgen} Let $K$ be a totally real field. Up to isomorphism over $\overline{K}$, there are at most finitely many non-modular elliptic curves $E$ over $K$. Moreover, if $K$ is real quadratic, then all elliptic curves over $K$ are modular. \end{thm} We shall associate to a solution $(a,b,c)$ of \eqref{eqn:Fermat} the following Frey elliptic curve. \begin{equation}\label{eqn:Frey} E : Y^2 = X(X-Aa^p)(X+Bb^p) \end{equation} Before applying Theorem~\ref{thm:modgen} to the Frey curve associated to our generalized Fermat equation \eqref{eqn:Fermat} we shall need the following lemma.
\begin{lem}\label{lem:not1} Let $A$, $B$, $C \in \OO_K$ be odd, and suppose that every solution $(\lambda,\mu)$ to the $S$-unit equation \eqref{eqn:sunit} satisfies either condition (A) or (B) of Theorem~\ref{thm:FermatGen}. Then $(\pm 1, \pm 1, \pm 1)$ is not a solution to equation \eqref{eqn:Fermat}. \end{lem} \begin{proof} Suppose $(\pm 1, \pm 1, \pm 1)$ is a solution to \eqref{eqn:Fermat}. By changing signs of $A$, $B$, $C$, we may suppose that $(1,1,1)$ is a solution, and therefore that $A+B+C=0$. Let $\lambda=A/C$ and $\mu=B/C$. Clearly $(\lambda,\mu)$ is a solution to the $S$-unit equation \eqref{eqn:sunit}. Suppose first that (A) is satisfied. Then $U \ne \emptyset$, so there is some $\mP \mid 2$ with residue field $\F_2$. As $A$, $B$, $C$ are odd, we have $\mP \nmid ABC$. Reducing the relation $A+B+C=0$ mod $\mP$ we obtain $1+1+1=0$ in $\F_2$, giving a contradiction. Suppose now that (B) holds. By (B) there is some $\mP \in V$ such that $\ord_\mP(\lambda \mu) \equiv \ord_\mP(2) \pmod{3}$. However, as $A$, $B$, $C$ are odd, $\ord_\mP(\lambda \mu)=0$. Moreover, $3 \nmid \ord_\mP(2)$ by definition of $V$. This gives a contradiction. \end{proof} \begin{cor}\label{cor:Freymod} Let $A$, $B$, $C \in \OO_K$ be odd, and suppose that every solution $(\lambda,\mu)$ to the $S$-unit equation \eqref{eqn:sunit} satisfies either condition (A) or (B) of Theorem~\ref{thm:FermatGen}. There is some (ineffective) constant $\cA=\cA(K,A,B,C)$ such that for any non-trivial solution $(a,b,c)$ of \eqref{eqn:Fermat} with prime exponent $p>\cA$, the Frey curve $E$ given by \eqref{eqn:Frey} is modular. \end{cor} \begin{proof} By Theorem~\ref{thm:modgen}, there are at most finitely many possible $\overline{K}$-isomorphism classes of elliptic curves over $K$ that are non-modular. Let $j_1,\dots,j_n \in K$ be the $j$-invariants of these classes. Write $\lambda=-Bb^p/Aa^p$. 
The $j$-invariant of $E_{a,b,c}$ is \[ j(\lambda)=2^8 \cdot (\lambda^2-\lambda+1)^3 \cdot \lambda^{-2} (\lambda-1)^{-2}. \] Each equation $j(\lambda)=j_i$ has at most six solutions $\lambda \in K$. Thus there are values $\lambda_1,\dots,\lambda_m \in K$ such that if $\lambda \ne \lambda_k$ for all $k$ then $E$ is modular. If $\lambda=\lambda_k$ then \[ (-b/a)^p=A\lambda_k/B, \qquad (c/a)^p=A( \lambda_k-1)/C. \] This pair of equations results in a bound for $p$ unless $-b/a$ and $c/a$ are both roots of unity. But as $K$ is real, the only roots of unity are $\pm 1$. If $-b/a=\pm 1$ and $c/a= \pm 1$ then \eqref{eqn:Fermat} has a solution of the form $(\pm 1,\pm 1, \pm 1)$ contradicting Lemma~\ref{lem:not1}. This completes the proof. \end{proof} \subsection{Irreducibility of mod $p$ representations of elliptic curves} To use a generalized version of level lowering, we need the mod $p$ Galois representation associated to the Frey elliptic curve to be irreducible. The following theorem of Freitas and Siksek \cite[Theorem 2]{FSirred}, building on earlier work of David, Momose and Merel, is sufficient for our purpose. \begin{thm} \label{thm:irred} Let $K$ be a totally real field. There is an effective constant $\cC_K$, depending only on $K$, such that the following holds. If $p > \cC_K$ is a rational prime, and $E$ is an elliptic curve over $K$ which is semistable at some $\fq \mid p$, then $\overline{\rho}_{E,p}$ is irreducible. \end{thm} In \cite{FSirred} the theorem is stated for Galois totally real fields $K$, but the version stated here follows immediately on replacing $K$ by its Galois closure. \subsection{Level Lowering} As before, $K$ is a totally real field. Let $E/K$ be an elliptic curve of conductor $\cN$ and $p$ a rational prime. For a prime ideal $\fq$ of $K$ denote by $\Delta_\fq$ the discriminant of a local minimal model for $E$ at $\fq$. 
Let \begin{equation}\label{eqn:Np} \cM_p := \prod_{ \substack{\fq \Vert \cN,\\ p \mid \ord_\fq(\Delta_\fq)} } {\fq}, \qquad\quad \cN_p:=\frac{\cN}{\cM_p} \, . \end{equation} The ideal $\cM_p$ is precisely the product of the primes where we want to lower the level. For a Hilbert eigenform $\ff$ over $K$, denote the field generated by its eigenvalues by $\Q_\ff$. The following level-lowering recipe is derived by Freitas and Siksek \cite{FS} from the works of Fujiwara, Jarvis and Rajaei. \begin{thm}\label{thm:levell} With the above notation, suppose the following: \begin{enumerate} \item[(i)] $p\ge 5$ and $p$ is unramified in $K$, \item[(ii)] $E$ is modular, \item[(iii)] $\overline{\rho}_{E,p}$ is irreducible, \item[(iv)] $E$ is semistable at all $\fq \mid p$, \item[(v)] $p \mid \ord_\fq(\Delta_\fq)$ for all $\fq \mid p$. \end{enumerate} Then, there is a Hilbert eigenform $\ff$ of parallel weight $2$ that is new at level $\cN_p$ and some prime $\varpi$ of $\Q_\ff$ such that $\varpi \mid p$ and $\overline{\rho}_{E,p} \sim \overline{\rho}_{\ff,\varpi}$. \end{thm} \section{Conductor of the Frey curve} Let $(a,b,c)$ be a non-trivial solution to the Fermat equation~\eqref{eqn:Fermat}. Write \begin{equation}\label{eqn:cG} \cG_{a,b,c}=a \OO_K+b \OO_K+ c \OO_K, \end{equation} which we naturally think of as the greatest common divisor of $a$, $b$, $c$. Over $\Q$, or over a number field of class number $1$, it is natural to scale the solution $(a,b,c)$ so that $\cG_{a,b,c}=1 \cdot \OO_K$, but this is not possible in general. The primes that divide all of $a$, $b$, $c$ can be additive primes for the Frey curve, and additive primes are not removed by the level lowering recipe given above. To control the final level we need to control $\cG_{a,b,c}$. Following \cite{FS}, we fix a set \[ \cH=\{\fp_1,\dots,\fp_h\} \] of prime ideals $\fp_i \nmid 2 R$, which is a set of representatives for the ideal classes of $\OO_K$.
For a non-zero ideal $\ga$ of $\OO_K$, we denote by $[\ga]$ the class of $\ga$ in the class group. We denote $[\cG_{a,b,c}]$ by $[a,b,c]$. The following is Lemma 3.2 of \cite{FS}, and states that we can always scale our solution $(a,b,c)$ so that the gcd belongs to $\cH$. \begin{lem}\label{lem:gcd} Let $(a,b,c)$ be a non-trivial solution to \eqref{eqn:Fermat}. There is a non-trivial integral solution $(a^\prime,b^\prime,c^\prime)$ to \eqref{eqn:Fermat} such that the following hold. \begin{enumerate} \item[(i)] For some $\xi \in K^*$, \[ a^\prime= \xi a, \qquad b^\prime= \xi b, \qquad c^\prime=\xi c. \] \item[(ii)] $\cG_{a^\prime,b^\prime,c^\prime} = \fp \in \cH$. \item[(iii)] $[a^\prime,b^\prime,c^\prime]=[a,b,c]$. \end{enumerate} \end{lem} \begin{lem}\label{lem:cond} Let $(a,b,c)$ be a non-trivial solution to the Fermat equation \eqref{eqn:Fermat} with odd prime exponent $p$, and scaled as in Lemma~\ref{lem:gcd} so that $\cG_{a,b,c}=\fp \in \cH$. Write $E=E_{a,b,c}$ for the Frey curve in \eqref{eqn:Frey}, and let $\Delta$ be its discriminant. For a prime $\fq$ we write $\Delta_\fq$ for the minimal discriminant at $\fq$. Then at all $\fq \notin S \cup \{\fp\}$, the model $E$ is minimal, semistable, and satisfies $p \mid \ord_\fq(\Delta_\fq)$. Let $\cN$ be the conductor of $E$, and let $\cN_p$ be as defined in \eqref{eqn:Np}. Then \begin{equation}\label{eqn:cnp} \cN=\fp^{s_{\fp}} \cdot \prod_{\mP \in S} \mP^{r_\mP} \cdot \prod_{\substack{\fq \mid abc \\ \fq \notin S \cup \{\fp\} }} \fq \, , \qquad \qquad \cN_p= \fp^{s^\prime_{\fp}} \cdot \prod_{\mP \in S} \mP^{r^\prime_\mP}, \end{equation} where $0 \le r^\prime_\mP \le r_\mP \le 2 + 6\ord_{\mP} (2)$ for $\mP \mid 2$, and $0 \le r^\prime_\mP \le r_\mP \le 2$ for $\mP \mid R$, and $0 \leq s^\prime_{\fp} \leq s_{\fp} \leq 2$. \end{lem} \begin{proof} The discriminant of the model given by $E$ is $16 (ABC)^2 (abc)^{2p}$, thus the primes appearing in $\cN$ will be either primes dividing $2R$ or dividing $abc$.
For $\mP \mid 2$ we have $r_\mP=\ord_\mP(\cN) \le 2+6 \ord_{\mP}(2)$ by \cite[Theorem IV.10.4]{SilvermanII}; this proves the correctness of the bounds for the exponents in $\cN$ and $\cN_p$ at even primes, and we will restrict our attention to odd primes. As $E$ has full 2-torsion over $K$, the wild part of the conductor of $E/K$ vanishes (\cite{SilvermanII}, page 380) at all odd $\fq$, and so $\ord_\fq(\cN_p) \le \ord_\fq(\cN) \le 2$. This proves the correctness of the bounds for the exponents in $\cN$ and $\cN_p$ at $\fq$ that divide $R$ and for $\fq=\fp$. It remains to consider $\fq \mid abc$ satisfying $\fq \not \in S \cup \{ \fp\}$. It is easily checked that the model \eqref{eqn:Frey} is minimal and has multiplicative reduction at such $\fq$, and it is therefore clear that $p \mid \ord_\fq(\Delta)=\ord_\fq(\Delta_\fq)$. It follows that $\ord_\fq(\cN)=1$ and, from the recipe for $\cN_p$ in \eqref{eqn:Np} that $\ord_\fq(\cN_p)=0$. This completes the proof. \end{proof} \section{Level Lowering for the Frey Curve} \begin{thm}\label{thm:ll2} Let $K$ be a totally real field satisfying (ES). Let $A$, $B$, $C \in \OO_K$ be odd, and suppose that every solution $(\lambda,\mu)$ to the $S$-unit equation \eqref{eqn:sunit} satisfies either condition (A) or (B) of Theorem~\ref{thm:FermatGen}. There is a constant $\cB=\cB(K,A,B,C)$ depending only on $K$ and $A$, $B$, $C$ such that the following hold. Let $(a,b,c)$ be a non-trivial solution to the generalized Fermat equation \eqref{eqn:Fermat} with prime exponent $p>\cB$, and rescale $(a,b,c)$ as in Lemma~\ref{lem:gcd} so that it remains integral and satisfies $\cG_{a,b,c}=\fp$ for some $\fp \in \cH$. Write $E=E_{a,b,c}$ for the Frey curve given in \eqref{eqn:Frey}. 
Then there is an elliptic curve $E^\prime$ over $K$ such that \begin{enumerate} \item[(i)] the conductor of $E^\prime$ is divisible only by primes in $S \cup \{ \fp \}$; \item[(ii)] $\# E^\prime(K)[2]=4$; \item[(iii)] $\overline{\rho}_{E,p} \sim \overline{\rho}_{E^\prime,p}$; \end{enumerate} Write $j^\prime$ for the $j$-invariant of $E^\prime$. Then, \begin{enumerate} \item[(a)] for $\mP \in U$, we have $\ord_\mP(j^\prime)<0$; \item[(b)] for $\mP \in V$, we have either $\ord_\mP(j^\prime)<0$ or $3 \nmid \ord_\mP(j^\prime)$; \item[(c)] for $\fq \notin S$, we have $\ord_\fq(j^\prime) \ge 0$. \end{enumerate} In particular, $E^\prime$ has potentially good reduction away from $S$. \end{thm} \begin{proof} We first observe, by Lemma~\ref{lem:cond}, that $E$ is semistable outside $S \cup \{\fp\}$. By taking $\cB$ to be sufficiently large, we see from Corollary~\ref{cor:Freymod} that $E$ is modular, and from Theorem~\ref{thm:irred} that $\overline{\rho}_{E,p}$ is irreducible. Applying Theorem~\ref{thm:levell} and Lemma~\ref{lem:cond} we see that $\overline{\rho}_{E,p} \sim \overline{\rho}_{\ff,\varpi}$ for a Hilbert newform $\ff$ of level $\cN_p$ and some prime $\varpi \mid p$ of $\Q_\ff$. Here $\Q_\ff$ is the field generated by the Hecke eigenvalues of $\ff$. The remainder of the proof is identical to the proof of \cite[Theorem 9]{FS}, and so we omit the details, except that we point out that it is here that we make use of assumption (ES). \end{proof} The constant $\cB$ is ineffective as it depends on the ineffective constant $\cA$ in Corollary~\ref{cor:Freymod}. However, if $K$ is a real quadratic field then we do not need that corollary as we know modularity from Theorem~\ref{thm:modgen}. In this case the arguments of \cite{FS} produce an effective constant $\cB$. 
\section{Elliptic curves with full $2$-torsion and solutions to the $S$-unit equation} Theorem~\ref{thm:ll2} relates non-trivial solutions of the Fermat equation to elliptic curves with full $2$-torsion having potentially good reduction outside $S$. There is a well-known correspondence between such elliptic curves and solutions of the $S$-unit equation \eqref{eqn:sunit} that we now sketch. Consider an elliptic curve over $K$ with full $2$-torsion, \begin{equation}\label{eqn:e123} y^2=(x-a_1)(x-a_2)(x-a_3), \end{equation} where $a_1$, $a_2$, $a_3$ are distinct. The \textbf{cross ratio} \[ \lambda=\frac{a_3-a_1}{a_2-a_1} \] belongs to $\PP^1(K)-\{0,1,\infty\}$. Moreover, any $\lambda \in \PP^1(K)-\{0,1,\infty\}$ can be written as a cross ratio of three distinct $a_1$, $a_2$, $a_3$ in $K$ and hence comes from an elliptic curve with full $2$-torsion. Write $\sS_3$ for the symmetric group on $3$ letters. The action of $\sS_3$ on the triple $(a_1, a_2, a_3)$ extends via the cross ratio in a well-defined manner to an action on $\PP^1(K)-\{0,1,\infty\}$. The orbit of $\lambda \in \PP^1(K)-\{0,1,\infty\}$ under the action of $\sS_3$ is \begin{equation}\label{eqn:orbit} \left\{ \lambda, \frac{1}{\lambda}, 1-\lambda, \frac{1}{1-\lambda}, \frac{\lambda}{\lambda-1}, \frac{\lambda-1}{\lambda} \right\}. \end{equation} It follows from the theory of Legendre elliptic curves (\cite[Pages 53--55]{SilvermanI}) that the cross ratio in fact defines a bijection between elliptic curves over $K$ having full $2$-torsion (up to isomorphism over $\overline{K}$), and $\lambda$-invariants up to the action of $\sS_3$. Under this bijection, the $\sS_3$-orbit of a given $\lambda \in \PP^1(K)-\{0,1,\infty\}$ is associated to the $\overline{K}$-isomorphism class of the \textbf{Legendre elliptic curve} $y^2=x(x-1)(x-\lambda)$. We would like to understand the $\lambda$-invariants that correspond to elliptic curves over $K$ with full $2$-torsion and potentially good reduction outside $S$.
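The invariance underlying this bijection is easy to check numerically. The sketch below (over $\Q$, using the standard $j$-formula for the Legendre curve $y^2=x(x-1)(x-\lambda)$) verifies that all six cross ratios in \eqref{eqn:orbit} share one $j$-invariant:

```python
from fractions import Fraction

def j(lam):
    # j-invariant of the Legendre curve y^2 = x(x-1)(x-lam)
    lam = Fraction(lam)
    return 256 * (lam ** 2 - lam + 1) ** 3 / (lam ** 2 * (1 - lam) ** 2)

def orbit(lam):
    # The six cross ratios forming the S3-orbit of lam
    lam = Fraction(lam)
    return [lam, 1 / lam, 1 - lam, 1 / (1 - lam),
            lam / (lam - 1), (lam - 1) / lam]

# Every member of an orbit has the same j-invariant
for lam in (Fraction(2), Fraction(1, 3), Fraction(-5, 7)):
    assert len({j(mu) for mu in orbit(lam)}) == 1

print(j(2))  # 1728, the common j-invariant of the orbit {2, 1/2, -1}
```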
The $j$-invariant of the Legendre elliptic curve is given by \begin{equation}\label{eqn:j} j(\lambda)=2^8 \cdot \frac{(\lambda^2-\lambda+1)^3}{\lambda^2 (1-\lambda)^2} \, . \end{equation} The Legendre elliptic curve (and therefore its $\overline{K}$-isomorphism class) has potentially good reduction outside $S$ if and only if $j(\lambda)$ belongs to $\OO_S$. It easily follows from \eqref{eqn:j} that this happens precisely when both $\lambda$ and $1-\lambda$ are $S$-units (recall that $S$ includes all the primes above $2$); in other words, this is equivalent to $(\lambda,\mu)$ being a solution to the $S$-unit equation \eqref{eqn:sunit}, where $\mu=1-\lambda$. Let $\Lambda_{S}$ be the set of solutions to the $S$-unit equation \eqref{eqn:sunit}: \begin{equation}\label{eqn:LambdaS} \Lambda_{S}=\{(\lambda,\mu)\; : \; \lambda+\mu=1, \qquad \lambda,\; \mu \in \OO_{S}^*\}. \end{equation} It is easy to see that the action of $\sS_3$ on $\PP^1(K)-\{0,1,\infty\}$ induces a well-defined action on $\Lambda_{S}$ given by \[ (\lambda,\mu)^{\sigma} =(\lambda^{\sigma},1-\lambda^\sigma). \] We denote by $\sS_3 \backslash \Lambda_{S}$ the set of $\sS_3$-orbits in $\Lambda_{S}$. We deduce the following. \begin{lem}\label{lem:prebij} Let $\cE_{S}$ be the set of all elliptic curves over $K$ with full $2$-torsion and potentially good reduction outside $S$. Define the equivalence relation $E_1 \sim E_2$ on $\cE_{S}$ to mean that $E_1$ and $E_2$ are isomorphic over $\overline{K}$. There is a well-defined bijection \[ \Phi \; : \; \cE_{S}/\sim \; \longrightarrow \; \sS_3 \backslash \Lambda_{S} \] which sends the class of an elliptic curve given by \eqref{eqn:e123} to the orbit of \[ \left(\frac{a_3-a_1}{a_2-a_1},\; \frac{a_2-a_3}{a_2-a_1} \right) \] in $\sS_3 \backslash \Lambda_{S}$; the map $\Phi^{-1}$ sends the orbit of $(\lambda,\mu)$ to the class of the Legendre elliptic curve $y^2=x(x-1)(x-\lambda)$. \end{lem} We shall need the following for the proof of Theorem~\ref{thm:FermatGen}.
\begin{lem}\label{lem:jval} Let $E^\prime \in \cE_{S}$ and suppose that its $\sim$-equivalence class corresponds via $\Phi$ to the orbit of $(\lambda,\mu) \in \Lambda_{S}$. Let $j^\prime$ be the $j$-invariant of $E^\prime$ and $\mP \in T$. Then \begin{itemize} \item[(i)] $\ord_\mP(j^\prime) \ge 0$ if and only if $\max\{\lvert \ord_\mP(\lambda) \rvert, \lvert \ord_\mP(\mu) \rvert \} \le 4 \ord_\mP(2)$, \item[(ii)] $3 \mid \ord_\mP(j^\prime)$ if and only if $\ord_\mP(\lambda \mu) \equiv \ord_\mP(2) \pmod{3}$. \end{itemize} \end{lem} \begin{proof} Observe that \begin{equation}\label{eqn:jlambda} j^\prime= j(\lambda)=2^8 \cdot \frac{(\lambda^2-\lambda+1)^3}{\lambda^2 (\lambda-1)^2} =2^8 \cdot \frac{(1-\lambda \mu)^3}{(\lambda \mu)^2} \, . \end{equation} From this we immediately deduce (ii). Let \[ m=\ord_\mP(\lambda), \qquad n=\ord_\mP(\mu), \qquad t=\max(\lvert m \rvert, \lvert n \rvert). \] If $t=0$ then $\ord_\mP(j^\prime) \ge 8 \ord_\mP(2) > 0$, and so (i) holds. We may therefore suppose that $t>0$. Now the relation $\lambda+\mu=1$ forces either $m=n=-t$, or $m=0$ and $n=t$, or $m=t$ and $n=0$. Thus $\ord_\mP(\lambda \mu)=-2t<0$ or $\ord_\mP(\lambda \mu)=t>0$. In either case, from~\eqref{eqn:j}, \[ \ord_\mP(j^\prime)=8 \ord_\mP(2)-2 t. \] This proves (i). \end{proof} \section{Proof of Theorem~\ref{thm:FermatGen}} Let $K$ be a totally real field satisfying assumption (ES). Let $S$, $T$, $U$, $V$ be as in \eqref{eqn:ST}. Let $\cB$ be as in Theorem~\ref{thm:ll2}, and let $(a,b,c)$ be a non-trivial solution to the Fermat equation~\eqref{eqn:Fermat} with exponent $p>\cB$, scaled so that $\cG_{a,b,c}=\fp$ with $\fp \in \cH$. Applying Theorem~\ref{thm:ll2} gives an elliptic curve $E^\prime/K$ with full $2$-torsion and potentially good reduction outside $S$ whose $j$-invariant $j^\prime$ satisfies: \begin{enumerate} \item[(a)] for all $\mP \in U$, we have $\ord_\mP(j^\prime) <0$; \item[(b)] for all $\mP \in V$, we have $\ord_\mP(j^\prime)<0$ or $3 \nmid \ord_\mP(j^\prime)$.
\end{enumerate} Let $(\lambda,\mu)$ be a solution to the $S$-unit equation \eqref{eqn:sunit}, whose $\sS_3$-orbit corresponds to the $\overline{K}$-isomorphism class of $E^\prime$ as in Lemma~\ref{lem:prebij}. By Lemma~\ref{lem:jval} and (a), (b) we know that \begin{enumerate} \item[($a^\prime$)] for all $\mP \in U$, we have $\max\{ \lvert \ord_\mP(\lambda) \rvert, \lvert \ord_\mP(\mu) \rvert \} > 4 \ord_\mP(2)$; \item[($b^\prime$)] for all $\mP \in V$, we have $\max\{ \lvert \ord_\mP(\lambda) \rvert, \lvert \ord_\mP(\mu) \rvert \} > 4 \ord_\mP(2)$ or $\ord_\mP(\lambda \mu) \not\equiv \ord_\mP(2) \pmod{3}$. \end{enumerate} These contradict assumptions (A) and (B) of Theorem~\ref{thm:FermatGen}, completing the proof. \section{The $S$-unit equation over real quadratic fields} To prove Theorem~\ref{thm:d5mod8} we need to understand the solutions to the $S$-unit equation \eqref{eqn:sunit} for real quadratic fields $K$. This is easier when $S$ is small in size. \begin{lem}\label{lem:makeintegral} Suppose $\lvert S \rvert=2$. Let $(\lambda,\mu) \in \Lambda_S$. Then, there is some element $\sigma \in \sS_3$ so that $(\lambda^\prime,\mu^\prime)=(\lambda,\mu)^\sigma$ satisfies $\lambda^\prime$, $\mu^\prime \in \OO_K$. \end{lem} \begin{proof} As $\mu^\prime=1-\lambda^\prime$ we need only find some element $\sigma \in \sS_3$ so that $\lambda^\prime=\lambda^\sigma \in \OO_K$. Write $S=\{\mP_1,\mP_2\}$. If $\ord_{\mP_i}(\lambda) \ne 0$ for $i=1$, $2$, then let $\lambda^\prime=\lambda/(\lambda-1)$, which will have non-negative valuation at $\mP_i$ and so belongs to $\OO_K$. Thus without loss of generality we may suppose that $\ord_{\mP_1}(\lambda)=0$. Now if $\ord_{\mP_2}(\lambda) \ge 0$ then $\lambda^\prime=\lambda \in \OO_K$, and if $\ord_{\mP_2}(\lambda)<0$ then $\lambda^\prime=1/\lambda \in \OO_K$.
\end{proof} For the remainder of this section $d$ denotes a squarefree integer $\geq 13$ that satisfies $d \equiv 5 \pmod{8}$ and $q\geq 29$ a prime satisfying $q \equiv 5 \pmod{8}$ and $\left( \frac{d}{q} \right)=-1$. Let $K$ denote the real quadratic field $\Q(\sqrt{d})$. It follows that both $q$ and $2$ are inert in $K$. We let $S=\{2,q\}$. \begin{lem}\label{lem:makeint} Let $K$ and $S$ be as above, and let $(\lambda,\mu) \in \Lambda_S$. Then $\lambda$, $\mu \in \Q$ if and only if $(\lambda,\mu)$ belongs to the $\sS_3$-orbit $\{(1/2,1/2),\; (2,-1),\; (-1,2)\} \subseteq \Lambda_S$. \end{lem} \begin{proof} Suppose $\lambda$, $\mu \in \Q$. By Lemma~\ref{lem:makeintegral} we may suppose that $\lambda$ and $\mu$ belong to $\OO_K \cap \Q=\Z$ and hence $\lambda=\pm 2^{r_1} q^{s_1}$, $\mu=\pm 2^{r_2} q^{s_2}$ where $r_i \ge 0$ and $s_i \ge 0$. As $\lambda+\mu=1$ we see that one of $r_1$, $r_2$ is $0$ and likewise one of $s_1$, $s_2$ is $0$. Without loss of generality $r_2=0$. If $s_2=0$ too then we have $\lambda\pm 1=1$ which forces $(\lambda,\mu)=(2,-1)$ as required. We may therefore suppose that $s_1=0$. Hence $\pm 2^{r_1} \pm q^{s_2}=1$. If $s_2=0$ then again we obtain $(\lambda,\mu)=(2,-1)$, so suppose $s_2>0$. We now easily check that $r_1=1$ and $r_1=2$ are both incompatible with our hypotheses on $q$. Thus $r_1 \ge 3$ and so $\mu=\pm q^{s_2} \equiv 1 \pmod{8}$. As $q \equiv 5 \pmod{8}$, we have $\mu=q^{2t}$ for some integer $t \ge 1$. Hence $(q^t+1)(q^t-1)=\mu-1=-\lambda=\mp 2^{r_1}$. This implies that $q^t+1=2^{a}$ and $q^t-1=2^{b}$ where $a \ge b \ge 1$. Subtracting we have $2^a-2^b=2$ and so $b=1$ and $q=3$ giving a contradiction. \end{proof} We follow \cite{FS} in calling the elements of the orbit $\{(1/2,1/2),\; (2,-1),\; (-1,2)\}$ \textbf{irrelevant}, and in calling other elements of $\Lambda_S$ \textbf{relevant}. Next we give a parametrization of all relevant elements of $\Lambda_S$.
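Before turning to that parametrization, the integral case in the proof of Lemma~\ref{lem:makeint} can be checked by brute force (a sketch only: the exponent bounds are an artificial assumption of the search, whereas the lemma itself needs none):

```python
from itertools import product

q = 29  # a prime with q = 5 (mod 8), as in the hypotheses above

# Integer S-units +/- 2^r * q^s within a finite search window
units = {e * 2 ** r * q ** s
         for e, r, s in product((1, -1), range(40), range(8))}

# Integer solutions of lambda + mu = 1: only the irrelevant pair and its twin
sols = sorted((lam, 1 - lam) for lam in units if 1 - lam in units)
print(sols)  # [(-1, 2), (2, -1)]
```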
This is the analogue of \cite[Lemma 6.4]{FS}, and shows that such a parametrization is possible even though our set $S$ is larger, containing the odd prime $q$. \begin{lem}\label{lem:param} Up to the action of $\sS_3$, every relevant $(\lambda,\mu) \in \Lambda_{S}$ has the form \begin{equation}\label{eqn:parasol} \lambda=\frac{\eta_1 \cdot 2^{2r_1} \cdot q^{2s_1} -\eta_2 \cdot q^{2s_2}+1 +v \sqrt{d}}{2}, \qquad \mu=\frac{\eta_2 \cdot q^{2s_2} -\eta_1 \cdot 2^{2r_1} \cdot q^{2s_1}+1 -v \sqrt{d}}{2} \end{equation} where \begin{equation}\label{eqn:paracond1} \eta_1=\pm 1, \qquad \eta_2=\pm 1, \qquad r_1 \ge 0, \qquad s_1, s_2 \geq 0, \qquad s_1 \cdot s_2=0, \qquad v \in \Z, \qquad v \ne 0 \end{equation} are related by \begin{gather}\label{eqn:paracond2} (\eta_1 \cdot 2^{2 r_1} \cdot q^{2s_1}-\eta_2 \cdot q^{2s_2}+1)^2 -d v^2 =\eta_1 \cdot 2^{2r_1+2} \cdot q^{2s_1}\\ \label{eqn:paracond3} (\eta_2 \cdot q^{2s_2}-\eta_1 \cdot 2^{2r_1} \cdot q^{2s_1}+1)^2 -d v^2 =\eta_2 \cdot 2^{2} \cdot q^{2s_2}. \end{gather} \end{lem} \begin{proof} If $\eta_1$, $\eta_2$, $r_1$, $s_1$, $s_2$ and $v$ satisfy \eqref{eqn:paracond1}, \eqref{eqn:paracond2}, \eqref{eqn:paracond3} and $\lambda$, $\mu$ are given by \eqref{eqn:parasol}, it is clear that $(\lambda,\mu)$ is a relevant element of $\Lambda_S$. Conversely, suppose $(\lambda,\mu)$ is a relevant element of $\Lambda_S$. By Lemma~\ref{lem:makeint}, we may suppose that $\lambda$, $\mu \in \OO_K$, and that $\lambda$, $\mu \notin \Q$. As $S=\{2,q\}$ we can write $\lambda=2^{r_1} q^{s_1} \lambda^\prime$ and $\mu=2^{r_2} q^{s_2} \mu^\prime$ where $\lambda^\prime$ and $\mu^\prime$ are units. As $\lambda+\mu=1$ we have $r_1 r_2=0$ and $s_1 s_2=0$. Swapping $\lambda$ and $\mu$ if necessary, we can suppose that $r_2=0$. Let $x \mapsto \overline{x}$ denote conjugation in $K$. Then \[ \lambda \overline{\lambda}=\eta_1 \cdot 2^{2 r_1} \cdot q^{2s_1}, \qquad \mu \overline{\mu}=\eta_2 \cdot q^{2s_2}, \qquad \eta_1=\pm 1, \qquad \eta_2=\pm 1.
\] Now, \begin{equation*} \lambda+\overline{\lambda} = \lambda \overline{\lambda} - (1-\lambda)(1-\overline{\lambda}) +1 = \lambda \overline{\lambda} - \mu \overline{\mu} +1 = \eta_1 \cdot 2^{2r_1} \cdot q^{2s_1} - \eta_2 \cdot q^{2s_2} +1 \, . \end{equation*} Moreover we can write $\lambda-\overline{\lambda}=v \sqrt{d}$, where $v \in \Z$, and as $\lambda \notin \Q$, we have $v \ne 0$. The expressions for $\lambda+\overline{\lambda}$ and $\lambda-\overline{\lambda}$ give the expression for $\lambda$ in \eqref{eqn:parasol}, and we deduce the expression for $\mu$ from $\mu=1-\lambda$. Finally, \eqref{eqn:paracond2} follows from the identity \[ (\lambda+\overline{\lambda})^2-(\lambda-\overline{\lambda})^2=4 \lambda \overline{\lambda}, \] and \eqref{eqn:paracond3} from the corresponding identity for $\mu$. \end{proof} \begin{lem}\label{lem:nottable} Let $d \geq 13$ be a squarefree integer with $d \equiv 5 \pmod{8}$, and let $q \geq 29$ be a prime such that $q \equiv 5 \pmod{8}$ and $\left( \frac{d}{q} \right) = -1$. Then there are no relevant elements of $\Lambda_{S}$. \end{lem} \begin{proof} We apply Lemma~\ref{lem:param}. In particular, $s_1 s_2=0$. Suppose first that $s_1>0$. Thus $s_2=0$. As $( d/q )=-1$, we have from \eqref{eqn:paracond2} that $q^{s_1} \mid v$ and $q^{s_1} \mid (\eta_2 -1)$. Hence $\eta_2=1$. Now \eqref{eqn:paracond2} can be rewritten as \[ 2^{4 r_1} q^{2s_1} - d (v/q^{s_1})^2=\eta_1 2^{2 r_1+2} \, . \] Thus $(d/q)=(-\eta_1/q)=1$ as $q \equiv 5 \pmod{8}$. This is a contradiction. \medskip Thus, henceforth, $s_1=0$. Next suppose that $s_2=0$. We will consider the subcases $\eta_2=-1$ and $\eta_2=1$ separately and obtain contradictions in both subcases, showing that $s_2>0$. Suppose $\eta_2 = -1$. From \eqref{eqn:paracond3} we have $2^{4r_1} - dv^2 = - 4$. If $r_1=0$ or $1$ then $d=5$ and if $r_1 \ge 2$ then $d \equiv 1 \pmod{8}$, giving a contradiction. Hence suppose $\eta_2=1$. From \eqref{eqn:paracond2}, we have $2^{4r_1}-dv^2 = \eta_1 2^{2r_1+2}$.
If $r_1=0$, $1$, $2$ then $dv^2=1 \pm 4$, $dv^2= 16 \pm 16$, $dv^2= 256 \pm 64$ all of which contradict the assumptions on $d$ or the fact that $v \ne 0$ (by \eqref{eqn:paracond1}). If $r_1 \ge 3$ then $2^{2r_1-2}-\eta_1=d(v/2^{r_1+1})^2$ which forces $d \equiv \pm 1 \pmod{8}$, a contradiction. \medskip We are reduced to $s_1=0$ and $s_2>0$. From \eqref{eqn:paracond3}, as $(d/q)=-1$, we have $q^{s_2} \mid v$ and \begin{equation}\label{eqn:divide} q^{s_2} \mid (\eta_1 2^{2r_1}-1). \end{equation} The conditions $q \ge 29$ and $q \equiv 5 \pmod{8}$ force $r_1 \ge 5$. Write $v=2^t w$ where $2 \nmid w$. Suppose $t \le r_1-1$. From \eqref{eqn:paracond2} we have $\eta_1 2^{2 r_1}-\eta_2 q^{2s_2}+1=2^t w^\prime$ where $2 \nmid w^\prime$. Thus ${w^\prime}^2-d w^2 \equiv 0 \pmod{8}$, contradicting $d \equiv 5 \pmod{8}$. We may therefore suppose $t \ge r_1$. Hence $2^{r_1} \mid (\eta_2 q^{2s_2}-1)$. Thus $\eta_2=1$. Therefore $2^{r_1} \mid (q^{s_2}-1)(q^{s_2}+1)$. Since $q \equiv 5 \pmod{8}$, we have $2 \mid\mid (q^{s_2}+1)$ and so \[ 2^{r_1-1} \mid (q^{s_2}-1) . \] As $q \equiv 5 \pmod{8}$ and $r_1 \ge 5$, we see that $s_2$ must be even, and that $2^{r_1-2} \mid (q^{s_2/2} -1)$. We can write $q^{s_2/2}=k \cdot 2^{r_1-2}+1$. From \eqref{eqn:divide}, \[ k^2 2^{2r_1-4}+k 2^{r_1-1}+1=q^{s_2} \le 2^{2 r_1}+1. \] Hence $k=1$, $2$ or $3$. Moreover, as $q^{s_2/2} \equiv 1 \pmod{8}$, we have $4 \mid s_2$. Hence \[ (q^{s_2/4}-1)(q^{s_2/4}+1)=k 2^{r_1-2}. \] Again as $q \equiv 5 \pmod{8}$ we have $2 \mid \mid (q^{s_2/4}+1)$ and so $q^{s_2/4}+1=2$ or $6$, both of which are impossible. This completes the proof. \end{proof} \section{Proof of Theorem~\ref{thm:d5mod8}} We apply Theorem~\ref{thm:FermatGen}. By Lemma~\ref{lem:nottable} all solutions to \eqref{eqn:sunit} are irrelevant, and the irrelevant solutions satisfy condition (A) of Theorem~\ref{thm:FermatGen}. This completes the proof of Theorem~\ref{thm:d5mod8}.
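The emptiness argument can be corroborated by a direct finite search over the parametrization of Lemma~\ref{lem:param}. The sketch below is our own illustration: the bounds on $r_1$, $s_1$, $s_2$ are arbitrary search limits, not part of the proof, and we take $d=13$, $q=37$, which satisfy $d \equiv 5 \pmod 8$, $q \equiv 5 \pmod 8$ and $(d/q)=-1$. It checks condition \eqref{eqn:paracond2} only, since \eqref{eqn:paracond3} follows automatically from \eqref{eqn:paracond2} and $\lambda+\mu=1$:

```python
from math import isqrt

d, q = 13, 37   # d = 5 (mod 8), q = 5 (mod 8), (d/q) = -1

found = []
for eta1 in (1, -1):
    for eta2 in (1, -1):
        for r1 in range(8):
            # s1 * s2 = 0, so search (s1, 0) and (0, s2) pairs
            for s1, s2 in [(s, 0) for s in range(4)] + [(0, s) for s in range(1, 4)]:
                # condition (paracond2): A^2 - d v^2 = eta1 * 2^(2 r1 + 2) * q^(2 s1)
                A = eta1 * 2**(2 * r1) * q**(2 * s1) - eta2 * q**(2 * s2) + 1
                rest = A * A - eta1 * 2**(2 * r1 + 2) * q**(2 * s1)
                if rest > 0 and rest % d == 0:
                    v2 = rest // d
                    v = isqrt(v2)
                    if v * v == v2 and v != 0:
                        found.append((eta1, eta2, r1, s1, s2, v))
print(found)   # empty in the searched range, consistent with the lemma
```

The search finds no admissible $(\eta_1,\eta_2,r_1,s_1,s_2,v)$, as the lemma predicts.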
\section*{The map equation} \noindent Define a {\em module partition} $\mathsf{M}$ as a hard partition of a set of $n$ nodes into $m$ modules such that each node is assigned to one and only one module. The map equation $L(\mathsf{M})$ gives the average number of bits per step that it takes to describe an infinite random walk on a network partitioned according to $\mathsf{M}$: \begin{align} L(\mathsf{M}) = {\color{modulecolor}q_{\curvearrowright} H(\mathcal{Q})} {\color{nodecolor} + \sum_{i=1}^{m}p_{\circlearrowright}^iH(\mathcal{P}^i)}. \end{align} Below, we define and expand these terms, but first a note about the general approach. The map equation calculates the minimum description length of a random walk on the network for a two-level code that separates the important structures from the insignificant details based on the partition $\mathsf{M}$. As described in the main text, this two-level code uses unique codewords to name the modules specified by partition $\mathsf{M}$ but reuses the codewords used to name the individual nodes within each module. The first term of this equation ({\color{modulecolor}in red}) gives the average number of bits necessary to describe movement between modules, and the second term ({\color{nodecolor}in blue}) gives the average number of bits necessary to describe movement within modules. To efficiently describe a random walk using a two-level code of this sort, the choice of partition $\mathsf{M}$ must reflect the patterns of flow within the network, with each module corresponding to a cluster of nodes in which a random walker spends a long period of time before departing for another module. To find the best such partition, we therefore seek to minimize the map equation over all possible partitions $\mathsf{M}$. We begin by expanding the terms in the map equation. 
For clarity we here use $i,j$ to enumerate modules, $\alpha,\beta$ to enumerate nodes, {\color{modulecolor}red terms to describe movements between the modules}, and {\color{nodecolor}blue terms to describe movements within the modules}. \bigskip \noindent The per step probability that the random walker switches modules is \begin{align}\label{map1} {\color{modulecolor}q_{\curvearrowright}=\sum_{i=1}^m q_{i \curvearrowright}}, \end{align} where $q_{i \curvearrowright}$ is the per step probability that the random walker exits module $i$. This probability depends on the partitioning of the network and will be derived in Eq.~\ref{q_jump}. The entropy of movements between modules is \begin{align}\label{map2} {\color{modulecolor}H(\mathcal{Q})= -\sum_{i=1}^m\frac{q_{i \curvearrowright}}{\sum_{j=1}^m q_{j \curvearrowright}}\log \left( \frac{q_{i \curvearrowright}}{\sum_{j=1}^m q_{j \curvearrowright}} \right)}, \end{align} which is the lower limit of the average length of a codeword used to name a module. Here we have used Shannon's source coding theorem \cite{sup-shannon48} and treated the modules as $m$ states of a random variable $X$ that occur with frequencies $q_{i\curvearrowright}/\sum_{j=1}^m q_{j \curvearrowright}$. Combining Eqs.~\ref{map1} and \ref{map2}, the first term in the map equation is the per step average description length of movements between modules within the random walk. \bigskip \noindent To weight the entropy of movements within module $i$, we compute \begin{align}\label{map3} {\color{nodecolor}p_{\circlearrowright}^i = q_{i \curvearrowright }+\sum_{\alpha \in i} p_\alpha}, \end{align} where the notation $\alpha \in i$ means ``over all nodes $\alpha$ in module $i$'' and $p_\alpha$ is the ergodic node visit frequency at node $\alpha$ within the random walk. We use the power method, to be explained in detail on the next page, to calculate this probability.
Because the exit codewords are necessary to separate within-module movements from between-module movements, we include the probability of exiting module $i$, $q_{i\curvearrowright}$, in the weight of within-module movements in module $i$. In this way we can guarantee efficient coding: by encoding the exit codewords together with the within-module codewords, we appropriately adjust the length of the exit codewords to the frequency of their use. Finally, the entropy of movements within module $i$ is \begin{subequations} \label{map4} \begin{align} {\color{nodecolor}H(\mathcal{P}^i)} &= {\color{nodecolor}-\frac{q_{i \curvearrowright} }{q_{i \curvearrowright }+\sum_{\beta \in i} p_\beta} \log \left( \frac{q_{i \curvearrowright} }{q_{i \curvearrowright }+\sum_{\beta \in i} p_\beta} \right)}\label{map4a}\\ &{\color{nodecolor}- \sum_{\alpha \in i}\frac{p_\alpha}{q_{i \curvearrowright }+\sum_{\beta \in i} p_\beta}\log \left( \frac{p_\alpha}{q_{i \curvearrowright }+\sum_{\beta \in i} p_\beta} \right)}\label{map4b} \end{align} \end{subequations} which is the lower limit of the average length of a codeword used to name a node (exit code included) in module $i$. The single term in Eq.~\ref{map4a} is the contribution from the exit codeword and the sum in Eq.~\ref{map4b} is the contribution from the codewords naming the nodes. Combining Eqs.~\ref{map3} and \ref{map4} and summing over all modules makes it easy to identify the second term in the map equation as the per step average description length of movements within modules of the random walk.
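In code, the two terms of the map equation can be evaluated directly from the node visit frequencies and the module exit probabilities. The sketch below is our own minimal illustration (function and variable names are ours); it computes $L(\mathsf{M}) = q_{\curvearrowright} H(\mathcal{Q}) + \sum_i p^i_{\circlearrowright} H(\mathcal{P}^i)$ in bits per step:

```python
from math import log2

def map_equation(p, q_exit):
    """Two-level description length L(M) in bits per step.
    p[i]      : list of ergodic visit frequencies of the nodes in module i
    q_exit[i] : exit probability q_i of module i
    """
    q = sum(q_exit)
    # between-module term: q * H(Q) = -sum_i q_i log(q_i / q)
    L = -sum(qi * log2(qi / q) for qi in q_exit if qi > 0)
    # within-module terms: p_i * H(P^i) = -sum_x x log(x / w_i),
    # where x runs over q_i and the node frequencies, w_i their total
    for qi, pi in zip(q_exit, p):
        w = qi + sum(pi)
        for x in [qi] + pi:
            if x > 0:
                L -= x * log2(x / w)
    return L
```

With a single module and zero exit probability, $L$ reduces to the plain node-visit entropy, as expected from the one-level code.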
\bigskip \noindent By collecting the terms and simplifying, we get the final expression for the map equation \begin{subequations} \begin{align} L(\mathsf{M}) &= {\color{modulecolor}\left(\sum_{i=1}^m q_{i \curvearrowright}\right) \log \left( \sum_{i=1}^m q_{i \curvearrowright} \right)}\\ &- ({\color{modulecolor}1} + {\color{nodecolor}1}) \sum_{i=1}^m q_{i \curvearrowright}\log \left( q_{i \curvearrowright} \right) {\color{nodecolor} - \sum_{\alpha=1}^{n} p_\alpha \log \left( p_\alpha \right)}\label{map5b}\\ &{\color{nodecolor}+ \sum_{i=1}^{m}\left(q_{i \curvearrowright }+\sum_{\alpha \in i} p_\alpha \right) \log \left( q_{i \curvearrowright }+\sum_{\alpha \in i} p_\alpha \right)}. \end{align} \end{subequations} Note that the map equation is only a function of the ergodic node visit frequencies $p_\alpha$ and the exit probabilities $q_{i\curvearrowright}$, which both can be easily calculated. Moreover, because the term ${\color{nodecolor}\sum_{1}^{n} p_\alpha \log \left( p_\alpha \right)}$ is independent of partitioning and $p_\alpha$ otherwise only shows up summed over all nodes in a module, it is sufficient to keep track of changes in $q_{i\curvearrowright}$ and $\sum_{\alpha \in i} p_\alpha$ in the optimization algorithm. They can easily be derived for any partition of the network or quickly updated when they change in each step of the optimization procedure using the ergodic node visit frequencies. \begin{itemize} \item \emph{Ergodic node visit frequencies}. We use the power method to calculate the steady state visit frequency for each node. To guarantee a unique steady state distribution for directed networks, we introduce a small teleportation probability $\tau$ in the random walk that links every node to every other node with positive probability and thereby convert the random walker into a \emph{random surfer}. 
The movement of the random surfer can now be described by an irreducible and aperiodic Markov chain that has a unique steady state by the Perron-Frobenius theorem. To generate the ergodic node visit frequencies, we start with the uniform distribution $p_\alpha=1/n$ over the nodes $\alpha$. The surfer moves as follows: at each time step, with probability $1-\tau$ the random surfer follows one of the outgoing links from the node $\alpha$ that it currently occupies, with probability proportional to the weights of the outgoing links $w_{\alpha\beta}$ from $\alpha$ to $\beta$. It is therefore convenient to set $\sum_\beta w_{\alpha\beta} = 1$. With the remaining probability $\tau$, or with probability $1$ if the node does not have any outlinks, the random surfer ``teleports'' with uniform probability to a random node anywhere in the system. As in Google's Page\-Rank algorithm \cite{sup-google}, we use $\tau=0.15$, but emphasize that the results are robust to this choice. \item \emph{Exit probabilities}. Given the ergodic node visit frequencies $p_\alpha,\, \alpha=1,\ldots,n$ and an initial partitioning of the network, it is easy to calculate the ergodic module visit frequencies $\sum_{\alpha \in i} p_\alpha$ for module $i$. The exit probability for module $i$, with teleportation taken into account, is then \begin{align}\label{q_jump} q_{i \curvearrowright }=\tau\frac{n-n_i}{n-1}\sum_{\alpha \in i} p_\alpha + (1-\tau)\sum_{\alpha \in i}\sum_{\beta \notin i} p_\alpha w_{\alpha\beta}, \end{align} where $n_i$ is the number of nodes in module $i$. This equation follows since every node teleports a fraction $\tau(n-n_i)/(n-1)$ and guides a fraction $(1-\tau)\sum_{\beta \notin i} w_{\alpha\beta}$ of its weight $p_\alpha$ to nodes outside of the module.
\end{itemize} \section*{Implementation} \noindent Finding the map that provides the minimal description length of the data, given the requirement that modules receive unique names, is now a standard computational optimization problem. Below we describe how we first use a deterministic greedy search algorithm \cite{sup-clauset-2004-70,sup-wakita} and then refine the results with a simulated annealing approach \cite{sup-kirkpatrick,sup-guimera-nature} using the heat-bath algorithm. \begin{enumerate} \item \emph{Greedy search}. We first calculate the ergodic node visit frequencies and then assign every node to a unique module and derive the exit probabilities as described above. We use the map equation to calculate the description length and repeatedly merge the two modules that give the largest decrease in description length until further merging gives a longer description \cite{sup-clauset-2004-70,sup-wakita}. With the improved version in ref.\ \cite{sup-wakita} of the greedy search algorithm in ref.\ \cite{sup-clauset-2004-70}, we have successfully partitioned networks with 2.6 million nodes and 29 million links. Starting at the next page, we illustrate the greedy search for the example network in Fig.~1 of the paper. \item \emph{Simulated annealing}. The result of the previous step can typically be refined by simulated annealing \cite{sup-kirkpatrick,sup-guimera-nature}. We use the heat-bath algorithm \cite{sup-newmanheatbath} and start with the module configuration achieved by the greedy search. Starting the heat-bath algorithm at several different temperatures, we select the run that gives the shortest description of the map, i.e., the minimal value of the map equation. This step can improve the description length by up to several percent over that found by the greedy search alone. \item \emph{Visualization}.
We set the area of every module to be proportional to the fraction of time a random surfer spends in the module, and the area of the bordering ring to be proportional to the exit probability. Similarly, we vary the widths of the links in accord with the transition probabilities between modules (excluding teleportation). \end{enumerate} \section*{Network maps and coding theory} In this paper, we use maps to describe the dynamics across the links and nodes in directed, weighted networks that represent the local interactions among the subunits of a system. These local interactions induce a system-wide flow of information that characterizes the behavior of the full system \cite{ziv,donath,enright,girvan,kasper}. Consequently, if we want to understand how network structure relates to system behavior, we need to understand the flow of information on the network. We therefore identify the modules which compose the network by finding an optimally compressed description of how information flows on the network. A group of nodes among which information flows quickly and easily can be aggregated and described as a single well-connected module; the links between modules capture the avenues of information flow between those modules. \begin{figure*}[thbp] \centering \includegraphics[width=\textwidth]{fig1.eps} \caption{\label{fig1}Detecting communities by compressing the description of information flows on networks. (A) We want to describe the trajectory of a random walk on the network, such that important structures have unique names. The orange line shows one sample trajectory. (B) A basic approach is to give a unique name to every node in the network. The Huffman code illustrated here is an efficient way to do so. 
The 314 bits shown under the network describe the sample trajectory in A, starting with $1111100$ for the first node on the walk in the upper left corner, $1100$ for the second node etc., and ending with $00011$ for the last node on the walk in the lower right corner. (C) A two-level description of the random walk, in which major clusters receive unique names but the names of nodes within clusters are reused, yields on average a 32\% shorter description for this network. The codes naming the modules and the codes used to indicate an exit from each module are shown to the left and the right of the arrows under the network, respectively. Using this code, we can describe the walk in A by the 243 bits shown under the network in C. The first three bits $111$ indicate that the walk begins in the red module, the code $0000$ specifies the first node on the walk etc. (D) Reporting only the module names, and not the locations within the modules, provides an optimal coarse-graining of the network.} \end{figure*} Succinctly describing information flow is a coding or compression problem. The key idea in coding theory is that a data stream can be compressed by a code that exploits regularities in the process that generates the stream \cite{shannon}. We use a random walk as a proxy for the information flow, because a random walk uses all of the information in the network representation and nothing more. Thus it provides a default mechanism for generating a dynamics from a network diagram alone \cite{ziv}. Taking this approach, we develop an efficient code to describe a random walk on a network. We thereby show that finding community structure in networks is equivalent to solving a coding problem \cite{RosvallAndBergstrom07,rissanen1978,grunwald}. We exemplify this by making a map of science, based on how information flows among scientific journals by means of citations.
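All of the codes that follow are driven by the ergodic node visit frequencies. As a concrete illustration of how these are obtained, here is a minimal sketch of the power-method-with-teleportation procedure from the implementation notes above (the function name and dict-of-edges input format are our own):

```python
def surfer_frequencies(w, tau=0.15, iters=200):
    """Ergodic visit frequencies of the 'random surfer': with probability
    1 - tau follow an outgoing link in proportion to its weight; with
    probability tau (or probability 1 at a node without outlinks) teleport
    uniformly. Power iteration on a weight dict (u, v) -> weight."""
    nodes = {u for edge in w for u in edge}
    n = len(nodes)
    out = {u: 0.0 for u in nodes}            # total outgoing weight per node
    for (u, v), wt in w.items():
        out[u] += wt
    p = {u: 1.0 / n for u in nodes}          # start from the uniform distribution
    for _ in range(iters):
        nxt = {u: 0.0 for u in nodes}
        tele = 0.0                           # probability mass that teleports
        for u in nodes:
            tele += p[u] if out[u] == 0 else tau * p[u]
        for (u, v), wt in w.items():
            if out[u] > 0:
                nxt[v] += (1 - tau) * p[u] * wt / out[u]
        for u in nodes:
            nxt[u] += tele / n
        p = nxt
    return p
```

A fixed number of iterations is used here for simplicity; a practical implementation would iterate to a convergence tolerance instead.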
\subsection*{Describing a path on a network} To illustrate what coding has to do with map-making, consider the following communication game. Suppose that you and I both know the structure of a weighted directed network. We aim to choose a code that will allow us to efficiently describe paths on the network that arise from a random walk process, in a language that reflects the underlying structure of the network. How should we design our code? If maximal compression were our only objective, we could encode the path at or near the entropy rate of the corresponding Markov process. Shannon showed that one can achieve this rate by assigning to each node a unique dictionary over the outgoing transitions \cite{shannon48}. But compression is not our only objective; here we want our language to reflect the network structure: we want the {\em words} we use to refer to {\em things} in the world. Shannon's approach does not do this for us, because every codeword would have a different meaning depending on where it is used. Compare maps: useful maps assign unique names to important structures. Thus we seek a way of describing or encoding the random walk in which important structures indeed retain unique names. Let us look at a concrete example. Figure \ref{fig1}A shows a weighted network with $n=25$ nodes. The link thickness indicates the relative probability that a random walk will traverse any particular link. Overlaid upon the network is a specific 71-step realization of a random walk that we will use to illustrate our communication game. In panels \ref{fig1}B--D, we describe this walk with increasing levels of compression, exploiting more and more of the regularities in the network. \subsection*{Huffman coding} A straightforward method of giving names to nodes is to use a Huffman code \cite{huffman}. Huffman codes save space by assigning short codewords to common events or objects, and long codewords to rare ones, much as common words are short in spoken languages \cite{zipf}.
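To make this concrete, here is a minimal sketch (our own, not from the paper) that derives Huffman codeword lengths from visit frequencies with the standard two-smallest-merge construction; frequent nodes receive short codewords:

```python
import heapq
import itertools

def huffman_lengths(freq):
    """Codeword length for each symbol under a Huffman code built from
    visit frequencies: repeatedly merge the two least frequent subtrees,
    adding one bit of depth to every symbol inside them."""
    counter = itertools.count()          # unique tie-breaker for the heap
    heap = [(f, next(counter), {s: 0}) for s, f in freq.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, d1 = heapq.heappop(heap)
        f2, _, d2 = heapq.heappop(heap)
        merged = {s: depth + 1 for s, depth in {**d1, **d2}.items()}
        heapq.heappush(heap, (f1 + f2, next(counter), merged))
    return heap[0][2]
```

For dyadic frequencies such as $1/2, 1/4, 1/8, 1/8$ the resulting lengths $1, 2, 3, 3$ meet the entropy bound exactly; in general Huffman coding comes within one bit of it.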
Figure \ref{fig1}B shows a prefix-free Huffman coding for our sample network. Each codeword specifies a particular node, and the codeword lengths are derived from the ergodic node visit frequencies of an infinitely long random walk. With the Huffman code pictured in Fig.~\ref{fig1}B, we are able to describe the specific 71-step walk in 314 bits. If we instead had chosen a uniform code, in which all codewords are of equal length, each codeword would be $\lceil \log{25} \rceil = 5$ bits long and $71\cdot5=355$ bits would have been required to describe the walk. Though in this example we assign actual codewords to the nodes for illustrative purposes, in general we will not be interested in the codewords themselves, but rather in the theoretical limit of how concisely we can specify the path. Here we invoke Shannon's source coding theorem \cite{shannon48} which implies that when you use $n$ codewords to describe the $n$ states of a random variable $X$ that occur with frequencies $p_i$, the average length of a codeword can be no less than the entropy of the random variable $X$ itself: $H(X) = -\sum_{1}^{n} p_i \log(p_i)$. This theorem provides us with the necessary apparatus to see that in our Huffman illustration, the average number of bits needed to describe a single step in the random walk is bounded below by the entropy $H(P)$, where $P$ is the distribution of visit frequencies to the nodes on the network. We define this lower bound on code length to be $L$. For example, $L=4.50$ bits/step in Fig.~\ref{fig1}B. \subsection*{Highlighting important objects} Matching the length of codewords to the frequencies of their use gives us efficient codewords for the nodes, but no map. Merely assigning appropriate-length names to the nodes does little to simplify or highlight aspects of the underlying structure. To make a map, we need to separate the important structures from the insignificant details. We therefore divide the network into two levels of description. 
We retain unique names for large-scale objects, the clusters or modules to be identified within our network, but we reuse the names associated with fine-grain details, the individual nodes within each module. This is a familiar approach for assigning names to objects on maps: most US cities have unique names, but street names are reused from one city to the next, such that each city has a Main Street and a Broadway and a Washington Avenue and so forth. The reuse of street names rarely causes confusion, because most routes remain within the bounds of a single city. A two-level description allows us to describe the path in fewer bits than we could with a one-level description. We capitalize on the network's structure --- and in particular, on the fact that a random walker is statistically likely to spend long periods of time within certain clusters of nodes. Figure \ref{fig1}C illustrates this approach. We give each cluster a unique name, but use a different Huffman code to name the nodes within each cluster. A special codeword, the exit code, is chosen as part of the within-cluster Huffman coding and indicates that the walk is leaving the current cluster. The exit code is always followed by the ``name'' or module code of the new module into which the walk is moving (see supporting online material for more details). Thus we assign unique names to coarse-grain structures, the cities in the city metaphor, but reuse the names associated with fine-grain details, the streets in the city metaphor. The savings are considerable; in the two-level description of Fig.~\ref{fig1}C the limit $L$ is $3.05$ bits/step compared to $4.50$ for the one-level description. Herein lies the duality between finding community structure in networks and the coding problem: to find an optimal code, we look for a module partition $\mathsf{M}$ of $n$ nodes into $m$ modules so as to minimize the expected description length of a random walk.
Using the module partition $\mathsf{M}$, the average description length of a single step is given by \begin{equation}\label{map} L(\mathsf{M}) = q_{\curvearrowright} H(\mathcal{Q}) + \sum_{i=1}^{m}p_{\circlearrowright}^iH(\mathcal{P}^i). \end{equation} This equation comprises two terms: the first is the entropy of movements between modules, and the second is the entropy of movements within modules (where exiting the module is also considered a movement). Each is weighted by the frequency with which it occurs in the particular partitioning. Here $q_{\curvearrowright}$ is the probability that the random walk switches modules on any given step. $H(\mathcal{Q})$ is the entropy of the module names, i.e., the entropy of the underlined codewords in Fig.~\ref{fig1}D. $H(\mathcal{P}^i)$ is the entropy of the within-module movements --- including the exit code for module $i$. The weight $p_{\circlearrowright}^i$ is the fraction of within-module movements that occur in module $i$, plus the probability of exiting module $i$, such that $\sum_{i=1}^mp_{\circlearrowright}^i=1+q_{\curvearrowright}$ (see supporting online material for more details). For all but the smallest networks, it is infeasible to check all possible partitions to find the one that minimizes the description length in the map equation (Eq.~\ref{map}). Instead we use computational search. We first compute the fraction of time each node is visited by a random walker using the power method, and using these visit frequencies we explore the space of possible partitions using a deterministic greedy search algorithm \cite{clauset-2004-70,wakita}. We refine the results with a simulated annealing approach \cite{guimera-nature} using the heat-bath algorithm (see supporting online material for more details).
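The greedy stage can be sketched as a generic agglomerative loop over partitions. The sketch below is our own simplification for illustration: `cost` stands for any objective over a partition, such as the map equation, and a practical implementation would update $q_{i\curvearrowright}$ and $\sum_{\alpha \in i} p_\alpha$ incrementally rather than re-evaluating the whole partition at every step.

```python
import itertools

def greedy_merge(cost, n):
    """Agglomerative search: start with every node in its own module and
    repeatedly apply the pairwise merge that most decreases cost(partition),
    stopping when no merge improves it. A partition is a list of node sets."""
    part = [{v} for v in range(n)]
    improved = True
    while improved and len(part) > 1:
        improved = False
        best, best_cost = None, cost(part)
        for a, b in itertools.combinations(range(len(part)), 2):
            trial = [m for k, m in enumerate(part) if k not in (a, b)]
            trial.append(part[a] | part[b])
            c = cost(trial)
            if c < best_cost:
                best, best_cost, improved = trial, c, True
        if improved:
            part = best
    return part
```

Like any greedy procedure, this can stall in a local minimum, which is why the annealing refinement step follows it.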
Figure \ref{fig1}D shows the map of the network, with the within-module descriptors faded out; here the significant objects have been highlighted and the details have been filtered away.\bigskip In the interest of visual simplicity, the illustrative network in Fig.~\ref{fig1} has weighted but undirected links. Our method is developed more generally, so that we can extract information from networks with links that are directed in addition to being weighted. The map equation remains the same, only the path that we aim to describe must be slightly modified to achieve ergodicity. We introduce a small ``teleportation probability'' $\tau$ in the random walk: with probability $\tau$ the process jumps to a random node anywhere in the network. This converts our random walker into the sort of ``random surfer'' that drives Google's Page\-Rank algorithm \cite{google}. Our clustering results are highly robust to the particular choice of the small fraction $\tau$. For example, so long as $\tau < 0.45$ the optimal partitioning of the network in Fig.~\ref{fig1} remains exactly the same. In general, the more significant the regularities, the higher $\tau$ can be before frequent teleportation swamps the network structure. We choose $\tau=0.15$ corresponding to the well known damping factor $d=0.85$ in the Page\-Rank algorithm \cite{google}. \section*{Mapping flow compared to maximizing modularity} \begin{figure}[tbp] \centering \includegraphics[width=\columnwidth]{fig2.eps} \caption{Mapping flow compared to optimizing modularity in directed and weighted networks. The coloring of nodes illustrates alternative partitions of two sample networks. The left-hand partitions show the modular structure as optimized by the map equation (minimum $L$) and the right-hand partitions show the structure as optimized by modularity (maximum $Q$). 
In the network shown in panel A, the left-hand partition minimizes the map equation, because the persistence times in the modules are long; with the weight of the bold links set to twice the weight of other links, a random walker without teleportation takes on average 3 steps in a module before exiting. The right-hand clustering gives a longer description length, because a random walker takes on average only 12/5 steps in a module before exiting. The right-hand clustering maximizes the modularity, because modularity counts weights of links, the in-degree, and the out-degree in the modules; the right-hand partitioning places the heavily weighted links inside of the modules. In panel B, for the same reason, the right-hand partition again maximizes modularity. But not so the map equation. Because every node is either a sink or a source in this network, the links do not induce any long-range flow and the one-step walks are best described as in the left-hand partition, with all nodes in the same cluster.\label{fig2}} \end{figure} The traditional way of identifying community structure in directed and weighted networks has been to simply disregard the directions and the weights of the links. But such approaches discard valuable information about the network structure. By mapping the system-wide flow induced by local interactions between nodes, we retain the information about the directions and the weights of the links. We also acknowledge their interdependence in networks inherently characterized by flows. This makes it interesting to compare our flow-based approach with recent topological approaches based on modularity optimization that also make use of information about weight and direction \cite{newman-fast,arenasdirectedweighted,guimeradirected,leichtdirected}.
In its most general form, the modularity for a given partitioning of the network into $m$ modules is the sum of the total weight of all links in each module minus the expected weight \begin{equation}\label{modularity} Q = \sum_{i=1}^{m}\left(\frac{w_{ii}}{w} - \frac{w_{i}^{\mathrm{in}}w_{i}^{\mathrm{out}}}{w^2}\right). \end{equation} Here $w_{ii}$ is the total weight of links starting and ending in module $i$, $w_{i}^{\mathrm{in}}$ and $w_{i}^{\mathrm{out}}$ the total in- and out-weight of links in module $i$, and $w$ the total weight of all links in the network. To estimate the community structure in a network, Eq.~\ref{modularity} is maximized over all possible assignments of nodes into any number $m$ of modules. The two equations (\ref{map}) and (\ref{modularity}) reflect two different senses of what it means to have a network. The former, which we pursue here, finds the essence of a network in the patterns of flow that its structure induces. The latter effectively situates the essence of a network in the combinatoric properties of its links (as we did in ref.~\cite{RosvallAndBergstrom07}). Does this conceptual distinction make any practical difference? Figure \ref{fig2} illustrates two simple networks for which the map equation and modularity give different partitionings. The weighted, directed links shown in the network in panel A induce a structured pattern of flow with long persistence times in, and limited flow between, the four clusters as highlighted on the left. The map equation picks up on these structural regularities and thus the description length is much shorter for the partitioning in the left-hand figure (2.67 bits/step) than for the right-hand one (4.13 bits/step). Modularity is blind to the interdependence in networks characterized by flows, and thus cannot pick up on this type of structural regularity.
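For reference, the directed, weighted modularity defined above is straightforward to evaluate; a minimal sketch (our own, with a dict-of-edges input format):

```python
def modularity(w, modules):
    """Directed, weighted modularity Q = sum_i (w_ii/w - w_i_in * w_i_out / w^2).
    w: dict (u, v) -> weight; modules: dict node -> module id."""
    wtot = sum(w.values())
    w_ii, w_in, w_out = {}, {}, {}
    for (u, v), wt in w.items():
        i, j = modules[u], modules[v]
        w_out[i] = w_out.get(i, 0.0) + wt    # out-weight of u's module
        w_in[j] = w_in.get(j, 0.0) + wt      # in-weight of v's module
        if i == j:
            w_ii[i] = w_ii.get(i, 0.0) + wt  # weight staying inside module i
    return sum(w_ii.get(i, 0.0) / wtot
               - w_in.get(i, 0.0) * w_out.get(i, 0.0) / wtot**2
               for i in set(modules.values()))
```

Note that nothing in this computation depends on the order in which links are traversed, which is exactly why modularity cannot see the flow patterns discussed here.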
It only counts weights of links, in-degree, and out-degree in the modules, and thus prefers to partition the network as shown on the right, with the heavily weighted links inside of the modules. In panel B, by contrast, there is no pattern of extended flow at all. Every node is either a source or a sink, and no movement along the links on the network can exceed one step in length. As a result, random teleportation will dominate (irrespective of teleportation rate) and any partition into multiple modules will lead to a high flow between the modules. For a network such as in panel B, where the links do not induce a pattern of flow, the map equation will always partition the network into one single module. Modularity, because it looks at patterns in the links and in- and out-degree, separates the network into the clusters shown at right. Which method should a researcher use? It depends on which of the two senses of network, described above, one is studying. For analyzing network data where links represent patterns of movement among nodes, flow-based approaches such as the map equation are likely to identify the most important aspects of structure. For analyzing network data where links represent not flows but rather pairwise relationships, it may be useful to detect structure even where no flow exists. For these systems, combinatoric methods such as modularity \cite{girvan} or cluster-based compression \cite{RosvallAndBergstrom07} may be preferable. \section*{Mapping scientific communication} Science is a highly organized and parallel human endeavor to find patterns in nature; the process of communicating research findings is as essential to progress as is the act of conducting the research in the first place. Thus science is not merely a set of ideas, but also the flow of these ideas through a multipartite and highly differentiated social system.
Citation patterns among journals allow us to glimpse this flow, and provide the trace of communication between scientists \cite{price,small73,small,anegon,borner}. To highlight important fields and their relationships, to uncover differences and changes, to simplify and make the system comprehensible --- we need a good map of science. \begin{figure*}[tbp] \centering \includegraphics[width=\textwidth]{fig3.eps} \caption{A map of science based on citation patterns. We partitioned 6,128 journals connected by 6,434,916 citations into 88 modules and 3,024 directed and weighted links. For visual simplicity we show only the links that the random surfer traverses more than 1/5000th of her time, and we only show the modules that are visited via these links (see supporting online material for the complete list). Because of the automatic ranking of nodes and links by the random surfer \cite{google}, we are assured of showing the most important links and nodes. For this particular level of detail we capture 98\% of the node weights and 94\% of all flow. \label{fig3}} \end{figure*} \begin{figure*}[tbp] \centering \includegraphics[width=1.0\textwidth]{fig4.eps} \caption{ A map of the social sciences. The journals listed in the 2004 social science edition of Journal Citation Reports are a subset of those illustrated in Fig.~\ref{fig3}, totaling 1,431 journals and 217,287 citations. When we map this subset on its own, we get a finer level of resolution. The 10 modules that correspond to the social sciences are now partitioned into 54 modules, but for simplicity we show only links which the random surfer visits at least 1/2000th of her time, together with the modules they connect (see supporting online material for the complete list). For this particular level of detail we capture 97\% of the node weights and 90\% of all flow.\label{fig4}} \end{figure*} Using the information theoretic approach presented above, we map the flow of citations among 6,128 journals in the sciences (Fig.
\ref{fig3}) and social sciences (Fig. \ref{fig4}). The 6,434,916 citations in this cross-citation network represent a trace of the scientific activity during 2004 \cite{jsr}. Our data tally on a journal-by-journal basis the citations from articles published in 2004 to articles published in the previous five years. We exclude journals that publish fewer than 12 articles per year, and those which do not cite other journals within the data set. We also exclude the only three major journals that span a broad range of scientific disciplines: \emph{Science}, \emph{Nature}, and \emph{Proceedings of the National Academy of Sciences}; the broad scope of these journals otherwise creates an illusion of tighter connections among disciplines, when in fact few readers of the physics articles in \emph{Science} are also close readers of the biomedical articles therein. Because we are interested in relationships between journals, we also exclude journal self-citations. Through the operation of our algorithm, the fields and the boundaries between them emerge directly from the citation data, rather than from our preconceived notions of scientific taxonomy (see Figs.~\ref{fig3} and \ref{fig4}). Our only subjective contribution has been to suggest reasonable names for each cluster of journals that the algorithm identifies: economics, mathematics, geosciences, and so forth. The physical size of each module or ``field'' on the map reflects the fraction of time that a random surfer spends following citations within that module. Field sizes vary dramatically. Molecular biology includes 723 journals that span the areas of genetics, cell biology, biochemistry, immunology, and developmental biology; a random surfer spends 26\% of her time in this field, indicated by the size of the module. Tribology (the study of friction) includes only 7 journals, in which a random surfer spends 0.064\% of her time.
The weighted and directed links between fields represent citation flow, with the color and width of the arrows indicating flow volume. The heavy arrows between medicine and molecular biology indicate a massive traffic of citations between these disciplines. The arrows point in the direction of citation: $A \rightarrow B$ means ``$A$ cites $B$'' as shown in the legend. These directed links reveal the relationship between applied and basic sciences. We find that the former cite the latter extensively, but the reverse is not true, as we see e.g.\ with geotechnology citing geosciences, plastic surgery citing general medicine, and power systems citing general physics. The thickness of the module borders reflects the probability that a random surfer within the module will follow a citation to a journal outside of the module. These outflows show a large variation; for example the outflow is 30\% in general medicine but only 12\% in economics. The map reveals a ring-like structure in which all major disciplines are connected to one another by chains of citations --- but these connections are not always direct, because fields on opposite sides of the ring are linked only through intermediate fields. For example, while psychology rarely cites general physics or vice versa, psychology and general physics are connected via the strong links to and between the intermediaries molecular biology and chemistry. Once we consider the weights of the links among fields, however, it becomes clear that the structure of science is more like the letter $\mathbf{U}$ than like a ring, with the social sciences at one terminal and engineering at the other, joined mainly by a backbone of medicine, molecular biology, chemistry, and physics. Because our map shows the pattern of citations to research articles published within five years, it represents what de Sola Price called the ``research frontier,'' \cite{price} rather than the long-term interdependencies among fields.
For example, while mathematics is essential to all natural sciences, the field of mathematics is not central in our map because only certain subfields (e.g.\ areas of physics and statistics) rely heavily on the most recent developments in pure mathematics and contribute in return to the research agenda in that field. When a cartographer designs a map, the scale or scope of the map influences the choice of which objects are represented. A regional map omits many of the details that appear on a city map. Similarly, in the approach that we have developed here, the appropriate size or resolution of the modules depends on the universe of nodes that are included in the network. If we compare the map of a network to a map of a subset of the same network, we would expect the map of the subset to reveal finer divisions, with modules composed of fewer nodes. Figure \ref{fig4} illustrates this by partitioning a subset of the journals included in the map of science: the 1,431 journals in the social sciences. The basic structure of the fields and their relations remains unchanged, with psychiatry and psychology linked via sociology and management to economics and political science, but the map also reveals further details. Anthropology fractures along the physical / cultural divide. Sociology divides into behavioral and institutional clusters. Marketing secedes from management. Psychology and psychiatry reveal a set of applied subdisciplines. The additional level of detail in the more narrowly focused map would have been clutter on the full map of science. When we design maps to help us comprehend the world, we must find that balance where we eliminate extraneous detail but highlight the relationships among important structures. Here we have shown how to formalize this cartographer's precept using the mathematical apparatus of information theory.
\begin{acknowledgments} We are grateful to Jevin West for processing the data used to construct the maps in Figs.~3 and 4, and to Cynthia A. Brewer, \texttt{www.ColorBrewer.org}, for providing the color schemes we have used in Figs.~1--4. This work was supported by the National Institute of General Medical Sciences Models of Infectious Disease Agent Study program cooperative agreement 5U01GM07649. \end{acknowledgments}
\section{Introduction} Causality is a key notion in science and philosophy. Physical laws are often expressed in terms of a causal relation, helping in the relevant job of explanation and prediction. For example, Newton's second law predicts the force necessary (cause) to perform a desired acceleration (effect). In philosophy, the relevance of causality was highlighted by Aristotle. In the Posterior Analytics, he asserted that we think we have knowledge of a thing only when we have grasped its cause (APost. 71 b 9-11. Cf. APost. 94 a 20). Aristotle also advanced the distinction of causality into four major types or classes: material cause, formal cause, efficient cause and final cause. In the history of thought, Mill \cite{mill1869system} raised the importance of causality in experimental sciences. Inductive inferences from a limited number of observed cases to general instances are feasible because nature is governed by laws. So, the law of universal causation is the guarantee that there are principles that are meant to be discovered if we pursue them actively enough. From a theoretical point of view, our work shares Mill's ideas about the relevance of causality as a condition in the generation of objective scientific knowledge. But as an empirical approach to the analysis of causation, we also adopt Mach's point of view of causality \cite{mach1976knowledge} as a way of describing, instead of explaining, phenomena: causality is a way to relate cause and effect, not to explain the effect from the cause. Usually, causality is characterized as a relationship following the schema \cite{agueda2011causality} 'A causes B', where A is the cause, B the effect and 'causes' the causal particle. Traditionally, any causal relationship follows these guidelines \cite{pearl2009causality}: \begin{itemize} \item Temporality: causes generally precede their effects. \item Contiguity: causes are contiguous to the immediate effects.
\item Evidential: causes and effects provide evidence for each other. To these traditional ideas, we would like to add another: \item Imperfection: causes, effects and the cause-effect link are usually qualified by different degrees of strength. \end{itemize} This last property is endorsed by the presence of vague words in the aforementioned undisputed properties of causation, such as 'generally precede', 'immediate effects' or 'is evidence of'. It is a fact that, in many cases, causality is imperfect in nature and causal relations are a matter of degree, as we demonstrate and try to weight in this work. In classical logic, deduction is a crisp relation: a conclusion is reached or not from the premises. But as we have previously seen, the explanans sometimes includes imprecise generalizations instead of precise laws \cite{apt1990logic}. Therefore, the conclusion or explanandum should be a matter of degree. Imprecise generalizations are common in social sciences and often express tendencies or correlations rather than covering-law knowledge. According to this imprecise idea of causality, we have organized this paper as follows. First, in Section 2 we introduce theory about causality and related work in this field, giving the reader the context in which this work operates. Then, we explain how we generate the weighted graph in Section 3, both from the theoretical point of view, in the first subsection, and technically, in the second subsection. We propose a set of synthetic experiments and a real experiment to show the utility of our approach in Section 4. We conclude the paper with a set of conclusions and suggestions for further work in Section 5. \section{Mining causal sentences to generate knowledge} In \cite{puente2011text}, Puente, Sobrino, Olivas \& Merlo described a procedure to automatically display a causal graph from medical knowledge included in several medical texts.
A morphological and syntactical program was designed to analyze causal phrases denoted by words like 'cause', 'effect' or their synonyms, highlighting vague words that qualify the causal nodes or the links between them. Another C program received as input a set of tags from the previous parser, generating a template with a starting node (cause), a causal relation (denoted by lexical words), possibly qualified, and a final node (effect), possibly modified by a linguistic hedge showing its intensity. Once the system was developed, an experiment was performed to answer the question \textit{What provokes lung cancer?}, obtaining a set of causal sentences related to this topic. The whole process was unable to answer the question directly, but was capable of generating a causal graph with the topics involved in the proposed question, as shown in Figure \ref{fig:syn}. \begin{figure}[htb] \begin{center} \includegraphics[width=1\linewidth]{figures/CRISTINA_2.pdf} \end{center} \vspace{-.7cm} \caption{Causal representation related to the question \textit{What provokes lung cancer?}} \label{fig:syn} \end{figure} The problem with this causal graph is that the weight of each edge is a point estimation of the uncertainty \cite{merchan2019generating}, which represents it poorly. We would rather have a probability distribution on each edge of the graph to model its uncertainty. This is because we analyze several texts that express a different certainty degree, represented by an adverb, between the same cause and effect, and if we only take point estimations we lose properties of the uncertainty such as symmetry, skewness or kurtosis. That is why we propose to go a step further: to evaluate to what extent a cause provokes an effect and, if so, to quantify it. \vspace{-.3cm} \section{Generating a weighted graph from Text Causal Relations} \vspace{-.3cm} In this section we introduce, theoretically, the study of imperfection in causality by means of the extracted causal sentences.
Throughout this section, we try to combine two views of probability. The first is the probability viewed by logicists, which assigns a certainty factor to each of the rules connecting causes and effects. This view of probability is the one captured by the graph. The other is the subjective view of probability, followed by the Bayesian community, which assigns prior distributions and performs Bayesian inference to represent uncertainty. This is the probability that models the weights, or uncertainties, over the certainty factors of the graph. This is the main theoretical difference with respect to Bayesian networks \cite{koller2009probabilistic}, which only see probability from a subjective point of view. \subsection{Introducing uncertainty in certainty factors by probability distributions} In order to model the uncertainty in a more general and principled way \cite{bernardo2009bayesian}, we are going to model each adverb by a univariate probability density function (PDF) over the universe $[0,1]$. Certainty factors $\mathbf{x}$ in the graph are the latent variables that we are going to learn about by modelling them with probability distributions whose parametric form, belonging to the exponential family \cite{holland1981exponential}, is given by the adverbs retrieved from the text. This means that, for example, the adverb \textit{sometimes} is not going to be hard coded as an event with a $50$ percent probability of happening, but represented by a Gaussian PDF. The process is reversible by taking the MAP of the distribution: \vspace{-.1cm} \begin{align} x_{MAP} = \arg\max_{x \in \Omega} P(x), \end{align} where $\Omega = [0,1] \subset \mathbb{R}$. It is interesting to observe that by representing the adverbs with probability distributions we are proposing a generalization of the point estimation model. We are going to model each adverb with Gaussian, Beta and Exponential distributions.
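A minimal sketch of this representation (ours, with illustrative distribution parameters that are assumptions, not values from the paper) places each adverb's PDF on a grid over $[0,1]$ and recovers the point estimate via the MAP:

```python
import numpy as np

GRID = np.linspace(0.0, 1.0, 1001)      # grid approximation of the universe [0, 1]

def normalize(p):
    """Turn nonnegative grid values into a discrete distribution."""
    return p / p.sum()

def gaussian(mu, sigma):
    """Gaussian PDF evaluated on the grid and renormalized (truncated to [0, 1])."""
    return normalize(np.exp(-0.5 * ((GRID - mu) / sigma) ** 2))

# Illustrative adverb priors; the means and widths are our assumptions.
ADVERBS = {
    "hardly ever": gaussian(0.15, 0.05),              # narrow, low certainty factor
    "sometimes":   gaussian(0.50, 0.15),              # broad, high uncertainty
    "always":      normalize(np.exp(-40.0 * (1.0 - GRID))),  # spiky exponential near 1
}

def map_estimate(p):
    """x_MAP = argmax_x P(x), recovered from the grid."""
    return GRID[np.argmax(p)]
```

For instance, `map_estimate(ADVERBS["sometimes"])` recovers the point estimate `0.5`, while the spiky exponential for \textit{always} peaks at `1.0` without placing all its mass there.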
For example, consider the adverb \textit{hardly ever} w.r.t.\ \textit{sometimes}. When we qualify a causal relation by using the adverb \textit{sometimes}, the uncertainty is broad: the relation may hold with a $25\%$ or a $75\%$ probability. On the other hand, when using \textit{hardly ever} the uncertainty interval is low, maybe just a $10\%$ or a $20\%$ probability. By representing \textit{hardly ever} with a point estimate alone we cannot capture this information. Probability distributions fit this scenario, assigning a probability mass to every possible certainty factor in the $[0,1]$ universe. It is common that causal relations come qualified with adverbs such as \textit{always}, and we could think of placing a Delta distribution over $1$; but, as David Hume argued about causality \cite{hume2011letters}, we cannot be sure about the future effects of a cause just from its past effects, since the induction principle is arguable. We therefore place a spiky exponential distribution instead, in case the causal relation does not hold in the future by the random law of probability of rare events. We can observe these distributions in Figure \ref{fig:adverbs}. \begin{figure}[htb] \begin{center} \begin{tabular}{cc} \includegraphics[width=0.49\linewidth]{figures/sometimes.jpg} & \includegraphics[width=0.49\linewidth]{figures/he.jpg}\\ \end{tabular} \caption{Prior distributions of two of the considered time adverbs.} \label{fig:adverbs} \end{center} \end{figure} Causal relations can appear in multiple texts stating the same relation $x$ but qualified by different time adverbs $a_i,a_j \in A$, where $A$ is the set of all possible adverbs. In this case, we are interested in inferring a posterior distribution $p(x|a_i,a_j)$ over the certainty factor latent variable of the relation $x$ that represents a mixture of the uncertainties of the analyzed time adverbs $p(x|a_i), p(x|a_j)$.
We propose a learning method to compute such a posterior distribution $p(x|a_1,...,a_n)$ for every certainty factor latent variable. First, we infer the posterior distribution $p(x)$ from two prior distributions by multiplication, $p(x|a_i)p(x|a_j)$. In order to do so, we discretize the value of these distributions in the $[0,1]$ universe by approximating them on a grid $\mathbf{g}$ over $[0,1]$, which is commonly known as a grid approximation. The grid approximation method for multiplying distributions is inadvisable for multivariate distributions \cite{mcelreath2018statistical}, but as we are considering univariate distributions it is a plausible method. After the multiplication we normalize the posterior distribution by dividing each value in the grid by the sum of all the values in the grid, $p(x_k) \leftarrow p(x_k) / \sum_{j=1}^N p(x_j)$. We have considered this process since the resultant distribution will have lower entropy than the previous two distributions if these distributions are similar. If we analyze a high number of causal relations and all the time adverbs are similar, then we are interested in concluding that we are sure about the uncertainty degree over the certainty factor latent variable (Figure \ref{fig:learning}). \begin{figure}[htb] \begin{tabular}{ccc} \includegraphics[width=0.49\linewidth]{figures/prior.pdf}& \includegraphics[width=0.49\linewidth]{figures/posterior.pdf}\\ \end{tabular} \caption{Learning process. (Left) Prior distribution. (Right) Posterior distribution.} \label{fig:learning} \end{figure} Moreover, if the time adverbs are not similar, we have learned contradictory information about the certainty factors (Figure \ref{fig:posteriors}). \begin{figure}[htb] \begin{center} \begin{tabular}{cc} \includegraphics[width=0.45\linewidth]{figures/certain.jpg}& \includegraphics[width=0.45\linewidth]{figures/contradictory.jpg}\\ \end{tabular} \caption{Posterior distributions about the certainty factors latent variables.
(Left) Low entropy, useful information. (Right) High entropy, contradictory information.} \label{fig:posteriors} \end{center} \end{figure} If we want to generate text that represents the learned PDF, we need to find the adverb that best represents the posterior distribution $p(x|a_1,...,a_n)$. We do so by minimizing the KL divergence between the posterior and the adverbs. The KL divergence is a measure of the discrepancy between distributions \cite{murphy2012machine}: \vspace{-.3cm} \begin{align} a^\star = \arg\min_{a \in \mathbf{A}} KL(a || P), \end{align} where $a$ represents the PDF of an adverb, $P$ the learned posterior and $\mathbf{A}$ is the space of all the considered adverb priors. The KL divergence between two distributions $P$ and $Q$ is given by the following expression, where we approximate the integral by grid approximation: \begin{align} D_{KL}(P || Q) = \int_{-\infty}^{+\infty} p(x)\log\left(\frac{p(x)}{q(x)}\right) dx. \end{align} In order to infer conditional distributions in the graph, we just multiply the PDFs that relate the intermediate concepts. We can find the adverb that best represents the joint probability distribution by minimizing the KL divergence. For example, let us consider that a cause $a$ produces an effect $b$ and the effect $b$ produces another effect $c$. We have learned $P(b|a)$ and $P(c|b)$ by multiplying distributions, so we now need to compute $P(c|b,a)$, which is given by $P(c|b,a) = P(c|b)P(b|a)$. This expression generalizes to any potential chain of events in the graph. We can also take a point estimation of $P(z|x_1,...,x_n)$ by computing its MAP. If we think that $P(z|x_1,...,x_n)$ may be multi-modal, we can resort to a sampling algorithm such as the Metropolis-Hastings algorithm and then compute a summary statistic such as the mean or median. It is important to note that this pondered graph is not a Bayesian network.
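Combining the grid multiplication described above with this KL-based selection, a minimal sketch (our own; the adverb priors and their parameters are illustrative assumptions) looks as follows:

```python
import numpy as np

GRID = np.linspace(0.0, 1.0, 1001)      # grid approximation of [0, 1]

def normalize(p):
    return p / p.sum()

def gaussian(mu, sigma):
    return normalize(np.exp(-0.5 * ((GRID - mu) / sigma) ** 2))

# Illustrative adverb priors (parameters are our assumptions, not the paper's).
ADVERBS = {"sometimes": gaussian(0.5, 0.15),
           "often":     gaussian(0.7, 0.10),
           "usually":   gaussian(0.8, 0.07)}

def posterior(adverb_names):
    """p(x | a_1, ..., a_n): multiply the adverb priors on the grid, then normalize."""
    p = np.ones_like(GRID)
    for name in adverb_names:
        p = p * ADVERBS[name]
    return normalize(p)

def kl(p, q, eps=1e-12):
    """Grid approximation of D_KL(p || q); eps guards against log(0)."""
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def best_adverb(p):
    """a* = argmin_a KL(a || P): the adverb whose prior is closest to the posterior."""
    return min(ADVERBS, key=lambda name: kl(ADVERBS[name], p))
```

Two agreeing observations of \textit{often} produce a narrower (lower-entropy) posterior that is still best summarized by \textit{often}, matching the learning behavior described above.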
In Bayesian networks the probability distributions are associated with the nodes of the graph \cite{jensen1996introduction}, whereas in our case the probability distributions are associated with the edges of the graph. For the sake of understanding we have denoted the probability distribution between the nodes $a$ and $b$ as $P(b|a)$, but this is not the relation that is established in Bayesian networks. Here, we do not know the probability of $a$, as the node $a$ does not represent a random variable but a cause or effect, in the sense that logicists view an effect as caused by a rule. What we are modelling is the uncertainty in the causal relation between concepts $a$ and $b$. This uncertainty is the one represented by $P(b|a)$: the uncertainty over each possible certainty factor between the concepts $a$ and $b$. The applications of our proposed model and of Bayesian networks are different. Our model can be used, as we will see in the experiments section, to represent the uncertainty over learned sets of rules, each of which connects the cause $a$ and effect $b$ with a different certainty factor $c \in [0,1]$. As far as we know, no model is able to provide an estimation of $p(b|a)$ in the logicist view of probability for the certainty factor $c$ that connects $a$ and $b$, only a maximum a posteriori estimation of $c$, which is equal to the maximum of the probability distribution that our model infers and proposes as a novel contribution: $c = \arg \max p(b|a)$. This uncertainty over the certainty factor between $a$ and $b$, over which we provide a probability distribution, is modelled by a random variable $\epsilon$ defined on a probability space $(E,\Omega,P)$, where $E$ is the set of all events and $\Omega$ is a $\sigma$-algebra over $E$.
We model each different adverb found for the same cause and effect by a different probability distribution and make inference over them all, but the surrogate random variable between cause and effect is the same. This does not happen in Bayesian networks, where each node represents a random variable. Bayesian networks are probabilistic graphical models that encode conditional independence; they also model causation, but in a different way: in these models, the edges represent conditional dependence and are not random variables. The nodes of the graph are the random variables that define a factorized representation of the joint probability distribution, taking into account the conditional independences defined by the graph. If an edge between $a$ and $b$ exists, it means that $p(b|a)$ is a factor in the joint probability distribution. Then, knowing values of $a$ and $b$, we can conduct inference. We can see that the applications of Bayesian networks are totally different from those of our proposed model. While Bayesian networks perform inference over different random variables in a graphical model to represent the global uncertainty of the joint probability distribution of the model, our method performs inference over the random variable modelling the uncertainty of each certainty factor, to faithfully represent the uncertainty of each causal relation. Bayesian networks are purely subjectivist models, while ours is a mixture of the logicist and subjectivist views. We now repeat the same inference procedure for every pair of retrieved nodes connected by an edge. This generates a pondered graph that connects every pair of nodes by a probability distribution. We can then compute the probability distribution of nodes connected by two or more edges as the product of the probability distributions of all the edges that connect them.
Suppose that two nodes $a$ and $c$ are connected through a set of intermediate nodes $\mathbf{b}$, which we can denote $P(c|a,b_1,...,b_N)$; then, following the multiplication rule above, this is equivalent to connecting $a$ and $c$ with the probability distribution $P(c|a) = P(c|b_N)\left(\prod_{i=2}^{N} P(b_i|b_{i-1})\right)P(b_1|a)$, where $N$ is the number of nodes between $a$ and $c$. Computing these posteriors is useful to answer questions that involve concepts whose causal relations do not relate them explicitly but that are implicitly connected. Figure \ref{fig:dfd} shows the architecture of our system, which automatically creates the pondered graphs for visualization and representation of the uncertainty. \begin{figure}[htb] \begin{center} \includegraphics[width=0.7\linewidth]{figures/DFD_Weighted_Graph_Generation.pdf} \end{center} \vspace{-.7cm} \caption{Flow diagram of our proposed system architecture.} \label{fig:dfd} \end{figure} We can see that our model represents the uncertainty over the logicist certainty factors. Previous models only output a certainty factor; our proposed model adds one additional dimension of uncertainty, modelling it better. Applications include better decision analysis with respect to causation between nodes, a better representation of the uncertainty when several different retrieved causal relations involving different adverbs between a cause and an effect exist in texts, a visualization of the causation between nodes in a graph, and the treatment of fake news or fake information retrieved from several sources, detected by high-entropy posterior distributions or a high KL divergence with respect to a new causal relation. \section{Experiments} In this section we show the usefulness of our proposed approach in a set of synthetic experiments and in a real experiment involving lung cancer. These experiments provide empirical support for our hypothesis that a pondered graph can be generated from a set of texts belonging to a particular domain.
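The edge-chaining rule described in the previous section can be sketched on the grid as follows (our illustrative code; the toy graph $a \to b \to c$ and its edge distributions are assumptions, not learned from text):

```python
import numpy as np

GRID = np.linspace(0.0, 1.0, 1001)      # grid approximation of [0, 1]

def normalize(p):
    return p / p.sum()

def gaussian(mu, sigma):
    return normalize(np.exp(-0.5 * ((GRID - mu) / sigma) ** 2))

# Edge distributions of a toy pondered graph a -> b -> c (illustrative values).
EDGES = {("a", "b"): gaussian(0.6, 0.1),
         ("b", "c"): gaussian(0.7, 0.1)}

def chain(path):
    """Compose the edge PDFs along a path by pointwise product on the grid,
    renormalized: the multiplication rule used for P(c|a) in the paper."""
    p = np.ones_like(GRID)
    for edge in zip(path, path[1:]):
        p = p * EDGES[edge]
    return normalize(p)

def map_estimate(p):
    return GRID[np.argmax(p)]
```

Multiplying two equal-variance Gaussian edges with means 0.6 and 0.7 yields a composed distribution whose MAP is 0.65, so a point estimate for the implicit relation $a \to c$ can be read off the chained posterior.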
\subsection{Synthetic Experiments} It is interesting to see how our tool displays the causal information described in previous sections. Just to show the pondered graphs that the system is capable of generating, we create a toy problem where we sample the following causal relations: \textit{A -> C. C -> D. C -> B}, where \textit{A,B,C,D} are the nodes of the causal graph. We configure the generation script to sample random causal relations between all the \textit{A,B,C,D} nodes and retain only the causal relations \textit{A -> C. C -> D. C -> B}. We configure the data generation script to generate different numbers of causal relations, just to see how the posterior distributions behave after being computed from a different number of causal relations. We also configure a scenario where the sampled adverbs are always similar, where we expect the posterior distributions to have low entropy. We show the learned pondered graphs of this toy problem in Figure \ref{fig:toy}. \begin{figure}[htb] \begin{center} \begin{tabular}{cc} \includegraphics[width=0.475\linewidth]{figures/toy_1.pdf}& \includegraphics[width=0.475\linewidth]{figures/toy_2.pdf}\\ \end{tabular} \end{center} \vspace{-.7cm} \caption{Learned pondered graphs of a toy problem.} \label{fig:toy} \end{figure} We can see in Figure \ref{fig:toy} different pondered graphs that the system has learned by processing the causal relations that the data generator file has generated. The size of an edge is proportional to the number of analyzed causal relations between the nodes that it connects. The adverb plotted on an edge is the adverb whose probability distribution minimizes the KL divergence with respect to the inferred posterior distribution of that edge. In the first pondered graph we can see how the probability distributions are spiky, i.e., have low entropy.
These distributions have been inferred from causal relations that contained very similar adverbs, so the prior distributions associated with these adverbs were similar. We can say in this scenario that we are sure about the learned knowledge, or at least that our uncertainty over the acquired knowledge is low. On the other hand, in the second scenario we have generated a very small number of causal relations, so the output did not even acquire all the valid causal relations and the uncertainty over the causality of these concepts is very high. The computational time of this process grows linearly in both the number of causal relations and the size of the grid used to approximate the posterior distributions, that is, $\mathcal{O}(NR)$, where $N$ is the number of causal relations and $R$ is the size of the grid. \vspace{-.5cm} \subsection{Real Experiment} Having tested the system in the previous section with a set of synthetic experiments, we are now interested in showing a real case scenario where our approach can provide an elegant representation of the causal knowledge hidden in a set of texts. We have analyzed several texts from the lung cancer domain using the mentioned architecture. By analyzing those texts, our system has inferred the graph whose main part is shown in Figure \ref{fig:real}. We have cut the other causal relations of the graph for the sake of visibility. \begin{figure}[htb] \begin{center} \includegraphics[width=1\linewidth]{figures/lung_cancer_cut.pdf} \end{center} \vspace{-.7cm} \caption{Partial section of the causal graph computed from the lung cancer scenario.} \label{fig:real} \end{figure} We can observe a great variety of posterior distributions connecting each cause and effect in the graph. For example, we learn that lung cancer causes death about half of the time, or that inhaling radon gas is a frequent cause of developing lung cancer.
We have also learned that the probability of developing lung cancer with respect to the workplace is very uncertain. The accuracy of our system can be seen in that the probability distribution of the causal relation involving heavy smoking with lung cancer lies closer to 1 than the probability distribution relating smoking with lung cancer. This validates the hypothesis that the amount of smoking is directly proportional to the risk of developing lung cancer, and that our system is capable of inferring posterior probability distributions that are coherent with prior premises. \section{Conclusions and further work} We have proposed a methodology that generates pondered causal graphs from sets of analyzed texts of particular domains. By pondering the causal relations that appear in these texts, we have represented, with probability distributions, the uncertainty involving these causal relations. We have illustrated the theoretical and practical details of our methodology. Our system serves as a representation of the acquired knowledge and as a question-answering system \cite{puente2013answering} with uncertainty. In order to also represent vagueness we will resort to fuzzy logic \cite{yen1999fuzzy} and build a similar methodology to compare both approaches. We will also propose a system that uses the model to tackle fake news \cite{lazer2018science}. Summarizing the causal information \cite{puente2013creating} \cite{puente2017summarizing} \cite{puente2015summarizing} with the learned uncertainty is also a pending task. Another interesting line of work is proposing an evaluation measure of the graph and then optimizing the priors and the types of the distributions, to determine which prior distributions and hyperparameters are best. In order to do so we will resort to multiobjective Bayesian Optimization with Constraints \cite{garrido2019predictive}.
Finally, we plan to integrate the weighted causal graph in a machine consciousness architecture implemented in robots \cite{merchn2020machine}, in order to make them ask and answer questions from their knowledge base, exhibiting human-like behaviour. \section*{Acknowledgments} The authors gratefully acknowledge the use of the facilities of Centro de Computaci\'on Cient\'ifica (CCC) at Universidad Aut\'onoma de Madrid. The authors also acknowledge financial support from Spanish Plan Nacional I+D+i, grants TIN2016-76406-P and TEC2016-81900-REDT. \bibliographystyle{acm}
\section{Introduction} Redshift is an omnipresent Doppler-effect-related quantity, used in cosmology to build meaningful spatial and temporal distances. In 1962, Sandage and McVittie \cite{sandage1962change,mcvittie1962appendix} showed that the redshift is in fact a dynamical quantity, now labeled as redshift drift. A redshift drift driven by the expansion of the Universe is called a cosmological drift\footnote{Other contributions from peculiar velocities and inhomogeneities exist, and are discussed in \cite{linder2010constraining}.}, with the interesting consequence of turning redshift-dependent quantities into time-dependent ones. The cosmological drift depends on the Hubble rate $H(z)$, and is expected to be very small at low redshift $z$, with a redshift change of the order of $10^{-10}$ per year. However, it applies to every object in the Universe, which makes all of them possible probes of the drift, and collaborations such as the Extremely Large Telescope, the Square Kilometre Array or the Vera C.~Rubin Observatory will provide means of its detection. Optimistic estimates show that these facilities could reach a precision of $10^{-10}$, for example with monitoring programs of 1000 hours of exposure with a 40-meter telescope \cite{Kim:2014uha}. For this reason, cosmology with redshift drifts is an active field of research \cite{TeppaPannia:2013lbl,Mishra:2014vga,Martins:2016bbi,Pandolfi:2014nfa,Alves:2019hrg,Klockner:2015rqa,Piattella:2017uat,Melia:2016bnb,TeppaPannia:2018ale,Bolejko:2019vni,Martinelli:2012vq,Quartin:2009xr,Giani:2020fpz,Codur:2021wrt}, which we advance with this work by investigating how the cosmological drift affects the image of black hole shadows and the interferometric signatures of black hole photon rings.
Black holes were first predicted over a century ago, yet the first direct image of a black hole was reconstructed only very recently by the Event Horizon Telescope (EHT) collaboration \cite{Akiyama:2019bqs,Akiyama:2019brx,Akiyama:2019cqa,Akiyama:2019eap,Akiyama:2019fyp,Akiyama:2019sww,Akiyama:2021qum}. The direct detection of the shadow of M87$^{\star}$, the supermassive black hole at the centre of the galaxy Messier 87, suggests we will soon be able to obtain black hole parameters such as their mass, spin and charge in a systematic way. Black hole shadows and photon rings are observables that can be used to infer these parameters \cite{Bardeen:1973tla,Bambi:2019tjh,Allahyari:2019jqz,Hadar:2020fda,Himwich:2020msm,Ghosh:2020spb,Afrin:2021imp,Broderick:2021ohx,Roelofs:2021wdi}, and are particularly interesting for precisely constraining gravity in the strong-field regime \cite{Vagnozzi:2019apd,Khodadi:2020jij,Gralla:2020srx,Psaltis:2020lvx,Li:2021mzq,Aratore:2021usi}. In fact, black hole shadows could potentially be used as standard rulers to infer the present-day value of the Hubble parameter $H_0$ once the mass of the black hole has been inferred independently \cite{Qi:2019zdk}. These aspects (and more) of black hole shadows are covered in the recent review \cite{Perlick:2021aok}. The shadow of a black hole is surrounded by a bright area, clearly visible in the image resolved by the EHT. This image was obtained using a baseline of roughly the same size as the diameter of the Earth. As a consequence, most of the luminosity comes from astrophysical processes taking place in the disk and jet of M87$^{\star}$. To distinguish the luminosity emitted by the rings, much longer baselines are required \cite{Johnson:2019ljv,Gralla:2020nwp}.
While the luminosity flux of photon rings is exponentially decreasing with each orbit, the signal is dominated by periodic universal interferometric signatures for baselines greater than approximately 20~G$\lambda$\footnote{Baselines are often given in units of wavelengths. This is because the telescope resolution (in radians) is obtained through the relation $R=\lambda/u$, with $u$ the baseline. Therefore, when $u$ is proportional to the wavelength, the resolution is simply $R=1/u$.} (see figure 4 of \cite{Johnson:2019ljv}), with a period inversely proportional to the diameter of the ring. We show in this work that cosmological expansion affects both the angular size of black hole shadows and the periodic signal in the same fashion. Furthermore, we discuss the possibility of using the non-detection of a black hole shadow drift as another probe of the equivalence principle. \clearpage \section{Black hole shadow drift} \subsection{Schwarzschild black hole in an expanding universe} \label{sec_schwarz} The McVittie metric \cite{McVittie:1933zz} is a hybrid solution recovering the Schwarzschild metric near the black hole and the flat Friedmann-Lemaître-Robertson-Walker (FLRW) metric far away from it. Therefore, when the metric is well-defined, it describes a spherically symmetric black hole in an expanding Universe. Its line element is usually written as \begin{align} \textup{d} s^2=-\left(\frac{1-\mu}{1+\mu}\right)^2 c^2 \textup{d} t^2 + (1+\mu)^4 \, a^2(t) \, \left(\textup{d} l^2+l^2\textup{d} \Omega^2 \right) \;, \end{align} with \begin{align} \mu := \frac{m}{2a(t) l} \;. \end{align} In this definition, $l$ is the radial comoving distance, $a(t)$ is the scale factor characterising the expansion of the Universe, and $m=GM/c^2$, with $M$ the mass of the black hole.
For an observer near the black hole, the scale factor is nearly constant and evaluated at $t=t_0$, which allows one to recover the Schwarzschild metric with the change of variable \begin{align} R = r \left(1+\frac{m}{2r}\right)^2 \;, \end{align} and the definition $r:=a(t_0)\, l$. The position of the observer at time $t_0$ is denoted $R_O$ in these coordinates, and $t_0$ corresponds to the present time. An approximate solution for black hole shadows in a McVittie spacetime has been found as a composite solution by employing the technique of matching solutions for the Einstein equations \cite{Tsupko:2019mfo}: \begin{align} \alpha_{\textup{appr}}(R_O) = \alpha_{\textup{schw}}(R_O) + \alpha_{\textup{cosm}}(R_O) - \alpha_{\textup{overlap}}(R_O) \;, \end{align} where the three pieces on the right-hand side of the equation are respectively the angular diameter of the shadow when observed from near the Schwarzschild black hole, at cosmological scales, and in an intermediate region far from the black hole in which the cosmological expansion is still negligible. These are defined as: \begin{align} \alpha_{\textup{schw}}(R_O) &=\left\{ \begin{aligned} \pi -\arcsin({3\sqrt{3}m\sqrt{1-2m/R_O}/R_O}) \quad &\text{for}\, 2m\leq R_O \leq 3m \\ \arcsin({3\sqrt{3}m\sqrt{1-2m/R_O}/R_O})\quad &\text{for}\, R_O \geq 3m \\ \end{aligned} \right. \\ \alpha_{\textup{overlap}}(R_O) &= \frac{3\sqrt{3}m}{R_O} \hspace{6.05cm} \text{for}\, m\ll R_O \ll c/H_0 \\ \alpha_{\textup{cosm}}(R_O) &= \frac{3\sqrt{3}m}{D_A(z)} \hspace{6.05cm} \textup{otherwise} \;. \label{cosmosol} \end{align} The cosmological solution depends on the redshift $z$, and, in a flat universe, the apparent size for an observer at cosmological distances depends on the angular diameter distance $D_A(z)$ defined by \begin{align} \label{angular_distance} D_A(z):=\frac{c}{1+z} \int_0^{z} \frac{d\Tilde{z}}{H(\Tilde{z})}=\frac{1}{1+z}\chi\;, \end{align} with $\chi$ the comoving distance to the black hole.
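The angular diameter distance \eqref{angular_distance} is easily evaluated numerically. As a sketch (ours, for illustration only), assuming a flat $\Lambda$CDM Hubble rate with fiducial densities:

```python
import math

C_KM_S = 299792.458          # speed of light [km/s]

def hubble(z, H0=67.8, Om=0.31):
    """Flat-LCDM Hubble rate H(z) [km/s/Mpc]; radiation neglected, OL = 1 - Om."""
    return H0 * math.sqrt(Om * (1 + z) ** 3 + (1.0 - Om))

def angular_diameter_distance(z, n=10_000):
    """D_A(z) = c/(1+z) * int_0^z dz'/H(z'), via the trapezoidal rule [Mpc]."""
    dz = z / n
    integral = sum((1.0 / hubble(i * dz) + 1.0 / hubble((i + 1) * dz)) * dz / 2.0
                   for i in range(n))
    return C_KM_S / (1 + z) * integral
```

At $z=0.004$ this gives $D_A \approx 17.6$ Mpc, close to the low-redshift limit $cz/H_0$, as expected.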
The overlap region is well-defined for an observer at a distance much greater than $GM/c^2$. The most massive black holes have a mass reaching $10^{41}$ kg, which corresponds to $z\approx 10^{-14}$. The upper bound was derived assuming that $z\ll 1$ in \cite{Tsupko:2019mfo}. However, since cosmological expansion can be neglected only at scales below the scale of galaxy clusters, and remembering that the closest galaxy clusters are situated at a redshift of $z\approx 10^{-3}$, we can assume that the overlap region should lie in the redshift span $10^{-14} \ll z \ll 10^{-3}$. \subsection{Shadow drift} By construction, only the cosmological term \eqref{cosmosol} of the composite solution is redshift-dependent. For an observer sufficiently far away from the black hole, this very term is the only one relevant. Therefore, any drift in the shadow angular radius is fully contained into the relation (omitting $R_O$ for conciseness) \begin{align} \frac{\dot{\alpha}_{\textup{appr}}}{\alpha_{\textup{appr}}} = \frac{\dot{\alpha}_{\textup{cosm}}}{\alpha_{\textup{cosm}}} \;. \end{align} The time derivative of $\alpha_{\textup{cosm}}$, \begin{align} \dot{\alpha}_{\textup{cosm}} = -3\sqrt{3} m \frac{\dot{z}}{D_A^2(z)} \frac{\textup{d}D_A}{\textup{d}z} = -\alpha_{\textup{cosm}} \frac{\dot{z}}{D_A(z)}\frac{\textup{d}D_A}{\textup{d}z}\;, \end{align} explicitly depends on the variation of the angular diameter distance \eqref{angular_distance} with redshift and on the redshift drift $\dot{z}$. The first quantity is \begin{align} \label{angdistder} \frac{\textup{d}D_A}{\textup{d}z} = -\frac{D_A(z)}{1+z} \;. 
\end{align} Recalling that the redshift drift in a FLRW background is \cite{sandage1962change,mcvittie1962appendix,linder1997first,Piattella:2015xga} \begin{align} \label{redshift_drift} \frac{\textup{d}z}{\textup{d}t} = H_0 (1+z) -H(z)\;, \end{align} we find the \textit{shadow drift} \begin{align} \label{shadow_drift} \frac{\dot{\alpha}}{\alpha} = H_0 -\frac{H(z)}{1+z} \;, \end{align} in which we have dropped the subscript in the angular radius for conciseness. In a flat $\Lambda$CDM universe, the Hubble parameter is related to the matter density $\Omega_{m0}$, the radiation density $\Omega_{r0}$ and the dark energy density $\Omega_{\Lambda}$ through \begin{align}\label{H(z)LCDM} H(z) = H_0 \sqrt{\Omega_{m0}(1+z)^3+\Omega_{r0}(1+z)^4+\Omega_{\Lambda}} \;. \end{align} In Fig.~\ref{fig_shadow_drift}, the left figure shows the shadow drift normalised with $H_0$ for three sets of dimensionless densities, assuming $\Omega_{r0}=0$. For each set of densities, drift effects grow in magnitude with redshift, but the luminosity drops quickly, which makes high-redshift black hole shadows impractical targets given the current status of observations. Indeed, the observed flux of a source located at redshift $z$ decreases as $(1+z)^2$, while the surface brightness decreases as $(1+z)^4$, so a source at $z=1$ would see its surface brightness reduced by a factor 16 \cite{Vagnozzi:2020quf}. We note that there is a small bump at low redshift, around $z=0.5$, where the drift is potentially easier to observe. In the right figure, we display the variation of the apparent angular radius $\dot{\alpha}$. The plot shows that, given an independent measure of the black hole's mass, we can obtain an independent measure of the Hubble rate today. We have used the density values $\Omega_{m0}=0.33$, $\Omega_{r0}=0$ and $\Omega_{\Lambda}=0.67$ to obtain this figure.
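The low-redshift bump in the normalised drift can be checked numerically from Eqs.~\eqref{shadow_drift} and \eqref{H(z)LCDM}. The following sketch (ours, with the figure's densities) locates the maximum:

```python
import math

def normalized_shadow_drift(z, Om=0.33, OL=0.67):
    """(alpha_dot/alpha)/H0 = 1 - E(z)/(1+z), with E(z) = H(z)/H0 (flat LCDM, no radiation)."""
    E = math.sqrt(Om * (1 + z) ** 3 + OL)
    return 1.0 - E / (1 + z)

# Scan low redshifts for the maximum of the normalised drift (the "bump").
zs = [i / 1000 for i in range(1, 2001)]
z_max = max(zs, key=normalized_shadow_drift)
```

The scan yields a maximum near $z \approx 0.6$ of about $0.11$, consistent with the $\sim 0.1$ quoted in the caption of Fig.~\ref{fig_shadow_drift}, while at high redshift the drift turns negative.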
\begin{figure} \centering \begin{minipage}[c]{\textwidth} \centering \includegraphics[scale=0.35]{shadow_drift_Om.png} \includegraphics[scale=0.35]{shadow_drift_h0.png} \caption{Left: Normalised shadow drift as a function of redshift for three sets of matter and dark energy dimensionless densities. At low redshift, the maximum normalised shadow drift is about $0.1$ for the three sets, which results in a shadow drift $\dot{\alpha}/\alpha$ of $10^{-14}$ day$^{-1}$. Right: Variation of the apparent angular diameter for three different values of the present Hubble rate. We used density values of $\Omega_m=0.33$ and $\Omega_{\Lambda}=0.67$, and assumed an apparent angular diameter $d=40\,\mu$as.\vspace{1cm}} \label{fig_shadow_drift} \end{minipage} \end{figure} \subsection[Shadow drift of M87*]{Shadow drift of M87$^{\star}$} Let us work out as an example the shadow of the black hole at the centre of the galaxy Messier 87, M87$^{\star}$. The heliocentric radial velocity of M87$^{\star}$ is around 1284 km/s \cite{cappellari2011atlas3d}, with a peculiar velocity of the order of $\sim$ 10\% of the total velocity \cite{lisker2018active}, which we assume to be too small to have a noticeable impact on the shadow drift. We also assume the contribution from peculiar accelerations to be negligible, \textit{i.e.}, we suppose the peculiar velocity will not change noticeably. Furthermore, the estimated distance of M87$^{\star}$ from us corresponds to a redshift of roughly $z\sim 0.004$ \cite{cappellari2011atlas3d}. This distance corresponds to an $R_O$ near the $z \sim 10^{-3}$ upper bound we found for the overlap region defined in the previous section. However, for illustrative purposes, we will assume M87$^{\star}$ to be at cosmological distances in order to make a quantitative estimate of the magnitude of the drift effects. At sufficiently low redshift ($z\ll1$), we can safely ignore the contribution from radiation in Eq. \eqref{H(z)LCDM}.
We use the density values $\Omega_{m0}\simeq 0.31$, $\Omega_{\Lambda}\simeq 0.69$, and the Hubble rate today $H_0 = 67.8$ km/s/Mpc. The shadow drift is then: \begin{equation} \frac{\dot{\alpha}}{\alpha} \approx 2 H_0 \times 10^{-3}\approx 3.8 \times 10^{-16}\,\textup{day}^{-1} \;. \end{equation} This is a tiny variation, far beyond the angular resolution the EHT can achieve from Earth.\footnote{It must be stressed that the EHT is currently capable of resolving objects of size $d\approx 25\,\mu$as, even though through imaging techniques they were able to reach a sensitivity of the order of $\approx 1\,\mu$as for M87$^*$ and SgrA$^*$ (see details in EHT IV \cite{Akiyama:2019bqs}).} Indeed, even for the future Plateau de Bure--South Pole baseline, which should reach a resolution of 15 $\mu$as at 345 GHz, this seems like an infeasible task. An Earth--Moon baseline could potentially enhance the resolution by a factor 10, and an Earth--L2 Lagrange point baseline by a factor 100, but neither raises much hope for this detection. \section{Shadow drift as a probe of the equivalence principle} We argue in this section that the impossibility of detecting the cosmological shadow drift can be put to good use. Until now, we have considered that only the redshift depends on time, though it would be interesting to let $m$ be a time-dependent quantity as well in the cosmological solution \eqref{cosmosol}. In this case, the three quantities $G$, $M$ and $c$ could depend on time, but we will keep the speed of light constant in this work. First, we assume a time-dependent gravitational coupling and a constant mass. Then the right-hand side of the shadow drift \eqref{shadow_drift} has an additional contribution \begin{align} \label{equiv} \frac{\dot{\alpha}}{\alpha} = H_0 -\frac{H(z)}{1+z}+\frac{\dot{G}}{G} \;.
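The M87$^{\star}$ estimate above is straightforward to reproduce. A minimal sketch (ours; flatness assumed, with the unit conversion $1\,\mathrm{Mpc} = 3.0857\times 10^{19}$ km):

```python
import math

MPC_KM = 3.0857e19           # kilometers per megaparsec
DAY_S = 86400.0              # seconds per day

def shadow_drift_per_day(z, H0=67.8, Om=0.31):
    """alpha_dot/alpha = H0 - H(z)/(1+z) for flat LCDM, converted to day^-1."""
    H0_per_s = H0 / MPC_KM                          # H0 in [1/s]
    E = math.sqrt(Om * (1 + z) ** 3 + (1.0 - Om))   # E(z) = H(z)/H0
    drift_per_s = H0_per_s * (1.0 - E / (1 + z))
    return drift_per_s * DAY_S

drift_m87 = shadow_drift_per_day(0.004)
```

This returns $\approx 4\times 10^{-16}$ day$^{-1}$, consistent with the quoted $3.8\times 10^{-16}$ day$^{-1}$ up to the rounding of the $2\times 10^{-3}$ prefactor.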
\end{align} Therefore, the non-observation of a drift in the angular shadow within the sensitivity of present experiments provides another probe of the equivalence principle, complementary to other works using black hole shadows to test the equivalence principle \cite{Li:2019lsm}. Using the mean values of M87$^{\star}$ shadow's angular diameter reported by the EHT collaboration, which we also plot in Fig. \ref{diametervstime}, during the period of observations from April 5 to April 11 (see Table 7 in EHT IV \cite{Akiyama:2019bqs}), we compute the shadow drift between these two dates for the three different pipelines (DIFMAP, eht-imaging and SMILI) used by the collaboration, summarised in Table \ref{tabledrift}. \begin{figure} \centering \begin{minipage}[c]{\textwidth} \centering \includegraphics[scale=1.2]{diametervstime.pdf} \caption{The observed diameter of the shadow of M87* measured by the EHT with three different pipelines, DIFMAP, eht-imaging and SMILI, from April 5 to April 11. Data taken from Table 7 of \cite{Akiyama:2019bqs}.} \label{diametervstime} \end{minipage} \end{figure} \begin{table}[h!] \centering \begin{tabular}{c||c|c|c|} &DIFMAP & eht-imaging & SMILI\\ \hline $\dot{\alpha}/ \alpha$ ($10^{-2}$) & $1.6 \pm 1.7$ & $0.7 \pm 0.9$ & $0.7 \pm 0.9 $ \end{tabular} \caption{Estimated variation of the angular size of M87$^{\star}$'s shadow for the three pipelines used by the EHT collaboration. All values are given in day$^{-1}$.} \label{tabledrift} \end{table} We can readily see from the table that the non-observation over a period of 6 days implies that $\dot{G}/G \lesssim 10^{-1}-10^{-2}$ day$^{-1}$. Extrapolating over a period of one year would improve the precision to roughly $10^{-3}-10^{-4}$ year$^{-1}$.
While this precision is far below that of other existing probes (the strongest constraint gives $\dot{G}/G \leq 10^{-13}-10^{-14}$ day$^{-1}$, see for example Table 1 from Ref.~\cite{Alestas:2021nmi}), we note that the shadow drift only depends on the cosmological model. Therefore, it is an almost model-independent probe of the equivalence principle, similar to the one found in \cite{Giani:2020fpz} in the case of strong lensing. On the other hand, it must be stressed that a variation of the effective gravitational coupling $G$ is degenerate with a variation of the black hole mass $M$. Modeling the evolution of a black hole mass through accretion is a cumbersome task, involving many astrophysical processes which ultimately depend on the black hole environment, as discussed for example in Ref.~\cite{Li:2012ts}. Consequently, in order to properly employ drift observations of rings and shadows to test the equivalence principle, a reliable accretion model is required. However, assuming that General Relativity is the correct theory of gravitation, for which $G$ is constant, the very same argument shows that the non-observation of a shadow drift can be used to constrain accretion rates $\dot{M}/M$, since we would now have \begin{align} \frac{\dot{\alpha}}{\alpha} = H_0 -\frac{H(z)}{1+z}+\frac{\dot{M}}{M} \;. \end{align} For M87$^{\star}$, which has a mass of roughly $6.5 \times 10^9 M_{\odot}$ as measured by the EHT collaboration \cite{Akiyama:2019eap}, an absence of shadow drift of order $\dot{\alpha}/{\alpha} \leq 10^{-4}$ year$^{-1}$ implies that the black hole mass has changed by less than $10^5 M_{\odot}$ over a year.
\footnote{Note that superradiance could potentially change the apparent angular diameter of the shadow \cite{Roy:2019esk,Creci:2020mfg}, and drift effects could have interesting applications for these models as well.} One way to distinguish between a time dependence of the mass and a time dependence of $G$ is to look at the sign of $\dot{\alpha}/\alpha$. Since evaporation is a very slow process, we expect that only accretion would change the mass of the black hole, and therefore $\dot{M}/M >0$. This implies that if we observe $\dot{\alpha}/\alpha <0$, the variation of the angular diameter of the shadow can be attributed to a variation in the gravitational coupling, and to the cosmological drift (even if the latter is very small, as we discussed in the previous section). To have $\dot{M}/M >0$, the black hole needs to accrete matter from its surroundings. The maximum efficiency of this process is given by the Eddington rate (see \textit{e.g.} \cite{Brito:2014wla}) \begin{align} \dot{M}_{Edd} := 0.02 f_{\textrm{Edd}} \frac{M}{10^{6} M_{\odot}} M_{\odot} \textup{yr}^{-1} \;, \end{align} where $f_{\textrm{Edd}}$ is the Eddington ratio, typically ranging between $10^{-2}$ and $1$ \cite{Barausse:2014tra}. Note that this formula assumes a radiative efficiency $\eta \approx 0.1$. For M87, the Eddington ratio is expected to be at most $f_{\textrm{Edd}}=0.03$ \cite{Forman:2017kpv}, resulting in $\dot{M}_{\textrm{Edd}}/M \simeq 6 \times 10^{-10}\, \textup{yr}^{-1}$, or, equivalently, in $\dot{M}_{\textrm{Edd}} \simeq 4\, M_{\odot}\, \textup{yr}^{-1}$. The latter estimate implies that the strongest constraint on $\dot{G}/G$ which could be obtained from M87$^{*}$ is of order $10^{-10} $ yr$^{-1}$. Alternatively, drift observations could be used to constrain models of BH accretion, for example to obtain upper bounds on the Eddington ratio $f_{\textrm{Edd}}$.
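The numbers for M87$^{\star}$ follow directly from the Eddington-rate formula above; a minimal sketch:

```python
def eddington_rate(M_solar, f_edd):
    """M_dot_Edd = 0.02 * f_Edd * (M / 1e6 M_sun) in solar masses per year
    (the radiative efficiency eta ~ 0.1 is absorbed into the 0.02 prefactor)."""
    return 0.02 * f_edd * (M_solar / 1e6)

M_M87 = 6.5e9                              # EHT mass estimate [M_sun]
mdot = eddington_rate(M_M87, f_edd=0.03)   # maximum accretion rate [M_sun/yr]
relative = mdot / M_M87                    # fractional mass growth per year
```

This gives $\dot{M}_{\textrm{Edd}} \simeq 3.9\, M_{\odot}\,\textup{yr}^{-1}$ and $\dot{M}_{\textrm{Edd}}/M \simeq 6\times 10^{-10}\,\textup{yr}^{-1}$, matching the estimates quoted in the text.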
\section{Ring visibility amplitude and frequency drifts} We investigate in this section a complementary possibility of observing drift effects with black holes using photon rings, which should be observable for interferometers with baselines $u$ greater than about 20 G$\lambda$ (or, equivalently, for a resolution $1/u$ better than $10 \, \mu$as). For such large baselines, the complex visibility $V(u)$ of a Kerr photon ring can be approximated by a damped oscillating function with period $\Delta u=2/d$, with $d$ the ring diameter \cite{Johnson:2019ljv}. As argued by the authors of \cite{Johnson:2019ljv}, all rings are almost circular, independently of the black hole spin and inclination, so we adopt in this section a perfectly circular photon ring model to study the effect of cosmological expansion on the ring diameter. Since the photon ring properties in Schwarzschild do not differ appreciably from those of Kerr black hole photon rings \cite{Gralla:2019xty}, we assume the conclusions drawn by Johnson et al.\ in \cite{Johnson:2019ljv} hold for a Schwarzschild black hole. For simplicity, we suppose further that all rings are infinitesimally thin and uniform. In this case, the complex visibility is given by a Bessel function of the first kind, $V(u)=J_0(\pi du)$, which reduces to roughly $(du)^{-1/2} \cos(\pi du)$ for large arguments. Within the above hypotheses, we infer that the ring diameter $d$ should not change in time near the black hole. However, as we have shown in the previous sections, the expansion of the Universe makes $d$ a time-dependent quantity, with a redshift drift similar to Eq.~\eqref{shadow_drift}.
Therefore, the ring diameter acquires in this picture a time dependence \begin{equation} \label{vis_drift} \frac{ \dot{d}}{d} = H_0 -\frac{H(z)}{1+z} \;, \end{equation} and the relative variation of the visibility amplitude varies as \begin{align} \frac{\dot{V}}{V} &= - \pi u \dot{d} \, \frac{J_1(\pi d u)}{J_0(\pi d u)} \nonumber \\ &\simeq - \frac{1}{2} (\pi u d)^2 \frac{\dot{d}}{d} \nonumber \\ &\simeq - \frac{1}{2} \left(\pi u d\right)^2 \left(H_0 -\frac{H(z)}{1+z}\right)\;, \end{align} where we have used the Bessel expansion for large argument $J_1(x)/J_0(x) \simeq x/2$ in the second line. This expansion is justified since the first photon ring should be resolved for large baselines, which implies $du\gg1$. The visibility variation interestingly depends on $u^2$, meaning that a greater baseline should probe a greater variation, as shown in Fig.~\ref{visibility_drift}. \begin{figure} \centering \includegraphics[scale=0.4]{visibility_drift.png} \caption{Variation of the visibility amplitude for a photon ring with apparent angular diameter $d=40 \,\mu as$. The variation is a function of the redshift and drawn for three baselines ($u=10,1,$ and $0.1 \,\mu as^{-1}$, in blue, purple and green, respectively). For a redshift $z=0.004$ (M87$^{\star}$), the respective variations of the normalised amplitude $|\dot{V}/(H_0 \,V)|$ are of about $1.6\times 10^{3}$, $16.0$ and $0.16$ for each baseline, resulting in a change of the amplitude $|\dot{V}/V|$ of about a factor $3 \times 10^{-11}$, $3\times 10^{-13}$ and $3\times 10^{-15}$ per day.} \label{visibility_drift} \end{figure} The effect is once again very small, with a best case of a variation about $10^{-6}$ per year for $z\simeq 0.5$ using a baseline $u=10 \, \mu as^{-1}$, which would be barely attainable with a Moon-Earth array (see Fig.~5 in \cite{Johnson:2019ljv}). It is worthwhile though to mention that spectroscopic measurements are usually more precise than optical ones. 
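The values quoted in the caption of Fig.~\ref{visibility_drift} can be reproduced from the approximate formula for $\dot{V}/V$. A sketch (ours; flatness assumed, with $d$ in $\mu$as and $u$ in $\mu$as$^{-1}$, so the product $ud$ is dimensionless):

```python
import math

def normalized_visibility_drift(u, d, z, Om=0.31):
    """|V_dot / (H0 V)| ~ (1/2) (pi u d)^2 |1 - E(z)/(1+z)|, valid for d*u >> 1.
    u: baseline [1/angle], d: ring diameter [angle] (units cancel in u*d)."""
    E = math.sqrt(Om * (1 + z) ** 3 + (1.0 - Om))   # E(z) = H(z)/H0
    return 0.5 * (math.pi * u * d) ** 2 * abs(1.0 - E / (1 + z))

# Ring of diameter 40 muas at z = 0.004 (M87*), observed with a 10 muas^-1 baseline:
val = normalized_visibility_drift(u=10.0, d=40.0, z=0.004)
```

This yields $|\dot{V}/(H_0 V)| \approx 1.6\times 10^{3}$, in agreement with the figure caption; the smaller baselines scale down by $u^2$, as noted in the text.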
It is expected that forthcoming surveys in the next few years will be able to detect a shift in the spectral lines of the order of $10^{-9}-10^{-10}$ within an observational window of $\sim 10$~yr, see for example Ref.~\cite{Kim:2014uha}. Therefore, if a similar precision can be reached for interferometric observations of photon rings, the task of measuring their drift does not seem hopeless. A second interesting point is that the period also suffers from the redshift drift, with a relative change \begin{equation} \label{period_drift} \frac{\dot{\Delta u}}{\Delta u} = - \frac{ \dot{d}}{d} = -\left(H_0 -\frac{H(z)}{1+z}\right) \;, \end{equation} opposite to the drift in the ring diameter. Since the $2/d$ periodicity is unlikely to be contaminated by other sources of emission at long baselines, the period is a universal signature which can be used to detect a black hole photon ring via the observed visibility. As an immediate consequence of Eq.~\eqref{period_drift}, the universal interferometric pattern should also change with time, along with the strength of the visibility amplitude. \section{Conclusions and Discussion} The main purpose of this work was to quantify the impact of cosmological drift on three important black hole physics observables: the black hole shadow's apparent angular diameter, and the visibility and frequency (or equivalently the period) of its photon rings. The apparent angular diameter of the shadow, for a spherically symmetric black hole, is determined by its total mass $M$ and the angular diameter distance $D_A$, which makes it a possible candidate standard ruler, see Ref.~\cite{Tsupko:2019pzg}. Similarly, the visibility amplitude and frequency inferred from interferometric universal signatures of photon rings are mostly determined by the ring diameter, which could potentially be a source of cosmological information, as discussed in Ref.~\cite{Johnson:2019ljv}.
The drifts of these three quantities are given by Eqs.~\eqref{shadow_drift}, \eqref{vis_drift} and \eqref{period_drift}, respectively. In Fig.~\ref{fig_shadow_drift}, the shadow drift is reported in units of $H_0$ up to redshift $z=5$ assuming a fiducial $\Lambda$CDM cosmology with $\Omega_m = 0.33$ and $\Omega_\Lambda = 0.67$. At low redshift, $z \ll 1$, the maximum drift is of order $10^{-1} H_0$. For M87$^{\star}$, located at redshift $z \approx 4\times 10^{-3}$, this results in a shadow drift of the order of $10^{-16}$ day$^{-1}$, which is beyond the angular resolution of the EHT and forthcoming experiments. Concerning the status of photon ring observations, current experiments do not have enough sensitivity to resolve individual rings. On the other hand, as discussed in Ref.~\cite{Johnson:2019ljv}, future Earth-Moon and Earth-L2 baselines will have enough precision to resolve the rings and allow for spectroscopic observations. In Fig.~\ref{visibility_drift}, the visibility amplitude drift is shown for three future experiments with baselines larger than the one currently in use with the EHT (which tops out at 0.05 $\mu as^{-1}$). The shift induced in the spectral lines by the Hubble flow is the most promising candidate to detect redshift drift effects and, as discussed in Ref.~\cite{Kim:2014uha}, can reach a precision of order $10^{-9}$ over a time span of a decade. If, in the future, such a precision can be reached for spectroscopic measurements of photon rings, observing their visibility amplitude and frequency drifts does not seem hopeless. It is interesting to note that the formula for the black hole shadow \eqref{shadow_drift} is very similar to the one obtained for the lens equation in the thin-lens approximation, being proportional to the product of the Newton constant and the object mass $G_N M$ and inversely proportional to the angular diameter distance.
In the same fashion as what has been done for strong lensing observables \cite{Giani:2020fpz}, it is possible to translate non-observations of drift effects within the sensitivity of current experiments into a constraint on the violation of the Equivalence Principle in theories featuring a time-dependent gravitational coupling $G_{eff}$. We translated the non-observation of a drift in the size of M87$^{\star}$'s shadow over a period of one year, within the current sensitivity of the EHT, into a constraint $\dot{G}_{eff}/G_{eff} \leq 10^{-4}$ yr$^{-1}$. This constraint, however, is obtained assuming that the mass of the black hole $M$ does not vary with time. Conversely, within the framework of General Relativity, where $G$ is constant, variations of the black hole shadow can be used to test accretion models of black holes. Using the current sensitivity of the EHT, the non-observation of a shadow drift over a period of one year gives a constraint on the variation of M87$^{\star}$'s mass of $\Delta M \leq 10^5 M_{\odot}$. While most of the effects we presented in this paper are beyond the sensitivity of current experiments, it is worthwhile to recognize and quantify the amount of cosmological information encoded within black hole shadows for future long-monitoring programs. Furthermore, these observations have the potential to become an important test of the equivalence principle in a strong gravity regime, thus complementing other existing probes. In this work, we considered variations in the shadow of a single, spherically symmetric black hole embedded in a FLRW Universe. It would also be very interesting to study how the results we obtained change when we relax these assumptions. As was discussed in \cite{Li:2020drn}, the shadow of a Kerr black hole can be obtained in a similar way as for a McVittie black hole, which we used in this paper.
The shadow obtained by these authors is quite similar to those in McVittie, with two different angular sizes due to the black hole spin. By reproducing the steps of our paper in this Kerr-de Sitter framework, we obtain the same qualitative results. However, the embedding of a Kerr black hole into a FLRW background is not unique, as the former breaks not only the homogeneity of the spacetime (as in McVittie) but also isotropy, introducing dragging effects relative to the rotation axis which make the results observer-dependent. To conclude, we mention that it would be very interesting to study the time dependence of the shadow(s) profile(s) for two or more interacting black holes \cite{Yumoto:2012kz}. Indeed, in this case one expects that the mass loss due to the production of gravitational waves (GW) should correspond to a shrinking of the shadow(s). However, it must be stressed that the GW signal during the inspiral phase depends on a particular combination of the black hole masses, \textit{i.e.}, the chirp mass, whose relation with the diameter of the shadow(s) is not trivial. Furthermore, the GW response function is affected in a similar way by the cosmological expansion, by a time-dependent gravitational coupling and by a time-varying mass \cite{Yunes:2009bv}. As a result, if one tries to relate the GW signal with a drift of the shadow(s), these effects will all be degenerate in the final measurement. We believe that these directions deserve further investigation, which we leave for future works. \section*{Acknowledgments} We are grateful to Tamara Davis, Oliver Piattella, and Sunny Vagnozzi for valuable comments and discussions. EF thanks the Helsinki Institute of Physics for its warm hospitality. LG acknowledges support from the Australian Government through the Australian Research Council Laureate Fellowship grant FL180100168. \bibliographystyle{JHEP}
\section{Introduction}\label{sec:intro} \subsection{Motivating notions from complex dynamics} In this paper, we study certain families of sets associated with linear first order ordinary differential operators. These sets are closely related to the attractors of Hutchinson operators and to the Julia sets of rational functions. \smallskip Namely, in complex dynamics one often considers a map $\mathcal{F}: 2^{\setRS} \to 2^{\setRS}$ from the set of subsets of the Riemann sphere to itself and tries to find the ``fixed points'' of this map, i.e., non-empty subsets $S\subseteq \setRS$ such that $\mathcal{F}(S) = S$. For example, the Julia set associated with a rational map $f(z)$ is the unique non-trivial closed ``fixed point'' of either \begin{equation}\label{eq:juliaIntro} \mathcal{F}(S) = \{ f(z) : z \in S \}, \quad \text{ or } \quad \mathcal{F}(S) = \bigcup_{u \in S} \{ z: f(z) = u \}, \end{equation} see e.g. \cite[Thm. 3.2.4]{Beardon2000} for the proof of this classical result. \smallskip Another well-studied instance of such a situation occurs in the case of Hutchinson operators, i.e. operators $\mathcal{F}: 2^{\setRS} \to 2^{\setRS}$ of the form \begin{equation}\label{eq:hutchinsonfinite} \mathcal{F}(S) = \bigcup_{z \in S} \left\{ \phi_1(z),\dotsc,\phi_\ell(z) \right\}, \end{equation} where $\phi_1,\dotsc,\phi_\ell : \setRS \to \setRS$ is a finite collection of contracting maps. Such collections of contractions are usually referred to as \defin{iterated function systems} (IFS for short). It is then classically known that the equation $\mathcal{F}(S) = S$ has a unique non-trivial closed solution, namely, the \defin{attractor} of $\mathcal{F}$, see \cite{Hutchinson1981}. Examples of such attractors include \emph{the Sierpinski triangle}, \emph{Koch's snowflake} and \emph{Barnsley's fern}.
\smallskip In most of the cases discussed in the existing literature, a non-trivial closed solution to $\mathcal{F}(S) = S$ is unique due to the fact that the map $\mathcal{F}$ under consideration is a contraction in a suitable topology. \smallskip The situation which we consider below is quite similar to that of Hutchinson operators. However, in our case the map $\mathcal{F}$ is not a contraction and we therefore do not have a unique ``fixed point''. Hence, instead of an attractor we have families of what we call \defin{invariant sets}. \smallskip Finally, in our situation the set of maps appearing in \cref{eq:hutchinsonfinite} is not finite, but rather parameterized by $t\ge 0$. Iterated function systems with uncountably many maps are not very common in the mathematical literature, but they do appear, see e.g.~\cite{Strobin2021}. However, such systems are abundant in the fractal art community, whose goal is to generate and color interesting attractors. We refer to the seminal paper by S.~Draves \cite{dravesflame} for background on this topic and the computer software Apophysis. \subsection{Invariant sets for linear first order differential operators} Let us now describe our basic set-up in detail. Given two polynomials $P,Q$, neither vanishing identically, consider the first order linear differential operator \begin{equation}\label{eq:1st} T=Q(z)\frac{d}{dz}+ P(z). \end{equation} We say that a closed subset $S \subset \mathbb{C}$ is \defin{Hutchinson $T$-invariant} (or \defin{$T_H$-invariant} for short) if for any $u\in S$ and $n \in \mathbb{N}$, the polynomial $T[(z-u)^n]$ either vanishes identically or has all its roots in $S$.
In other words, a closed subset $S\subseteq \mathbb{C}$ is Hutchinson $T$-invariant if and only if it is a (closed) ``fixed point'' of the operator \begin{align*} \mathcal{F}(S) &= \bigcup_{u \in S} \bigcup_{n \in \mathbb{N}} \left\{ z : n Q(z) (z-u)^{n-1} + P(z) (z-u)^n =0 \right\} \\ &= \bigcup_{u \in S} \bigcup_{n \in \mathbb{N}} \left\{ z : n Q(z) + P(z) (z-u) =0 \right\}. \end{align*} Unfortunately, it seems rather difficult to study Hutchinson $T$-invariant sets $S$ for a general operator \eqref{eq:1st}, but we hope to return to this topic in the future. Below we concentrate on a similar and easier-to-handle case in which one replaces the non-negative integer power $n$ by a non-negative real parameter $t$. Therefore our main definition is as follows. \begin{definition} A non-empty closed set $S\subset \mathbb{C}$ is \defin{continuously Hutchinson invariant} (or \defin{$T_{CH}$-invariant} for short) if for any $u\in S$ and an \emph{arbitrary non-negative number} $t$, the image $T[f]$ of the function $f(z)=(z-u)^t$ either has all its roots in $S$ or vanishes identically. \end{definition} \begin{remark} Since for $T$ given by \eqref{eq:1st}, one has $T[(z-u)^t]=(z-u)^{t-1}( t \, Q(z) + (z-u) P(z))$, the polynomial equation \begin{equation}\label{eq:main} t \, Q(z) + (z-u) P(z) =0 \end{equation} is our main object of interest; note that for any $T_{CH}$-invariant set $S\subset \mathbb{C}$, every $u \in S$, and any $t \geq 0$, the roots of \eqref{eq:main} must also belong to $S$. \end{remark} We now observe that a closed set $S$ is $T_{CH}$-invariant if and only if it is a fixed point of \begin{align} \mathcal{F}_T(S) &= \overline{ \bigcup_{u \in S} \bigcup_{t\geq 0} \left\{ z : t Q(z) + (z-u)P(z) =0 \right\} } \notag \\ &= \overline{ \bigcup_{u \in S} \bigcup_{t\geq 0} \left\{ z : z + t\frac{ Q(z)}{P(z)} = u \right\} }, \label{eq:juliaConnection} \end{align} where $\overline{\cdots}$ denotes the closure.
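The fixed-point characterization \eqref{eq:juliaConnection} lends itself to a direct numerical experiment: starting from a point of a candidate invariant set, repeatedly draw $t\ge 0$ at random and replace the current point $u$ by a root of $tQ(z)+(z-u)P(z)$. The sketch below is our illustration (not code from the paper) for the concrete choice $Q(z)=z^2$, $P(z)=z-1$, whose minimal invariant set is bounded by the cochleoid $r(\theta)=\sin\theta/\theta$ and hence lies in the closed unit disk.

```python
import cmath
import random

# Monte-Carlo sampling of the fixed-point map F_T for the (arbitrarily chosen)
# operator T = z^2 d/dz + (z - 1), i.e. Q(z) = z^2, P(z) = z - 1.
# This is our numerical sketch, not code from the paper.
def Q(z):
    return z * z

def P(z):
    return z - 1.0

def random_root(u, t):
    # roots of t*Q(z) + (z - u)*P(z) = (t + 1)*z^2 - (1 + u)*z + u
    a, b, c = t + 1.0, -(1.0 + u), u
    d = cmath.sqrt(b * b - 4 * a * c)
    return random.choice([(-b + d) / (2 * a), (-b - d) / (2 * a)])

random.seed(1)
u = 0j                                  # a root of Q, hence in the minimal set
orbit, residuals = [], []
for _ in range(5000):
    t = random.expovariate(1.0)         # a random value t >= 0
    z = random_root(u, t)
    residuals.append(abs(t * Q(z) + (z - u) * P(z)))
    orbit.append(z)
    u = z

# every orbit point lies in the minimal set, hence in the closed unit disk
max_abs = max(abs(z) for z in orbit)
```

Plotting the sampled orbit reproduces a Monte-Carlo picture of the minimal invariant set of the same kind as in the figures of this section.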
By looking at \eqref{eq:juliaConnection} and comparing it with \eqref{eq:juliaIntro}, it is fairly straightforward to show that any $T_{CH}$-invariant set must contain the Julia set associated with the map $z \mapsto z + t\frac{ Q(z)}{P(z)}$, for any fixed $t>0$, see \cref{prop:juliaSetIsSubset}. This connection is illustrated in \cref{fig:cochleoid}. \begin{example} The operator $T = z^2{\frac{d}{dz}} + (z-1)$ has a unique minimal (under inclusion) $T_{CH}$-invariant set, whose boundary in polar coordinates can be parameterized as $r(\theta) = \frac{\sin{\theta}}{\theta}$, see the leftmost picture in \cref{fig:cochleoid}. This boundary curve is called the \defin{cochleoid}. The central picture (constructed via a type of Monte-Carlo simulation, hence the artificial void in the center) illustrates the minimal invariant set associated with iterations of \begin{equation}\label{eq:tgeq1iterations} \mathcal{F}_T(S) = \overline{ \bigcup_{u \in S} \bigcup_{t\geq 1} \left\{ z : z + t\frac{ Q(z)}{P(z)} = u \right\} }. \end{equation} Note that having $t\geq 1$ seems to give a fractal boundary. Finally, the rightmost picture shows the union of several Julia sets associated with iterations of $z + t\frac{z^2}{z-1}$. \begin{figure}[!ht] \centering \includegraphics[width=0.29\textwidth]{cocheloid}\hspace{0.25cm} \includegraphics[width=0.32\textwidth]{cariodidFractal-t-plus1}\hspace{-0.75cm} \includegraphics[width=0.32\textwidth]{cardiodidJuliaMerged} \caption{The leftmost image shows the boundary of the minimal $T_{CH}$-invariant set for the operator $T = z^2{\frac{d}{dz}} + (z-1)$. The center image shows (a numerical approximation of) the minimal invariant set, which is a fixed point of \eqref{eq:tgeq1iterations}. The rightmost figure shows the union of the Julia sets of the maps $z \mapsto z + t\frac{z^2}{z-1}$, for $t \in \{0.2, 0.4, 0.6,\dotsc, 1.8, 2.0\}$. } \label{fig:cochleoid} \end{figure} \end{example} \medskip Below we, in particular, consider the following questions.
\begin{problem}\label{prob:exist} For which linear differential operators $T$ given by \eqref{eq:1st} does there exist a unique minimal (under inclusion) $T_{CH}$-invariant set? \end{problem} Below we denote this minimal set (if it exists) by \defin{$\minvset{CH}$}. \begin{problem}\label{prob:charac} Find possible alternative characterizations of $T_{CH}$-invariant sets in terms of the polynomials $P$ and $Q$. Describe topological and geometric properties of $T_{CH}$-invariant sets and their boundaries. \end{problem} \begin{problem}\label{prob:compact} Assuming that for a linear differential operator $T$, the minimal set $\minvset{CH}$ exists, under which conditions on $T$ is it compact? \end{problem} Observe that $\minvset{CH}$ is closed by definition, so its compactness is equivalent to its boundedness. \begin{problem}\label{prob:boundaryMIN} Describe properties of the boundary of $\minvset{CH}$, and describe $\minvset{CH}$ explicitly for some classes of operators $T$. \end{problem} \medskip To finish the introduction, we present here a small sample of our results related to the above questions. Further results and details can be found in the subsequent sections. \begin{PROP}[see \cref{prop:basic} below] For any linear differential operator $T$ given by \eqref{eq:1st} with either $Q(z)$ or $P(z)$ non-constant, there exists a unique minimal (under inclusion) $T_{CH}$-invariant set $\minvset{CH}$. \end{PROP} \smallskip We say that $T$ given by \eqref{eq:1st} is \defin{non-degenerate} if $\deg Q>\deg P$ and \defin{degenerate} otherwise. \begin{PROP}[see Proposition~\ref{prop:degenerate} below] If $T$ is degenerate then $\minvset{CH}$ is unbounded. \end{PROP} The latter proposition is complemented by the following claim. We call $\minvset{CH}$ \emph{trivial} if it (and therefore every $T_{CH}$-invariant set) coincides with the whole $\bC$.
\begin{notation} For an operator $T$ given by \eqref{eq:1st}, throughout the whole paper a very important role will be played by the rational vector field $\dot z= R(z)$ where $R(z)=\frac {Q(z)}{P(z)}$. To distinguish the latter vector field from the rational function $R(z)$, we will denote it by $R(z)\partial_z$. We will also frequently use the vector field $-R(z)\partial_z$. \end{notation} \begin{PROP}[see Proposition~\ref{prop:triviality} below] If $\deg Q - \deg P =K$ with $|K|\ge 2$, then $\minvset{CH}$ is trivial. In other words, $\minvset{CH}$ is trivial unless $R(z)\partial_z$ has a critical point of order one, two or three at $\infty$. \end{PROP} Therefore a non-trivial (i.e. different from the whole $\bC$) minimal invariant set $\minvset{CH}$ can only exist when $K\in\{-1, 0, 1\}$. As we will show below, $\minvset{CH}$ is necessarily unbounded and nontrivial when $K\in\{-1, 0\}$. \begin{remark} We do not know of any other situation related to analytic vector fields in which zeros of orders exactly $1, 2$ and $3$ are more special than zeros or poles of other orders. \end{remark} In the case $K=1$ we obtain the following result. \begin{PROP}[combines Propositions~\ref{pr:unbounded}, ~\ref{th:compcrit}, and \ref{th:negativeResidueAtInfty} below] For $K=1$, consider the Laurent expansion of the vector field $R(z)\partial_z$ at $\infty$ in powers of $\frac{1}{z}$ \begin{equation}\label{eq:exp} R(z){\partial_z}= \frac{Q(z)}{P(z)} {\partial_z}=\left(\lambda z + \text{higher order terms in } \frac{1}{z}\right) {\partial_z}. \end{equation} Then in the above notation, \begin{enumerate}[label={(\roman*)}] \item the set $\minvset{CH}$ is trivial if and only if $\Re \lambda <0$, i.e. the vector field $R(z)\partial_z$ has a source at $\infty$; \item the set $\minvset{CH}$ is bounded (and therefore compact and non-trivial) otherwise, i.e. if $\Re \lambda \ge 0$. In other words, $\minvset{CH}$ is compact if $R(z)\partial_z$ has a center or a sink at $\infty$.
\end{enumerate} \end{PROP} The latter two propositions together provide a criterion for the triviality of $\minvset{CH}$. \medskip Throughout the paper, let $\defin{\setRS}$ denote the complex projective line, i.e.~$\setRS=\bC \cup \{\infty \}$. In the case when $\Re \lambda=0$, the point $\infty\in \setRS$ is a center of the rational vector field $R(z){\partial_z}$. In this situation one can easily see that there exists a unique closed integral curve $\Psi$ of $R(z){\partial_z}$ going around $\infty\in \setRS$ which has the property that it bounds a minimal (under inclusion) convex region in $\mathbb{C}$. The next result describes a compact $\minvset{CH}$ ``explicitly'' in this situation. \begin{PROP}[see \cref{thm:relambda0} below] In the above notation, if $\Re \lambda =0$, i.e. $R(z)\partial_z$ has a center at $\infty$, then $\minvset{CH}$ is equal to the closed interior of the above integral curve $\Psi$ in $\mathbb{C}$. \end{PROP} We present below many other examples where $\minvset{CH}$ can be ``explicitly'' described, but these descriptions are too lengthy for the introductory section. \medskip The structure of the paper is as follows. In \S~\ref{sec:basic} we present a general sufficient condition for the existence of $\minvset{CH}$ and its implicit description in terms of complex dynamics. In \S~\ref{sec:roottrails} we introduce the \emph{root trails/trajectories/$t$-traces}, which are solutions of \eqref{eq:main} for a fixed initial $u$ and $t\in [0,+\infty)$, and discuss their properties. We also introduce a time-dependent vector field which describes their dynamics. In \S~\ref{sec:properties} we give a general characterization of $T_{CH}$-invariant sets in terms of the family of \emph{associated rays} of $T$, which is the family of half-lines in $\bC$ spanned by the rational vector field $R(z){\partial_z}$.
We introduce the notion of a \emph{regular} $T_{CH}$-invariant set, which means that it coincides with the closure of its set of interior points, and give a characterization of such sets. In \S~\ref{sec:separatrices} we prove that certain types of separatrices of the vector field $R{\partial_z}$ must belong to any $T_{CH}$-invariant set, a fact which will be extensively used later in the text. In \S~\ref{sec:comp} we give necessary and sufficient conditions for compactness of $\minvset{CH}$ in $\bC$. In \S~\ref{sec:triviality} we give important sufficient conditions guaranteeing that $\minvset{CH}=\bC$, i.e., that $\minvset{CH}$ is trivial. Later we provide necessary and sufficient conditions for triviality. In \S~\ref{sec:irreg} we discuss irregular $T_{CH}$-invariant sets and present necessary and sufficient conditions for their existence. In particular, we completely describe the case when $\minvset{CH}$ is fully irregular. In \S~\ref{sec:GenProp} we describe some general properties of the boundary of $\minvset{CH}$. In \S~\ref{sec:K=-1,0,1} we describe many features of $\minvset{CH}$ in the cases when it can be nontrivial, i.e. when $\deg Q -\deg P\in\{-1,0,1\}$. Although we do not have a complete description of $\minvset{CH}$ in general, we can describe it in a number of special cases. In \S~\ref{sec:1-point} we present examples of families of $1$-point generated $T_{CH}$-invariant sets for two special types of operators. In \S~\ref{sec:outlook} we describe many open questions related to our set-up. Finally, Appendix~\ref{sec:ratfields} contains relevant classical material on rational vector fields on $\bC P^1$ and curves of inflections of trajectories of analytic vector fields. \medskip \begin{remark} Although this paper is an outgrowth of our still unpublished studies \cite{ABS1,ABS2}, it is self-contained and independent of the latter manuscripts. All the necessary notions and results are presented below.
\end{remark} \medskip\noindent \emph{Acknowledgments:} The third author wants to acknowledge the financial support of his research provided by the Swedish Research Council grants 2016-04416 and 2021-04900. \section{Existence and implicit characterization of the minimal set $\minvset{CH}$} \label{sec:basic} \subsection{Existence}\label{sub:exist} \smallskip For an operator $T$ given as in \eqref{eq:1st} and a set $\Omega \subset \mathbb{C}$, we define the \emph{$T_{CH}$-extension} $\mathfrak{T}(\Omega)$ of $\Omega$ as the set obtained by the following iterative procedure. Set $\Omega_0\coloneqq \Omega$ and for any positive integer $j=1,2,\dotsc ,$ define \[ \Omega_j\coloneqq \overline{\bigcup_{u\in \Omega_{j-1}}\mathfrak{tr}_u}, \] where $\overline{\cdots}$ denotes the closure and $\mathfrak{tr}_u$ stands for the set of all solutions of \eqref{eq:main} for a given fixed $u$ and all $t\ge 0$; see \cref{sec:roottrails} for details. (In what follows $\mathfrak{tr}_u$ will be called the \emph{root trail} of $u$.) The closure of $\cup_{j=0}^\infty \Omega_j$ is denoted by $\defin{\mathfrak{T}(\Omega)}$. If $\Omega=\{\omega\}$ is a singleton we use the notation $\mathfrak{T}(\Omega)=\mathfrak{T}(\omega)$. \smallskip We start with the following simple claim. \begin{lemma}\label{lm:simple} For any linear differential operator $T$ given by \eqref{eq:1st}, one has \noindent {\rm (i)} For any $\Omega\subset \mathbb{C}$, its $T_{CH}$-extension $\mathfrak{T}(\Omega)$ is $T_{CH}$-invariant; \smallskip \noindent for any two $T_{CH}$-invariant sets $S_1$ and $S_2$ in $\mathbb{C}$, \noindent {\rm (ii)} $S_1\cap S_2$ is $T_{CH}$-invariant; \noindent {\rm (iii)} $S_1\cup S_2$ is $T_{CH}$-invariant. \end{lemma} \begin{proof} To settle (i), observe that by definition, for any $\Omega\subset \mathbb{C}$, its $T_{CH}$-extension $\mathfrak{T}(\Omega)$ is closed and for each point $u\in \mathfrak{T}(\Omega)$, it contains the root trail $\mathfrak{tr}_u$. Item (ii) is obvious.
To settle (iii), recall that a subset of $\bC$ is $T_{CH}$-invariant if it coincides with its $T_{CH}$-extension, so $\mathfrak{T}(S_1) = S_1$ and $\mathfrak{T}(S_2) = S_2$. The definition of the $T_{CH}$-extension implies directly that for any sets $\Omega_1$, $\Omega_2$, we have \[ \mathfrak{T}(\Omega_1 \cup \Omega_2) = \mathfrak{T}(\Omega_1) \cup \mathfrak{T}(\Omega_2). \] Thus, \[ \mathfrak{T}(S_1 \cup S_2) = \mathfrak{T}(S_1) \cup \mathfrak{T}(S_2) = S_1 \cup S_2, \] which implies that $S_1 \cup S_2$ is $T_{CH}$-invariant. \end{proof} \begin{proposition}\label{prop:basic} For any linear differential operator $T$ given by \eqref{eq:1st} such that at least one of $Q(z)$ and $P(z)$ is non-constant, {\rm (i)} any $T_{CH}$-invariant set $S\subset \mathbb{C}$ contains all roots of both $P(z)$ and $Q(z)$; {\rm (ii)} there exists a unique minimal (under inclusion) $T_{CH}$-invariant set \defin{$\minvset{CH}$}. \end{proposition} \begin{proof} To settle (i), take a non-empty $T_{CH}$-invariant set $S\subseteq \mathbb{C}$ and recall that for $t\geq 0$ and $u\in S$, \[ T[(z-u)^t ]= (z-u)^{t-1} (t Q(z) + (z-u) P(z)) . \] By our assumption, for any $t\ge 0$, all roots of the r.h.s. of the latter expression must belong to $S$. For $t=0$ and $u\in S$, the roots of \[ tQ(z)+(z-u)P(z)=0 \] are all the zeros of $P$ together with $z=u$. For $t>0$, dividing both sides of the equation by $t (z-u)^{t-1}$ we get \begin{equation}\label{eq:factor} \frac{T[(z-u)^t]}{t (z-u)^{t-1}} = Q(z) + \frac{1}{t} P(z)(z-u). \end{equation} As $t\to \infty$, the second term on the r.h.s. tends to $0$. Therefore if $\deg P< \deg Q$ then, as $t\to \infty$, all roots of the right-hand side with respect to $z$ necessarily tend to those of $Q(z)$. If $\deg P\ge \deg Q$ then $\deg P-\deg Q+1$ roots tend to infinity and $\deg Q$ roots tend to those of $Q(z)$. The conclusion follows.
\smallskip Item (ii) follows from item (i) together with the above observation that for any operator $T$ given by \eqref{eq:1st}, the intersection $S_1\cap S_2$ of any two $T_{CH}$-invariant sets is also $T_{CH}$-invariant. Since, by definition, any $T_{CH}$-invariant set is closed, we conclude the existence of a unique minimal $T_{CH}$-invariant set $\minvset{CH}$ obtained as the intersection of the complete family of $T_{CH}$-invariant sets. Notice that, depending on $T$, the minimal set $\minvset{CH}$ might or might not be bounded, which will be discussed in detail below. \end{proof} Observe that the assumption that $T$ has non-constant coefficients is essential for the existence of $\minvset{CH}$, see \cref{ssec:constant}. \begin{remark}\label{re:affineChange} When we mention a change of variable, we do so in terms of the vector field $R(z)\partial_z=\frac{Q(z)}{P(z)}\partial_z$ and not the rational function $R(z)$. That is, if $z \mapsto aw+b$ then we get a new vector field $\defin{\hat{R}(w)\partial_w} \coloneqq \frac{Q(aw+b)}{a(P(aw+b))} \partial_w$. Note here that this is equivalent to considering the change of variables for the operator $T=Q(z){\frac{d}{dz}} + P(z)$. Indeed, the change of variables $z \mapsto aw+b$ yields \[ T[(aw+b-u)^t]=tQ(aw+b)(aw+b-u)^{t-1}+P(aw+b)(aw+b-u)^t. \] Dividing by $\frac{(aw+b-u)^{t-1}}{a}$ gives \[ \frac{t}{a}Q(aw+b)+P(aw+b)\left(w-\frac{u-b}{a}\right). \] Since $w=\frac{z-b}{a}$, we see that a set $S$ is $T_{CH}$-invariant for $T$ if and only if $\hat{S}=\{\frac{z-b}{a}:z\in S\}$ is $\hat{T}_{CH}$-invariant where \[ \hat{T} = \frac{1}{a}Q(aw+b)\frac{d}{dw}+P(aw+b). \] As stated above, we obtain $\hat{R}(w)=\frac{Q(aw+b)}{a(P(aw+b))}.$ Note that this also shows that, in some sense, only affine changes of variables transform the problem to a problem stated in precisely the same way, since a copy of $\frac{dz}{dw}$ appears after applying the differential operator $\frac{d}{dw}$ to $(z(w)-u)^t$.
\end{remark} \subsection{Implicit characterization of $\minvset{CH}$ and $1$-point generated sets} For any operator $T$ with non-constant coefficients, let us now provide a general implicit description of $\minvset{CH}$ in the spirit of complex dynamics. \begin{lemma}\label{lm:trivial} For any operator $T$ given by \eqref{eq:1st} with non-constant $Q(z)$, its minimal $T_{CH}$-invariant set $\minvset{CH}$ is the $T_{CH}$-extension of any point belonging to $\minvset{CH}$. \end{lemma} \begin{proof} Observe that if there were a point $u\in \minvset{CH}$ whose $T_{CH}$-extension $\mathfrak{T}(u)$ is strictly contained in $\minvset{CH}$, then $\mathfrak{T}(u)$ would be a $T_{CH}$-invariant set strictly contained in $\minvset{CH}$, contradicting the minimality of the latter. \end{proof} Using the above definitions we obtain the following. \begin{corollary}\label{cor:simple} For any operator $T$ given by \eqref{eq:1st} such that $\deg Q(z)\ge 1$, the minimal $T_{CH}$-invariant set $\minvset{CH}$ is the $T_{CH}$-extension of an arbitrary root of $Q(z)$. Similarly, if $\deg P(z)\ge 1$ then $\minvset{CH}$ is the $T_{CH}$-extension of any root of $P(z)$. \end{corollary} \begin{proof} The statement follows immediately from \cref{lm:trivial} and item (i) of \cref{prop:basic}. \end{proof} \cref{lm:trivial} and Corollary~\ref{cor:simple} show that $\minvset{CH}$ is an example of a \defin{$1$-point generated $T_{CH}$-invariant set} which, by definition, is the $T_{CH}$-extension of a single point $u\in \bC$. Moreover, $\minvset{CH}$ is generated by any of its points, which is not true for more general $1$-point generated $T_{CH}$-invariant sets. Nevertheless these sets are ``building blocks'' of arbitrary $T_{CH}$-invariant sets. Namely, by item (iii) of \cref{lm:simple} any $T_{CH}$-invariant set is (the closure of) a union of some collection of $1$-point generated sets. Thus they play a special role in our set-up, and unless $\minvset{CH}=\bC$, there exist $1$-point generated sets different from $\minvset{CH}$.
In \cref{sec:1-point} we present families of these sets for certain specific operators $T$. Another natural application of $1$-point generated sets is related to the following claim. \begin{theorem}\label{th:reduc} Assume that the polynomials $Q(z)$ and $P(z)$ have a common factor $U(z)$ with roots $z_1,\dots, z_\ell$, so that $Q(z)=\widetilde Q(z) U(z)$ and $P(z)=\widetilde P(z) U(z)$. Also assume that neither $Q(z)$ nor $P(z)$ vanishes identically and at least one of them is non-constant. Then the minimal invariant set $\minvset{CH}$ equals the union of the $1$-point generated $\widetilde T_{CH}$-invariant sets $S_1, S_2,\dots, S_\ell$ generated by $z_1, z_2,\dots, z_\ell$ respectively, where $\widetilde T=\widetilde Q(z)\frac{d}{dz}+ \widetilde P(z)$. \end{theorem} \begin{proof} Indeed, any $T_{CH}$-invariant set must contain all roots of $Q(z)$ and $P(z)$ and, in particular, all roots of $U(z)$. Equation~\eqref{eq:main} factorizes as \[ U(z)(t \tilde Q(z) +(z-u) \tilde P(z))=0, \] which means that any $T_{CH}$-invariant set is in fact a $\widetilde T_{CH}$-invariant set containing all roots of $U(z)$ and vice versa. Thus $\minvset{CH}=S_1\cup S_2\cup \dots \cup S_\ell$. \end{proof} \subsection{Trivial special cases}\label{ssec:trivial} For the sake of completeness we discuss here some very degenerate and trivial cases which are exceptional from the point of view of our general framework and the results described later. \subsubsection{Case of operators with constant coefficients}\label{ssec:constant}\label{subsublinear} In \cref{sub:exist} we have shown the existence of $\minvset{CH}$ for any operator $T$ given by \eqref{eq:1st} with at least one non-constant coefficient. Let us separately cover the case $T=\alpha \frac{d}{dz}+\beta$ where $\alpha$ and $\beta$ are non-vanishing complex numbers.
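As a quick numerical confirmation (our sketch, with arbitrarily chosen values of $\alpha$ and $\beta$), the unique root of $\alpha t+\beta(z-u)=0$ traced over $t\ge 0$ indeed sweeps out the closed ray emanating from $u$ in direction $-\alpha/\beta$:

```python
# For T = alpha*d/dz + beta with constant nonzero coefficients, eq. (main)
# reads alpha*t + beta*(z - u) = 0, whose single root is
#     z(t) = u - (alpha/beta)*t,
# i.e. the T_CH-extension of u is the closed ray in direction xi = -alpha/beta.
# The values below are our arbitrary choices for illustration.
alpha, beta = 1.0 + 2.0j, 3.0 - 1.0j
u = 0.5 + 0.5j
xi = -alpha / beta

def root(t):
    # the unique solution of alpha*t + beta*(z - u) = 0
    return u - (alpha / beta) * t

# writing root(t) = u + s*xi, the parameter s must be real and non-negative
params = [(root(t) - u) / xi for t in (0.0, 0.5, 1.0, 7.5)]
residual = max(abs(alpha * t + beta * (root(t) - u)) for t in (0.5, 1.0, 7.5))
```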
\begin{definition} Given a complex number $\xi\neq 0$, we say that a set $S\subset \mathbb{C}$ is \emph{closed in direction $\xi$} if for any point $p\in S$, the ray $p+t \xi$, where $t$ runs over the non-negative numbers, belongs to $S$. \end{definition} \begin{lemma} \label{lm:const} Given an operator $T=\alpha \frac{d}{dz}+\beta$ where $\alpha$ and $\beta$ are non-vanishing complex numbers, a closed set $S\subset \mathbb{C}$ is $T_{CH}$-invariant if and only if $S$ is closed in direction $\xi=-\frac{\alpha}{\beta}$. \end{lemma} \begin{proof} For any $T$ as above, equation~\eqref{eq:main} takes the form $\alpha t +\beta (z-u)=0$. Therefore if $S$ is $T_{CH}$-invariant and $u\in S$, then $z(t)=u-\frac{\alpha}{\beta}t$ belongs to $S$ for all non-negative $t$. The latter condition coincides with the requirement that $S$ is closed in direction $\xi=-\frac{\alpha}{\beta}$. \end{proof} In particular, in the above notation the $T_{CH}$-invariant set generated by a point $u\in \bC$ coincides with the closed ray starting at $u$ with direction vector $\xi$. \subsubsection{Case when either $P$ or $Q$ vanishes identically} Note that for constant coefficients, using the notation of \S~\ref{subsublinear}, if at least one of $\alpha$ and $\beta$ is equal to zero, then any closed subset of $\mathbb{C}$ is $T_{CH}$-invariant. If however $P\equiv 0$ and $\deg Q\ge 1$, then a set is $T_{CH}$-invariant if and only if it contains the roots of $Q$. In the same vein, if $Q\equiv 0$ and $\deg P\ge 1$, then a set is $T_{CH}$-invariant if and only if it contains the roots of $P$. \subsubsection{Case when $Q=\alpha(z-z_0),P=-\delta\alpha$ and $\delta>0$}\label{ssec:special} In this very special situation, set $u=z_0$ and consider the roots of \[t\alpha(z-z_0)-\delta\alpha(z-u)=0 \iff (t-\delta)(z-z_0)=0.\] When $t=\delta$, it is not immediately clear what the roots of this equation are supposed to be.
In order to make this case consistent with the others and comply with \cref{th:charact} below, we define the roots of the equation to be the whole of $\bC$, so that $\minvset{CH}=\bC$. \medskip In what follows (unless explicitly mentioned) we shall assume that $Q$ and $P$ have no common roots and that $T$ is not as in \cref{ssec:constant}--\ref{ssec:special}. \section{Root trails and their properties} \label{sec:roottrails} In this section we introduce a number of technical tools to be used throughout the paper. \begin{definition} For an operator $T$ given by \eqref{eq:1st}, a complex number $u$ and $t\ge 0$, we define the \defin{root divisor} $\mathfrak{tr}_u(t)$ of the pair $(u,t)$ as the set of all solutions of \eqref{eq:main} considered in $\setRS$, and the \defin{root trail} $\mathfrak{tr}_u$ of $u$ as the closure in $\setRS$ of the union $\cup_{t\ge 0}\mathfrak{tr}_u(t)$. \end{definition} Observe that for any complex number $u$, the root divisor $\mathfrak{tr}_u(0)$, which we call the \defin{initial divisor}, contains $u$ and the zero locus of $P(z)$ (together with $\infty\in \setRS$ if $\deg Q> \deg P+1$). Further, the root divisor $\mathfrak{tr}_u(\infty)=\lim_{t\to +\infty}\mathfrak{tr}_u(t)$, which we call the \defin{final divisor}, contains the zero locus of $Q(z)$ (together with $\infty\in \setRS$ if $\deg P\ge \deg Q$). \begin{definition} We say that a number $u\in \bC$ is \defin{$T$-generic} if for all $t>0$, the root divisors $\mathfrak{tr}_u(t)\subset \setRS$ are simple, i.e., have no multiple roots. By definition, for a generic $u$, its \defin{open trail} $\mathfrak{tr}_u^\circ=\cup_{t>0}\mathfrak{tr}_u(t)$ splits into $N\coloneqq\max(\deg Q, \deg P+1)$ smooth non-intersecting connected components which we call the \defin{$t$-trajectories} of $u$. \end{definition} \begin{definition} For any fixed $u$, we call a continuous function $\gamma(u,\cdot): [0, +\infty)\to \setRS$ a \defin{$t$-trace} if $\gamma(u,t)$ solves \eqref{eq:main} for each $t\geq 0$.
The notation $\gamma(t)$ will also be used, and we say that a $t$-trace $\gamma(t)$ corresponds to $u\in \bC$ if it solves \eqref{eq:main}. \end{definition} \begin{figure}[!ht] \begin{center} \includegraphics[width=0.7\textwidth]{separatrices-pix} \end{center} \caption{For $Q(z) = (z + 1) (z - i)$, $P(z)= 2 z + i$, we show a zero $z_0 = -i/2$ of $P(z)$, two separatrices emerging from $z_0$ (black, thin), and $t$-trajectories originating at $z_0$ (blue, thick). } \end{figure} Below we will describe the set of $T$-non-generic complex numbers $u$ and the subset of $\bC$ where such $\mathfrak{tr}_u$ are non-generic. It is somewhat surprising that this set is a part of the so-called \emph{curve $\mathfrak{I}_R$ of inflection points} of the vector field $R{\partial_z}$ where $R(z)=\frac{Q(z)}{P(z)}$, see \S~\ref{ssec:inflections}. \subsection{Non-generic root trails} Interpreting $t$ as time, we will show that $t$-traces form a time-dependent flow in $\mathbb{C}$, and we will find explicitly the singular time-dependent vector field in the space $\mathbb{C}\times \setR_{\ge 0}$ which generates this flow. The time-dependent singularities of this field are closely related to non-generic root trails, i.e., those which do not split into separate $t$-trajectories over the half-line $t>0$. For the next result and throughout the text we will use the notation $\mathcal{Z}(f)$ for the set of zeros of the function $f$, with the convention that $\mathcal{Z}(f)=\emptyset$ if $f\equiv 0$. (To fully understand the result below, the reader may want to take a brief look at \S~\ref{ssec:inflections}, which introduces and describes the plane curve consisting of the inflection points of trajectories of the vector field given by an arbitrary analytic function in $\bC$.)
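The root divisors of the operator from the figure above can be traced numerically. The following sketch (our code, not the authors'; it relies on \texttt{numpy}) checks that the initial divisor $\mathfrak{tr}_u(0)$ consists of $u$ and the zero $-i/2$ of $P$, while for large $t$ the divisor approaches the zeros $-1$ and $i$ of $Q$.

```python
import numpy as np

# Root divisors tr_u(t) for the operator of the figure:
# Q(z) = (z + 1)(z - i),  P(z) = 2z + i   (our numerical sketch).
i = 1j

def divisor(u, t):
    # coefficients (in z) of t*Q(z) + (z - u)*P(z)
    #   = (t + 2) z^2 + (t(1 - i) + i - 2u) z + (-i t - i u)
    return np.roots([t + 2.0,
                     t * (1.0 - i) + i - 2.0 * u,
                     -i * t - i * u])

u = 0.7 + 0.2j                                          # an arbitrary starting point
init = sorted(divisor(u, 0.0), key=lambda z: z.real)    # {u} together with Z(P)
final = sorted(divisor(u, 1e7), key=lambda z: z.real)   # approaches Z(Q) = {-1, i}
```

Sampling `divisor(u, t)` over a fine grid of $t$ traces out the (blue) $t$-trajectories of the figure.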
\begin{lemma}\label{le:genericu} {\rm (i)} For any operator $T$ given by \eqref{eq:1st}, a starting point $u\in \bC$ is generic, i.e., its root divisors $\mathfrak{tr}_u(t)$ are simple for all positive $t$, if and only if the rational function $\Psi_u(z)=\frac{(u-z)P(z)}{Q(z)}$ has no positive critical values. \noindent {\rm (ii)} The set $\Theta_T\subset \bC$ consisting of points $z_0$ such that there exists $u\in \bC$ for which $z_0$ is a critical point of $\Psi_u$ having a positive critical value coincides with the negative part of the curve of inflections, i.e. with $\mathfrak{I}_R^-\subset \bC$. \noindent {\rm (iii)} The set $\theta_T \subset \bC$ consisting of all $T$-non-generic $u$ is the image of $\mathfrak{I}_R^-\subset \bC$ under the mapping $z\mapsto u$ given by $u=z+\frac{PQ}{P^\prime Q-Q^\prime P}$. \end{lemma} \begin{proof} To settle (i), notice that for a fixed starting point $u$, its root divisor $\mathfrak{tr}_u(t)$ is given by equation \eqref{eq:main}, which is equivalent to \begin{equation}\label{eq:t} t=\frac{(u-z)P(z)}{Q(z)}. \end{equation} Thus for $u$ fixed, the roots of the latter equation (w.r.t. the variable $z$) for distinct values of $t$ belong to different level sets of the rational function $\frac{(u-z)P(z)}{Q(z)}$ and are therefore necessarily disjoint. It might however happen that for a fixed $t_0$, some of the roots of \eqref{eq:t} are multiple, which corresponds to the case when $t_0$ is a critical value of $\frac{(u-z)P(z)}{Q(z)}$. This occurs, for example, for $t=0$ if $u$ is chosen to be a root of $P(z)$. If, for a given $u$, no $t>0$ is a critical value of $\frac{(u-z)P(z)}{Q(z)}$, then $\mathfrak{tr}_u^\circ$ is not self-intersecting, implying that $u$ is generic. Item (i) follows. Notice that if $Q(z)$ has a multiple root then for any choice of $u$, $\mathfrak{tr}_u(\infty)$ has a multiple root. If all roots of $P(z)$ are simple then for any $u$ which is not a root of $P(z)$, $\mathfrak{tr}_u(0)$ has no multiple roots.
(If $\deg Q>\deg P+1$ then $\deg Q - \deg P -1$ roots of $\mathfrak{tr}_u$ will be coming from $\infty$.) To settle (ii), notice that for $u$ fixed, \[ {\frac{d}{dz}}\Psi_u(z)=-\frac{P}{Q}+(u-z)\left( \frac{P}{Q} \right)^\prime, \] where the symbol $^\prime$ means that the derivative is taken w.r.t. $z$. The critical points of $\Psi_u(z)$ satisfy the equation \[ u\rho^\prime=z\rho^\prime+\rho \iff u=z+\frac{\rho}{\rho^\prime}=z+\frac{PQ}{P^\prime Q- Q^\prime P}, \] where $\rho(z)=\frac{P(z)}{Q(z)}=\frac{1}{R(z)}$. Thus if we fix $z_0$ and want to find $u_0$ such that $z_0$ is a critical point of $\Psi_{u_0}(z)$ then we should take $u_0=z_0+\frac{\rho(z_0)}{\rho^\prime(z_0)}$. Let us now calculate the critical value of $\Psi_{u_0}(z)$ at $z_0$. We get \[ \Psi_{u_0}(z_0)=(u_0-z_0)\rho(z_0)=\frac{\rho(z_0)}{\rho^\prime(z_0)}\rho(z_0)= \frac{P^2(z_0)}{P^\prime (z_0) Q(z_0)- Q^\prime (z_0)P(z_0)}=- \frac{1}{R^\prime(z_0)}. \] The requirement that $\Psi_{u_0}(z_0)$ is positive is equivalent to the requirement that $R^\prime(z_0)$ is negative, which by \cref{lm:infl} defines the negative part of the curve of inflections $\mathfrak{I}_R^-$. \smallskip Finally, to settle (iii), we already noted that for a given $z_0$, to make it a critical point of $\Psi_{u_0}(z)$ one should take $u_0=z_0+\frac{\rho(z_0)}{\rho^\prime(z_0)}$. Thus the set $\theta_T$ of all non-generic $u$ is obtained from the set $\Theta_T$ under the latter mapping. \end{proof} Observe that for any pair $(z_0,t_0)$, where $z_0$ is an arbitrary complex number different from a root of $P(z)$ and $t_0> 0$, there exists a unique $u_0$ such that $\gamma(u_0,t_0)=z_0$. Indeed if $z_0 $ is not a root of $P(z)$, then the starting point $u_0$ is given by $z_0+t_0R(z_0)$. (This circumstance will be very important later when we introduce the notion of associated rays). 
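The mapping $z_0\mapsto u_0$ used in the proof above can be sanity-checked numerically. The following sketch (assuming Python with numpy, which is external to the paper) takes the polynomials $Q(z)=(z+1)(z-i)$, $P(z)=2z+i$ from the figure, forms $u_0=z_0+\rho(z_0)/\rho'(z_0)$ for an arbitrarily chosen $z_0$, and verifies both that $z_0$ is then a critical point of $\Psi_{u_0}$ and that the critical value equals $-1/R'(z_0)$ (whether or not this value is positive).

```python
import numpy as np

# Q(z) = (z+1)(z-i), P(z) = 2z+i; rho = P/Q, R = Q/P.
Q = np.poly1d([1, 1 - 1j, -1j])
P = np.poly1d([2, 1j])

def R(z):
    return Q(z) / P(z)

z0 = 0.3 + 0.2j                                   # arbitrary test point
rho = P(z0) / Q(z0)
drho = (P.deriv()(z0) * Q(z0) - Q.deriv()(z0) * P(z0)) / Q(z0) ** 2
u0 = z0 + rho / drho                              # the map z |-> u of the proof

Psi = lambda z: (u0 - z) * P(z) / Q(z)            # Psi_{u0}
h = 1e-6
dPsi = (Psi(z0 + h) - Psi(z0 - h)) / (2 * h)      # numerical Psi'(z0), should vanish
dR = (R(z0 + h) - R(z0 - h)) / (2 * h)            # numerical R'(z0)
print(abs(dPsi), Psi(z0), -1 / dR)                # critical value vs. -1/R'(z0)
```
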
In other words, equation~\eqref{eq:main} is solvable for any $t_0>0$ and any $z_0$ different from roots of $P(z)$ and it is unsolvable for $t>0$ and $z_0$ being a root of $P(z)$. For $t=0$, equation \eqref{eq:main} is solvable for all $z_0$ including the roots of $P(z)$. \begin{remark} Notice that the root divisors $\mathfrak{tr}_{u_1}(t), \mathfrak{tr}_{u_2}(t)$ of two starting points $u_1\neq u_2$ can be such that $\bC\supset \left(\mathfrak{tr}_{u_1}(t_1)\cap\mathfrak{tr}_{u_2}(t_2)\right)\neq \emptyset$ with either $t_1>0$ or $t_2>0$, but only if $t_1\neq t_2$. \end{remark} The following result holds. \begin{proposition} Given $u\in \bC$, the set of all $z \in \mathbb{C}\setminus \mathcal{Z}(Q)$ such that $t Q(z) + (z-u)P(z) =0$ has a solution for $t \geq 0$, is a real semi-algebraic curve given by \begin{equation} \Im\left( P(z) \overline{Q(z)} (z-u) \right) =0, \qquad \Re\left( P(z) \overline{Q(z)} (z-u) \right) \leq 0. \end{equation} \end{proposition} \begin{proof} Suppose $t Q(z) + (z-u)P(z) =0$. Multiplying both sides by $\overline{Q(z)}$ and solving for $t$ gives \[ t = -\frac{P(z)\overline{Q(z)}(z-u)}{ |Q(z)|^2 }. \] This expression must be real, so we get the first condition. Moreover, if we want $t \geq 0$, we need the second condition. \end{proof} \subsection{Time-dependent vector field} \begin{theorem}\label{th:addit} For any operator $T$ given by \eqref{eq:1st}, the following claims are valid. \noindent {\rm (i)} The vector field of the velocity vectors of the $t$-traces is given by \begin{equation}\label{eq:field} V(\gamma(t),t)\coloneqq\gamma'(t) =-\frac{R(\gamma (t))}{t R^\prime(\gamma (t))+1}, \end{equation} where $R(z)=\frac{Q(z)}{P(z)}$. \noindent {\rm (ii)} If the roots of $P(z)$ are simple and $P(z)$ and $Q(z)$ are coprime, then the roots of $Q(z)$ are attracting points of the vector field \eqref{eq:field} for all sufficiently large $t$ while the roots of $P(z)$ are the poles of \eqref{eq:field} for $t=0$. 
\noindent {\rm (iii)} In addition to the above time-independent zeros, the vector field \eqref{eq:field} has moving poles given by the equation $R^\prime(z)=-\frac{1}{t}$. (When $t\to 0^+$ two moving poles tend to every simple root of $P(z)$ and create a simple pole at this root when $t=0$.) \end{theorem} \begin{remark} Observe that the moving poles traverse the negative part $\mathfrak{I}_R^-$ of the curve of inflections, see \S~\ref{ssec:inflections}, which is exactly the set where different $t$-trajectories with the same starting point $u$ can ``merge''. \end{remark} \begin{proof} To settle (i) we invoke \eqref{eq:main}. Fixing $t\ge 0$, we want to find the derivative $\gamma^\prime(t)$ which will provide the formula for the vector field in question. The starting position $u$ for the trajectory under consideration is given by \begin{equation}\label{eq:begin} u=\gamma(t)+tR(\gamma(t)). \end{equation} Now take a small increment $t+\delta$ of time and substitute it instead of $t$ in \eqref{eq:main} together with the latter expression for $u$. Expanding \eqref{eq:main} and disregarding all terms containing $\delta$ in degrees higher than $1$ we get \[ (t+\delta)(Q(\gamma(t))+Q^\prime(\gamma(t))\gamma^\prime(t)\delta) \] \[ +\left(\gamma(t)+\gamma^\prime(t)\delta-\gamma(t)-\frac{tQ(\gamma(t))}{P(\gamma(t))}\right)(P(\gamma(t))+P^\prime(\gamma(t))\gamma^\prime(t)\delta)=0 \] which after simplification and division by $\delta$ gives \[ \gamma^\prime(t)\left(tQ^\prime(\gamma(t))+P(\gamma(t))-t\frac{Q(\gamma(t))P^\prime(\gamma(t))}{P(\gamma(t))}\right)=-Q(\gamma(t)). \] We get \begin{equation}\label{eq:zeross} \gamma^\prime(t)=\frac{Q(\gamma(t))P(\gamma(t))}{t(Q(\gamma(t))P^\prime(\gamma(t))-Q^\prime(\gamma(t))P(\gamma(t)))-P^2(\gamma(t))}. \end{equation} Dividing the numerator and denominator in the above r.h.s.\ by $P^2(\gamma(t))$ we obtain the required expression \[ \gamma^\prime(t)=-\frac{R(\gamma(t))}{t R^\prime(\gamma(t))+1}.
\] \smallskip To settle (ii) assume that $z_0$ is a zero of $Q(z)$ and that $\gamma(t)=z_0$. Then for any $t>0$, the derivative $\frac{\partial V}{\partial z}(z_0,t)$ of the right-hand side of \eqref{eq:zeross} with respect to $z$, evaluated at $z_0$, equals \begin{equation}\label{eq:der} \frac{\partial V}{\partial z}(z_0,t)= \frac{(Q(z_0)P^\prime(z_0)+Q^\prime(z_0)P(z_0))\cdot Den - Q(z_0)P(z_0)\cdot Den^\prime}{Den^2}, \end{equation} where $Den$ denotes the denominator of the right-hand side of \eqref{eq:zeross}. Since $Q(z_0)=0$ the latter expression simplifies to \[ \frac{\partial V}{\partial z}(z_0,t)= -\frac{Q^\prime(z_0)P(z_0)}{tQ^\prime(z_0)P(z_0)+P^2(z_0)}. \] First, if $Q^\prime(z_0)P(z_0)=0$, then the expression above is $0$ since $P(z_0)\neq 0$. If however $Q^\prime(z_0)P(z_0)\neq 0$, then for $t$ large, the denominator of the latter formula becomes arbitrarily large and the absolute value of the r.h.s.\ becomes smaller than $1$. This implies that $z_0$ will be an attracting zero of the vector field for all sufficiently large $t$. Analogously, assume that $z_0$ is a zero of $P(z)$. Then for any $t>0$, the derivative $\frac{\partial V}{\partial z}(z_0,t)$ at $z_0$ will be given by the same right-hand side of \eqref{eq:der}. Since $P(z_0)=0$ the latter expression simplifies to \[ \frac{\partial V}{\partial z}(z_0,t)= \frac{Q(z_0)P^\prime(z_0)}{tQ(z_0)P^\prime(z_0)-P^2(z_0)}. \] Since $Q(z_0)P^\prime(z_0)\neq 0$, for large $t$ the denominator of the latter formula will become arbitrarily large implying that the absolute value of $\frac{\partial V}{\partial z}(z_0,t)$ will become arbitrarily small. Therefore $z_0$ will be an attracting zero of the vector field for all sufficiently large $t$. Additionally, notice that for $t=0$, equation \eqref{eq:zeross} simplifies to \[ \gamma'(0)=-\frac{Q(\gamma(0))}{P(\gamma(0))} \] implying that the roots of $P(z)$ are the poles of the vector field at $t=0$. \smallskip Finally, to settle (iii), let us find out when the denominator of $V(z,t)$ vanishes. Dividing by $P^2(z)$ we get the relation \[ t R^\prime(z)+1=0 \quad \iff \quad R^\prime(z) = -\frac{1}{t}<0.
\] \end{proof} Although the zeros of both $P(z)$ and $Q(z)$ are attracting for large $t$, if we choose a generic starting point $u$, every $t$-trajectory which is a part of the root trail starting at $u$ either approaches a zero of $Q(z)$ or $\infty$ when $t\to +\infty$. \subsection{Dependence of \texorpdfstring{$t$}{t}-trajectories on \texorpdfstring{$u$}{u}} Let $\gamma(u,t)$ be a $t$-trace corresponding to a starting point $u$. It depends on a complex parameter $u$ and a real/positive parameter $t$. In Theorem~\ref{th:addit} we have calculated the derivative of a $t$-trace with respect to the time parameter $t$. We will also need a partial derivative of $\gamma(u,t)$ with respect to the starting point $u$. \begin{lemma}\label{prop:deru} Suppose that $P(\gamma(u,t))\neq 0$ and $\gamma(u,t)\neq\infty$. Then \[ \frac{\partial\gamma(u,t)}{\partial u}=\frac{1}{tR'(\gamma(u,t))+1}. \] \end{lemma} \begin{proof} The $t$-trace $\gamma(u,t)$ satisfies the equation \[ tQ(\gamma(u,t))+(\gamma(u,t)-u)P(\gamma(u,t))=0. \] Differentiating with respect to $u$ we obtain \begin{equation*} tQ'(\gamma(u,t))\frac{\partial\gamma(u,t)}{\partial u}+\frac{\partial\gamma(u,t)}{\partial u}P(\gamma(u,t)) \end{equation*} \begin{equation*} +(\gamma(u,t)-u)P'(\gamma(u,t))\frac{\partial\gamma(u,t)}{\partial u}=P(\gamma(u,t)). \end{equation*} After the substitution $\gamma(u,t)-u=-tR(\gamma(u,t))$, this yields \[ \frac{\partial\gamma(u,t)}{\partial u}=\frac{P(\gamma(u,t))}{tQ'(\gamma(u,t))+P(\gamma(u,t))-tR(\gamma(u,t))P'(\gamma(u,t))} =\frac{1}{tR'(\gamma(u,t))+1}. \] \end{proof} \begin{remark} If $u$ is generic and $t>0$ then $\frac{\partial\gamma(u,t)}{\partial u}$ exists and is non-zero. In particular, for a given point $z$, there is a family of $t$-traces defined by the condition $\gamma(u,t)=z$, $u=z+tR(z)$. For all but at most one pair $(u,t)$ such that $u=z+tR(z)$, the derivative $\frac{dz}{du}\coloneqq \frac{\partial \gamma(u,t)}{\partial u}$ exists and is non-zero.
\end{remark} \subsection{Behavior of \texorpdfstring{$t$}{t}-trajectories near roots of \texorpdfstring{$P$}{P}} We now show how the $t$-traces behave when $t\to 0$. \begin{proposition}\label{prop:rootTrajectoryDirections} Suppose that $P$ and $Q$ have no common zeros and that $P(z)=(z-z_0)^m G(z)$ where $G(z_0) \neq 0$ and $m \geq 1$. Consider the Laurent series expansion \[ R(z) = \sum_{j\geq -m} b_j (z-z_0)^j \text{ where } b_{-m} = \frac{Q(z_0)}{G(z_0)}. \] \smallskip \noindent The following facts hold: \noindent {\rm (i)} If $\gamma(t)$ solves the equation $t Q(z)+(z-u)P(z)=0$, where $\gamma(0) = z_0\neq u$ and $\eta(t)\coloneqq\gamma(t^m)$ then \[ (\dot{\eta}(0))^m = -\frac{Q(z_0)}{(z_0-u)G(z_0)}. \] In particular, if $m=1$, we have \[ \dot{\eta}(0) = -\frac{Q(z_0)}{(z_0-u)P'(z_0)}. \] \noindent {\rm (ii)} If $\gamma(t)$ solves $t Q(z)+(z-u)P(z)=0$, where $\gamma(0)=z_0=u$, and $\eta(t)\coloneqq \gamma(t^{m+1})$ then \[ (\dot{\eta}(0))^{m+1} = -\frac{Q(z_0)}{G(z_0)}. \] \end{proposition} \begin{proof} To settle (i), note that $\eta(t)$ solves $t^{m}Q(z) + (z-u)P(z)=0$. Consider the Taylor expansion $\eta(t) = z_0 + a t + O(t^2)$ and substitute it in the latter relation. We get \[Q\left( \eta(t) \right) + \frac{(\eta(t)-u)(\eta(t)-z_0)^{m}G(\eta(t))}{t^{m}} = 0,\] which implies \[Q\left( \eta(t) \right) + a^{m}(\eta(t)-u)G(\eta(t)) + O(t)G(\eta(t)) = 0.\] Taking the limit when $t\to 0$ we obtain \[ a^m = -\frac{Q(z_0)}{(z_0-u)G(z_0)}. \] Arguing in a similar way to settle (ii), note that $\eta(t)$ solves $t^{m+1}Q(z) + (z-u)P(z)=0$. Again consider the Taylor expansion $\eta(t) = u + a t + O(t^2)$ and substitute it in this relation. We get \[Q\left( \eta(t) \right) + \frac{(\eta(t)-u)^{m+1}G(\eta(t))}{t^{m+1}} = 0\] which implies \[Q\left( \eta(t) \right) + a^{m+1}G(\eta(t)) + O(t)G(\eta(t)) = 0.\] Taking the limit $t\to 0^+$ we obtain $a^{m+1} = -Q(u)/G(u)$. 
\end{proof} Item (ii) above shows that the directions of the $t$-trajectories originating at $u=z_*\in \mathcal{Z}(P)$ agree with the directions of the root-starting separatrices of the vector field $-R(z){\partial_z}$ (cf.\ \cref{prop:dirSeptrix}). \begin{example}\label{ex:rootTrajectoryDirections} Set $Q(z)=z (z + \frac{1 + i}{3}) (z^2 + \frac14)$ and $P(z) = (z - 1)^2 (z - i)$. In \cref{fig:tTrajectoriesExample}, we show the vector field $-R(z){\partial_z}$ together with the $t$-trajectories of the zeros of $P$; the $t$-trajectories associated with the double root $u=1$ are shown in black, while the ones with $u=i$ are shown in blue. \begin{figure}[H] \includegraphics[width=0.4\textwidth]{tTrajectoriesExample} \includegraphics[width=0.4\textwidth]{tTrajectoriesExampleZoomed} \caption{The vector field $-R(z){\partial_z}$ and $t$-trajectories for the above $P$ and $Q$ whose zeros are encircled. The figure on the right is a closeup near $z = 1$. Note how close the black curves are to being separatrices of $-R(z){\partial_z}$ near this point. }\label{fig:tTrajectoriesExample} \end{figure} \end{example} \subsection{Unbounded $t$-traces} \begin{lemma}\label{le:rootsStartingInInfty} Let $\deg Q-\deg P-1=K-1>0$ and denote the leading coefficients of $Q$ and $P$ by $q_n$ and $p_m$ respectively. Then for sufficiently small $t_0$, and independently of $u$, there exist $L\coloneqq K-1$ components of $\mathfrak{tr}_u((0,t_0))$ that start at $\infty$. The arguments of the $t$-traces that tend to $\infty$ as $t\to 0^+$ are equal to the arguments of the solutions to \[ z^L+\frac{p_m}{q_n}=0. \] \end{lemma} \begin{proof} The fact that there are $L$ components of $\mathfrak{tr}_u((0,t_0))$ emanating from $\infty$ is evident from the fact that \cref{eq:main} has $\deg P+1$ solutions in $\bC$ for $t=0$ and $\deg Q$ solutions for $t>0$. Now, suppose that $\gamma(t)$ is a $t$-trace such that $t\to 0$ yields $\gamma(t)\to \infty$.
We have that \[ t=-\frac{(\gamma(t)-u)P(\gamma(t))}{Q(\gamma(t))}. \] For small $t$, this becomes \[ t=-\frac{p_m}{q_n \gamma(t)^L}+ \mathcal{O}(\gamma(t)^{-(L+1)})\quad \text{as $\gamma(t)\to \infty$}. \] We find that $|\gamma(t)|$ grows as $t^{-1/L}$ when $t\to 0$. This yields \[ t\gamma(t)^L+\frac{p_m}{q_n }=\mathcal{O}(\gamma(t)^{-1})\quad \text{as $\gamma(t)\to \infty$}. \] Taking $t\to 0^+$, we get the required result. \end{proof} \begin{lemma}\label{le:rootsEndingInInfty} Assume that $\deg Q-\deg P-1=-L<0$ and denote the leading coefficients of $Q$ and $P$ by $q_n$ and $p_m$ respectively. Then for sufficiently large $t_0$ and independently of $u$, there exist $L$ components of $\mathfrak{tr}_u((t_0,\infty))$ which end at $\infty$. The arguments of the $t$-traces that tend to $\infty$ as $t\to \infty$ are equal to the arguments of the solutions to \[ z^L+\frac{q_n}{p_m}=0. \] \end{lemma} \begin{proof} The fact that there are $L$ components of $\mathfrak{tr}_u((t_0,\infty))$ going to $\infty$ is evident from \cref{eq:factor} since for $t\in \mathbb R$, the RHS has $\deg P+1$ solutions and as $t\to \infty$, $\deg Q$ roots will tend to the roots of $Q$ and the others will tend to $\infty$. Now, suppose that $\gamma(t)$ is a $t$-trace such that $\gamma(t)\to \infty$ when $t\to \infty$. We have that \[ t=-\frac{(\gamma(t)-u)P(\gamma(t))}{Q(\gamma(t))}. \] For large $t$, this becomes \[ t=-\frac{p_m \gamma(t)^L}{q_n}+ \mathcal{O}(\gamma(t)^{(L-1)})\quad \text{as $\gamma(t)\to \infty$}. \] Then it follows that for large $t$, $|\gamma(t)|$ grows as $t^{1/L}$. Hence, we obtain \[ t\gamma(t)^{-L}+\frac{p_m}{q_n }=\mathcal{O}(t^{-1/L})\quad \text{as $\gamma(t)\to \infty$}. \] Taking $t\to \infty$, we get the required result. \end{proof} \section{Properties of $T_{CH}$-invariant sets, associated rays, and regularity} \label{sec:properties} In this section we study some general properties of $T_{CH}$-invariant sets. As before, set $R(z)=\frac{Q(z)}{P(z)}$.
\begin{definition} In the above notation, for a point $p\in \mathbb{C}$, define its \defin{associated ray} $r_p\coloneqq \{p+t R(p)\}$, where $t\in [0,\infty)$. Observe that $r_p$ is well-defined unless $p$ is a pole of $R(z)$. Additionally, if $p\in \mathcal{Z}(R)$ then $r_p$ is merely a point. \end{definition} \begin{lemma}\label{lm:ray} Any point $p\in \mathbb{C}$ which is not a pole of $R(z)$ lies on the root trail $\mathfrak{tr}_u$ if and only if $u\in r_p$. \end{lemma} \begin{proof} Indeed, under our assumptions $p$ solves the equation $tQ(p)+(p-u)P(p)=0$ for some non-negative $t$. If $P(p)\neq 0$, i.e., $p$ is not a pole of $R(z)$ then \[ u-p=tR(p) \iff u=p+tR(p) \] which means that $u\in r_p$. \end{proof} Note again that unless $p$ is a root of $R(z)$ the ray $r_p$ does not degenerate to a point. \subsection{Characterization of $T_{CH}$-invariant sets in terms of associated rays} We begin this subsection with the following observation about the associated rays of points in the complement to a $T_{CH}$-invariant set. \begin{theorem}\label{th:charact} For an operator $T$ given by \eqref{eq:1st} with $P(z)$ and $Q(z)$ not identically vanishing, $S\subset \bC$ is a $T_{CH}$-invariant set if and only if {\rm (i)} $S$ is closed; {\rm (ii)} $S$ contains $\mathcal{Z}(P)$; {\rm (iii)} the associated rays of all points in $S^c\coloneqq\bC\setminus S$ are contained in $S^c$. \end{theorem} Theorem~\ref{th:charact} applies even when $P(z)$ and $Q(z)$ are non-vanishing constants, in which case item (ii) is empty. \begin{proof} Items (i) and (ii) are trivial. Indeed, a $T_{CH}$-invariant set must be closed by definition. Moreover, in \cref{prop:basic}, for an operator $T$ given by \eqref{eq:1st} with non-constant and not identically vanishing $Q(z)$ and $P(z)$, we have established existence and uniqueness of its minimal $T_{CH}$-invariant set $\minvset{CH}$. We have also shown that $\minvset{CH}$ must contain all the zeros of $Q(z)$ and $P(z)$.
To settle the necessity of item (iii) for the $T_{CH}$-invariance of $S$, notice the following. If $S$ is $T_{CH}$-invariant then no point in $S^c$ lies on the root trail $\mathfrak{tr}_u$ of a point $u\in S$. Observe that $S^c$ is open in $\mathbb{C}$. Since $S$ contains all roots and all poles of $R(z)$, we get by \cref{lm:ray} that the ray $r_p$ of any point $p\in S^c$ must completely lie in $S^c$. To prove the sufficiency of items (i) -- (iii) for the $T_{CH}$-invariance of $S$, we argue by contradiction. Assume that $S$ is not $T_{CH}$-invariant although (i) -- (iii) hold. This means that there exists a point $u_0\in S$ such that its root trail $\mathfrak{tr}_{u_0}$ leaves $S$. In other words, there are a point $p\in S^c$ and some $t\geq 0$ such that \[tQ(p)+(p-u_0)P(p)=0.\] By (ii), $p\notin \mathcal{Z}(P)$, so $p+tR(p)=u_0$. This contradicts item (iii). \end{proof} Using the notation in \cref{re:affineChange}, note that under the affine change of variables $z \mapsto aw +b$, the associated ray $\{z_0+tR(z_0):t\geq 0\}$ is mapped to \[ \left\{\frac{z_0-b}{a}+\frac{t}{a}R(z_0):t\geq 0 \right\}=\{w_0+ t\hat{R}(w_0):t\geq 0\}, \] where $z_0 = aw_0+b$. The family of affine maps is precisely the family which sends straight lines to straight lines. We also have the following alternative formulation of \cref{th:charact}. \begin{corollary} A set $S$ is $T_{CH}$-invariant if and only if its complement $S^c$ is open, non-empty, does not contain $\mathcal{Z}(P)$ and is forward-invariant under the family of maps \[ \{z \mapsto z+tR(z), \; t\geq 0\}. \] If neither $Q(z)$ nor $P(z)$ is identically zero and at least one of them has positive degree, there exists a maximal open subset $S^c$ with the above property and its complement coincides with $\minvset{CH}$. \end{corollary} \subsection{Characterization of certain regular $T_{CH}$-invariant sets} We now focus on the associated rays of boundary points of a set $S\subset \bC$.
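The forward-invariance criterion of \cref{th:charact} and its corollary can be illustrated on the simplest operator with constant $P$: for $T=z\frac{d}{dz}+1$ (so $Q(z)=z$, $P(z)=1$, $R(z)=z$; this toy operator is chosen here purely for illustration and is not one of the examples in the text), the maps $z\mapsto z+tR(z)=(1+t)z$ move points radially outward, so the complement of the closed unit disk is forward-invariant and the disk is $T_{CH}$-invariant. A quick numerical check (assuming Python with numpy):

```python
import numpy as np

# Toy illustration: Q(z) = z, P(z) = 1, hence R(z) = z and Z(P) is empty.
# The rays r_p = {p + t*p : t >= 0} are radial, so the complement of the
# closed unit disk is forward-invariant under z -> z + t*R(z) = (1 + t)*z.
R = lambda z: z

rng = np.random.default_rng(0)
# sample points strictly outside the closed unit disk, and times t >= 0
radii = 1 + rng.uniform(0.01, 2, 200)
pts = radii * np.exp(2j * np.pi * rng.uniform(0, 1, 200))
ts = rng.uniform(0, 10, 200)
images = pts + ts * R(pts)
print(np.min(np.abs(images)))   # stays > 1: the rays never re-enter the disk
```
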
First we need the following definition. \begin{definition}\label{def:regularSets} For a subset $S\subset \bC$, denote by $S^\circ$ the set of all its interior points. The set $\defin{\partial S^\circ} \coloneqq \overline{S^\circ} \setminus S^\circ$ is called the \emph{essential boundary} of $S$. Points in $\overline{S^\circ}$ are called \defin{regular} and points in $S\setminus \overline{S^\circ}$ are called \defin{irregular} points of $S$. We say that $S\subset \bC$ is (topologically) \defin{regular} if all its points are regular, i.e., $S=\overline{S^\circ}$ (which implies that $\partial{S}=\partial{S}^\circ$). Otherwise, the set $S$ is called \defin{irregular}. Obviously all points of $S$ are irregular if and only if $S^\circ$ is empty. \end{definition} \begin{remark} In order to avoid trivialities, we will focus our attention on irregular invariant sets that are not singletons. If $T$ is not as in \cref{ssec:trivial}, then these appear only if $R(z){\frac{d}{dz}}=\lambda z \partial_z$ where $\lambda\notin \mathbb{R}_{-}$. Moving forward, when referring to an irregular invariant set, we will assume it is not a singleton. \end{remark} \begin{proposition}\label{prop:notinv} Let $S\subset \bC$ be a set for which there exist $p\in \partial S$ and $u\in S^\circ$ with $u =p+t_0R(p)$ for some $t_0>0$, where (as above) $S^\circ$ denotes the interior of $S$. Then for all sufficiently small open neighborhoods $U_p\subset \mathbb{C}$ of $p$ and any $z_0\in U_p$, there exists a $t$-trace $\gamma(t)$ solving \[ tQ(\gamma(t))+(\gamma(t)-u_0)P(\gamma(t))=0 \] where $\gamma(t_1)=z_0$ for some $t_1>0$ and $u_0\in S$. In particular, $S$ is not $T_{CH}$-invariant. \end{proposition} \begin{proof} First, if $t_0=-1/R'(p)$ then, since $S^\circ$ is open, we may replace $t_0$ by a nearby value (still denoted $t_0$) such that $p+t_0R(p)=u_0\in S^\circ$ and $t_0\neq-1/R'(p)$; such a value always exists.
Let $\gamma_0(u_0,t)$ be a $t$-trace such that $\gamma_0(u_0,t_0)=p$ and that \[ tQ(\gamma_0(u_0,t))+(\gamma_0(u_0,t)-u_0)P(\gamma_0(u_0,t))=0. \] Using \cref{prop:deru}, we see that \[ \frac{\partial\gamma_0(u,t)}{\partial u}\Big|_{(u,t)=(u_0,t_0)}\neq 0. \] The statement now follows, see illustration in \cref{fig:extRay2}. \end{proof} \begin{figure} \begin{center} \includegraphics[width=0.35\textwidth,page=2]{tikzFigures} \end{center} \caption{ In this example, there exists a point $p$ on the boundary of $S$ such that $r_p$ contains $u$. This means that there is some $t$-trajectory originating from a point in $\mathcal{Z}(P)\cup\{u\}$ which passes through $p$ for some $t>0$ and $t Q(p)+(p-u)P(p)=0$. Then we can find $u_0$ near $u$ such that the solution $p$ moves to $w \notin S$. }\label{fig:extRay2} \end{figure} \begin{proposition}\label{prop:raysGiveCHinv} Suppose that $S\subset \bC$ is a regular set such that {\rm (i)} $\mathcal{Z}(PQ)$ lies in $S^\circ$; {\rm (ii)} for every point $p\in \partial S$, the associated ray $r_p$ lies in $\overline{S^c}$. Then $S$ is $T_{CH}$-invariant. \end{proposition} \begin{proof} Suppose that $S$ is not $T_{CH}$-invariant. Then there exist a $u_0\in S$ and a corresponding $t$-trace $\gamma_0(t)$ such that $\gamma_0(0) \in S$ and $\gamma_0(t_0)\notin S$ for some $t_0>0$. Let $\epsilon_0>0$ be the distance between $\gamma_0(t_0)$ and $S$. Note that the roots of \[ tQ(z)+(z-u)P(z)=0 \] depend continuously on the coefficients. Moreover, since $u_0 \in S$ and $S$ is regular, for any neighborhood $U$ of $u_0$ there is a point $u_1 \neq u_0$ in $S^\circ \cap U$. Hence, if $u_0 \notin S^\circ$, we can choose $u_1$ close enough to $u_0$ to guarantee that there is a solution $z_1$ of \[ t_0 Q(z)+(z-u_1)P(z)=0 \] whose distance to $\gamma_0(t_0)$ is at most $\epsilon_0/2$. In particular, $z_1 \notin S$. But then there is a $t$-trace $\gamma_1(t)$ such that \[ tQ(\gamma_1(t))+(\gamma_1(t)-u_1)P(\gamma_1(t))=0.
\] Moreover, since all zeros of $PQ$ lie in $S^\circ$ and any $t$-trace has a startpoint or an endpoint among the zeros of $PQ$, it follows that one or both of $\gamma_1(0)$ and $\lim_{t\to \infty} \gamma_1(t)$ belong to $ S^\circ$. Furthermore, for a given $t$-trace $\gamma(t)$, there is at most one $\tau\in [0,\infty]$ such that $\lim_{t\to \tau}\gamma(t)=\infty$. Hence, by continuity, there is a $t_1\in (0,\infty)$ such that $\gamma_1(t_1) \in \partial S$. But then \[ u_1 = \gamma_1(t_1) + t_1R(\gamma_1(t_1)) \] which contradicts the assumption that the associated rays of points in $\partial S \setminus \mathcal{Z}(P)$ belong to $\overline{S^c}$. \end{proof} In a similar fashion we can show the following result which will be useful when proving $T_{CH}$-invariance of certain sets. \begin{corollary}\label{cor:notPassingThrough} Suppose that $S\subset \bC$ is a regular set such that for every $p\in \partial S$, the associated ray $r_p$ lies in $\overline{S^c}$. Then a $t$-trace cannot begin in $S$ and enter the exterior $S^c$ through a point $p\in\partial S\setminus{\mathcal{Z}(P)}$. \end{corollary} Notice that because a $t$-trace can start at a zero of $P$ despite having $u\in S^\circ$, \cref{cor:notPassingThrough} cannot be extended to the whole boundary $\partial S$. \begin{example}[The ``lemniscate'' region]\label{ex:lemniscate} Consider the plane domain $S$ whose boundary $\partial S$ is parameterized by \[ r(\theta) = \pm\sqrt{ \left( \frac{\sin{2\theta}}{2\theta} \right)},\quad \theta\in [0,2\pi]. \] This curve is reminiscent of Bernoulli's \defin{lemniscate}, but it is only analytic and not algebraic, see \cref{fig:lemniscate}. One can also check that this ``lemniscate'' is a union of root-starting separatrices of the vector field $v_R=-\frac{z^3}{(z+1)(z-1)}{\partial_z}$. Additionally, all associated rays corresponding to $T = z^3{\frac{d}{dz}} + (z+1)(z-1)$ on the boundary $\partial S$ lie in the closure of the complement of $S$.
Therefore, removal of the left part (corresponding to the minus sign in the above formula) of $S$ leaves us with the domain $S_{\mathrm{right}}$ whose boundary is given by \[ r(\theta) =\sqrt{ \left( \frac{\sin{2\theta}}{2\theta} \right)},\quad \theta \in [0,2\pi]. \] The associated rays of the boundary of $S_{\mathrm{right}}$ also belong to the exterior of $S_{\mathrm{right}}$. However, $S_{\mathrm{right}}$ is not a $T_{CH}$-invariant set since it does not contain all roots of $P$. Moreover, $S_{\mathrm{right}} \cup\{-1\}$ is still not $T_{CH}$-invariant. This example shows that the assumptions of regularity and that $\mathcal{Z}(PQ)\subset S$ are crucial for the conclusion of \cref{prop:raysGiveCHinv}. We shall later see that the minimal invariant set of $T$ is in fact the ``lemniscate'' region $S_1$. \begin{figure}[!ht] \centering \includegraphics[width=0.5\textwidth]{lemniscate} \caption{``Lemniscate'' region.} \label{fig:lemniscate} \end{figure} \end{example} \begin{example}\label{compactre0} If $\deg Q=\deg P +1$ then there exists $\lambda\neq 0$ such that near $\infty$, \begin{equation*} R(z){\partial_z}= \frac{Q(z)}{P(z)}{\partial_z}=\left(\lambda z + \text{higher order terms in } \frac{1}{z}\right){\partial_z}. \end{equation*} Note that introducing the change of variables $z=1/w$ we obtain \begin{equation*} R(z){\partial_z} =-w^2R\!\left(\frac{1}{w}\right){\partial_w}=\left(-\lambda w + \text{higher order terms in } w\right){\partial_w} \end{equation*} near $w=0$, and so $R(z){\partial_z}$ has a zero of order 1 at $\infty$. If $\Re \lambda=0$, then $\infty$ is a center of the vector field $-R(z) {\partial_z}$. Hence, no separatrix goes to $\infty$ and we can consider the closure $S=\overline{\cup_{i=1}^N \septrix_i}$ of the union of all separatrices. (Recall that there are finitely many separatrices and therefore $S$ will be compact in $\bC$). Moreover, let $\mathcal{D}(S)\subset \bC$ be the domain that contains $S$ together with the region that $S$ bounds.
In other words, $\mathcal{D}(S)$ is the smallest simply connected (in $\mathbb{C}$) set that contains all the separatrices. Notice that such a set exists (and is regular if $\deg Q\geq 2$) since $\setRS\setminus \mathcal{D}(S)$ is a center zone and hence is simply connected in $\setRS$, see \cref{fig:ordering}. Since $\setRS\setminus \mathcal{D}(S)$ is a center zone, all the integral curves of $-R(z){\partial_z}$ near $\infty$ are closed curves with the same finite period $\tau$. We may therefore consider a total ordering on the integral curves between $\mathcal{D}(S)$ and $\infty$. We choose this ordering so that if integral curves $\phi_0(t)$ and $\phi_1(t)$ satisfy $\mathcal{D}(\phi_0[0,\tau])\subset \mathcal{D}(\phi_1[0,\tau])$, then $\phi_0(t)$ is smaller than $\phi_1(t)$. We denote by $\defin{\Psi}$ the minimal (with respect to the above ordering) integral curve in $\setRS\setminus \mathcal{D}(S)$ that encloses a convex region and by $\mathcal{D}(\Psi)$ the region that $\Psi$ encloses. Note that $\mathcal{Z}(PQ)$ is in the interior of $\mathcal{D}(\Psi)$ and, by \cref{lem:blueCurveCompact}, $\Psi$ always exists. By \cref{prop:raysGiveCHinv}, $\mathcal{D}(\Psi)$ is $T_{CH}$-invariant. Moreover, for any integral curve $\Phi<\Psi$, we can find a boundary point of $\mathcal{D}(\Phi)$ for which the associated ray intersects the interior of $\mathcal{D}(\Phi)$. Therefore by \cref{prop:notinv}, $\mathcal{D}(\Phi)$ is not $T_{CH}$-invariant. We shall later see that $\minvset{CH}$ is in fact equal to $\mathcal{D}(\Psi)$, see \cref{thm:relambda0}.
\begin{figure}[!ht] \begin{center} \includegraphics[width=0.5\textwidth]{infty-is-center} \end{center} \caption{We order integral trajectories in $\mathbb{C}\setminus \mathcal{D}(S)$ in such a way that the further out from the center of the picture an integral curve lies, the larger in this order it is.}\label{fig:ordering} \end{figure} \end{example} \subsection{Connectedness of $T_{CH}$-invariant sets}\label{sec:connect} In this section we discuss connectedness of $T_{CH}$-invariant sets. \begin{theorem} If $PQ\not\equiv 0$ and a $T_{CH}$-invariant set $S$ has a bounded component, then $S$ is connected. Furthermore, the closure of $S$ in $\setRS$ is connected. \end{theorem} \begin{proof} Suppose that $S_i$ is a bounded connected component of a $T_{CH}$-invariant set $S$. Consider a simply connected compact set $D$ with boundary $\partial D\subset S^c$ whose interior $D^\circ$ satisfies $D^\circ \cap S = S_i$. Since $S$ is $T_{CH}$-invariant, the set $A=\{z+tR(z):z\in \partial D, t\geq 0\}$ is disjoint from $S_i$. As $R$ is non-zero and continuous on $\partial D$, we have that $A\supset (D^\circ)^c$. By \cref{th:charact}, $S\cap (D^\circ)^c=\emptyset$. The first claim now follows. For the second claim, note that all unbounded components have $\infty$ in their closure. \end{proof} \begin{theorem}\label{th:simplyConnected} Each connected component $S_i$ of a $T_{CH}$-invariant set $S$ is simply connected. \end{theorem} \begin{proof} Suppose $S_i$ is connected, but not simply connected in $\bC$. Then there exists a simple closed curve $C\subset S_i$ whose interior has a point $z_0$ in the complement of $S$, see Figure~\ref{fig:simplyConn}. Since the zeros of $P$ and $Q$ lie in $S$, $R(z_0) \neq 0$ and there exists $t\geq 0$ such that $z_0 + t R(z_0) = u_0 \in S_i$. But this contradicts \cref{th:charact} and we readily conclude that $S_i$ is simply connected.
\begin{figure}[!ht] \centering \includegraphics[width=0.25\textwidth]{simplyConnected} \caption{A non-simply connected minimal set $S$ and a curve $C$ with an interior point $z_0$ not in $S$. } \label{fig:simplyConn} \end{figure} \end{proof} \section{Root-starting separatrices and $T_{CH}$-invariant sets} \label{sec:separatrices} In this section we discuss one general property of $T_{CH}$-invariant sets which will be especially useful when studying irregularity of these sets. Namely, we will show that the root-starting separatrices of the vector field $-R(z){\partial_z}$ (if they exist) belong to any $T_{CH}$-invariant set. \cref{prop:invSetClosedUnderForwardIntegralCurves} and its proof are quite similar to statements which can be found in several texts on numerical methods for differential equations, such as \cite{HairerErnst1993SODE}. (For a stronger result see \cite{Frank1981TheCO}.) Since we were not able to find the exact statement of \cref{prop:invSetClosedUnderForwardIntegralCurves} in the literature and for convenience of the readers, we include it and its proof below. \begin{proposition}\label{prop:invSetClosedUnderForwardIntegralCurves} Let $z(t)$ be a solution to the initial value problem \[ \begin{cases} z'(t)=-R(z)\\ z(0)=z_0 \end{cases} \] for which there exist $M>0$, $\epsilon>0$, and $s_0>0$ such that if $|z(t)-z|<\epsilon$ for some $t\in [0,s_0]$ then $|R(z)|<M, |R'(z)|<M, |R''(z)|<M$. Then, for any $s_1\in [0,s_0)$ and $\delta>0$, there exist $t_0>0$, a sequence $\{z_k\}_{k=0}^\infty$ where each $z_k$, $k\geq 1$, satisfies \[ t_0 Q(z_k)+(z_k-z_{k-1})P(z_k)=0 \] and a positive integer $N$ such that $|z(s_1)-z_N|<\delta$. Hence, if $z_0$ belongs to some $T_{CH}$-invariant set $S$ and $z(t)$ is finite for $t\in [0,s_1]$, then $z(s_1)\in S$ as well. \end{proposition} \begin{proof} We note that if $z(t)$ is not a separatrix then the above criterion of boundedness of the derivatives is fulfilled with $s_0=\infty$.
If it is a separatrix, but $z_0$ is not a root of $P$ while $\lim_{t\to s_2}z(t)$ is a root of $P$ for some $s_2>0$, then the above criterion is fulfilled for all $s_0\in (0,s_2)$ provided that $\epsilon>0$ is chosen sufficiently small and $M$ is sufficiently large. Now, let $z(t)$ be such a solution of the above initial value problem and define the sequence $\{z_k\}$ as follows. Consider the map \[ \varphi(z)=z+t_0R(z). \] (We may pick small enough $t_0$ so that $t_0M<1$.) Then $\varphi(z)$ has a non-vanishing derivative in the $\epsilon$-neighborhood $U$ of the curve segment $z([0,s_0])$ and therefore by the inverse function theorem, $\varphi$ has an inverse $\varphi^{-1}: \varphi(U) \to U$. We define \[ z_k:=\varphi^{-1}(z_{k-1}) \] and note that $z_k$ is one of the solutions to \[ t_0Q(z)+(z-z_{k-1})P(z)=0. \] Namely, we choose $z_k$ to be such a solution that if we fix $z_{k-1}$ and take the limit $t_0\to 0$, then $z_k\to z_{k-1}$. Now, we will inductively approximate $z((k+1)t_0)$. To that end, we consider \begin{equation}\label{eq:approximation} z(kt_0+\Delta t)=z(kt_0)-\Delta t\,R(z(kt_0+\Delta t))-r_{k} \end{equation} where $r_k$ is what is usually referred to as the \emph{local residual}. We can estimate its absolute value by using the Taylor expansions of $z(kt_0+\Delta t)$ and $z'(kt_0+\Delta t)=-R(z(kt_0+\Delta t))$ around $\Delta t=0$. Namely, \[z(kt_0+\Delta t)=z(kt_0)+\Delta t\, z'(kt_0)+\frac{\Delta t^2}{2} z''(kt_0)+O(\Delta t^3);\] \[z'(kt_0+\Delta t)=z'(kt_0)-\Delta t\,R'(z(kt_0))\,z'(kt_0)+O(\Delta t^2)=z'(kt_0)+\Delta t\,z''(kt_0)+O(\Delta t^2).\] We substitute these two relations in \eqref{eq:approximation}, let $\Delta t=t_0$ and use the equality $z'(kt_0+\Delta t)=-R(z(kt_0+\Delta t))$ to obtain \begin{equation}\label{eq:sizeoflocalres} |r_{k}|=\frac{t_0^2\,|z''(kt_0)|}{2}+O(t_0^3)\leq \frac{t_0^2 M^2}{2}+O(t_0^3)<t_0^2 M^2\end{equation} for small $t_0$. Next, as $t_0M<1$, $\varphi^{-1}$ is Lipschitz on $V\coloneqq\varphi(U)$.
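To justify the Lipschitz claim quantitatively, note that for $z\in U$ one has \[ |\varphi'(z)|=|1+t_0R'(z)|\geq 1-t_0|R'(z)|>1-t_0M>0, \] so the derivative of $\varphi^{-1}$ is bounded in absolute value by $(1-t_0M)^{-1}$ on $\varphi(U)$.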
Denoting the Lipschitz constant of $\varphi^{-1}$ in $V$ by $L$, note that $L\leq \frac{1}{1-t_0M}$. Now, recall that $z_0$ is in $U$ and we suppose that $z_l\in U $ for $ l=1,\dots,k-1$. Then \[ \varphi(z_{k})=z_{k-1}, \qquad \varphi(z(kt_0))=z((k-1)t_0)-r_{k-1}, \] and thus \[ |z_k-z(kt_0)|<L|z_{k-1}-z((k-1)t_0)+r_{k-1}|. \] Using the notation $e_k:=|z_k-z(kt_0)|$ and \eqref{eq:sizeoflocalres}, we get \[ e_{k}\leq \frac{e_{k-1}+|r_{k-1}|}{1-t_0M}=(e_{k-1}+|r_{k-1}|)\sum_{n=0}^\infty(t_0M)^n<e_{k-1}(1+2t_0M)+2|r_{k-1}| \] provided we pick $t_0$ small enough. Using \cite[Eq. 1--12]{henrici} together with $e_{k}\leq (1+2t_0M)e_{k-1}+2|r_{k-1}|$ and $e_0=0$ we get \[ e_k\leq 2\max_k |r_k|\,\frac{e^{2kt_0M}-1}{2t_0M}. \] Hence, \begin{equation}\label{eq:finalapprox} |z_k-z(kt_0)|< Mt_0(e^{2kt_0M}-1). \end{equation} In particular, fix $\delta>0$ and let $\delta_0=\min(\delta,\epsilon)$ and $s_1\in [0,s_0)$. For any $N\in \mathbb{N}$, let $t_{0,N}$ be such that $t_{0,N}N=s_1$. Now pick $N$ large enough so that $Mt_{0,N}( e^{2Nt_{0,N}M}-1)<\delta_0$. Then we have that $z_k\in U$ for $k\leq N$. Using $t_0=t_{0,N}$ we may thereby apply \eqref{eq:finalapprox} and by induction conclude that $|z_N-z(s_1)|<\delta$ as asserted. \end{proof} \begin{definition}\label{fwdTrajectories} If the initial value problem \begin{equation}\label{eq:diffEq} \begin{cases} z'(t)=-R(z)\\ z(0)=z_0 \end{cases} \end{equation} has a solution $z(t)$ for $t\in [0,s_0)$, then $\overline{z([0,s_0))}$ is called the \emph{forward trajectory} of $z_0$. \end{definition} \cref{prop:invSetClosedUnderForwardIntegralCurves} states that if $S\subset \mathbb{C}$ is a $T_{CH}$-invariant set then the forward trajectories of all $u\in S\setminus \mathcal{Z}(P)$ are contained in $S$. \smallskip The next important definition is that of a separatrix.
\begin{definition} A \defin{separatrix} is a solution $\sigma(t)$ of \eqref{eq:diffEq} whose maximal domain of definition $(t_{min},t_{max})$ is a proper subset of $\mathbb{R}$. If $t_{min}$ is finite, then $\sigma(t)$ is called a \defin{$P$-starting} or \defin{root-starting separatrix} and if $t_{max}$ is finite it is called a \defin{$P$-ending separatrix}. Furthermore, if $\lim_{t\to t_{min}}\sigma(t)=z_0\in \mathcal{Z}(P)$, it is called \defin{$z_0$-starting}, and if $\lim_{t\to t_{max}}\sigma(t)=z_0\in \mathcal{Z}(P)$, it is called \defin{$z_0$-ending}. \end{definition} In what follows, when we refer to $P$-starting separatrices without mention of the vector field, we will always mean the $P$-starting separatrices of $-R(z)\partial_z$. An introduction to rational vector fields and their separatrices can be found in \cref{sec:ratfields}. The next statement shows that the $P$-starting separatrices of $-R(z){\partial_z}$, when they exist, also lie in any $T_{CH}$-invariant set $S$. \begin{proposition}\label{pro:separatrixIsInInv} Let $S\subset \bC$ be a $T_{CH}$-invariant set for an operator $T$ given by \eqref{eq:1st} such that $\deg P(z)\ge 1$ and $Q(z)$ is not identically vanishing. Suppose that $z_0 \in \mathcal{Z}(P)$ and let $\sigma$ be a $z_0$-starting separatrix of $-R(z) {\partial_z}$ such that $\lim_{t\to 0}\sigma(t)=z_0$. Then $\sigma([0,s_0]) \subseteq S$ for all $s_0$ such that $\sigma(t)$ is finite for $0\le t\leq s_0$. \end{proposition} \begin{proof} Near $z_0$ the vector field $-R(z) {\partial_z}$ looks like $\frac{\alpha}{(z-z_0)^k} {\partial_z}$ for some $k\geq 1$ and $\alpha\in \bC$, where $k$ is the multiplicity of $z_0$ as a root of $P$. By Proposition \ref{prop:rootTrajectoryDirections} there is a $t$-trace $\gamma(u,t,z_0)$ with $u=z_0$ which is tangent to $\sigma$.
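For orientation, in the model case $w=z-z_0$ with $w'=\alpha w^{-k}$ and $w(0)=0$, the $z_0$-starting solutions can be written down explicitly: \[ \frac{d}{dt}\,w^{k+1}=(k+1)\,w^k w'=(k+1)\alpha \quad\Longrightarrow\quad w(t)=\big((k+1)\alpha t\big)^{\frac{1}{k+1}}, \] so there are exactly $k+1$ $z_0$-starting directions, spaced by the angle $2\pi/(k+1)$.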
In a sufficiently small neighborhood of $z_0$, we can assume that $\gamma$ is confined in a curvilinear cone (both sides of which are separatrices of $-R(z) {\partial_z}$) and that additionally $\gamma$ splits this cone into two domains, see Figure~\ref{fig:fig-separatrix-included}. \begin{figure}[!ht] \includegraphics[width=0.4\textwidth]{fig-separatrix-included} \caption{The separatrix $\sigma$ (solid blue). In this particular example, $k=2$. }\label{fig:fig-separatrix-included} \end{figure} Now for any interior point $w$ in the domain containing $\sigma$, there is a point $w'$ on $\gamma$ such that the integral curve of $-R(z) {\partial_z}$ originating at $w'$ passes through $w$. By Proposition~\ref{prop:invSetClosedUnderForwardIntegralCurves}, we obtain that $w \in S$. Since $S$ is closed and $w$ can be chosen arbitrarily close to $\sigma$, we obtain that $\sigma \subseteq S$. \end{proof} \begin{example}\label{ex:coch} Take $Q=z^2$, $P=z-1$. Then in polar coordinates the closure of the union of the $1$-starting separatrices is given by $r(\theta)=\frac{\sin \theta}{\theta}$. Let us denote by $S$ this union together with the region it encloses. By \cref{pro:separatrixIsInInv} the boundary of $S$ is contained in $\minvset{CH}$. For any point $z$ in the interior of $S$, $R(z)\neq 0$, so there exists $t$ such that $z+tR(z)\in \partial S$ and it follows that $z\in \minvset{CH}$. Now, using the same argument as in \cref{prop:raysGiveCHinv} one can show that no $t$-trace can reach the complement of $S$ through a point which is not a root of $P$. In order to deduce that $S$ is indeed equal to $\minvset{CH}$, we need to show that the $t$-traces defined by the solutions to \begin{equation}\label{eq:pgraph} tz^2+(z-1)^2=0 \end{equation} are in $S$ for any small (and hence for all) $t$. We do so by comparing the second order terms in the Taylor expansion of the solutions to \eqref{eq:pgraph} as $t\to 0$ as well as of the $1$-starting separatrices as $\theta\to 0^+$.
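In this example \eqref{eq:pgraph} can be solved in closed form, which makes the comparison a direct computation: rewriting $tz^2+(z-1)^2=0$ as $z-1=\pm i\sqrt{t}\,z$ gives \[ z=\frac{1}{1\mp i\sqrt{t}}=1\pm i\sqrt{t}-t\mp i\,t^{3/2}+O(t^2). \]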
Now, following \cref{prop:rootTrajectoryDirections} and introducing the change of variables $t\to t^2$, we obtain that the solutions to \eqref{eq:pgraph} are given by \[ 1+i t-t^2+O\left(t^3\right), \quad 1-i t-t^2+O\left(t^3\right) \] and the separatrices close to $\theta=0$ are given by \[ 1+i t-\frac{2 t^2}{3}+O\left(t^3\right), \quad 1-i t-\frac{2 t^2}{3}+O\left(t^3\right). \] We conclude that $S=\minvset{CH}$. \begin{figure}[!ht] \centering \includegraphics[width=0.27\textwidth]{cocheloid} \caption{(The boundary of) $\minvset{CH}$ for $T=z^2\frac{d}{dz}+(z-1)$.} \label{fig:cardiodid} \end{figure} \end{example} We finish this section with the following consequence of \cref{prop:invSetClosedUnderForwardIntegralCurves}. \begin{proposition}\label{prop:qRootSinks} Suppose that neither of the following holds: \begin{itemize} \item $P$ vanishes identically; \item $P$ is a non-zero constant and $Q$ has degree 1. \end{itemize} Then the simple roots of $Q$ belonging to the boundary of an invariant set $S$ are poles of the $1$-form $-\frac{dz}{R}$ with negative residues. \end{proposition} \begin{proof} Let $z^*$ be a simple zero of $Q$. First assume that $z^*$ is a center of the vector field $-R(z) {\partial_z}$. There is a $t$-trace going to $z^*$ that is contained in $S$. Since the forward trajectories of points on this $t$-trace near $z^*$ are closed loops and they are contained in $S$, we obtain $z^*\notin \partial S$. Next, suppose that $z^*$ is a source of $-R(z) {\partial_z}$ lying on the boundary of $S$. There are points close to $z^*$ included in $S$, so a simple geometric argument involving associated rays gives that there exists $z\notin S$ such that $z+tR(z)\in S$, contradicting \cref{th:charact}. Finally, suppose that $z^*$ is a sink of $-R(z) {\partial_z}$, but the $1$-form $-\frac{dz}{R}$ has a non-real residue at $z^*$. Then any integral curve approaching $z^*$ rotates about $z^*$ infinitely many times.
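Indeed, writing $-R(z)=\mu(z-z^*)+O\big((z-z^*)^2\big)$ with $\mu\neq 0$, the flow near $z^*$ is modelled by \[ z(t)\approx z^*+c\,e^{\mu t},\qquad c\in\bC\setminus\{0\}, \] and since $\Im \mu\neq 0$ in the case at hand, the argument of $z(t)-z^*$ is unbounded as $t\to\infty$.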
Now let $z_1\in S$ be a point close enough to $z^*$ and such that the forward trajectory of $z_1$ has limit $z^*$. By picking a neighborhood $U$ of $z^*$ small enough, we have that for all $z\in U$, the ray $z+tR(z)$ intersects the forward trajectory of $z_1$. Since the forward trajectory of $z_1$ is contained in $S$, \cref{th:charact} implies that $U\subset S$. \end{proof} \section{Criterion of (non)compactness of \texorpdfstring{$\minvset{CH}$}{MTCH}}\label{sec:comp} In this section, we provide criteria of compactness/non-compactness of $\minvset{CH}$ in $\bC$. Since $\minvset{CH}$ is closed (by definition), it is compact if and only if it is bounded. First, let us settle the case of degenerate operators---recall that $T$ given by \eqref{eq:1st} is called degenerate if $\deg Q\le \deg P$. \begin{proposition}\label{prop:degenerate} If $T$ given by \eqref{eq:1st} is degenerate and $Q(z)$ is not identically vanishing, then $\minvset{CH}$ is unbounded. \end{proposition} \begin{proof} Indeed, assume that $u\in \minvset{CH}$ and consider $T[(z-u)^t]$ for sufficiently large $t$. The roots of $T[(z-u)^t]$ different from $u$ satisfy equation \eqref{eq:factor}. For all but finitely many values of $t$, the degree of \eqref{eq:factor} in the variable $z$ equals $1+\deg P$ by degeneracy of $T$. On the other hand, as $t\to \infty$, all terms of equation \eqref{eq:factor} except for $Q(z)$ tend to $0$, and all its roots with respect to $z$ which do not tend to $\infty$ tend to the roots of $Q(z)$. Since $\deg Q<1+\deg P$, at least $1+ \deg P -\deg Q\ge 1$ roots of \eqref{eq:factor} tend to $\infty$ as $t\to +\infty$. Since all these roots must belong to $\minvset{CH}$, the conclusion follows. \end{proof} \begin{remark} \cref{prop:degenerate} implies that non-degeneracy of $T$ is a necessary condition for the compactness of $\minvset{CH}$. On the other hand, for a large class of non-degenerate operators, $\minvset{CH}$ is still unbounded.
To prove this, we first describe a close relation between our problem and classical complex dynamics. \end{remark} In this and later sections of the text, we use the following definition. \begin{definition} If $\deg Q=\deg P+1$ we let $\defin{\lambda}$ be the coefficient of $z$ in the Laurent expansion of $R(z)$ at $\infty$. That is, \begin{equation}\label{eq:expN} R(z){\partial_z}= \frac{Q(z)}{P(z)}{\partial_z}=\left(\lambda z + \text{higher order terms in } \frac{1}{z}\right){\partial_z}. \end{equation} \end{definition} \medskip As mentioned previously, introducing $z=1/w$ transforms the field into \begin{equation*} -w^2\,\frac{Q(1/w)}{P(1/w)}\,{\partial_w}=\left(-\lambda w + \text{higher order terms in } w\right){\partial_w} \end{equation*} near $w=0$, and so $R(z){\partial_z}$ has a zero of order 1 at $\infty$. If $\Re(\lambda)>0$, it is a sink, if $\Re(\lambda)=0$ a center, and if $\Re(\lambda)<0$ it is a source. \smallskip Let $\defin{\mathcal{J}(f_t)} \subseteq \bC$ denote the Julia set of $f_t$. The next proposition shows that $\minvset{CH}$ contains the Julia sets $\mathcal{J}(f_t)$ of the rational functions $z\mapsto f_t(z)$, where $t$ ranges over the non-negative reals and \[ \defin{f_t(z)} \coloneqq z + t R(z). \] \begin{proposition}\label{prop:juliaSetIsSubset} For any $t > 0$, $\mathcal{J}(f_t)$ is contained in $\minvset{CH}$. \end{proposition} \begin{proof} First, if $\deg f_t \leq 1$ then $\mathcal{J}(f_t)\subset\mathcal{Z}(Q)\subset \minvset{CH}$. Next, suppose that $t> 0$ is given such that $\deg f_t\geq 2$ and that $u \in \minvset{CH}$. Then all roots of \eqref{eq:main} also lie in $\minvset{CH}$. Indeed, given $u$, we look for $z$ solving \begin{equation} \label{eq:juliaSolutions} u=\frac{ tQ(z)+zP(z)}{P(z)} = f_t(z). \end{equation} Hence, $f^{-1}_t(u) \subseteq \minvset{CH}$. Iteration of this argument yields that $\cup_{j=0}^\infty f_t^{-j}(u)\subseteq \minvset{CH}$. Recall that since $f_t$ is of degree at least 2, by \cite[Thm.
4.2.7]{Beardon2000}, $\mathcal{J}(f_t) \subseteq \overline {\cup_{j=0}^\infty f_t^{-j}(u)}$ except possibly for two values $u\in \mathbb{C}$. Using the fact that $\minvset{CH}$ contains a curve to or from a zero of $PQ$, we obtain that $\minvset{CH}$ has at least three distinct points, which in turn yields that $\mathcal{J}(f_t) \subseteq \overline {\minvset{CH}} =\minvset{CH}$. \end{proof} \smallskip \cref{prop:juliaSetIsSubset} proves that \[ \bigcup_{t\geq 0 } \mathcal{J}(f_t(z)) \] is a subset of $\minvset{CH}$. We suspect that in general, e.g. in the situation of \cref{thm:relambda0}, the union of the above Julia sets can be strictly smaller than $\minvset{CH}$. Motivated by \cite{MR1043403} we formulate the following conjecture. \begin{conjecture} \label{conj:boundsep} If the boundary of $\minvset{CH}$ consists only of $P$-starting separatrices then $\minvset{CH}$ coincides with the above union of Julia sets. \end{conjecture} \smallskip The next statement gives a sufficient condition for unboundedness of $\minvset{CH}$ in case of non-degenerate $T$. (As we will see later, Proposition~\ref{pr:unbounded} is actually a criterion of unboundedness for such operators.) \begin{proposition}\label{pr:unbounded} For a non-degenerate $T$ given by \eqref{eq:1st}, the set $\minvset{CH}$ is unbounded if either \noindent {\rm (i)} $\deg Q > \deg P+1$; or \noindent {\rm (ii)} $\deg Q=\deg P+1\ge 2$ and $\Re \lambda <0$. \end{proposition} \begin{proof} In case (i), set $\Psi(z)\coloneqq t Q(z)+P(z)(z-u)$. Since $\deg Q >1+ \deg P$, for all positive $t$ we have $\deg Q > \deg \big(P(z)(z-u)\big)$ and $\deg \Psi(z)=\deg Q(z)$. For $t=0$, $\deg \Psi(z)=1+\deg P< \deg Q$. Thus, for any choice of the starting point $u$, if we consider the dependence of the zero locus of $\Psi(z)$ on $t$, at least one of its roots must tend to $\infty$, which implies the unboundedness of $\minvset{CH}$. In case (ii) we have $\deg Q\geq 2$; consider the Julia set of $f_t(z) \coloneqq z+t R(z)$.
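The behaviour of $f_t$ at $\infty$ can be read off in the coordinate $w=1/z$: \[ g_t(w)\coloneqq\frac{1}{f_t(1/w)}=\frac{w}{1+t\lambda}+O(w^2),\qquad g_t'(0)=\frac{1}{1+t\lambda}, \] so $\infty$ is a repelling fixed point of $f_t$ exactly when $|1+t\lambda|<1$; for $\Re\lambda<0$ this holds for all small $t>0$, since $|1+t\lambda|^2=1+2t\,\Re\lambda+t^2|\lambda|^2$.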
For any given $\lambda$ such that $\Re \lambda <0$ and for sufficiently small $t>0$ we have $0<|1+t\lambda|<1$ and hence $\infty$ is a repelling fixed point for $f_t$. Since the Julia set $\mathcal{J}(f_t)$ for each $t$ is the closure of the set of repelling periodic points, we get that $\infty \in \mathcal{J}(f_t)$ for small $t$. Further, \cref{prop:juliaSetIsSubset} claims that $\mathcal{J}(f_t) \subseteq \minvset{CH}$ for any $t>0$. Now, for generic $t$, $\deg f_t\geq 2$, so $ \mathcal{J}(f_t)$ is perfect and we thereby conclude that $\minvset{CH}$ is unbounded. \end{proof} \begin{remark} If in item (ii) above we remove the condition $\deg Q\geq 2$, the unboundedness fails: the minimal invariant set is just the unique root of $Q$. \end{remark} \begin{remark} Another way of concluding non-compactness in case (ii) is to note that $\infty$ is a sink of the vector field $-R(z) {\partial_z}$ and hence that there is a $P$-starting separatrix that goes to $\infty$. Since by \cref{pro:separatrixIsInInv}, all $P$-starting separatrices are contained in $\minvset{CH}$, the claim follows. \end{remark} We now concentrate on sufficient conditions for compactness of $\minvset{CH}$ (which by the previous results may only occur if $\deg Q=\deg P+1$). \begin{lemma}\label{lem:mincompact} Suppose that in the expansion \eqref{eq:expN}, $a\coloneqq\Re \lambda >0$. Then there exists a number $M \gg 0$ (independent of $t$) such that whenever $t>0$ and $|u|>M$, we get \begin{equation}\label{eq:bigRgivesExpansion} \frac{|f_t(u)|}{|u|} > \left(1+\frac{a}{2}t \right). \end{equation} \end{lemma} \begin{proof} Expanding $f_t(z)$ at $\infty$, we get \[ f_t(z) = (1+\lambda t)z + t\left(c_0 + c_1 z^{-1}+ c_2 z^{-2} + \dotsb\right). \] Now choose $M$ such that \[ \frac{a}{2} M > |c_0| + \frac{|c_1|}{M}+ \frac{|c_2|}{M^2}+ \dotsb. \] This is possible since $|c_j| \leq C \rho^j$ for some $C>0$, where $\rho$ is larger than the absolute values of the zeros of $P$: for $M>\rho$ the right-hand side is at most $C\sum_{j\ge 0}(\rho/M)^j=\frac{CM}{M-\rho}$, which stays bounded as $M\to\infty$, while the left-hand side grows linearly in $M$.
We now obtain \[ |f_t(z)| \geq |1+\lambda t|\cdot|z| - t \left|c_0 + c_1 z^{-1}+ c_2 z^{-2} + \dotsb\right|, \] and if $|u|>M$, we have \[ |f_t(u)| > (1+ a t)|u| - \frac{ a }{2} t M \geq \left(1+\frac{a}{2}t \right) |u|. \] \end{proof} \begin{corollary}\label{cor:posreallambda} If, in the above notation, $a=\Re \lambda >0$, i.e. the vector field $R(z)\partial_z$ has a sink at $\infty$, then $\minvset{CH}$ is compact. \end{corollary} \begin{proof} Suppose that $|u| \leq M$. By \eqref{eq:bigRgivesExpansion} any solution $z$ to \[ t Q(z) + P(z)(z - u) =0 \quad \iff \quad u = f_t(z) \] must satisfy the inequality $|z|\leq M$, since $|f_t(z)|>M$ whenever $|z|>M$. Hence, the disk of radius $M$ centered at the origin is $T_{CH}$-invariant. Consequently, $\minvset{CH}$ is compact. \end{proof} Finally, let us consider the limiting case. \begin{proposition} If, in the above notation, $a=\Re \lambda =0$, i.e. $R(z)\partial_z$ has a center at $\infty$, then $\minvset{CH}$ is compact. \end{proposition} \begin{proof} In \cref{compactre0} we found a compact $T_{CH}$-invariant set $\mathcal{D}(\Psi)$ for the case $\Re \lambda=0$. Hence, $\minvset{CH}$ is compact. \end{proof} \begin{remark} As we shall prove in \cref{thm:relambda0} below, the set $\mathcal{D}(\Psi)$ is in fact the minimal $T_{CH}$-invariant set. \end{remark} Summarizing, we have the following criterion of compactness/non-compactness of $\minvset{CH}$ in $\bC$. \begin{theorem}\label{th:compcrit} For any $T$ as in \eqref{eq:1st}, {\rm (i)} if $T$ is degenerate, $\minvset{CH}$ is unbounded in $\bC$; {\rm (ii)} if $T$ is non-degenerate, the set $\minvset{CH}$ is unbounded in $\bC$ if and only if either $\deg Q > 1+\deg P$ or if $\deg Q=1+\deg P\ge 2$ and $\Re \lambda <0$ in the expansion \eqref{eq:expN}; {\rm (iii)} $\minvset{CH}$ is compact in $\bC$ if and only if either $\deg Q =1+\deg P\ge 2$ and $\Re \lambda \ge 0$ in the expansion \eqref{eq:expN} or $\deg Q=1+\deg P=1$ and $\lambda \notin (-\infty,0)$.
\end{theorem} \begin{example} Let us return to the case of the ``lemniscate" $S_1$, see \cref{ex:lemniscate} and \cref{fig:lemniscate}, whose boundary is given by \[ r(\theta) = \pm\sqrt{ \frac{\sin{2\theta}}{2\theta} }. \] This boundary consists of the root-starting separatrices of $-\frac{z^3}{(z+1)(z-1)} {\partial_z}$. We may note that all the associated rays corresponding to $T = z^3{\frac{d}{dz}} + (z+1)(z-1)$ in $\partial S_1$ lie in the exterior of $S_1$ and hence no $t$-traces can move into the complement of $S_1$ through $\partial S_1\setminus \mathcal{Z}(P)$. Using \cref{prop:rootTrajectoryDirections} we observe that to prove that $S_1$ is $T_{CH}$-invariant (and hence equal to $\minvset{CH}$), it suffices to show that the solutions to the following two equations \[ tz^3+(z+1)(z-1)^2=0, \quad tz^3+(z+1)^2(z-1)=0 \] lie in $S_1$ for small (and hence all) $t\geq 0$. We focus only on the former equation; by symmetry the same fact will be valid for the solutions to the latter one. For $u=1$, one of the $t$-traces which starts at $1$ is given by \[ \gamma(t)=1+\frac{i t}{\sqrt{2}}-\frac{5 t^2}{8}+O\left(t^3\right). \] Now, after introducing a suitable change of variable, the Taylor expansion of $r(\theta)$ close to $z=1$ with $\theta>0$ is given by \[ 1+\frac{i t}{\sqrt{2}}-\frac{5 t^2}{12}+O\left(t^{5/2}\right). \] By comparing the second order terms, we see that $\gamma(t)\in S_1$ for all $t\geq 0$. In precisely the same vein, we study the other $t$-trace starting at $u=1$. By symmetry we can make the same conclusion for the $t$-traces starting at $u=-1$ and obtain that the ``lemniscate" $S_1$ is indeed $\minvset{CH}$ for the operator $T=z^3{\frac{d}{dz}} + (z+1)(z-1)$. \end{example} \section{Sufficient conditions for triviality of $\minvset{CH}$}\label{sec:triviality} In the previous section we provided necessary and sufficient conditions for (non)-compactness of $\minvset{CH}$.
Here we will partially, but substantially, strengthen this result by providing a simple sufficient condition on $T$ guaranteeing that $\minvset{CH}=\bC$, which implies that $\bC$ is the only invariant set. We say that a $T_{CH}$-invariant set is \defin{trivial} if it is equal to $\bC$ and \defin{non-trivial} otherwise. Later in \cref{cor:trivialfin} we formulate necessary and sufficient conditions for the triviality of $\minvset{CH}$ extending the following theorem. \begin{theorem}\label{thm:degQminusdegP} An operator $T=Q{\frac{d}{dz}}+P$ can have non-trivial $T_{CH}$-invariant sets if and only if $-1\leq \deg Q -\deg P \leq 1$. \end{theorem} \begin{proof} The ``if'' statement follows by noting that the operators \begin{enumerate} \item $T=-\frac{d}{dz}+z$, \item $T=(z-1)\frac{d}{dz}+z$, \item $T=(z-1)(z+1)\frac{d}{dz}+z$ \end{enumerate} all have non-trivial minimal invariant sets, i.e., different from $\bC$, cf. \cref{thm:classifying1d} below. The ``only if'' statement follows from the next proposition. \end{proof} \begin{proposition}\label{prop:triviality} If $|\deg Q-\deg P|\ge 2$ then $\minvset{CH}=\mathbb{C}$. \end{proposition} \begin{proof} Suppose first that $\deg Q-\deg P=K\geq 2$. After an affine change of coordinates, we may assume that at $\infty$, \[ R(z){\partial_z}= \left(-z^K + \text{ higher order terms in }\frac{1}{z}\right){\partial_z}. \] Note here that if $z=1/w$, then the field becomes \begin{equation*} -w^2\,\frac{Q(1/w)}{P(1/w)}\,{\partial_w}=\left(w^{2-K} + \text{higher order terms in } w\right){\partial_w} \end{equation*} near $w=0$. In particular, if $K>2$, then $R(z){\partial_z}$ has a pole of order $K-2$ at $\infty$; if $K<2$, then $R(z){\partial_z}$ has a zero of order $2-K$ at $\infty$; and if $K=2$, then $\infty$ is a regular point. By \cref{le:rootsStartingInInfty}, there exist $K-1$ curves contained in $\minvset{CH}$ which have asymptotic directions given by the $(K-1)$-st roots of unity.
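Heuristically, these asymptotic directions can be read off from the leading term of the field: along $z=re^{i\theta}$ with $r$ large we have $-R(z)\approx z^{K}$, and a trajectory of $-R(z){\partial_z}$ can escape to $\infty$ in the direction $e^{i\theta}$ only if the field points radially outward there, i.e. \[ K\theta\equiv\theta \pmod{2\pi}\quad\Longleftrightarrow\quad \theta=\frac{2\pi j}{K-1},\quad j=1,\dotsc,K-1. \]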
Let $C_1$ be the curve going in the positive horizontal asymptotic direction. Introducing the standard polar coordinates $z=re^{i\theta}$, let \[ \Theta_{K-1} \coloneqq \left\{ \frac{2 \pi j}{K-1} : j \in \{1,2,\dotsc,K-1\} \right\} \] denote the arguments of the $(K-1)$-st roots of unity. Similarly, set \[ \Theta_{2K-2} \coloneqq \left\{ \frac{\pi j}{K-1} : j \in \{1,2,\dotsc,2K-2\} \right\}. \] Suppose that $\theta\in (0,\frac{\pi}{K})$; the case $\theta\in (-\frac{\pi}{K},0)$ is treated in a similar manner. Since $R(z)\approx -z^K$, there exists $M(\theta)$ such that for $|z|>M(\theta)$, one has \[ -\pi+\frac{2K-1}{2}\theta < \arg R(z) < 0. \] It follows that the associated ray $\{z+t R(z):t\geq 0\}$ of $z$ intersects $C_1$ and therefore $z$ is contained in $\minvset{CH}$ by \cref{th:charact}, see the illustration in \cref{fig:raysDegDiff}. \begin{figure}[!ht] \includegraphics[width=0.4\textwidth]{rays-deg-diff} \caption{ Taking $z$ on the ray (dashed, blue) with direction $\theta$, with $|z|$ sufficiently large, yields an associated ray (dashed, red) with direction $\arg R(z)$ with the property that this ray intersects $C_1$ (black). }\label{fig:raysDegDiff} \end{figure} The next step is as follows. Take \[ \theta\in\left (\frac{\pi}{K},\frac{\pi}{K}+\frac{\pi}{K^2}\right)=I_{1}. \] Then there exists $M(\theta)$ such that if $\theta\in I_{1}$ and $|z|>M(\theta)$ then $\arg R(z)\in (0,\frac{\pi}{K})$. Furthermore, if $\theta=\frac{\pi}{K}$ then for any $\delta>0$, there exists $M$ such that if $|z|>M$ then $\arg R(z) \in (-\delta,\delta)$. In any case, we can conclude that if $|z|>M(\theta)$ there exists $t_0$ such that $z+t_0 R(z)\in \minvset{CH} $ which implies that $z\in \minvset{CH}$. Let us now use the following iteration process. Pick \[ \theta\in \left(\frac{\pi}{K}+\frac{\pi}{K^2},\frac{\pi}{K}+\frac{\pi}{K^2}+\frac{\pi}{K^3}\right)=I_{2}. \] Then there exists $M(\theta)$ such that if $\theta\in I_{2}$ and $|z|>M(\theta)$ then $\arg R(z)\in I_{1}$.
Furthermore, if \[ \theta=\frac{\pi}{K}+\frac{\pi}{K^2} \] then for any $\delta>0$, there exists $M>0$ such that if $|z|>M$ then $\arg R(z)\in (\frac{\pi}{K}-\delta,\frac{\pi}{K}+\delta)$. In any case, from what we previously found we get that, if $|z|>M(\theta)$ then there exists $t_0$ such that $z+t_0 R(z)\in \minvset{CH} $ which implies that $z\in \minvset{CH}$. \smallskip Iterating this process $N$ times (the same method works for $\theta<0$) and using the fact that the new directions have points contained in $\minvset{CH}$, we finally get that there exists $M(\theta)$ such that if \[ \theta\in \left(-\sum_{k=1}^N\frac{\pi}{K^k},\sum_{k=1}^N\frac{\pi}{K^k}\right), \] $\theta\neq 0, |z| > M(\theta)$ then $z\in \minvset{CH}$. In the limit we obtain that if $0<|\theta| \leq \pi/(K-1) $ and $|z| > M(\theta)$, then $z\in \minvset{CH}$. Carrying out the same argument for all other values in $\Theta_{K-1}$ (while replacing $C_1$ by another corresponding curve) we conclude that if $\theta\notin \Theta_{2K-2}$ and $r$ is sufficiently large, then $z\in \minvset{CH}$. \medskip Now we recall that all points in $\mathcal{Z}(PQ)$ are contained in $\minvset{CH}$, which allows us to assume that $z\notin \mathcal{Z}(PQ)$. Next, given a point $z$ such that $\arg R(z) \notin \Theta_{2K-2}$, observe that by the above discussion the ray $\{z+tR(z):t\geq 0\}$ meets $\minvset{CH}$, which implies that $z$ is contained in $\minvset{CH}$. It remains to consider only the case when $\arg R(z)\in \Theta_{2K-2}$. However, this set of $z \in \mathbb{C}$ has empty interior since $R(z)$ is non-constant and analytic outside the zeros of $P$. Therefore, taking the closure of the set of all $z$ such that $\arg R(z)\notin \Theta_{2K-2}$, which belongs to $\minvset{CH}$ by the previous arguments, we obtain $\minvset{CH} = \mathbb{C}$. \bigskip So far we have treated the case $K\geq 2$. The arguments for the case $K\leq -2$ are very similar; we include them below for the sake of completeness.
However, these arguments will not be carried out with the same amount of detail as in the previous case. Suppose that $\deg Q-\deg P=-K$ where $K\geq 2$. Then the $P$-starting separatrices of $-R(z) {\partial_z}$ are contained in $\minvset{CH}$ and (after an affine change of $z$) the ones that go to $\infty$ can be assumed to have the asymptotic directions $\frac{\pi(2l+1)}{K+1}$, $l=0,\dotsc,K$. Furthermore, their existence is guaranteed by the cyclic reversals of the separatrix graph at $\infty$; notice that $\infty$ is a singular point of the field $R(z) {\partial_z}$ of order $K+2$, see \cite{DiasGarijo2020x}. We now aim to prove that if $z=re^{i\theta}$, $\theta \notin \Theta_{2K+2}$, then $z\in\minvset{CH}$ if $|z|$ is sufficiently large (the magnitude depends on $\theta$). Pick \[ \theta\in \left(\frac{\pi}{K(K+1)},\frac{\pi}{K+1}\right) \] and note that we can find $M(\theta)$ such that if $|z| > M(\theta)$, then the associated ray of $z$ intersects the separatrix going in the direction of $-\frac{\pi}{K+1}$, which implies that $z$ is contained in $\minvset{CH}$, see \cref{fig:sectorsProof}. In the same vein, if \[ \theta\in \left(-\frac{\pi}{K+1},-\frac{\pi}{K(K+1)}\right), \] then $z\in \minvset{CH}$ provided $|z| > M(\theta)$. \begin{figure}[!ht] \includegraphics[width=0.4\textwidth]{sectors-proof} \caption{Case $K=4$. The dashed sectors are obtained in the first iteration of the argument. The black lines are $P$-starting separatrices of $-R(z) {\partial_z}$. }\label{fig:sectorsProof} \end{figure} Iterating this argument yields that if $0<|\theta|<\pi/(K+1)$ and $|z| > M(\theta)$, then $z\in \minvset{CH}$. Carrying out the same procedure for the other values of $\theta\in \Theta_{K+1}$, we may conclude that if $\theta\notin \Theta_{2K+2}$ and $|z|$ is sufficiently large, then $z\in \minvset{CH}$. As in the above argument, this fact implies that $\minvset{CH}=\bC$.
\end{proof} \section{Irregular $T_{CH}$-invariant sets} \label{sec:irreg} In this section we concentrate on the properties of irregular $T_{CH}$-invariant sets (recall \cref{def:regularSets}) and describe for which operators $T$ they occur. The next statement provides an important necessary condition for the existence of irregular $T_{CH}$-invariant sets. \begin{proposition}\label{prop:onlyRealLine1dimPts} Suppose that $z_0 \notin \mathcal{Z}(PQ)$ is an irregular point of a $T_{CH}$-invariant set $S$. Then $z_0$ lies on a line segment belonging to $S$. If (after an appropriate affine change of $z$) the line spanned by the latter segment coincides with the real axis in $\bC$, then the vector field $R(z){\partial_z}=\frac{Q(z)}{P(z)}{\partial_z}$ is such that $R(z)$ is real on the real axis. \end{proposition} \begin{proof} We consider a $t$-trace $\gamma(t)$ that solves \[ tQ(z)+(z-z_0)P(z)=0, \] i.e. $\gamma(0)=z_0$. We have that $\gamma'(t)$ is given by \eqref{eq:field}, which is non-singular for small $t$. Furthermore, the forward trajectories of $\gamma(t)$ are contained in $S$. So by irregularity of $z_0$, $\gamma'(t)=\frac{-R(\gamma(t))}{tR'(\gamma(t))+1}=-k(t)R(\gamma(t))$ for some $k(t)\in \mathbb{R}\setminus\{0\}$ and sufficiently small $t$. Since $\gamma(0)=\gamma(t)+tR(\gamma(t))$, it follows that $S$ is locally a line segment $L$ containing $z_0$. After an affine change of variables, we can assume that $L\subset \mathbb{R}$. Since $\gamma(t)\in L$ for small $t$ and $\gamma'(t)=-k(t)R(\gamma(t))$, it follows that $R(z)$ is real on $L$. Thus $R(z)$ is a real rational function. \end{proof} \begin{remark} From Proposition~\ref{prop:onlyRealLine1dimPts} and the results of \S~\ref{sec:triviality} we can conclude that if $T=Q\frac{d}{dz}+P$ admits an irregular $T_{CH}$-invariant set then $T$ must be real (up to a scalar factor and an affine variable change) and $|\deg Q -\deg P|\leq 1$, which substantially restricts the area of study.
\end{remark} \subsection{Characterizing fully irregular $T_{CH}$-invariant sets} Recall that a set $S\subset \bC$ is called fully irregular if the set $S^\circ$ of its interior points is empty. Below we show that if a $T_{CH}$-invariant set is fully irregular, then under mild assumptions on $T$, it can only be either a straight line, a half-line, or a line segment. \begin{proposition}\label{prop:1d} If $T$ given by \eqref{eq:1st}, with $P$ and $Q$ non-constant, has a fully irregular $T_{CH}$-invariant set $S$, then $S$ is contained in a straight line in $\bC$. \end{proposition} \begin{proof} Recall that \cref{prop:onlyRealLine1dimPts} implies that, outside of the zeros of $PQ$, $S$ locally is a segment of a straight line. \smallskip Our next goal is to show that if $S$ is fully irregular, then $P$ must have only simple roots. First, recall that the only integral curves of $-R(z) {\partial_z}$ going to or from a root $z_*\in \mathcal{Z}(P)$ are its separatrices. Now, suppose that $z_*$ is a root of $P$ of multiplicity $n\geq 2$. Then there exist $(n+1)$ $z_*$-starting separatrices contained in $S$ with the angle $2\pi/(n+1)$ between any two consecutive ones. By \cref{prop:rootTrajectoryDirections} we know that the $t$-traces originating at $z_*$ satisfy the equation \[ tQ(z)+(z-z_*)P(z)=0 \] and start in the directions of these separatrices. Let $\gamma(t)$ be the $t$-trace that starts in some direction $v_0$. By \eqref{eq:begin} there is some $t_0>0$ such that for all $t\in (0,t_0]$, \begin{equation}\label{eq:zStarDirection} \gamma(t)+tR(\gamma(t))=z_*. \end{equation} Since $\gamma(t)$ is a part of a $z_*$-starting separatrix of $-R(z) {\partial_z}$, we have that $\gamma'(t)=-k(t)R(\gamma(t))$ with $k(t)>0$ for $t\in (0,t_0)$. Then $\gamma([0,t_0])$ is a line segment, and by irregularity, all $z_*$-ending separatrices contained in $S$ and all $z_*$-starting separatrices are locally line segments in some open neighborhood $U$ of $z_*$.
Moreover, we can choose $U$ sufficiently small so that no other points of $S$ lie in $U$, see \cref{fig:1dimSimpleRoots}. \begin{figure}[!ht] \includegraphics[width=0.4\textwidth,page=3]{tikzFigures} \caption{The neighborhood $U$ and the straight line segments which are $z_*$-starting separatrices. Note that a priori, there may also be $z_*$-ending separatrices in $S$ and $U$. In this example, $n=2$.}\label{fig:1dimSimpleRoots} \end{figure} Now take $z_0 = \gamma(t_0/2)$, which is a point in $U$, and let $V$ be an open disk in $U$ centered at $z_0$ and not containing $z_*$ nor $\gamma(t_0)$. Since $n\geq 2$, we can find a point $w$ near $z_*$ which lies on a $z_*$-starting separatrix with an initial direction $v_1 \neq \pm v_0$ and such that a root of \[ (t_0/2)Q(z) + (z-w)P(z) =0 \] belongs to $V$. If we pick $w$ sufficiently close to $z_*$, this root is equal to some $\gamma(t_1)$ with $t_1 \in (0,t_0)$. But this is a contradiction, since one has \[ \gamma(t_1) + (t_0/2) R( \gamma(t_1) ) = w, \] which is incompatible with \eqref{eq:zStarDirection}. Hence, all roots of $P$ are simple. \medskip Suppose now that there exists $w \in S$ such that $\arg(w-z_*)$ is not the direction of some separatrix at $z_*$, where $z_* \in \mathcal{Z}(P)$. Then there is a $t$-trace originating at $z_*$ which is not an integral trajectory by \cref{prop:rootTrajectoryDirections}. Hence, near $z_*$, the set $S$ is not fully irregular, a contradiction. (Notice that, in particular, this argument covers all cases when $\deg P>1$.) \medskip The final case to consider is when $P$ has a single root $z_*$. We must show that there is no $w \in S$ such that $w-z_*$ has the same direction as some $z_*$-ending separatrix. Suppose that this is the case. Then the $t$-trace corresponding to $w$ and originating at $z_*$ starts off in the direction of a $z_*$-ending separatrix; let us denote this separatrix by $\sigma$. Now pick $u \in \sigma$ and let the $t$-trace $\gamma$ start in $u$.
Note that $\gamma$ starts in the direction of $z_*$. However, in order for $S$ to be fully irregular the curve $\gamma$ must also be an integral curve of $R(z) {\partial_z}$. Thus $\gamma$ must follow $\sigma$. Now, $\sigma$ ends in $z_*$ but the $t$-trace $\gamma$ cannot pass through a root of $P$. We conclude that $w$ is not in $S$. \end{proof} Let us now describe which operators $T$ admit fully irregular $T_{CH}$-invariant sets. We will analyze the case when $P$ and $Q$ are as in \cref{prop:1d} and use the fact that $|\deg Q-\deg P|\leq 1$. As the roots of $Q$ and $P$ are necessarily contained in any $T_{CH}$-invariant set $S$, it follows that the roots of $P$ and $Q$ must lie on a line. By an affine change of variables we may assume that the polynomials $P$ and $Q$ are real. We have already seen that $P$ must have only simple roots. \begin{lemma}\label{le:temporary} Let $T$ be as in \cref{prop:1d} and $S$ be $T_{CH}$-invariant. Suppose there is an interval $I \subset\mathbb{R}$ contained in $S$ and not containing any zero of $PQ$. Finally, let $\gamma_0(t), \gamma_1(t)$ be $t$-traces which solve \eqref{eq:main} for some $u\in S$. Moreover, suppose there is a $t_0$ such that $\gamma_0(t_0), \gamma_1(t_0)\in I, \gamma_0(t_0)<\gamma_1(t_0), \gamma'_0(t_0)>0, \gamma'_1(t_0)<0$. Then $S$ is not fully irregular. \end{lemma} \begin{proof} By \cref{prop:1d}, it suffices to prove that there is $t$ such that $\gamma_0(t)\notin \mathbb{R}$. We can assume that $\gamma_0(t_0+\epsilon) $ and $\gamma_1(t_0+\epsilon)$ belong to $I$ for small $\epsilon,$ since otherwise there is nothing to prove. Now, by analyticity of the $t$-traces away from multiple roots, there are (potentially other) $\gamma_0$ and $\gamma_1$ for which there exists some time moment $t_1$ where $\gamma_0(t_1)=\gamma_1(t_1) \in I$. 
Since \begin{equation*} \begin{cases} &t_0 Q(z)+(z-u)P(z)=0, \\ &t_1 Q(z)+(z-u)P(z)=0 \end{cases} \implies (t_0-t_1) Q(z)=0, \end{equation*} the only common roots of $t_0 Q(z)+(z-u)P(z)=0$ and $t_1 Q(z)+(z-u)P(z)=0$ are roots of $Q$. In particular, for all sufficiently small $\delta>0$, one of $\gamma_0(t_1+\delta)$ and $\gamma_0(t_1-\delta)$ does not belong to $\mathbb{R}$, and we are done. \end{proof} We say that two roots $z_0,z_1 \in \mathcal{Z}(PQ)$ are \defin{adjacent} (along the $x$-axis) if there is no root $z_2\in \mathcal{Z}(PQ)$ such that $z_0<z_2<z_1$ or $z_1<z_2<z_0$. It is now clear that if $T$ admits a fully irregular set $S$ then no two roots of $P$ can be adjacent. Indeed, if there were adjacent roots $z_1$, $z_2$ of $P$, then, since $S\subset \setR$ and the $P$-starting separatrices are contained in $S$, some of the $P$-starting separatrices of $-R{\partial_z}$ would intersect, which is impossible. The next lemma shows that the roots of $P$ and $Q$ must \defin{interlace}. In other words, all roots of $P$ and $Q$ are simple, and roots of $P$ can only be adjacent to roots of $Q$ and vice versa. \begin{lemma}\label{le:onlyInterlacing} If $S\subset\mathbb{R}$ is $T_{CH}$-invariant and $P$ is non-constant, then {\rm(i)} $Q$ has no roots of multiplicity greater than 1; {\rm (ii)} $Q$ has no adjacent roots. \end{lemma} \begin{proof} We prove both claims by contradiction, handling them in parallel. We note that having a root $z^*$ of multiplicity $m\geq 3$ is impossible. Indeed, there are only two paths (in $S$) to $z^*$, and by \cref{prop:rootTrajectoryDirections} and \cref{le:rootsStartingInInfty}, it is straightforward to realize that the existence of three or more $t$-traces ending in $z^*$ is impossible if $u \neq z^*$. A similar argument shows that we cannot have two adjacent roots of $Q$ where at least one of them is non-simple. Next, suppose that there are two adjacent simple roots $z^*_1, z^*_2$ of $Q$.
It follows that one of them is a source of $-R(z) {\partial_z}$, and this violates \cref{prop:qRootSinks}. It remains to prove that $Q$ has no root of multiplicity $2$. Suppose there is such a root, call it $z^*$. Since it has even multiplicity and the $P$-starting separatrices are contained in $S$, there can be at most one adjacent root $z_*$ of $P$. Without loss of generality, suppose that $z_*<z^*$. Picking $u\in (z_*,z^*)$ yields that the interval $I\coloneqq \{z>z^*\}$ is contained in $S$. Further, by \cref{prop:rootTrajectoryDirections}, \cref{le:rootsStartingInInfty}, and \cref{le:temporary}, picking $u\in I$ yields that $S$ is not fully irregular since $-R(z)>0$ in $I$. \end{proof} \begin{lemma}\label{le:singularities} Let $\gamma_0(u,t)$ be a $t$-trace, i.e. a solution of \eqref{eq:main}. Then $V(\gamma_0(u,t),t)$ has a singularity at $t_0$ only if there is another $t$-trace $\gamma_1(u,t)$ such that $\gamma_0(u,t_0)=\gamma_1(u,t_0)$. \end{lemma} \begin{proof} Follows from the fact that simple roots of a polynomial are analytic in the coefficients. \end{proof} \begin{theorem}\label{thm:classifying1d} Let $P$ and $Q$ have only real roots. Then the following facts hold: {\rm (i)} If $\deg Q>0$, $P = \beta \neq 0$, then there exist fully irregular $T_{CH}$-invariant sets if and only if $Q=\alpha(z-z_0)$ where $\alpha/\beta$ is real and positive. {\rm (ii)} If $\deg P>0$ and $Q$ is non-zero, then there exist fully irregular $T_{CH}$-invariant sets if and only if the roots of $P$ and $Q$ interlace, there are no zeros of multiplicity greater than 1, and for any root $z_*$ of $P$, we have \[ -\frac{Q(z_*)}{G(z_*)}>0 \text { where } P(z)=(z-z_*)G(z). \] Furthermore, these invariant sets are either intervals, half-lines or lines. \end{theorem} \begin{proof} We begin with item (i). If there are fully irregular $T_{CH}$-invariant sets, by \cref{le:onlyInterlacing} $Q$ can only have a single root and it must be of multiplicity 1.
After an affine change of coordinates, we may assume \[ T=z{\frac{d}{dz}} + \beta, \qquad \beta \neq 0. \] The case $\beta<0$ was handled in \cref{ssec:special}. So suppose that $\beta\in \bC\setminus(-\infty,0]$. We find that the equation $T[(z-u)^t]=0$ has solutions, other than $u$, of the form \[ z=\frac{\beta u}{t+\beta}. \] Hence, a closed set $S$ is $T_{CH}$-invariant if and only if the assumption $u\in S$ implies $\frac{\beta u}{t+\beta}\in S$ for all $t\geq 0$. If $\beta>0$ then $S$ is $T_{CH}$-invariant if and only if for each $u\in S$ it contains the line-segment between $u$ and $0$. Now assume that $\beta \notin \mathbb{R}$. We still have that the forward trajectories of $-R(z) {\partial_z}$ are contained in $S$, and $-R(z)=-\frac{z}{\beta}$. Therefore away from 0, a fully irregular $S$ must locally consist of integral trajectories of $-R(z) {\partial_z}$. Take a non-zero $u\in S$. Then the curve $\gamma(t)=\frac{\beta u}{t+\beta}$ belongs to $S$. It connects $u$ to $0$, and for $t>0$, the derivative $\gamma^\prime(t)$ is not real proportional to $-R(\gamma(t))$. Therefore there are no fully irregular $T_{CH}$-invariant sets in this situation. Let us turn to item (ii). The assumption $-\frac{Q(z_*)}{G(z_*)}>0$ implies that the $P$-starting separatrices start with the arguments 0 and $\pi$, which is necessary for full irregularity. The ``only if'' statement follows from \cref{le:onlyInterlacing} above. We need to prove the ``if'' statement. There are three cases to consider: \begin{enumerate} \item \label{it:qpq} the smallest and the largest roots of $PQ$ belong to $\mathcal{Z}(Q)$; \item \label{it:pq} one of the smallest and the largest roots belongs to $\mathcal{Z}(Q)$ and the other to $\mathcal{Z}(P)$; \item \label{it:pqp} the smallest and the largest roots belong to $\mathcal{Z}(P)$. \end{enumerate} We claim that in case (\ref{it:qpq}) the minimal invariant set $\minvset{CH}$ is the interval between the smallest and the largest roots of $Q$.
Let us denote the latter interval by $I=[a,b]$, where $a$ and $b$ are the smallest and the largest roots of $Q$, respectively. Indeed, the closures of the $P$-starting separatrices belong to $I$. Furthermore, the condition $\deg Q =\deg P+1$ yields that all $t$-traces which do not start at $u$ should start in the roots of $P$. By \cref{prop:rootTrajectoryDirections}, if $u\notin \mathcal{Z}(P)$ we find that these $t$-traces must have the initial direction pointing away from $u$. If $u\in \mathcal{Z}(P)$ we similarly find that the $t$-traces that do not start in $u$ have the initial direction pointing away from $u$ and the two $t$-traces that start at $u$ have the directions of the $u$-starting separatrices. In particular, the $t$-traces belong to the intervals of the form \[ [z_*,z^*], \; [z^*,z_*], \quad z_*\in \mathcal{Z}(P), \; z^*\in \mathcal{Z}(Q), \] having disjoint interiors. By \cref{le:singularities}, for $t>0$, the vector field $V(\gamma(t),t)$ in \eqref{eq:field} has no singularities for a $t$-trace $\gamma(t)$. Therefore by analyticity, the $t$-traces remain inside these intervals for all $t\geq 0$. Hence, $I$ is $T_{CH}$-invariant. Moreover, the closure of the $P$-starting separatrices of $-R(z) {\partial_z}$ is the convex hull of the roots of $Q$. Thus the interval $I=[a,b]$ indeed coincides with the minimal invariant set $\minvset{CH}$. Moreover, any $T_{CH}$-invariant set containing a point $a_1\leq a$ contains $[a_1,b]$ and any set containing a point $b_1\geq b$ contains $[a,b_1]$, and these sets are $T_{CH}$-invariant. Furthermore, any interval of the form $[a_1,\infty), (-\infty,b_1], (-\infty,\infty)$ is $T_{CH}$-invariant as well. In case (\ref{it:pq}), we suppose without loss of generality that the smallest root is that of $P$ and the largest one is that of $Q$. Let $z^*_0$ be the largest root of $Q$. Then $\minvset{CH}$ coincides with $(-\infty, z^*_0]$. Since $\deg(Q)<\deg(P)+1$, the $t$-traces are bounded for finite $t$.
Hence, by the same argument as above, all $t$-traces belong to the intervals whose endpoints are in $\mathcal{Z}(PQ) \cup \{ -\infty\}$ and the closure of the union of the said intervals is the minimal invariant set $\minvset{CH}$. For any $b>z^*_0,$ we again find that $(-\infty, b]$ is $T_{CH}$-invariant and so is $(-\infty, \infty)$. Lastly, in case (\ref{it:pqp}), the minimal invariant set is the real line, which is settled in precisely the same way as in case (\ref{it:pq}). Finally, by \cref{prop:1d}, there are no other fully irregular $T_{CH}$-invariant sets in (\ref{it:qpq})--(\ref{it:pqp}). \end{proof} \begin{corollary}\label{cor:irregularInvSets} Suppose $P$ and $Q$ are as in \cref{thm:classifying1d} item (ii) above. Consider the cases: {\rm (i)} the smallest and largest roots in $\mathcal{Z}(PQ)$ are roots of $Q$; {\rm (ii)} one of the smallest and largest roots is a root of $Q$ and the other of $P$; {\rm (iii)} the smallest and largest roots are roots of $P$. The fully irregular $T_{CH}$-invariant sets are of the form: In case {\rm (i)}, any (potentially infinite) line-segment containing the roots of $Q$ is $T_{CH}$-invariant. In case {\rm (ii)}, if the largest root is that of $P$, then any line-segment of the form $[a,\infty)$ containing the roots of $Q$ is $T_{CH}$-invariant, and otherwise, any line-segment of the form $(-\infty,b]$ containing the roots of $Q$ is $T_{CH}$-invariant. In case {\rm (iii)}, $\mathbb{R}$ is the only fully irregular $T_{CH}$-invariant set. \end{corollary} \subsection{Characterization of irregular $T_{CH}$-invariant sets of mixed type} \begin{definition} An irregular $T_{CH}$-set is said to be of \defin{mixed type} if it has non-empty interior, i.e. it contains both regular and irregular points. \end{definition} Below we present necessary and sufficient conditions for the existence of irregular $T_{CH}$-invariant sets of mixed type. Let us first show that minimal invariant sets $\minvset{CH}$ cannot be of mixed type.
\begin{proposition}\label{le:twodimensional} For a given operator $T$ as in \eqref{eq:1st} such that $P$ and $Q$ have distinct zeros, if $\minvset{CH}$ has interior points, then it is regular. \end{proposition} \begin{proof} Suppose that $z_0\in \minvset{CH}$ is an irregular point. By \cref{prop:onlyRealLine1dimPts}, we may assume that $z_0\in \mathbb{R}$ and that $z_0$ lies on a real line segment $C\subset \minvset{CH}$ consisting of irregular points. Let us additionally assume that $z_0$ is not an endpoint of $C$. Our goal is to show that $\overline{\minvset{CH}^\circ}$ is also $T_{CH}$-invariant. Pick an open neighborhood $U$ of $z_0$ such that $U\cap \minvset{CH}\subset C$ and such that it does not contain an endpoint of $C$. Furthermore, let $u_0\in \minvset{CH}$ be such that there is $t_0\in [0,\infty)$ and $z_1 \in U\cap \minvset{CH}$ a solution to \[t_0Q(z)+(z-u_0)P(z)=0.\] Let any such pair $(u_0,z_1)$ be given and let $\gamma(u_0,t)$ be the $t$-trace such that $\gamma(u_0,t_0)=z_1$. Since $z_1$ is not an endpoint of $C$, $V(z_1,t_0)$ is non-zero. Therefore, at $u=u_0$ the derivative $\frac{d\gamma(u,t_0)}{du}$ exists and is non-zero. Hence, since $C$ is fully irregular, $u_0$ must be an irregular point of $\minvset{CH}$. Thus, by removing all irregular points from $\minvset{CH}$, we obtain a strictly smaller invariant set. Contradiction. \end{proof} \begin{corollary} Suppose that $z_0$ is an irregular point of a $T_{CH}$-invariant set $S$. Then the integral curve of $-R(z) {\partial_z}$ passing through $z_0$ cannot have a zero of $P$ as its endpoint. Furthermore, the zeros of $P$ and $Q$ are regular points. \end{corollary} \begin{proof} The fact that the integral curves cannot end in roots of $P$ can be proven in the exact same way as the fact that the $z_*$-starting separatrices cannot be part of a $1$-dimensional invariant set, see \cref{prop:1d}. Regularity of the zeros of $Q$ and $P$ follows from \cref{le:twodimensional}.
\end{proof} \begin{remark} If a $T_{CH}$-invariant set $S$ is different from $\minvset{CH}$, then there might exist irregular points belonging to $S$ despite the fact that $S$ has a non-empty interior. For instance, this can happen in the case of the cocheloid, see \cref{ex:coch}. Indeed, we can add to the minimal invariant set $\minvset{CH}$ any interval $I=[-k,0)$ for $k>0,$ and obtain $S=\minvset{CH}\cup I$ which is also $T_{CH}$-invariant. \end{remark} We have the following sufficient condition for the existence of irregular invariant sets of mixed type in the case $K=\deg Q-\deg P=1$. \begin{proposition}\label{prop:irregularForKEqualToOne} Take an operator $T=Q{\frac{d}{dz}}+P$ such that $\deg Q-\deg P=1$ and such that $R(z)=\frac{Q(z)}{P(z)}$ is a real rational function. Then if $\lambda=a_{-1}>0$, there exist irregular invariant sets of mixed type. \end{proposition} \begin{proof} We have that $R(z)=\sum_{k=-1}^\infty a_{k} z^{-k}$ where $a_k\in \mathbb{R}$, so that $R'(z)=a_{-1}-\sum_{k=1}^\infty k a_{k} z^{-k-1}$. Pick $M$ big enough so that \begin{enumerate} \item $\{z:|z|\leq M\}$ is $T_{CH}$-invariant and \item if $|z|>M$ then $|\sum_{k=1}^\infty k a_{k} z^{-k-1}|<a_{-1}/2$. \end{enumerate} If $|z|>M$, then $\Re R'(z) >|\Im R'(z)|$ and the associated ray at $z$, $\{z+tR(z):t\geq 0\}$, does not intersect $\{z:|z|\leq M\}$. Pick $z_0=a+bi$ such that $|z_0|>M$. If $a\geq M$ or $a\leq -M$, comparing $R(z_0)$ with $R(a)\in \mathbb{R}$ and using the fact that $\Re R'(z_0)>|\Im R'(z_0)|$, it follows that $\text{sign}(\Im R(z_0))=\text{sign}(b)$ and hence that the associated ray at $z_0$ does not intersect the real line. Next, if $a\in (-M,M)$, comparing with $z_1=M$ or $z_1=-M$, recalling that $R(z_1)\in \mathbb{R}$ and noting that $|\Re(z_0-z_1)|>|\Im(z_0-z_1)|$ yields $\text{sign}(\Im R(z_0))=\text{sign}(b)$. Once again we obtain that the associated ray at $z_0$ does not intersect the real line.
In particular, adjoining to $\{z:|z|\leq M\}$ any real interval containing $z=M$ or $z=-M$ yields an irregular invariant set of mixed type. \end{proof} Taking into account \cref{th:negativeResidueAtInfty}, which can be found later in the text, \cref{prop:onlyRealLine1dimPts} together with \cref{prop:irregularForKEqualToOne} imply the following claim. \begin{corollary} Given $T$ with $\deg Q - \deg P=1$, there exist irregular $T_{CH}$-invariant sets if and only if there is an affine change of variables after which $R(z){\partial_z}$ is such that if $z\in \setR$, then $R(z)\in \setR$ and $\lambda>0$ (recall the definition of $\lambda$ in \eqref{eq:expN}). \end{corollary} \begin{proposition}\label{prop:irregularK0} Given $T=Q\frac{d}{dz}+P$, suppose that $K=\deg Q- \deg P =0$ and $Q,P$ are real. Then there exist irregular $T_{CH}$-invariant sets. \end{proposition} \begin{proof} First, after a possible affine change of variables, assume that \[R(z){\partial_z}=\left(a_0 + \text{ higher order terms in $\frac{1}{z}$}\right) {\partial_z}\] at $\infty$ with $a_0>0$. Take $M$ big enough so that if $\Re z>M$, then $\Re R(z) >|\Im R(z)|$ and $R'(z)$ satisfies the conditions $|R'(z)|<a_0/2$ and $|\Re R'(z)|>|\Im R'(z)|$ in an open disk with center $M$ and radius $\delta<1$. Take a point $z_0=x+iy$ with $|z_0-M|\leq \delta$, $y>0$ and $x >M$. We find that $|\Im R(z_0)|< y a_0/2$ and $\Re R(z_0)>a_0/2$. Suppose that the associated ray $z_0+tR(z_0)$ intersects the real line for some $t>0$. Then $y-t\frac{ya_0}{2} <0$, so that $t>\frac{2}{a_0}$. Then \[ \Re(z_0+t R(z_0))>M+\frac{2}{a_0} \frac{a_0}{2}=M+1>M+\delta. \] In the same way we obtain that if $y<0$, then the intersection occurs only when $\Re(z_0+tR(z_0))>M+\delta$. Since $\Re R(z) >|\Im R(z)|$, no associated ray of a point of the complement lying outside the disk of radius $\delta$ can intersect $(M,M+\delta]$.
We conclude that the set $\{z:\Re z\leq M\}\cup (M,M+\delta]$ is $T_{CH}$-invariant, since the associated rays of points in the complement of the latter set lie in the complement. \end{proof} Together with \cref{prop:onlyRealLine1dimPts}, \cref{prop:irregularK0} implies the following necessary and sufficient condition for irregularity for $K=0$. \begin{corollary} Given an operator $T$ with $K=0$, there exist irregular $T_{CH}$-invariant sets if and only if $R(z)$ is real after a suitable affine change of variables. \end{corollary} Finally, let us handle the case of operators with $K=\deg Q -\deg P=-1$. \begin{proposition}\label{prop:Ksymm} Suppose that $K=-1$, $Q,P$ are real polynomials with the leading coefficients $q_l,p_l$ respectively and are not as in \cref{thm:classifying1d}. If $q_l p_l=-1$ and there exist irregular $T_{CH}$-invariant sets of mixed type, then there is an additional line of symmetry, other than $\mathbb{R}$, given by $a+i\mathbb{R}$ for some $a\in \mathbb{R}$. \end{proposition} \begin{proof} Since $\deg Q + \deg P$ is odd, there is a zero of $PQ$ on the real line. Since $R(z)<0$ for sufficiently large $z>0$ and $R(z)>0$ for $z<0$ with $|z|$ sufficiently large, it follows that real $z$ with large absolute values are contained in $\minvset{CH}$. Since $R(z)\in \mathbb{R}\setminus\{0\}$ if $z\in \mathbb{R}\setminus{\mathcal{Z}(PQ)}$, it follows that $\mathbb{R}\subset \minvset{CH}$. It follows from \cref{le:twodimensional} that no points of $\mathbb{R}$ are irregular. By \cref{prop:onlyRealLine1dimPts}, there must be a change of variables which is a composition of a translation and a non-zero rotation, such that $R(z){\partial_z}$ once again is such that $R(z)$ is real. Since $R(z)$ is real on $\setR$ and $K=-1$, we have $R(z){\partial_z}=\left(a_1/z+\text{ higher order terms in $\frac{1}{z}$ }\right){\partial_z}$ with $a_1\in \mathbb{R}$, and it follows that the rotation must be by $\pi/2$ or $3\pi/2$ radians.
\end{proof} \begin{corollary} In the above notation, if there exist irregular invariant sets, then there exists an affine change of variables after which \[ R(z){\partial_z}=\left(\frac{a_1}{z}+\text{ higher order terms in }\frac{1}{z}\right)\partial_z \] with $a_1>0$. \end{corollary} \begin{proposition} Suppose that $P,Q$ are such that after an affine change of variables, $R(z){\partial_z}=\left(\frac{a_1}{z}+\text{ higher order terms in }\frac{1}{z}\right){\partial_z}$ with $a_1>0$ and $R(z)\in \setR$ if $z\in \setR$. Then there exist irregular $T_{CH}$-invariant sets. \end{proposition} \begin{proof} Find $M$ large enough so that $\{z:|\Re z|\leq M\}$ is $T_{CH}$-invariant and such that if $\Re z\geq M$, $R(z)=\frac{a_1}{z}+f(z)$ where $f(z)$ is such that $|\Im(f(z))|\leq|\Im\left(\frac{a_1}{2z}\right)|$ and $|\Re(f(z))|\leq|\Re\left(\frac{a_1}{2z}\right)|$. (Note that this is possible since the rational function $R(z)=\sum_{l=1}^\infty a_l/z^l$ is real and the coefficients $a_l$ are uniformly bounded.) Take $z=x+iy$ with $y>0$, $x>M$. Then \[ \Im(R(z))\geq\Im\left(\frac{3a_1}{2z}\right)=\frac{-3a_1y}{2(x^2+y^2)} \] and \[ \Re(R(z))\geq \frac{a_1x}{2(x^2+y^2)}. \] Hence, if $\Im(z+tR(z))=0$ then $t\geq\frac{2(x^2+y^2)}{3a_1}$, so that $\Re(z+tR(z))\geq x+\frac{x}{3}>\frac{4}{3}M$. By symmetry, we obtain the same estimate for $y<0$. In particular, $\{z:|\Re(z)|\leq M\}\cup(M,\tfrac{4}{3}M]$ is $T_{CH}$-invariant. \end{proof} Summarizing, the results of this section give necessary and sufficient conditions for the existence of irregular $T_{CH}$-invariant sets in terms of the operator $T$, stated below.
\begin{theorem} There exist irregular $T_{CH}$-invariant sets if and only if there is an affine change of variables such that we get one of the following four cases: {\rm (1)} $K=1$ and $R(z){\partial_z}$ is such that if $z\in \setR$ then $R(z)\in \setR$ and $\lambda>0$; {\rm (2)} $K=0$ and $R(z){\partial_z}$ is such that if $z\in \setR$ then $R(z)\in \setR$; {\rm (3)} $K=-1$ and $R(z){\partial_z}=\left(\frac{a_1}{z}+\text{ higher order terms in }\frac{1}{z}\right){\partial_z}$ with $a_1>0$, and such that if $z\in \setR$ then $R(z)\in \setR$; {\rm (4)} $K=-1$ and $P$ and $Q$ are as in \cref{thm:classifying1d}. \end{theorem} \section{Some remarks on the boundary of \texorpdfstring{$\minvset{CH}$}{CH}} \label{sec:GenProp} In \cref{thm:classifying1d} above we have completely characterized for which operators $T$ the minimal invariant set $\minvset{CH}$ is fully irregular and described these sets. Further, in \cref{le:twodimensional} we have observed that in all other cases $\minvset{CH}$ is regular. Below we study in more detail the boundary of a regular $\minvset{CH}$. \medskip Given a regular $T_{CH}$-invariant set $S\subset \mathbb{C}$ for some operator $T$, we say that a boundary point $p\in \partial S$ is \emph{non-singular} if the restriction of $S$ to any sufficiently small disk centered at $p$ is homeomorphic to a half-disk and \emph{singular} otherwise. \begin{proposition}\label{prop:singular} If $p$ is a singular point lying on the boundary of some regular $T_{CH}$-invariant set $S$, then $p$ is either a root of $Q(z)$ or a root of $P(z)$. \end{proposition} \begin{proof} Given a regular $T_{CH}$-invariant set $S$, suppose that $p\in \partial S$ is a singular point such that $p\notin \mathcal{Z}(PQ)$. Let $\phi(t)$ be the solution to $\dot{\phi}(t)=-R(\phi(t))$, $\phi(0)=p$. Let $t_0>0$ be in the domain of definition of $\phi(t)$ and let $U$ be a neighborhood of $\phi(t_0)$ not containing $p$ nor a zero of $PQ$.
Moreover, pick $U$ small enough so that $U\cap S$ is connected. Let $V$ be a neighborhood of $p$ disjoint from $U$ such that if $z\in V$, the solutions $\phi_z(t)$ of $\dot{\phi}_z(t)=-R(\phi_z(t))$, $\phi_z(0)=z$, satisfy $\cup_{z\in V}\phi_z(t_0)=U$. (Such a neighborhood exists since $-R(z)$ is non-singular at $p$.) Consider two distinct components $S_1,S_2$ of $V\cap(S\setminus \{p\})$. All forward trajectories of points in either $S_1$ or $S_2$ must pass through $p$, but this is impossible, since such a set is one-dimensional. \end{proof} Let us now concentrate on non-singular boundary points of $\minvset{CH}$. \begin{definition}\label{def:types} We say that a point $p\in \partial \minvset{CH}$ satisfying $p\notin \mathcal{Z}(PQ)$ is of (a) \defin{local type} if either $\partial \minvset{CH}$ is $C^1$-smooth at $p$ and $R(p)$ is tangent to $\partial \minvset{CH}$ there; or for every $\epsilon>0$ there is $\omega\in \bC$ with $|\omega|<\epsilon$ such that the segment $\{p+t(R(p)+\omega):t\in [0,t_0)\}$ intersects the interior $(\minvset{CH})^\circ$ for every $t_0>0$. (b) \defin{global type} if the associated ray $r_p$ intersects the boundary $\partial \minvset{CH}$ at some other point $p' \neq p$. (c) \defin{mixed type} if $p$ is both of local and global type. \end{definition} \begin{proposition}\label{prop:types} Every point $p\in \partial \minvset{CH}$ of a compact $\minvset{CH}$ such that $p\notin \mathcal{Z}(PQ)$ and the boundary $\partial \minvset{CH}$ is $C^1$-smooth at $p$ belongs to one of the above three types. \end{proposition} \begin{proof} First suppose that $\minvset{CH}$ is irregular. Then it is fully irregular and it is easy to see that all points $z\in \minvset{CH}\setminus \mathcal{Z}(PQ)$ lie on the boundary and are points of mixed type. So suppose that $\minvset{CH}$ is regular and consider $R_p(I)=\{p+tR(p):t\in I\}$. Suppose $p$ is neither of local type nor of global type. By \cref{prop:notinv}, $R_p$ cannot intersect the interior of $\minvset{CH}$.
Now, $R_p((0,t_1])$ is in the exterior of $\minvset{CH}$ for all $t_1\in \mathbb{R}_+$. Since $R(z)$ is continuous at $p$, there is an open neighborhood $U$ of $p$ such that for all points $z\in U$, $R_z((0,t_1])$ does not intersect $\minvset{CH}\setminus U$. Hence, all points in $(\minvset{CH}\setminus U)^c$ have associated rays that are contained in $(\minvset{CH}\setminus U)^c$. By \cref{th:charact}, $\minvset{CH}\setminus U$ is $T_{CH}$-invariant, contradicting the minimality of $\minvset{CH}$. \end{proof} \begin{remark}\label{lm:local} The segments of $\partial \minvset{CH}$ consisting of points of the local type are segments of trajectories of $-R(z) {\partial_z}$ and are therefore analytic. \end{remark} \noindent {\it Claim.} Suppose $C_a$ and $C_g$ are segments of $\partial \minvset{CH}\setminus \mathcal{Z}(PQ)$ where $C_a$ is analytic and $C_g$ is of global type such that the associated rays of $C_g$ are tangent to $C_a$. Then $C_g$ is analytic. The argument goes as follows. For each point $p$ on $C_a$, we can find the equation for the tangent line to $C_a$ at $p$. This line intersects $C_g$ precisely at the points $p^\prime$ at which it is parallel to the vector field $R(z){\partial_z}$ (which is the direction of the associated ray). Finding $p^\prime$ in terms of $p$ involves only rational equations in $p$. Hence, $p^\prime$ depends analytically on $p$, and since $C_a$ is analytic, the claim follows. \bigskip We expect that there are only finitely many points on the boundary $\partial \minvset{CH}$ at which $\partial \minvset{CH}$ is not analytic and believe that \cref{prop:types} extends to unbounded minimal invariant sets as well. In fact, we propose the following conjecture. \begin{conjecture} All points in $\partial \minvset{CH}\setminus \mathcal{Z}(PQ)$ of a minimal invariant set $\minvset{CH}$ belong to one of the above three types.
\end{conjecture} \begin{remark} We have already seen an example when all boundary points that are not zeros of $PQ$ are of local type, namely \cref{ex:coch}. In the following sections we analyze $\minvset{CH}$ for certain specific choices of $P$ and $Q$. In particular, for $Q=a(z-z_0), P=b(z-z_1)$, using the action of the affine group we can assume $Q=\alpha (z-1), P=z$. We then find that for $\alpha>0$, all points that are not zeros of $PQ$ are of mixed type; for $\alpha<0$, all points which are not zeros of $PQ$ are of local type; and for $\alpha\notin\mathbb{R}$, we get points of local, global and mixed type. Furthermore, \cref{ex:mustach} provides an operator $T$ with $\deg Q -\deg P =1$ where we have points of local, global and mixed type. Finally, in the case considered in \cref{thm:relambda0} we prove that all non-trivial $T_{CH}$-invariant sets have only boundary points of local type. \end{remark} \section{Properties of $\minvset{CH}$ for $\deg Q -\deg P=-1, 0$, and $1$}\label{sec:K=-1,0,1} Below we discuss all possible situations when $\minvset{CH}$ can be nontrivial. We recall that the cases $K \in \{-1,0,1\}$ correspond to the situations where $R(z)\partial_z$ at $\infty$ has a zero of order $3$, $2$, and $1$, respectively. \subsection{Case $\deg Q - \deg P=-1$} We first prove that in this situation, $\minvset{CH}$ is always nontrivial. \begin{proposition}\label{prop:K=-1} If $\deg Q -\deg P=-1$ then $\minvset{CH}\neq \bC$. \end{proposition} \begin{proof} After an affine change of variable, we can assume that \[ R(z){\partial_z}=\left(\frac{1}{z}+\text{higher order terms in } \frac{1}{z}\right){\partial_z} \] at $\infty$. In particular, $\arg(R(z))\approx \arg (\overline{z})$ for $|z|$ large, so that we can find $M$ such that if $|\Re z|>M$, then $\text{sign}(\Re R(z))=\text{sign}(\Re z)$. In particular, the strip $\{z=a+bi:|a|\leq M\}$ is $T_{CH}$-invariant. \end{proof} In the case of the lowest possible degrees with $\deg Q -\deg P=-1$ we get the following.
\begin{proposition} Suppose that $Q=\alpha\neq 0, P=\beta(z-z_0)$. Then $\minvset{CH}$ is the line passing through the point $z_0$ and forming the angle $\frac{1}{2}\arg\left(-\frac{\alpha}{\beta}\right)$ with the real axis. \end{proposition} \begin{proof} Introduce the changes of variable $z\to z+z_0$ and then $z\to \sqrt{\frac{\beta}{\alpha}} z$. Then we obtain the associated problem with $z_0=0$ and $\alpha=\beta=1$. Then the $P$-starting separatrices of $-R(z) {\partial_ z}$ form the imaginary axis; moreover, $\Re R(z)>0$ if $\Re z>0$ and $\Re R(z)<0$ if $\Re z<0$. It follows that the imaginary axis is the minimal invariant set. Reversing the changes of variable, the result follows. \end{proof} The next degree to consider is $\deg Q=1$ and $\deg P=2$. At the moment we only have the description of $\minvset{CH}$ in the situation when $Q$ and $P$ have a common zero, see \cref{sec:1-point} and in particular \cref{sec:constlinear}. \subsection{Case $\deg Q - \deg P=0$} Again we start by proving that in this situation $\minvset{CH}$ is always non-trivial. First, for $\epsilon>0$ and $\theta\in [0,2\pi)$, define the cone \[ \defin{C(\theta,\epsilon)} \coloneqq \{z \in \bC:\arg(z)\in (\theta-\epsilon,\theta +\epsilon)\}. \] \begin{proposition}\label{prop:K=0} Suppose that $\deg Q -\deg P=0$. There is an angle $\theta$, such that for every $\epsilon>0$, there is an $M_\epsilon$, such that if $|z|>M_\epsilon$ and $z\notin C(\theta,\epsilon)$, then $z\notin \minvset{CH}$. In particular, $\minvset{CH} \neq \bC$. \end{proposition} \begin{proof} After an affine change of variable, we can assume that at $\infty$ \[ R(z){\partial_z} =\left( a_0 + \text{higher order terms in $\frac{1}{z}$}\right){\partial_z}, \] with $a_0>0$. In particular, $\arg R(z)\approx 0$ for $|z|$ large. Now, pick $M'$ large enough so that $\arg(R(z))\in (-\epsilon/3,\epsilon/3)$ whenever $|z| \geq M'$.
Now, let $S$ be the union of \[ C(\pi,\epsilon), \; \{z-te^{i\epsilon/2} : |z| \leq M',\, t \geq 0\} \; \text{ and } \; \{z-te^{-i\epsilon/2} : |z| \leq M',\, t \geq 0 \}, \] see \cref{fig:case-k0-cone}. It follows from \cref{th:charact} that $S$ is $T_{CH}$-invariant. Moreover, $S\setminus C(\pi,\epsilon)$ is compact, and thus lies in some disk with radius $M_\epsilon$. \begin{figure}[!ht] \includegraphics[width=0.5\textwidth]{case-k0-cone} \caption{The set $S$.}\label{fig:case-k0-cone} \end{figure} \end{proof} We shall now describe the minimal invariant sets when $\deg Q= \deg P=1$. After an affine change of variables we can assume that $T=\alpha (z-1){\frac{d}{dz}} +z$ for some $\alpha\in \mathbb{C}$. We have already seen that if $\alpha>0$, then $\minvset{CH}=\{z\in\mathbb{R}:z\leq 1\}$. So suppose $\alpha\notin \mathbb{R}_+$. We first make some general remarks. Along a trajectory of $R(z){\partial_z}$ we have $\dot z=R(z)$ and $\ddot z=R'(z)R(z)$, so the sign of its curvature equals the sign of $\Im(\ddot z\,\overline{\dot z})=|R(z)|^2\,\Im R'(z)$; since $R'(z)=\alpha/z^2$, the curve of inflections is given by $\Im(\alpha \overline{z}^{\,2})=0$. Writing $\alpha=a+bi$ and $z=x+iy$, this is the set of solutions to \[ bx^2-2axy-by^2=0, \] which coincides with \[ \{te^{i\arg(\alpha)/2}:t\in \setR\}\cup \{te^{i\arg(-\alpha)/2}:t\in \setR\}. \] Moreover, since $-R(z)\approx \alpha/z$ near the origin, the $P$-starting separatrices start in the directions $\pm e^{i\arg(\alpha)/2}$. Since the integral trajectories of $-R(z){\partial_z}$ are analytic outside $0$, there is a small neighborhood $U$ of the origin where the $P$-starting separatrices do not intersect the curve of inflections. Hence, the $P$-starting separatrices define a convex curve in $U$. Consider now the $t$-traces corresponding to $u=0$. Analyzing $R(z)=\alpha-\alpha/z$ in $U$, we get that any point in $U$ which belongs to one of these $t$-traces lies on the ``inside'' of the convex curve defined by the $P$-starting separatrices. This observation is used to show $T_{CH}$-invariance below for certain sets $S\subset \bC$. In particular, we use it to show that the $t$-traces corresponding to $u=0$ do not pass to the exterior of $S$ through $0$.
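The description of the curve of inflections can be sanity-checked numerically. In the Python sketch below (ours, purely illustrative and not part of the argument), the quantity $\Im(\alpha\overline{z}^{\,2})$ has the same sign as the curvature of the trajectory of $R(z)\partial_z$ through $z$, because $R'(z)=\alpha/z^2$; the script verifies that it vanishes on the two lines in the directions $e^{i\arg(\alpha)/2}$ and $e^{i\arg(-\alpha)/2}$, and that, writing $\alpha=a+bi$ and $z=x+iy$, it equals $bx^2-2axy-by^2$.

```python
import cmath
import random

def inflection_value(alpha, z):
    # Trajectories of R(z) d/dz with R(z) = alpha*(z-1)/z have curvature of
    # the sign of Im R'(z) = Im(alpha/z^2), which vanishes exactly when
    # Im(alpha * conj(z)^2) = 0.
    return (alpha * z.conjugate() ** 2).imag

def max_deviation(trials=200, seed=1):
    # Check that the lines in the directions e^{i arg(alpha)/2} and
    # e^{i arg(-alpha)/2} lie in the zero set, and that in coordinates
    # alpha = a+bi, z = x+iy the defining expression is b x^2 - 2 a x y - b y^2.
    rng = random.Random(seed)
    err = 0.0
    for _ in range(trials):
        alpha = complex(rng.uniform(-2, 2), rng.uniform(-2, 2))
        for phase in (cmath.phase(alpha) / 2, cmath.phase(-alpha) / 2):
            z = rng.uniform(-5, 5) * cmath.exp(1j * phase)
            err = max(err, abs(inflection_value(alpha, z)))
        # compare with the real-coordinate form at a random point
        z = complex(rng.uniform(-5, 5), rng.uniform(-5, 5))
        a, b, x, y = alpha.real, alpha.imag, z.real, z.imag
        err = max(err, abs(inflection_value(alpha, z) - (b*x*x - 2*a*x*y - b*y*y)))
    return err
```

With the given seed, `max_deviation()` returns a value of the order of machine rounding error.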
\bigskip If $\alpha<0$ then the $P$-starting separatrices of $-R(z) {\partial_z}$ constitute the boundary of $\minvset{CH}$ and the minimal invariant set is the minimal convex set containing them, see \cref{fig:separatrix-deg1-nils}. To prove this, denote this set by $S$. We have that $\Re R(i x)=\alpha<0$ for $x\in \mathbb{R}\setminus \{0\}$. Hence, since the curve of inflections is the union of the real and the imaginary axes, and a $P$-ending separatrix coincides with the positive part of the real axis, it follows that the $P$-starting separatrices do not intersect the curve of inflections. Hence, the associated rays on the boundary of $S$ lie in the closure of the complement, which by \cref{cor:notPassingThrough} gives that no points can pass through $S\setminus \{0\}$ to the exterior of $S$. The fact that the $t$-traces corresponding to $u=0$ do not pass through the boundary of $S$ was discussed earlier. By \cref{prop:rootTrajectoryDirections}, we get that all other $t$-traces starting in $0$ have the same property as well. Hence, $S$ is $T_{CH}$-invariant. Every point in the interior of $S$ has an associated ray intersecting either the $P$-starting separatrices or $z=0$, both of which are contained in $\minvset{CH}$. Hence, we get that all points of $S$ are contained in $\minvset{CH}$ and thus conclude that $S=\minvset{CH}$. \begin{figure}[!ht] \includegraphics[width=0.5\textwidth]{separatrix-deg1-nils} \caption{ For the operator $T=-(z-1){\frac{d}{dz}}+z$, its $\minvset{CH}$ coincides with the region bounded by the separatrices (black) containing the point $z=1$. }\label{fig:separatrix-deg1-nils} \end{figure} Let us turn to the case $\alpha\notin\setR$ and write $z=x+iy$. First, suppose $\Re \alpha=0$ and $\Im \alpha>0$ -- the other purely imaginary case can be treated analogously. We find that the curve of inflections is given by $\{te^{i\pi/4}:t\in \setR\}\cup \{te^{-i\pi/4}:t\in \setR\}$.
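As a numerical aside (ours, not part of the proofs): for $\alpha=i$ a primitive of $P/Q=\frac{z}{i(z-1)}$ is $-i(z+\log(z-1))$, whose imaginary part is $-\Re z-\ln|z-1|$; hence $H(z)=\Re z+\ln|z-1|$ is constant along the integral trajectories of $\pm R(z)\partial_z$. The Python sketch below confirms this with a crude fourth-order Runge--Kutta integration of $\dot z=-R(z)$; the starting point and step size are arbitrary choices.

```python
import cmath

def R(z):
    # R = Q/P for T = i(z-1) d/dz + z, i.e. alpha = i
    return 1j * (z - 1) / z

def H(z):
    # imaginary part (up to sign) of the primitive -i(z + log(z-1)) of P/Q;
    # only ln|z-1| enters, which is continuous, so there are no branch-cut issues
    return z.real + cmath.log(z - 1).real

def flow(z0, dt=1e-3, steps=1000):
    # classical Runge-Kutta (RK4) integration of z' = -R(z)
    z = z0
    for _ in range(steps):
        k1 = -R(z)
        k2 = -R(z + dt / 2 * k1)
        k3 = -R(z + dt / 2 * k2)
        k4 = -R(z + dt * k3)
        z += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return z

z0 = 0.5 + 0.5j
z1 = flow(z0)
# z1 is far from z0, yet H(z1) agrees with H(z0) up to the integration error
```

The drift $|H(z_1)-H(z_0)|$ is of the order of the integration error, while the point itself moves a macroscopic distance along its trajectory.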
One of the $P$-starting separatrices is a homoclinic separatrix that has $z=1$ in the region that its closure encloses. It starts in the direction $e^{{\pi i}/{4}}$ and due to the above parametrization of the curve of inflections we can deduce using symmetry that this separatrix does not intersect it. For future use, we parameterize this separatrix as $\sigma_0(t)$ where $\lim_{t\to 0}\sigma_0(t)=0$ and $\sigma_0(t)$ is defined in $(0,t_{max})$. \begin{figure}[!ht] \includegraphics[width=0.5\textwidth]{morran} \caption{For $T=i(z-1){\frac{d}{dz}}+z$, its $\minvset{CH}$ is the union of $A$ and $B$. Here $A$ is the region containing $-i$ and bounded by the other $0$-starting separatrix and the blue curve while $B$ is the region containing the point $z=1$ and bounded by the separatrix which forms a loop. (The separatrices are the black curves and the blue curve is an approximation of the boundary drawn by hand). }\label{fig:morran} \end{figure} The other $P$-starting separatrix starts in the direction $e^{{5\pi i}/{4}}$ and we parameterize it as $\sigma_1(t)$, with $\lim_{t\to 0}\sigma_1(t) =0$. Then, since $\lim_{t\to 0}\arg(-R(\sigma_1(t)))=\frac{3\pi}{2}$, and $\Im R(z)>0$ in the lower half plane, we obtain that $\sigma_1(t)$ does not intersect the curve of inflections. Next, for each $u\in \sigma_0(t)$, consider the union of the corresponding $t$-traces. Let $A$ be the region containing $-i$ that is bounded by this union and $\sigma_1((0,\infty))$. Next, let $B$ be the convex hull of $\overline{\sigma_0(0,t_{max})}$ and set $S \coloneqq A\cup B$, see \cref{fig:morran}. We claim that $\minvset{CH}=S$. First, note that $A$ lies in the region $\left(\{y<x\}\cap\{y<-x\}\right)\cup\{0\}$. Indeed, this follows from the following facts. First, we note that if $y\geq x, z\neq 0$, then $\frac{\pi}{4}<\arg R(z) <\frac{5\pi}{4}$. Secondly, we recall that the curve of inflections does not intersect $\{y<x\}\cap\{y>-x\}$.
Lastly, we have that $\sigma_0$ lies in $\{y<x\}\cap\{y>-x\}$. All in all, we can conclude that the associated rays of points outside of $B$ lie in $\left(\{y<x\}\cap\{y<-x\}\right)\cup\{0\}$. Since $\sigma_0$ and $\sigma_1$ have no inflections, we get that the associated rays starting on these separatrices lie in the closure of the complement of $S$. By definition, it follows that the boundary points $p$ of $A\setminus (\sigma_1 \cup \{0\})$ are such that the associated rays lie in the closure of the complement of $S$. From \cref{cor:notPassingThrough}, it follows that no points can pass through $S\setminus \{0\}$ to the exterior of $S$. We next note that the corresponding $t$-traces of $u\in B\setminus\{0\}$ start in some direction $\theta\in (-\frac{\pi}{4},\frac{\pi}{4})$ by \cref{prop:rootTrajectoryDirections} and the ones corresponding to $u\in A$ begin in the direction of $B$. Lastly, suppose that $u=0$. By the constant sign of the curvature of the separatrices, \cref{prop:rootTrajectoryDirections}, the fact that these $t$-traces have limits $1$ and $-i\infty$, and the fact that $\arg R(z)\approx \frac{\pi}{2}$ for $z$ of large absolute value, it follows that $S$ is indeed $T_{CH}$-invariant, and thus coincides with $\minvset{CH}$. Now write $\alpha=a+bi$ with $a,b\neq 0$. In this case the curve of inflections is given by $bx^2-by^2-2axy=0$, which is the union of two perpendicular straight lines, one of which contains the directions in which the $P$-starting separatrices start. By symmetry, it suffices to consider only the case $b>0$. Let us first assume that $a>0$. We begin by proving that the $P$-starting separatrices do not intersect the curve of inflections. Note here that since $1$ is a sink of $-R(z)\partial_z$, at least one $P$-starting separatrix $\sigma_0(t)$ has limit $1$. Furthermore, $\infty$ is a critical point of $-R(z)\partial_z$ of degree 2 and $-R\approx -\alpha$ close to $\infty$, which gives that one $P$-starting separatrix $\sigma_1(t)$ tends to $\infty$ in the direction of $-\alpha$.
Set $v=(v_1,v_2)$ with positive $v_1,v_2$ such that $\arctan\left(\frac{v_2}{v_1}\right)= \frac{\arg(\alpha)}{2}$. Analyzing $R(z)=\Re R(z)+i \Im R(z)$ on the punctured line $ \{te^{-{i\arg(\alpha)}/{2}}:t\in \setR\setminus\{0\}\}$ one can deduce that $v\cdot (\Re R(z),\Im R(z))>0$. Since the curve of inflections and the separatrices start in the directions given implicitly in \cref{prop:dirSeptrix}, we obtain, in particular, that neither of the $P$-starting separatrices intersects the curve of inflections. Consider now the $t$-traces corresponding to $u$ on $\sigma_0$. An argument similar to that for $a=0$ shows that for each $u$, one of these $t$-traces remains in the region bounded by the rays \[ \{-te^{i\arg(\alpha)/2}:t\geq0\}\cup \{te^{i\arg(-\alpha)/2}:t\geq 0\}. \] Furthermore, they do not intersect $\sigma_1$. Let us denote the union of these $t$-traces by $\Theta$. Now, let $A$ be the region that contains $-i$ and is bounded by $\sigma_1$ and $\Theta$. Take a boundary point $p\in \partial A\setminus (\sigma_1\cup\{0\})$. It is easy to see that for each such $p$, there exists a unique $u\in \sigma_0$ such that $u=p+t_0R(p)$ for some $t_0\ge 0$. Taking again the $t$-traces corresponding to $\partial A \setminus \sigma_1$, we obtain that one of these $t$-traces remains in the region that contains $1$ and is bounded by \[ \{te^{i{\arg \alpha }/{2}}:t\geq 0\}\cup \{te^{i{\arg(-\alpha)}/{2}}:t\geq 0\}. \] Analyzing the rational function $R$ in this region, one can deduce that the smallest simply connected region containing $\sigma_0$ together with the union of these $t$-traces forms a set $B$ contained in the very same region. In the same manner as above, one can once again conclude that $A\cup B$ is $T_{CH}$-invariant. Hence, it coincides with $\minvset{CH}$, see \cref{fig:hemulen}. \begin{figure}[!ht] \includegraphics[width=0.5\textwidth]{hemulen} \caption{For $T=(1+10i)(z-1){\frac{d}{dz}}+z$, its $\minvset{CH}$ is the union of $A$ and $B$.
Here $A$ is the region that contains $z=1$ and is bounded by the orange curve and the separatrix that swirls around the point $z=1$ while $B$ is the region that contains $z=i$ and is bounded by the other $0$-starting separatrix and the blue curve. (The separatrices are the black curves; the blue and the orange curves are approximations of the boundary drawn by hand. To make the essential features more visible we have chosen $\alpha$ with large imaginary part). }\label{fig:hemulen} \end{figure} We turn to the remaining case $\alpha=a+bi$ with $a<0$. By symmetry, we can assume that $b>0$. Denote by $A$ the region that contains $1$ and is bounded by the $P$-starting separatrices. We call them $\sigma_0$ and $\sigma_1$ and let the latter be the one that starts in the direction $e^{i{\arg \alpha}/{2}}$. Since both of these tend to $\infty$ in the direction of $-\alpha$, it follows that $\sigma_1$, but not $\sigma_0$, intersects the curve of inflections, and $\sigma_1$ intersects the part that goes in the direction of $e^{-i{\arg \alpha }/{2}}$. We can also note that this intersection is unique and let $t_0$ be such that $\sigma_1(t_0)$ belongs to the curve of inflections. We obtain that the associated rays of points on $\sigma_0$ lie in the closure of the complement of $A$. However, since $\sigma_1$ has an inflection, the union of the $t$-traces of $u$ on $\sigma_1((0,t_0])$ yields a larger set, see \cref{fig:hattifnatt}. The boundary points of this resulting set have their associated rays in the closure of the complement, and by analyzing the $t$-traces starting in $0$ in the same way as above, we can deduce that this set is indeed $T_{CH}$-invariant. It must therefore coincide with $\minvset{CH}$. \begin{figure}[!ht] \includegraphics[width=0.5\textwidth]{hattifnatt} \caption{For $T=(-1+4i)(z-1){\frac{d}{dz}}+z$, its $\minvset{CH}$ is the region that contains $z=0$ and is bounded by parts of the separatrices (black) and the blue curve.
(The blue curve is an approximation of the boundary drawn by hand. To make the essential features more visible we have chosen $\alpha$ with large imaginary part). }\label{fig:hattifnatt} \end{figure} Note that we obtain the following characterization of the boundary. If $\alpha>0$, all boundary points are of mixed type; if $\alpha<0$, all points are of local type; if $\alpha=a+bi\notin \setR$ with $a\leq 0$, we get points of local and global type; and if $\alpha=a+bi\notin \setR$ with $a> 0$, we get points of local, global and mixed type. \subsection{Case $\deg Q -\deg P=1$} In \cref{th:compcrit}~(iii) we found necessary and sufficient conditions for the compactness of $\minvset{CH}$ given by $K=\deg Q -\deg P=1$ and $\Re \lambda \ge 0$. Below we show that for $K=1$ and $\Re \lambda < 0$, we get $\minvset{CH}=\bC$, i.e. this case is trivial. Then we will describe $\minvset{CH}$ for $K=1$ and $\Re \lambda = 0$. Unfortunately, in the most interesting situation $K=1$ and $\Re \lambda > 0$, we do not have a general description of $\minvset{CH}$. However, we provide a number of partial results, observations and examples. \subsubsection{$\Re \lambda<0$} Suppose now that $\Re \lambda<0$ and that $\deg P>0$ (since otherwise $\minvset{CH}$ is just the root of $Q$). In this case, $R(z){\partial_z}$ has a source at $\infty$. Our next result is as follows. \begin{proposition}\label{th:negativeResidueAtInfty} If $\Re \lambda<0$, then $\minvset{CH}=\bC$. \end{proposition} \begin{proof} Take $M>0$ large enough so that if $|z|>M$, then the forward trajectory of $-R(z) {\partial_z}$ goes to $\infty$. It is clear that such an $M$ exists, since $\infty$ is a sink of $-R(z) {\partial_z}$. Now take $z$ such that $|z|>M$ and consider the integral curve $\varphi(t)$ of $-R(z) {\partial_z}$ passing through $z$ defined on its maximal domain. We shall prove that for each $t$ in the domain of definition, $\varphi(t)\in \minvset{CH}$.
There are three possible limits $\lim_{t\to t_{min}}\varphi(t)$ for this integral curve: \begin{enumerate} \item a zero of $P$, \item a zero $z^*$ of $Q$ which is a source of $-R(z) {\partial_z}$, \item a zero $z^*$ of $Q$ of multiplicity $k\geq 2$. \end{enumerate} In the first case $\varphi(t)$ is a $P$-starting separatrix of $-R(z) {\partial_z}$ and hence contained in $\minvset{CH}$. In the second case, by \cref{prop:qRootSinks}, $z^*$ is in the interior of $\minvset{CH}$ and so $\varphi(t)$ is a forward trajectory of $-R(z) {\partial_z}$ of a point belonging to $\minvset{CH}$, so $\varphi(t)\in \minvset{CH}$. It remains to consider the third case, i.e. when $\lim_{t\to t_{min}}\varphi(t) = \lim_{t\to-\infty}\varphi(t)$ is equal to $z^*$ which is a root of $Q$ of multiplicity at least $2$. We can assume that $z^*=0$ and note that there are $k-1$ possible values of $\lim_{t\to -\infty}\arg(\varphi(t))$. After an affine change of variables we can assume that $R(z) \partial_z = \left( z^k+ \mathcal{O}(z^{k+1}) \right) \partial_z$ as $z\to 0$ and hence $\lim_{t\to -\infty}\varphi(t)=0$. Now, note that for each $\theta$ such that $\theta$ is the argument of a $(k-1)^\textnormal{th}$ root of unity, there exists at least one $P$-starting separatrix $\sigma(t)$ (contained in $\minvset{CH}$) such that $\lim_{t\to \infty} \sigma(t)=0$ and $\lim_{t\to \infty} \arg(\sigma(t)) =\theta$. Suppose first that $k\geq 3$. We have that as $t \to -\infty$, $\arg(\varphi(t))$ approaches the argument of a $(k-1)^\textnormal{th}$ root of $-1$ (there are only $k-1$ directions from which we may approach $z^*$ via a backwards trajectory of $-R(z) \partial_z$). Since $k\geq 3$, it follows that for $t \ll 0$, the ray $\{\varphi(t)+sR(\varphi(t)):s\geq 0\}$ intersects either $0$ or a $P$-starting separatrix (contained in $\minvset{CH}$) and so $\varphi((-\infty,\infty))\subset\minvset{CH}$. So suppose $k=2$.
Then there is at least one $P$-starting separatrix $\sigma(t)$ (contained in $\minvset{CH}$) such that $\lim_{t\to \infty}\sigma(t)=0$ and it approaches $0$ from the right, i.e.~$\lim_{t\to \infty} \arg(\sigma(t))=0$. Now, since $R(z)$ is continuous and $\arg R(z) \approx \arg z^2$ close to $0$ there exists a curve $\phi(t)$ approaching $0$ such that $\lim_{t\to \infty} \arg \phi(t) =\pi$, for which $\phi(t_1)+s R(\phi(t_1))\in \minvset{CH}$ for some $s \geq 0$ and each $t_1$ for which $\phi(t)$ is defined. \emph{Informally, $\sigma$ lies roughly on the right hand side of $0$, and $\phi$ on the left hand side, and both are in $\minvset{CH}$, see \cref{fig:case-k-1}.} Using \cref{th:charact} at most twice, we may now conclude that $\varphi(t)$ also lies in $\minvset{CH}$. \begin{figure}[!ht] \includegraphics[width=0.5\textwidth]{case-k-1} \caption{ Sketch of the case $k=2$. The black curve is the separatrix $\sigma(t)$, and the blue on the left is $\phi(t)$. For every point $\phi(t_1)$ on $\phi(t)$, the ray $\phi(t_1)+s R(\phi(t_1))$ with $s>0$, intersects $\sigma$. }\label{fig:case-k-1} \end{figure} \end{proof} Since we have already shown that the situation $K=\deg Q - \deg P=1,\Re \lambda=0$ yields a compact minimal invariant set, we have thus proved the following consequence of \cref{th:compcrit}, \cref{prop:K=-1}, \cref{prop:K=0}, and \cref{thm:degQminusdegP}. \begin{corollary} \label{cor:trivialfin} Given $T=Q\frac{d}{dz}+P$ with $\deg Q-\deg P=K$, one has that its $\minvset{CH}$ is trivial if and only if either $|K|\geq 2$ or $K=1$ and in the expansion $\frac{Q(z)}{P(z)}{\partial_z}=\left(\lambda z+ \text{higher order terms in } \frac{1}{z}\right){\partial_z},$ one has $\Re \lambda< 0$. \end{corollary} \subsubsection{Case $\Re\lambda=0$} We start with the following definition and recall a few facts related to \cref{compactre0}. 
\begin{definition} For any connected set $A\subset \mathbb{C}$, we define \defin{$\mathcal{D}(A)$} to be the smallest (under inclusion) simply connected set in $\bC$ containing $A$. \end{definition} Since $\Re \lambda=0$, for $\deg P=0$ the set $\minvset{CH}$ is $\mathcal{Z}(Q)$. Assume now that $\deg P\geq 1$. We have that $\infty$ is a center of $-R(z) {\partial_z}$ and hence, no separatrix $\septrix_i$ goes to infinity. Consider the compact closure of the union $\defin{\Sigma}=\overline{\bigcup_{i=1}^N \septrix_i }$ of separatrices. Since $\setRS\setminus \mathcal{D}(\Sigma)$ is a center zone, the set $\mathcal{D}(\Sigma)$ exists and is regular, and the integral curves of $-R(z){\partial_z}$ in this zone are closed curves with some finite period $\tau$. We use the ordering of these integral curves in $\setRS\setminus \mathcal{D}(\Sigma)$ introduced in \cref{compactre0} and denote by $\Psi$ the smallest integral curve in $\setRS\setminus \mathcal{D}(\Sigma)$ that encloses a convex region. By \cref{lem:blueCurveCompact}, $\Psi$ exists. \begin{lemma}\label{prop:invariantsetforcenter} If $\Re \lambda=0$ and $\minvset{CH}$ contains a point in the exterior of $\mathcal{D}(\Sigma)$, then $\minvset{CH}=\mathcal{D}(\Psi)$. \end{lemma} \begin{proof} Let $x\in \minvset{CH}$ be a point in the exterior of $\mathcal{D}(\Sigma)$. By \cref{prop:invSetClosedUnderForwardIntegralCurves} the forward trajectory of $x$ is contained in $\minvset{CH}$, and it is a closed loop. Suppose that $\minvset{CH}\subsetneq \mathcal{D}(\Psi)$. First, the forward trajectories of all points of $\minvset{CH}$ are in $\minvset{CH}$, so there must be a maximal integral curve $\gamma(t)$ according to the above ordering such that $\partial\minvset{CH}=\gamma([0,s])$ for some $s\in \mathbb{R}$. Since $\gamma([0,s])\subset \minvset{CH}$, so is $\mathcal{D}(\gamma([0,s]))$.
Indeed, $\minvset{CH}$ contains $\mathcal{Z}(PQ)$ and for any point $z_0 \in \mathcal{D}(\gamma([0,s]))\setminus(\mathcal{Z}(PQ)\cup \gamma([0,s])),$ the rational function $R$ has neither a pole nor a zero at $z_0$, so $z_0+t_0R(z_0)\in \gamma([0,s])$ for some $t_0>0$, which implies $z_0\in \minvset{CH}$. By assumption, $\mathcal{D}(\gamma([0,s]))$ is not convex and since its boundary consists of an integral curve containing $\mathcal{Z} (PQ)$ in its interior, the vector field $-R(z)\partial_z$ is always tangent to the boundary. We can thus find a point $z\in \partial \minvset{CH}$ such that the associated ray of $z$ intersects the interior of $\minvset{CH}$. Note now that $\mathcal{D}(\gamma([0,s]))$ is a regular domain and so by \cref{prop:notinv}, $\minvset{CH}$ is not $T_{CH}$-invariant, which is the desired contradiction. Hence, $\mathcal{D}(\Psi)\subset \minvset{CH}$. But since $\mathcal{D}(\Psi)$ is convex and $\partial \mathcal{D}(\Psi)$ is a closed integral curve of $-R(z)\partial_z$, $\mathcal{D}(\Psi)$ is a domain which is $T_{CH}$-invariant by \cref{prop:raysGiveCHinv}. Thus $\minvset{CH}=\mathcal{D}(\Psi)$. \end{proof} By \cite[Lemma 2]{DiasGarijo2020x}, we have that a zero $z^*$ of $Q$ cannot be a boundary point of $\mathcal{D}(\Sigma)$. Further, as $\infty$ is in $\setRS\setminus \mathcal{D}(\Sigma)$ and is a critical point of $-R(z)\partial_z$, all zeros of $Q$ are in the interior of $\mathcal{D}(\Sigma)$. Now, as a subset of the separatrices forms the boundary of $\mathcal{D}(\Sigma)$, we get that this boundary consists of homo/heteroclinic separatrices. In particular, since the $z_p$-starting separatrices for all $z_p\in \mathcal{Z}(P)$ are in $\minvset{CH}$, there is a point $z_*\in \mathcal{Z}(P)$ on the boundary of $\mathcal{D}(\Sigma)$ such that all $z_*$-starting separatrices and at least one $z_*$-ending separatrix $\septrix_0$ are contained in $\minvset{CH}$, where $\septrix_0\subset \partial \mathcal{D}(\Sigma)$.
With the above observation and \cref{prop:invariantsetforcenter}, we are ready to state and prove the following theorem. \begin{theorem}\label{thm:relambda0} If $\Re \lambda=0$, i.e. $\infty$ is a center of the vector field $R(z)\partial_z$, then $\minvset{CH}=\mathcal{D}(\Psi)$. \end{theorem} \begin{proof} We recall that there is a point $z_*\in \mathcal{Z}(P)$ on the boundary of $\mathcal{D}(\Sigma)$ such that all $z_*$-starting separatrices and at least one $z_*$-ending separatrix $\septrix_0$ are contained in $\minvset{CH}$, where $\septrix_0\subset \partial \mathcal{D}(\Sigma)$. As before, let $\Sigma$ be the union of all $z_p$-starting separatrices for $z_p\in\mathcal{Z}(P)$ and observe that $\Sigma\subset \minvset{CH}$. Then the smallest angle between the separatrices that are $z_*$-ending and $z_*$-starting is at most $\frac{\pi}{2}$. In particular, we note that the associated ray of points close to $z_*$ on a $z_*$-starting separatrix intersects $\septrix_0$ transversely. Moreover, both of these separatrices are parts of the boundary of $\mathcal{D}(\Sigma)$, see \cref{fig:ptOutside}. We may therefore find a $t_0>0$ and a point $p\in \partial \mathcal{D}(\Sigma)\setminus \mathcal{Z}(P)$ such that $R'(p)\neq -\frac{1}{t_0}$ and such that $p+t_0R(p)\in \mathcal{D}(\Sigma)^\circ$. Invoking \cref{prop:notinv}, we can find a point in $\mathbb{C}\setminus \mathcal{D}(\Sigma)$ that belongs to $\minvset{CH}$. Thus by \cref{prop:invariantsetforcenter}, $\minvset{CH}=\mathcal{D}(\Psi)$.
\end{proof} \begin{figure}[!ht] \begin{center} \includegraphics[width=0.35\textwidth,page=1]{pointOutsideSeptrix} \end{center} \caption{Schematic picture of $z_*\in \partial \mathcal{D}(\Sigma)$, a zero of $P$, and a nearby point $p\in \partial \mathcal{D}(\Sigma)\setminus \mathcal{Z}(P)$ such that $r_p$ intersects the interior of $\mathcal{D}(\Sigma)$.}\label{fig:ptOutside} \end{figure} \subsubsection{$\Re \lambda > 0$}\label{sec:repost} As we mentioned above, we do not have a general description of $\minvset{CH}$, but we provide a number of interesting observations and examples. (Observe that in this case $\infty$ is a sink of $R(z)\partial_z$.) \begin{proposition} \label{prop:convex} If the rational $1$-form $\frac{dz}{R}=\frac{Pdz}{Q}$ has all positive residues in $\bC$, then the convex hull $Conv_Q$ of all roots of $Q$ is $T_{CH}$-invariant, which implies that $\minvset{CH}\subseteq Conv_Q$. \end{proposition} \begin{proof} The argument is similar to the one used in the proof of the classical Gauss--Lucas theorem. By \cref{th:charact} a closed set $S\subset \bC$ is $T_{CH}$-invariant if it contains all roots of $P$ and $Q$ and the associated rays of all points in $S^c=\bC\setminus S$ are contained in $S^c$. By the generalized Gauss--Lucas theorem, see \cite{MR0225972}, if $P/Q$ has all positive residues, then the roots of $P$ are contained in $Conv_Q$. The proof is based on consideration of the electrostatic force $F$ created by the system of point charges placed at the poles of $P/Q$, where the value of each charge equals the residue at the corresponding pole. This electrostatic force $F$ equals the conjugate of $P/Q$, and one can show that if we take any line $L$ not intersecting $Conv_Q$, then at any point $p\in L$, $F$ points inside the half-plane of $\bC \setminus L$ not containing $Conv_Q$. Now recall that the associated ray at a point $p$ is given by $r_p=p+tR(p),\; t\ge 0$. Notice that $R(z)=\frac{1}{P/Q}$ has the same direction as the conjugate of $P/Q$.
Thus, by the above argument, for any point $p\in \bC\setminus Conv_Q$, the ray $r_p$ does not intersect $Conv_Q$. \end{proof} \begin{remark} In the special case when all roots of $Q$ are real, we get that $\minvset{CH}=Conv_Q$; the latter set is an interval of the real line. However, if $Conv_Q$ is an actual polygon, then $\minvset{CH}$ is typically strictly smaller, see \cref{fig:triFlower} below. \end{remark} \begin{example} Take $\deg Q=\deg P+1$ and assume that the roots of $P$ and $Q$ are real and interlacing. Suppose further that there are no zeros of multiplicity greater than 1, and that for any root $z_*$ of $P$, factorizing $P(z)=(z-z_*)G(z)$ we have \[ -\frac{Q(z_*)}{(z_*-u)G(z_*)}>0. \] Note that this latter condition is the same as the requirement $R(z)=\lambda z+ \dotsb$ at $\infty$ with $\lambda>0$. By \cref{thm:classifying1d} we know that in this situation $\minvset{CH}$ is a bounded interval. Moreover, we can take any convex set $S$ and observe that such $S$ is $T_{CH}$-invariant by invoking \cref{th:charact}. \end{example} \bigskip \noindent \emph{A family of operators with $\minvset{CH}$ being the unit disk:} \smallskip Consider the family $T = z(z^k-1){\frac{d}{dz}} + (z^k+1)$, where $k$ is a positive integer. Our goal is to show that in this case $\minvset{CH}$ is the unit disk. \begin{lemma}\label{lem:unitDiskRays} Set $f(z) = z + t \frac{z(z^k-1)}{z^k+1}$, with $t>0$. Then $|f(z)|>1$ whenever $|z|>1$. \end{lemma} \begin{proof} We substitute $z = r^{\frac{1}{k}} e^{i \theta}$ with $r>1$. With some algebra, we find that \begin{equation} \frac{|f(z)|^2}{|z|^2} = \frac{ |f( r^{\frac{1}{k}} e^{i \theta})|^2 }{ r^{2/k} } = 1 + t\frac{2 r^2-2+r^2 t-2 \cos(\theta k) r t+t}{r^2+ 2\cos(\theta k) r+1}. \end{equation} Setting $c \coloneqq \cos k \theta$ and rewriting further, we get \begin{equation}\label{eq:absValueSimplified} \frac{ |f( r^{\frac{1}{k}} e^{i \theta})|^2 }{ r^{2/k} } = 1 + t\frac{2 (r^2-1)+ t \left( (r-c)^2 + (1-c^2) \right )}{(r+c)^2 + (1-c^2)}.
\end{equation} Since $-1 \leq c \leq 1$, it follows that \[ \frac{ |f( r^{\frac{1}{k}} e^{i \theta})|^2 }{ r^{2/k} } > 1 + t \frac{r^2-1}{(r+1)^2} >1. \] Consequently, $|f(z)| > |z|$ whenever $|z|>1$ and the statement follows. \end{proof} \begin{lemma}\label{lem:unitDiskBdd} The separatrices of the vector field $R(z)\partial_z=\frac{z(z^k-1)}{z^k+1}\partial_z$ are arcs of the unit circle, connecting roots of $P(z)$ with roots of $Q(z)$. \end{lemma} \begin{proof} We shall use \cref{lm:gradient} in \cref{sec:ratfields} in order to prove the statement. Assuming that $z$ is not a root of $Q$, we have that \[ \int \frac{z^k+1}{z(z^k-1)} dz = k^{-1} \log\left( \frac{(1-z^k)^2}{z^k} \right). \] Now for $z = e^{i \theta}$ with $\cos(k\theta)\neq 1$, we find that \[ \Im \log( (1-z^k)^2/z^k ) = \arg( (1-z^k)^2/z^k ) = \arg(-2+ e^{i k \theta}+ e^{-i k \theta})=\pi, \] since $-2+ e^{i k \theta}+ e^{-i k \theta}=2\cos(k\theta)-2<0$. Hence, the unit circle consists of integral trajectories of $R(z) {\partial_z}$. Since the roots of $P$ lie on the unit circle, these integral trajectories must be separatrices. \end{proof} \begin{corollary} For $T = z(z^k-1){\frac{d}{dz}} + (z^k+1)$, $\minvset{CH}$ coincides with the unit disk. \end{corollary} \begin{proof} By \cref{lem:unitDiskRays}, we have that the associated rays of points outside the unit disk do not intersect the unit disk. Hence, by \cref{th:charact}, the unit disk is $T_{CH}$-invariant. By \cref{lem:unitDiskBdd}, we have that the unit circle consists of root-starting separatrices of $-R(z) {\partial_z}$. Since components of invariant sets must be simply connected, the statement follows. \end{proof} \noindent \emph{Case of non-degenerate $T$ with quadratic $Q$:} \label{sec:quadr} \smallskip Next we provide some information about $\minvset{CH}$ in the case when $Q(z)$ is a quadratic polynomial and $P(z)$ has degree at most $1$. Already this case (which is the simplest non-trivial one) reveals quite intricate behavior of $\minvset{CH}$. The next statement is straightforward.
\begin{lemma}\label{lm:red} Under the above assumptions, using an affine change of variables and rescaling, every $T=Q(z)\frac{d}{dz}+P(z)$ can be reduced to the following $3$ families: \begin{enumerate}[label={Case \arabic*:}] \item $T=z^2\frac{d}{dz}+az,$ where $a\in \mathbb{C}$; \item $T=z^2\frac{d}{dz}+az+1,$ where $a\in \mathbb{C}$; \item $T=(z^2-1)\frac {d}{dz} +az+b,$ where $a\in \mathbb{C}$ and $b\in \mathbb{C}$. \end{enumerate} \end{lemma} In these three cases, we have that \begin{equation} \int \frac{P(z)}{Q(z)} dz = \begin{cases} a \log z &\text{-- Case 1} \\ a \log z - \frac{1}{z} &\text{-- Case 2} \\ \frac{a+b}{2} \log(z-1) + \frac{a-b}{2} \log(z+1)&\text{-- Case 3} \end{cases} \end{equation} \begin{remark} It is a trivial observation that in Case 1 the minimal $T_{CH}$-invariant set $\minvset{CH}$ coincides with the origin. \end{remark} \noindent \emph{Case 2.} In the special subcase $a=0$ of Case 2 we get $\deg Q -\deg P=2$ which implies that $\minvset{CH}$ is trivial. Analogously, for $\Re a<0$, $\minvset{CH}$ is trivial by \cref{th:negativeResidueAtInfty}. Finally, for $a\neq 0$ with $\Re a =0$, we get a special case of \cref{thm:relambda0}, but we provide more information below. \begin{proposition} \label{prop:case2} In Case 2 with $\Re a=0$, writing $a=i\beta$ with $\beta\in\mathbb{R}\setminus\{0\}$, we obtain a family of closed trajectories of $R(z)\partial_z$ near $\infty$, and $\minvset{CH}$ is a convex domain whose boundary is given by the closed connected curve satisfying the equation \[ \frac{\beta}{2}\ln(x^2+y^2)+\frac{y}{x^2+y^2}=\frac{\beta}{2}\ln\left(\frac{4}{\beta^2}\right)+\frac{\beta}{2}. \] \end{proposition} \begin{proof} Use \cref{thm:relambda0} together with an explicit analysis of the curve of inflections and the family of closed trajectories of $R(z) {\partial_z}$ near $\infty$. \end{proof} We are not able to settle the case $\Re a>0$ in general, but we have the following guess.
\begin{conjecture}\label{conj:case2} For $\Re a>0$, $\minvset{CH}$ is a domain whose boundary is given by the closed curve satisfying the equation \[ \Psi(x,y)=\Psi\left(-\frac{\alpha}{\alpha^2+\beta^2}, \frac{\beta}{\alpha^2+\beta^2}\right), \] where $a=\alpha+ i\beta$, $z=x+i y$, and \[ \Psi(x,y)=\frac{\beta}{2}\ln (x^2+y^2)+\alpha \arg(z) +\frac{y}{x^2+y^2}. \] \end{conjecture} We have already discussed the instance $a=1$ of Case 2 in \cref{ex:coch}. Now we present illustrations of \cref{conj:case2} focusing on the family $T=z^2\frac{d}{dz}+e^{i\theta} z+1$ with $0 \leq \theta < \frac{\pi}{2}$. In \cref{fig:case2Figs}, we show an approximation of the minimal $T_{CH}$-invariant set for two values of $\theta$. \begin{figure}[!ht] \centering \includegraphics[width=0.45\textwidth]{cardiodid-deformed1} \hfill \includegraphics[width=0.45\textwidth]{cardiodid-deformed2} \caption{Conjectural $\minvset{CH}$ for $T=z^2\frac{d}{dz}+e^{i\theta}z+1$ with $\theta=\frac{\pi}{6}$ and $\theta=\frac{\pi}{3}$, respectively. Note that these figures are drawn in different scales; the sets get larger as $\theta$ increases. Notice also that \cref{ex:coch} corresponds to $\theta=0$. } \label{fig:case2Figs} \end{figure} \noindent \emph{Case 3.} We start by describing non-generic situations. \begin{remark} Special subcases of Case 3 are as follows: \noindent {\rm (i)} $a=0$ and $b\neq 0$. Then $\deg Q -\deg P=2$ implying that $\minvset{CH}=\bC$. \noindent {\rm (ii)} $a> 0$ and $b= 0$. Then it follows from \cref{cor:irregularInvSets} that $\minvset{CH}=[-1,1]$. \noindent {\rm (iii)} $a\neq 0$ and either $az+b=a(z-1)$ or $az+b=a(z+1)$; this case is settled in \cref{sec:1-point} below. \end{remark} \noindent Let us now assume that $T=(z^2-1)\frac{d}{dz} +az+b$ where $a\neq 0$ and the root of $az+b$ is different from $\pm 1$. We already know that if $\Re a<0$, then $\minvset{CH}$ is trivial and if $\Re a=0$, then $\minvset{CH}$ is given in \cref{thm:relambda0}.
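As a brief numerical sanity check (ours, purely illustrative; the sample points are arbitrary), the primitives listed after \cref{lm:red} can be tested by comparing a central difference of each primitive with $P/Q$; note that in Case 3 the coefficients $\frac{a+b}{2}$ and $\frac{a-b}{2}$ are the residues of $\frac{az+b}{z^2-1}$ at $z=1$ and $z=-1$, respectively.

```python
import cmath

def dnum(F, z, h=1e-6):
    # central difference; valid for holomorphic F away from branch cuts
    return (F(z + h) - F(z - h)) / (2 * h)

def err_case2(a, z):
    # primitive of P/Q = (a z + 1)/z^2 in Case 2
    F = lambda w: a * cmath.log(w) - 1 / w
    return abs(dnum(F, z) - (a * z + 1) / z ** 2)

def err_case3(a, b, z):
    # primitive of P/Q = (a z + b)/(z^2 - 1) in Case 3;
    # residues (a+b)/2 at z = 1 and (a-b)/2 at z = -1
    F = lambda w: (a + b) / 2 * cmath.log(w - 1) + (a - b) / 2 * cmath.log(w + 1)
    return abs(dnum(F, z) - (a * z + b) / (z * z - 1))
```

The returned errors are small, limited only by the finite-difference step.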
Let us elaborate on the remaining case $a\neq 0,\Re a>0$. Even here we do not have a complete answer, despite restricting ourselves to the case $a\in \mathbb{R}$, $b\in \bC$. The problem gets quite difficult and we can only provide an illustrating example. In this example, $\minvset{CH}$ has boundary points of local, global, and mixed types. Furthermore, the points of global type are such that nearby points correspond to different $u$, making the boundary particularly difficult to describe explicitly. \begin{example}\label{ex:mustach} Take $Q=(z-1)(z+1)$, $P=(z+i)$. Begin by noting that using $z=x+iy$, the curve of inflections of $R(z){\partial_z}$ is given by $x(y+1)=0$. Next, consider the $P$-starting separatrices. They are the non-imaginary solutions of the equation \[ \Im \left(\left(\frac{1}{2}+\frac{i}{2}\right) (\log (1-z)-i \log (z+1))\right) = 0. \] Let us denote the solution with negative real part by $\sigma_-(s)$ and the one with positive real part by $\sigma_+(s)$, both parameterized by $s>0$. Importantly, we may note that there exists an $s_0$ for which $\Im \sigma_\pm(s_0)$ is maximal and then an $s_1>s_0$ for which $\Re \sigma_\pm(s_1)$ is maximal for $\sigma_-(s)$ and minimal for $\sigma_+(s)$ in the interval $s>s_0$. For each $u$ on $\sigma_-((s_0,s_1))$, consider the union of the solutions to \[ tQ(z)+(z-u)P(z)=0 \] for $t\geq 0$. Since $\arg R(iy)=\frac{\pi}{2}$ if $y>-1$ and $\arg R(iy)=\frac{3\pi}{2}$ for $y<-1$, from \cref{prop:rootTrajectoryDirections} and the fact that these solutions tend to the roots of $Q$ as $t\to \infty$, it follows that none of these solutions crosses the imaginary axis and that the $t$-traces that start in $-i$ tend to 1. Similarly we consider the $t$-traces corresponding to $u$ on $\sigma_+((s_0,s_1))$.
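The behavior of these $t$-traces is easy to explore numerically. The following sketch (our illustration, not part of the original argument; the value of $u$ is an arbitrary sample point) solves the quadratic $tQ(z)+(z-u)P(z)=0$ for $Q=(z-1)(z+1)$ and $P=z+i$, confirming that at $t=0$ the roots are $u$ and $-i$, while for large $t$ they approach the roots $\pm 1$ of $Q$.

```python
import numpy as np

def t_trace_points(u, ts):
    """Roots of t*Q(z) + (z - u)*P(z) = 0 with Q = z^2 - 1, P = z + i.

    Expanding gives the quadratic (t + 1) z^2 + (i - u) z - (t + i u).
    """
    return [np.roots([t + 1.0, 1j - u, -(t + 1j * u)]) for t in ts]

u = -0.5 - 0.5j                      # sample point (hypothetical choice)
start = t_trace_points(u, [0.0])[0]  # at t = 0: the roots are u and -i
far = t_trace_points(u, [1e6])[0]    # for large t: close to the roots +-1 of Q
print(np.round(start, 3), np.round(sorted(far.real), 3))
```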
By analyzing $R(z)$ on the boundary of the regions $-1<x<0$, $-1<y<0$ and $0<x<1$, $-1<y<0$ and using the fact that $\arg(\sigma_+(s_0,s_1))$ and $\arg(\sigma_-(s_0,s_1))$ are quite close to $0$ and $\pi$ respectively, one can deduce that all our $t$-traces remain in either the region $-1<x<0$, $-1<y<0$ or in $0<x<1$, $-1<y<0$, except for the case $x=0, y=-1$. In particular, it follows that the $t$-traces intersect $\sigma_-(s)$ in $-1<x<0$, $-1<y<0$ or intersect $\sigma_+(s)$ in $0<x<1$, $-1<y<0$. Clearly, for each $t$-trace $\gamma(t)$ there is a minimal $t$ at which this occurs. This intersection is such that if $\gamma(t)=\sigma(s_2)$, then $\gamma(t)\neq \sigma(s)$ for any $ s<s_2$ and $ t\geq 0$. We may thereby consider the smallest simply connected set $S\subset \bC$ containing the aforementioned separatrices and the corresponding $t$-traces and deduce that it must be regular. We shall prove that $S=\minvset{CH}$. Furthermore, we shall deduce that its boundary consists of points of both local as well as global types. Now, the curve of inflections does not intersect the set $\{x\neq 0,\ y>-1\}$. Moreover, the integral trajectories of $-R(z) {\partial_z}$ with the limit point either $1$ or $-1$ circulate around $1$ counter-clockwise and around $-1$ clockwise. Hence, considering the $t$-traces corresponding to $u=\sigma_\pm(s)$ with $s\notin [s_0,s_1]$ does not yield a larger set. For invariance, take a point $z$ on the boundary of $S$. Suppose that it corresponds to a $t$-trace with $u$ lying on a separatrix. Then its associated ray intersects $S$ only in one of the separatrices $\sigma_\pm$. This follows by \cref{prop:rootTrajectoryDirections} and the observation that ${\frac{d}{dz}} (\arg R(z))$ is of constant sign in the regions $-1<x<0$, $-1<y<0$ and $0<x<1$, $-1<y<0$, which in turn follows from the fact that the curve of inflections does not intersect the concerned region.
Moreover, the associated ray cannot cut through the separatrix into the interior, because then each $z_0$ in an open neighborhood of $z$ would have the same property, contradicting the assumption $z_0\in \partial S$. This also uses the fact that the associated rays only intersect in $\sigma_\pm(s_0,s_1)$. Next, suppose that $z\in \partial S$ does not correspond to a $u\in \sigma_\pm(s_0,s_1)$. Since the curve of inflections does not intersect the $P$-starting separatrices, the associated ray at $z$ does not intersect $S$ except at $z$. By regularity and following the proof of \cref{prop:raysGiveCHinv}, we can deduce that a $t$-trace cannot pass to the exterior of $S$ through a point that is not $-i$. We end the proof by excluding the latter possibility. After the shift $w = z+i$, we get \[ \arg R(w)=\arg\left(-2i-\frac{2}{w}+w\right). \] Now, we have that $\arg \frac{1}{w}=-\arg w$. In particular, for small $w$, we find that if $\arg w \notin\{\frac{\pi}{2},\frac{3\pi}{2}\}$, then $|\arg R(w)-\frac{3\pi}{2}|<|\arg w-\frac{3\pi}{2}|$, which implies that the associated ray at $z=w-i$ does not intersect $-i$. Furthermore, if $\arg w=\frac{3\pi}{2}$, then $\arg R(w)=\frac{3\pi}{2}$. Using associated rays we can deduce that there cannot be a $t$-trace corresponding to $u=-i$ that goes to the exterior of $S$. Furthermore, by \cref{prop:rootTrajectoryDirections} any other $t$-trace starting in $-i$ remains in $S$. All in all, we deduce that $S$ is indeed $T_{CH}$-invariant. Note now that the points $\sigma_\pm(s_0)$ are of local type. Furthermore, merely the union of the separatrices is not $T_{CH}$-invariant and hence there exist points of global type. Note that we also obtain two points on the boundary that are of mixed type, but we cannot explicitly find their exact location. A sketch of $\minvset{CH}$ is presented in \cref{fig:mustasch}.
The black part of the boundary consists of separatrices, and the blue part of the boundary is such that for each blue point $z$, there is a unique point $z+t_0R(z)$ on the boundary, and the line $z+tR(z)$ is tangent to the separatrix at $z+t_0R(z)$. In \cref{fig:mustasch}, we illustrate this with the brown dotted line from one of the points of mixed type. The two points of mixed type are indicated by orange markers. As already mentioned, in this case we obtain boundary points of local, global, as well as mixed types. \begin{figure}[!ht] \includegraphics[width=0.6\textwidth]{mustasch} \caption{For $T=(z^2-1){\frac{d}{dz}}+(z+i)$, its $\minvset{CH}$ is the region bounded by the separatrices (solid black) and the blue curve. The points in orange are of mixed type. (This figure is exaggerated in order to emphasize its important features; the blue curve is an approximation.) }\label{fig:mustasch} \end{figure} \end{example} \section{Examples of $1$-point generated $T_{CH}$-invariant sets} \label{sec:1-point} \subsection{Case $Q=(z-z_0)(z-z_1)$, $P=\beta(z-z_0)$} Here we discuss a very special situation where $Q$ and $P$ have a common root and are of degree $2$ and $1$, respectively. This is equivalent to the case when $Q=(z-z_1)$, $P=\beta\in\mathbb{C}$, but we are now looking for the $T_{CH}$-invariant set generated by the point $z_0$. That is, we look for the $T_{CH}$-extension of $z_0$ where $T=(z-z_1){\frac{d}{dz}}+\beta$, see \cref{sub:exist}. Without loss of generality, we may assume that $Q = z(z-1)$ and $P=(a+bi)(z-1)$. \begin{itemize} \item If $a=b=0$, then $\minvset{CH}=\{z_1\}$. \item If $a>0$, $b=0$, then $\minvset{CH}$ is equal to the line segment joining $z_0$ and $z_1$. \item If $a<0$, $\minvset{CH}$ is equal to $\bC$, i.e. is trivial. \end{itemize} The first two cases are easy to verify. In the last case, if $a<0$, $b=0$, then this coincides with the example in \cref{ssec:special}.
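For the reader's convenience, here is the short computation behind the bullet points above (our sketch; it is not spelled out in the text). With $Q=z(z-1)$ and $P=(a+bi)(z-1)$ the common factor $(z-1)$ cancels:

```latex
\[
  R(z)=\frac{Q(z)}{P(z)}=\frac{z}{a+bi},
\]
% so the trajectories of $-R(z)\partial_z$ solve $\dot z=-\frac{z}{a+bi}$, giving
\[
  z(t)=z(0)\exp\Bigl(-\frac{(a-bi)\,t}{a^{2}+b^{2}}\Bigr).
\]
% Hence $|z(t)|=|z(0)|\,e^{-at/(a^{2}+b^{2})}$ and
% $\arg z(t)=\arg z(0)+\frac{bt}{a^{2}+b^{2}}$.
```

Thus for $a>0$ the trajectories spiral in towards $z_1=0$, for $a<0$ they escape to $\infty$, and for $b\neq 0$ the argument grows linearly in $t$, producing the circulation used in the discussion that follows.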
Further, if $b\neq 0$, then the forward trajectory of $-R(z)\partial_z$ starting in $a+b i$ tends to $\infty$, circulating around it infinitely many times. In particular, the associated ray of any point $z\neq 0$ intersects this forward trajectory, so $z\in \minvset{CH}$. Consider now $b\neq 0$ and $a>0$. Then $\minvset{CH}$ clearly contains the $t$-trace $\gamma(t)$ starting in $z_0$ as well as the forward trajectory $\phi$ of $-R(z) {\partial_z}$ starting in $z_0$. We claim that $\partial \minvset{CH} \subset \gamma([0,\infty))\cup \phi$, see \cref{fig:snail}. To accomplish this, define $S$ to be the minimal simply connected set containing $\gamma([0,\infty))\cup \phi$. \begin{figure}[!ht] \includegraphics[width=0.7\textwidth]{snail} \caption{For $T = (z-1)z {\frac{d}{dz}} + (z-1)(1+2i)$, its $\minvset{CH}$ is bounded by the solid black and the blue lines.}\label{fig:snail} \end{figure} It is straightforward to show that in polar coordinates the forward trajectory $\phi$ is given by $r(\theta) = \exp(-\frac{a}{b}\cdot \theta)$, where $\theta \geq 0$. Moreover, $\gamma(t)$ is a curve satisfying \[ b (x^2 - x + y^2 ) + a y=0, \qquad y \geq 0. \] Note that this is an arc of a circle. We parameterize $\phi(t)$ so that $\phi(0)=z_0$, and let $t_0$ be the smallest $t$ such that $\phi(t_0)\in \gamma([0,\infty))$, which clearly exists because $\phi(t)$ rotates about $z_1$ infinitely many times and $\gamma(t)$ does not, since $\lim_{t\to \infty}\gamma(t)=z_1$. Also let $t_1$ be such that $\gamma(t_1)=\phi(t_0)$. Then the boundary of $S$ is equal to $\phi([0,t_0])\cup \gamma([0,t_1])$. Now, the associated ray $r_p$ for $p\in \phi([0,t_0])$ intersects $S$ only in $p$, except for the point $p=\phi(t_0)$ in which case it also intersects $S$ in $z_0$. The associated rays at all points $p$ in $\gamma([0,t_1])$ intersect $S$ in precisely $p$ and $z_0$.
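The arc equation can be checked directly. Cancelling the common factor $(z-1)$ in $tQ(z)+(z-z_0)P(z)=0$ gives the closed form $\gamma(t)=\frac{a+bi}{t+a+bi}$ (our computation; this closed form is not displayed in the text), and the following numerical sketch, with the parameters of the figure, verifies that it stays on the circle arc:

```python
# Check that gamma(t) = (a+ib)/(t + a+ib) lies on b(x^2 - x + y^2) + a*y = 0
# with y >= 0, and tends to z_1 = 0 as t -> infinity.  Sample a, b as in
# the figure caption.
a, b = 1.0, 2.0
for t in [0.0, 0.3, 1.0, 5.0, 50.0, 1e6]:
    z = complex(a, b) / (t + complex(a, b))
    x, y = z.real, z.imag
    assert abs(b * (x * x - x + y * y) + a * y) < 1e-9
    assert y >= 0.0
print("gamma(t) stays on the circle arc")
```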
In particular, $S$ is $T_{CH}$-invariant since otherwise we could find a $t$-trace $\gamma_1(t)$ starting in some $z_2\in S$ such that there exists $t_2>0$ with $\gamma_1(t_2)\in S^c$. Then, taking a small perturbation $z_3$ of $z_2$ in the interior of $S$ (recall that $S$ is regular), we can assume that there exists a $t$-trace $\gamma_2$ starting in $z_3$ with $\gamma_2(t_3)\in S^c$ for some $t_3>0$. But this renders a point on the boundary of $S$ that has an associated ray that intersects the interior of $S$, a contradiction. Hence, we conclude that $S$ is $T_{CH}$-invariant. Moreover, $\minvset{CH}$ must contain the forward trajectory and the $t$-trace associated to $z_0$. Hence, $S=\minvset{CH}$. Note here that $\phi(t_0)=\gamma(t_1)$ is a point of mixed type in which the boundary changes character. All points in $\gamma((0,t_1))$ are of global type and all points in $\phi([0,t_0))$ are of local type, see \cref{def:types}. \subsection{Case $Q=\alpha,\ P=\beta(z-z_0)$}\label{sec:constlinear} After an affine change of variables we can assume that $T={\frac{d}{dz}} -z$, implying that $Q=1,P=-z$. We then have that $\minvset{CH}=\mathbb{R}.$ Suppose now that we consider the case $Q=(z-b i)$, $P=-(z-b i)z$ with $b\in \mathbb{R}$. By the above discussion, we want to find the $T_{CH}$-extension of the point $b i$. Consider the $t$-traces of $b i$. They are given by \[t-z(z-b i)=0 \iff z=\frac{b i}{2}\pm\sqrt{-\frac{b^2}{4}+t}\] with $t\geq 0$. This set is the union of the line segment between $0$ and $b i$ and the straight line $\frac{b i}{2}+\mathbb{R}.$ It is clear that all points $z$ between the real axis $\mathbb{R}$ and the horizontal line $\frac{b i}{2}+\mathbb{R}$ satisfy the condition that there exists $t$ so that $z+tR(z)\in \frac{b i}{2}+\mathbb{R}$.
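This description of the $t$-trace set is easy to verify numerically (an illustrative sketch with the sample value $b=2$; the threshold between the two regimes is $t=b^2/4$):

```python
import cmath

b = 2.0
def t_trace(t):
    # roots of t - z*(z - b*i) = 0, i.e. of z^2 - i*b*z - t = 0
    d = cmath.sqrt(-b ** 2 / 4 + t)
    return (1j * b / 2 + d, 1j * b / 2 - d)

# For 0 <= t <= b^2/4 both roots lie on the segment between 0 and b*i ...
for t in [0.0, 0.5, 1.0]:
    assert all(abs(z.real) < 1e-12 and -1e-12 <= z.imag <= b + 1e-12
               for z in t_trace(t))
# ... while for t > b^2/4 they lie on the horizontal line Im z = b/2.
for t in [1.5, 10.0, 100.0]:
    assert all(abs(z.imag - b / 2) < 1e-12 for z in t_trace(t))
print("segment plus horizontal line, as claimed")
```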
Furthermore, if $b >0$ then, since all points $z$ on $\frac{b i}{2}+\mathbb{R}$ satisfy the condition that there exists $t$ for which $z+tR(z)=b i$, we obtain that all points satisfying either $\Im z>b /2, \Re z\neq 0$ or $\Im z>b $ are such that $z+tR(z)$ does not intersect the set \[S=\{ z: \Im z\in [0,b /2]\}\cup \{z: \Im z\in [0,b ],\Re z=0\}.\] Thus $\minvset{CH}=S$ by \cref{th:charact}. In exactly the same way, for $b <0$, \[\minvset{CH}=\{ z: \Im z\in [b /2,0]\}\cup \{z: \Im z\in [b ,0],\Re z=0\}.\] Suppose now that $Q=\prod_{k=1}^N (z-b_ki)$, $P=-z\prod_{k=1}^N (z-b_ki)$ with $b_k\in \mathbb{R}$. It then readily follows from \cref{lm:simple} that \[\minvset{CH}=\{ z: \Im z\in [\min(b_k)/2,0]\}\cup \{z: \Im z\in [\min(b_k),0],\Re z=0\}\] \[\cup\{ z: \Im z\in [0,\max(b_k)/2]\}\cup \{z: \Im z\in [0,\max(b_k)],\Re z=0\}.\] (Above we used the convention that if $b_j<0$ then $[0,b_j]=[-b_j,0]=\emptyset$.) \smallskip Suppose next that we want to consider the general case $Q=(z-(a+b i))$, $P=-(z-(a+b i))z$ with $a,b\neq 0$. Using symmetry, we can assume that $a,b>0$. Set $z_0=a+b i$ and consider the $t$-traces of $z_0$. One of these, say $\gamma_0(t)$, starts in $0$ and tends to $-\infty+b i/2$. The other, $\gamma_1(t)$, starts in $z_0$ and tends to $\infty+b i/2$. Next, consider the forward trajectory $\phi(t)$ of the vector field $-R(z)\partial_z$ starting in $z_0$, i.e. such that $\phi(0)=z_0$. This curve begins in $z_0$, tends to $\infty$, and never crosses the real axis. (Note here that $\phi(t)$ does not intersect the curve of inflections, which in this case is the union of the real and imaginary axes.) Lastly, for every point $u\in \gamma_0([0,\infty))$, consider the $t$-trace of $u$ that starts in $0$. Take the union of all such $t$-traces. This union forms a set whose boundary consists of $[0,\infty)$ and a curve $\psi(t)$ tending to $\infty+b i/4$.
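The stated asymptotes of $\gamma_0$ and $\gamma_1$ can also be checked numerically: for $u=z_0$ the equation $tQ(z)+(z-z_0)P(z)=0$ reduces (our computation) to $t=z(z-z_0)$, i.e. $z=\frac{z_0}{2}\pm\sqrt{\frac{z_0^2}{4}+t}$. A sketch with arbitrary sample parameters:

```python
import cmath

a, b = 1.0, 2.0                 # sample parameters with a, b > 0
z0 = complex(a, b)

def t_trace_branches(t):
    # branches of z = z0/2 +- sqrt(z0^2/4 + t), solving z*(z - z0) = t
    d = cmath.sqrt(z0 * z0 / 4 + t)
    return z0 / 2 + d, z0 / 2 - d

p, m = t_trace_branches(0.0)    # gamma_1 starts in z0, gamma_0 starts in 0
assert abs(p - z0) < 1e-12 and abs(m) < 1e-12
p, m = t_trace_branches(1e8)    # both branches approach Im z = b/2
assert p.real > 0 > m.real
assert abs(p.imag - b / 2) < 1e-3 and abs(m.imag - b / 2) < 1e-3
print("gamma_0 and gamma_1 tend to -oo + bi/2 and +oo + bi/2")
```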
Note here that there is a unique point $z_1=a_1+b_1i$ where $\phi([0,\infty))$ and $\psi([0,\infty))$ intersect. Also, note that each point $w_0\in \psi([0,\infty))$ is such that there is a unique point $w_1\in \gamma_0([0,\infty))$ for which the associated ray at $w_0$ is tangent to $\gamma_0([0,\infty))$ at $w_1$. Consider now the regular, closed and connected set $S$ whose boundary consists of segments of $\gamma_0([0,\infty)), \gamma_1([0,\infty)), \phi([0,\infty)), \psi([0,\infty))$ and the real axis. We claim that this $S$ is $\minvset{CH}$. Indeed, for any point in its interior, it is straightforward to see that its associated ray intersects at least one segment of the boundary, and hence $S\subseteq \minvset{CH}$. To show that $S$ is indeed $T_{CH}$-invariant, take a point $z=x+iy$ in its complement. If $y<0$ then $\Im R(z)<0$, implying that the associated ray at $z$ does not intersect $S$. If $x<0$, we note that $\arg R(z) >\arg R(x+b_0i)$, where we choose $b_0$ such that $x+b_0i \in \gamma_0([0,\infty))$. Hence, the associated ray at $z$ does not intersect $S$. If $x>0$, we have to subdivide the situation into subcases. Suppose first that $x<a$ and let $x+b_0i\in \psi([0,\infty))$. Then $\arg R(z)<\arg R(x+b_0i)$ and $\Re R(z)<0$, implying that the associated ray at $z$ does not intersect $S$. Next, suppose that $a\leq x \leq a_1$, $y<b_2$, where $x+b_2i\in \phi([0,\infty))$ and $x+b_0i\in \psi([0,\infty))$. Then $\arg R(x+b_2i)<\arg R(z)<\arg R(x+b_0i)$, implying that the associated ray at $z$ does not intersect $S$. Lastly, suppose that $x \geq a$, $y>b_2$, where $x+b_2i\in \gamma_1([0,\infty))$. Then $\arg R(z)<\arg R(x+b_2i)$ and since the associated ray at $x+b_2i$ intersects $z_0$, we get that the associated ray at $z$ does not intersect $S$. Summarizing, we conclude that $S$ is indeed $T_{CH}$-invariant and hence equal to $\minvset{CH}$.
\qed \section{Outlook} \label{sec:outlook} In this short section we formulate some of the many open questions related to the above topic. \begin{enumerate}[label={\bf \arabic*.}] \item Is it possible to extend the main results of this paper to linear differential operators $T$ of order exceeding $1$? \item Is it possible to give a description of the boundary of a nontrivial $\minvset{CH}$ starting with the roots of $Q$ and $P$ and making finitely many steps of taking $t$-trajectories and trajectories of the vector field $R(z) {\partial_z}$? Our best guess is that, in general, this is impossible, i.e. one needs countably many such steps. \item Extend the supply of cases with ``explicit'' description of $1$-point generated $T_{CH}$-invariant sets. Two special situations which seem fundamental, but for which we do not have an answer, are Case 3 for non-degenerate $T$ with quadratic $Q$ (see \S~\ref{sec:repost}) and $T = (z^3-1){\frac{d}{dz}} + 3z^2$, which illustrates \cref{prop:convex}, see \cref{fig:triFlower}. This figure, however, is not accurate; it only shows an approximation of the actual $\minvset{CH}$. \item Show that the boundary of $\minvset{CH}$ is always piecewise analytic. \item In this paper we mainly studied minimal continuously Hutchinson invariant sets $\minvset{CH}$ for linear differential operators of order $1$, while our original interest was to study minimal Hutchinson invariant sets $\minvset{H}$, see \S~\ref{sec:intro}. Numerical experiments show that their boundaries typically have a fractal structure, see e.g. \cref{fig:cochleoid}, but we do not have conclusive results yet. \end{enumerate} \begin{figure}[!ht] \centering \includegraphics[width=0.35\textwidth]{blomman} \caption{Monte Carlo approximation of $\minvset{CH}$ for $T = (z^3-1){\frac{d}{dz}} + 3z^2$. At first glance it can appear that $\minvset{CH}$ has boundary given by circle arcs.
This is not the case: one can use associated rays to prove that the actual $\minvset{CH}$ contains these circle arcs but is somewhat bigger. However, we have not been able to find an explicit description of $\minvset{CH}$ in this situation.} \label{fig:triFlower} \end{figure} \begin{appendices} \section{Appendix. Rational vector fields in $\setRS$ and their curves of inflections}\label{sec:ratfields} This section contains some basic information about rational vector fields in $\setRS$ which is frequently used in the paper. Our exposition mainly follows the recent preprint~\cite{DiasGarijo2020x}. \subsection{Separatrices and separatrix graphs of rational vector fields}\label{sec:septrix} Consider the first order differential equation \begin{equation}\label{eq:rationaldiff} \dot{z}(t)=\frac{dz}{dt}=-R(z), \end{equation} where $-R(z)=-\frac{Q(z)}{P(z)}: \setRS \to \setRS$ is a rational map such that $\deg Q = \deg P +1$. With this differential equation we associate the rational vector field $v_R=-R(z) {\partial_z}$. An important notion in this context is that of a \emph{separatrix} of $v_R$. Namely, separatrices are solutions of \eqref{eq:rationaldiff} whose maximal domain of definition is strictly smaller than $\mathbb{R}$. To be precise, let $\phi(t,\eta)$ be a solution of \eqref{eq:rationaldiff} with initial condition $\phi(0,\eta)=\eta$. We assume that $\phi(\cdot,\eta): (t_{min},t_{max})\to \setRS$ is defined on its maximal interval of definition. Then $\phi(t,\eta)$ is a \defin{separatrix} (of $v_R$) if at least one of $t_{min},t_{max}$ is finite. We will often use the notation $\phi(t)$ instead of $\phi(t,\eta)$ when it is unnecessary to specify $\eta$. If $\lim_{t\to t_{min}}\phi(t)=z_*$, we say that $\phi(t)$ is \defin{$z_*$-starting} and if $\lim_{t\to t_{max}}\phi(t)=z_*$ then $\phi(t)$ is \defin{$z_*$-ending}. In general, if $z_*$ is not specified, we say that $\phi(t)$ is \defin{$P$-starting} resp.~\defin{$P$-ending}.
If $t_{min}$ is finite then $\lim_{t\to t_{min}}\phi(t,\eta)$ is a pole of $R(z)$ (equivalently, a saddle point of $v_R$). Similarly, if $t_{max}$ is finite then $\lim_{t\to t_{max}}\phi(t,\eta)$ is a pole of $R(z)$. Therefore there exist four distinct types of separatrices: \begin{enumerate} \item $t_{min}$ is finite and $t_{max}$ is infinite $\rightarrow$ $\phi(t,\eta)$ is called an \defin{outgoing} separatrix. \item $t_{min}$ is infinite and $t_{max}$ is finite $\rightarrow$ $\phi(t,\eta)$ is called an \defin{ingoing} separatrix. \item $t_{min}$ and $t_{max}$ are both finite, but \[ \lim_{t\to t_{min}}\phi(t,\eta)\neq \lim_{t\to t_{max}}\phi(t,\eta); \] then $\phi(t,\eta)$ is called a \defin{heteroclinic} separatrix. \item $t_{min}$ and $t_{max}$ are both finite and \[ \lim_{t\to t_{min}}\phi(t,\eta)= \lim_{t\to t_{max}}\phi(t,\eta); \] then $\phi(t,\eta)$ is called a \defin{homoclinic} separatrix. \end{enumerate} The collection of separatrices of $v_R$ subdivides $\setRS$ into disjoint open domains in each of which integral trajectories of $v_R$ have similar properties. These domains can be of the following four different types: center, annular, parallel, and elliptic zones, defined as follows. A \emph{center zone} is a simply connected region that contains a zero of $-R(z)$ which is a center. The integral trajectories of $v_R$ in center zones are periodic orbits with the same period. An \emph{annular zone} is a doubly connected region in which integral trajectories are periodic orbits with the same period and there exists a positive number $L$ such that all these integral trajectories have length greater than $L$. Both \emph {elliptic and parallel zones} are simply connected regions. The integral trajectories in elliptic zones have the property that all $\omega$- and $\alpha$-limits coincide with one and the same critical point $z_0$ of the solutions $\phi(t,\eta)$ of \eqref{eq:rationaldiff}.
For parallel zones, all $\omega$-limits are equal to one critical point $z_0$ while all $\alpha$-limits are equal to another critical point $z_1$, see illustrations in \cref{fig:vectorFieldTypes}. \begin{figure}[!ht] \begin{center} \includegraphics[width=0.18\textwidth,page=1]{separatrix-regions}\hskip1.5cm \includegraphics[width=0.18\textwidth,page=2]{separatrix-regions} \includegraphics[width=0.18\textwidth,page=3]{separatrix-regions}\hskip1.5cm \includegraphics[width=0.18\textwidth,page=4]{separatrix-regions} \end{center} \caption{Illustrations of different types of zones: center (top left), annular (top right), parallel (bottom left), and elliptic (bottom right).% }\label{fig:vectorFieldTypes} \end{figure} \smallskip As mentioned above, the poles of $-R$ are saddle points of $v_R$. More exactly, if $z_*$ is a pole of order $k$ of $-R$, then it is a saddle point of $v_R$ with $k+1$ $z_*$-starting and $k+1$ $z_*$-ending separatrices. Furthermore, if one goes around $z_*$ once along the boundary of a sufficiently small, convex neighborhood of $z_*$, one traverses segments of the boundary where a trajectory starts and ends in $z_*$ consecutively. Additionally, the angles between two adjacent separatrices are equal to $\frac{2\pi}{2k+2}$, see~\cref{fig:locPort}. More specifically, we have the following statement. \begin{proposition}[See~\cite{needham}]\label{prop:dirSeptrix} Suppose that $R(z)$ is a rational function having a pole of order $k$ at $z_0$ and \[ R(z)=\frac{c}{(z-z_0)^k}+f(z) \] where $\lim_{z\to z_0}f(z)(z-z_0)^k=0$. Then the directions of the $z_0$-starting separatrices of $R(z) {\partial_z}$ at $z_0$ are given by the solutions to $z^{k+1}= c$ and the directions of the $z_0$-ending separatrices are given by the solutions to $z^{k+1}= -c$.
\end{proposition} \begin{figure}[!ht] \begin{center} \includegraphics[width=0.25\textwidth,page=1]{localPortrait} \end{center} \caption{Schematic description of separatrices near a pole $z_*$ of order $k=2$.}\label{fig:locPort} \end{figure} Moreover, if $z_0$ is a zero of $-R(z)$ of order $1$, then the phase portrait near $z_0$ is either a sink (if $-\Re R'(z_0) <0$), a source (if $-\Re R'(z_0)>0$), or a center (if $-\Re R'(z_0)=0$). If $z_0$ is a zero of $-R(z)$ of order $k\geq 2$, then the phase portrait near $z_0$ is the union of $2(k-1)$ elliptic sectors. Examples of phase portraits near singularities of analytic vector fields are shown in \cref{fig:vectorFieldSing}. \begin{figure}[!ht] \begin{center} \includegraphics[width=0.2\textwidth,page=1]{center} \includegraphics[width=0.2\textwidth,page=1]{sink} \includegraphics[width=0.2\textwidth,page=1]{source} \includegraphics[width=0.2 \textwidth,page=1]{saddle} \includegraphics[width=0.2 \textwidth,page=1]{saddle2} \includegraphics[width=0.2 \textwidth,page=1]{zero2} \end{center} \caption{Singular points of analytic vector fields (left-to-right; top-to-bottom): center, sink, source, saddle, saddle of order $2$, and zero of order $2$.}\label{fig:vectorFieldSing} \end{figure} For a given rational vector field, the union of the closures of its separatrices is called the \emph{separatrix graph}. Given a rational function $R$ as above, let us denote by $\Gamma_R$ the separatrix graph of $v_R$. As mentioned above, there are four different types of zones for a rational vector field $v_R$ and its separatrix graph $\Gamma_R$ constitutes the boundary of these zones. Different topological possibilities for separatrix graphs of rational vector fields in $\setRS$ are completely characterized in \cite{DiasGarijo2020x}.
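As a small numerical illustration of \cref{prop:dirSeptrix} (our sketch, with the arbitrary sample values $c=1$ and $k=2$): the starting directions are the $(k+1)$-th roots of $c$, the ending directions are the $(k+1)$-th roots of $-c$, and the two families interlace with the angle $\frac{2\pi}{2k+2}$ between adjacent separatrices.

```python
import cmath

def separatrix_directions(c, k, ending=False):
    """(k+1)-th roots of c (starting directions) or of -c (ending ones)."""
    w = -c if ending else c
    r = abs(w) ** (1.0 / (k + 1))
    phi = cmath.phase(w) / (k + 1)
    return [r * cmath.exp(1j * (phi + 2 * cmath.pi * m / (k + 1)))
            for m in range(k + 1)]

c, k = 1.0, 2
start = sorted(cmath.phase(d) % (2 * cmath.pi)
               for d in separatrix_directions(c, k))
end = sorted(cmath.phase(d) % (2 * cmath.pi)
             for d in separatrix_directions(c, k, ending=True))
# The k+1 = 3 starting angles are 0, 2pi/3, 4pi/3; the ending angles
# pi/3, pi, 5pi/3 interlace them with gap 2pi/(2k+2) = pi/3.
print([round(t, 3) for t in start], [round(t, 3) for t in end])
```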
In this paper, we will, in particular, use the property that the complement $\setRS\setminus \Gamma_R$ is the union of simply and doubly connected domains and that the boundary of an annular or a center zone consists of homo/heteroclinic separatrices. \subsection{Flat model of a rational vector field} We start with the following claim. \begin{lemma}\label{lm:gradient} Let $h(z)$ be a complex-analytic function in some domain $\Omega\subset \bC$. Then the integral trajectories of the vector field $h(z) {\partial_z}$ are given by \begin{equation} \Im \int \frac {dz}{h(z)} = \mathrm{const}. \end{equation} \end{lemma} \begin{proof} Observe that $h(z)$ and $\frac{1}{h(z)}$ have complex-conjugate directions at each point where $h(z)\neq 0$. Now notice that the Cauchy--Riemann equations imply that, for any complex-analytic function $f(z)$, one has \[ f^\prime(z)=u^\prime_x(x,y)-i u^\prime_y (x,y)=\overline{\grad u(x,y)} \] where $u(x,y)$ is the real part of $f(z)$ and $z=x+iy$. Thus the gradient of $\Re \int\frac{dz}{h(z)}$ is real-proportional to $h(z)$. Since the level curves of the real and imaginary parts of a given analytic function are orthogonal at each non-singular point, the tangents to the level curves of $\Im \int\frac{dz}{h(z)}$ will be real-proportional to $h(z)$, which implies that the latter curves will be trajectories of $\dot{z}(t)=h(z)$. Another way to obtain the same result is as follows. If $\dot{z}(t)=\frac{dz}{dt}=h(z)$ then $dt=\frac{dz}{h(z)}$. Introducing the primitive function $\Theta(z)=\int \frac{dz}{h(z)}$ and using the fact that $t$ is a real parameter, we conclude that the trajectories of the above field are given by the family $\Im \Theta (z)= \mathrm{const}$. \end{proof} Developing further the connection pointed out in \cref{lm:gradient}, let us associate to a rational vector field $v_R=-R(z) {\partial_z}=-\frac{Q(z)}{P(z)} {\partial_z}$ the $1$-form $\omega=-\frac{P(z)}{Q(z)}dz$.
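\cref{lm:gradient} can be illustrated numerically by integrating $\dot z = h(z)$ and monitoring $\Im\Theta(z(t))$ along the way. In the following sketch (our toy choice, not from the text) $h(z)=z^2$, so that $\Theta(z)=-1/z$:

```python
# Classical fourth-order Runge-Kutta integration of dz/dt = h(z);
# along the computed trajectory, Im Theta(z) = Im(-1/z) stays constant.
def rk4_step(z, dt, h):
    k1 = h(z)
    k2 = h(z + dt * k1 / 2)
    k3 = h(z + dt * k2 / 2)
    k4 = h(z + dt * k3)
    return z + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

h = lambda z: z * z
z = 0.5 + 0.5j
c0 = (-1 / z).imag                 # Im Theta at the starting point
for _ in range(1000):
    z = rk4_step(z, 1e-3, h)
    assert abs((-1 / z).imag - c0) < 1e-9
print("Im Theta is constant along the trajectory")
```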
Then \cref{lm:gradient} claims that the integral trajectories of $v_R$ form the family of curves given by the condition that the imaginary part of the primitive function of $\omega$ is constant. \begin{definition} Given a compact Riemann surface $Y$ with a meromorphic $1$-form $\omega$, we denote by $Cr_\omega\subset Y$ the set of all zeros and poles of $\omega$. We define the \defin{flat model} of the pair $(Y,\omega)$ as the flat metric on $Y\setminus Cr_\omega$ induced by the integration of $\omega$, where the distance between two points $p_1$ and $p_2$ is $\vert \int_{p_1}^{p_2}\omega\vert$. \end{definition} \subsection{Inflection points of trajectories of an analytic vector field}\label{ssec:inflections} In what follows we will need a description of the set of inflection points of trajectories of an analytic vector field in the complex plane. Let $W(\mathbf{z})\partial_{\mathbf{z}}$ be a vector field, where $W: \mathbb{C} \to \mathbb{C}$, $W(\mathbf{z}) = (u(x,y), v(x,y))$. Typically, for real-analytic $W(\mathbf{z})\partial_{\mathbf{z}}$, there exists a curve $\defin{\mathfrak{I}_W}\subset \mathbb{R}^2$ consisting of all points at which trajectories of $W(\mathbf{z})\partial_{\mathbf{z}}$ have inflections. Our next goal is to describe $\mathfrak{I}_W$ when $W(\mathbf{z})\partial_{\mathbf{z}}$ is given by the real and imaginary parts of a complex-analytic function. Suppose now that $\mathbf{z}$ is an inflection point of a trajectory of $W(\mathbf{z})\partial_{\mathbf{z}}$. This means that the vector field at the point $\mathbf{z} + \epsilon W(\mathbf{z})$ for small $\epsilon$ is close to $W(\mathbf{z})$. In other words, $W(\mathbf{z})$ and $W(\mathbf{z} + \epsilon W(\mathbf{z}))$ are almost parallel. More exactly, we require that \[ \lim_{\epsilon \to 0} \frac{1}{\epsilon} \begin{vmatrix} u(x,y) & v(x,y) \\ u(x + \epsilon u(x,y),y + \epsilon v(x,y)) & v(x + \epsilon u(x,y),y + \epsilon v(x,y)) \end{vmatrix} =0.
\] Expanding the determinant and applying l'Hospital's rule, we end up with \begin{equation}\label{eq:generalFormula} u\, ( v'_x u + v'_y v ) - v\, ( u'_x u + u'_y v ) = 0. \end{equation} \noindent In the special case when \[ W(\mathbf{z}) = (u(x,y), v(x,y)), \quad u(x,y) + iv(x,y) = R(\mathbf{z}), \; \quad \mathbf{z} = x+i y \in \mathbb{C}, \] and $R(\mathbf{z}) : \setRS \to \setRS$ is analytic/meromorphic, one can say more. In this case we will write $\defin{\mathfrak{I}_R}$ instead of $\mathfrak{I}_W$ and use the notation $z$ instead of $\mathbf{z}$. \begin{lemma}\label{lm:infl} For an analytic function $R(z)$, the curve of inflections of the vector field $W=(\Re R, \Im R)$ satisfies the condition $\Im R^\prime=0$. \end{lemma} \begin{proof} Recall that the Cauchy--Riemann equations for $R(z)$ have the form \[ \frac{\partial u}{\partial x} = \frac{\partial v}{\partial y}, \qquad \frac{\partial v}{\partial x} = -\frac{\partial u}{\partial y}. \] If we apply them to \eqref{eq:generalFormula} assuming that either $u$ or $v$ is non-vanishing, we end up with the simple condition \[ \frac{\partial u}{\partial y} = \frac{\partial v}{\partial x} = 0 \] which is equivalent to the requirement that $R^\prime(z)$ attains a real value. If $u=v=0$ we get a zero of $R$, which is an uninteresting case. \end{proof} Set $\defin{\flex(x,y)} \coloneqq - \frac{\partial u}{\partial y}$, i.e., let $\flex(x,y)$ be the imaginary part of $R^\prime(z)$. By \cref{lm:infl}, $\mathfrak{I}_R$ is the locus of $\flex(x,y)=0$. Note that for the rational function $R(z)=\frac{Q(z)}{P(z)}$, one gets \begin{equation} \flex(x,y) = \frac{1}{|P|^4}\Im\left( (PQ'- QP' )\overline{P}^2 \right). \end{equation} For further use let us denote by $\mathfrak{I}_R^+\subset \mathfrak{I}_R$ the portion where $R^\prime>0$ and by $\mathfrak{I}_R^-\subset \mathfrak{I}_R$ the portion where $R^\prime<0$.
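The displayed expression for $\flex$ can be sanity-checked against a numerical derivative of $R$. The following sketch (our illustration; the polynomials $Q=z^3-1$ and $P=z^2+i$ are arbitrary samples, chosen only so that $P$ does not vanish at the test points) compares the rational formula with a central-difference approximation of $\Im R'$:

```python
def Q(z):  return z ** 3 - 1.0
def dQ(z): return 3.0 * z ** 2
def P(z):  return z ** 2 + 1j
def dP(z): return 2.0 * z

def flex(z):
    # flex = Im((P Q' - Q P') conj(P)^2) / |P|^4, cf. the displayed formula
    num = (P(z) * dQ(z) - Q(z) * dP(z)) * P(z).conjugate() ** 2
    return num.imag / abs(P(z)) ** 4

def im_dR_numeric(z, h=1e-6):
    # central difference for R'(z) along the real direction
    R = lambda w: Q(w) / P(w)
    return ((R(z + h) - R(z - h)) / (2 * h)).imag

for z in [0.3 + 0.7j, -1.2 + 0.4j, 2.0 - 1.5j]:
    assert abs(flex(z) - im_dR_numeric(z)) < 1e-6
print("flex formula agrees with Im R'")
```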
\smallskip The singularities on the curve $\mathfrak{I}_R$ correspond to the points at which both $R^\prime(z)$ and $R^{\prime\prime}(z)$ attain real values. In particular, since \[ R^\prime(z)=\frac{Q^\prime P-QP^\prime}{P^2}\quad \text{and} \quad R^{\prime\prime}(z)=\frac{Q^{\prime\prime}P^2-QPP^{\prime\prime} - 2PP^\prime Q^\prime+2Q(P^\prime)^2}{P^3} \] then zeros of $P$ are the singular points of $\mathfrak{I}_R$, as can be seen in \cref{fig:bluecurve}. Moreover, for generic rational $R(z)$, its poles are the only singular points of $\mathfrak{I}_R$. \begin{lemma} Any bounded connected component of the solutions to $\flex(x,y)=0$ contains at least one pole of $\RR(z)$. \end{lemma} \begin{proof} Let $\Gamma$ be such a bounded connected component, and assume that $\Gamma$ does not contain any pole of $\RR$. We can then let $D_\epsilon \supset \Gamma$ be the domain consisting of all points of distance at most $\epsilon$ from $\Gamma$. By choosing $\epsilon$ sufficiently small, we can guarantee that $D_\epsilon$ is a bounded domain which does not contain poles of $\RR$. Moreover, we can ensure that $D_\epsilon$ does not intersect any other connected component of the set $\{\flex=0\}$. But since $\flex$ is the imaginary part of the analytic function $R'(z)$, it is harmonic in $D_\epsilon$, and so $|\flex(x,y)|$ must be $0$ on the boundary of $D_\epsilon$. This violates the choice of $\epsilon$, and our assumption must have been false. \end{proof} One can also ask under what conditions on $R$ the curve $\mathfrak{I}_R$ is compact. In our main application, we have that $\deg Q=\deg P+1$. As before, expand $R(z)\partial_z$ at $\infty$: \[ \frac{Q(z)}{P(z)}\partial_z =\left( \lambda z + \text{higher order terms in } \frac{1}{z}\right)\partial_z. \] The following claim holds. \begin{lemma}\label{lem:blueCurveCompact} The set $\mathfrak{I}_R$ is compact whenever $\Im \lambda \neq 0$.
\end{lemma} \begin{proof} Since \[ \mathfrak{I}_R = \{ z\in \mathbb{C} : \Im R^\prime(z)=0 \} = \{ z\in \mathbb{C} : \Im \lambda + \text{higher order terms in }\frac{1}{z}=0 \}, \] we see that $\Im R^\prime(z) \approx \Im \lambda \neq 0$ whenever $|z|$ is large. Hence, $\mathfrak{I}_R$ is bounded and, being closed, compact. \end{proof} \begin{example}\label{ex:5} In \cref{fig:bluecurve}, we show $\mathfrak{I}_R$ for the vector field $R(z) {\partial_z}$ with \[ \defin{\RR(z)} \coloneqq (1+i)z+\frac{2}{z}+\frac{1}{z-i}+\frac{4}{z-(1+i)}. \] \begin{figure}[H] \begin{center} \includegraphics[width=0.5\textwidth]{inflection-points-curve} \end{center}\caption{ Example of a curve of inflections. }\label{fig:bluecurve} \end{figure} Here \[ \flex(x,y) = 1 +\frac{4 x y}{\left(x^2+y^2\right)^2}+\frac{x (2 y-2)}{\left(x^2+y^2-2 y+1\right)^2}+\frac{(4 x-4) (2 y-2)}{\left(x^2-2 x+y^2-2 y+2\right)^2}. \] \end{example} \begin{proposition} For generic $R(z)$, the direction of the curve of inflections at a pole $z_0$ of $R$ coincides with the direction of separatrices emanating from $z_0$. \end{proposition} \begin{proof} Given an arbitrary rational $R(z)$, suppose that it has a pole $z_0$ of order $k$. Without loss of generality, let us assume that $z_0=0$. We can then write \[ R(z)=\frac{c}{z^k}+ f(z) \] where $f(z)$ is rational and $\lim_{z\to0}f(z)z^k=0$. Hence, the derivative of $R(z)$ in a punctured neighborhood of $0$ is \[ R'(z)=\frac{-k c}{z^{k+1}}+f'(z) \] where $\lim_{z\to 0}f'(z)z^{k+1}=0$. Now, writing $c=a+bi$ we get \begin{align*} \Im R'(z) &=\Im\left(\frac{-k c}{z^{k+1}}+f'(z)\right) \\ &=\frac{ka}{|z|^{k+1}}\sin((k+1)\arg z)-\frac{kb}{|z|^{k+1}}\cos((k+1)\arg z)+\Im f'(z). \end{align*} Setting this expression equal to 0 and multiplying both sides by $|z|^{k+1}$ we get \[ ka\sin((k+1)\arg z)-kb\cos((k+1)\arg z)+|z|^{k+1}\,\Im f'(z)=0. \] Letting $z\to 0$ we obtain \[ \lim_{z\to 0}\left(a\sin((k+1)\arg z)-b\cos((k+1)\arg z)\right)=0.
\] Comparing the above with \cref{prop:dirSeptrix}, we find that this equation is solved precisely when $\lim_{z\to0}\arg z$ is equal to the argument of the separatrices emanating from $0$. \end{proof} \end{appendices} \bibliographystyle{alphaurl}
\section{Introduction} Recent years have seen remarkable developments in many-body theory in the form of an assortment of techniques that may be loosely termed bosonization. The beginnings of these types of techniques may be traced back to the work of Tomonaga \cite{Tom} and later on by Luttinger \cite{Lutt} and by Lieb and Mattis \cite{Lieb}. The work of Sawada \cite{Sawada} and of Arponen and Pajanne \cite{Arponen} in recasting the fermi gas problem in a bose language also has to be mentioned. Arponen and Pajanne recover corrections to the Random Phase Approximation (RPA) of Bohm and Pines \cite{Bohm} in a systematic manner. In nuclear physics, bosonization is widely used to study collective properties; for an introduction see the book by Iachello and Arima \cite{Arima}. In the 1970s an attempt was made by Luther \cite{Luther} at generalising these ideas to higher dimensions. Closely related to this is the work by Sharp et al. \cite{Sharp} in current algebra. More progress was made by Haldane \cite{Haldane}, which culminated in the explicit computation of the single-particle propagator by Castro-Neto and Fradkin \cite{Neto}, by Houghton, Marston et al. \cite{Mars}, and also by Kopietz et al. \cite{Kop}. Rigorous work by Frohlich and Marchetti \cite{Froch} is also along similar lines. The work of Frau et al. \cite{Zemba} on algebraic bosonization is also relevant to the present article, as those authors have considered effects beyond the linear dispersion. The exactly solvable models of Calogero and Sutherland are of relevance here as well; the exact propagators of these models have been computed by various authors \cite{Lesage}. Recently, these types of models have been generalised to more than one dimension by Ghosh \cite{Ghosh}. The attempt made here is to generalise the concepts of Haldane \cite{Haldane} to accommodate short-wavelength fluctuations where the concept of a linearised bare fermion energy dispersion is no longer valid.
To motivate progress in this direction, we find that it is necessary to introduce two different concepts: one is the canonical conjugate of the fermi/bose density distribution, the other is the concept of sea/condensate displacements. Historically speaking, the idea that the velocity operator could serve as the canonical conjugate of the density has been around for a long time, and this has been exploited in the study of He II by Sunakawa et al. \cite{Sun}. However, the authors are not aware of a rigorous study of the meaning of this object; in particular, an explicit formula for the canonical conjugate of the density operator has, to the best of the authors' knowledge, never been written down in terms of the field operators. The work by Sharp et al. \cite{Sharp} comes close to what we are attempting here. The concept of a sea-displacement is a generalisation of the traditional approach used for bosonizing one-dimensional systems such as the Tomonaga-Luttinger models \cite{Tom,Lutt}. There, one introduces bose fields that correspond physically to displacements of the fermi surface (in 1D, fermi points). These bose fields have simple forms relating them to number-conserving products of fermi fields. The field operator is obtained by exponentiating the commutation rule between the surface-displacement operator and the field operator. By analogy, we generalise these ideas, so that one is no longer restricted to be close to the fermi surface. The way this is done is to postulate the existence of bose fields that correspond to displacements of the fermi sea rather than just the fermi surface. From this it is possible to write down formulas for the number-conserving product of two fermi fields in terms of the bose fields. A similar construction is possible when the parent fields are bosons, but here we find that instead of sea-displacements, we have to introduce operators that correspond physically to displacements of the condensate.
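To fix notation, it may help to recall schematically the 1D construction being generalised. In the Tomonaga-Luttinger models, the surface-displacement boson for, say, the right-moving branch may be written (for $ q > 0 $, with $ L $ the system size) as \begin{equation} b^{\dagger}_{q} = \left(\frac{2\pi}{Lq}\right)^{\frac{1}{2}}\sum_{k}c^{\dagger}_{k+q}c_{k} \end{equation} the sum being restricted to the right-moving branch; with a linearised dispersion these objects satisfy canonical bose commutation rules, $ [b_{q}, b^{\dagger}_{q^{'}}] = \delta_{q,q^{'}} $. It is precisely this kind of formula that we wish to free from the restriction to a single branch and a linear dispersion.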
Actually, the bose case is much simpler and a mathematically rigorous formulation of this correspondence is possible. This is a boon, since we use this fact to make plausible the analogous correspondence in the fermi case. The assertions in the fermi case are not proved "rigorously"; rather, they are made exceedingly plausible by analogy. This is the main drawback of this article. This article is organised as follows. In the next section, we present some formulas that relate the number-conserving product of two fermi/bose fields to the relevant sea/condensate-displacement operators that are postulated to be canonical bosons. The sea/condensate-displacement operators in turn may be related to the parent fermi/bose fields; as it happens, this formula is simple in the case when the parent fields are bosons but is difficult in the case when the parent fields are fermions. Following this, we write down a generic formula for the fermi/bose field-operator in terms of the density operator (operator-valued distribution, to be precise) and its canonical conjugate. The new ingredient in this section is the canonical conjugate of the density operator. This quantity may in turn be related to currents and densities. We find that these formulas are ambiguous unless a proper choice is made for a certain phase functional. For bosons, we find that this choice is the zero-functional, but for fermions it has to be determined by making contact with the free theory (done in the section following the one just described). Combining the two previous sections, we write down in the next section formulas for currents and densities in terms of the sea/condensate-displacements; the field-operator has a formula in terms of the sea/condensate-displacements as well. Contact is made with the propagator of the free theory and the undetermined phase functional of the previous sections is determined for the fermi case.
In the next section, interaction terms are introduced that correspond to two-body repulsive interactions. It is argued and demonstrated that selectively retaining parts of the interaction that are quadratic in the sea/condensate-displacements amounts to doing Bogoliubov/RPA theory. Corrections to this quadratic hamiltonian are easy to write down but are not used here to compute corrections to RPA/Bogoliubov theory, as this requires significantly more effort. It is found that the diagonalisation of the RPA hamiltonian is rather tricky if one wants to recover both the particle-hole modes and the collective mode. In the end, closed formulas are written down for the fermi propagator in all three spatial dimensions and their various qualitative features are examined. This completes the solution of the many-body problem in the RPA/Bogoliubov limit. The appendices are as follows: Appendix A contains a detailed proof of the correspondence between the number-conserving product of two bose fields and the corresponding condensate-displacements. Appendix B involves writing down similar ideas for fermi systems; however, here the various assertions are only made plausible, unlike in the bose case where a rigorous solution is possible. Appendix C is devoted to proving the assertion that retaining only terms linear in the sea-displacements in the definition of the density recovers the RPA. Appendix D involves a derivation of the formula for the momentum distribution of the 1D system using the equation of motion approach. Appendix E contains some technical statements regarding the proof of uniqueness of the formula relating the fermi field with the corresponding currents and densities. \section{Expressing Products of Parent Fields in Terms of Sea-displacements} In this section we introduce canonical bose fields called sea-displacements in the fermi case and condensate-displacements in the bose case.
First, we write down a formula for the number-conserving product of two bose fields in terms of the condensate-displacement operators. A rigorous proof of this is relegated to Appendix A. The correspondence is made plausible by making several observations about these formulae. Let us first focus on the bose case. Let $ b_{ {\bf{q}} } $ and $ b^{\dagger}_{ {\bf{q}} } $ be canonical bose operators. From these, we may construct other bose operators defined as follows ($ {\bf{q}} \neq {\bf{0}} $), \begin{equation} d_{ {\bf{q}}/2 }({\bf{q}}) = (\frac{1}{\sqrt{N_{0}}}) b^{\dagger}_{ {\bf{0}} }b_{ {\bf{q}} } \end{equation} and, \begin{equation} d_{ {\bf{0}} }({\bf{0}}) = 0 \end{equation} where $ N_{0} = b^{\dagger}_{ {\bf{0}} }b_{ {\bf{0}} } $. This is the condensate-displacement annihilation operator. It is so named for the following reason. The definition suggests that this operator removes a boson from among those that are not in the condensate and returns it to the condensate, thereby displacing the latter. The reason for the redundant momentum label in the notation $ d_{ {\bf{q}}/2 }({\bf{q}}) $ becomes clear if one realises that a more general object would be a sea-displacement annihilation operator $ d_{ {\bf{k}} + {\bf{q}}/2 }({\bf{q}}) $. Since the condensate corresponds to $ {\bf{k}} = 0 $, we have just the condensate-displacement annihilation operator. In fact, it will be shown subsequently that for the fermi case we have to deal with this more general object, namely the sea-displacement annihilation operator. It may be shown (see Appendix A) that this object $ d_{ {\bf{q}}/2 }({\bf{q}}) $ satisfies canonical bose commutation rules. Also a formula is possible for the number-conserving product of two parent bosons in terms of these condensate-displacements. The formula is written down below and proved in Appendix A.
\[ b^{\dagger}_{ {\bf{k+q/2}} }b_{ {\bf{k-q/2}} } = N_{0} \delta_{ {\bf{k}}, 0 }\delta_{ {\bf{q}}, 0 } + [\delta_{ {\bf{k+q/2}}, 0 }(\sqrt{N_{0}})d_{ {\bf{k}} }(-{\bf{q}}) + \delta_{ {\bf{k-q/2}}, 0 }d^{\dagger}_{ {\bf{k}} }({\bf{q}}) (\sqrt{N_{0}})] \] \begin{equation} + d^{\dagger}_{ (1/2)({\bf{k+q/2}}) }({\bf{k+q/2}}) d_{ (1/2)({\bf{k-q/2}}) }({\bf{k-q/2}}) \label{BOSE} \end{equation} where, \begin{equation} N_{0} = N - \sum_{ {\bf{q}}_{1} }d^{\dagger}_{ {\bf{q}}_{1}/2 }({\bf{q}}_{1}) d_{ {\bf{q}}_{1}/2 }({\bf{q}}_{1}) \end{equation} and, \begin{equation} [d_{ {\bf{q}}/2 } ({\bf{q}}), N] = 0 \end{equation} \begin{equation} N = \sum_{ {\bf{q}} }b^{\dagger}_{ {\bf{q}} }b_{ {\bf{q}} } \end{equation} also the object $ d_{ {\bf{0}} }({\bf{0}}) = 0 $, by definition. The way the authors initially derived this formula is as follows. One starts off with the observation that the object $ b^{\dagger}_{ {\bf{k}} + {\bf{q}}/2 } b_{ {\bf{k}} - {\bf{q}}/2 } $ is the only one that enters in the hamiltonian of number-conserving systems. Furthermore, it satisfies closed commutation rules amongst other members of its kind. One is therefore led to look for formulas for these objects in terms of other bosons with a view to making the full hamiltonian more easily diagonalisable. In particular, if there were bose operators $ d_{ {\bf{q}}/2 }({\bf{q}}) $ and $ d^{\dagger}_{ {\bf{q}}/2 }({\bf{q}}) $ such that $ b^{\dagger}_{ {\bf{k}} + {\bf{q}}/2 } b_{ {\bf{k}} - {\bf{q}}/2 } $ was exactly linear in these bosons, then the full hamiltonian would indeed be exactly diagonalisable. We find that this is not the case: there are corrections to this linear term, and it so happens that the introduction of a quadratic term in the condensate-displacements in fact makes the correspondence exact.
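As a rough consistency check on the claim that $ d_{ {\bf{q}}/2 }({\bf{q}}) $ is a canonical boson (the exact operator-ordering issues are dealt with in Appendix A), one may treat $ N_{0} $ as a c-number, as is appropriate when the condensate is macroscopically occupied. Then \begin{equation} [d_{ {\bf{q}}/2 }({\bf{q}}), d^{\dagger}_{ {\bf{q}}/2 }({\bf{q}})] \approx \frac{1}{N_{0}}[b^{\dagger}_{ {\bf{0}} }b_{ {\bf{q}} }, b^{\dagger}_{ {\bf{q}} }b_{ {\bf{0}} }] = \frac{ N_{0} - b^{\dagger}_{ {\bf{q}} }b_{ {\bf{q}} } }{ N_{0} } \approx 1 \end{equation} the last step being valid whenever the depletion of the condensate is small.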
The authors are not aware of a deeper reason behind this simple formula that terminates after including the quadratic term; after all, the formula for the parent annihilation operator $ b_{ {\bf{q}} } $ in terms of the condensate displacements is formidable, as we shall soon see. The bose case, being so simple and exact, can be used as a benchmark to write down corresponding formulas in the fermi case, where rigorous proofs are much harder to come by. The authors also have in mind generalisations to relativistic systems, where one might profit by following the above prescription. In particular, it would be fascinating to see if the ideas above were useful in getting non-perturbative information regarding gauge theories like QED, QCD, etc. But this is far into the future. For now, let us try to write down a similar correspondence for the nonrelativistic fermi system. As mentioned earlier, for fermi systems it is necessary to postulate the existence of a sea-displacement annihilation operator, denoted by $ a_{ {\bf{k}} }({\bf{q}}) $. A formula for this in terms of the fermi fields is extremely difficult to deduce. In Appendix B, attempts are made to do exactly this. There it is pointed out that these objects satisfy canonical boson commutation rules. The important issues that enable one to draw practical conclusions fortunately do not depend very much on the technical details. In Appendix B and in the sections that follow, we show how to extract the necessary physics while circumventing the technical details. It must be pointed out, however, that this drawback is regrettable. Let us merely quote the final answers and later on make these formulas plausible.
The RPA form of the number-conserving product of two fermi fields in terms of the sea-bosons is given by ($ {\bf{q}} \neq 0 $), \[ c^{\dagger}_{ {\bf{k}} + {\bf{q}}/2 } c_{ {\bf{k}} - {\bf{q}}/2 } = (\frac{N}{\langle N \rangle })^{\frac{1}{2}} [\Lambda_{ {\bf{k}} }({\bf{q}}) a_{ {\bf{k}} }(-{\bf{q}}) + a^{\dagger}_{ {\bf{k}} }({\bf{q}}) \Lambda_{ {\bf{k}} }(-{\bf{q}})] \] \[ + T_{1}({\bf{k}}, {\bf{q}})\sum_{ {\bf{q}}_{1} } a^{\dagger}_{ {\bf{k}} + {\bf{q}}/2 - {\bf{q}}_{1}/2 }({\bf{q}}_{1}) a_{ {\bf{k}} - {\bf{q}}_{1}/2 }({\bf{q}}_{1} - {\bf{q}}) \] \begin{equation} - T_{2}({\bf{k}}, {\bf{q}})\sum_{ {\bf{q}}_{1} } a^{\dagger}_{ {\bf{k}} - {\bf{q}}/2 + {\bf{q}}_{1}/2 }({\bf{q}}_{1}) a_{ {\bf{k}} + {\bf{q}}_{1}/2 }({\bf{q}}_{1} - {\bf{q}}) \end{equation} here, \begin{equation} T_{1}({\bf{k}}, {\bf{q}}) = \sqrt{ 1 - {\bar{n}}_{ {\bf{k}} + {\bf{q}}/2 } } \sqrt{ 1 - {\bar{n}}_{ {\bf{k}} - {\bf{q}}/2 } } \end{equation} \begin{equation} T_{2}({\bf{k}}, {\bf{q}}) = \sqrt{ {\bar{n}}_{ {\bf{k}} + {\bf{q}}/2 } {\bar{n}}_{ {\bf{k}} - {\bf{q}}/2 } } \end{equation} \begin{equation} \Lambda_{ {\bf{k}} }({\bf{q}}) = \sqrt{ {\bar{n}}_{ {\bf{k}} + {\bf{q}}/2 } (1 - {\bar{n}}_{ {\bf{k}} - {\bf{q}}/2 }) } \end{equation} Here, the sea-boson commutes with the total number of fermions, \begin{equation} [a_{ {\bf{k}} }({\bf{q}}), N] = 0 \end{equation} and also the operator $ a_{ {\bf{k}} }({\bf{0}}) = 0 $. Further, \begin{equation} n_{ {\bf{k}} } = n^{\beta}({\bf{k}}) \frac{ N }{ \langle N \rangle } + \sum_{ {\bf{q}} }a^{\dagger}_{ {\bf{k}} - {\bf{q}}/2 }({\bf{q}}) a_{ {\bf{k}} - {\bf{q}}/2 }({\bf{q}}) - \sum_{ {\bf{q}} }a^{\dagger}_{ {\bf{k}} + {\bf{q}}/2 }({\bf{q}}) a_{ {\bf{k}} + {\bf{q}}/2 }({\bf{q}}) \label{NUMBER} \end{equation} and \begin{equation} n^{\beta}({\bf{k}}) = \frac{1}{ exp(\beta(\epsilon_{ {\bf{k}} }-\mu)) + 1 } \end{equation} Also $ {\bar{n}}_{ {\bf{k}} } = \langle n_{ {\bf{k}} } \rangle $. The expectation value is with respect to the full interacting ground state.
This quantity depends on the interactions that are present in the system and must be evaluated self-consistently. In fact, there is a deeper reason for introducing this. The exact formula relating the number-conserving product of two fermi fields and the sea-bosons may be expected to involve the number operator itself under the square root sign. This is made exceedingly likely by analogy with the bose case, where the square root of the number operator of the zero momentum state appears. In Appendix B the manner in which this exact correspondence may be deduced is hinted at. At this stage, it is pertinent to merely write down a formula for the sea-boson annihilation operator in the RPA-limit. The sea-boson is defined analogously to the condensate-displacement boson, except that the fermi case is more complicated due to the presence of the fermi surface. The sea-boson may be defined as follows (the rest of the details, including a "proof" of this fact and how it fits into the fermi-bilinear-sea-boson correspondence, are relegated to Appendix B), \begin{equation} a_{ {\bf{k}} }({\bf{q}}) = \frac{1}{\sqrt{ n_{ {\bf{k}} - {\bf{q}}/2 } }} c^{\dagger}_{ {\bf{k}} - {\bf{q}}/2 } (\frac{ n^{\beta}({\bf{k}} - {\bf{q}}/2) }{\langle N \rangle })^{\frac{1}{2}} e^{i\mbox{ }\theta({\bf{k}}, {\bf{q}})}c_{ {\bf{k}} + {\bf{q}}/2 } \label{SEABOSONFORM} \end{equation} here $ \theta({\bf{k}}, {\bf{q}}) $ is a c-number phase that serves to randomly cancel out troublesome terms: this is also related to the "random phase" of the Random Phase Approximation of Bohm and Pines. Thus the above formula for the sea-boson is in the "Random Phase" approximation. This correspondence recovers the salient features of the finite and zero temperature aspects of the free theory provided we make the following assumption: the sea-bosons do participate in the thermodynamic averaging but come with an infinite negative chemical potential.
This means that, as far as the free theory is concerned, the average value of the sea-boson occupation is zero in the non-interacting case. The kinetic energy operator in the sea-boson language is given by, \begin{equation} K = \sum_{ {\bf{k}}, {\bf{q}} }\frac{ {\bf{k.q}} }{m} \mbox{ }a^{\dagger}_{ {\bf{k}} }({\bf{q}})a_{ {\bf{k}} }({\bf{q}}) + N\mbox{ }\epsilon_{0} \end{equation} here $ \epsilon_{0} $ is the kinetic energy per particle. Therefore, \begin{equation} \langle a^{\dagger}_{ {\bf{k}} }({\bf{q}}) a_{ {\bf{k}} }({\bf{q}}) \rangle = \frac{1}{exp(\beta(\frac{ {\bf{k.q}}}{m} - \mu_{B})) - 1} = 0 \end{equation} here $ - \mu_{B} = \infty $. However, when there are interactions in the system, the answer is likely to be different. In particular, it is likely to be a non-analytic function of the interaction in such a way that it vanishes as the coupling goes to zero (this is demonstrated explicitly in Appendix D). Roughly speaking, we may write, \begin{equation} \langle a^{\dagger}_{ {\bf{k}} }({\bf{q}}) a_{ {\bf{k}} }({\bf{q}}) \rangle \approx (\frac{1}{V})exp(-1/v) \end{equation} where $ v $ is the coulomb repulsion parameter and $ V $ is the volume. All these do come out naturally from the correspondence written down above, as we shall soon see. We have thus written down a useful correspondence between fermi and bose operators that recovers the salient features of the free theory at zero and finite temperature, and it is clear that this correspondence is all that is needed to write down model hamiltonians with any sort of interactions, coulombic, with phonons, etc., and extract exact non-perturbative (more precisely, non-analytic in the couplings) solutions. These solutions possess features that are impossible to capture via diagrammatic means, let alone mean-field theory. Thus a strong case is to be made for this method as a new paradigm for condensed matter physics.
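The form of the kinetic energy quoted above may be checked against elementary considerations. Commuting $ a^{\dagger}_{ {\bf{k}} }({\bf{q}}) $ with $ K $ gives \begin{equation} [K, a^{\dagger}_{ {\bf{k}} }({\bf{q}})] = \frac{ {\bf{k.q}} }{m}\mbox{ }a^{\dagger}_{ {\bf{k}} }({\bf{q}}) \end{equation} and this is precisely the energy of the bare particle-hole pair that $ a^{\dagger}_{ {\bf{k}} }({\bf{q}}) $ creates, \begin{equation} \epsilon_{ {\bf{k}} + {\bf{q}}/2 } - \epsilon_{ {\bf{k}} - {\bf{q}}/2 } = \frac{ ({\bf{k}} + {\bf{q}}/2)^{2} - ({\bf{k}} - {\bf{q}}/2)^{2} }{2m} = \frac{ {\bf{k.q}} }{m} \end{equation} so the sea-boson labelled by $ ({\bf{k}}, {\bf{q}}) $ carries the excitation energy of moving a fermion from $ {\bf{k}} - {\bf{q}}/2 $ to $ {\bf{k}} + {\bf{q}}/2 $, as the correspondence demands.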
\section{Field Operator in Terms of Density and its Canonical Conjugate} In this section, we introduce the canonical conjugate of the fermi/bose density distribution. The reason for doing this is that we would like to express the field operator itself in terms of the density and its canonical conjugate, and consequently in terms of currents and densities. None of these ideas are really new. For example, Sunakawa et al. \cite{Sun} use the velocity operator as a canonical conjugate of the density in their investigation of the properties of He-II. The velocity operator is somewhat related to the current operator but is not exactly equal to it. The reason is that the current operator behaves like the conjugate of the density as far as commutation rules with the latter are concerned, but does not commute with members of its own kind (it is difficult to say this in words, but it will soon become clear). Let us postulate the existence of the object $ \Pi({\bf{x}}\sigma) $ as the canonical conjugate of the density, \begin{equation} [\Pi({\bf{x}}\sigma), \rho({\bf{y}}\sigma^{'})] = i\mbox{ }\delta({\bf{x}} - {\bf{y}})\delta_{\sigma, \sigma^{'}} \end{equation} \begin{equation} [\Pi({\bf{x}}\sigma), \Pi({\bf{y}}\sigma^{'})] = 0 \end{equation} It is clear that redefinitions of this object by amounts that involve translations by (more or less arbitrary) functionals of the density are not going to spoil the nature of the commutation rules above. However, we shall take the point of view that $ \Pi $ is defined to be that (almost unique) object that satisfies the relation below (making mathematically rigorous sense out of all this requires the use of functional analysis and will be attempted in Appendix E).
\begin{equation} \rho({\bf{x}}\sigma) = -i \mbox{ }\frac{\delta}{\delta \Pi({\bf{x}}\sigma) } \end{equation} Observe that $ \rho({\bf{x}}\sigma) = \psi^{\dagger}({\bf{x}}\sigma)\psi({\bf{x}}\sigma) $ (technical problems involving the multiplication of operator-valued distributions at the same point are alleviated by assuming that we have the whole system in a box, with periodic boundary conditions on the fields, making any infinities only as large as the volume of the box itself; please refer to Appendix E for more details). Observe that (valid for both bose as well as fermi systems), \begin{equation} [\rho({\bf{x}}\sigma), \psi({\bf{x}}^{'}\sigma^{'})] = -\delta^{d}({\bf{x}}-{\bf{x}}^{'})\delta_{ \sigma, \sigma^{'} } \psi({\bf{x}}\sigma) \end{equation} Rewriting this as a differential equation, \begin{equation} [-i\frac{\delta}{\delta \Pi({\bf{x}}\sigma)}, \psi({\bf{x}}^{'}\sigma^{'})] = -\delta^{d}({\bf{x}}-{\bf{x}}^{'})\delta_{ \sigma, \sigma^{'} } \psi({\bf{x}}\sigma) \end{equation} This may be solved (exponentiation of the commutation rules is the more technical term) as, \begin{equation} \psi({\bf{x}}\sigma) = exp(-i\Pi({\bf{x}}\sigma))F([\rho];{\bf{x}}\sigma) \end{equation} Observe now that $ \rho = \psi^{\dagger}\psi $. Therefore, \begin{equation} F^{\dagger}([\rho];{\bf{x}}\sigma)F([\rho];{\bf{x}}\sigma) = \rho({\bf{x}}\sigma) \end{equation} This may in turn be solved, and the final Density Phase Variable Ansatz (DPVA for short) may be written as, \begin{equation} \psi({\bf{x}}\sigma) = e^{-i\mbox{ }\Pi({\bf{x}}\sigma)} e^{i\mbox{ }\Phi([\rho];{\bf{x}}\sigma)}(\rho({\bf{x}}\sigma))^{\frac{1}{2}} \label{DPVA} \end{equation} It may be noted above that redefinitions of $ \Pi $ consistent with it being the canonical conjugate to $ \rho $ may be absorbed by a suitable redefinition of the phase functional $ \Phi $. Therefore, Eq.~(\ref{DPVA}) is in fact quite general.
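It is straightforward to check that Eq.~(\ref{DPVA}) has the required property. Since the factor to the right of the exponential is a functional of the density alone, and the functional derivatives commute among themselves, the entire $ \Pi $-dependence resides in the prefactor, so that \begin{equation} -i\frac{\delta}{\delta \Pi({\bf{x}}\sigma)}\mbox{ }e^{-i\mbox{ }\Pi({\bf{x}}^{'}\sigma^{'})} = -\delta^{d}({\bf{x}} - {\bf{x}}^{'})\delta_{ \sigma, \sigma^{'} }\mbox{ }e^{-i\mbox{ }\Pi({\bf{x}}^{'}\sigma^{'})} \end{equation} which is exactly the commutation rule of $ \rho $ with $ \psi $ written down earlier.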
The crucial point of this whole exercise is that the phase functional $ \Phi $ determines the statistics of the field $ \psi({\bf{x}}\sigma) $. It may be shown (the proof is rather tedious and, since this issue is not central to the practical computations, we defer it to a future communication) that the imposition of bose/fermi commutation rules on $ \psi $ involves imposing the following restriction on the form of $ \Phi $. \[ \Phi([\{\rho({\bf{y_{1}}}\sigma_{1}) - \delta({\bf{y_{1}}}-{\bf{x}}^{'})\delta_{\sigma_{1},\sigma^{'}} \} ] ;{\bf{x}}\sigma) \] \[ + \Phi([\rho];{\bf{x^{'}}}\sigma^{'}) - \Phi([\rho];{\bf{x}}\sigma) \] \begin{equation} -\Phi([\{\rho({\bf{y_{1}}}\sigma_{1}) - \delta({\bf{y_{1}}}-{\bf{x}})\delta_{\sigma_{1},\sigma} \} ] ;{\bf{x^{'}}}\sigma^{'}) = m\pi \label{recur} \end{equation} where $ m $ is an odd integer for fermions and an even integer for bosons. This recursion is to be satisfied for all $ ({\bf{x}}\sigma) \neq ({\bf{x^{'}}}\sigma^{'}) $. It will be shown later that the restriction is far more severe, brought about by the need to recover the free case properly. It may puzzle the reader that the above statement implies that a random choice of the phase functional that ensures that the recursion is satisfied does not suffice. This is mysterious, but is clarified by a conjecture in Appendix E. This is done by relating the canonical conjugate to the current operator and rewriting the DPVA in terms of currents and densities. Again, this type of idea has been addressed in the paper by Goldin et al. \cite{Sharp}. However, many in this field (pardon the pun!) continue to be under the mistaken impression that the formula for the annihilation operator (say the fermi operator) in terms of the corresponding currents and densities depends on whether the fields in question are free or whether there are interactions in the system.
This is shown to be false in the bose case, by demonstrating that there is a unique $ \Phi $, namely $ \Phi = 0 $, that reproduces the free theory properly. Interactions just change the form of the hamiltonian but do not affect the form of the field operator in terms of currents and densities. The same is true, but not easily seen, in the fermi case; indeed, throughout this article we find that the bose case is much simpler, and we shall take refuge under this rigorously justifiable edifice when confronted by fermi systems. Further, the formulas for the field operators suggested by Goldin, Menikoff and Sharp in their famous paper \cite{Sharp} are, according to our results, only partially correct, since they have not actually introduced the phase functional $ \Phi $ and computed it (this will again become clear soon). Let us now write down a formula for the current operator in terms of the canonical conjugate and density, \begin{equation} {\bf{J}} = (\frac{1}{2i})(\psi^{\dagger}\nabla\psi - (\nabla\psi)^{\dagger}\psi) \end{equation} using the DPVA Eq.~(\ref{DPVA}), \begin{equation} {\bf{J}}({\bf{x}}\sigma) = \rho(\nabla\Phi) - \rho(\nabla\Pi + [-i\Phi,\nabla\Pi]) \end{equation} From this it is possible to deduce a formula for the conjugate in terms of currents and densities, \begin{equation} \Pi({\bf{x}}\sigma) = X_{ 0\sigma } + \int^{ {\bf{x}} }d{\bf{l}}\mbox{ } (-1/\rho({\bf{y}}\sigma)){\bf{J}}({\bf{y}}\sigma) + \Phi([\rho];{\bf{x}}\sigma) - \int^{ {\bf{x}} }d{\bf{l}}\mbox{ }[-i\Phi,\nabla\Pi]({\bf{y}}\sigma) \end{equation} The line integral is along an arbitrary path from a remote point where all quantities may be set equal to zero.
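In the bose case, where it will be argued below that $ \Phi = 0 $, these formulas collapse to a familiar hydrodynamic form (suppressing the spin label), \begin{equation} {\bf{J}} = -\rho\mbox{ }\nabla\Pi \mbox{ }\mbox{ };\mbox{ }\mbox{ } \psi({\bf{x}}) = e^{-i\mbox{ }\Pi({\bf{x}})}(\rho({\bf{x}}))^{\frac{1}{2}} \end{equation} so that $ -\nabla\Pi $ plays the role of the velocity operator of Sunakawa et al. \cite{Sun}, as anticipated at the beginning of this section.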
The field operator may now be rewritten exclusively in terms of currents and densities like so, \begin{equation} \psi({\bf{x}}\sigma) = e^{-i\mbox{ }X_{0\sigma}-i\mbox{ }\int^{ {\bf{x}} }d{\bf{l}}\mbox{ } (-1/\rho({\bf{y}}\sigma)){\bf{J}}({\bf{y}}\sigma) -i\mbox{ } \Phi([\rho];{\bf{x}}\sigma) + i\mbox{ } \int^{ {\bf{x}} }d{\bf{l}}\mbox{ }[-i\Phi,\nabla\Pi]({\bf{y}}\sigma) } e^{i\mbox{ }\Phi([\rho];{\bf{x}}\sigma) }(\rho({\bf{x}}\sigma))^{\frac{1}{2}} \label{FIELDOP} \end{equation} here $ X_{0\sigma} $ is canonically conjugate to the total number of fermions/bosons ($ [X_{0\sigma}, N_{\sigma^{'}}] = i\mbox{ }\delta_{\sigma,\sigma^{'}} $), where $ N_{\sigma} = \sum_{ {\bf{k}} }c^{\dagger}_{ {\bf{k}}\sigma }c_{ {\bf{k}}\sigma } $. The need for this is clear: the gradient of $ \Pi $ does not involve the object $ X_{0\sigma} $, when in fact it should. To put it differently, the field operator when commuted with $ N_{\sigma} $ should produce itself, whereas if we omit the object $ X_{0\sigma} $ then we find that the field operator commutes with the total number, which should not happen. These nuances are not very important for the practical computations, as we shall see. It will be shown later that for bosons $ \Phi = 0 $ is the only possible choice, and for fermions $ \Phi $ has to be fixed by making contact with the free theory. Uniqueness is assumed for the fermi case by analogy with the bose case, for which uniqueness may be proved. \section{Making Contact with the Free Theory} In this section, we write down the kinetic energy operator in terms of the sea-displacements and determine the undetermined phase functional $ \Phi $ in the fermi case. The reason why the phase functional $ \Phi = 0 $ in the bose case will also be addressed here. Let us take the bose case first. It is clear at the outset that the choice $ \Phi = 0 $ satisfies the recursion Eq.~(\ref{recur}) for bosons when one assumes that $ m = 0 $, an even integer.
That this is the only possible choice is not at all clear. In order to verify this, let us write down the kinetic energy operator in terms of the density and its conjugate and show that an expansion in terms of the density fluctuations recovers the correct form of the dynamical density correlation function of the free theory (just the bose case). \begin{equation} K = \int\frac{ d{\bf{x}} }{2m}\mbox{ }[ \rho(\nabla\Pi)^{2} + \frac{(\nabla\rho)^{2}}{4\rho} ] + \mbox{c-number} \end{equation} It may now be verified that an expansion in terms of density fluctuations leads to a hamiltonian that describes free harmonic oscillators, which may be easily diagonalised; it may also be shown that this diagonalised form reproduces the correct dynamical density correlation functions. The expanded form of the operator in fourier space is reproduced below for convenience, \begin{equation} K = \sum_{ {\bf{q}} \neq 0 }N\mbox{ }\epsilon_{ {\bf{q}} }X_{ {\bf{q}} } X_{ -{\bf{q}} } + \sum_{ {\bf{q}} \neq 0 } \frac{ \epsilon_{ {\bf{q}} } }{4N}\rho_{ {\bf{q}} }\rho_{ -{\bf{q}} } \end{equation} A different choice of $ \Phi $ does not reproduce the free theory correctly. This is attested to by a simple calculation made in 1D. Let us assume a form, \begin{equation} \Phi([\rho];x) = 2\pi\int_{-\infty}^{+\infty}\mbox{ }dy\mbox{ } \theta(x-y)(\rho(y)-\rho_{0}) \end{equation} where $ \theta(x) $ is the Heaviside step function. The above form clearly satisfies the recursion but does not reproduce the free theory, as may be easily verified by the reader. The fermi case is somewhat more difficult. The difficulty is due to the fact that we must have a choice of $ \Phi \neq 0 $ that satisfies the recursion while at the same time reproducing the free case. We shall take the point of view that the simplest choice for $ \Phi $, namely linear in $ \rho $, should suffice. In any event, for the scheme to have practical significance, it is important for $ \Phi $ to be a simple functional of the density.
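Returning for a moment to the quadratic bose hamiltonian written above, its diagonalisation is elementary. Each fourier mode is a harmonic oscillator with stiffness $ N\epsilon_{ {\bf{q}} } $ and inverse mass $ \epsilon_{ {\bf{q}} }/4N $ (up to factor-of-two bookkeeping between the $ {\bf{q}} $ and $ -{\bf{q}} $ modes), so the excitation energy per mode is \begin{equation} \omega_{ {\bf{q}} } = 2\sqrt{ N\mbox{ }\epsilon_{ {\bf{q}} }\mbox{ }\frac{ \epsilon_{ {\bf{q}} } }{4N} } = \epsilon_{ {\bf{q}} } = \frac{ {\bf{q}}^{2} }{2m} \end{equation} the free-particle value, as it must be. If one adds, for illustration, a two-body repulsion of the form $ \sum_{ {\bf{q}} \neq 0 }\frac{ v_{ {\bf{q}} } }{2V}\rho_{ {\bf{q}} }\rho_{ -{\bf{q}} } $, the coefficient of $ \rho_{ {\bf{q}} }\rho_{ -{\bf{q}} } $ is shifted and the same computation gives \begin{equation} \omega_{ {\bf{q}} } = \sqrt{ \epsilon^{2}_{ {\bf{q}} } + 2\rho_{0}\mbox{ }v_{ {\bf{q}} }\mbox{ }\epsilon_{ {\bf{q}} } } \end{equation} which is the Bogoliubov dispersion alluded to in the introduction.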
We fix the coefficient in this ansatz by making contact with the free theory. Let us focus on the case of spinless fermions. In what follows, we restrict ourselves to zero temperature and a weakly non-ideal system; in this case, we are allowed to replace the $ {\bar{n}}_{ {\bf{k}} } $ in the definition of $ \Lambda_{ {\bf{k}} }({\bf{q}}) $ by its noninteracting value at zero temperature. More interesting situations arise when the quantity $ {\bar{n}}_{ {\bf{k}} } $ is evaluated self-consistently, but we shall relegate these issues to future publications \cite{Setlur}. From Eq.~(\ref{FIELDOP}) it is clear that redefinitions of the phase functional by amounts that do not depend on the density, for example $ \Phi([\rho];{\bf{x}}) \rightarrow \Phi([\rho];{\bf{x}}) + f({\bf{x}}) $, do not affect the formula for the field operator in Eq.~(\ref{FIELDOP}). Therefore, let us try the following ansatz for $ \Phi $. \begin{equation} \Phi([\rho];{\bf{x}}) = \sum_{ {\bf{q}} \neq 0 }U_{ {\bf{q}} }({\bf{x}}) \rho_{ {\bf{q}} } \end{equation} Let us now write down the kinetic energy operator for fermions using the results of the first section. The kinetic energy operator was written down as, \begin{equation} K = \sum_{ {\bf{k}}, {\bf{q}} }\frac{ {\bf{k.q}} }{m} a^{\dagger}_{ {\bf{k}} }({\bf{q}}) a_{ {\bf{k}} }({\bf{q}}) + N \mbox{ }\epsilon_{0} \end{equation} It has been demonstrated in Appendix C that if one uses the form of the density fluctuation operator obtained by dropping quadratic terms in the sea-displacements (the existence of such quadratic terms is hinted at in Appendix B), this reproduces the RPA dielectric function. Since we know from prior experience that RPA is exact in the ultra-high density limit, we can use these two pieces of information to deduce a formula for $ U_{ {\bf{q}} }({\bf{x}}) $ in terms of the properties of the free theory.
First let us write down the RPA-form of the density fluctuation operator, \begin{equation} {\tilde{\rho}}_{ {\bf{q}} } = \sum_{ {\bf{k}} } [\Lambda_{ {\bf{k}} }({\bf{q}}) a_{ {\bf{k}} }(-{\bf{q}}) + \Lambda_{ {\bf{k}} }(-{\bf{q}}) a^{\dagger}_{ {\bf{k}} }({\bf{q}})] \end{equation} where, \begin{equation} \Lambda_{ {\bf{k}} }({\bf{q}}) = \sqrt{ {\bar{n}}_{ {\bf{k}} + {\bf{q}}/2 } (1 - {\bar{n}}_{ {\bf{k}} - {\bf{q}}/2 }) } \end{equation} and the corresponding conjugate variable may be written down (that is, $ \Pi $ in fourier space), \begin{equation} {\tilde{X}}_{ {\bf{q}} } = (-\frac{1}{2\mbox{ }i\mbox{ }N\mbox{ }\epsilon_{ {\bf{q}} }}) \sum_{ {\bf{k}} } [\Lambda_{ {\bf{k}} }(-{\bf{q}}) \omega_{ {\bf{k}} }({\bf{q}}) a_{ {\bf{k}} }({\bf{q}}) - \Lambda_{ {\bf{k}} }({\bf{q}})\omega_{ {\bf{k}} }(-{\bf{q}}) a^{\dagger}_{ {\bf{k}} }(-{\bf{q}})] \end{equation} the dispersion is given by $ \omega_{ {\bf{k}} }({\bf{q}}) = \frac{ {\bf{k.q}} }{m} $. From this the fermi field operator may be written down as, \begin{equation} \psi({\bf{x}}) = e^{-i\mbox{ }U_{1}({\bf{x}}) }e^{i\mbox{ }U_{2}({\bf{x}})} \sqrt{\rho_{0}} \end{equation} where, \begin{equation} U_{1}({\bf{x}}) = \sum_{ {\bf{q}} \neq 0 }e^{i\mbox{ }{\bf{q.x}} } {\tilde{X}}_{ {\bf{q}} } \end{equation} \begin{equation} U_{2}({\bf{x}}) = \sum_{ {\bf{q}} \neq 0 }U_{ {\bf{q}} }({\bf{x}}) {\tilde{\rho}}_{ {\bf{q}} } \end{equation} Using these facts, let us compute the equal-time version of the propagator below in the bose language and in the usual fermi language and equate the two expressions. 
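As an aside before computing the propagators, the zero-temperature structure of $ \Lambda_{ {\bf{k}} }({\bf{q}}) $ can be made concrete with a small numerical sketch (a hypothetical 1D discretisation with illustrative parameters, not part of the formalism): the object is nonzero only for particle-hole pairs straddling the fermi surface.

```python
import numpy as np

# Zero-temperature occupation nbar(k) = theta(kf - |k|); the grid and the
# values of kf, q are purely illustrative.
kf, q = 1.0, 0.3
k = np.linspace(-2.0, 2.0, 8001)

def nbar(kk):
    return (np.abs(kk) < kf).astype(float)

# Lambda_k(q) = sqrt( nbar(k + q/2) * (1 - nbar(k - q/2)) )
lam = np.sqrt(nbar(k + q / 2.0) * (1.0 - nbar(k - q / 2.0)))
support = k[lam > 0]

# At T = 0 the square root is either 0 or 1, and it is nonzero only on a
# window of width |q| about a fermi point (here the left one, since
# |k + q/2| < kf and |k - q/2| > kf force k close to -kf for q > 0).
print(support.min(), support.max())
```

With this structure in hand, we return to the equal-time propagator.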
In the sea-displacement language it comes out as, \begin{equation} \langle \psi^{\dagger}({\bf{x}},t)\psi({\bf{x}}^{'},t)\rangle = \rho_{0}\mbox{ }e^{-\sum_{ {\bf{k}}, {\bf{q}} \neq 0} g^{*}_{ {\bf{k}}, {\bf{q}} }({\bf{x}})g_{ {\bf{k}}, {\bf{q}} }({\bf{x}}) } e^{\sum_{ {\bf{k}}, {\bf{q}} \neq 0}g^{*}_{ {\bf{k}}, {\bf{q}} }({\bf{x}}) g_{ {\bf{k}}, {\bf{q}} }({\bf{x}}^{'}) } \end{equation} where, \begin{equation} g_{ {\bf{k}}, {\bf{q}} }({\bf{x}}) = -e^{-i\mbox{ }{\bf{q.x}} } (\frac{1}{2\mbox{ }N\mbox{ }\epsilon_{ {\bf{q}} } }) \Lambda_{ {\bf{k}} }(-{\bf{q}})\omega_{ {\bf{k}} }({\bf{q}}) + i\mbox{ }U_{ {\bf{q}} }({\bf{x}})\Lambda_{ {\bf{k}} }(-{\bf{q}}) = -f^{*}_{ {\bf{k}}, {\bf{q}} }({\bf{x}}) \end{equation} In the original fermi language it is, \begin{equation} \langle \psi^{\dagger}({\bf{x}},t)\psi({\bf{x}}^{'},t)\rangle = \frac{1}{V}\sum_{ {\bf{q}} }e^{i\mbox{ }{\bf{q}}.({\bf{x}}^{'} - {\bf{x}})} \theta(k_{f} - |{\bf{q}}|) \end{equation} Set $ U_{ {\bf{q}} }({\bf{x}}) = e^{-i\mbox{ }{\bf{q.x}}}U_{0}({\bf{q}}) $ and $ U_{0}({\bf{q}}) $ is real. 
In order to derive a formula for $ U_{0}({\bf{q}}) $, let us equate the logarithm of the two expressions, \[ log(\langle \psi^{\dagger}({\bf{x}},t)\psi({\bf{x}}^{'},t)\rangle) = log(\rho_{0}) + \sum_{ {\bf{k}}, {\bf{q}} \neq 0 } [(\frac{1}{2\mbox{ }N\mbox{ }\epsilon_{ {\bf{q}} } })^{2} (\frac{ {\bf{k.q}} }{m})^{2} + (U_{0}({\bf{q}}))^{2}] (\Lambda_{ {\bf{k}} }(-{\bf{q}}))^{2}(e^{i{\bf{q}}.({\bf{x}}-{\bf{x}}^{'})} - 1) \] \begin{equation} = log(\rho_{0}) + log(1 + \frac{1}{N}\sum_{ {\bf{q}} \neq 0 } (e^{i{\bf{q}}.({\bf{x}}-{\bf{x}}^{'})} - 1)\theta(k_{f} - |{\bf{q}}|)) \approx log(\rho_{0}) + \frac{1}{N}\sum_{ {\bf{q}} \neq 0 } (e^{i{\bf{q}}.({\bf{x}}-{\bf{x}}^{'})} - 1)\theta(k_{f} - |{\bf{q}}|) \end{equation} This leads to the following formula for the coefficient, \begin{equation} U_{0}({\bf{q}}) = \frac{1}{N} (\frac{ \theta(k_{f} - |{\bf{q}}|) - w_{1}({\bf{q}}) }{ w_{2}({\bf{q}}) }) ^{\frac{1}{2}} \end{equation} \begin{equation} w_{1}({\bf{q}}) = (\frac{1}{4\mbox{ }N\mbox{ }\epsilon^{2}_{ {\bf{q}} }}) \sum_{ {\bf{k}} }(\frac{ {\bf{k.q}} }{m})^{2} (\Lambda_{ {\bf{k}} }(-{\bf{q}}))^{2} \end{equation} \begin{equation} w_{2}({\bf{q}}) = (\frac{1}{N})\sum_{ {\bf{k}} } (\Lambda_{ {\bf{k}} }(-{\bf{q}}))^{2} \end{equation} In fact, in principle we could go all the way back to the expression in Eq.(~\ref{FIELDOP}) and say that we now have a unique correspondence between the fermi field operator and the corresponding currents and densities. In the next section, we write down and diagonalise the hamiltonian of interacting systems. It is shown that when only the lowest order sea-displacement terms/condensate displacement terms are included, it amounts to doing RPA/Bogoliubov theory. This hamiltonian is diagonalised in the fermi and bose cases and the single-particle spectral functions are computed. 
The bose case comes out nicely since it is just the bogoliubov theory, but in the fermi case we have to take extra care in properly diagonalising the hamiltonian in order not to lose the particle-hole mode, the collective mode being more obvious. \section{Spectral Function of Interacting Systems} Let us make the following observation for future reference, \begin{equation} {\bf{RPA/Bogoliubov \rightarrow Leave \mbox{ }out\mbox{ }the \mbox{ } Quadratic \mbox{ }Part\mbox{ } in\mbox{ } Eq.(~\ref{FERMI})\mbox{ } and \mbox{ }Eq.(~\ref{BOSE}) }} \label{PRES} \end{equation} It is pertinent at this stage to remark on the physical meaning of the above relation. In the case of bosons, it is simple to visualise. Bogoliubov's theory is exact provided there is a large number of bosons in the zero-momentum state, so that we may legitimately replace the number operator by its c-number expectation value. It is also important that the system be weakly interacting, so that the fluctuations of the number operator in the zero-momentum state are small compared with its macroscopic expectation value. In the fermi case an analogous statement would be that the momentum distribution be sufficiently different from zero or unity for all values of the momenta; also, the fluctuations of the momentum distribution must be small. Thus for the fermi system, our approach gives good answers even for strong interactions that drive the momentum distribution away from zero or unity for all momenta, so long as the fluctuations around these nonideal averages are small. In any event, the philosophy is that we have an exactly solvable class of models that describe correlation effects in many different contexts, and this alone merits attention and serious study. In the end, experiments may have to be used to 'calibrate' these models so that they become a true description of the low-energy real world. \begin{center} {\bf{Part A}} \end{center} Let us focus on the bose case first.
The bogoliubov hamiltonian may be written down by following the prescription of Eq.(~\ref{PRES}). \begin{equation} H_{bog} = \sum_{ {\bf{k}} }\epsilon_{ {\bf{k}} } d^{\dagger}_{ (1/2){\bf{k}} }({\bf{k}})d_{ (1/2){\bf{k}} }({\bf{k}}) + \sum_{ {\bf{q}} \neq 0 }\frac{ v_{ {\bf{q}} } }{2} [{\sqrt{N_{0}}}d_{ -{\bf{q}}/2 }(-{\bf{q}}) + d^{\dagger}_{ {\bf{q}}/2 }({\bf{q}}){\sqrt{N_{0}}}] [{\sqrt{N_{0}}}d_{ {\bf{q}}/2 }({\bf{q}}) + d^{\dagger}_{ -{\bf{q}}/2 }(-{\bf{q}}){\sqrt{N_{0}}}] \end{equation} In the above equation $ N_{0} $ is an operator; therefore this is the non-local bogoliubov hamiltonian. But we shall assume that it is legitimate to replace it with its c-number expectation value. It would be interesting to see what corrections to bogoliubov theory come about by retaining this square root of the operator. These correction terms tell us that fluctuations of the number of particles in the condensate are important and lead to correlations beyond the bogoliubov theory. This is in addition to correlations coming from the quadratic terms that the prescription Eq.(~\ref{PRES}) neglects. When these approximations are implemented, and a further approximation $ N_{0} \approx N $ is made, it becomes exactly the theory introduced by Bogoliubov, and by Bogoliubov and Zubarev\cite{Bog}.
It may be diagonalised quite easily, \begin{equation} H_{bog} = \sum_{ {\bf{q}} }\omega_{ {\bf{q}} } f^{\dagger}_{ {\bf{q}} }f_{ {\bf{q}} } \end{equation} and, \begin{equation} f_{ {\bf{q}} } = (\frac{ \omega_{ {\bf{q}} } + \epsilon_{ {\bf{q}} } + \rho_{0}v_{ {\bf{q}} } } { 2 \mbox{ }\omega_{ {\bf{q}} } })^{\frac{1}{2}} d_{ {\bf{q}}/2 }({\bf{q}}) + (\frac{ -\omega_{ {\bf{q}} } + \epsilon_{ {\bf{q}} } + \rho_{0}v_{ {\bf{q}} } } { 2 \mbox{ }\omega_{ {\bf{q}} } })^{\frac{1}{2}} d^{\dagger}_{ -{\bf{q}}/2 }(-{\bf{q}}) \end{equation} \begin{equation} d_{ {\bf{q}}/2 }({\bf{q}}) = (\frac{ \omega_{ {\bf{q}} } + \epsilon_{ {\bf{q}} } + \rho_{0}v_{ {\bf{q}} } } { 2 \mbox{ }\omega_{ {\bf{q}} } })^{\frac{1}{2}}f_{ {\bf{q}} } - (\frac{ -\omega_{ {\bf{q}} } + \epsilon_{ {\bf{q}} } + \rho_{0}v_{ {\bf{q}} } } { 2 \mbox{ }\omega_{ {\bf{q}} } })^{\frac{1}{2}}f^{\dagger}_{ -{\bf{q}} } \end{equation} The dispersion is given by, \begin{equation} \omega_{ {\bf{q}} } = \sqrt{ \epsilon^{2}_{ {\bf{q}} } + 2 \rho_{0}v_{ {\bf{q}} }\epsilon_{ {\bf{q}} } } \end{equation} here, $ \rho_{0} $ is the density of bosons in the condensate(not the overall density). From this one may deduce the filling fraction and dynamical structure factor, \newline {\center{ {\bf{FILLING FRACTION}} }} \begin{equation} f_{0} = N_{0}/N = 1 - (1/N)\sum_{ {\bf{q}} }\langle d^{\dagger}_{ (1/2){\bf{q}} }({\bf{q}}) d_{ (1/2){\bf{q}} }({\bf{q}}) \rangle \end{equation} in other words, \begin{equation} f_{0} = N_{0}/N = 1 - (1/2\pi^{2}\rho)\int_{0}^{\infty} \mbox{ }dq\mbox{ } q^{2} ( \frac{ -\omega_{ {\bf{q}} } + \epsilon_{ {\bf{q}} } + \rho_{0}v_{ {\bf{q}} } }{ 2\mbox{ }\omega_{ {\bf{q}} } } ) \end{equation} here $ \rho $ is the total density of bosons including those that are not in the condensate. 
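As a numerical sanity check on the filling fraction (a sketch only, in units $ \hbar = m = 1 $, with an assumed contact interaction $ v_{ {\bf{q}} } = g = 4\pi a/m $ and the weak-coupling replacement $ \rho_{0} \approx \rho $), the integral above reproduces the standard depletion $ (8/3){\sqrt{\rho a^{3}/\pi}} $:

```python
import numpy as np

# Depletion 1 - f0 = (1/(2 pi^2 rho)) Int_0^inf dq q^2 (-w_q + e_q + rho0 v_q)/(2 w_q),
# evaluated for a contact interaction v_q = g = 4 pi a / m with rho0 ~ rho.
m, a, rho = 1.0, 0.01, 1.0                       # illustrative weak-coupling values
g = 4.0 * np.pi * a / m

q = np.linspace(1e-6, 400.0, 2_000_000)          # upper cutoff: tail falls off as 1/q^2
eps = q**2 / (2.0 * m)
omega = np.sqrt(eps**2 + 2.0 * rho * g * eps)    # Bogoliubov dispersion
integrand = q**2 * (-omega + eps + rho * g) / (2.0 * omega)
integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(q))  # trapezoid rule
depletion = integral / (2.0 * np.pi**2 * rho)

analytic = (8.0 / 3.0) * np.sqrt(rho * a**3 / np.pi)   # standard weak-coupling result
print(depletion, analytic)
```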
\newline {\bf{DYNAMICAL STRUCTURE FACTOR}} \begin{equation} S^{>}({\bf{q}},t) = \langle \rho_{ {\bf{q}} }(t) \rho_{ -{\bf{q}} }(0)\rangle = N_{0}\langle [ d_{ -(1/2){\bf{q}} }(-{\bf{q}})(t) + d^{\dagger}_{ (1/2){\bf{q}} }({\bf{q}})(t) ] [ d_{ (1/2){\bf{q}} }({\bf{q}})(0) + d^{\dagger}_{ -(1/2){\bf{q}} }(-{\bf{q}})(0) ] \rangle \end{equation} in other words, \begin{equation} S^{>}({\bf{q}},t) = N_{0} (\frac{ \epsilon_{ {\bf{q}} } }{\omega_{ {\bf{q}} } }) \mbox{ }exp(-i \mbox{ }\omega_{ {\bf{q}} }t) \end{equation} This method is truly powerful when applied to compute single-particle properties. The single-particle green function is difficult to obtain using conventional diagrammatic methods or otherwise (see Kadanoff and Baym Ref.(~\onlinecite{Baym})). For this one must first write down the field operator in terms of the condensate displacements. \begin{equation} \Pi({\bf{x}}) \approx (\frac{i}{2\sqrt{N_{0}}}) \sum_{ {\bf{q}} }exp(i{\bf{q.x}})[d_{ {\bf{q}}/2 }({\bf{q}}) - d^{\dagger}_{ -{\bf{q}}/2 }(-{\bf{q}})] \end{equation} and the expression for the field operator is, \begin{equation} \psi({\bf{x}}) \approx e^{-i\mbox{ }\Pi({\bf{x}}) } \mbox{ } \sqrt{ \rho } \end{equation} The propagator (all propagators in this article are evaluated at zero temperature, which means we may set the chemical potential equal to zero in the bose case) may now be computed and shown to be equal to the free propagator at ultra-high density. The interacting case is more interesting.
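Before writing down the time-evolved operators, note that the equal-time limit of the structure factor above gives, per particle, the Feynman form $ S({\bf{q}}) = \epsilon_{ {\bf{q}} }/\omega_{ {\bf{q}} } $ (with $ N_{0} \approx N $ assumed). A brief numerical sketch with illustrative parameters checks its limits:

```python
import numpy as np

# Static structure factor per particle from the Bogoliubov S^>(q, t=0):
# S(q) = eps_q / omega_q (the Feynman form), with N0 ~ N assumed.
m, rho0, g = 1.0, 1.0, 0.5        # illustrative parameters, hbar = 1
c = np.sqrt(rho0 * g / m)         # Bogoliubov sound speed

def S(q):
    eps = q**2 / (2.0 * m)
    return eps / np.sqrt(eps**2 + 2.0 * rho0 * g * eps)

print(S(1e-3), 1e-3 / (2.0 * m * c))   # phonon regime: S(q) ~ q/(2mc)
print(S(1e3))                          # free-particle regime: S(q) -> 1
```

Both the phonon regime $ S \approx |{\bf{q}}|/(2mc) $ and the free-particle limit $ S \rightarrow 1 $ come out correctly.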
The time-evolved version is, \begin{equation} \psi({\bf{x}},t) \approx e^{-i\mbox{ }\Pi({\bf{x}},t) } \mbox{ } \sqrt{ \rho } \end{equation} and \begin{equation} \Pi({\bf{x}},t) = (\frac{i}{2\sqrt{N_{0}}}) \sum_{ {\bf{q}} }exp(i{\bf{q.x}}) (A_{ {\bf{q}} }+B_{ {\bf{q}} }) [f_{ {\bf{q}} } e^{-i\omega_{ {\bf{q}} }t } - f^{\dagger}_{ -{\bf{q}} }e^{i\omega_{ {\bf{q}} }t }] \end{equation} \begin{equation} \langle \psi^{\dagger}({\bf{0}},0)\psi({\bf{x}},t)\rangle = \rho\mbox{ }\langle \mbox{ }e^{i\mbox{ }\Pi({\bf{0}},0)} e^{-i\mbox{ }\Pi({\bf{x}},t)} \mbox{ }\rangle \end{equation} In order to ensure that the free case is properly recovered, we use a somewhat illegal trick, though one that should be palatable to most physicists: multiply and divide by the free propagator, using in the denominator the free propagator predicted by the bosonized theory and in the numerator the free propagator obtained from elementary considerations. \begin{equation} \langle \psi^{\dagger}({\bf{0}},0)\psi({\bf{x}},t)\rangle = exp\mbox{ }[ \mbox{ }(\frac{1}{4N_{0}}) \sum_{ {\bf{q}} }f_{ {\bf{q}} }({\bf{x}},t) \mbox{ }] \langle \psi^{\dagger}({\bf{0}},0)\psi({\bf{x}},t)\rangle_{free} \end{equation} where, \begin{equation} A_{ {\bf{q}} } = (\frac{ \omega_{ {\bf{q}} } + \epsilon_{ {\bf{q}} } + \rho_{0}v_{ {\bf{q}} } }{2 \mbox{ }\omega_{ {\bf{q}} } })^{\frac{1}{2}} \end{equation} \begin{equation} B_{ {\bf{q}} } = (\frac{ -\omega_{ {\bf{q}} } + \epsilon_{ {\bf{q}} } + \rho_{0}v_{ {\bf{q}} } }{2 \mbox{ }\omega_{ {\bf{q}} } })^{\frac{1}{2}} \end{equation} Similarly, \[ \langle \psi({\bf{x}},t)\psi^{\dagger}({\bf{0}},0) \rangle = \rho\langle e^{-i\mbox{ }\Pi({\bf{x}},t)} e^{i\mbox{ }\Pi({\bf{0}},0)} \rangle \] \begin{equation} = exp\mbox{ }[ \mbox{ }(\frac{1}{4N_{0}}) \sum_{ {\bf{q}} }f_{ {\bf{q}} }(-{\bf{x}},-t) \mbox{ }] \langle \psi({\bf{x}},t)\psi^{\dagger}({\bf{0}},0)\rangle_{free} \end{equation} here, \begin{equation} f_{ {\bf{q}} }({\bf{x}},t) = (e^{-i\mbox{ 
}{\bf{q.x}}}e^{i\mbox{ }\omega_{ {\bf{q}} }t} -1)(A_{ {\bf{q}} } + B_{ {\bf{q}} })^{2} - (e^{-i\mbox{ }{\bf{q.x}}}e^{i\mbox{ }\epsilon_{ {\bf{q}} }t} -1) \end{equation} From Kadanoff and Baym \cite{Baym} the spectral function may be deduced as follows, \newline {\bf{THE SPECTRAL FUNCTION}} \begin{equation} A({\bf{p}},\omega) = \int \mbox{ }{ d{\bf{x}} }\mbox{ } \int_{ -\infty }^{ +\infty }\mbox{ }dt\mbox{ } e^{-i{\bf{p.x}} + i\mbox{ }\omega\mbox{ }t} \{ exp[\frac{1}{4N_{0}}\sum_{ {\bf{q}} }f_{ {\bf{q}} }(-{\bf{x}},-t) ] (\rho + u({\bf{x}},t)) - exp[\frac{1}{4N_{0}}\sum_{ {\bf{q}} }f_{ {\bf{q}} }({\bf{x}},t) ] \rho \} \end{equation} and, \begin{equation} u({\bf{x}},t) = \frac{1}{V}\sum_{ {\bf{k}} }e^{i\mbox{ }{\bf{k.x}} } e^{-i\mbox{ }\epsilon_{ {\bf{k}} }t } \end{equation} The above is the exact answer for the spectral function provided bogoliubov's theory is adequate. Now we move on to the fermi case, which is far more interesting and important. \begin{center} {\bf{Part B}} \end{center} In order to compute the full propagator for these systems, it is desirable to first ascertain under what conditions these formulas are going to be valid. The answer is given by the assertion in Eq.(~\ref{PRES}). Thus these answers for the single-particle properties are valid in the same limit in which RPA/Bogoliubov's theory is exact. The assertion in the bose case in Eq.(~\ref{PRES}) has been verified. In order to verify the analogous assertion in the fermi case, we have to diagonalise the full hamiltonian given below (the fact that the RPA dielectric function comes out naturally from the prescription in Eq.(~\ref{PRES}) will be demonstrated in Appendix C).
In the fermi case, we have to diagonalise the full hamiltonian given below, \begin{equation} H = \sum_{ {\bf{k}}, {\bf{q}} }\omega_{ {\bf{k}} }({\bf{q}}) a^{\dagger}_{ {\bf{k}} }({\bf{q}})a_{ {\bf{k}} }({\bf{q}}) + \sum_{ {\bf{q}} \neq 0 }\frac{ v_{ {\bf{q}} } }{2V} \sum_{ {\bf{k}}, {\bf{k}}^{'} } [\Lambda_{ {\bf{k}} }({\bf{q}})a_{ {\bf{k}} }(-{\bf{q}}) + \Lambda_{ {\bf{k}} }(-{\bf{q}})a^{\dagger}_{ {\bf{k}} }({\bf{q}})] [\Lambda_{ {\bf{k}}^{'} }(-{\bf{q}})a_{ {\bf{k}}^{'} }({\bf{q}}) + \Lambda_{ {\bf{k}}^{'} }({\bf{q}})a^{\dagger}_{ {\bf{k}}^{'} }(-{\bf{q}})] \label{FULLHAM} \end{equation} here $ \omega_{ {\bf{k}} }({\bf{q}}) = \frac{ {\bf{k.q}} }{m}\Lambda_{ {\bf{k}} }(-{\bf{q}}) $. The zero-temperature case is somewhat special: here we may assume that the sea-boson annihilates the non-interacting fermi sea, and this means that we have to introduce a factor of $ \Lambda_{ {\bf{k}} }(-{\bf{q}}) $ in the dispersion, which makes the kinetic energy operator positive definite. In order to diagonalise this we proceed as follows. Assume that the diagonalised form looks like this, \begin{equation} H = \sum_{ i, {\bf{q}} }{\tilde{\omega}}_{ i }({\bf{q}}) b^{\dagger}_{ i }({\bf{q}})b_{ i }({\bf{q}}) \end{equation} here $ b_{ i }({\bf{q}}) $ and $ b^{\dagger}_{ i }({\bf{q}}) $ are "dressed-sea-displacement" operators. The objects $ i $ take on values from an index set. The size of this set is the big issue here. Is it finite, does it have the same size as the number of points in k-space, or is it equal to the number of points on the fermi surface? We shall find that answers to these questions are hard, and may be addressed only after coming to an agreement as to what sort of physics we hope to capture. Indeed, in many cases in physics one is forced to bend the rules or reinterpret mathematical formulae in order to capture what one is looking for. We find that we have to resort to such methods here as well. In particular, we find the following general feature.
$ {\tilde{\omega}}_{i}({\bf{q}}) $ are the roots of the RPA dielectric function. Now the RPA dielectric function, as it is usually introduced in the textbooks, is a complex quantity. Therefore finding roots cannot mean finding the zeros of both the real and imaginary parts at the same time, for this gives no root: the real and the imaginary part cannot both be zero simultaneously. This leaves us with the following options. One is to reinterpret the zeros of the RPA dielectric function as the maxima of the dynamical structure factor, in which case one gets both the particle-hole mode as well as the collective mode. The better option is to delay taking the thermodynamic limit until after all the summations over momenta have been performed; then assume that the density is high enough, and only at the very end go to the thermodynamic limit. This ensures that both the particle-hole mode and the collective mode are properly recovered. These are admittedly difficult issues to grapple with, and the authors have attempted a different approach to deal with them. However, the traditional viewpoint on this matter is presented in the paper by Castro-Neto and Fradkin\cite{Neto}. The diagonalisation proceeds as follows, \begin{equation} b_{i}({\bf{q}}) = \sum_{ {\bf{k}} } [b_{i}({\bf{q}}), a^{\dagger}_{ {\bf{k}} }({\bf{q}})] a_{ {\bf{k}} }({\bf{q}}) - \sum_{ {\bf{k}} } [b_{i}({\bf{q}}), a_{ {\bf{k}} }(-{\bf{q}})] a^{\dagger}_{ {\bf{k}} }(-{\bf{q}}) \end{equation} the corresponding inverted formula reads as, \begin{equation} a_{ {\bf{k}} }({\bf{q}}) = \sum_{ i } [a_{ {\bf{k}} }({\bf{q}}), b^{\dagger}_{ i }({\bf{q}})] b_{ i }({\bf{q}}) - \sum_{ i } [a_{ {\bf{k}} }({\bf{q}}), b_{ i }(-{\bf{q}})] b^{\dagger}_{ i }(-{\bf{q}}) \end{equation} The quantities $ [b_{i}({\bf{q}}), a_{ {\bf{k}} }(-{\bf{q}})] $ and $ [a_{ {\bf{k}} }({\bf{q}}), b^{\dagger}_{ i }({\bf{q}})] $ are c-numbers and real.
The $ i $ here could span a continuum (particle-hole mode) or be finite( actually there is just one, collective mode ). The diagonalisation continues unabated, \begin{equation} [b_{i}({\bf{q}}), a^{\dagger}_{ {\bf{k}} }({\bf{q}})] = (\frac{ \Lambda_{ {\bf{k}} }(-{\bf{q}}) } { {\tilde{\omega}}_{i}({\bf{q}}) - \frac{ {\bf{k.q}} }{m} }) g_{i}({\bf{q}}) = [a_{ {\bf{k}} }({\bf{q}}), b^{\dagger}_{i}({\bf{q}})] \end{equation} \begin{equation} [b_{i}({\bf{q}}), a_{ {\bf{k}} }(-{\bf{q}})] = -(\frac{ \Lambda_{ {\bf{k}} }({\bf{q}}) } { {\tilde{\omega}}_{i}({\bf{q}}) - \frac{ {\bf{k.q}} }{m} }) g_{i}({\bf{q}}) \end{equation} \begin{equation} [a_{ {\bf{k}} }({\bf{q}}), b_{i}(-{\bf{q}})] = (\frac{ \Lambda_{ {\bf{k}} }(-{\bf{q}}) } { {\tilde{\omega}}_{i}(-{\bf{q}}) + \frac{ {\bf{k.q}} }{m} }) g_{i}(-{\bf{q}}) \end{equation} \begin{equation} g_{i}({\bf{q}}) = [\sum_{ {\bf{k}} } \frac{ n_{F}({\bf{k-q/2}}) - n_{F}({\bf{k+q/2}}) } { ( {\tilde{\omega}}_{i}({\bf{q}}) - \frac{ {\bf{k.q}} }{m} )^{2} }] ^{-\frac{1}{2}} \end{equation} The eigenvalues $ {\tilde{\omega}}_{i}({\bf{q}}) $ are zeros of the real part of the RPA dielectric function. The RPA dielectric function is written down below, \begin{equation} \epsilon_{RPA}({\bf{q}}, {\tilde{\omega}}) = 1 + \frac{ v_{ {\bf{q}} } }{V} \sum_{ {\bf{k}} } \frac{ n_{F}({\bf{k}} + {\bf{q}}/2) - n_{F}({\bf{k}} - {\bf{q}}/2) } { {\tilde{\omega}} - \frac{ {\bf{k.q}} }{m} } \label{RPA} \end{equation} As it stands, the above sum is ill-defined. In particular, if one takes the thermodynamic limit at the outset, and treats the above sum as the principal part, then one gets the real part of the RPA dielectric function. On the other hand, if one defers the taking of the thermodynamic limit until the very end, and instead takes the high density limit first, then one obtains the particle-hole mode as the argument below will attest. 
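As a parenthetical numerical check of this prescription: in 1D at zero temperature the principal-value sum can be done in closed form, and the collective-mode zero of the real part of the RPA dielectric function located by bisection; it agrees with the closed-form 1D dispersion quoted further below. (A sketch with illustrative parameters in units $ \hbar = m = 1 $; the constant $ v_{q} $ is an assumption.)

```python
import numpy as np

# Real part of the 1D RPA dielectric function at T = 0, principal value done
# analytically: eps(q, w) = 1 + (1/lam) * ln((w^2 - wp^2)/(w^2 - wm^2)),
# with wp = (q/m)(kf + q/2), wm = (q/m)(kf - q/2), lam = 2*pi*q/(m*vq).
m, kf, vq, q = 1.0, 1.0, 2.0, 0.3   # illustrative parameters

wp = (q / m) * (kf + q / 2.0)
wm = (q / m) * (kf - q / 2.0)
lam = 2.0 * np.pi * q / (m * vq)

def eps_rpa(w):
    return 1.0 + (1.0 / lam) * np.log((w**2 - wp**2) / (w**2 - wm**2))

# eps_rpa increases monotonically from -inf (at w -> wp+) to 1 (at w -> inf),
# so there is a unique collective-mode zero above the particle-hole band.
lo, hi = wp * (1.0 + 1e-12), 10.0 * wp
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if eps_rpa(mid) < 0.0:
        lo = mid
    else:
        hi = mid
root = 0.5 * (lo + hi)

# Closed form quoted in the text:
# w^2 = (q/m)^2 ((kf + q/2)^2 - (kf - q/2)^2 exp(-lam)) / (1 - exp(-lam))
closed = (q / m) * np.sqrt(((kf + q / 2.0)**2 - (kf - q / 2.0)**2 * np.exp(-lam))
                           / (1.0 - np.exp(-lam)))
print(root, closed)
```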
Let us rewrite the sum in the RPA dielectric function as, \[ \epsilon_{RPA}({\bf{q}}, {\tilde{\omega}}) = 1 + \frac{ v_{ {\bf{q}} } }{V} \sum_{ {\bf{k}} } \frac{ \Lambda^{2}_{ {\bf{k}} }({\bf{q}}) - \Lambda^{2} _{ {\bf{k}} }(-{\bf{q}}) } { {\tilde{\omega}} - \frac{ {\bf{k.q}} }{m} } \] \[ = 1 + \frac{ v_{ {\bf{q}} } }{V} \sum_{ {\bf{k}} }\frac{ \Lambda^{2}_{ {\bf{k}} }({\bf{q}}) } { {\tilde{\omega}} + \omega_{ {\bf{k}} }(-{\bf{q}}) } - \frac{ v_{ {\bf{q}} } }{V} \sum_{ {\bf{k}} \neq {\bf{k}}_{i}}\frac{ \Lambda^{2}_{ {\bf{k}} }(-{\bf{q}}) } { {\tilde{\omega}} - \omega_{ {\bf{k}} }({\bf{q}}) } \] \begin{equation} - \frac{ v_{ {\bf{q}} } }{V} \frac{ \Lambda^{2}_{ {\bf{k}}_{i} }(-{\bf{q}}) } { {\tilde{\omega}} - \omega_{ {\bf{k}}_{i} }({\bf{q}}) } \label{REDU} \end{equation} Let us now assume that the volume $ V $ is fixed and go to the high density limit ($ k_{f} \rightarrow \infty $, or equivalently $ |{\bf{q}}| << k_{f} $). Then, due to the fact below, \begin{equation} \Lambda_{ {\bf{k}} }(-{\bf{q}}) =0 \mbox{ }; \mbox{ }unless \mbox{ }|{\bf{k}}| \approx k_{f} \mbox{ }and \mbox{ } {\bf{k}}.{\bf{q}} > 0 \end{equation} the total number of terms in the above two sums is a small fraction of the total volume; as $ k_{f} $ keeps increasing, this fraction gets smaller and smaller until it becomes small compared to unity and may be neglected. This means, \begin{equation} 1 - (\frac{ v_{ {\bf{q}} } }{V}) \frac{ \Lambda^{2}_{ {\bf{k}}_{i} }(-{\bf{q}}) } { {\tilde{\omega}}_{i}({\bf{q}}) - \omega_{ {\bf{k}}_{i} }({\bf{q}}) } = 0 \end{equation} from this we may deduce the particle-hole mode as, \begin{equation} {\tilde{\omega}}_{i}( {\bf{q}} ) = \omega_{ {\bf{k}}_{i} }({\bf{q}}) + (\frac{ v_{ {\bf{q}} } }{V})\Lambda^{2}_{ {\bf{k}}_{i} }(-{\bf{q}}) \end{equation} As is clear from the above derivation, two points must be borne in mind: one is that we have to defer the taking of the thermodynamic limit until the very end; the other is to exploit the property of
the object $ \Lambda_{ {\bf{k}} }({\bf{q}}) $, namely that when $ |{\bf{q}}| << k_{f} $ and $ \Lambda_{ {\bf{k}} }(-{\bf{q}}) = 1 $ (at zero temperature $ \Lambda_{ {\bf{k}} }(-{\bf{q}}) $ is always either $0$ or $1$), then necessarily $ |{\bf{k}}| \approx k_{f} $. Alternatively, we can solve for $ {\tilde{\omega}}_{i}( {\bf{q}} ) $ as shown below, \begin{equation} {\tilde{\omega}}_{i}( {\bf{q}} ) = \omega_{ {\bf{k}}_{i} }({\bf{q}}) + (\frac{ v_{ {\bf{q}} } }{V}) \frac{ \Lambda^{2}_{ {\bf{k}}_{i} }(-{\bf{q}}) } { 1 + \frac{ v_{ {\bf{q}} } }{V} \sum_{ {\bf{k}} }\frac{ \Lambda^{2}_{ {\bf{k}} }({\bf{q}}) } { \omega_{ {\bf{k}}_{i} }({\bf{q}}) + \omega_{ {\bf{k}} }(-{\bf{q}}) } - \frac{ v_{ {\bf{q}} } }{V} \sum_{ {\bf{k}} \neq {\bf{k}}_{i}}\frac{ \Lambda^{2}_{ {\bf{k}} }(-{\bf{q}}) } { \omega_{ {\bf{k}}_{i} }({\bf{q}}) - \omega_{ {\bf{k}} }({\bf{q}}) } } \end{equation} We shall find these formulas useful later on when we try to write down the propagator. The collective mode in 1D and 3D may be written down as shown below, \begin{equation} \omega_{c-1D}(q) = (\frac{ |q| }{m}) \sqrt{ \frac{ (k_{f} + q/2)^{2} - (k_{f} - q/2)^{2}exp(-\lambda(q)) } { 1 - exp(-\lambda(q)) } } \end{equation} \begin{equation} \lambda(q) = (\frac{2 \pi q}{m})(\frac{1}{v_{q}}) \end{equation} we may also write, \begin{equation} \omega^{2}_{c-1D}(q) = (\frac{ k_{f}q }{m})^{2} + \epsilon_{q}^{2} + 2\mbox{ }\epsilon_{q}(\frac{ k_{f}q }{m})coth(\frac{\lambda(q)}{2}) \label{1DDISP} \end{equation} In 3D it is more familiar \cite{Mahan} (for coulomb repulsion only), \begin{equation} \omega_{c-3D}( {\bf{q}} ) = \omega_{p}[ 1 + \frac{3}{10} \frac{ (q\mbox{ }v_{f})^{2} }{\omega^{2}_{p}} ] \label{3DDISP} \end{equation} For more general forms of interaction in 3D the answer may be obtained by computing the roots of the equation below, \begin{equation} 1 - \frac{ n_{0} (v_{ {\bf{q}} }q^{2})/m }{\omega^{2}} \{1 + \frac{1}{ \omega^{2}}[\frac{3}{5}(qv_{F})^{2} - \epsilon^{2}_{ {\bf{q}} }] \} = 0 \end{equation} In 2D, the answer is not available in the books
and may be deduced after some algebra as, \begin{equation} \omega_{c-2D}( {\bf{q}} ) = \frac{ (\frac{k_{f}|{\bf{q}}|}{m}) (1 + \frac{2\pi/m}{v_{ {\bf{q}} } }) } { \sqrt{ \frac{4\pi/m}{ v_{ {\bf{q}} } } + (\frac{2\pi/m}{ v_{ {\bf{q}} } })^{2} } } \end{equation} After all this, it is relatively simple to deduce the full propagator. For reference the free propagator is \[ \langle \psi^{\dagger}({\bf{x}},t)\psi({\bf{x}}^{'},t^{'})\rangle = \rho_{0} e^{-\sum_{ {\bf{k}}, {\bf{q}} \neq 0 }g^{*}_{ {\bf{k}}, {\bf{q}} }({\bf{x}}) g_{ {\bf{k}}, {\bf{q}} }({\bf{x}}) } \] \begin{equation} \times e^{\sum_{ {\bf{k}}, {\bf{q}} \neq 0 }g^{*}_{ {\bf{k}}, {\bf{q}} }({\bf{x}}) g_{ {\bf{k}}, {\bf{q}} }({\bf{x}}^{'}) e^{i\mbox{ }\omega_{ {\bf{k}} }({\bf{q}}) (t^{'}-t)} } \end{equation} \[ \langle \psi({\bf{x}}^{'},t^{'})\psi^{\dagger}({\bf{x}},t)\rangle = \rho_{0} e^{-\sum_{ {\bf{k}}, {\bf{q}} \neq 0 }f^{*}_{ {\bf{k}}, {\bf{q}} }({\bf{x}}^{'}) f_{ {\bf{k}}, {\bf{q}} }({\bf{x}}^{'}) } \] \begin{equation} \times e^{\sum_{ {\bf{k}}, {\bf{q}} \neq 0 }f^{*}_{ {\bf{k}}, {\bf{q}} }({\bf{x}}) f_{ {\bf{k}}, {\bf{q}} }({\bf{x}}^{'}) e^{i\mbox{ }\omega_{ {\bf{k}} }({\bf{q}}) (t-t^{'})} } \end{equation} \begin{equation} f_{ {\bf{k}}, {\bf{q}} }({\bf{x}}) = e^{i {\bf{q.x}} }(\frac{1}{2 \mbox{ }N\mbox{ }\epsilon_{ {\bf{q}} } }) \Lambda_{ {\bf{k}} }(-{\bf{q}})\omega_{ {\bf{k}} }({\bf{q}}) + i\mbox{ }U_{ -{\bf{q}} }({\bf{x}})\Lambda_{ {\bf{k}} }(-{\bf{q}}) \end{equation} \begin{equation} g_{ {\bf{k}}, {\bf{q}} }({\bf{x}}) = -e^{-i {\bf{q.x}} }(\frac{1}{2 \mbox{ }N\mbox{ }\epsilon_{ {\bf{q}} } }) \Lambda_{ {\bf{k}} }(-{\bf{q}})\omega_{ {\bf{k}} }({\bf{q}}) + i\mbox{ }U_{ {\bf{q}} }({\bf{x}})\Lambda_{ {\bf{k}} }(-{\bf{q}}) = -f^{*}_{ {\bf{k}}, {\bf{q}} }({\bf{x}}) \end{equation} and, \begin{equation} {\mathcal{Z}}_{0} = e^{i\mbox{ }\sum_{ {\bf{k}}, {\bf{q}} \neq 0 } U_{0}({\bf{q}})(\frac{1}{2 \mbox{ }N\epsilon_{ {\bf{q}} }}) (\Lambda_{ {\bf{k}} }(-{\bf{q}}))^{2}\omega_{ {\bf{k}} }({\bf{q}}) } 
e^{\frac{1}{2} \mbox{ }\sum_{ {\bf{k}}, {\bf{q}} \neq 0 } (\frac{1}{2 \mbox{ }N\epsilon_{ {\bf{q}} }})^{2} (\Lambda_{ {\bf{k}} }(-{\bf{q}}))^{2}(\omega_{ {\bf{k}} }({\bf{q}}))^{2} } e^{\frac{1}{2} \mbox{ }\sum_{ {\bf{k}}, {\bf{q}} \neq 0 } (U_{0}({\bf{q}}))^{2} (\Lambda_{ {\bf{k}} }(-{\bf{q}}))^{2} } \end{equation} The time evolved field operator in the interacting case is, \begin{equation} \psi^{\dagger}({\bf{x}},t) = exp( \sum_{ {\bf{k}}, {\bf{q}}\neq 0, i }U^{i}_{ {\bf{k}}, {\bf{q}} } ({\bf{x}})b^{\dagger}_{i}({\bf{q}}) e^{i\mbox{ }{\tilde{\omega}}_{i}({\bf{q}}) t} ) exp( -\sum_{ {\bf{k}}, {\bf{q}}\neq 0, i }U^{*i}_{ {\bf{k}}, {\bf{q}} } ({\bf{x}})b_{i}({\bf{q}}) e^{-i\mbox{ }{\tilde{\omega}}_{i}({\bf{q}}) t} ) {\mathcal{R}}_{0}{\mathcal{Z}}^{*}_{0}\sqrt{\rho_{0}} \end{equation} where, \begin{equation} U^{i}_{ {\bf{k}}, {\bf{q}} } = f^{*}_{ {\bf{k}}, {\bf{q}} }({\bf{x}}) [a_{ {\bf{k}} }({\bf{q}}), b^{\dagger}_{i}({\bf{q}})] + f_{ {\bf{k}}, -{\bf{q}} }({\bf{x}}) [a_{ {\bf{k}} }(-{\bf{q}}), b_{i}({\bf{q}})] \end{equation} \[ {\mathcal{R}}_{0} = exp(-\sum_{ {\bf{k}}, {\bf{q}}, i } f^{*}_{ {\bf{k}}, {\bf{q}} }({\bf{x}})f_{ {\bf{k}}, {\bf{q}} }({\bf{x}}) [b_{i}({\bf{q}}), a^{\dagger}_{ {\bf{k}} }({\bf{q}})] [a_{ {\bf{k}} }({\bf{q}}), b^{\dagger}_{i}({\bf{q}})]) \] \[ \times exp(-\frac{1}{2}\sum_{ {\bf{k}}, {\bf{q}}, i } f^{*}_{ {\bf{k}}, {\bf{q}} }({\bf{x}})f^{*}_{ {\bf{k}}, -{\bf{q}} }({\bf{x}}) [a_{ {\bf{k}} }(-{\bf{q}}), b_{i}({\bf{q}})] [a_{ {\bf{k}} }({\bf{q}}), b^{\dagger}_{i}({\bf{q}})]) \] \begin{equation} \times exp(-\frac{1}{2}\sum_{ {\bf{k}}, {\bf{q}}, i } f_{ {\bf{k}}, {\bf{q}} }({\bf{x}})f_{ {\bf{k}}, -{\bf{q}} }({\bf{x}}) [a_{ {\bf{k}} }(-{\bf{q}}), b_{i}({\bf{q}})] [a_{ {\bf{k}} }({\bf{q}}), b^{\dagger}_{i}({\bf{q}})]) \end{equation} The two full fermi propagators may be written down as, \begin{equation} \langle \psi^{\dagger}({\bf{x}},t)\psi({\bf{x}}^{'},t^{'})\rangle = |{\mathcal{R}}_{0}|^{2}|{\mathcal{Z}}_{0}|^{2}\rho_{0} e^{\sum_{ {\bf{k}}, 
{\bf{q}}, i } U^{*i}_{ {\bf{k}}, {\bf{q}} }({\bf{x}}) U^{i}_{ {\bf{k}}, {\bf{q}} }({\bf{x}}^{'}) e^{i\mbox{ }{\tilde{\omega}}_{i}({\bf{q}})(t^{'}-t)}} \end{equation} \begin{equation} \langle \psi({\bf{x}}^{'},t^{'})\psi^{\dagger}({\bf{x}},t)\rangle = |{\mathcal{R}}_{0}|^{2}|{\mathcal{Z}}_{0}|^{2}\rho_{0} e^{\sum_{ {\bf{k}}, {\bf{q}}, i } U^{*i}_{ {\bf{k}}, {\bf{q}} }({\bf{x}}^{'}) U^{i}_{ {\bf{k}}, {\bf{q}} }({\bf{x}}) e^{i\mbox{ }{\tilde{\omega}}_{i}({\bf{q}})(t-t^{'})}} \end{equation} Again, it is desirable to use the trick we used in the bose case: multiply and divide by the free propagator, using in the division the form predicted by the bosonized theory and in the multiplication the form predicted by elementary considerations. This procedure also ensures that, in spite of the fact that we have not verified that the fermi fields written down in terms of the bose fields anticommute, the anticommutation rules are forced on the propagators by the free propagators, which we know anticommute in the right fashion.
This leads to the following forms for the propagators, \begin{equation} \langle \psi^{\dagger}({\bf{x}},t)\psi({\bf{x}}^{'},t^{'})\rangle = |{\mathcal{R}}_{0}|^{2}|{\mathcal{Z}}_{0}|^{4} e^{\sum_{ {\bf{k}}, {\bf{q}}, i } U^{*i}_{ {\bf{k}}, {\bf{q}} }({\bf{x}}) U^{i}_{ {\bf{k}}, {\bf{q}} }({\bf{x}}^{'}) e^{i\mbox{ }{\tilde{\omega}}_{i}({\bf{q}})(t^{'}-t)}} e^{-\sum_{ {\bf{k}}, {\bf{q}} } g^{*}_{ {\bf{k}}, {\bf{q}} }({\bf{x}}) g_{ {\bf{k}}, {\bf{q}} }({\bf{x}}^{'}) e^{i\mbox{ }\omega_{ {\bf{k}} }({\bf{q}})(t^{'}-t)}} \langle \psi^{\dagger}({\bf{x}},t)\psi({\bf{x}}^{'},t^{'})\rangle_{free} \end{equation} \begin{equation} \langle \psi({\bf{x}}^{'},t^{'})\psi^{\dagger}({\bf{x}},t)\rangle = |{\mathcal{R}}_{0}|^{2}|{\mathcal{Z}}_{0}|^{4} e^{\sum_{ {\bf{k}}, {\bf{q}}, i } U^{*i}_{ {\bf{k}}, {\bf{q}} }({\bf{x}}^{'}) U^{i}_{ {\bf{k}}, {\bf{q}} }({\bf{x}}) e^{i\mbox{ }{\tilde{\omega}}_{i}({\bf{q}})(t-t^{'})}} e^{-\sum_{ {\bf{k}}, {\bf{q}} } f^{*}_{ {\bf{k}}, {\bf{q}} }({\bf{x}}) f_{ {\bf{k}}, {\bf{q}} }({\bf{x}}^{'}) e^{i\mbox{ }\omega_{ {\bf{k}} }({\bf{q}})(t-t^{'})}} \langle \psi({\bf{x}}^{'},t^{'})\psi^{\dagger}({\bf{x}},t)\rangle_{free} \end{equation} In the above formula, the index $ i $ runs over both the collective mode as well as the particle-hole mode ($ i = c, {\bf{k}}_{i} $). The momentum distribution may be evaluated in a different way, by computing the expectation value of the number operator in Eq.(~\ref{NUMBER}). This leads to the following answer, which includes contributions from both the particle-hole mode and the collective mode. In Appendix D, we show how to derive the same momentum distribution using the equation of motion approach (actually just the collective part, for purposes of illustration). The full momentum distribution including the particle-hole mode is given below.
\begin{equation} \langle c^{\dagger}_{ {\bf{k}} }c_{ {\bf{k}} } \rangle = \theta(k_{f} - |{\bf{k}}|)F_{1}({\bf{k}}) + (1 - \theta(k_{f} - |{\bf{k}}|))F_{2}({\bf{k}}) \end{equation} \begin{equation} F_{1}({\bf{k}}) = 1 - \sum_{ i, {\bf{q}} } \frac{ 1 - n_{F}({\bf{k}} + {\bf{q}}) }{({\tilde{\omega}}_{i}(-{\bf{q}}) + \frac{ {\bf{k.q}} }{m} + \epsilon_{ {\bf{q}} })^{2} } g^{2}_{i}(-{\bf{q}}) \end{equation} \begin{equation} F_{2}({\bf{k}}) = \sum_{ i, {\bf{q}} } \frac{ n_{F}({\bf{k}} - {\bf{q}}) }{({\tilde{\omega}}_{i}(-{\bf{q}}) + \frac{ {\bf{k.q}} }{m} - \epsilon_{ {\bf{q}} })^{2} } g^{2}_{i}(-{\bf{q}}) \end{equation} In the above sum over $ i $, one must include both the collective mode and the particle-hole mode ($ i = c, {\bf{k}}_{i} $). A more general result is possible for systems that are significantly more nonideal. This comes about when one does not use the zero-temperature non-interacting values in the fermi-bilinear sea-boson correspondence. The form of the momentum distribution suggested by this is given in Appendix D. It is now very easy to write down a criterion for the breakdown of fermi-liquid behaviour: it is given by equating the step in the momentum distribution at the fermi surface (the quasi-particle residue) to zero. \begin{equation} Z_{f} = F_{1}(k_{f}) - F_{2}(k_{f}) = 0 \end{equation} In the end, it is pertinent to address the claim made in the abstract, namely that we are able to capture short-wavelength behaviour. The real issue here is that we have two length scales: one is the inverse of the fermi momentum, the other is the lattice spacing. When one speaks of short wavelengths, one means wavelengths comparable to the lattice spacing. In the ultra-high density limit, where all the answers we have been deriving are valid, the inverse of the fermi momentum is much too small (compared to the lattice spacing) for the wavelength of any external field to be comparable to it.
In other words, even if the external field varies so rapidly in space that it changes sign every lattice spacing, the effective field it induces is still described by the RPA. To put it yet another way, the RPA is exact in the ultra-high density limit. Some have argued that this limit is uninteresting, since in it the coulomb interaction is completely screened out, leaving just a fermi liquid. We find that this argument is not entirely true. In fact, we have shown \cite{Setlur} that when the inverse of the fermi momentum is small compared to the lattice spacing, it is still possible to increase the dimensionless coupling strength sufficiently that fermi liquid behaviour is destroyed. Our results in fact overturn almost all conventional wisdom. First, we find that fermi liquid behaviour persists in 1D for sufficiently weak coupling strengths (even when the interactions are hard-core delta-function interactions), in contrast to the Lieb-Mattis solution of the Tomonaga-Luttinger model. Second, we find that fermi-liquid behaviour breaks down in more than one dimension for sufficiently strong coupling, in contrast to the answers obtained by Castro-Neto and Fradkin \cite{Neto}. Indeed, we find that fermi liquid behaviour persists in all three dimensions for sufficiently small values of the coupling strength and is destroyed in all three dimensions for sufficiently large values. The reader may object that our results are not foolproof either: for one, we have neglected several terms in the hamiltonian, and those terms are small only in the limit in which the RPA is exact. There are also technical shortcomings, such as the fact that we have not proved the fermi case as rigorously as the bose case; for instance, the fermi commutation rules are not explicitly verified. 
Notwithstanding all these shortcomings, a strong case is to be made for the revision of entrenched dogma about fermi and luttinger liquids. \section{Conclusions} Let us summarise the results obtained so far. We have succeeded in reducing to quadratures the propagators of both bose and fermi systems. We have also computed the momentum distribution of interacting fermi systems and written down a formula for the quasi-particle residue in terms of the electron-electron repulsion. From this we obtain a criterion for the breakdown of fermi-liquid behaviour. The results we obtain contradict widely held views about 1D systems: in particular, the Lieb-Mattis solution \cite{Lieb} of the Tomonaga-Luttinger model suggests that the momentum distribution of a 1D system with delta-function interactions exhibits no discontinuity at the fermi momentum. This is in contrast with the exact formula above, which does in fact exhibit such a discontinuity for sufficiently weak values of the coupling strength; the discontinuity is destroyed only for larger values of the coupling strength. We attribute this discrepancy to flaws in the linearised dispersion model (i.e. the Tomonaga-Luttinger model) and the ensuing algebraic manipulations, not all of them transparent. In any event, the authors feel that the Tomonaga-Luttinger model is not a faithful caricature of the homogeneous interacting fermi system in the ultra-high density limit. Luttinger liquid theory is based on the unproven assumption that the low-energy behaviour of the homogeneous interacting fermi system in one dimension is correctly described by the exactly solvable Tomonaga-Luttinger model. Our results show that this is false. The important qualitative features of the homogeneous interacting fermi system, namely the presence or absence of a fermi surface, cannot be surmised by examining the properties of the Tomonaga-Luttinger model, especially when the interactions are weak. 
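The breakdown criterion just mentioned, $ Z_{f} = F_{1}(k_{f}) - F_{2}(k_{f}) = 0 $, can be illustrated numerically. The following is a minimal toy sketch, not the calculation of the text: the quadratic dispersion, the single effective mode frequency $ w(q) $, and the constant coupling $ g $ are our illustrative assumptions. It merely exhibits the qualitative behaviour claimed above: a full step at zero coupling, a residue that shrinks as the coupling grows, and a critical coupling at which the residue vanishes.

```python
import numpy as np

# Toy sketch of the residue criterion Z_f = F_1(k_f) - F_2(k_f):
# quadratic dispersion, one effective mode frequency w(q), and a constant
# coupling g -- all hypothetical choices, purely for illustration.
m, kf = 1.0, 1.0
q = np.linspace(0.01, 2.0, 400)               # |q| grid, q = 0 excluded
dq = q[1] - q[0]
eps_q = q**2 / (2.0*m)                        # free-particle energy
w_q = 0.5 + q                                 # hypothetical mode frequency > 0

def Zf(g):
    # at k = k_f, caricature k.q/m by kf*q/m (forward scattering in 1D)
    F1_loss = np.sum(g**2 / (w_q + kf*q/m + eps_q)**2) * dq
    F2      = np.sum(g**2 / (w_q + kf*q/m - eps_q)**2) * dq
    return (1.0 - F1_loss) - F2

gs = np.linspace(0.0, 2.0, 21)
zs = np.array([Zf(g) for g in gs])
assert zs[0] == 1.0                           # free fermions: full step at k_f
assert np.all(np.diff(zs) < 0)                # residue shrinks with coupling
assert zs[-1] < 0 < zs[1]                     # some critical g where Z_f = 0
```

The crossing point of $ Z_{f} $ through zero plays the role of the critical coupling strength at which fermi-liquid behaviour is destroyed in this toy model.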
\begin{center} APPENDIX A \end{center} In this appendix we prove some assertions made earlier. First, the definition of the condensate displacement annihilation operator: \begin{equation} d_{ {\bf{q}}/2 }({\bf{q}}) = (\frac{1}{ \sqrt{N_{0}} }) b^{\dagger}_{ {\bf{0}} }b_{ {\bf{q}} } \label{DEFN} \end{equation} In order to define the quantity $ {\mathcal{O}} = (\frac{1}{ \sqrt{N_{0}} }) $ in a manner acceptable to most physicists, we proceed as follows. $ {\mathcal{O}} $ is defined to be that operator that commutes with the number operator $ N_{0} $, \begin{equation} [{\mathcal{O}}, N_{0}] = 0 \end{equation} In the basis in which $ N_{0} $ is diagonal and possesses non-zero eigenvalues (not an unreasonable assumption, considering that even in the most strongly interacting systems $ N_{0} $ is macroscopic), call them $ \{ N_{0}^{r} \} $, the matrix elements of $ {\mathcal{O}} $ are $ 1/\sqrt{ N_{0}^{r} } $. Having thus provided all the matrix elements, the definition of $ {\mathcal{O}} $ is complete. We now have to show that $ d_{ {\bf{q}}/2 } ({\bf{q}}) $ satisfies canonical bose commutation rules. The simplest way of doing this is to use the polar decomposition of $ b_{ {\bf{0}} } $, \begin{equation} b_{ {\bf{0}} } = exp(-i\mbox{ }X_{0})\sqrt{N_{0}} \label{POLAR} \end{equation} where $ X_{0} $ is the hermitian operator canonically conjugate to $ N_{0} = b^{\dagger}_{ {\bf{0}} }b_{ {\bf{0}} } $, that is, $ [X_{0}, N_{0}] = i $. This decomposition correctly reproduces the bose commutation rules of $ b_{ {\bf{0}} } $ and $ b^{\dagger}_{ {\bf{0}} } $. For example, \begin{equation} [b_{ {\bf{0}} }, b^{\dagger}_{ {\bf{0}} }] = b_{ {\bf{0}} }b^{\dagger}_{ {\bf{0}} } - b^{\dagger}_{ {\bf{0}} }b_{ {\bf{0}} } = exp(-i\mbox{ }X_{0})\mbox{ }N_{0}\mbox{ }exp(i\mbox{ }X_{0}) - N_{0} = 1 \end{equation} This means that $ d_{ {\bf{q}}/2 }({\bf{q}}) = z^{*}_{0}b_{ {\bf{q}} } $, where $ z^{*}_{0} = exp(i\mbox{ }X_{0}) $. 
Since $ [z_{0}, b_{ {\bf{q}} }] = 0 $, $ [z_{0}, b^{\dagger}_{ {\bf{q}} }] = 0 $, and $ [z_{0}, z_{0}^{*}] = 0 $, it follows that $ d_{ {\bf{q}}/2 }({\bf{q}}) $ and $ b_{ {\bf{q}} } $ satisfy the same commutation rules, since $ z^{*}_{0} $ now behaves effectively as a c-number (as regards commutation rules with $ b_{ {\bf{q}} } $, $ b^{\dagger}_{ {\bf{q}} } $ and $ z_{0} $). It is worthwhile pointing out the following fact: \[ [d_{ {\bf{q}}/2 }({\bf{q}}), N_{0}] \neq 0 \] rather, \begin{equation} [d_{ {\bf{q}}/2 }({\bf{q}}), N] = 0 \end{equation} though not obviously so. In order to prove this, \[ [d_{ {\bf{q}}/2 }({\bf{q}}),N] = [d_{ {\bf{q}}/2 }({\bf{q}}),N_{0}] + [d_{ {\bf{q}}/2 }({\bf{q}}),\sum_{ {\bf{q}}^{'} \neq 0 } b^{\dagger}_{ {\bf{q}}^{'} }b_{ {\bf{q}}^{'} }] \] \[ = [exp(i\mbox{ }X_{0}),N_{0}] b_{ {\bf{q}} } + exp(i\mbox{ }X_{0})\sum_{ {\bf{q}}^{'} \neq 0 } [b_{ {\bf{q}} },b^{\dagger}_{ {\bf{q}}^{'} }b_{ {\bf{q}}^{'} }] \] \[ = ( exp(i\mbox{ }X_{0})N_{0} - N_{0}exp(i\mbox{ }X_{0}) )b_{ {\bf{q}} } + exp(i\mbox{ }X_{0})b_{ {\bf{q}} } \] \begin{equation} = [i\mbox{ }X_{0}, N_{0}]exp(i\mbox{ }X_{0}) b_{ {\bf{q}} } + exp(i\mbox{ }X_{0})b_{ {\bf{q}} } = -exp(i\mbox{ }X_{0}) b_{ {\bf{q}} } + exp(i\mbox{ }X_{0})b_{ {\bf{q}} } = 0 \end{equation} Next, one would like to prove Eq.(~\ref{BOSE}). For this we simply plug the definition Eq.(~\ref{DEFN}) into Eq.(~\ref{BOSE}) and verify that it reduces to an identity. 
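The commutation properties just established can also be checked directly in a truncated two-mode Fock space. The sketch below is purely illustrative (the occupation cutoff and the restriction to a single $ {\bf{q}} $ mode are our choices): it builds matrix representations of $ b_{ {\bf{0}} } $ and $ b_{ {\bf{q}} } $, forms $ d_{ {\bf{q}}/2 }({\bf{q}}) = (1/\sqrt{N_{0}})\, b^{\dagger}_{ {\bf{0}} } b_{ {\bf{q}} } $, and verifies that $ d $ commutes with the total number operator $ N $ but not with $ N_{0} $.

```python
import numpy as np

D = 6                                         # occupation cutoff per mode (illustrative)
a = np.diag(np.sqrt(np.arange(1, D)), 1)      # single-mode annihilation operator
I = np.eye(D)
b0, bq = np.kron(a, I), np.kron(I, a)         # condensate mode 0 and mode q
N0, Nq = b0.conj().T @ b0, bq.conj().T @ bq   # number operators
N = N0 + Nq                                   # total number operator

# (1/sqrt(N0)) defined as a function of the diagonal operator N0, with 0 -> 0
n0 = np.diag(N0)
inv_sqrt_N0 = np.diag(np.where(n0 > 0, 1.0/np.sqrt(np.where(n0 > 0, n0, 1.0)), 0.0))

# d_{q/2}(q) = (1/sqrt(N0)) b0^dagger b_q
d = inv_sqrt_N0 @ b0.conj().T @ bq

assert np.allclose(d @ N - N @ d, 0.0)        # [d, N] = 0, as claimed
assert not np.allclose(d @ N0 - N0 @ d, 0.0)  # but [d, N0] != 0
```

The check works even in the truncated space because every nonzero matrix element of $ b^{\dagger}_{ {\bf{0}} } b_{ {\bf{q}} } $ connects states of equal total occupation, while $ 1/\sqrt{N_{0}} $ is a function of $ N_{0} $ alone.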
The details are as follows. First define, \[ L_{ {\bf{k}}, {\bf{q}} } = N_{0} \delta_{ {\bf{k}}, 0 }\delta_{ {\bf{q}}, 0 } + [\delta_{ {\bf{k+q/2}}, 0 }(\sqrt{N_{0}})d_{ {\bf{k}} }(-{\bf{q}}) + \delta_{ {\bf{k-q/2}}, 0 }d^{\dagger}_{ {\bf{k}} }({\bf{q}}) (\sqrt{N_{0}})] \] \begin{equation} + d^{\dagger}_{ (1/2)({\bf{k+q/2}}) }({\bf{k+q/2}}) d_{ (1/2)({\bf{k-q/2}}) }({\bf{k-q/2}}) \end{equation} The proof involves the following cases. \newline (i) $ {\bf{k}} = 0 $ and $ {\bf{q}} = 0 $ \newline In this case, \begin{equation} L_{ {\bf{0}}, {\bf{0}} } = N_{0} = b^{\dagger}_{0}b_{0} \end{equation} \newline (ii) $ {\bf{k}} + {\bf{q}}/2 = 0 $ but $ {\bf{k}} - {\bf{q}}/2 \neq 0 $ \newline \begin{equation} L_{ {\bf{k}} = -{\bf{q}}/2, {\bf{q}} } = (\sqrt{N_{0}})d_{ -{\bf{q}}/2 }(-{\bf{q}}) = b^{\dagger}_{0}b_{ -{\bf{q}} } \end{equation} \newline (iii) $ {\bf{k}} - {\bf{q}}/2 = 0 $ but $ {\bf{k}} + {\bf{q}}/2 \neq 0 $ \newline \begin{equation} L_{ {\bf{k}} = {\bf{q}}/2, {\bf{q}} } = d^{\dagger}_{ {\bf{q}}/2 }({\bf{q}})(\sqrt{N_{0}}) = b^{\dagger}_{ {\bf{q}} }b_{ {\bf{0}} } \end{equation} \newline (iv) $ {\bf{k}} - {\bf{q}}/2 \neq 0 $ and $ {\bf{k}} + {\bf{q}}/2 \neq 0 $ \newline \begin{equation} L_{ {\bf{k}}, {\bf{q}} } = d^{\dagger}_{ (1/2)({\bf{k}} + {\bf{q}}/2) }({\bf{k}} + {\bf{q}}/2) d_{ (1/2)({\bf{k}} - {\bf{q}}/2) }({\bf{k}} - {\bf{q}}/2) = b^{\dagger}_{ {\bf{k}} + {\bf{q}}/2 }exp(-i\mbox{ }X_{0}) exp(i\mbox{ }X_{0})b_{ {\bf{k}} - {\bf{q}}/2 } = b^{\dagger}_{ {\bf{k}} + {\bf{q}}/2 }b_{ {\bf{k}} - {\bf{q}}/2 } \end{equation} Therefore, in all cases, \begin{equation} L_{ {\bf{k}}, {\bf{q}} } = b^{\dagger}_{ {\bf{k}}+{\bf{q}}/2 }b_{ {\bf{k}}-{\bf{q}}/2 } \end{equation} and thus Eq.(~\ref{BOSE}) follows. Finally, we would like to clarify the finite temperature case. In particular, what is the chemical potential of the condensate displacement bosons? Is it zero, or is it the same as that of the parent bosons? 
The answer may be found by computing the thermodynamic expectation value of the number of bosons in the condensate $ N_{0} $, \begin{equation} \langle N_{0} \rangle = N - \sum_{ {\bf{q}} \neq 0 }\langle d^{\dagger}_{ (1/2){\bf{q}} }({\bf{q}}) d_{ (1/2){\bf{q}} }({\bf{q}}) \rangle \end{equation} We also know the answer from elementary considerations; it is \begin{equation} \langle N_{0} \rangle = N - \sum_{ {\bf{q}} \neq 0 } \frac{1}{exp(\beta(\epsilon_{ {\bf{q}} }-\mu)) - 1} \end{equation} where $ \mu $ is the chemical potential of the parent bosons. It then follows that \begin{equation} \langle d^{\dagger}_{ (1/2){\bf{q}} }({\bf{q}}) d_{ (1/2){\bf{q}} }({\bf{q}}) \rangle = \frac{1}{exp(\beta(\epsilon_{ {\bf{q}} }-\mu)) - 1} \end{equation} In other words, the chemical potential of the condensate displacement bosons is the same as that of the parent bosons, \begin{equation} \mu_{parent} = \mu_{cond/displ} \end{equation} \begin{center} APPENDIX B \end{center} In this Appendix, we try to make plausible the correspondence between the number-conserving product of two fermi fields and the sea-bosons. 
Let us rewrite the bose case (Eq.(~\ref{BOSE})) more suggestively, \[ b^{\dagger}_{ {\bf{k+q/2}} }b_{ {\bf{k-q/2}} } = O({\bf{k}})\mbox{ }\delta_{ {\bf{q}}, {\bf{0}} } + [\sqrt{n_{ {\bf{k}} + {\bf{q}}/2 }}A_{ {\bf{k}} }(-{\bf{q}}) + A^{\dagger}_{ {\bf{k}} }({\bf{q}})\sqrt{n_{ {\bf{k}} - {\bf{q}}/2 }}] \] \[ + \sum_{ {\bf{q}}_{1} } A^{\dagger}_{ {\bf{k}} + {\bf{q}}/2 - {\bf{q}}_{1}/2 }({\bf{q}}_{1}) A_{ {\bf{k}} - {\bf{q}}_{1}/2 }(-{\bf{q}} + {\bf{q}}_{1}) \] \begin{equation} - \sum_{ {\bf{q}}_{1} } A^{\dagger}_{ {\bf{k}} - {\bf{q}}/2 + {\bf{q}}_{1}/2 }({\bf{q}}_{1}) A_{ {\bf{k}} + {\bf{q}}_{1}/2 }(-{\bf{q}} + {\bf{q}}_{1}) \label{SUGG} \end{equation} In the bose case, \begin{equation} A_{ {\bf{k}} }({\bf{q}}) = \delta_{ {\bf{k}}-{\bf{q}}/2, 0} d_{ {\bf{q}}/2 }({\bf{q}}) \end{equation} and, \begin{equation} O({\bf{k}}) = N\mbox{ }\delta_{ {\bf{k}}, 0 } \end{equation} Observe that the suggestively extravagant notation in Eq.(~\ref{SUGG}) is meant to imply that a very similar relation holds in the fermi case, which we reproduce below: \[ c^{\dagger}_{ {\bf{k+q/2}} }c_{ {\bf{k-q/2}} } = O({\bf{k}})\mbox{ }\delta_{ {\bf{q}}, {\bf{0}} } + [\sqrt{n_{ {\bf{k}} + {\bf{q}}/2 }}A_{ {\bf{k}} }(-{\bf{q}}) + A^{\dagger}_{ {\bf{k}} }({\bf{q}})\sqrt{n_{ {\bf{k}} - {\bf{q}}/2 }}] \] \[ + \sum_{ {\bf{q}}_{1} } A^{\dagger}_{ {\bf{k}} + {\bf{q}}/2 - {\bf{q}}_{1}/2 }({\bf{q}}_{1}) A_{ {\bf{k}} - {\bf{q}}_{1}/2 }(-{\bf{q}} + {\bf{q}}_{1}) \] \begin{equation} - \sum_{ {\bf{q}}_{1} } A^{\dagger}_{ {\bf{k}} - {\bf{q}}/2 + {\bf{q}}_{1}/2 }({\bf{q}}_{1}) A_{ {\bf{k}} + {\bf{q}}_{1}/2 }(-{\bf{q}} + {\bf{q}}_{1}) \label{FERMI} \end{equation} Here $ A_{ {\bf{k}} }({\bf{q}}) $ depends on two momentum labels, unlike in the bose case. This has to do with the fact that now $ O({\bf{k}}) $ no longer has the simple structure we saw in the bose case. We must now invert this relation and obtain a formula for the operator $ A_{ {\bf{k}} }({\bf{q}}) $. 
It is not at all clear that this object will behave like an exact boson annihilation operator. The alternative is to write down an ansatz for an exact boson in analogy with the bose case and determine the unknown in the formula by imposing canonical bose commutation rules. \begin{equation} a_{ {\bf{k}} }({\bf{q}}) = \frac{1}{\sqrt{n_{ {\bf{k}} - {\bf{q}}/2 }}} c^{\dagger}_{ {\bf{k}} - {\bf{q}}/2 }M({\bf{k}}, {\bf{q}}) c_{ {\bf{k}} + {\bf{q}}/2 } \end{equation} The unknown operator $ M({\bf{k}}, {\bf{q}}) $ has to be related to some number-conserving fermi bilinear by demanding that the operator $ a_{ {\bf{k}} }({\bf{q}}) $ obey canonical bose commutation rules, \begin{equation} [a_{ {\bf{k}} }({\bf{q}}), a_{ {\bf{k}}^{'} }({\bf{q}}^{'})] = 0 \end{equation} \begin{equation} [a_{ {\bf{k}} }({\bf{q}}), a^{\dagger}_{ {\bf{k}}^{'} }({\bf{q}}^{'})] = \delta_{ {\bf{k}}, {\bf{k}}^{'} }\delta_{ {\bf{q}}, {\bf{q}}^{'} } \end{equation} It is at present beyond the authors to arrive at a formula for $ M({\bf{k}},{\bf{q}}) $. Notwithstanding this, it is still useful to capture some sort of approximate correspondence, like the one introduced in Sec. II. The relations written down there have the following positive features: \newline (i) They recover the RPA dielectric function at zero and finite temperatures. \newline (ii) They capture the correct four-point and six-point functions at zero and finite temperatures. \newline (iii) The formula for the sea-boson in Eq.(~\ref{SEABOSONFORM}), when plugged into the correspondence for the number operator in Eq.(~\ref{NUMBER}), gives an identity. \newline The only negative aspect of this correspondence is the following. \newline (I) The mutual commutation rules between the off-diagonal fermi bilinears are recovered correctly only up to terms linear in the sea-bosons. That is, somehow the operators on the right side of these commutation rules should not be too different from their approximations obtained by dropping terms higher than linear order. 
This is no doubt a strong assumption. It is in fact equivalent to RPA (perhaps even better than RPA). \newline \newline The definition of the sea-boson is incomplete without the prescription of the phase $ \theta({\bf{k}}, {\bf{q}}) $. In order to derive an expression for this, we again make heavy use of the bose case, which we proved rigorously in Appendix A. There we found that plugging the expression for the condensate-displacement boson into the correspondence resulted in an identity for $ {\bf{q}} \neq 0 $ (the $ {\bf{q}} = 0 $ case being special). This identity comes about in a very specific fashion. In the general form of the correspondence outlined in Eq.(~\ref{SUGG}), we find that the sum on the right that comes with a negative sign is identically zero (for $ {\bf{q}} \neq 0 $), and the sum on the right that comes with a positive sign is equal to the left-hand side, except in "rare" cases when either $ {\bf{k}}+{\bf{q}}/2 = 0 $ or $ {\bf{k}}-{\bf{q}}/2 = 0 $. We shall adopt the same approach in the fermi case and try to fix the phase $ \theta({\bf{k}}, {\bf{q}}) $ such that the identity is satisfied in the manner just described. 
Let us now write down the potential identity, \[ c^{\dagger}_{ {\bf{k}} + {\bf{q}}/2 }c_{ {\bf{k}} - {\bf{q}}/2 } = \Lambda_{ {\bf{k}} }({\bf{q}}) \frac{1}{\sqrt{ n_{ {\bf{k}} + {\bf{q}}/2 } }} c^{\dagger}_{ {\bf{k}} + {\bf{q}}/2 }(\frac{ n^{\beta}({\bf{k}}+{\bf{q}}/2)} {\langle N \rangle })^{\frac{1}{2}} e^{i\mbox{ }\theta({\bf{k}},-{\bf{q}})} c_{ {\bf{k}} - {\bf{q}}/2 } \] \[ + \Lambda_{ {\bf{k}} }(-{\bf{q}}) c^{\dagger}_{ {\bf{k}} + {\bf{q}}/2 } e^{-i\mbox{ }\theta({\bf{k}},{\bf{q}})} (\frac{ n^{\beta}({\bf{k}}-{\bf{q}}/2)} {\langle N \rangle })^{\frac{1}{2}}c_{ {\bf{k}} - {\bf{q}}/2 } \frac{1}{\sqrt{ n_{ {\bf{k}} - {\bf{q}}/2 } }} \] \[ + T_{1}({\bf{k}}, {\bf{q}}) c^{\dagger}_{ {\bf{k}} + {\bf{q}}/2 } (\sum_{ {\bf{q}}_{1} \neq {\bf{q}}, {\bf{0}} } \frac{ n^{\beta}({\bf{k}}+{\bf{q}}/2 - {\bf{q}}_{1})} {\langle N \rangle } e^{i\mbox{ }\theta({\bf{k}} - {\bf{q}}_{1}/2, -{\bf{q}}+{\bf{q}}_{1})} e^{-i\mbox{ }\theta({\bf{k}} + {\bf{q}}/2 - {\bf{q}}_{1}/2, {\bf{q}}_{1})}) c_{ {\bf{k}} - {\bf{q}}/2 } \] \[ - T_{2}({\bf{k}}, {\bf{q}}) c_{ {\bf{k}} - {\bf{q}}/2 }\frac{1}{\sqrt{ n_{ {\bf{k}} - {\bf{q}}/2 }}} \frac{1}{\sqrt{ n_{ {\bf{k}} + {\bf{q}}/2 }}} c^{\dagger}_{ {\bf{k}} + {\bf{q}}/2 } (\frac{ n^{\beta}({\bf{k}}+{\bf{q}}/2) }{\langle N \rangle })^{\frac{1}{2}} (\frac{ n^{\beta}({\bf{k}}-{\bf{q}}/2) }{\langle N \rangle })^{\frac{1}{2}} \] \begin{equation} \times \sum_{ {\bf{q}}_{1} \neq {\bf{q}}, {\bf{0}} } n_{ {\bf{k}} - {\bf{q}}/2 + {\bf{q}}_{1} } e^{i\mbox{ }\theta({\bf{k}} + {\bf{q}}_{1}/2, -{\bf{q}} + {\bf{q}}_{1})} e^{-i\mbox{ }\theta({\bf{k}} - {\bf{q}}/2 + {\bf{q}}_{1}/2, {\bf{q}}_{1})} \end{equation} Here, since we are not engaged in proving the rigorous correspondence but only in exhibiting its salient features, we are entitled to some leeway. In particular, we shall turn a blind eye to the presence of the objects $ T_{1}({\bf{k}},{\bf{q}}) $ and $ T_{2}({\bf{k}},{\bf{q}}) $; in fact, we set them both equal to unity, just for the moment. 
The exact correspondence in terms of the $ A_{ {\bf{k}} }({\bf{q}}) $ seems to suggest exactly this. Then we find that if we choose our $ \theta({\bf{k}},{\bf{q}}) $ to be such that \begin{equation} \theta({\bf{k}} - {\bf{q}}_{1}/2, -{\bf{q}}+{\bf{q}}_{1}) = \theta({\bf{k}} + {\bf{q}}/2 - {\bf{q}}_{1}/2, {\bf{q}}_{1}) \end{equation} and, \begin{equation} \sum_{ {\bf{q}}_{1}\neq {\bf{0}}, {\bf{q}} } {\bar{n}}_{ {\bf{k}} - {\bf{q}}/2 + {\bf{q}}_{1} } e^{i\mbox{ }\theta({\bf{k}} + {\bf{q}}_{1}/2, -{\bf{q}} + {\bf{q}}_{1})} e^{-i\mbox{ }\theta({\bf{k}} - {\bf{q}}/2 + {\bf{q}}_{1}/2, {\bf{q}}_{1})} = 0 \end{equation} then all is well. Terms that were linear in the sea-bosons are vanishingly small in the thermodynamic limit, and are important only when both the sums on the right-hand side are identically zero for some reason; that is, they are "rarely" important, just as in the bose case. It is not really important to write down an explicit formula for the phase function $ \theta({\bf{k}}, {\bf{q}}) $; it is merely sufficient to show that it does what is required of it, namely, that it provides the "random phase" cancellations that enable the whole machinery to run smoothly. Lastly, we have not yet verified that this sea-boson obeys canonical commutation rules. This is again a tricky problem; it is likely to be resolved by the exact approach, which is beyond the scope of this article. It is sufficient to point out that this is likely to come about due to the strong likelihood that the phase $ \theta({\bf{k}},{\bf{q}}) $ is actually a functional of the number operator. The correspondence that we have just defended is nothing but a more elegant version of the correspondence introduced by pioneers such as Castro-Neto and Fradkin \cite{Neto}. Any criticism that may be leveled against our approach may equally well be leveled against theirs. 
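The "random phase" cancellation invoked above is easy to illustrate numerically: a sum of $ N $ terms with independent random phases has magnitude of order $ \sqrt{N} $, and therefore vanishes relative to $ N $ in the thermodynamic limit. The sketch below is a toy illustration only; the uniform occupations and the independent uniformly distributed phases are our assumptions, not a statement about the actual $ \theta({\bf{k}},{\bf{q}}) $.

```python
import numpy as np

rng = np.random.default_rng(0)

def phase_sum(N):
    # |sum over q1 of nbar * exp(i theta1) * exp(-i theta2)| with independent
    # random phases theta1, theta2 (illustrative stand-ins for theta(k, q))
    theta = rng.uniform(0.0, 2.0*np.pi, size=(2, N))
    nbar = np.ones(N)                          # uniform occupations, for illustration
    return np.abs(np.sum(nbar * np.exp(1j*(theta[0] - theta[1]))))

# the sum scales like sqrt(N), so |sum|/N -> 0 as N grows
assert phase_sum(100_000) / 100_000 < 0.05
assert phase_sum(1_000_000) / 1_000_000 < 0.01
```

This is the sense in which the phase condition above needs only to hold "on average": the residual sum is thermodynamically negligible compared with the extensive terms it multiplies.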
The only difference between our approach and theirs is that the single-particle properties they are so fervently seeking are far more elegantly recovered by our approach, since we do not linearise the bare fermion dispersion or use the clumsy Luther construction \cite{Luther}. Indeed, we have even shown that the answers for the 1D case differ from those of the Tomonaga-Luttinger model, which linearises the bare fermion dispersion. The other issue worth addressing at this stage is the validity of the prescription in Eq.(~\ref{PRES}). It can be seen from the exact correspondence in Eq.(~\ref{FERMI}) that as $ {\bf{q}} \rightarrow {\bf{0}} $, terms that correspond to corrections to the RPA-form of the full hamiltonian vanish at least as fast as $ |{\bf{q}}|/k_{f} $. The RPA-terms themselves do not vanish and tend toward ($ \lim_{ {\bf{q}} \rightarrow {\bf{0}} } A_{ {\bf{k}} }(-{\bf{q}}) \neq 0 $, as in the bose case) \begin{equation} \sum_{ {\bf{k}} }\mbox{ } {\sqrt{n_{ {\bf{k}} }}}A_{ {\bf{k}} }(-{\bf{q}}) + \sum_{ {\bf{k}} } \mbox{ }A^{\dagger}_{ {\bf{k}} }({\bf{q}}){\sqrt{ n_{ {\bf{k}} } }} \end{equation} In order for the prescription in Eq.(~\ref{PRES}) to be accurate, it is important for the interaction $ v_{ {\bf{q}} } $ to possess the following properties. First, it must vanish for large enough $ {\bf{q}} $ (or small inter-particle separation), \begin{equation} \lim_{ |{\bf{q}}| \rightarrow c_{0}\mbox{ }k_{f} }\mbox{ } v_{ {\bf{q}} } \rightarrow 0 \end{equation} where $ c_{0} $ is small compared to unity and positive. This ensures that the only possible contributions come from small $ {\bf{q}} $, where the corrections to the RPA-form are themselves small. In addition, if we also make sure that the interaction vanishes fast enough for large inter-particle separations, so that \begin{equation} \lim_{ |{\bf{q}}| \rightarrow 0 }\mbox{ } v_{ {\bf{q}} } \rightarrow |{\bf{q}}|^{D} \end{equation} where $ D = 0,1,2,\ldots $ (the larger the better), then our formalism is in fact EXACT as $ k_{f} \rightarrow \infty $ (or sufficiently large). It may be argued that this state of affairs is most likely uninteresting, since it may not be realisable in practice, and when it is, it merely leads to a fermi liquid. This is a valid point. But it is worth pointing out that non-fermi liquid behaviour can still emerge in such systems when the interaction strength (with the same functional form) becomes strong enough. These considerations also tell us that for an interaction of the delta-function type in 1D, provided the strength is weak enough, we have a fermi liquid, in contrast to the Lieb-Mattis solution of the Tomonaga-Luttinger model. In any event, the philosophy is that, having introduced sea-bosons, we more or less forget the fact that it was fermions that motivated their introduction in the first place, and instead try to write down a whole new set of models in terms of the sea-bosons and calibrate them appropriately so that they capture the salient features of the real world. It is not a tautology to remark that we have in our hands a whole class of exactly solvable models of correlated fermions that is easier to use than mean-field theory itself but captures effects significantly beyond diagrammatic perturbation theory, such as the nonanalytic dependence of the momentum distribution on the coupling strength (written down in Appendix D). \begin{center} APPENDIX C \end{center} In this appendix we demonstrate that the RPA dielectric function is recovered exactly by selectively retaining the parts of the coulomb interaction that lead to RPA. 
We know that the kinetic energy in the bose language is given by, \begin{equation} H_{kin} = \sum_{ {\bf{k}}, {\bf{q}} } (\frac{ {\bf{k.q}} }{m}) a^{\dagger}_{ {\bf{k}} }({\bf{q}})a_{ {\bf{k}} }({\bf{q}}) \end{equation} For the interaction, let us choose, \begin{equation} H_{I} = \sum_{ {\bf{q}} \neq 0 }\frac{ v_{ {\bf{q}} } }{2V} {\tilde{\rho}}_{ {\bf{q}} }{\tilde{\rho}}_{ -{\bf{q}} } \end{equation} where, \begin{equation} {\tilde{\rho}}_{ {\bf{q}} } = \sum_{ {\bf{k}} } [ \Lambda_{ {\bf{k}} }({\bf{q}}) a_{ {\bf{k}} }(-{\bf{q}}) + \Lambda_{ {\bf{k}} }(-{\bf{q}}) a^{\dagger}_{ {\bf{k}} }({\bf{q}}) ] \end{equation} That the RPA dielectric function is recovered from this choice may be seen from the following demonstration. Assume that a weak time-varying external perturbation is applied as shown below, \begin{equation} H_{ext} = \sum_{ {\bf{q}} \neq 0 } (U_{ext}({\bf{q}},t) + U^{*}_{ext}(-{\bf{q}},t)) {\tilde{\rho}}_{ {\bf{q}} } \end{equation} where, \begin{equation} U_{ext}({\vec{r}},t) = U_{0} \mbox{ }e^{i{\bf{q}}.{\vec{r}} -i \omega \mbox{ }t} \end{equation} Let us now write down the equations of motion for the various bose fields, \[ i\frac{ \partial }{\partial t} \langle a^{t}_{ {\bf{k}} }({\bf{q}}) \rangle = \omega_{ {\bf{k}} }({\bf{q}}) \langle a^{t}_{ {\bf{k}} }({\bf{q}}) \rangle + (\frac{ v_{ {\bf{q}} } }{V})\Lambda_{ {\bf{k}} }(-{\bf{q}}) \sum_{ {\bf{k}}^{'} }[\Lambda_{ {\bf{k}}^{'} }(-{\bf{q}}) \langle a^{t}_{ {\bf{k}}^{'} }({\bf{q}}) \rangle + \Lambda_{ {\bf{k}}^{'} }({\bf{q}}) \langle a^{t\dagger}_{ {\bf{k}}^{'} }(-{\bf{q}}) \rangle ] \] \begin{equation} + (U_{ext}({\bf{q}},t)+ U^{*}_{ext}(-{\bf{q}},t)) \Lambda_{ {\bf{k}} }(-{\bf{q}}) \end{equation} \[ -i\frac{ \partial }{\partial t} \langle a^{t\dagger}_{ {\bf{k}} }(-{\bf{q}}) \rangle = \omega_{ {\bf{k}} }(-{\bf{q}}) \langle a^{t\dagger}_{ {\bf{k}} }(-{\bf{q}}) \rangle + (\frac{ v_{ {\bf{q}} } }{V})\Lambda_{ {\bf{k}} }({\bf{q}}) \sum_{ {\bf{k}}^{'} }[\Lambda_{ {\bf{k}}^{'} }(-{\bf{q}}) \langle a^{t}_{ {\bf{k}}^{'} 
}({\bf{q}}) \rangle + \Lambda_{ {\bf{k}}^{'} }({\bf{q}}) \langle a^{t\dagger}_{ {\bf{k}}^{'} }(-{\bf{q}}) \rangle ] \] \begin{equation} + (U_{ext}({\bf{q}},t)+ U^{*}_{ext}(-{\bf{q}},t)) \Lambda_{ {\bf{k}} }({\bf{q}}) \end{equation} Now, let us decompose the expectation values as follows, \begin{equation} \langle a^{t}_{ {\bf{k}} }({\bf{q}}) \rangle = U_{ext}({\bf{q}},t)C_{ {\bf{k}} }({\bf{q}}) + U^{*}_{ext}(-{\bf{q}},t)D_{ {\bf{k}} }({\bf{q}}) \end{equation} \begin{equation} \langle a^{t\dagger}_{ {\bf{k}} }(-{\bf{q}}) \rangle = U^{*}_{ext}(-{\bf{q}},t)C^{*}_{ {\bf{k}} }(-{\bf{q}}) + U_{ext}({\bf{q}},t)D^{*}_{ {\bf{k}} }(-{\bf{q}}) \end{equation} The coefficients $ C_{ {\bf{k}} }({\bf{q}}) $ and $ D^{*}_{ {\bf{k}} }(-{\bf{q}}) $ satisfy, \begin{equation} \omega \mbox{ }C_{ {\bf{k}} }({\bf{q}}) = \omega_{ {\bf{k}} }({\bf{q}}) C_{ {\bf{k}} }({\bf{q}}) + (\frac{ v_{ {\bf{q}} } }{V})\Lambda_{ {\bf{k}} }(-{\bf{q}}) \sum_{ {\bf{k}}^{'} }[\Lambda_{ {\bf{k}}^{'} }(-{\bf{q}}) C_{ {\bf{k}}^{'} }({\bf{q}}) + \Lambda_{ {\bf{k}}^{'} }({\bf{q}}) D^{*}_{ {\bf{k}}^{'} }(-{\bf{q}})] + \Lambda_{ {\bf{k}} }(-{\bf{q}}) \end{equation} \begin{equation} -\omega \mbox{ }D^{*}_{ {\bf{k}} }(-{\bf{q}}) = \omega_{ {\bf{k}} }(-{\bf{q}}) D^{*}_{ {\bf{k}} }(-{\bf{q}}) + (\frac{ v_{ {\bf{q}} } }{V})\Lambda_{ {\bf{k}} }({\bf{q}}) \sum_{ {\bf{k}}^{'} }[\Lambda_{ {\bf{k}}^{'} }({\bf{q}}) D^{*}_{ {\bf{k}}^{'} }(-{\bf{q}}) + \Lambda_{ {\bf{k}}^{'} }(-{\bf{q}}) C_{ {\bf{k}}^{'} }({\bf{q}})] + \Lambda_{ {\bf{k}} }({\bf{q}}) \end{equation} Now, the effective potential may be written as, \begin{equation} U_{eff}({\bf{q}},t) = U_{ext}({\bf{q}},t) + (\frac{ v_{ {\bf{q}} } }{V})\langle \rho_{ -{\bf{q}} } \rangle^{'} U_{ext}({\bf{q}},t) \end{equation} here, \begin{equation} \langle \rho_{ -{\bf{q}} } \rangle = U_{ext}({\bf{q}},t)\langle \rho_{ -{\bf{q}} } \rangle^{'} + U^{*}_{ext}(-{\bf{q}},t)\langle \rho_{ -{\bf{q}} } \rangle^{''} \end{equation} Using the fact that, \begin{equation} \langle \rho_{ -{\bf{q}} 
} \rangle^{'} = \sum_{ {\bf{k}} }\Lambda_{ {\bf{k}} }(-{\bf{q}}) C_{ {\bf{k}} }({\bf{q}}) + \sum_{ {\bf{k}} } \Lambda_{ {\bf{k}} }({\bf{q}})D^{*}_{ {\bf{k}} }(-{\bf{q}}) \end{equation} Solving these equations, and using the fact that the dielectric function is just the ratio of the external to the effective potential, we get \begin{equation} \epsilon( {\bf{q}}, \omega) = \frac{ U_{ext}({\bf{q}},t) } { U_{eff}({\bf{q}},t) } = 1 + \frac{ v_{ {\bf{q}} } }{V}\sum_{ {\bf{k}} } \frac{ n_{F}({\bf{k+q/2}}) - n_{F}({\bf{k-q/2}}) } { \omega - \frac{ {\bf{k.q}} }{m} } \label{EQQ} \end{equation} This is nothing but the RPA dielectric function of Bohm and Pines. \begin{center} APPENDIX D \end{center} In this appendix we use the equation of motion approach to solve for the momentum distribution and compare it with the solution obtained via exact diagonalisation, as described in the main text. The equations of motion for the bose propagators read as, \[ (i\frac{\partial}{\partial t} - \omega_{ {\bf{k}} }({\bf{q}})) \frac{ -i\langle T a^{t}_{ {\bf{k}} }({\bf{q}}) a^{\dagger}_{ {\bf{k}}^{'} }({\bf{q}}^{'}) \rangle }{\langle T 1 \rangle} = \delta_{ {\bf{k}}, {\bf{k}}^{'} } \delta_{ {\bf{q}}, {\bf{q}}^{'} }\delta(t) \] \begin{equation} + (\frac{ v_{ {\bf{q}} } }{V}) \Lambda_{ {\bf{k}} }(-{\bf{q}}) \sum_{ {\bf{k}}^{''} }[\Lambda_{ {\bf{k}}^{''} }(-{\bf{q}}) \frac{ -i\langle T a^{t}_{ {\bf{k}}^{''} }({\bf{q}}) a^{\dagger}_{ {\bf{k}}^{'} }({\bf{q}}^{'}) \rangle }{\langle T 1 \rangle} + \Lambda_{ {\bf{k}}^{''} }({\bf{q}}) \frac{ -i\langle T a^{\dagger t}_{ {\bf{k}}^{''} }(-{\bf{q}}) a^{\dagger}_{ {\bf{k}}^{'} }({\bf{q}}^{'}) \rangle }{\langle T 1 \rangle}] \end{equation} \[ (i\frac{\partial}{\partial t} + \omega_{ {\bf{k}} }(-{\bf{q}})) \frac{ -i\langle T a^{\dagger t}_{ {\bf{k}} }(-{\bf{q}}) a^{\dagger}_{ {\bf{k}}^{'} }({\bf{q}}^{'}) \rangle }{\langle T 1 \rangle} \] \begin{equation} = -(\frac{ v_{ {\bf{q}} } }{V}) \Lambda_{ {\bf{k}} }({\bf{q}}) \sum_{ {\bf{k}}^{''} }[\Lambda_{ 
{\bf{k}}^{''} }({\bf{q}}) \frac{ -i\langle T a^{\dagger t}_{ {\bf{k}}^{''} }(-{\bf{q}}) a^{\dagger}_{ {\bf{k}}^{'} }({\bf{q}}^{'}) \rangle }{\langle T 1 \rangle} + \Lambda_{ {\bf{k}}^{''} }(-{\bf{q}}) \frac{ -i\langle T a^{t}_{ {\bf{k}}^{''} }({\bf{q}}) a^{\dagger}_{ {\bf{k}}^{'} }({\bf{q}}^{'}) \rangle }{\langle T 1 \rangle} ] \end{equation} The boundary conditions on these propagators may be written down as, \begin{equation} \frac{ -i\langle T a^{\dagger t}_{ {\bf{k}} }(-{\bf{q}}) a^{\dagger}_{ {\bf{k}}^{'} }({\bf{q}}^{'}) \rangle }{\langle T 1 \rangle} = \frac{ -i\langle T a^{\dagger (t - i\beta)}_{ {\bf{k}} }(-{\bf{q}}) a^{\dagger}_{ {\bf{k}}^{'} }({\bf{q}}^{'}) \rangle }{\langle T 1 \rangle} \end{equation} \begin{equation} \frac{ -i\langle T a^{t}_{ {\bf{k}} }({\bf{q}}) a^{\dagger}_{ {\bf{k}}^{'} }({\bf{q}}^{'}) \rangle }{\langle T 1 \rangle} = \frac{ -i\langle T a^{(t - i\beta)}_{ {\bf{k}} }({\bf{q}}) a^{\dagger}_{ {\bf{k}}^{'} }({\bf{q}}^{'}) \rangle }{\langle T 1 \rangle} \end{equation} \begin{equation} \delta(t) = (\frac{1}{-i \mbox{ }\beta})\sum_{n} exp(\omega_{n}t) \end{equation} \begin{equation} \theta(t) = (\frac{1}{-i \mbox{ }\beta})\sum_{n} \frac{ exp(\omega_{n}t) }{\omega_{n}} \end{equation} The boundary conditions imply that we may write, \begin{equation} \frac{ -i\langle T a^{t}_{ {\bf{k}} }({\bf{q}}) a^{\dagger}_{ {\bf{k}}^{'} }({\bf{q}}^{'}) \rangle }{\langle T 1 \rangle} = \sum_{n}\mbox{ }exp(\omega_{n}t)\mbox{ } \frac{ -i\langle T a^{n}_{ {\bf{k}} }({\bf{q}}) a^{\dagger}_{ {\bf{k}}^{'} }({\bf{q}}^{'}) \rangle }{\langle T 1 \rangle} \end{equation} \begin{equation} \frac{ -i\langle T a^{\dagger t}_{ {\bf{k}} }(-{\bf{q}}) a^{\dagger}_{ {\bf{k}}^{'} }({\bf{q}}^{'}) \rangle }{\langle T 1 \rangle} = \sum_{n}\mbox{ }exp(\omega_{n}t)\mbox{ } \frac{ -i\langle T a^{\dagger n}_{ {\bf{k}} }(-{\bf{q}}) a^{\dagger}_{ {\bf{k}}^{'} }({\bf{q}}^{'}) \rangle }{\langle T 1 \rangle} \end{equation} and, $ \omega_{n} = (2\mbox{ }\pi \mbox{ }n)/\beta $. 
Thus, \[ (i\omega_{n} - \omega_{ {\bf{k}} }({\bf{q}})) \frac{ -i\langle T a^{n}_{ {\bf{k}} }({\bf{q}}) a^{\dagger}_{ {\bf{k}}^{'} }({\bf{q}}^{'}) \rangle }{\langle T 1 \rangle} = \frac{ \delta_{ {\bf{k}}, {\bf{k}}^{'} } \delta_{ {\bf{q}}, {\bf{q}}^{'} } }{-i \mbox{ }\beta} \] \begin{equation} + (\frac{ v_{ {\bf{q}} } }{V}) \Lambda_{ {\bf{k}} }(-{\bf{q}}) \sum_{ {\bf{k}}^{''} }[\Lambda_{ {\bf{k}}^{''} }(-{\bf{q}}) \frac{ -i\langle T a^{n}_{ {\bf{k}}^{''} }({\bf{q}}) a^{\dagger}_{ {\bf{k}}^{'} }({\bf{q}}^{'}) \rangle }{\langle T 1 \rangle} + \Lambda_{ {\bf{k}}^{''} }({\bf{q}}) \frac{ -i\langle T a^{\dagger n}_{ {\bf{k}}^{''} }(-{\bf{q}}) a^{\dagger}_{ {\bf{k}}^{'} }({\bf{q}}^{'}) \rangle }{\langle T 1 \rangle}] \end{equation} \[ (i\omega_{n} + \omega_{ {\bf{k}} }(-{\bf{q}})) \frac{ -i\langle T a^{\dagger n}_{ {\bf{k}} }(-{\bf{q}}) a^{\dagger}_{ {\bf{k}}^{'} }({\bf{q}}^{'}) \rangle }{\langle T 1 \rangle} \] \begin{equation} = -(\frac{ v_{ {\bf{q}} } }{V}) \Lambda_{ {\bf{k}} }({\bf{q}}) \sum_{ {\bf{k}}^{''} }[\Lambda_{ {\bf{k}}^{''} }({\bf{q}}) \frac{ -i\langle T a^{\dagger n}_{ {\bf{k}}^{''} }(-{\bf{q}}) a^{\dagger}_{ {\bf{k}}^{'} }({\bf{q}}^{'}) \rangle }{\langle T 1 \rangle} + \Lambda_{ {\bf{k}}^{''} }(-{\bf{q}}) \frac{ -i\langle T a^{n}_{ {\bf{k}}^{''} }({\bf{q}}) a^{\dagger}_{ {\bf{k}}^{'} }({\bf{q}}^{'}) \rangle }{\langle T 1 \rangle} ] \end{equation} Define, \begin{equation} \sum_{ {\bf{k}} } \Lambda_{ {\bf{k}} } (-{\bf{q}}) \frac{ -i\langle T a^{n}_{ {\bf{k}} }({\bf{q}}) a^{\dagger}_{ {\bf{k}}^{'} }({\bf{q}}^{'}) \rangle }{\langle T 1 \rangle} = G_{1}( {\bf{q}}, {\bf{k}}^{'}, {\bf{q}}^{'}; n) \end{equation} \begin{equation} \sum_{ {\bf{k}} } \Lambda_{ {\bf{k}} } ({\bf{q}}) \frac{ -i\langle T a^{\dagger n}_{ {\bf{k}} }(-{\bf{q}}) a^{\dagger}_{ {\bf{k}}^{'} }({\bf{q}}^{'}) \rangle }{\langle T 1 \rangle} = G_{2}( {\bf{q}}, {\bf{k}}^{'}, {\bf{q}}^{'}; n) \end{equation} Multiplying the above equations with $ \Lambda_{ {\bf{k}} }(-{\bf{q}}) $ and summing over $ 
{\bf{k}} $ one arrives at simple formulas for $ G_{1} $ and $ G_{2} $. \[ G_{1}( {\bf{q}}, {\bf{k}}^{'}, {\bf{q}}^{'}; n) = \Lambda_{ {\bf{k}}^{'} }(-{\bf{q}}) \frac{ \delta_{ {\bf{q}}, {\bf{q}}^{'} } } { -i \mbox{ }\beta( i\omega_{n} - \omega_{ {\bf{k}}^{'} }({\bf{q}}) ) } \] \begin{equation} + f_{n}({\bf{q}})[G_{1}( {\bf{q}}, {\bf{k}}^{'}, {\bf{q}}^{'}; n) + G_{2}( {\bf{q}}, {\bf{k}}^{'}, {\bf{q}}^{'}; n) ] \end{equation} and, \begin{equation} G_{2}( {\bf{q}}, {\bf{k}}^{'}, {\bf{q}}^{'}; n) = f^{*}_{n}(-{\bf{q}})[ G_{1}( {\bf{q}}, {\bf{k}}^{'}, {\bf{q}}^{'}; n) + G_{2}( {\bf{q}}, {\bf{k}}^{'}, {\bf{q}}^{'}; n) ] \end{equation} \[ G_{ 2 }({\bf{q}}, {\bf{k}}^{'}, {\bf{q}}^{'}; n) = \frac{ f^{*}_{n}(-{\bf{q}}) }{(1- f^{*}_{n}(-{\bf{q}}))} G_{1} ({\bf{q}}, {\bf{k}}^{'}, {\bf{q}}^{'}; n) \] \[ G_{1}({\bf{q}}, {\bf{k}}^{'}, {\bf{q}}^{'}; n) + G_{2}({\bf{q}}, {\bf{k}}^{'}, {\bf{q}}^{'}; n) = G_{1}({\bf{q}}, {\bf{k}}^{'}, {\bf{q}}^{'}; n) /( 1 - f^{*}_{n}(-{\bf{q}}) ) \] \begin{equation} G_{1}({\bf{q}}, {\bf{k}}^{'}, {\bf{q}}^{'}; n) = (\frac{1}{-i \mbox{ }\beta}) \frac{ (1-f_{n}^{*}(-{\bf{q}}) ) \Lambda_{ {\bf{k}}^{'} }(-{\bf{q}}) \delta_{ {\bf{q}}, {\bf{q}}^{'} } } {(1- f_{n}^{*}(-{\bf{q}}) - f_{n}({\bf{q}}) ) ( i\omega_{n} - \omega_{ {\bf{k}}^{'} }({\bf{q}}) ) } \end{equation} \begin{equation} G_{2}({\bf{q}}, {\bf{k}}^{'}, {\bf{q}}^{'}; n) = (\frac{1}{-i \mbox{ }\beta}) \frac{ f_{n}^{*}(-{\bf{q}}) \Lambda_{ {\bf{k}}^{'} }(-{\bf{q}}) \delta_{ {\bf{q}}, {\bf{q}}^{'} } } {(1- f_{n}^{*}(-{\bf{q}}) - f_{n}({\bf{q}}) ) ( i\omega_{n} - \omega_{ {\bf{k}}^{'} }({\bf{q}}) ) } \end{equation} \[ G_{1}({\bf{q}}, {\bf{k}}^{'}, {\bf{q}}^{'}; n) + G_{2}({\bf{q}}, {\bf{k}}^{'}, {\bf{q}}^{'}; n) = (\frac{1}{-i \mbox{ }\beta})\frac{ \Lambda_{ {\bf{k}}^{'} }(-{\bf{q}}) \delta_{ {\bf{q}}, {\bf{q}}^{'} } } {(1- f_{n}^{*}(-{\bf{q}}) - f_{n}({\bf{q}}) ) ( i\omega_{n} - \omega_{ {\bf{k}}^{'} }({\bf{q}}) ) } \] \[ \frac{ -i\langle T a^{n}_{ {\bf{k}} }({\bf{q}}) a^{\dagger}_{ {\bf{k}}^{'} 
}({\bf{q}}^{'}) \rangle }{ \langle T 1 \rangle } = \frac{ \delta_{ {\bf{k}}, {\bf{k}}^{'} } \delta_{ {\bf{q}}, {\bf{q}}^{'} } } {-i\mbox{ }\beta( i\omega_{n} - \omega_{ {\bf{k}} }({\bf{q}}) )} \] \begin{equation} + (\frac{1}{-i \mbox{ }\beta}) ( \frac{ v_{ {\bf{q}} } }{V} )\frac{ \Lambda_{ {\bf{k}} }(-{\bf{q}}) } {(i\omega_{ n } - \omega_{ {\bf{k}} }({\bf{q}}) ) } \frac{ \Lambda_{ {\bf{k}}^{'} }(-{\bf{q}}) \delta_{ {\bf{q}}, {\bf{q}}^{'} } } {(1- f_{n}^{*}(-{\bf{q}}) - f_{n}({\bf{q}}) ) ( i\omega_{n} - \omega_{ {\bf{k}}^{'} }({\bf{q}}) ) } \end{equation} The zero-temperature correlation function of significance here is, \begin{equation} -i\langle a^{\dagger}_{ {\bf{k}}^{'} }({\bf{q}}^{'}) a_{ {\bf{k}} }({\bf{q}}) \rangle \end{equation} This may be obtained from the above formulas as, \begin{equation} -i\langle a^{\dagger}_{ {\bf{k}}^{'} }({\bf{q}}^{'}) a_{ {\bf{k}} }({\bf{q}}) \rangle = -(\frac{v_{ {\bf{q}} } }{V}) \Lambda_{ {\bf{k}} }(-{\bf{q}})\Lambda_{ {\bf{k}}^{'} }(-{\bf{q}}) \delta_{ {\bf{q}}, {\bf{q}}^{'} } \int_{C} \frac{d\omega}{2\pi \mbox{ }i} \frac{1}{( i\omega - \omega_{ {\bf{k}} }({\bf{q}}) ) ( i\omega - \omega_{ {\bf{k}}^{'} }({\bf{q}}) ) ( 1 - f_{n}^{*}(-{\bf{q}}) - f_{n}({\bf{q}}) ) } \end{equation} where $ C $ is the positively oriented contour that encloses the upper half-plane (the upper half-plane, because we need $ \langle a^{\dagger}_{ {\bf{k}}^{'} }({\bf{q}}^{'})a_{ {\bf{k}} }({\bf{q}}) \rangle $ and not $ \langle a_{ {\bf{k}} }({\bf{q}})a^{\dagger}_{ {\bf{k}}^{'} }({\bf{q}}^{'})\rangle $). Thus the problem now reduces to computing all the zeros of $ ( 1 - f_{n}^{*}(-{\bf{q}}) - f_{n}({\bf{q}}) ) $ that have positive imaginary parts.
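This zero-finding step can also be carried out numerically. As a consistency check on the closed-form 1D root quoted in the text, the sketch below evaluates the 1D RPA dielectric function on the positive imaginary frequency axis and locates its zero by bisection; the units $ m = k_{f} = v_{q} = 1 $ are purely illustrative and not tied to any physical system.

```python
import math

def eps_1d(q, w, m=1.0, kf=1.0, vq=1.0):
    """1D RPA dielectric function 1 - f*_n(-q) - f_n(q), evaluated at a
    purely imaginary frequency omega = i*w (illustrative units)."""
    a = (kf + q / 2.0) ** 2 - (m * w / q) ** 2
    b = (kf - q / 2.0) ** 2 - (m * w / q) ** 2
    return 1.0 + vq / (2.0 * math.pi) * (m / q) * math.log(a / b)

def omega_r_exact(q, m=1.0, kf=1.0, vq=1.0):
    """Closed-form root of the 1D dielectric function (Eq. ROOT)."""
    lam = (2.0 * math.pi * q / m) / vq
    return (abs(q) / m) * math.sqrt(
        ((kf + q / 2.0) ** 2 - (kf - q / 2.0) ** 2 * math.exp(-lam))
        / (1.0 - math.exp(-lam)))

def omega_r_bisect(q, m=1.0, kf=1.0, vq=1.0, tol=1e-12):
    """Locate the zero of eps_1d on the positive imaginary axis by
    bisection, starting just above the particle-hole continuum edge,
    where eps_1d -> -infinity; at large w it tends to +1."""
    lo = (abs(q) / m) * (kf + abs(q) / 2.0) * (1.0 + 1e-12)
    hi = 100.0 * lo
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if eps_1d(q, mid, m, kf, vq) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The dielectric function is monotonic in $w$ above the continuum edge, so the bisection bracket is guaranteed to contain exactly one zero.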
It may be shown quite easily that, \begin{equation} \epsilon_{RPA}({\bf{q}}, i\omega_{n}) = 1 - f_{n}^{*}(-{\bf{q}}) - f_{n}({\bf{q}}) \end{equation} In 1D, the dielectric function is evaluated as follows, \begin{equation} 1 - f^{*}_{n}(-q) - f_{n}(q) = 1 + v_{q}(\frac{1}{2\pi}) (\frac{m}{q})ln[ \frac{ (k_{f}+q/2)^{2} + (\frac{m \mbox{ }\omega}{q})^{2} } { (k_{f} - q/2)^{2} + (\frac{m \mbox{ }\omega}{q})^{2} } ] = 0 \end{equation} This leads to the root, \begin{equation} \omega = i\mbox{ }(\frac{|q|}{m}) \sqrt{ \frac{ (k_{f}+q/2)^{2} - (k_{f} - q/2)^{2} exp(-(\frac{ 2\mbox{ }\pi\mbox{ }q }{m})(\frac{1}{v_{q}})) } { 1- exp(-(\frac{ 2\mbox{ }\pi\mbox{ }q }{m})(\frac{1}{v_{q}})) } } \label{ROOT} \end{equation} Therefore the final result may be written as, \begin{equation} \langle a^{\dagger}_{ {\bf{k}}^{'} }({\bf{q}}^{'})a_{ {\bf{k}} }({\bf{q}}) \rangle = (\frac{1}{V})\frac { \Lambda_{k}(-q)\Lambda_{k^{'}}(-q) \delta_{ q, q^{'} } } { (\omega_{R}(q)+ \omega_{k}(q))(\omega_{R}(q)+ \omega_{k^{'}}(q)) (\frac{m}{q^{2}})(\frac{1}{2 \pi k_{f}})2(\frac{m}{q})^{2}\omega_{R}(q) (cosh(\lambda(q))-1) } \end{equation} here, \begin{equation} \lambda(q) = (\frac{2 \pi q}{m})(\frac{1}{v_{q}}) \end{equation} \begin{equation} \omega_{R}(q) = (\frac{ |q| }{m}) \sqrt{ \frac{ (k_{f} + q/2)^{2} - (k_{f} - q/2)^{2}exp(-\lambda(q)) } { 1 - exp(-\lambda(q)) } } \end{equation} In other words, \[ \langle c^{\dagger}_{ k }c_{ k } \rangle = n_{F}(k) + (2\pi k_{f}) \int_{-\infty}^{+\infty} \mbox{ }\frac{ dq_{1} }{2\pi}\mbox{ } \frac{ \Lambda_{ k - q_{1}/2 }(-q_{1}) } { 2\omega_{R}(q_{1})(\omega_{R}(q_{1}) + \omega_{k - q_{1}/2}(q_{1}))^{2} (\frac{ m^{3} }{q_{1}^{4}})( cosh(\lambda(q_{1})) - 1 ) } \] \begin{equation} - (2\pi k_{f}) \int_{-\infty}^{+\infty} \mbox{ }\frac{ dq_{1} }{2\pi}\mbox{ } \frac{ \Lambda_{ k + q_{1}/2 }(-q_{1}) } { 2\omega_{R}(q_{1})(\omega_{R}(q_{1}) + \omega_{k + q_{1}/2}(q_{1}))^{2} (\frac{ m^{3} }{q_{1}^{4}})( cosh(\lambda(q_{1})) - 1 ) } \end{equation} Note that the above 
formula possesses a non-analytic dependence on the coupling strength through $ ( cosh(\lambda(q)) - 1 ) $, an unmistakable signature of a non-diagrammatic result. Next, we would like to provide formulas for the momentum distribution when we use the correct interacting expectation values in the fermi-bilinear/sea-boson correspondence. The results obtained from these formulas are likely to be very different from the weakly nonideal case, which in any case is not very interesting. The answers are given below, \begin{equation} {\bar{n}}_{ {\bf{k}} } = \frac{ n^{\beta}({\bf{k}}) }{S_{1}({\bf{k}}) } + \frac{ S_{2}({\bf{k}}) }{S_{1}({\bf{k}}) } \end{equation} here, \begin{equation} S_{1}({\bf{k}}) = 1 + \sum_{ {\bf{q}},i } \frac{ {\bar{n}}_{ {\bf{k}} - {\bf{q}} } } {({\tilde{\omega}}_{i}(-{\bf{q}}) + \frac{ {\bf{k.q}} }{m} - \epsilon_{ {\bf{q}} })^{2}} g^{2}_{i}(-{\bf{q}}) + \sum_{ {\bf{q}},i } \frac{ 1-{\bar{n}}_{ {\bf{k}} + {\bf{q}} } } {({\tilde{\omega}}_{i}(-{\bf{q}}) + \frac{ {\bf{k.q}} }{m} + \epsilon_{ {\bf{q}} })^{2}} g^{2}_{i}(-{\bf{q}}) \end{equation} \begin{equation} S_{2}({\bf{k}}) = \sum_{ {\bf{q}},i } \frac{ {\bar{n}}_{ {\bf{k}} - {\bf{q}} } } {({\tilde{\omega}}_{i}(-{\bf{q}}) + \frac{ {\bf{k.q}} }{m} - \epsilon_{ {\bf{q}} })^{2}} g^{2}_{i}(-{\bf{q}}) \end{equation} Also, the form of the ``RPA'' dielectric function and its zeros $ {\tilde{\omega}}_{i}({\bf{q}}) $ are now different. The ``RPA'' dielectric function is given by, \begin{equation} \epsilon_{RPA}({\bf{q}},\omega) = 1 + \frac{ v_{{\bf{q}}} }{V} \sum_{ {\bf{k}} }\frac{ {\bar{n}}_{ {\bf{k}} + {\bf{q}}/2 } - {\bar{n}}_{ {\bf{k}} - {\bf{q}}/2 } } {\omega - \frac{ {\bf{k.q}} }{m}} \end{equation} \begin{equation} g_{i}({\bf{q}}) = [\sum_{ {\bf{k}} } \frac{ {\bar{n}}_{ {\bf{k}} - {\bf{q}}/2 }-{\bar{n}}_{ {\bf{k}} + {\bf{q}}/2 } } {({\tilde{\omega}}_{i}({\bf{q}}) - \frac{ {\bf{k.q}} }{m})^{2}}] ^{-\frac{1}{2}} \end{equation} The commutators are given as before, except for three changes.
In the new approach, \begin{equation} \Lambda_{ {\bf{k}} }({\bf{q}}) = \sqrt{ {\bar{n}}_{ {\bf{k}}+{\bf{q}}/2 } (1-{\bar{n}}_{ {\bf{k}}-{\bf{q}}/2 }) } \end{equation} Next, the zeros are slightly different. The collective mode has to be computed self-consistently, whereas the particle-hole mode may be written down as described earlier, \begin{equation} {\tilde{\omega}}_{i}( {\bf{q}} ) = \omega_{ {\bf{k}}_{i} }({\bf{q}}) + (\frac{ v_{ {\bf{q}} } }{V}) \frac{ \Lambda^{2}_{ {\bf{k}}_{i} }(-{\bf{q}}) } { 1 + \frac{ v_{ {\bf{q}} } }{V} \sum_{ {\bf{k}} }\frac{ \Lambda^{2}_{ {\bf{k}} }({\bf{q}}) } { \omega_{ {\bf{k}}_{i} }({\bf{q}}) + \omega_{ {\bf{k}} }(-{\bf{q}}) } - \frac{ v_{ {\bf{q}} } }{V} \sum_{ {\bf{k}} \neq {\bf{k}}_{i}}\frac{ \Lambda^{2}_{ {\bf{k}} }(-{\bf{q}}) } { \omega_{ {\bf{k}}_{i} }({\bf{q}}) - \omega_{ {\bf{k}} }({\bf{q}}) } } \end{equation} The last change is in the form of $ U_{0}({\bf{q}}) $; here we have to make sure that we use the finite-temperature noninteracting values. The other issue of interest is whether the momentum distribution evaluated using the fermi-bilinear/sea-boson correspondence is the same as that suggested by the full propagator. We have found that this question is a difficult one, and that the answer is probably in the negative. This does not mean that the whole program is wrong. Some comfort and confidence in these manipulations may still be retained by demonstrating that the expression for the number operator is consistent with the RPA-form of the fermi creation operator. Again here, we have to be content with a weak form of this requirement. We take the point of view that it is sufficient to show that the commutator between the total momentum of the electrons and the field operator comes out the same in both the original fermi language and in the sea-boson language.
The total momentum of the electrons has the form, \begin{equation} {\bf{P}} = \sum_{ {\bf{k}} }{\bf{k}} \mbox{ }c^{\dagger}_{ {\bf{k}} }c_{ {\bf{k}} } \end{equation} In the sea-boson language, it takes the form, \begin{equation} {\bf{P}} = \sum_{ {\bf{k}}, {\bf{q}} }{\bf{q}} \mbox{ }a^{\dagger}_{ {\bf{k}} }({\bf{q}}) a_{ {\bf{k}} }({\bf{q}}) \end{equation} Therefore in the original fermi language we have, \begin{equation} [{\bf{P}}, \psi^{\dagger}({\bf{x}})] = i\mbox{ }\nabla \mbox{ }\psi^{\dagger}({\bf{x}}) \end{equation} In the sea-boson language we have, \[ [{\bf{P}}, \psi^{\dagger}({\bf{x}})] = \sum_{ {\bf{k}}, {\bf{q}} }{\bf{q}} [a^{\dagger}_{ {\bf{k}} }({\bf{q}})a_{ {\bf{k}} }({\bf{q}}), \psi^{\dagger}({\bf{x}})] \] \[ = \sum_{ {\bf{k}}, {\bf{q}} }{\bf{q}}a^{\dagger}_{ {\bf{k}} }({\bf{q}}) (-g_{ {\bf{k}}, {\bf{q}} }({\bf{x}}))\psi^{\dagger}({\bf{x}}) + \sum_{ {\bf{k}}, {\bf{q}} }{\bf{q}} (-g^{*}_{ {\bf{k}}, {\bf{q}} }({\bf{x}})) \psi^{\dagger}({\bf{x}}) a_{ {\bf{k}} }({\bf{q}}) = i\mbox{ }\nabla\mbox{ }\psi^{\dagger}({\bf{x}}) \] as it should be. All this points to the fact that the answers for the momentum distribution and propagators should not be taken too literally; rather, one must be content with the qualitative predictions, which are most likely accurate even though they seem to contradict conventional wisdom. \begin{center} APPENDIX E \end{center} In the late 1970s and early 1980s, attempts were made to write down field theories that describe scalar mesons in terms of observables like currents and densities rather than the creation and annihilation operators. The motivation for doing this stems from the fact that a theory cast directly in terms of observables is more physically intuitive than the more traditional approach based on raising and lowering operators on the Fock space.
This attempt, however, raised a number of technical questions, among them the question of how to make sense of the various identities connecting, say, the kinetic energy density to the currents and particle densities and so on. Elaborate mathematical machinery was erected by the authors who started this line of research\cite{Sharp} to address these issues. However, it seems that gaps still remain, especially with regard to the crucial question of how one goes about writing down a formula for the annihilation operator (fermi or bose) alone in terms of bilinears like currents and densities. The bilinears in question, namely currents and densities, satisfy a closed algebra known as the current algebra \cite{Sharp}. This algebra is insensitive to the nature of the statistics of the underlying fields. On the other hand, if one desires information about single-particle properties, it is necessary to relate the annihilation operator (whose commutation rules determine the statistics) to bilinears like currents and densities. That such a correspondence is possible was demonstrated by Goldin, Menikoff and Sharp \cite{Sharp}. However, they have not explicitly written down such a formula, nor have they clarified some important issues, such as whether this formula changes when one considers interacting fields rather than free fields. The general belief\cite{REFEREE} is that these formulas are different for interacting fields. It is shown here that this is in fact not the case: interactions in the system merely cause a change in the hamiltonian but do not affect how the annihilation operator is related to local currents and densities. The attempts made here are partly based on the work of Goldin et al.\cite{Sharp}, on that of Ligouri and Mintchev on generalised statistics\cite{Lig}, and on the series by Reed and Simon\cite{Reed}. As has been demonstrated earlier, for the bose case we had to choose $ \Phi = 0 $. We argued that this choice was unique.
In the fermi case the choice was different but was also unique due to the necessity of recovering the free theory. In this section, we write down a mathematically rigorous statement of this uniqueness criterion. This exercise also settles the issue regarding the delicate question of multiplying two operator-valued distributions at the same point and other related issues, like the meaning of the square-root of the density distribution. For this we prove the following lemma. \newline {\bf{Lemma}} Let $ {\mathcal{F}} $ be a smooth function from a bounded subset of the real line onto the reals. Also let $ f $ and $ g $ be smooth functions from some bounded subset of $ {\mathcal{R}}^{d} $ to the reals. Let us further assume that the ranges of these functions are such that it is always possible to form compositions such as $ {\mathcal{F}} \circ f $, and that these are also smooth functions with sufficiently large domains. They possess Fourier transforms since they are well-behaved. If, \begin{equation} {\mathcal{F}}(\mbox{ }f({\bf{x}})\mbox{ }) = g({\bf{x}}) \end{equation} and, \begin{equation} f({\bf{x}}) = \sum_{ {\bf{q}} }{\tilde{f}}_{ {\bf{q}} } \mbox{ }e^{ i\mbox{ }{\bf{q.x}} } \end{equation} \begin{equation} g({\bf{x}}) = \sum_{ {\bf{k}} }{\tilde{g}}_{ {\bf{k}} } \mbox{ }e^{ i\mbox{ }{\bf{k.x}} } \end{equation} then the following also holds, \begin{equation} [ {\mathcal{F}}(\mbox{ }\sum_{ {\bf{q}} } {\tilde{f}}_{ {\bf{q}} }T_{ -{\bf{q}} }({\bf{k}}) \mbox{ })] \mbox{ }\delta_{ {\bf{k}}, {\bf{0}} } = {\tilde{g}}_{ {\bf{k}} } \end{equation} where $ T_{ {\bf{q}} }({\bf{k}}) = exp({\bf{q}}.\nabla_{ {\bf{k}} }) $. Here the operator $ T_{ {\bf{q}} }({\bf{k}}) $ acts on the $ {\bf{k}} $ in the Kronecker delta on the extreme right; each application translates $ {\bf{k}} $ by an amount $ {\bf{q}} $. \newline {\bf{Proof}} \newline The proof is by brute-force expansion.
We know, \begin{equation} {\mathcal{F}}(y) = \sum_{n=0}^{\infty} \frac{ {\mathcal{F}}^{(n)}(0) }{n!} \mbox{ }y^{n} \end{equation} therefore, \[ {\mathcal{F}}( \mbox{ }f({\bf{x}}) \mbox{ }) = {\mathcal{F}}(0) + \sum_{n=1}^{\infty} \frac{ {\mathcal{F}}^{(n)}(0) }{n!} \mbox{ }\sum_{ \{ {\bf{q}}_{i} \} }{\tilde{f}}_{ {\bf{q}}_{1} } {\tilde{f}}_{ {\bf{q}}_{2} }...{\tilde{f}}_{ {\bf{q}}_{n} } exp(i(\sum_{i=1}^{n}{\bf{q}}_{i}).{\bf{x}}) \] \begin{equation} = \sum_{ {\bf{k}} }e^{i\mbox{ }{\bf{k.x}} }{\tilde{g}}_{ {\bf{k}} } \end{equation} This means (take inverse fourier transform), \[ {\mathcal{F}}(0)\delta_{ {\bf{k}}, 0 } + \sum_{n=1}^{\infty} \frac{ {\mathcal{F}}^{(n)}(0) }{n!} \mbox{ }\sum_{ \{ {\bf{q}}_{i} \} }{\tilde{f}}_{ {\bf{q}}_{1} } {\tilde{f}}_{ {\bf{q}}_{2} }...{\tilde{f}}_{ {\bf{q}}_{n} } \delta_{ ( {\bf{k}} - \sum_{i=1}^{n}{\bf{q}}_{i} ), {\bf{0}} } \] \begin{equation} = {\tilde{g}}_{ {\bf{k}} } \end{equation} This may also be cleverly rewritten as \[ {\mathcal{F}}(0)\delta_{ {\bf{k}}, 0 } + \sum_{n=1}^{\infty} \frac{ {\mathcal{F}}^{(n)}(0) }{n!} \mbox{ }\sum_{ \{ {\bf{q}}_{i} \} }{\tilde{f}}_{ {\bf{q}}_{1} } {\tilde{f}}_{ {\bf{q}}_{2} }...{\tilde{f}}_{ {\bf{q}}_{n} } T_{ -{\bf{q}}_{1} }({\bf{k}})T_{ -{\bf{q}}_{2} }({\bf{k}}) ...T_{ -{\bf{q}}_{n} }({\bf{k}}) \delta_{ {\bf{k}}, {\bf{0}} } \] \begin{equation} = {\tilde{g}}_{ {\bf{k}} } \end{equation} and therefore, \begin{equation} {\tilde{g}}_{ {\bf{k}} } = [{\mathcal{F}} (\mbox{ }\sum_{ {\bf{q}} }{\tilde{f}}_{ {\bf{q}} } T_{ -{\bf{q}} }({\bf{k}}) \mbox{ } )]\mbox{ }\delta_{ {\bf{k}}, {\bf{0}} } \end{equation} and the {\bf{Proof}} is now complete. Now we would like to capture the notion of the fermi density operator. Physicists define it to be $ \rho(x) = \psi^{*}(x) \psi(x) $. Multiplication of two fermi fields at the same point is a delicate issue and we would like to make more sense out of it. 
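The lemma just proved can be checked concretely. For a polynomial $ {\mathcal{F}} $, expanding $ ({\sum_{ {\bf{q}} } {\tilde{f}}_{ {\bf{q}} } T_{ -{\bf{q}} }({\bf{k}})})^{n}\, \delta_{ {\bf{k}}, {\bf{0}} } $ simply reproduces the Fourier convolution theorem. The snippet below verifies this for $ {\mathcal{F}}(y) = y^{2} $ and an arbitrarily chosen three-mode $ f $ in one dimension; all numerical choices are illustrative only.

```python
import cmath
import math

# Illustrative Fourier data: f(x) = sum_q f_q exp(iqx), three modes.
f_tilde = {-1: 0.5, 0: 1.0, 2: 0.25}

def g_via_translation(k):
    """g_k from the lemma with F(y) = y^2: the squared translation
    operator acting on delta_{k,0} collects every pair (q1, q2)
    with q1 + q2 = k, i.e. a Fourier convolution."""
    return sum(f_tilde[q1] * f_tilde[q2]
               for q1 in f_tilde for q2 in f_tilde if q1 + q2 == k)

def g_via_sampling(k, n=32):
    """g_k computed directly: sample g(x) = F(f(x)) = f(x)^2 on a
    uniform grid and project out the k-th Fourier coefficient."""
    total = 0.0 + 0.0j
    for j in range(n):
        x = 2.0 * math.pi * j / n
        fx = sum(c * cmath.exp(1j * q * x) for q, c in f_tilde.items())
        total += fx ** 2 * cmath.exp(-1j * k * x)
    return total / n
```

The two routes agree mode by mode, which is exactly the content of the lemma for this simple $ {\mathcal{F}} $.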
For this we have to specify our single-particle Hilbert space: \[ {\mathcal{H}} = L_{p}({\mathcal{R}}^{3}) {\bigotimes} {\mathcal{W}} \] Here, $ L_{p}({\mathcal{R}}^{3}) $ is the space of all periodic functions with period $ L $ in each space direction. $ {\mathcal{W}} $ is the spin space spanned by two vectors. An orthonormal basis for $ {\mathcal{W}} $ is \[ \{ \xi_{\uparrow}, \xi_{\downarrow} \} \] A typical element of $ {\mathcal{H}} $ is given by $ f({\bf{x}}) \bigotimes \xi_{\downarrow} $. A basis for $ {\mathcal{H}} $ is given by \[ {\mathcal{B}} = \{ \sqrt{ \frac{1}{L^{3}} } exp(i{\bf{q_{n}.x}}) \bigotimes \xi_{s} ; {\bf{n}} = (n_{1}, n_{2}, n_{3}) \in {\mathcal{Z}}^{3}, s \in \{ \uparrow, \downarrow \} \} \] We move on to the definition of the fermi-density distribution. The Hilbert space $ {\mathcal{H}}^{\bigotimes n} $ is the space of all $n$-particle wavefunctions with no symmetry restrictions. From this we may construct orthogonal subspaces \[ {\mathcal{H}}_{+}^{\bigotimes n} = P_{+} {\mathcal{H}}^{\bigotimes n} \] \[ {\mathcal{H}}_{-}^{\bigotimes n} = P_{-} {\mathcal{H}}^{\bigotimes n} \] Tensors from $ {\mathcal{H}}_{+}^{\bigotimes n} $ are orthogonal to tensors from $ {\mathcal{H}}_{-}^{\bigotimes n} $. The only exceptions are when $ n = 0 $ or $ n = 1 $. The space $ {\mathcal{H}}_{+}^{\bigotimes n} $ is the space of bosonic wavefunctions and the space $ {\mathcal{H}}_{-}^{\bigotimes n} $ is the space of fermionic wavefunctions. The definition of the fermi density distribution proceeds as follows.
Let $ v $ be written as \[ v = \sum_{\sigma \in \{ \uparrow, \downarrow \} }a(\sigma)\xi_{\sigma} \] The Fermi density distribution is an operator on the Fock space: given a vector $ f \bigotimes v \in \mathcal{H} $ in the single-particle Hilbert space, and a tensor $ \varphi $ in the $n$-particle subspace of $ \mathcal{F}(\mathcal{H}) $, there exists a corresponding operator $ \rho(f \bigotimes v) $ that acts as follows: \[ [\rho(f \bigotimes v) \varphi ]_{n} ({\bf{x_{1}}}\sigma_{1}, {\bf{x_{2}}}\sigma_{2}, .... , {\bf{x_{n}}}\sigma_{n}) = 0 \] if $ \varphi \in {\mathcal{H}}_{+}^{\bigotimes n} $ and \[ [\rho(f \bigotimes v) \varphi]_{n} ({\bf{x_{1}}}\sigma_{1}, {\bf{x_{2}}}\sigma_{2}, .... , {\bf{x_{n}}}\sigma_{n}) = \sum_{i=1}^{n} f({\bf{x_{i}}}) a(\sigma_{i}) \varphi_{n}({\bf{x_{1}}}\sigma_{1}, {\bf{x_{2}}}\sigma_{2}, .... , {\bf{x_{n}}}\sigma_{n}) \] when $ \varphi \in {\mathcal{H}}_{-}^{\bigotimes n} $. The physical meaning of this abstract operator will become clear soon. Let us now define the current density in an analogous fashion. To physicists, it is, \begin{equation} {\bf{J}}({\bf{x}}) = (\frac{1}{2i})[ \psi^{\dagger}(\nabla\psi) - (\nabla\psi)^{\dagger}\psi ] \end{equation} To mathematicians, it is an operator similar to the density \cite{Lig}. Given a typical element $ f \bigotimes v $ associated with the underlying single-particle Hilbert space, there is an operator denoted by $ J_{s}(f \bigotimes v) $, ( $ s = 1,2,3 $ ) that acts on a typical tensor from the $n$-particle subspace of the full Fock space as follows, \begin{equation} [J_{s}(f \bigotimes v)\varphi]_{n} ({\bf{x}}_{1}\sigma_{1}, {\bf{x}}_{2}\sigma_{2}, ... , {\bf{x}}_{n}\sigma_{n}) = 0 \end{equation} if $ \varphi \in {\mathcal{H}}^{\bigotimes n}_{+} $, and \[ [J_{s}(f \bigotimes v)\varphi]_{n} ({\bf{x}}_{1}\sigma_{1}, {\bf{x}}_{2}\sigma_{2}, ...
, {\bf{x}}_{n}\sigma_{n}) \] \begin{equation} = -i\sum_{k = 1}^{n} \{ f({\bf{x}}_{k})a(\sigma_{k})\nabla^{k}_{s} + \frac{1}{2} [\nabla^{k}_{s}f({\bf{x}}_{k})]a(\sigma_{k}) \} \varphi_{n} ({\bf{x}}_{1}\sigma_{1}, {\bf{x}}_{2}\sigma_{2}, ... , {\bf{x}}_{n}\sigma_{n}) \end{equation} if $ \varphi \in {\mathcal{H}}^{\bigotimes n}_{-} $. For the bosonic current it is the other way around. Having done all this, we would now like to formulate the DPVA more rigorously. First, some notation. As before, let $ g = \mbox{ }exp(i{\mbox{ }}{\bf{k}}_{m}.{\bf{x}})\bigotimes \xi_{r} $ (the square root of the volume is not needed as we want all operators in momentum space to be dimensionless). Then as before, \begin{equation} \psi({\bf{k}}_{m}r) = c(g) \end{equation} \begin{equation} \rho({\bf{k}}_{m}r) = \rho(g) \end{equation} \begin{equation} \delta \rho({\bf{k}}_{m}r) = \rho({\bf{k}}_{m}r) - N^{0}_{r} \delta_{ {\bf{k}}_{m}, {\bf{0}} } \end{equation} \begin{equation} j_{s}({\bf{k}}_{m}r) = {\bf{J}}_{s}(g) \end{equation} \begin{equation} \delta j_{s}({\bf{k}}_{m}r) = j_{s}({\bf{k}}_{m}r) \end{equation} Next, we write down another formula for the canonical conjugate. \begin{equation} \nabla\Pi({\bf{x}}\sigma) = (-1/\rho({\bf{x}}\sigma)){\bf{J}}({\bf{x}}\sigma) + \nabla\Phi([\rho];{\bf{x}}\sigma) - [-i\mbox{ }\Phi,\nabla\Pi] \end{equation} Then we have (bearing in mind here that we have distinguished between the c-number $ N^{0}_{r} $ and the operator $ \rho({\bf{0}}r) $ whose expectation value is $ N^{0}_{r} $).
\begin{equation} (i\mbox{ }{\bf{q}}_{m})\mbox{ }X_{ {\bf{q}}_{m}r } = -(\frac{1}{ N^{0}_{r} })\frac{1}{1 + \frac{1}{N^{0}_{r}} \sum_{ {\bf{k}}_{n} } \delta\rho({\bf{k}}_{n}r)T_{ {\bf{k}}_{n} }({\bf{q}}_{m} )} [ \mbox{ } \sum_{ {\bf{p}}_{n} }\delta {\bf{j}}({\bf{p}}_{n}r) T_{ {\bf{p}}_{n} }({\bf{q}}_{m} ) \mbox{ } ] \mbox{ } \delta_{ {\bf{q}}_{m}, {\bf{0}} } + \mbox{ }{\bf{F}}([\rho];{\bf{q}}_{m}r) \end{equation} where, \begin{equation} \sum_{ {\bf{q}}_{m} }exp(i{\mbox{ }}{\bf{q}}_{m}.{\bf{x}}) {\bf{F}}([\rho];{\bf{q}}_{m}r) = \nabla\Phi - [-i\mbox{ }\Phi,\nabla\Pi] \end{equation} As far as the object $ X_{ {\bf{0}}r } $ conjugate to the total number is concerned, we must retain it as it is, since it ensures that the commutator of the total number with the field operator gives the field operator itself rather than the incorrect answer zero. For $ {\bf{q}}_{m} \neq {\bf{0}} $, \[ X_{ {\bf{q}}_{m}r } = (\frac{1}{q_{m}^{2}})(\frac{ i }{N^{0}_{r}}) \frac{1}{1 + \frac{1}{N^{0}_{r}} \sum_{ {\bf{k}}_{n} } \delta\rho({\bf{k}}_{n}r)T_{ {\bf{k}}_{n} }({\bf{q}}_{m} )} [ \mbox{ } \sum_{ {\bf{p}}_{n} } {\bf{q}}_{m}.\delta {\bf{j}}({\bf{p}}_{n}r) T_{ {\bf{p}}_{n} }({\bf{q}}_{m} ) \mbox{ } ] \mbox{ } \delta_{ {\bf{q}}_{m}, {\bf{0}} } \] \begin{equation} - \frac{ i\mbox{ }{\bf{q}}_{m}.{\bf{F}}([\rho];{\bf{q}}_{m}r) } { q_{m}^{2} } \end{equation} In order to define $ X_{ {\bf{0}}r } $ in terms of fermi fields, we have to make use of the fact that this object does not commute with the total number of fermions. This means it cannot be expressed exclusively in terms of number-conserving fermi bilinears like currents and densities.
This will mean that we merely invert the formula in Eq.~(\ref{FIELDOP}) and solve for $ X_{ {\bf{0}}r } $ as, \[ X_{ {\bf{0}}r } = -\sum_{ {\bf{k}}_{m} \neq {\bf{0}} }X_{ {\bf{k}}_{m}r } + i\mbox{ } \sum_{ {\bf{k}}_{m} }\mbox{ }ln[ (\sqrt{ N_{r}^{0} } + \sum_{ {\bf{q_{n}}} } \delta{\mbox{ }}\psi({\bf{q_{n}}}r)T_{ -{\bf{q}}_{n} } ({\bf{k}}_{m}) ) (N_{r}^{0} + \sum_{ {\bf{q}}_{n} }\delta \rho({\bf{q}}_{n}r)T_{ {\bf{q}}_{n} } ({\bf{k}}_{m}))^{-\frac{1}{2}}exp(-i\mbox{ }\sum_{ {\bf{q}}_{n} } \] \begin{equation} \times \phi([\rho];{\bf{q}}_{n}r)T_{ {\bf{q}}_{n} }({\bf{k}}_{m}))] \mbox{ }\delta_{ {\bf{k}}_{m}, {\bf{0}} } \end{equation} Define an operator by the formal expansion that the formula itself suggests, \begin{equation} {\tilde{\psi}}({\bf{k}}_{n}r) = exp(-i{\mbox{ }} \sum_{ {\bf{q}}_{m} }T_{ -{\bf{q}}_{m} }({\bf{k}}_{n}) X_{ {\bf{q}}_{m}r }) \mbox{ }exp(i{\mbox{ }} \sum_{ {\bf{q}}_{m} }T_{ {\bf{q}}_{m} }({\bf{k}}_{n}) \phi([\rho];{\bf{q}}_{m}r)){\mbox{ }} (N^{0}_{r} + \sum_{ {\bf{q}}_{m} }\delta\rho({\bf{q}}_{m}r) T_{ {\bf{q}}_{m} }({\bf{k}}_{n}))^{\frac{1}{2}} \delta_{ {\bf{k}}_{n}, {\bf{0}} } \end{equation} We would now like to write down a statement that requires a proof. This conjecture, when proven, will vindicate the DPVA. \newline {\bf{Conjecture}} \newline There exists a unique functional $ \Phi([\rho];{\bf{x}}r) $ and a unique odd (for fermions, even for bosons) integer $ m $ such that the following recursion holds, \[ \Phi([\{\rho({\bf{y_{1}}}\sigma_{1}) - \delta({\bf{y_{1}}}-{\bf{x}}^{'})\delta_{\sigma_{1},\sigma^{'}} \} ] ;{\bf{x}}\sigma) \] \[ + \Phi([\rho];{\bf{x^{'}}}\sigma^{'}) - \Phi([\rho];{\bf{x}}\sigma) \] \begin{equation} -\Phi([\{\rho({\bf{y_{1}}}\sigma_{1}) - \delta({\bf{y_{1}}}-{\bf{x}})\delta_{\sigma_{1},\sigma} \} ] ;{\bf{x^{'}}}\sigma^{'}) = m\pi \end{equation} and has the following additional effects.
The domain of definition of $ {\tilde{\psi}}({\bf{k}}_{n}r) $ (in which the series expansion converges) is the same as that of $ \psi({\bf{k}}_{n}r) $, and it acts in the same way too. In other words, \begin{equation} {\tilde{\psi}}({\bf{k}}_{n}r) = \psi({\bf{k}}_{n}r) \end{equation} We know how the ingredients of $ {\tilde{\psi}}({\bf{k}}_{n}r) $, namely the current $ {\bf{j}}({\bf{k}}_{n}r) $ and the density $ \delta\rho({\bf{q}}_{n}r) $, act on typical elements of the Fock space, and we know how $ \psi({\bf{k}}_{n}r) $ acts on the Fock space; we just have to show that the complicated $ {\tilde{\psi}}({\bf{k}}_{n}r) $ acts in the same way as the simple $ \psi({\bf{k}}_{n}r) $. Moreover, this is true for a unique phase functional $ \Phi $. Lastly, we would like to defend the above ``Fourier gymnastics'' by pointing out that the real-space formulation is not well-defined, due to the fact that the line-integral that appears in the formulas is difficult to define; any attempt to define it is equivalent to the above approach. The other reason for attempting a rigorous formulation is the fact, well known to mathematicians, that it is not possible to have a self-adjoint canonical conjugate of a positive definite self-adjoint operator. Since $ \rho $ is positive definite, the natural question that arises is whether $ \Pi $ is self-adjoint. We take the naive physicist's approach to this issue, namely we allow for sign changes in $ \rho $ and argue that these merely amount to translating the phase functional $ \Phi $ by constant amounts, thus not altering the general framework. Within this framework, $ \Pi $ is indeed self-adjoint and all is well. It is also worth remarking that the overall conjugate $ \Pi $ has two contributions: one from the position-independent part $ X_{0\sigma} $, and the other from terms involving currents and densities. The latter contribution is manifestly self-adjoint.
The possible lack of self-adjointness of the overall conjugate stems from the canonical conjugate of the total number, which cannot be expressed in terms of fermi bilinears. \section{ACKNOWLEDGEMENTS} It is a pleasure to thank Prof. A. H. Castro-Neto and Prof. D. K. Campbell for providing important references and encouragement, and the former for useful discussions as well, and Prof. A. J. Leggett for giving his valuable time and advice on matters related to the pursuit of this work, for providing important references, and for useful discussions. Thanks are also due to Prof. Ilias E. Perakis for providing the authors with an important reference. Thanks are also due to Prof. P.W. Anderson for critically evaluating an early version of this article. Last but not least, thanks are due to Dr. S. Chitanvis for correcting the authors' misreading of the Lieb-Mattis solution. This work was supported in part by ONR N00014-90-J-1267 and the University of Illinois, Materials Research Laboratory under grant NSF/DMR-89-20539 and in part by the Dept. of Physics at University of Illinois at Urbana-Champaign. The authors may be contacted at the e-mail address setlur@mrlxpa.mrl.uiuc.edu. \newpage
\section{Introduction}\label{sec:intro} When very high energy cosmic rays interact in the stratosphere, mesons are produced in the primary hadronic interaction. These mesons either interact and produce lower energy hadronic cascades, or decay into high energy muons which can be observed deep underground. While the temperature of the troposphere varies considerably within the day, the temperature of the stratosphere remains nearly constant, usually changing on the timescale of seasons (with the exception of the occasional Sudden Stratospheric Warming~\cite{Osprey:2009}). An increase in temperature of the stratosphere causes a decrease in density. This reduces the chance of meson interaction, resulting in a larger fraction decaying to produce muons and hence a higher muon rate observed deep underground~\cite{Barrett:1952,Ambrosio:1997tc,Bouchta:1999kg}. The majority of muons detected in the MINOS far detector are produced in the decay of pions, although the decays of kaons must be considered for a more complete description of the flux~\cite{Adamson:2007ww}. MINOS is a long baseline neutrino oscillation experiment~\cite{Adamson:2007gu,Adamson:2005qc}, with a neutrino source and near detector at Fermi National Accelerator Laboratory in Batavia, IL, USA, and a far detector at the Soudan Underground Mine State Park in northern Minnesota, USA. This paper describes cosmic ray data taken in the far detector, a scintillator and steel tracking calorimeter located \unit[0.72]{km} underground (\unit[2080]{mwe}, meters water equivalent)~\cite{MinosNIM}. It has a \unit[5.4]{kton} mass and a $\unit[6.91 \times 10^6] {cm^2 sr}$~\cite{Rebel:2004mm} acceptance. Because of its depth, MINOS detects cosmic-ray muons with surface energy \unit[E$_{\mu}>$0.73]{TeV}. These high energy muons are mostly the result of the decays of the mesons produced in the primary hadronic interaction.
This, coupled with the large acceptance, makes it possible to detect small seasonal temperature fluctuations in the upper atmosphere. The far detector is the deepest underground detector with a magnetic field, allowing the separation of particles by charge. The MINOS data are correlated with atmospheric temperature measurements at the Soudan site provided by the European Centre for Medium-Range Weather Forecasts (ECMWF)~\cite{ECMWF}. This temperature data set has higher precision than any other used for the seasonal variation analysis~\cite{Barrett:1952,Bouchta:1999kg,Ambrosio:2002db,Cini:1967,Humble:1979,Cutler:1981,Sherman:1954,Fenton:1961,Andreyev:1987}. The 67.32 million muon events used in this analysis were collected over five years, from August~1,~2003 to July~31,~2008, a period that includes five complete annual cycles. The seasonal variations in muon intensity were compared to a theoretical model which extends the pion-only model of~\cite{Ambrosio:1997tc} to include the contribution from kaons. \section{The Experimental Effective Temperature Coefficient}\label{sec:expint} \subsection{Experimental Intensity} The underground muon intensity depends on the threshold energy $E_\mathrm{th}$ and the cosine of the zenith angle $\theta$. The variation in underground muon intensity as a function of temperature was derived following the formalism of~\cite{Barrett:1952,Ambrosio:1997tc}. The change in the surface muon intensity, $\Delta I_{\mu}(E,\cos \theta)$, occurring at the MINOS far detector site can be written as: \begin{equation} \Delta I_{\mu} = \int_0^{\infty} \mathrm{d}X W(X) \Delta T(X) \end{equation} where $\Delta T(X)$ is the change in atmospheric temperature at atmospheric depth $X$, and the weight $W(X)$ reflects the temperature dependence of the production of mesons in the atmosphere and their decay into muons that can be observed in the far detector.
A temperature coefficient $\alpha(X)$ can be defined as: \begin{equation} \alpha(X) = \frac{T(X)}{I_{\mu}^0}W(X), \end{equation} where $I^{0}_{\mu}$ is the muon intensity evaluated at a given value of atmospheric temperature $T_0$. The phenomenological relationship between the atmospheric temperature fluctuations and muon intensity variations can now be written as \begin{equation}\label{eq:alpha} \frac{\Delta I_{\mu}}{I^0_{\mu}} = \int_0^{\infty} \mathrm{d}X \alpha(X) \frac{\Delta T(X)}{T_0}. \end{equation} The atmosphere consists of many levels that vary continuously in both temperature and pressure. To simplify calculations, the atmosphere is approximated by an isothermal body with an effective temperature, $T_\mathrm{eff}$, obtained from a weighted average over the atmospheric depth: \begin{equation}\label{eq:teff} T_\mathrm{eff} = \frac{\int_0^{\infty} \mathrm{d}X T(X) W(X)} {\int_0^{\infty} \mathrm{d}X W(X) } . \end{equation} An ``effective temperature coefficient'', $\alpha_{T}$ can then be defined \begin{equation}\label{eq:teff_coeff} \alpha_T = \frac{T_\mathrm{eff}}{I_{\mu}^0}\int_0^{\infty} \mathrm{d}X W(X). \end{equation} With these definitions in place, the relationship between atmospheric temperature fluctuations and muon intensity variations can now be written as: \begin{equation}\label{eq:temp_coeff} \frac{\Delta I_{\mu}}{ I^0_{\mu}} = \alpha_{T} \frac{\Delta T_\mathrm{eff}}{T_\mathrm{eff}} . \end{equation} The configuration and geometric acceptance of the far detector remain constant over time. Therefore, the rate, $R_{\mu}$ of muons observed in the detector is proportional to the incident muon intensity and varies with the effective atmospheric temperature as follows: \begin{equation} \frac{\Delta R_{\mu}}{\left < R_{\mu} \right >}~=~\alpha_{T} \frac{\Delta T_\mathrm{eff}}{\left < T_\mathrm{eff} \right >} . \end{equation} In practice, the observed muon rates and the temperature data are averaged over the period of a day. 
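As a back-of-the-envelope illustration of Eq.~\ref{eq:temp_coeff}, the sketch below (Python) propagates a temperature deviation into a rate deviation; $\alpha_T$, $\left<T_\mathrm{eff}\right>$, and $\left<R_{\mu}\right>$ are the values obtained later in this analysis, while the 2~K excursion is hypothetical.

```python
# Propagate a temperature excursion into a muon-rate change via
# Delta R / <R> = alpha_T * Delta T_eff / <T_eff>.
# alpha_T, <T_eff>, and <R_mu> are the values reported in this analysis;
# the 2 K excursion is hypothetical.
alpha_T = 0.873        # effective temperature coefficient
T_eff_mean = 221.93    # K, five-year mean effective temperature
dT_eff = 2.0           # K, assumed temperature excursion

frac_rate_change = alpha_T * dT_eff / T_eff_mean  # Delta R / <R>

R_mean = 0.4692                    # Hz, mean single-muon rate
dR = frac_rate_change * R_mean     # Hz, corresponding rate change
```

A 2~K swing in $T_\mathrm{eff}$ thus corresponds to a rate change of roughly 0.8\%, comparable in size to the daily statistical uncertainty of order 0.5\% quoted below.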
The effective temperature is obtained from a weighted average of temperature measurements obtained at a set of discrete pressure levels. The weight $W(X)$ can be written as the sum $W^{\pi} + W^{K}$, representing the contribution of pions and kaons to the overall variation in muon intensity. The weights $W^{\pi,K}$ are given by~\cite{Grashorn:2008dis,Grashorn:2009sm}: \begin{equation}\label{eq:wval} W^{\pi,K}(X)\simeq \frac{ (1-X/ \Lambda_{\pi,K} ')^2 e^{-X/\Lambda_{\pi,K}} A^1_{\pi,K} }{\gamma +\left( \gamma + 1 \right) B^1_{\pi,K} K(X)\left(\left<E_\mathrm{th}\cos\theta\right> / \epsilon_{\pi,K}\right)^2}, \end{equation} where \begin{equation} K(X) \equiv \frac{(1- X / \Lambda_{\pi,K} ')^2}{(1-e^{-X / \Lambda_{\pi,K} '}) \Lambda_{\pi,K} ' /X}. \end{equation} The parameters $A^{1}_{\pi,K}$ include the amount of inclusive meson production in the forward fragmentation region, the masses of mesons and muons, and the muon spectral index; the input values are $A^{1}_{\pi}$~=~1 and $A^{1}_{K}$~=~$0.38\cdot r_{K/\pi}$~\cite{Grashorn:2008dis,Grashorn:2009sm}, where $r_{K/\pi}$ is the K/$\pi$ ratio. The parameters $B^{1}_{\pi,K}$ reflect the relative atmospheric attenuation of mesons. The threshold energy, $E_\mathrm{th}$, is the energy required for a muon to survive to a particular depth. The attenuation lengths for the cosmic ray primaries, pions and kaons are $\Lambda_{N}$, $\Lambda_{\pi}$ and $\Lambda_{K}$, respectively, with $1/\Lambda^{'}_{\pi,K} \equiv 1/\Lambda_{N} - 1/\Lambda_{\pi,K}$. The muon spectral index is given by $\gamma$. The meson critical energy, $\epsilon_{\pi, K}$, is the meson energy for which decay and interaction have an equal probability. Since the distribution has a long tail (Fig.~\ref{fig:Spectra}), the value of $\left<E_\mathrm{th} \cos \theta\right>$ used here is the median. The values for these parameters can be found in Table~\ref{tab:values}.
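The weights and the effective temperature can be evaluated numerically. The sketch below implements $W^{\pi,K}(X)$ as written above, using the Table~\ref{tab:values} parameters (with $\left<E_\mathrm{th}\cos\theta\right>$~=~0.795~TeV as quoted in the text) and a discretized form of Eq.~\ref{eq:teff}; the temperature profile used here is hypothetical, not ECMWF data.

```python
import math

# Parameter values from Table I; ECOS is the median E_th*cos(theta).
GAMMA = 1.7          # muon spectral index
ECOS = 0.795         # TeV
R_KPI = 0.149        # atmospheric K/pi ratio
LAMBDA_N = 120.0     # g/cm^2, primary attenuation length

# (A1, B1, Lambda, epsilon) for pions and kaons
PION = (1.0, 1.460, 180.0, 0.114)
KAON = (0.38 * R_KPI, 1.740, 160.0, 0.851)

def lam_prime(lam):
    # 1/Lambda' = 1/Lambda_N - 1/Lambda_{pi,K}
    return 1.0 / (1.0 / LAMBDA_N - 1.0 / lam)

def k_factor(x, lp):
    # K(X) as defined above
    return (1.0 - x / lp) ** 2 / ((1.0 - math.exp(-x / lp)) * lp / x)

def weight(x, meson):
    # W^{pi,K}(X): temperature sensitivity of meson production/decay at depth X
    a1, b1, lam, eps = meson
    lp = lam_prime(lam)
    num = (1.0 - x / lp) ** 2 * math.exp(-x / lam) * a1
    den = GAMMA + (GAMMA + 1.0) * b1 * k_factor(x, lp) * (ECOS / eps) ** 2
    return num / den

def w_total(x):
    return weight(x, PION) + weight(x, KAON)

# Discrete effective temperature: weighted average over a few levels.
# The (depth in g/cm^2, temperature in K) pairs below are hypothetical.
levels = [(1.0, 240.0), (10.0, 225.0), (50.0, 215.0), (100.0, 220.0),
          (300.0, 230.0), (500.0, 245.0), (700.0, 260.0), (1000.0, 275.0)]
xs = [x for x, _ in levels]
dxs = [xs[0]] + [xs[i] - xs[i - 1] for i in range(1, len(xs))]

num = sum(dx * t * w_total(x) for dx, (x, t) in zip(dxs, levels))
den = sum(dx * w_total(x) for dx, (x, t) in zip(dxs, levels))
t_eff = num / den   # K, toy effective temperature
```

Because the weights fall off steeply with depth, the toy $T_\mathrm{eff}$ is pulled toward the upper-atmosphere temperatures, mirroring the behavior discussed below.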
\begin{table}[!h] \caption {Input W(X) parameter values.} \label{tab:values} \begin{tabular}{l l} \hline\hline \hspace*{15pt}\textbf{Parameter} & \textbf{Value} \hspace*{15pt} \\\hline \hspace*{15pt}$A^1_{\pi}$ & 1~\cite{Grashorn:2008dis,Grashorn:2009sm} \\ \hspace*{15pt}$A^1_{K}$ & $0.38\cdot r_{K/\pi}$~\cite{Grashorn:2008dis,Grashorn:2009sm}\\ \hspace*{15pt}$r_{K/\pi}$ & 0.149~\cite{Gaisser:1990vg}~$\pm$~0.06~\cite{Barr:2006it}\\ \hspace*{15pt}$B^1_{\pi} $ & 1.460$\pm$ 0.007~\cite{Grashorn:2008dis,Grashorn:2009sm}\\ \hspace*{15pt}$B^1_{K} $ & 1.740 $\pm$ 0.028~\cite{Grashorn:2008dis,Grashorn:2009sm} \\ \hspace*{15pt}$\Lambda_{N}$ & \unit[120]{g/cm$^{2}$}~\cite{Gaisser:1990vg} \\ \hspace*{15pt}$\Lambda_{\pi}$ & \unit[180]{g/cm$^{2}$}~\cite{Gaisser:1990vg} \\ \hspace*{15pt}$\Lambda_{K}$ & \unit[160]{g/cm$^{2}$}~\cite{Gaisser:1990vg}\\ \hspace*{15pt}$\left<E_\mathrm{th} \cos \theta\right>$\hspace*{15pt} &\unit[0.795$\pm$0.14]{TeV}\\ \hspace*{15pt}$\gamma$ & 1.7$\pm$0.1~\cite{Adamson:2007ww} \\ \hspace*{15pt}$\epsilon_{\pi}$ & \unit[0.114$\pm$0.003]{TeV}~\cite{Grashorn:2008dis,Grashorn:2009sm}\\ \hspace*{15pt}$\epsilon_{K}$ & \unit[0.851$\pm$0.014]{TeV}~\cite{Grashorn:2008dis,Grashorn:2009sm}\\ \hline \hline \end{tabular} \end{table} Since the temperature is measured at discrete levels, the integral is represented by a sum over the atmospheric levels $X_n$: \begin{equation}\label{eq:teff_exp} T_\mathrm{eff} \simeq \frac{\sum_{n=0}^{N} \Delta X_n T(X_n)\left(W_n^{\pi}+W_n^{K}\right)} {\sum_{n=0}^{N} \Delta X_n \left(W_n^{\pi}+W_n^{K}\right)} \end{equation} where $W^{\pi,K}_n $ is $W^{\pi,K} $ evaluated at $X_n$. The temperature and pressure vary continuously through the atmosphere. Fig.~\ref{fig:temp_profile} (solid line) shows the average temperature from 2003-2008 above Soudan as a function of pressure level in the atmosphere~\cite{ECMWF}.
The height axis on the right represents the average log-pressure height, the height of a pressure level relative to the surface pressure, corresponding to the average temperatures plotted here. The dashed line is the weight as a function of pressure level $W(X)$, obtained from Eq.~\ref{eq:wval} and normalized to one, used to calculate the effective temperature. \begin{figure}[!h] \begin{center} \includegraphics[width=0.5\textwidth]{temp_profile_ECMWF} \end{center} \caption[Temperature Profile]{\label{fig:temp_profile}The five year average temperature at various pressure levels (solid line). The range is from \unit[1000]{hPa} (\unit[1]{hPa}~=~\unit[1.019]{g/cm$^2$}), near Earth's surface, to \unit[1]{hPa} (nearly \unit[50]{km}), near the top of the stratosphere. The height axis on the right represents the average log-pressure height corresponding to the average temperatures plotted here. The dashed line is the weight as a function of pressure level ($X$) used to find $T_\mathrm{eff}$. The weights are determined by Eq.~\ref{eq:wval}, normalized to one. } \end{figure} The dashed weight curve in Fig.~\ref{fig:temp_profile} shows that the temperature fluctuations higher in the atmosphere have a greater effect on the production of muons visible at a depth of \unit[2100]{mwe}. High energy mesons produced at the top of the atmosphere are more likely to decay, producing muons visible to MINOS, than those produced lower in the atmosphere. Note that the expression used to calculate $T_\mathrm{eff}$ in the pion scaling limit, ignoring the kaon contribution, is the same as the MACRO calculation~\cite{Ambrosio:1997tc}. The effective temperature coefficient, $\alpha_{T}$, is a function of both the muon threshold energy and the K/$\pi$ ratio. As the energy increases, the muon intensity becomes more dependent on the meson critical energy, which in turn is proportional to the atmospheric temperature. 
The effective temperature coefficient thus reflects the fraction of mesons that are sensitive to atmospheric temperature variations, and for energies much greater than the critical energy, the value of $\alpha_{T}$ approaches unity. At the depth of the MINOS far detector, the vertical muon threshold energy lies between the pion and kaon critical energies. Therefore, because the muon energy is close to the parent meson's energy, a larger K/$\pi$ ratio results in a smaller value of $\alpha_{T}$. \begin{figure}[!h] \begin{center} \includegraphics[width=0.5\textwidth]{dt} \end{center} \caption[Time Between Consecutive Muon Arrivals]{\label{fig:dt}The time between consecutive cosmic ray muon arrivals, fit with a Poisson distribution. The fit gives $\chi^2/N_{DoF}~=~55.2/68$; $\left<R_{\mu}\right>$~=~\unit[0.4692~$\pm$~0.0001]{Hz} (from slope). The Poissonian nature of the muon arrival times demonstrates the absence of short timescale systematic effects on the data.} \end{figure} \subsection{The Data}\label{sub:data} The muon data for this analysis were accumulated over a five year span, beginning on August 1, 2003. Data quality cuts were performed to ensure a clean sample of muons (Pre-Analysis cuts)~\cite{Adamson:2007ww} \begin{enumerate} \item Require that all detector readout and sub-systems were functioning normally \item Remove runs with anomalous cosmic ray rates, greater than \unit[1]{Hz} \item Remove events that had many hits assigned to incorrect channels (properly demultiplexed~\cite{Adamson:2007ww}) \item Remove muons induced by NuMI beam interactions with timing cuts~\cite{Adamson:2007gu}. \end{enumerate} After all cuts were applied the initial sample of 68.66 million muons was reduced to 67.32 million muons~\cite{Grashorn:2008dis}. A plot of the time between consecutive muon arrivals in the MINOS data is shown in Fig.~\ref{fig:dt}. 
The distribution is well described by a Poisson distribution~\cite{Ahlen:1991yh,Grashorn:2008dis} with mean rate $\left<R_{\mu}\right>$~=~\unit[0.4692~$\pm$~0.0001]{Hz}, demonstrating the absence of short-timescale systematic effects on the data. The average muon rate was calculated for each day by dividing the number of observed muons by the detector livetime. The energy spectra for the observed muons can be seen in Fig.~\ref{fig:Spectra}. \begin{figure}[!h] \begin{center} \includegraphics[width=0.5\textwidth]{EminEsBoth} \caption[Energy Spectra]{\label{fig:Spectra} A plot of the energy spectra observed in the far detector. The dashed line is $E_\mathrm{th}\cos\theta$, which was used to determine the value used in Eq.~\ref{eq:wval}. The solid line is the distribution of muon surface energies, $E_{\mu}$, in the far detector. Also shown are $E_\mathrm{th}\cos\theta(CS)$ (dot-dash line), the distribution of $E_\mathrm{th}\cos\theta$ after charge-separation cuts have been applied, and $E_{\mu}(CS)$ (dotted line), the distribution of $E_{\mu}$ after the charge-separation cuts (see Sec.~\ref{sub:chargesep}) have been applied. Note that the charge-separation cuts have been applied, but the distributions shown include both muon species. } \end{center} \end{figure} The dashed line is $E_\mathrm{th}\cos\theta$, which was used to determine the value used in Eq.~\ref{eq:wval}; the solid line, the distribution of muon surface energies $E_{\mu}$, has a much longer tail than the distribution of threshold energies.
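For a Poisson process, the inter-arrival times discussed above are exponentially distributed with mean $1/\left<R_{\mu}\right>$. A toy simulation (not detector data) illustrates the expected mean spacing of roughly 2.1~s and the corresponding daily yield of $\sim$40,000 muons:

```python
import random

# Toy simulation of Poisson muon arrivals at the quoted mean rate.
# This is synthetic data, only meant to illustrate the exponential
# inter-arrival distribution; it is not MINOS data.
random.seed(42)
RATE = 0.4692  # Hz, mean single-muon rate

dts = [random.expovariate(RATE) for _ in range(200_000)]
mean_dt = sum(dts) / len(dts)       # seconds, should approach 1/RATE ~ 2.13 s

muons_per_day = RATE * 86400.0      # expected daily yield, ~40,000 muons
```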
The $E_\mathrm{th}\cos\theta$ distribution is peaked, with a median value $\left<E_\mathrm{th}\cos\theta\right>$\unit[=0.795$\pm$0.14]{TeV}. This distribution, with its rapid fall-off, reflects the rock overburden surrounding the far detector. The temperature data for the Soudan site were obtained from the ECMWF, which collates a number of different types of observations (e.g. surface, satellite and upper air sounding) at many locations around the globe, and uses a global atmospheric model to interpolate to a particular location. For this analysis, the ECMWF model produced atmospheric temperatures at 21 discrete pressure levels: 1000, 925, 850, 700, 500, 400, 300, 250, 200, 150, 100, 70, 50, 30, 20, 10, 7, 5, 3, 2 and \unit[1]{hPa} (\unit[1]{hPa}~=~\unit[1.019]{g/cm$^2$}), at four times, \unit[0000]{h}, \unit[0600]{h}, \unit[1200]{h} and \unit[1800]{h} each day. The effective temperature, $T_\mathrm{eff}$, was calculated four times each day using Eq.~\ref{eq:teff_exp}. A mean value, $\left<T_\mathrm{eff}\right>$, and its uncertainty were obtained from these four daily measurements. The ECMWF temperature data were cross-checked using the Integrated Global Radiosonde Archive (IGRA) of temperature measurements~\cite{IGRA:2006}. The distribution of the differences between ECMWF and IGRA temperature values at International Falls, MN, was well described by a Gaussian distribution with $\sigma$~=~\unit[0.31]{K}. Fig.~\ref{fig:seasonal} shows the percentage deviation in the mean daily muon rate, $\Delta R_{\mu}$, over the entire set of data, with statistical error bars. A typical day at $\unit[\left<R_\mu \right>~=~0.4692]{Hz}$ yields $\sim$40,000 muons, resulting in error bars of order 0.5\%. The variation with season can be seen, with maxima in August and minima in February. These maxima peak at rates that are within 0.5\% of each other.
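Combining the four-times-daily evaluations of $T_\mathrm{eff}$ described above into a daily value amounts to a simple mean and standard error; a minimal sketch with hypothetical temperatures:

```python
import math
import statistics

# Daily combination of the four T_eff evaluations (00h, 06h, 12h, 18h).
# The four temperatures below are hypothetical, for illustration only.
daily = [221.4, 222.1, 222.8, 221.9]                         # K
t_eff_day = statistics.mean(daily)                           # daily mean
t_eff_err = statistics.stdev(daily) / math.sqrt(len(daily))  # standard error
```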
\begin{figure}[!h] \begin{center} \includegraphics[width=0.5\textwidth]{SeasonalEffect} \caption[Daily Muon Rate]{\label{fig:seasonal}The daily deviation from the mean rate of cosmic ray muon arrivals from 8/03-8/08, shown here with statistical error bars. The periodic fluctuations have the expected maxima in August, minima in February. The hatched region indicates the period of time when the detector ran with the magnetic field reversed from the normal configuration.} \includegraphics[width=0.5\textwidth]{TempDist} \caption[Daily Effective Temperature]{\label{fig:teff}The daily deviation from the mean effective temperature over a period of five years, beginning when the far detector was complete, 08/03-08/08. The hatched region indicates the period of time when the detector ran with the magnetic field reversed from the normal configuration.} \end{center} \end{figure} For the five year period $\left<T_\mathrm{eff}\right>~=~\unit[221.93]{K}$. The distribution of $\Delta T_\mathrm{eff}$ over the data period can be seen in Fig.~\ref{fig:teff}; it exhibits a strong periodic seasonal correlation with the muon rate data. There is also striking correspondence between Fig.~\ref{fig:seasonal} and Fig.~\ref{fig:teff} for short-term maxima and minima over a few days' span. A plot of $\Delta R_{\mu}/\left<R_{\mu}\right>(\Delta T_\mathrm{eff})$ was produced (Fig.~\ref{fig:RTCorrP}) for each day's $\Delta R_{\mu}$ and $\Delta T_\mathrm{eff}$ data to quantify the daily correlation between rate and temperature. To find the value for $\alpha_T$, a linear regression was performed using the MINUIT~\cite{James:1975dr} fitting package. This package performs a linear regression accounting for error bars on both the x and y axes using a numerical minimization method. The result of this fit is a slope of $\alpha_T~=~0.873~\pm~0.009$ (statistical errors only), and the correlation coefficient (R-value) between these two distributions is 0.90.
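The essence of the fit just described can be sketched as a through-origin least-squares slope. MINUIT additionally accounts for $x$-axis errors; the simplified version below uses $y$ errors only, applied to synthetic data generated with a true slope of 0.873 (not the MINOS data set).

```python
import random

# Through-origin least squares: for y = alpha * x with uniform y errors,
# the best-fit slope has the closed form sum(x*y) / sum(x*x).
# The data here are synthetic, generated with a known true slope.
random.seed(1)
TRUE_ALPHA = 0.873
xs = [random.uniform(-0.05, 0.05) for _ in range(2000)]       # Delta T/<T>
ys = [TRUE_ALPHA * x + random.gauss(0.0, 0.005) for x in xs]  # Delta R/<R>

slope = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
```

With $\sim$2000 points and 0.5\% scatter, the recovered slope reproduces the input to within a few parts in a thousand, illustrating why the statistical error on $\alpha_T$ is small.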
\begin{figure}[!h] \begin{center} \includegraphics[width=0.5\textwidth]{RTCorrP} \caption[Time Series R(T)]{\label{fig:RTCorrP} A plot of $\Delta R_{\mu}/\left<R_{\mu}\right>$ as a function of $\Delta T_\mathrm{eff}/\left<T_\mathrm{eff}\right>$ for single muons, fit by a line with the y-intercept fixed at 0. The fit has a $\chi^2/N_{DoF} = 1959/1797$, and the slope is $ \alpha_T~=~0.873~\pm~0.009$ } \end{center} \end{figure} The effects of systematic uncertainties were evaluated by modifying parameters and recalculating $\alpha_T$. Table~\ref{tab:errors} shows the difference in calculated $\alpha_T$ for the modified parameters. The largest systematic errors are: a) the $\pm$~0.06 uncertainty in meson production ratio~\cite{Barr:2006it}; b) the \unit[$\pm0.31$]{K} uncertainty in mean effective temperature, estimated by comparing ECMWF temperatures at International Falls, MN, to those of the IGRA~\cite{IGRA:2006} measurements; c) the \unit[$\pm$0.14]{TeV} uncertainty in muon threshold energy, estimated from uncertainties in the rock overburden above the far detector. To estimate this uncertainty, the rock map was adjusted up by 10\% and $\left<E_\mathrm{th}\cos\theta \right>$ was calculated, then the rock map was adjusted down by 10\% and $\left<E_\mathrm{th}\cos\theta \right>$ was again recalculated. The uncertainty was then calculated from the difference between $\left<E_\mathrm{th}\cos\theta \right>$ and these adjusted values. 
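The quadrature combination of the individual $\alpha_T$ shifts listed in Table~\ref{tab:errors} can be checked directly (values copied from the table):

```python
import math

# Individual shifts in alpha_T from the experimental systematics table,
# combined in quadrature; the total should reproduce the quoted 0.010.
shifts = {
    "K/pi ratio":       0.007,
    "<T_eff>":          0.0051,
    "<E_th cos theta>": 0.0048,
    "B1_K":             0.00046,
    "B1_pi":            0.000063,
}
total_syst = math.sqrt(sum(v * v for v in shifts.values()))  # ~0.010
```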
\begin{table}[!h] \caption {Systematic errors on the experimental parameter inputs to $\alpha_T$.} \label{tab:errors} \begin{tabular}{l l} \hline\hline \hspace*{15pt}\textbf{Parameter} & $\Delta \alpha_T$ \hspace*{15pt} \\\hline \hspace*{15pt}meson production ratio, $r_{K/\pi}$ = 0.149$\pm$0.06~\cite{Barr:2006it} \hspace*{15pt} & 0.007 \\ \hspace*{15pt}mean effective temperature, $\left<T_\mathrm{eff} \right>$= \unit[221.93$\pm$0.31]{K} \hspace*{5pt} & 0.0051 \\ \hspace*{15pt}threshold energy, $\left<E_\mathrm{th} \cos \theta \right>$=\unit[0.795$\pm$0.14]{TeV} & 0.0048\\ \hspace*{15pt}kaon constant, $B^1_{K} $~=~1.740 $\pm$ 0.028 & 0.00046 \\ \hspace*{15pt}pion constant, $B^1_{\pi} $~=~1.460 $\pm$ 0.007 & 0.000063 \\ \hline \hspace*{0.75cm}\textbf{Total} & \textbf{0.010} \\\hline\hline \end{tabular} \end{table} These systematic errors were added in quadrature and are included with the error from the linear fit to obtain the experimental value of $\alpha_T~=~0.873~\pm~0.009$~(stat.)~$\pm~0.010$~(syst.). \subsection{Charge Separated}\label{sub:chargesep} To obtain a sample of events with well-measured charge sign, further selection requirements were applied to the length and radius of curvature of muon tracks. These cuts, taken from previous investigations of the muon charge ratio at MINOS~\cite{Adamson:2007ww}, have the effect of shifting the energy distribution at Earth's surface of the selected muon sample toward lower energies. In all, 5.7\% of the data set survived the cuts for both the forward and reverse field detector configurations. For the charge-separated samples, linear regressions yielded effective temperature coefficients, $\alpha_T(\mu^+)~=~0.79~\pm~0.05$ and $\alpha_T(\mu^-)~=~0.77~\pm~0.06$ with $\chi^2$/$N_{DoF}$ of 1933/1758 and 1688/1751 respectively. These numbers are consistent with each other, so there is no measurable difference between the temperature effect on $\mu^+$ and $\mu^-$.
The value of the charge-separated $\alpha_T$ is expected to be smaller than the previous $\alpha_T$ with no charge separation because the selection cuts change the energy distribution over which the integration is performed to calculate $\alpha_T$. This can be seen in Fig.~\ref{fig:Spectra}, with the most dramatic difference between the all-muon and charge-separated distributions of $E_{\mu}\cos\theta$. This difference could produce the systematic offset observed between these values, and is discussed further in the next section. \section{Discussion} \subsection{Predicted $\alpha_T$}\label{sub:theoretical_alpha} The theoretical prediction of $\alpha_T$ can be written as~\cite{Barrett:1952}: \begin{equation}\label{eq:diffalpha} \alpha_T ~=~- \frac{E_\mathrm{th}}{I^0_{\mu}}\frac{\partial I_{\mu}}{\partial E_\mathrm{th}} - \gamma \end{equation} Using the differential muon intensity~\cite{Gaisser:1990vg}, \begin{eqnarray}\label{eq:energyspectrum} \frac{dI_{\mu}}{dE_{\mu}} &=& \int_0^{\infty}\mathcal{P}_{\mu}(E,X)dX \simeq A \times E^{-(\gamma+1)} \nonumber \\ &\times&\left(\frac{1}{1+1.1E_{\mu}\cos \theta/\epsilon_{\pi}}+\frac{0.38r_{K/\pi}}{1+1.1E_{\mu}\cos \theta/\epsilon_{K}}\right), \end{eqnarray} and the MACRO approximation for the muon intensity~\cite{Ambrosio:1997tc}, the prediction for $\alpha_T$ can be calculated: \begin{eqnarray}\label{eq:thalpha} \alpha_T &=& \frac{1}{D_{\pi}} \frac{1/\epsilon_{K}+A^1_K(D_{\pi}/D_{K})^2 /\epsilon_{\pi} } { 1/\epsilon_{K} + A^1_K(D_{\pi}/D_{K})/\epsilon_{\pi} } \end{eqnarray} where \begin{equation}\label{eq:dpik} D_{\pi,K}~=~\frac{\gamma}{\gamma+1} \frac{ \epsilon_{\pi,K}}{1.1 E_\mathrm{th} \cos \theta } + 1. \end{equation} Note that this can be reduced to MACRO's previously published expression $\left( \alpha_T \right)_{\pi}$~\cite{Ambrosio:1997tc} by setting $A^1_K=0$ (no kaon contribution). $A^1_{K}~=~0.38 \cdot r_{K/\pi}$ is the same as in Sec.~\ref{sec:expint}.
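Eq.~\ref{eq:thalpha} can be evaluated directly at the median $\left<E_\mathrm{th}\cos\theta\right>$. This single-point evaluation, sketched below, gives a value near the Monte Carlo average of 0.864 quoted later, though it does not replace the full average over the muon sample.

```python
# Single-point evaluation of the kaon-inclusive alpha_T prediction
# at the median E_th*cos(theta), using the Table I parameter values.
GAMMA = 1.7        # muon spectral index
EPS_PI = 0.114     # TeV, pion critical energy
EPS_K = 0.851      # TeV, kaon critical energy
ECOS = 0.795       # TeV, median E_th*cos(theta)
R_KPI = 0.149      # atmospheric K/pi ratio
A1_K = 0.38 * R_KPI

def D(eps):
    # D_{pi,K} factor from the expression above
    return GAMMA / (GAMMA + 1.0) * eps / (1.1 * ECOS) + 1.0

def alpha_T_theory():
    # Kaon-inclusive theoretical alpha_T
    d_pi, d_k = D(EPS_PI), D(EPS_K)
    num = 1.0 / EPS_K + A1_K * (d_pi / d_k) ** 2 / EPS_PI
    den = 1.0 / EPS_K + A1_K * (d_pi / d_k) / EPS_PI
    return num / (d_pi * den)

alpha = alpha_T_theory()   # ~0.86 at the median energy
```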
A numerical integration using a Monte Carlo method was performed to find the predicted value of the seasonal effect coefficient, $\left< \alpha_T \right>_\mathrm{p}$, for the far detector. A set of muons was generated by drawing values of $E_{\mu}$ and $\cos\theta$ separately from the differential intensity of muons at the surface, calculated in~\cite{Gaisser:1990vg}. A random azimuthal angle, $\phi$, was assigned to each event and combined with $\cos\theta$ and the Soudan rock overburden map~\cite{Adamson:2007ww} to find the slant depth, $S(\cos\theta, \phi)$, of the event. This was converted into the corresponding threshold energy, $E_\mathrm{th}$, required for a muon on the surface to propagate to the far detector. Events satisfying $E_{\mu} > E_\mathrm{th}$ were retained, and the mean value of $\alpha_T$ was found for a sample of 10,000 events, giving $\left<\alpha_T\right>_\mathrm{p}~=~0.864~\pm~0.024$ for MINOS. When this calculation was performed using the lower energy charge-separated energy spectrum, the result was an $\left<\alpha_T\right>_\mathrm{p}$ value lower by 0.015. This is most clearly seen in Eq.~\ref{eq:thalpha}, which is dominated by the leading $1/D_{\pi}$ term. As $E_\mathrm{th}\cos\theta$ increases, $D_{\pi}$ goes to one. Any selection that reduces the $E_\mathrm{th}\cos\theta$ distribution will then reduce the expected $\alpha_T$. The systematic uncertainty on $\left<\alpha_T\right>_\mathrm{p}$ was found by modifying the input parameters and recalculating $\alpha_T$. The dominant contributions were from: a) the $\pm$~0.06 uncertainty in meson production ratio; b) the $\pm$~10\% uncertainty in the rock map\footnote[1]{ The rock map is not a determination of the slant depth by geophysical means. It was created by measuring the muon flux coming from a particular solid angle region on the sky and then normalizing to the All-world Crouch underground muon intensity curve~\cite{Crouch:1987}.
This was done with both Soudan~2 data~\cite{Kasahara:1997} and with MINOS data~\cite{Adamson:2007ww}, and these calculations were shown to agree to within 10\%. Average cosmic ray muon fluxes, like those determined here and in~\cite{Adamson:2007ww}, can be obtained using this method, although in any particular direction the rock map can be much different from what was calculated (e.g., in the direction of iron veins).}; c) the $\pm$~0.1 uncertainty in muon spectral index; d) the \unit[$\pm$~0.014]{TeV} uncertainty in kaon critical energy; and e) the \unit[$\pm$~0.003]{TeV} uncertainty in pion critical energy. These uncertainties are summarized in Table~\ref{tab:therrors}. \begin{table}[!h] \caption {Systematic errors on the theoretical parameter inputs to $\alpha_T$.} \label{tab:therrors} \centering \begin{tabular}{l l} \hline\hline \hspace*{15pt}\textbf{Parameter} & $\Delta \alpha_T $ \hspace*{15pt} \\\hline \hspace*{15pt}meson production ratio, K/$\pi$ = 0.149$\pm$0.06~\cite{Barr:2006it} \hspace*{15pt} & 0.020 \\ \hspace*{15pt}rock map uncertainty $\pm10\%$ &0.013\\ \hspace*{15pt}muon spectral index, $\gamma$= 1.7 $\pm$ 0.1 & 0.0031 \\ \hspace*{15pt}kaon critical energy, $\epsilon_{K}$=\unit[0.851$\pm$ 0.014]{TeV} \hspace*{20pt} & 0.0014\\ \hspace*{15pt}pion critical energy, $\epsilon_{\pi}$=\unit[0.114$\pm$0.003]{TeV} & 0.0002 \\ \hline \hspace*{0.75cm}\textbf{Theoretical Total} &\textbf{0.024} \\ \hline \hline \end{tabular} \end{table} \begin{figure}[h] \begin{center} \includegraphics[width=0.5\textwidth]{alphaGlobal} \caption{\label{fig:alphaGlobal} The theoretical prediction for $\alpha_T$ as a function of detector depth. The dashed (top) curve is the prediction using the pion-only model (of MACRO) and the dotted (bottom) curve is the prediction using a kaon-only model. The solid (middle) curve is the new prediction including both K and $\pi$.
These curves are illustrative only, as the definition of effective temperature used to calculate the experimental values also depends on the K/$\pi$ ratio. The data from other experiments are shown for comparison only, and are from Barrett~1,~2~\cite{Barrett:1952}, AMANDA~\cite{Bouchta:1999kg}, MACRO~\cite{Ambrosio:2002db}, Torino~\cite{Cini:1967}, Sherman~\cite{Sherman:1954}, Hobart~\cite{Fenton:1961} and Baksan~\cite{Andreyev:1987}.} \end{center} \end{figure} Fig.~\ref{fig:alphaGlobal} shows effective temperature coefficients from MINOS and other underground experiments, including those of the MACRO survey~\cite{Ambrosio:1997tc}, as a function of detector depth. The MINOS and Sherman~\cite{Sherman:1954} effective temperature coefficients shown in Fig.~\ref{fig:alphaGlobal} were calculated using Eq.~\ref{eq:teff_exp}. The other experimental data points are taken from the MACRO survey~\cite{Ambrosio:1997tc} and were calculated using a definition which excluded the contributions from kaons and were limited by temperature measurements up to \unit[20]{g/cm$^2$}; when the MINOS result is recalculated with this definition the effective temperature coefficient decreases to $\alpha_T$ = 0.835. To compare the experimental values with the theoretical model, Eq.~\ref{eq:thalpha}, the expected effective temperature coefficient as a function of depth was calculated using the numerical integration method outlined earlier, using standard rock and a flat overburden, and is shown in Fig.~\ref{fig:alphaGlobal} as the solid line. There is qualitative agreement between the prediction and the experimentally measured values, but quantitative comparisons would require recalculating the experimental values using the kaon-inclusive definition of effective temperature. The dashed and dotted lines in Fig.~\ref{fig:alphaGlobal} show the effective temperature dependence for the extreme pion-only and kaon-only predictions.
Fig.~\ref{fig:alphaGlobal} is illustrative only, as the dependence of the experimentally measured effective temperature coefficient on the input K/$\pi$ ratio is not explicitly shown. \subsection{Measurement of Atmospheric K/$\pi$ Ratio}\label{sub:kpi} The uncertainty on the atmospheric K/$\pi$ ratio in the current cosmic ray flux models is of order \unit[40]{\%}~\cite{Barr:2006it}. There has not been a measurement of this ratio with cosmic rays. Previous measurements have been made at accelerators for p+p collisions~\cite{Rossi:1974if}, Au+Au collisions~\cite{Adler:2002wn}, Pb+Pb collisions~\cite{Afanasiev:2002mx,Alt:2005zq} and p+$\bar{\mathrm{p}}$ collisions~\cite{Alexopoulos:1993wt}. Many other older measurements are summarized in~\cite{Gazdzicki:1995zs}. The experimental and theoretical values of $\alpha_{T}$ can be combined to give a new measurement of the K/$\pi$ ratio for the reaction $p+A_\mathrm{atm}$, with $E_p\gtrsim$\unit[7]{TeV}. The threshold muon surface energy is \unit[$E_\mathrm{th}$~=~0.73]{TeV} and the median muon surface energy is $\left<E_{\mu}\right>$~=~\unit[1.15]{TeV}. On average, the muon energy is one tenth the energy of its parent primary. The theoretical $\alpha_{T}$ depends directly on the K/$\pi$ ratio, as a consequence of the different interaction and decay properties of kaons and pions in the atmosphere. Since kaons and pions have different critical energies and attenuation lengths, the effective temperature also depends on the K/$\pi$ ratio, and therefore the experimental $\alpha_{T}$ is a weak function of the K/$\pi$ ratio. By plotting the experimental and theoretical values of $\alpha_{T}$ as functions of the K/$\pi$ ratio and finding the intersection of the two curves, a measurement of the K/$\pi$ ratio can be obtained. Fig.~\ref{fig:alphaTKPi} shows the experimental and theoretical values of $\alpha_{T}$ as a function of the K/$\pi$ ratio for the MINOS data.
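The intersection method just described amounts to a one-dimensional root find. The sketch below treats the experimental $\alpha_T$ as the constant 0.873, ignoring its weak K/$\pi$ dependence through $T_\mathrm{eff}$, and evaluates the theoretical curve at the median $E_\mathrm{th}\cos\theta$ only, so the crossing it finds is illustrative and differs somewhat from the published $0.12^{+0.07}_{-0.05}$, which averages over the full muon sample.

```python
# Illustrative extraction of K/pi from the crossing of the two alpha_T
# curves. The experimental alpha_T is approximated as a constant and the
# theoretical curve is evaluated at a single energy, so the crossing here
# is a simplified stand-in for the published chi^2 fit.
GAMMA, EPS_PI, EPS_K, ECOS = 1.7, 0.114, 0.851, 0.795
ALPHA_EXP = 0.873

def alpha_theory(r_kpi):
    # Kaon-inclusive theoretical alpha_T as a function of the K/pi ratio
    a1k = 0.38 * r_kpi
    d = lambda eps: GAMMA / (GAMMA + 1.0) * eps / (1.1 * ECOS) + 1.0
    d_pi, d_k = d(EPS_PI), d(EPS_K)
    num = 1.0 / EPS_K + a1k * (d_pi / d_k) ** 2 / EPS_PI
    den = 1.0 / EPS_K + a1k * (d_pi / d_k) / EPS_PI
    return num / (d_pi * den)

# alpha_theory decreases monotonically with r_kpi, so bisect for the crossing.
lo, hi = 0.0, 0.3
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if alpha_theory(mid) > ALPHA_EXP:
        lo = mid
    else:
        hi = mid
r_star = 0.5 * (lo + hi)   # toy K/pi crossing, in the neighborhood of 0.1
```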
\begin{figure}[!h] \begin{center} \includegraphics[width=.5\textwidth]{AlphaTfKPi} \end{center} \caption[ $\alpha_T$ as a Function of the K/$\pi$ ratio]{\label{fig:alphaTKPi} The MINOS experimental $\alpha_T$ as a function of the K/$\pi$ ratio (dot-dash line), with its error given by the cross-hatched region, on the same axes as the theoretical $\alpha_T$ as a function of the K/$\pi$ ratio (dashed line), with its error given by the hatched region. The error on the experimental $\alpha_T$ (from Table~\ref{tab:errors} $E_\mathrm{th} \cos \theta$, $B^1_{\pi,K}$ and $\left<T_\mathrm{eff} \right>$) plus statistical error is $\pm~0.012$, and the theoretical $\alpha_T$ error (from $\epsilon_{\pi,K}$ and the rock map, Table~\ref{tab:therrors}) is $\pm~0.013$ at the best fit point. The intersection is at K/$\pi$=$0.12^{+0.07}_{-0.05}$. The solid line denotes the 1$\sigma$ contour around the best fit.} \end{figure} The errors in the experimental and theoretical values of $\alpha_T$ are taken to be $\pm$~0.012 and $\pm$~0.013 respectively, obtained by combining the statistical errors in quadrature with the systematic errors in Tables~\ref{tab:errors} and~\ref{tab:therrors}, but omitting the error in the K/$\pi$ ratio in each case. The error on the theoretical value of $\alpha_T$ grows with increasing K/$\pi$ ratio because $\epsilon_K$ has a larger uncertainty than $\epsilon_{\pi}$, so a larger contribution from kaons introduces more uncertainty. The intersection of the two curves occurs at K/$\pi$~=~$0.12^{+0.07}_{-0.05}$. The uncertainty is estimated by assuming Gaussian errors for the theoretical and experimental values of $\alpha_T$ and performing a $\chi^2$ minimization to determine the $\Delta \chi^2 = 1$ contour that encompasses the best fit point. Previous measurements of the K/$\pi$ ratio do not directly compare to this indirect measurement.
Nevertheless, the central value of MINOS's measurement is consistent with the collider-based direct measurements, although the indirect error bars are larger than those on the direct measurements. A comparison of this measurement to other measurements is shown in Fig.~\ref{fig:kpicomp}. \begin{figure}[!h] \begin{center} \includegraphics[width=.5\textwidth]{kpiGraph} \end{center} \caption[ Summary of K/$\pi$ Measurements]{\label{fig:kpicomp} A compilation of selected measurements of $K/\pi$ for various center of mass energies. The STAR value was from Au+Au collisions at RHIC~\cite{Adler:2002wn}, the NA49 measurement was from Pb+Pb collisions at SPS~\cite{Afanasiev:2002mx,Alt:2005zq}, and the E735 measurement was from p+$\bar{\mathrm{p}}$ collisions at the Tevatron~\cite{Alexopoulos:1993wt}. } \end{figure} Only the MINOS result is for a reaction where the interacting particles do not have equivalent energy in the laboratory frame. Nevertheless, they are all presented on the same axes for a broad overview. The central value of MINOS's indirect cosmic ray-based $K/\pi$ measurement is consistent with the collider-based direct measurements, and the associated error bars span the dispersion in those direct measurements. \section{Conclusions} A five year sample of 67.32 million cosmic ray induced muons has been collected by the MINOS far detector and daily rate fluctuations have been compared to daily fluctuations in atmospheric temperature. These distributions were shown to be highly correlated, with a correlation coefficient of 0.90. The constant of proportionality relating the two distributions, $\alpha_T$, was found to be 0.873~$\pm$~0.009~(stat.)~$\pm$~0.010~(syst.). This value is in good agreement with the theoretical expectation of $\left<\alpha_T\right>~=~0.864~\pm~0.024$. A measurement of the temperature dependence of the rate of $\mu^+$ separate from $\mu^-$ was performed for the first time.
There is no statistically significant difference between $\alpha_T(\mu^+)$ and $\alpha_T(\mu^-)$. The experimental value of $\alpha_T$ for the combined muon sample has the lowest uncertainty of any such measurement. While other experiments have estimated the effect of atmospheric temperature on kaon induced muons~\cite{Barrett:1952,Ambrosio:1997tc}, this is the first result to quantify the kaon-inclusive effective temperature coefficient. The new kaon-inclusive model fits the MINOS far detector data better than the pion only model~\cite{Ambrosio:1997tc} and suggests a measurement of the atmospheric K/$\pi$ ratio. Applying the differing temperature variations of kaon and pion decay to the seasonal variations analysis allowed the first measurement of the atmospheric K/$\pi$ ratio for $E_{p}\gtrsim$\unit[7]{TeV}. It was found to be K/$\pi$~=~$0.12^{+0.07}_{-0.05}$. \section{Acknowledgments} We thank the Fermilab staff and the technical staffs of the participating institutions for their vital contributions. This work was supported by the U.S. Department of Energy, the U.K. Science and Technologies Facilities Council, the U.S. National Science Foundation, the State and University of Minnesota, the Office of Special Accounts for Research Grants of the University of Athens, Greece, FAPESP (Fundacao de Amparo a Pesquisa do Estado de Sao Paulo) and CNPq (Conselho Nacional de Desenvolvimento Cientifico e Tecnologico) in Brazil. We gratefully acknowledge the Minnesota Department of Natural Resources for their assistance and for allowing us access to the facilities of the Soudan Underground Mine State Park and the crew of the Soudan Underground Physics Laboratory for their tireless work in building and operating the MINOS far detector. We also acknowledge the BADC and the ECMWF for providing the environmental data for this project.
\section{Introduction} In a recent paper \cite{GGL08}, Giacosa, Gutsche, and Lyubovitskij (GGL) studied the two-photon decay width of the $\sigma$ meson, alias $f_0(600)$ \cite{PDG06}, based on the presupposition that it is a $q\bar{q}$ state. They employed two simple perturbative sigma models, one purely local, comprising $\sigma$, $\pi$, quark and antiquark fields, and the other nonlocal, with only $\sigma$, $q$, and $\bar{q}$, besides an extended covariant vertex function. The principal result of their work was that, in contrast with what is generally assumed, a $q\bar{q}$ assignment for the $\sigma$ should lead to a width $\Gamma_{\sigma\to\gamma\gamma}$ much smaller than the recently reported value of $(4.1\pm0.3)$ keV resulting from an analysis by Pennington \cite{P06}, as well as the 3 values given in the 2006 PDG tables \cite{PDG06}, and probably even less than 1 keV. Therefore, GGL concluded that, if the large experimental $\gamma\gamma$ width is confirmed, a quarkonium interpretation of the $\sigma$ is not favored, \em ``contrary to usual belief.'' \em As an explanation for their very small $\Gamma_{\sigma\to\gamma\gamma}$\ prediction, GGL argued that a term in the quark-triangle loop diagram, necessary for gauge invariance, largely cancels the lead term, thus resulting in a small total amplitude. Moreover, GGL claimed that the former term is \em ``often neglected'', \em including in previous work of ours and our co-authors \cite{BKRS02,KBRS02,DLS99,SKR06}. In this Comment, we shall show that GGL are mistaken on several points. First of all, we have \em not \em \/unduly neglected any term in the evaluation of the quark triangle diagram in Refs.~\cite{BKRS02,KBRS02,DLS99,SKR06}. When we disregarded the term in question, this was fully justified, since the term was zero or negligible.
Secondly, the small $\Gamma_{\sigma\to\gamma\gamma}$\ value obtained by GGL is a consequence of a very low $\sigma$ mass, in combination with a relatively large constituent quark mass, at least in the local case. For the nonlocal Lagrangian, their tiny $\Gamma_{\sigma\to\gamma\gamma}$\ value is rather an indication for the inadequacy of the Lagrangian itself. Thirdly, we demonstrate, by explicit calculation, how important meson-loop contributions are, which is in principle admitted by GGL, but not concretized. In Sec.~\ref{sigma} of this Comment, we study in detail the two-photon width of the $\sigma$ meson, in the context of the quark-level linear $\sigma$ model (QLL$\sigma$M) \cite{DS95}, showing that a good agreement with data is achieved. In Sec.~\ref{cncls} we present our conclusions. \section{Two-photon width of the $\sigma$ in the QLL$\sigma$M} \label{sigma} Given the scalar amplitude structure \cite{DLS99,KBRS02,DEMSB94} ${\cal M}\epsilon_\nu(k')\epsilon_\mu(k) \left(g^{\mu\nu}k'\cdot k-k'^\mu k^\nu\right)$, the rate for the decay of a scalar meson $S$ into two photons reads \begin{equation} \Gamma(S\to\gamma\gamma)=\frac{m^3_S|{\cal M}_{S\to\gamma\gamma}|^2}{64\pi}\;. 
\label{stogg} \end{equation} If one assumes, as GGL do, that the $\sigma$ is a scalar $q\bar{q}$ state, then the principal contribution to the amplitude ${\cal M}_{\sigma\to\gamma\gamma}$ comes from the up and down quark triangle diagrams (see e.g.\ FIG.~1 in Ref.~\cite{GGL08}), yielding (with $N_c=3$) \\[-4mm] \begin{equation} {\cal M}^{n\bar{n}}_{\sigma\to\gamma\gamma} \; = \; \frac{5\alpha}{3\pi f_\pi} 2\xi_n[2+(1-4\xi_n)I(\xi_n)] \; , \label{mnsig} \end{equation} where $\alpha=e^2/4\pi$, $\xi_n=m_n^2/m_\sigma^2$ ($n$ stands for $u$ or $d$), and $I(\xi)$ is the triangle loop integral given by \begin{equation} I(\xi) \left\{ \begin{array}{lll} =\;\displaystyle\frac{\pi^2}{2}-2\log^2\left[\sqrt{\frac{1}{4\xi}}+ \sqrt{\frac{1}{4\xi}-1}\:\right] + \\[4mm] \displaystyle\;\;\;\;\;\; 2\pi i\log\left[\sqrt{\frac{1}{4\xi}}+\sqrt{\frac{1}{4\xi}-1}\:\right] \;(\xi\leq0.25)\:,\\[5mm] =\;\displaystyle2\arcsin^2\left[\sqrt{\frac{1}{4\xi}}\:\right] \;\;(\xi\geq0.25)\;. \end{array} \right. \label{ixi} \end{equation} These Eqs.~(\ref{mnsig}) and (\ref{ixi}) exactly correspond to Eqs.~(2) and (4) in Ref.~\cite{GGL08}, with the proviso that GGL defined the $\sigma$-$\bar{q}$-$q$ coupling in their Lagrangian as $g_\sigma/\sqrt{2}$ instead of our QLL$\sigma$M\ coupling $g$, the latter being related to $f_\pi$ above via the Goldberger-Treiman relation $m_q=f_\pi g$ \cite{BKRS02,KBRS02,DLS99,SKR06}. Ignoring for the moment possible meson-loop contributions as well as an $s\bar{s}$ component in the $\sigma$, we can use Eq.~(\ref{mnsig}) to calculate $\Gamma_{\sigma\to\gamma\gamma}$, for different $\sigma$ and quark masses. Also, we can check what the importance is of the term involving $I(\xi)$. However, let us first deal with the allegation by GGL that we had erroneously neglected the $I(\xi)$ term in previous work. 
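In fact, a one-line check of Eqs.~(\ref{mnsig}) and (\ref{ixi}) already shows that in the NJL limit $m_\sigma=2m_q$, i.e.\ $\xi_n=1/4$, the $I(\xi_n)$ term drops out identically, even though $I(1/4)=\pi^2/2$ is finite:

```latex
% NJL limit m_sigma = 2 m_q, i.e. xi_n = 1/4: the coefficient of I(xi_n) vanishes
\begin{equation*}
\bigl(1-4\xi_n\bigr)I(\xi_n)\Big|_{\xi_n=1/4}=0
\quad\Longrightarrow\quad
{\cal M}^{n\bar{n}}_{\sigma\to\gamma\gamma}
=\frac{5\alpha}{3\pi f_\pi}\cdot 2\cdot\frac{1}{4}\cdot\bigl[2+0\bigr]
=\frac{5\alpha}{3\pi f_\pi}\,.
\end{equation*}
```

Dropping the term in that limit is therefore exact, not an approximation.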
Well, in Ref.~\cite{BKRS02} we simply worked in the, perfectly well-defined, Nambu--Jona-Lasinio (NJL) \cite{NJL61} limit ($m_\sigma=2m_q$) of the QLL$\sigma$M, in which the term in question vanishes identically, using quite reasonable $\sigma$ and quark masses of 630~MeV and 315~MeV, respectively. The resulting $\Gamma_{\sigma\to\gamma\gamma}$, ignoring meson-loops, would then be 2.18~keV. But accounting for an estimate of the pion-loop contribution as well yielded the prediction of 3.76 keV \cite{BKRS02}, in good agreement with experiment, then and now. In Ref.~\cite{SKR06}, Eq.~(101), again the NJL limit of the QLL$\sigma$M\ was used, but now also including an estimate for the kaon loop, besides the pion loop, leading to a slightly smaller result, but still very much larger than any of GGL's predictions (also see Ref.~\cite{SRKB04}). Finally, in Refs.~\cite{KBRS02,DLS99} $\Gamma_{\sigma\to\gamma\gamma}$\ was not even considered, thus making the critique by GGL completely void. Moreover, note that in Ref.~\cite{KBRS02} we did use the full expressions of Eqs.~(\ref{mnsig}) and (\ref{ixi}) above when necessary, namely in the case of the $f_0$(1370) meson. Let us now carry out a more detailed analysis of $\Gamma_{\sigma\to\gamma\gamma}$\ in a QLL$\sigma$M\ setting, employing Eqs.~(\ref{mnsig}) and (\ref{ixi}). Working beyond the chiral limit (CL), we may take the NJL value $m_\sigma=675$~MeV for $m_n=337.5$ MeV \cite{S08}, where $m_n$ stands for the nonstrange (up or down) quark mass. Still neglecting $n\bar{n}$-$s\bar{s}$ mixing and meson loops, this gives $\Gamma^{q\bar{q}}_{\sigma\to\gamma\gamma}$$=2.68$~keV. Taking a somewhat more realistic value of $m_\sigma=666$~MeV \cite{S08}, away from the CL, the latter width gets reduced to 2.44 keV. 
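These quark-loop values are straightforward to reproduce numerically from Eqs.~(\ref{stogg}), (\ref{mnsig}), and (\ref{ixi}); a sketch, assuming $f_\pi\approx92.4$~MeV for the pion decay constant (our input here, not stated explicitly above; only $\xi\geq0.25$ occurs, so the real branch of $I(\xi)$ suffices):

```python
import math

# Quark-triangle two-photon width of a pure n-nbar sigma, from the width
# formula and triangle amplitude above; no mixing, no meson loops.
# f_pi = 92.4 MeV is an assumed input, alpha = 1/137.036.
ALPHA = 1.0 / 137.036
F_PI = 92.4                       # MeV (assumption)

def I(xi):
    """Triangle loop integral I(xi) for xi >= 0.25 (real branch)."""
    return 2.0 * math.asin(math.sqrt(1.0 / (4.0 * xi))) ** 2

def width_keV(m_sigma, m_q):
    """Gamma(sigma -> gamma gamma) in keV for masses in MeV."""
    xi = (m_q / m_sigma) ** 2
    amp = (5.0 * ALPHA / (3.0 * math.pi * F_PI)) \
          * 2.0 * xi * (2.0 + (1.0 - 4.0 * xi) * I(xi))
    return m_sigma ** 3 * amp ** 2 / (64.0 * math.pi) * 1e3  # MeV -> keV

print(width_keV(630.0, 315.0))    # NJL limit of Ref. [5]: ~2.18 keV
print(width_keV(675.0, 337.5))    # NJL value used above:  ~2.68 keV
print(width_keV(666.0, 337.5))    # away from the CL:      ~2.44 keV
```

With this choice of $f_\pi$ the printed values reproduce the 2.18, 2.68, and 2.44~keV quoted above to the stated precision.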
If we now also allow for the admixture of a small $s\bar{s}$ component in the $\sigma$, with a nonstrange-strange mixing angle of, say, $-10.1^\circ$ \cite{S08}, then we get $\Gamma^{q\bar{q}}_{\sigma\to\gamma\gamma}$$=2.49$~keV, for the often used \cite{SKR06} QLL$\sigma$M\ quark masses $m_n=337.5$~MeV and $m_s=486$~MeV. Note that this $s\bar{s}$ component, with amplitude \begin{equation} {\cal M}^{s\bar{s}}_{\sigma\to\gamma\gamma} \; = \; \frac{\sqrt{2}\alpha g} {3\pi m_s}2\xi_s[2+(1-4\xi_s)I(\xi_s)]\;, \label{mssig} \end{equation} contributes with a weight factor of only $\sqrt{2}\alpha m_n/3\pi f_\pi m_s$ (using the GT relation $m_n=f_\pi g$), as compared to $5\alpha /3\pi f_\pi$ from Eq.~(\ref{mnsig}) in the $n\bar{n}$ case, since the charge of a strange quark is $-1/3$ \cite{KBRS02}. Next we are going to add meson-loop contributions as well. Now, in the framework of the QLL$\sigma$M, loops with charged mesons that couple to the $\sigma$ include those with pions and kaons, as well as those with the scalar mesons $\kappa$(800) and $a_0$(980). The expression for a gauge-invariant meson-loop contribution to the two-photon amplitude mainly differs from the quark triangle in Eq.~(\ref{mnsig}) because of the presence of a seagull graph (see e.g.\ Ref.~\cite{DEMSB94}, first paper), yielding a total amplitude \begin{equation} {\cal M}^{MM}_{\sigma\to\gamma\gamma}\;=\;-\frac{2g'\alpha}{\pi m^2_M} \left[-\frac{1}{2}+\xi I(\xi)\right] \; , \;\;\; \xi=\frac{m^2_M}{m^2_\sigma} \; , \label{mloop} \end{equation} where the minus sign stems from the opposite statistics with respect to the quark-loop case, and $g'$ is the cubic QLL$\sigma$M\ meson coupling. 
For the meson loops pertinent to the $\sigma$, we shall need the 3-meson couplings \cite{DS95,KBRS02,SKR06} \begin{equation} \begin{array}{lcl} g_{\sigma_{n\bar{n}},\pi\pi} & = & \displaystyle\frac{\cos^2(\phi_S)m^2_\sigma+ \sin^2(\phi_S)m^2_{f_0(980)}-m^2_{\pi^\pm}}{2f_\pi}\;, \\[3mm] g_{\sigma_{s\bar{s}},\pi\pi} & = & 0 \;, \\[3mm] g_{\sigma_{n\bar{n}},KK} & = & \displaystyle\frac{\cos^2(\phi_S)m^2_\sigma+ \sin^2(\phi_S)m^2_{f_0(980)}-m^2_{K^\pm}}{2f_K}\; , \\[4mm] g_{\sigma_{s\bar{s}},KK} & = & \displaystyle\frac{\sin^2(\phi_S)m^2_\sigma+ \cos^2(\phi_S)m^2_{f_0(980)}-m^2_{K^\pm}}{\sqrt{2}\,f_K}\;,\\[4mm] g_{\sigma_{n\bar{n}},\kappa\kappa} & = & \displaystyle\frac{\cos^2(\phi_S) m^2_\sigma+\sin^2(\phi_S)m^2_{f_0(980)}-m^2_{\kappa}}{2\,(f_\pi-f_K)}\;,\\[4mm] g_{\sigma_{s\bar{s}},\kappa\kappa} & = & \displaystyle\frac{\sin^2(\phi_S) m^2_\sigma+\cos^2(\phi_S)m^2_{f_0(980)}-m^2_{\kappa}}{\sqrt{2}\,(f_K-f_\pi)}\;, \\[4mm] g_{\sigma_{n\bar{n}},a_0a_0} & = & 3g_{\sigma_{n\bar{n}},\pi\pi} \;, \\[3mm] g_{\sigma_{s\bar{s}},a_0a_0} & = & 0 \;, \end{array} \label{cubic} \end{equation} where $\phi_S$ is the scalar mixing angle, and $f_K=f_\pi\,(m_s/m_n+1)/2\approx1.22\,f_\pi$. The cubic coupling of the physical $\sigma$ meson to the three channels is then given by \begin{equation} g'_{\sigma,MM} \; = \; \cos(\phi_S)g_{\sigma_{n\bar{n}},MM} - \sin(\phi_S)g_{\sigma_{s\bar{s}},MM} \; . \label{sigmamm} \end{equation} Note that we neglect here small OZI-violating corrections to the QLL$\sigma$M\ three-meson couplings, just as in previous work of ours \cite{KBRS02}. Such contributions will be included in a forthcoming study. Now we are in a position to do a complete calculation of $\Gamma_{\sigma\to\gamma\gamma}$, with both quark and meson loops accounted for. Note that the imaginary part of $I(\xi$), as given by the $\xi<0.25$ case in Eq.~(\ref{ixi}), will be included for the pion-loop amplitude. 
If we choose again a scalar mixing angle of $-10.1^\circ$ and take $m_{\kappa}=800$~MeV, we obtain a total two-gamma width \begin{equation} \Gamma_{\sigma\to\gamma\gamma}^{q\bar{q}+MM} \; = \; 3.50 \; \mbox{keV} \; . \label{gstgg} \end{equation} This rate corresponds to a total amplitude modulus $|{\cal M}|=4.88\times10^{-2}$ GeV$^{-1}$, which can be decomposed in terms of the partial quark- and meson-loop amplitudes \begin{equation} \begin{array}{rcl} \Re\mbox{e}\,{\cal M}_{n\bar{n}}&=&4.01\times10^{-2}\;\mbox{GeV}^{-1}\;,\\[1mm] \Re\mbox{e}\,{\cal M}_{s\bar{s}}&=&1.09\times10^{-2}\;\mbox{GeV}^{-1}\;,\\[1mm] {\cal M}_{\pi\pi}&=&(1.19 - i 1.03) \times 10^{-2}\;\mbox{GeV}^{-1}\;,\\[1mm] {\cal M}_{KK} & = & -1.83 \times 10^{-3} \; \mbox{GeV}^{-1} \; , \\[1mm] {\cal M}_{\kappa\kappa} &=&-2.06 \times 10^{-3} \; \mbox{GeV}^{-1}\;,\\[1mm] {\cal M}_{a_0a_0} & = & -1.50 \times 10^{-3} \; \mbox{GeV}^{-1} \; . \end{array} \label{amplitudes} \end{equation} Note that here the relative sign between quark and meson loops has already been included. Also observe that the kaon, $\kappa$, and $a_0(980)$ loops reduce the contribution of the pion loop, so that the net effect of the meson loops on the two-photon width is about $+40\%$. Taking a somewhat more negative value for the scalar mixing angle, e.g.\ $\phi_S=-18^\circ$ \cite{SKR06}, only reduces the total two-photon width to 3.39~keV. This prediction as well as the former one are fully compatible with the corresponding PDG \cite{PDG06} data, and also not at odds with Pennington's recent result \cite{P06}. In contrast, the sensitivity of $\Gamma_{\sigma\to\gamma\gamma}$\ to the $\sigma$ mass is much stronger, which is obvious from Eq.~(\ref{stogg}), relating width and amplitude via $m_\sigma$ cubed.
This can also be seen in FIG.~2 of the paper \cite{GGL08} by GGL themselves, where e.g.\ an $m_\sigma$ of 650 MeV, with $m_q=350$ MeV, would yield a $\Gamma^{q\bar{q}}_{\sigma\to\gamma\gamma}$\ of roughly 2.5~keV, in good agreement with our value of 2.44~keV above. However, by taking a very small $m_\sigma$ of 440~MeV, as GGL choose to do, one obtains a much smaller $\Gamma_{\sigma\to\gamma\gamma}$, even when meson loops are included. For instance, if we assume the $\sigma$ to be purely $n\bar{n}$ and take $m_q=250$~MeV, $\Gamma_{\sigma\to\gamma\gamma}$\ becomes 0.67~keV, even with the 3 meson-loop contributions included, which should be compared to GGL's value of 0.54~keV (see TABLE I of Ref.~\cite{GGL08}) for the pure $q\bar{q}$ case. Neglecting in this scenario the term proportional to $I(\xi)$ would indeed increase our result of 0.67~keV to 1.38~keV, but this is of course an error we have not and will not make. At this point, we also take exception to GGL's claim \em ``\ldots the results for $\Gamma_{\sigma\to\gamma\gamma}$\ at a fixed pole mass of $M_\sigma=440$~MeV as favored by recent theoretical and experimental works [16,20]'', \em where their reference no.~20 is our Ref.~\cite{PDG06}, i.e., the 2006 PDG Review of Particle Physics. It is simply false to state that the PDG favors a $\sigma$ pole mass of 440~MeV. The truth is that the PDG listings mention ``{\bf (400--1200)\boldmath{$-i$}(250--500) OUR ESTIMATE}'', for the $f_0(600)$ $T$-matrix pole (i.e., $S$-matrix pole) as a function of $\sqrt{s}$. On the other hand, the theoretical papers referred to by GGL include the Roy-equation analysis by Caprini, Colangelo, and Leutwyler \cite{CCL06}, which indeed found 441~MeV for the real part of the $\sigma$ $S$-matrix pole, besides an imaginary part of 272~MeV.
However, it is a common mistake to confuse the real part of the pole with the `mass' of a broad resonance, especially when the resonance is certainly not of a pure BW type, like e.g.\ the $\sigma$, which is strongly distorted due to the $\pi\pi$ threshold and the Adler zero not far below \cite{B03}. Notice that, in the latter analysis, the `mass' of the $\sigma$ at which the $\pi\pi$ phase shift passes through $90^\circ$ --- by definition the $K$-matrix pole --- lies at 926~MeV. This does not mean that this \em is \em \/the $\sigma$ mass, but just demonstrates the difficulty of assigning \em any \em \/specific mass to a broad non-BW resonance. Anyhow, our above choice of 666~MeV, in the context of the QLL$\sigma$M, is surely more reasonable than naively taking the real part of a pole that is already significantly lower than the `world average' \cite{PDG06,eef} of $\sigma$ poles. To conclude this section, we note that the $Z=0$ compositeness condition, discussed by GGL in the context of their nonlocal Lagrangian, is manifestly satisfied in the --- nonperturbative and selfconsistent --- QLL$\sigma$M, provided $\xi=m^2_q/m^2_\sigma\leq0.25$, with $g_\sigma$ \em not \em \/depending on $m_\sigma$. \\ \section{Conclusions} \label{cncls} In the present Comment we have shown that GGL incorrectly referred to and criticized our previous papers on the subject. Moreover, we have demonstrated, via an explicit and detailed calculation in the context of the QLL$\sigma$M, that the reported experimental values of $\Gamma_{\sigma\to\gamma\gamma}$\ give quantitative support to a $q\bar{q}$ interpretation of the $\sigma$ meson, provided that one uses a reasonable $\sigma$ mass and also includes meson-loop contributions, besides the quark loop considered by GGL. Finally, let us comment on the nonlocal Lagrangian employed by GGL besides the local one. 
Their justification was: \em ``However, the local approach is no longer applicable for values of $M_\sigma$ close to threshold, as will be evident from the discussion of the next section.'' \em Well, as already mentioned above, the QLL$\sigma$M\ is a \em local \em \/renormalizable field theory, exactly satisfying the $Z=0$ compositeness condition close to --- but below --- threshold, due to its nonperturbative and selfconsistent formulation \cite{DS95}. This condition can be rigorously described in both the QLL$\sigma$M\ and the NJL model, in terms of a log-divergent gap equation \cite{S98}. The latter can also be expressed via a four-dimensional ultraviolet cutoff $\Lambda$, resulting in a value $\Lambda\approx2.3m_q$. For a nonstrange quark mass of 337.5 MeV, this gives $\Lambda\approx750$ MeV, which is an energy scale that clearly separates the `elementary' $\sigma$ from e.g.\ the `composite' $\rho$ meson. For further details, we refer to Ref.~\cite{S98}. In contrast, GGL were probably thinking in perturbative terms when going from their local $\sigma$-model Lagrangian to the nonlocal case. In view of the numerical results of the latter model, which produces even tinier values for $\Gamma_{\sigma\to\gamma\gamma}$\ than their local approach, we are led to conclude that Nature rather disfavors a nonlocal realization of chiral symmetry than a $q\bar{q}$ interpretation of the $\sigma$ meson. This work was supported by the {\it Funda\c{c}\~{a}o para a Ci\^{e}ncia e a Tecnologia} \/of the {\it Minist\'{e}rio da Ci\^{e}ncia, Tecnologia e Ensino Superior} \/of Portugal, under contract POCI/FP/81913/2007.
\section{Introduction} Let $\kappa$ be a probability measure on a finite set $K$. We will mainly be concerned with the simple case where $K = \ns{0,1}$, where we call $\kappa(1):= \kappa(\ns{1}) \in (0,1)$ the \dff{intensity} of $\kappa$. Let $G$ be a group. A \dff{Bernoulli shift over $G$ with base $(K, \kappa)$} is the measure-preserving system $(G, K^G, \kappa^G)$, where $G$ acts on $K^G$ via $(gx)(f) = x(g^{-1}f)$ for $x \in K^G$ and $g,f \in G$. Let $\iota$ be a probability measure of lower intensity. We say that a measurable map $\phi: K^G \to K^G$ is an \dff{equivariant thinning from $\kappa$ to $\iota$} if $\phi(x)(g) \leq x(g)$ for all $x \in K^G$ and $g \in G$, the push-forward of $\kappa^G$ under $\phi$ is $\iota^G$, and $\phi$ is equivariant $\kappa^G$-almost-surely; that is, on a set of full-measure, $\phi \circ g = g \circ \phi$ for all $g \in G$. \begin{theorem} \label{group} Let $\kappa$ and $\iota$ be probability measures on $\ns{0,1}$ and $\iota$ be of lower intensity. For Bernoulli shifts over the free group of rank at least two, there exists an equivariant thinning from $\kappa$ to $\iota$. \end{theorem} Theorem \ref{group} does not hold with such generality in the case of a Bernoulli shift over an amenable group like the integers. Recall that the \dff{entropy} of a probability measure $\kappa$ on a finite set $K$ is given by $$ H(\kappa) := -\sum_{i \in K} \kappa(i) \log \kappa(i).$$ \begin{theorem}[Ball \cite{Ball}, Soo \cite{Soo2016}] \label{KS} Let $\kappa$ and $\iota$ be probability measures on $\ns{0,1}$ and $\iota$ be of lower intensity. For Bernoulli shifts over the integers, there exists an equivariant thinning from $\kappa$ to $\iota$ if and only if $H(\kappa) \geq H(\iota)$. \end{theorem} In Theorem \ref{KS}, the necessity of $H(\kappa) \geq H(\iota)$ follows easily from the classical theory of Kolmogorov-Sinai entropy \cite{MR2342699,MR0304616}, which we now recall. 
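Before recalling that theory, note that the entropy condition in Theorem~\ref{KS} is elementary to check numerically. The following sketch (with illustrative intensities) exhibits a pair where $\iota$ has lower intensity but strictly higher entropy, so that no equivariant thinning over ${\mathbb Z}$ exists, even though Theorem~\ref{group} provides one over a free group of rank at least two.

```python
import math

# Entropy of a Bernoulli(p) measure on {0,1}; the intensities 0.9 and 0.5
# below are illustrative.
def H(p):
    return -p * math.log(p) - (1 - p) * math.log(1 - p)

kappa, iota = 0.9, 0.5            # iota(1) < kappa(1): lower intensity ...
print(H(kappa), H(iota))          # ... but H(iota) = log 2 > H(kappa)
```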
Let $G$ be a group and let $\kappa$ and $\iota$ be probability measures on a finite set $K$. An equivariant map $\phi$ is a \dff{factor} from $\kappa$ to $\iota$ if the push-forward of $\kappa^G$ under $\phi$ is $\iota^G$, and is an \dff{isomorphism} if $\phi$ is a bijection and its inverse also serves as a factor from $\iota$ to $\kappa$. In the case $G = {\mathbb Z}$, Kolmogorov proved that entropy is non-increasing under factor maps; this implies the necessity of $H(\kappa) \geq H(\iota)$ in Theorem \ref{KS}. Furthermore, Sinai \cite{MR2766434} proved that there is a factor from $\kappa$ to $\iota$ if $H(\kappa) \geq H(\iota)$, and Ornstein \cite{Ornstein} proved there is an isomorphism from $\kappa$ to $\iota$ if and only if $H(\kappa) = H(\iota)$. Thus entropy is a complete invariant for Bernoulli shifts over ${\mathbb Z}$. Ornstein and Weiss \cite{MR910005} generalized these results to the case where $G$ is an amenable group. See also Keane and Smorodinsky for concrete constructions of factor maps and isomorphisms \cite{keanea, keaneb}. The sufficiency of $H(\kappa) > H(\iota)$ in Theorem \ref{KS} was first proved by Ball \cite{Ball}. The existence of an isomorphism that is also an equivariant thinning in the equal entropy case was proved by Soo \cite{Soo2016}. Let us remark that the factor maps given in standard proofs of the Sinai and Ornstein theorems will not in general be monotone; that is, they may not satisfy $\phi(x)(i )\leq x(i)$ for all $x \in \ns{0,1}^{{\mathbb Z}}$ and $i \in {\mathbb Z}$. Towards the end of their 1987 paper, Ornstein and Weiss \cite{MR910005} give a simple but remarkable example of an entropy increasing factor in the case where $G$ is the free group of rank at least two, which is further elaborated upon by Ball \cite{ball3}. It was an open question until recently whether all Bernoulli shifts over a free group of rank at least two are isomorphic. 
This question was answered negatively by Lewis Bowen \cite{Lbowen} in 2010, who proved that although entropy can increase under factor maps, in the context of a free group with rank at least two, it is still a complete isomorphism invariant. Recently, there has been much interest in studying factors in the non-amenable setting; see Russell Lyons \cite{iidtrees} for more information. Our proof of Theorem \ref{group} will make use of a variation of the Ornstein and Weiss example in Ball \cite{ball3} and a primitive version of a marker-filler type construction, in the sense of Keane and Smorodinsky \cite{keanea, keaneb}. Our construction uses randomness already present in the process in a careful way so as to mimic a construction that one would make if additional independent randomization were available. This approach was taken by Holroyd, Lyons, and Soo \cite{MR2884878}, Angel, Holroyd, and Soo \cite{MR2736350}, and Ball \cite{MR2133893} for defining equivariant thinning in the context of Poisson point processes. \section{Tools} \subsection{Coupling} Let $(A, \alpha)$ and $(B, \beta)$ be probability spaces. A \dff{coupling} of $\alpha$ and $\beta$ is a probability measure on the product space $A \times B$ which has $\alpha$ and $\beta$ as its marginals. For a random variable $X$, we will refer to the measure $\P(X \in \cdot)$ as the \dff{law} or the \dff{distribution} of $X$. If two random variables $X$ and $Y$ have the same law, we write $X \stackrel{d}{=} Y$. Similarly, a \dff{coupling} of random variables $X$ and $Y$ is a pair of random variables $(X', Y')$, where $X'$ and $Y'$ are defined on the same probability space and have the same law as $X$ and $Y$, respectively. Thus a coupling of random variables gives a coupling of the laws of the random variables. Often we will refer to the law of a pair of random variables as the \dff{joint distribution} of the random variables.
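On a finite alphabet these definitions can be checked concretely; a minimal sketch, in which the joint weights are one illustrative choice among many couplings of the same pair of laws:

```python
# A coupling of two laws alpha and beta on a finite set is a joint mass
# function whose marginals are alpha and beta; the weights are illustrative.
alpha = {0: 0.7, 1: 0.3}                      # law of X
beta = {0: 0.4, 1: 0.6}                       # law of Y
joint = {(0, 0): 0.4, (0, 1): 0.3,
         (1, 0): 0.0, (1, 1): 0.3}            # candidate coupling

marg_X = {x: sum(joint[(x, y)] for y in beta) for x in alpha}
marg_Y = {y: sum(joint[(x, y)] for x in alpha) for y in beta}
print(marg_X, marg_Y)     # recovers alpha and beta (up to float rounding)
```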
In the case that $A=B$ and $A$ is partially ordered by the relation $\preceq$, we say that a coupling $\gamma$ is \dff{monotone} if $\gamma\ns{(a,b) \in A \times A : b \preceq a} = 1$. We will always endow the space of binary sequences $\ns{0,1}^I$ indexed by a set $I$ with the partial order $x \preceq y$ if and only if $x_i \leq y_i$ for $i \in I$. \begin{example}[Independent thinning] \label{indepthin} Let $\kappa$ and $\iota$ be probability measures on $\ns{0,1}$, where $\kappa(1) := p \geq \iota(1) :=q$. Let $r := \tfrac{p-q}{p}.$ Then the measure $\rho$ on $\ns{0,1} ^2$ given by $$\rho(0,0) = 1-p, \ \rho(0,1)=0, \ \rho(1,0) = rp, \text{ and } \rho(1,1) = (1-r)p$$ is a monotone coupling of $\kappa$ and $\iota$. Thus under $\rho$, a $1$ is thinned to a $0$ with probability $r$ and kept with probability $1-r$. Clearly, the product measure $\rho^n$ is a monotone coupling of $\kappa^n$ and $\iota^n$. We will refer to the coupling $\rho^n$ as the \dff{independent thinning of $\kappa^n$ to $\iota^n$}. \erk \end{example} The following simple lemma is one of the main ingredients in the proof of Theorem \ref{group}. In it we construct a coupling of $\kappa^n$ and $\iota^n$ for $n$ sufficiently large which will allow us to extract spare randomness from a related coupling of $\kappa^G$ and $\iota^G$. We will write $0^n1^m$ to indicate the binary sequence of length $n+m$ of $n$ zeros followed by $m$ ones. \begin{lemma}[Key coupling] \label{fund} Let $\kappa$ and $\iota$ be probability measures on $\ns{0,1}$, where $\kappa$ is of greater intensity. For $n$ sufficiently large, there exists a monotone coupling $\gamma$ of $\kappa^n$ and $\iota^n$ such that $$\gamma(100^{n-2}, 0^n) = \kappa^n(100^{n-2})$$ and $$\gamma(010^{n-2}, 0^n) = \kappa^n(010^{n-2}).$$ \end{lemma} \begin{proof} Let $p=\kappa(1)$, $q= \iota(1)$, and $\rho^n$ be the independent thinning of $\kappa^n$ to $\iota^n$ as in Example \ref{indepthin}. We will perturb $\rho^n$ to give the required coupling.
We specify a probability measure $\varrho$ on $\ns{0,1} ^n \times \ns{0,1}^n$ by stating that it agrees with $\rho^n$ except on the points $(100^{n-2}, 0^n)$, $(010^{n-2}, 0^n)$, $(100^{n-2}, 100^{n-2})$, and $(010^{n-2}, 010^{n-2})$, where we specify that $$\varrho(100^{n-2}, 0^n ) = \varrho(010^{n-2}, 0^n ) = p(1-p)^{n-1} $$ and $$ \varrho(100^{n-2}, 100^{n-2}) = \varrho(010^{n-2}, 010^{n-2}) = 0.$$ Thus $\varrho$ is almost a monotone coupling of $\kappa^n$ and $\iota^n$, except that from our changes to $\rho^n$ we have \begin{equation*}\begin{split} \sum_{x \in \ns{0,1} ^n} \varrho(x, 0^n) & = \sum_{x \in \ns{0,1} ^n} \rho^n(x, 0^n) -\rho^n(100^{n-2},0^n)-\rho^n(010^{n-2},0^n)\\ & \qquad\qquad +\varrho(100^{n-2},0^n)+\varrho(010^{n-2},0^n)\\ & = (1-q)^n + 2p(1-p)^{n-1}(1-r), \end{split}\end{equation*} and \begin{equation*}\begin{split} \sum_{x \in \ns{0,1} ^n} \varrho(x, 100^{n-2}) & = \sum_{x \in \ns{0,1} ^n} \rho^n(x, 100^{n-2}) -\rho^n(100^{n-2},100^{n-2})\\ &\qquad\qquad +\varrho(100^{n-2},100^{n-2})\\ & = q(1-q)^{n-1} - p(1-p)^{n-1}(1-r)+0\\ &= \sum_{x \in \ns{0,1} ^n} \varrho(x, 010^{n-2}), \end{split}\end{equation*} where $r = \tfrac{p-q}{p}$. We perturb $\varrho$ to obtain the desired coupling $\gamma$. Consider the set $B_1$ of all binary sequences of length $n$, where $x \in B_1$ if and only if $x_1=1$, $x_2=0$, and $\sum_{i=3}^n x_i = 1$. Similarly, let $B_2$ be the set of all binary sequences of length $n$, where $x \in B_2$ if and only if $x_1=0$, $x_2=1$, and $\sum_{i=3}^n x_i = 1$. The sets $B_1$ and $B_2$ are disjoint, and each have cardinality $n-2$. 
For $x \in B_1 \cup B_2$, $$\varrho(x, 0^n) = \rho^n(x, 0^n) = p^2(1-p)^{n-2}r^2,$$ for $x \in B_1$, $$\varrho(x, 100^{n-2}) = \rho^n(x, 100^{n-2})= p^2(1-p)^{n-2}r(1-r),$$ and for $x \in B_2$, $$\varrho(x, 010^{n-2}) = \rho^n(x, 010^{n-2})= p^2(1-p)^{n-2}r(1-r).$$ Note that for $n$ sufficiently large $$\sum_{x \in B_1 \cup B_2}\varrho(x, 0^n) = 2(n-2) p^2(1-p)^{n-2}r^2 >2p(1-p)^{n-1}(1-r).$$ Let $\gamma$ be equal to $\varrho$ except on the set of points $$\ns{(x,0^n) : x \in B_1 \cup B_2} \cup \ns{(x, 100^{n-2}): x \in B_1} \cup \ns{(x, 010^{n-2}): x \in B_2 },$$ where we make the following adjustments. For $x \in B_1 \cup B_2$, set $$\gamma(x, 0^n) = p^2(1-p)^{n-2}r^2 - \frac{p(1-p)^{n-1}(1-r)}{n-2} >0,$$ for $x \in B_1$, set $$\gamma(x, 100^{n-2}) = p^2(1-p)^{n-2}(1-r)r + \frac{p(1-p)^{n-1}(1-r)}{n-2},$$ and for $x \in B_2$, set $$\gamma(x, 010^{n-2}) = p^2(1-p)^{n-2}(1-r)r + \frac{p(1-p)^{n-1}(1-r)}{n-2}.$$ That $\gamma$ has the required properties follows from its construction. \end{proof} To illustrate the utility of Lemma \ref{fund}, we will give a different proof of the following result of Peled and Gurel-Gurevich \cite{GGP}. Let ${\mathbb N} = \ns{0, 1, 2, \ldots}$. \begin{theorem}[Peled and Gurel-Gurevich \cite{GGP}] \label{thick} Let $\kappa$ and $\iota$ be probability measures on $\ns{0,1}$, where $\kappa$ is of greater intensity. There exists a measurable map $\phi: \ns{0,1}^{{\mathbb N}} \to \ns{0,1}^{{\mathbb N}}$ such that the push-forward of $\kappa^{{\mathbb N}}$ under $\phi$ is $\iota^{{\mathbb N}}$ and $\phi(x)(i) \leq x(i)$ for all $x \in \ns{0,1}^{{\mathbb N}}$ and all $i \in {\mathbb N}$. 
\end{theorem} We note that in \cite[Theorem 1.3]{GGP}, they use the dual terminology of {\em thickenings}; their equivalent theorem states that for probability measures $\iota$ and $\kappa$ on $\ns{0,1}$, where $\iota$ is of lesser intensity, there is a measurable map $\phi: \ns{0,1}^{{\mathbb N}} \to \ns{0,1}^{{\mathbb N}}$ such that the push-forward of $\iota^{{\mathbb N}}$ under $\phi$ is $\kappa^{{\mathbb N}}$ and $\phi(x)(i) \geq x(i)$ for all $x \in \ns{0,1}^{{\mathbb N}}$ and all $i \in {\mathbb N}$. In the proof of Theorem \ref{thick}, we will make use of the following two lemmas. We say that a random variable $U$ is \dff{uniformly distributed} in $[0,1]$ if the probability that $U$ lies in a Borel subset of the unit interval is given by the Lebesgue measure of the set. \begin{lemma} \label{function} Let $(X, Y)$ be a pair of discrete random variables taking values on the finite set $A \times B$ with joint distribution $\gamma$. There exists a measurable function $\Gamma:A \times [0,1] \to B$ such that if $U$ is uniformly distributed in $[0,1]$ and independent of $X$, then $(X, \Gamma(X, U))$ has joint distribution $\gamma$. \end{lemma} \begin{proof} Assume that $\P(X = a) >0$, for all $a \in A$. Let $B = \ns{b_1 , \ldots, b_n}$. For each $a \in A$, let $$q_a(j):= \P(Y \in \ns{b_1, \ldots, b_j} | X=a) = \frac{\P (Y \in \ns{b_1, \ldots, b_j} , X=a)}{ \P( X=a) }$$ for all $1 \leq j \leq n$. Set $q_a(0) =0$ and note that $q_a(n) =1$, so that $$\P(q_a(j-1) \leq U < q_a(j) ) = \frac{\P(Y=b_j, X=a)}{\P(X=a)}.$$ For each $ 1\leq j \leq n$, let \begin{equation*} \Gamma(a, u) := b_j \text{ if } q_a(j-1) \leq u < q_a(j) . \qedhere \end{equation*} \end{proof} We call a $\ns{0,1}$-valued random variable a \dff{Bernoulli random variable}. The following lemma allows us to code sequences of independent coin-flips into sequences of uniformly distributed random variables. 
\begin{lemma} \label{rep} There exists a measurable function $c: \ns{0,1}^{{\mathbb N}} \to [0,1]^{{\mathbb N}}$ such that if $B =(B_{i})_{i \in {\mathbb N}}$ is a sequence of i.i.d.\ Bernoulli random variables with mean $\tfrac{1}{2}$, then $(c(B)_i)_{i \in {\mathbb N}}$ is a sequence of i.i.d.\ random variables that are uniformly distributed in $[0,1]$. \end{lemma} \begin{proof} The result follows from the Borel isomorphism theorem. See \cite[Theorem 3.4.23]{borel} for more details. \end{proof} \begin{proof}[Proof of Theorem \ref{thick}] Let $\gamma$ be the monotone coupling of $\kappa^n$ and $\iota^n$ given by Lemma \ref{fund}, so that $\gamma$ is a measure on $\ns{0,1} ^n \times \ns{0,1}^n \equiv (\ns{0,1} \times \ns{0,1})^n$. Thus the product measure $\gamma^2$ is a monotone coupling of $\kappa^{2n}$ and $\iota^{2n}$ and $\gamma^{{\mathbb N}}$ gives a monotone coupling of $\kappa^{{\mathbb N}}$ and $\iota^{{\mathbb N}}$. We will modify the coupling $\gamma^{{\mathbb N}}$ to become the required map $\phi$. In order to do this, it will be easier to think in terms of random variables rather than measures. Let $X =(X_i)_{i \in {\mathbb N}}$ be an i.i.d.\ sequence of Bernoulli random variables with mean $\kappa(1)$. For each $j \geq 0$, let $$X^{j} := (X_{jn}, \ldots, X_{(j+1)n -1} ),$$ so that the random variables are partitioned into blocks of size $n$. Let $U = (U_i)_{i \in {\mathbb N}}$ be an i.i.d.\ sequence of random variables that are uniformly distributed in $[0,1]$. Also assume that $U$ is independent of $X$, and let $Y =(Y_i)_{i \in {\mathbb N}}$ be an i.i.d.\ sequence of Bernoulli random variables with mean $\iota(1)$. By Lemmas \ref{fund} and \ref{function}, let $\Gamma :\ns{0,1} ^n \times [0,1] \to \ns{0,1}^n$ be a measurable map such that $(X^{1}, \Gamma(X^{1}, U_1))$ has joint law $\gamma$ and $\Gamma(w, v)=0^n$ for all $v \in [0,1]$ if $w \in \ns{100^{n-2}, 010^{n-2}}$. 
We have that $$\big(X, (\Gamma(X^i, U_i))_{i \in {\mathbb N}} \big)$$ gives a monotone coupling of $X$ and $Y$ with law $\gamma^{{\mathbb N}}$. For each $j \in {\mathbb N}$, call $X^j$ \dff{special} if $X^{j} \in \ns{100^{n-2}, 010^{n-2}}$ and let $S \subset {\mathbb N}$ be the random set of $j \in {\mathbb N}$ for which $X^j$ is special. Note that almost surely, $S$ is an infinite set. Let $\bar{X} = (\bar{X}_i)_{i \in {\mathbb N}}$ be the sequence of binary digits such that $\bar{X}^j = X^j$ if $j \not \in S$ and $\bar{X}^j = 0^n$ if $j \in S$. We have that $$ (\Gamma(X^i, U_i))_{i \in {\mathbb N}} = (\Gamma(\bar{X}^i, U_i))_{i \in {\mathbb N}}.$$ Let $(s_i)_{i \in {\mathbb N}}$ be the enumeration of $S$, where $s_0 < s_1 < s_2 < \cdots$. Consider the sequence of random variables given by $$b(X) := (\mathbf{1}[X^{s_i} =100^{n-2} ])_{i \in {\mathbb N}} = (X_{s_i n})_{i \in {\mathbb N}}.$$ Since $100^{n-2}$ and $010^{n-2}$ occur with equal probability, we have that $b(X)$ is an i.i.d.\ sequence of Bernoulli random variables with mean $\tfrac{1}{2}$. Furthermore, we have that $b(X)$ is independent of $\bar{X}$, since $b(X)$ only depends on the values of $X$ on the special blocks. Let $c$ be the function from Lemma \ref{rep}, so that $c(b(X)) \stackrel{d}{=} U$. Since $b(X)$ is independent of $\bar{X}$, % \begin{eqnarray*} [\Gamma(X^i, U_i)]_{i \in {\mathbb N}} &=& [\Gamma(\bar{X}^i, U_i)]_{i \in {\mathbb N}} \\ &\stackrel{d}{=}& \big[\Gamma(\bar{X}^i, c(b(X))_i)\big]_{i \in {\mathbb N}} \\ &=& \big[\Gamma({X}^i, c(b(X))_i)\big]_{i \in {\mathbb N}}. \end{eqnarray*} % Thus $\Big(X, \big[\Gamma({X}^i, c(b(X))_i)\big]_{i \in {\mathbb N}}\Big)$ is another monotone coupling of $X$ and $Y$. Hence, we define $$\phi(x) := \big[\Gamma({x^i}, c(b(x))_i )\big]_{i \in {\mathbb N} }$$ for all $x \in \ns{0,1}^{{\mathbb N}}$ when the set $S$ is infinite, and set $\phi(x)=0^{{\mathbb N}}$ when $S$ is finite--an event that occurs with probability zero.
% \end{proof} \subsection{Joinings} Let $T: \ns{0,1}^{{\mathbb Z}} \to \ns{0,1}^{{\mathbb Z}}$ be the left-shift given by $(Tx)_i = x_{i+1}$ for all $x \in \ns{0,1}^{{\mathbb Z}}$ and all $i \in {\mathbb Z}$. Let $\kappa$ and $\iota$ be probability measures on $\ns{0,1}$. A \dff{joining} of $\kappa^{{\mathbb Z}}$ and $\iota^{{\mathbb Z}}$ is a coupling $\varrho$ of the two measures with the additional property that $\varrho \circ (T\times T) = \varrho$. We will make use of the following joining in the proof of Theorem \ref{group}. \begin{example} \label{partition} Let $\kappa$ and $\iota$ be probability measures on $\ns{0,1}$. Assume that the intensity of $\kappa$ is greater than the intensity of $\iota$. Let $x \in \ns{0,1}^{{\mathbb Z}} $, and let $n$ be sufficiently large as in Lemma \ref{fund}. Call the subset $[j, j+2n+1] \subset {\mathbb Z}$ a \dff{marker} if $x_i =0$ for all $i \in [j, j+2n]$ and $x_{j+2n+1}=1$. Notice that two distinct markers have an empty intersection. Call an interval a \dff{filler} if it is nonempty and lies between two markers. Thus each $x \in \ns{0,1}^{{\mathbb Z}}$ partitions ${\mathbb Z}$ into intervals of markers and fillers. Call a filler \dff{fitted} if it is of size $n$, and call a filler \dff{special} if it is both fitted and of the form $100^{n-2}$ or $010^{n-2}$. Let $X$ have law $\kappa^{{\mathbb Z}}$ and $Y$ have law $\iota^{{\mathbb Z}}$. In what follows we describe explicitly how to obtain a monotone joining of $X$ and $Y$, where the independent thinning is used everywhere, except at the fitted fillers, where the coupling from Lemma \ref{fund} is used. Let $U=(U_i)_{i \in {\mathbb Z}}$ be an i.i.d.\ sequence of random variables that are uniformly distributed in $[0,1]$ and independent of $X$. By Example \ref{indepthin} and Lemma \ref{function}, let $R: \ns{0,1} \times [0,1] \to \ns{0,1}$ be a measurable function such that $R(X_1, U_1) \leq X_1$ is a Bernoulli random variable with mean $\iota(1)$. 
Let $\Gamma$ and $\gamma$ be as in the proof of Theorem \ref{thick}, so that $$\big( (X_1, \ldots, X_n), \Gamma( X_1, \ldots, X_n, U_1) \big)$$ has law $\gamma$. Consider the function $\Phi: \ns{0,1} ^{{\mathbb Z}} \times [0,1] ^{{\mathbb Z}} \to \ns{0,1} ^{{\mathbb Z}}$ defined by $\Phi(x, u)_i = R(x_i, u_i)$ if $i$ is not in a fitted filler. For $(j, j+1, \ldots, j+n-1)$ in a fitted filler, we set $$ (\Phi(x,u)_j, \ldots, \Phi(x,u)_{j+n-1}) = \Gamma(x_j, \ldots, x_{j+n-1}, u_j ).$$ The law of $X$ restricted to a filler interval is the law of a finite sequence of i.i.d.\ Bernoulli random variables with mean $\kappa(1)$, conditioned not to contain a marker. Note that since a fitted interval is of size $n$, and a marker is of size $2n+2$, the law of $X$ restricted to a fitted interval is just the law of a finite sequence of i.i.d.\ Bernoulli random variables with mean $\kappa(1)$. Furthermore, conditioned on the locations of the markers, the restrictions of $X$ to each filler interval are independent (see for example Keane and Smorodinsky \cite[Lemma 4]{keanea} for a detailed proof). Hence, $\Phi(X, U) \stackrel{d}{=} Y$. In addition, since all the couplings involved are monotone, we easily have that $\Phi(X,U)_i \leq X_i$ for all $i \in {\mathbb Z}$. \erk \end{example} \begin{remark} \label{KS2} To emphasize the strong form of independence in Example \ref{partition}, we note that if $A=(A_i)_{i\in\bb Z}$ are independent Bernoulli random variables with mean $\tfrac{1}{2}$ that are independent of $X$, then $(A_{jn})_{j\in S}$ has the same law as $(X_{jn})_{j\in S}$. Recall that if $j\in S$ then $X^{j} = (X_{jn}, \ldots, X_{(j+1)n -1} )$ is special. % In addition, if $X'$ is such that $X'_i= X_i$ for every $i$ not in a special filler of $X$ and on each special filler of $X$ we set $X'_{jn} = A_{jn}$, $X'_{jn+1} =1-A_{jn}$, and \begin{equation*} X'_{jn+2}=X'_{jn+3} = \cdots =X'_{(j+1)n-1}= 0, % \end{equation*} then $X' \stackrel{d}{=} X$.
Thus we can independently resample on the special fillers without affecting the distribution of $X$. \erk \end{remark} \subsection{The example of Ornstein and Weiss} Let $\mathbb{F}_r$ be the free group of rank $r \geq 2$. Let $a$ and $b$ be two of its generators. The Ornstein and Weiss \cite{MR910005} entropy-increasing factor map is given by $$\phi(x)(g) = (x(g) \oplus x(ga), x(g) \oplus x(gb))$$ for all $x \in \ns{0,1}^{\mathbb{F}_r}$ and all $g \in \mathbb{F}_r$, where $$\phi: \ns{0,1}^{\mathbb{F}_r} \to ( \ns{0,1} \times \ns{0,1} )^{\mathbb{F}_r} \equiv \ns{00,01,10,11}^{\mathbb{F}_r}$$ pushes the uniform product measure $(\tfrac{1}{2}, \tfrac{1}{2}) ^{\mathbb{F}_r}$ forward to the uniform product measure $(\tfrac{1}{4}, \tfrac{1}{4}, \tfrac{1}{4}, \tfrac{1}{4}) ^{\mathbb{F}_r}$; the required independence follows from the observation that, with $m \oplus n := m + n \bmod 2$, if $X$, $X'$, and $Y$ are independent Bernoulli random variables with mean $\tfrac{1}{2}$, and if $Z := X \oplus Y$ and $Z' := X' \oplus Y$, then $Z$ and $Z'$ are independent, even though they both depend on $Y$. Ornstein and Weiss's example can be iterated to produce an infinite number of bits at each vertex in the following way. As in Ball \cite[Proposition 2.1]{ball3}, we will define $\phi_k : \ns{0,1} ^{\mathbb{F}_r} \to ( \ns{0,1} ^{k} ) ^{\mathbb{F}_r}$ inductively for $k \geq 2$. Let $\tilde{\phi}_k: \ns{0,1} ^{\mathbb{F}_r} \to \ns{0,1} ^{\mathbb{F}_r}$ be the last coordinate of $\phi_k$, so that $\tilde{\phi}_k(x)(g) = [\phi_k(x)(g)]_k$ for all $x \in \ns{0,1}^{\mathbb{F}_r}$ and all $g \in \mathbb{F}_r$. Set $\phi_2 = \phi$. For $k \geq 3$, let $\phi_k$ be given by $$ \phi_k(x)(g) = \Big( [{\phi_{k-1} (x)(g)}]_1, \ldots, [{\phi_{k-1}(x)(g) }]_{k-2}, (\phi \circ \tilde{ \phi}_{k-1})(x)(g) \Big)$$ for all $x \in \ns{0,1}^{\mathbb{F}_r}$ and all $g \in \mathbb{F}_r$. At each step we are saving one bit to generate two new bits using the original map $\phi$.
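The independence observation behind the Ornstein and Weiss map can be checked by brute-force enumeration of the eight equally likely outcomes of $(X, X', Y)$; this sketch simply tabulates the joint law of $(Z, Z')$:

```python
from itertools import product
from collections import Counter

# Enumerate (x, x', y) uniform on {0,1}^3 and tabulate (z, z'),
# where z = x XOR y and z' = x' XOR y.
counts = Counter()
for x, xp, y in product([0, 1], repeat=3):
    counts[(x ^ y, xp ^ y)] += 1

# Each of the four pairs (z, z') occurs exactly twice out of eight
# outcomes, so (Z, Z') is uniform on {0,1}^2: Z and Z' are
# independent fair bits even though both depend on Y.
print(sorted(counts.items()))
```

The same computation shows why one saved bit suffices to seed the next round of the iteration: the output bits are again uniform and independent.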
The map $\phi_k$ pushes the uniform product measure $(\tfrac{1}{2}, \tfrac{1}{2}) ^{\mathbb{F}_r}$ forward to the uniform product measure on $( \ns{0,1} ^{k} ) ^{\mathbb{F}_r}$. By taking the limit, we obtain the mapping $$\phi_{\infty}: \ns{0,1} ^{\mathbb{F}_r} \to ( \ns{0,1} ^{{\mathbb Z}^{+}} ) ^{\mathbb{F}_r}$$ which yields a sequence of i.i.d.\ fair bits at each coordinate $g\in \mathbb{F}_r$, independently. Note that $\phi_{\infty}(x)(g)_k = \phi_{n}(x)(g)_k$ for all $n > k$. In our proof of Theorem \ref{group} we will use this iteration, which Ball attributes to Tim\'ar. \begin{comment} \begin{remark} \label{mod} We remark that the following simple probabilistic fact is important in these constructions. Let $X$ and $Y$ be independent Bernoulli random variables with mean $\tfrac{1}{2}$ and let $Z := X \oplus Y$. Although $Z$ is not independent of $(X, Y)$, we have that $Z$ is independent of $X$ and $Z$ is independent of $Y$. Similarly, if $X'$ is a Bernoulli random variable with mean $\tfrac{1}{2}$, and $X'$ is independent of $(X,Y)$, then $Z' = X' \oplus Y$ is independent of $Z$. \erk \end{remark} \end{comment} \section{Proof of the main theorem} \begin{proof}[Proof of Theorem \ref{group}] Let $r \geq 2$. We begin by extending the same monotone joining defined in Example \ref{partition} to a monotone joining of $\kappa^{\frgr r}$ and $\iota^{\frgr r}$. Let $X$ have law $\kappa^{\frgr r}$ and $Y$ have law $\iota^{\frgr r}$; then $X = (X_g)_{g \in \frgr r} = (X(g))_{g \in \frgr r}$ are i.i.d.\ Bernoulli random variables with mean $\kappa(1)$. As in the Ornstein and Weiss example, it will be sufficient to use only two generators $a$ and $b$ in the expression of our equivariant thinning. We refer to the string of generators and their inverses that make up the representation of an element in $\frgr r$ as a \dff{word}, and the individual generators and inverses as \dff{letters}. We call a word \dff{reduced} if its string of letters has no possible cancellations.
Consider $\frgr r$ as being partitioned into infinitely many $\bb Z$ copies $Z(w)$ in the following way. Let $\frgrp r $ be the set of reduced words in $\frgr r$ that do not end in either $b$ or $b^{-1}$. For each $w \in \frgrp r$, set $Z(w):=\{wb^i\}_{i\in{\mathbb Z}}$. Indeed, any element in $\frgr r$ may be written as $wb^i$ for unique reduced $w \in \frgrp r $ and $i \in {\mathbb Z}$. Let $n$ be sufficiently large for the purposes of Lemma \ref{fund}. We define markers, fillers, fitted fillers, and special fillers on each of the ${\mathbb Z}$ copies in the obvious way. For example, if $x \in \ns{0,1}^{\frgr r}$ and $w \in \frgrp r$, then the set $\ns{wb^j, \ldots, wb^{j+2n+1}} $ is a marker if $ x(wb^i)=0$ for all $i \in [j, j+2n]$ and $x(wb^{j+2n+1}) =1$. Let $U'=(U'_g)_{g \in{\frgr r}}$ be i.i.d.\ uniform random variables independent of $X$. Let $\Phi$ be as in Example \ref{partition}. Define $\hat\Phi : \{0,1\}^{\frgr r}\times [0,1]^{\frgr r}\rightarrow \{0,1\}^{\frgr r}$ by $$\hat\Phi (x,u')_{wb^i}=\Phi \big(x(Z(w)),u'(Z(w) )\big)_i$$ for all $w \in \frgrp r$ and all $i \in {\mathbb Z}$, where $x(Z(w)):= (x(wb^j))_{j \in {\mathbb Z}}$ and $u'(Z(w)) := (u'(wb^j))_{j \in {\mathbb Z}}$. Thus we have the monotone joining $\Phi$ on each $\bb Z$ copy $Z(w)$ in $\frgr r$, so that \begin{equation} \label{eqd} \hat\Phi (X,U')\stackrel{d}{=} Y \end{equation} and $\hat\Phi (X,U')_g \leq X_g$ for all $ g \in\frgr r$. Additionally, since $\Phi$ is a joining, the joint law of $(X, \hat\Phi (X,U'))$ is invariant under $\frgr r$-actions. Recall that a special filler has length exactly $n$, and the filler has two choices of values $010^{n-2}$ or $100^{n-2}$, which occur with equal probability. We define an \dff{initial vertex} of a special filler in $Z(w)$ to be an element $wb^{n_0} \in Z(w)$ where the entire special filler takes values sequentially at vertices on the minimal path from $wb^{n_0}$ to $wb^{n_0+n-1}$.
For each $x \in \ns{0,1}^{\frgr r}$, let $V=V(x)$ be the set of initial vertices in $\frgr r$. Note that as in Example \ref{partition}, the law of $X$ restricted to a fitted interval is just the law of a finite sequence of i.i.d.\ Bernoulli random variables with mean $\kappa(1)$. Furthermore, conditioned on the locations of the markers, the restrictions of $X$ to each filler interval are independent. Thus for all $v \in V(X)$, $X(v)$ is a Bernoulli random variable with mean $\frac{1}{2}$, and conditioned on $V(X)$, the random variables $(X(v))_{v \in V}$ are independent. We have the same strong form of independence here as emphasized in Remark \ref{KS2} for Example \ref{partition}, again by Keane and Smorodinsky \cite[Lemma 4]{keanea}. This is key in our construction: we will use the Bernoulli random variables $(X(v))_{v \in V}$ to build deterministic substitutes for $U'$. Now we adapt the iteration of the Ornstein and Weiss example to assign a sequence of i.i.d.\ Bernoulli random variables to each $v \in V$. For each $v \in V$, let $k$ be the smallest positive integer such that $va^k \in V$; set $\alpha(v) = va^k$. Similarly, let $k'$ be the smallest positive integer such that $vb^{k'}\in V$ and set $\beta(v) = v b^{k'}.$ For each $v \in V$, define $$\psi(x)(v) = \big( x(v) \oplus x( \alpha(v)), x(v) \oplus x( \beta(v)) \big).$$ Conditioned on $V$, we have that $(\psi(X)(v))_{v \in V}$ is a family of independent random variables uniformly distributed on $\ns{ 00, 01, 10, 11}$. We iterate the map $\psi$ as we did with the Ornstein and Weiss map $\phi$. Set $\psi_2 = \psi$. For $k \geq 3$, let $$ \psi_{k}(x)(v) = \Big( [{\psi_{k-1} (x)(v)}]_1, \ldots, [{\psi_{k-1}(x)(v) }]_{k-2}, (\psi \circ \tilde{\psi}_{k-1})(x) (v) \Big),$$ where $\tilde{\psi}_{k-1}(x)(v) = [\psi_{k-1}(x)(v)]_{k-1}$ is the last coordinate of $\psi_{k-1}$.
Let $\psi_{\infty}$ be the limit, and let $B_v = \psi_{\infty}(X)(v)$, so that conditioned on $V$, the random variables $(B_v)_{v \in V}$ are independent, and each $B_v$ is an i.i.d.\ sequence of Bernoulli random variables with mean $\frac{1}{2}$. For all $x\in\{0,1\}^{\frgr r}$, let $\bar{x}(g) = x(g)$ for all $g$ not in a special filler, and let $\bar{x}(g) = 0$ if $g$ belongs to a special filler. It follows from Remark \ref{KS2} that if $B' = (B'_g)_{g \in \frgr r}$ are independent Bernoulli random variables with mean $\tfrac{1}{2}$ independent of $X$, then $(B'_v)_{v \in V(X) }$ has the same law as $(B_v)_{v \in V(X) }$. Moreover, \begin{equation} \label{use} \big(\bar{X}, (B_v)_{v \in V(X) } \big) \stackrel{d}{=} \big(\bar{X}, (B'_v)_{v \in V(X) }\big). \end{equation} We assign, in an equivariant way, one uniform random variable to each element in $\frgr r$ using the randomness provided by $(B_v)_{v \in V }$. Let $c:\ns{0,1}^{{\mathbb N}} \to [0,1]^{{\mathbb N}}$ be the function from Lemma \ref{rep}, and let $g\in\frgr r$. Then almost surely there exist $v\in V$ and a minimal $j>0$ such that $gb^j=v$; set $U_g=c(B_v)_{j} $. Define $\mathbf{u}:\ns{0,1}^{\frgr r} \to [0,1]^{\frgr r}$ by setting $ \mathbf{u}(X):=(U_g)_{g\in\frgr r}$. Recall that $U' = (U_g')_{g \in \frgr r}$ are independent random variables uniformly distributed in $[0,1]$ independent of $X$. From \eqref{use}, % \begin{equation} \label{use2} \big(\bar{X}, {\mathbf{u}(X)} \big) \stackrel{d}{=} \big(\bar{X}, {U'} \big). \end{equation} Let $R: \ns{0,1} \times [0,1] \to \ns{0,1}$ and $\Gamma:\ns{0,1}^n \times [0,1] \to \ns{0,1}^n$ be the functions that appear in the definition of $ \Phi$ in Example \ref{partition}. Recall that $R$ facilitated independent thinning and $\Gamma$ the key monotone coupling of Lemma \ref{fund}. Also recall that $\Gamma(100^{n-2}, t) = 0^n = \Gamma(010^{n-2}, t)$ for all $t \in [0,1]$.
Now define $\phi:\{0,1\}^{\frgr r}\rightarrow \{0,1\}^{\frgr r}$ by $$\phi(x)(g)=R\big( x(g),\mathbf{u}(x)(g) \big)$$ for $g$ not in a fitted filler; if $\ns{wb^i,\ldots,wb^{i+n-1}}$ is a fitted filler, then set $$(\phi(x)(wb^i),\ldots,\phi(x)(wb^{i+n-1}))=\Gamma\big( x(wb^i),\ldots, x(wb^{i+n-1}) ,\mathbf{u}(x)(wb^i) \big).$$ Note $\phi$ is defined so that $\phi(x) = \hat{\Phi}(x, \mathbf{u}(x))$. The map $\phi$ is equivariant and satisfies $\phi(x)(g) \leq x(g)$ by construction. It remains to verify that $\phi(X) \stackrel{d}{=} Y$. By the definition of $\Gamma$, we have $\phi(X) = \phi(\bar{X})$; that is, all special fillers are sent to $0^n$. A similar remark applies to the map $\hat{\Phi}$. From \eqref{eqd} and \eqref{use2}, $$\phi(X) = \hat{\Phi}({X}, \mathbf{u}(X)) =\hat{\Phi}(\bar{X}, \mathbf{u}(X)) \stackrel{d}{=} \hat{\Phi}(\bar{X}, U') = \hat{\Phi}(X, U')\stackrel{d}{=} Y.\qedhere$$ \end{proof} \section{Generalizations and questions} \subsection{Stochastic domination} Let $[N] = \ns{0, 1, \ldots, N-1}$ be endowed with the usual total ordering. Let $\kappa$ and $\iota$ be probability measures on $[N]$. We say that $\kappa$ \dff{stochastically dominates} $\iota$ if $\sum_{i=0} ^j \kappa_i \leq \sum_{i=0} ^j \iota_i$ for all $ j \in [N]$. An elementary version of Strassen's theorem \cite[Theorem 11] {Strassen} gives that $\kappa$ stochastically dominates $\iota$ if and only if there exists a monotone coupling of $\kappa$ and $\iota$. Notice that in the case $N=2$, we have that $\kappa$ stochastically dominates $\iota$ if and only if $\iota$ is not of higher intensity than $\kappa$. Thus Theorem \ref{group} gives a positive answer to a special case of the following question. \begin{question} \label{SDgen} Let $\kappa$ and $\iota$ be probability measures on $[N]$, where $\kappa$ stochastically dominates $\iota$, and $\kappa$ gives positive measure to at least two elements of $[N]$. Let $G$ be the free group of rank at least two. 
Does there exist a measurable equivariant map $\phi: [N] ^G \to [N]^G$ such that the push-forward of $\kappa^G$ is $\iota^G$ and $\phi(x)(g) \leq x(g)$ for all $x \in [N] ^G$ and $g \in G$? \end{question} In Question \ref{SDgen}, we call the map $\phi$ a \dff{monotone factor from $\kappa$ to $\iota$}. A necessary condition for the existence of a monotone factor from $\kappa$ to $\iota$ is that $\kappa$ stochastically dominates $\iota$. In the case $G= {\mathbb Z}$, Ball \cite{Ball} proved that there exists a monotone factor from $\kappa$ to $\iota$ provided that $\kappa$ stochastically dominates $\iota$, $H(\kappa) > H(\iota)$, and $\iota$ is supported on two symbols; Quas and Soo \cite{Qsc} removed the two symbol condition on $\iota$. In the non-amenable case, where $G$ is a free group of rank at least two, one can hope that Question \ref{SDgen} can be answered positively, without any entropy restriction. However, the analogue of Lemma \ref{fund} that was key to the proof of Theorem \ref{group} does not apply in the simple case where $\kappa = (0,\tfrac{1}{2}, \tfrac{1}{2})$ and $\iota = (\tfrac{1}{3}, \tfrac{1}{3}, \tfrac{1}{3})$. In particular, for all $n \geq 1$, there is no coupling $\rho$ of $\kappa^n$ and $\iota^n$ for which there exists $x \in \ns{1,2}^n$ and $y \in \ns{0,1,2}^n$ such that $\rho(x,y) = \kappa^n(x) = (\tfrac{1}{2})^n$, since $\rho(x,y) \leq \iota^n(y) = (\tfrac{1}{3})^n$. \subsection{Automorphism-equivariant factors} The Cayley graph of $\mathbb{F}_n$ is the regular tree $\mathbb{T}_{2n}$ of degree $2n$. We note that $\mathbb{F}_n$ is a strict subset of the group of graph automorphisms of $\mathbb{T}_{2n}$. The map that we constructed in Theorem \ref{group} is not equivariant with respect to the full automorphism group of $\mathbb{T}_{2n}$. In particular, our definition of a marker is not equivariant with respect to the automorphism which exchanges $a$-edges and $b$-edges in $\bb T_{2n}$. 
However, Ball generalizes the Ornstein and Weiss example to the full automorphism group in \cite[Theorem 3.3]{ball3} by proving that for any $d \geq 3$, there exists a measurable mapping $\phi: \ns{0,1} ^{\mathbb{T}_d } \to [0,1] ^{ \mathbb{T}_d} $ which pushes the uniform product measure on two symbols forward to the product measure of Lebesgue measure on the unit interval, equivariant with respect to the group of automorphisms of $\mathbb{T}_d$. Moreover, she proved the analogous result for any tree with bounded degree, no leaves, and at least three ends. \begin{question} Let $T$ be a tree with bounded degree, no leaves, and at least three ends. Let $\kappa$ and $\iota$ be probability measures on $\ns{0,1}$, with $\iota$ of lower intensity. Does there exist a thinning from $\kappa$ to $\iota$ that is equivariant with respect to the full automorphism group of $T$? \end{question} \section*{Acknowledgements} We thank the referee for carefully reviewing the paper and providing various helpful comments and suggestions. \bibliographystyle{abbrv}
\section{Introduction}\label{sec:Intro} Suppose that $X$ is a random vector in $\mathbb{R}^d$, for $d \ge 1$, with distribution $\nu$. When $d=1$, the rank and quantile functions of $X$ are defined as $F$ and $F^{-1}$ (the inverse\footnote{$F^{-1}(p) := \inf \left\{x\in {\mathbb {R}}:p\leq F(x)\right\}$.} of $F$), respectively, where $F$ is the cumulative distribution function of $X$. Moreover, when $d=1$, quantile and rank functions and their empirical counterparts are ubiquitous in statistics and form the backbone of what is now known as classical nonparametrics (see e.g.,~\cite{Lehmann75}) and are important tools for inference (see e.g.,~\cite{HW03} and the references therein). In this paper we study many properties of {\it multivariate} (empirical) ranks and quantiles defined using the theory of optimal transport, as introduced in~\citet{Cher17}. Unlike the real line, the $d$-dimensional Euclidean space $\mathbb{R}^d$, for $d \ge 2$, has no natural ordering. This has been a major impediment in defining analogues of quantiles and ranks in $\mathbb{R}^d$, for $d \ge 2$. Several notions of multivariate quantiles have been proposed in the statistical literature --- some based on data depth ideas (see e.g.,~\cite{Oja83, Liu92, Zou03}) and some based on geometric ideas (see e.g.,~\cite{Chaud96, Kol97, HPS10}); see~\cite{Serfling10} and~\cite{dCHM} for recent surveys on this topic. However, most of these notions do not enjoy the numerous appealing properties that make univariate ranks and quantiles so useful. For example, most of these notions can lead to multivariate quantiles that may take values outside the support of the distribution $\nu$. To motivate the notions of ranks and quantiles based on the theory of optimal transportation (the subject of our study) let us first consider the case when $d=1$. Suppose that $X \sim \nu$ has a continuous distribution function $F$. 
An important property of the one-dimensional rank function $F$ is that $F(X) \sim \mu$ where $\mu \equiv $ Uniform$([0,1])$, i.e., $F$ {\it transports} (see \eqref{eq:PushMeasure-1} for the formal definition) the distribution $\nu$ to $\mu$. Similarly, the quantile function $F^{-1}$ (which is the inverse of the rank map) transports $\mu$ to $\nu$, i.e., $F^{-1}(U) \sim X$ where $U \sim \mu$. In fact, it can be easily shown that the quantile function $F^{-1}$ (or $F$) is the unique monotone nondecreasing map that transports $\mu$ to $\nu$ (or $\nu$ to $\mu$). Moreover, if $\nu$ has finite second moment, it can be shown that $F^{-1}$ is the almost everywhere (a.e.) unique map (on $[0,1]$) that transports $\mu$ to $\nu$ and minimizes the expected squared-error cost, i.e., \begin{equation}\label{eq:Q-F} F^{-1} = \arg \min_{T: T(U) \sim \nu} \mathbb{E} [(U - T(U))^2], \qquad \mbox{where} \;\; U \sim \mu \end{equation} and the minimization is over all functions $T$ that transport $\mu$ to $\nu$ (and thus the connection to optimal transportation); see Section~\ref{sec:Q-R} for the details. The rank function $F$ also minimizes the expected squared-error cost where now one considers maps that transport $\nu$ to $\mu$. The multivariate quantile and rank functions using optimal transportation essentially extend the above properties of univariate rank and quantile functions. Now let $\mu$ be an absolutely continuous probability measure with respect to (w.r.t.) Lebesgue measure on $\mathbb{R}^d$ ($d \ge 1$) and supported on a compact convex set $\mathcal{S}$; e.g., we can take $\mu$ to be Uniform$([0,1]^d)$ or uniform on the ball of radius one around $0 \in \mathbb{R}^d$. We often refer to $\mu$ as the \emph{reference distribution} and will define quantiles relative to this reference measure (when $d=1$ we usually take $\mu$ to be Uniform$([0,1])$). 
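The univariate facts above are easy to verify concretely. In the sketch below the target $\nu$ is Exp(1), an illustrative choice: its quantile function $F^{-1}(p) = -\log(1-p)$ is the monotone map transporting Uniform$([0,1])$ to $\nu$, and $F \circ F^{-1}$ is the identity on $(0,1)$.

```python
import numpy as np

# Target nu = Exp(1): F(x) = 1 - exp(-x), F^{-1}(p) = -log(1 - p).
def F(x):
    return 1.0 - np.exp(-x)

def Finv(p):
    return -np.log1p(-p)

p = np.linspace(0.01, 0.99, 99)
assert np.allclose(F(Finv(p)), p)      # F o F^{-1} = id on (0, 1)
assert np.all(np.diff(Finv(p)) > 0)    # F^{-1} is monotone increasing

# Push-forward check: F^{-1}(U) with U uniform has the quantiles of nu.
rng = np.random.default_rng(0)
sample = Finv(rng.random(200_000))
emp = np.quantile(sample, [0.25, 0.5, 0.75])
print(np.max(np.abs(emp - Finv(np.array([0.25, 0.5, 0.75])))))  # small
```

The same two properties, gradient-of-convex (i.e., monotone) and push-forward, are exactly what the multivariate definition below generalizes.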
Let $\nu$ be another probability measure in $\mathbb{R}^d$ which we term as the \emph{target distribution}; we think of $\nu$ as the population distribution of the observed data. We define the {\it multivariate quantile function} $Q: \mathcal{S} \to \mathbb{R}^d$ of $\nu$ w.r.t.~$\mu$ as the solution to the following optimization problem: \begin{equation}\label{eq:Multi-Q-F} Q := \arg \min_{T: T(U) \sim \nu} \mathbb{E} [\|U - T(U)\|^2], \qquad \mbox{where} \;\; U \sim \mu, \end{equation} and the minimization is over all functions $T:\mathcal{S} \to \mathbb{R}^d$ that transport $\mu$ to $\nu$; cf.~\eqref{eq:Q-F} and see Section~\ref{sec:Q-R} for the details. Here $\|\cdot \|$ denotes the usual Euclidean norm in $\mathbb{R}^d$. Moreover, if $\nu$ does not have a finite second moment, the above optimization problem might not be meaningful but the notion of multivariate quantiles (using optimal transportation) can still be defined as follows. By Brenier-McCann's theorem (see Theorem~\ref{thm:Brenier}), there exists an a.e.~unique map $Q: \mathcal{S} \to \mathbb{R}^d$ --- which we define as the quantile function of $\nu$ (w.r.t.~the reference measure $\mu$) --- that is the gradient of a convex function\footnote{Note that a convex function is differentiable a.e.~in its domain.} and transports $\mu$ to $\nu$; i.e., $Q(U) \sim \nu$ where $U \sim \mu$. Further, it can be shown that (see Theorem~\ref{thm:Brenier}) when~\eqref{eq:Multi-Q-F} is meaningful, the above two notions yield the same function $Q$. Note that when $d=1$, the gradient of a convex function is a monotone nondecreasing function and thus the above two characterizations of the quantile function $Q$ are the exact analogues of the one-dimensional case described in the previous paragraph. 
Although the rank function can be intuitively thought of as the inverse of the quantile function, such an inverse might not always exist --- especially when $\nu$ is a discrete probability measure (which arises naturally when defining the empirical rank map). In Section~\ref{sec:Q-R} we tackle this issue and use the notion of the Legendre-Fenchel transform (see Section~\ref{sec:prelim}) to define the rank function. In Lemma~\ref{lem:QRrelation} we show that the defined notion of rank function is a right inverse of the quantile function a.e.~if the reference and the target distributions are absolutely continuous. Furthermore, Proposition~\ref{thm:Q_prop} shows that, under mild regularity conditions, the quantile and rank functions are continuous bijections (i.e., homeomorphisms) between the (interiors of the) supports of the reference and target distributions and they are inverses of each other. It is worth noting that for $d=1$, a continuous bijective rank map corresponds to the distribution function being continuous and strictly increasing. Lemma~\ref{lem:Monotonicity} states that the one-dimensional projection of the rank map, along any direction, is nondecreasing. Further, in Lemma~\ref{cor:RankProp} we show that, when the rank map is a homeomorphism, it approaches a limit along every ray that depends only on the geometry of $\mathcal{S}$ (and not on $\nu$). Given $n$ i.i.d.~random vectors $X_1,\ldots, X_n$ in $\mathbb{R}^d$, in Sections~\ref{sec:Emp-Q-R} and~\ref{sec:Comp}, we discuss the characterization and computation (see Lemmas~\ref{lem:ExtremePt} and~\ref{lem:Char3}) of the empirical quantile and rank maps --- which are defined via~\eqref{eq:Multi-Q-F} but with $\nu$ replaced by the empirical distribution of the data. In particular, the empirical quantiles (and ranks) are obtained from the solution of a convex program with $n$ variables. 
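When the reference measure is itself discretized to $n$ atoms (e.g., a sample or grid from $\mu$), the analogue of \eqref{eq:Multi-Q-F} reduces to an optimal assignment problem. The following is only an illustrative sketch of that fully discrete case, not the estimator analyzed in the paper (there $\mu$ stays continuous and the empirical maps solve a semi-discrete convex program):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def empirical_quantile_pairing(U, X):
    """Pair each reference point U[i] with a data point X[perm[i]],
    minimizing sum_i ||U[i] - X[perm[i]]||^2 (a discrete Monge map)."""
    cost = ((U[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    rows, cols = linear_sum_assignment(cost)
    return cols, cost[rows, cols].sum()

rng = np.random.default_rng(1)
n = 50
U = rng.random((n, 2))           # discretized reference on [0,1]^2
X = rng.normal(size=(n, 2))      # data from the target distribution
perm, opt_cost = empirical_quantile_pairing(U, X)

# Sanity checks: perm is a bijection, and the optimal pairing is no
# worse than the naive identity pairing.
assert sorted(perm.tolist()) == list(range(n))
assert opt_cost <= ((U - X) ** 2).sum() + 1e-9
```

The map $U[i] \mapsto X[\mathrm{perm}[i]]$ plays the role of a discrete empirical quantile map; inverting the pairing gives the corresponding discrete ranks.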
An attractive property of the empirical ranks, when $d=1$, that makes ranks useful for statistical inference, is that they are distribution-free. Lemma~\ref{lem:Uniform} shows that a distribution-free version of multivariate ranks can be obtained by external randomization (also see Lemma~\ref{lem:Distfree}). Although our approach is very similar to that of~\cite{Cher17} there are subtle and important differences; in Section~\ref{sec:Comparison} we discuss these connections in detail. Some useful properties of the above defined quantile and rank functions, including (i) equivariance under orthogonal transformations (see Lemma~\ref{lem:LinTrans-2}), and (ii) decomposition when $X \sim \nu$ has mutually independent coordinates (see Proposition~\ref{prop:Indep}), are given in Section~\ref{sec:Q-R-Prop}. In Section~\ref{sec:UnifConv} we state our first main theoretical result on the almost sure (a.s.) uniform convergence of the empirical quantile and rank maps to their population counterparts. An informal statement of this result is given below. \textbf{Uniform convergence of empirical quantile and rank maps} (informal restatement of Theorem~\ref{thm:GCProp}): Suppose that $\mu$ is supported on a compact convex set $\mathcal{S}\subset \mathbb{R}^d$ with non-empty interior. Let $\mathcal{Y}$ be the support of $\nu$ and let $\{\hat{\nu}_n\}_{n\ge 1}$ be a sequence of random probability distributions converging weakly to $\nu$ a.s. Suppose that the quantile map $Q$ of $\nu$ (w.r.t.~$\mu$) is a continuous bijection from $\mathrm{Int}(\mathcal{S})$ (the interior of $\mathcal{S}$) to $\mathrm{Int}(\mathcal{Y})$. Then, with probability (w.p.) 1, the empirical quantile and rank maps corresponding to $\hat{\nu}_n$ (w.r.t.~$\mu$) --- $\hat{Q}_n$ and $\hat{R}_n$ --- converge uniformly to $Q$ and $R \equiv Q^{-1}$, respectively, over compacts in $\mathrm{Int}(\mathcal{S})$ and $\mathrm{Int}(\mathcal{Y})$. 
Moreover, if all the supporting hyperplanes of $\mathcal{S}$ touch its boundary at only one point, then, the empirical rank map $\hat{R}_n$ converges uniformly to $R = Q^{-1}$ over the whole of $\mathbb{R}^d$ a.s.; furthermore, w.p.~1, the tail limit of $\hat{R}_n$ stabilizes along any direction. We list below some of the novelties of the above result (Theorem~\ref{thm:GCProp}). \noindent \textbf{(i)} One of the main consequences of Theorem~\ref{thm:GCProp} is the a.s.~convergence of the empirical rank function on the whole of $\mathbb{R}^d$, under certain conditions on the support of the reference distribution $\mu$. This can indeed be thought of as a generalization of the famous Glivenko-Cantelli theorem for rank functions when $d>1$. Moreover, our result does not need any boundedness assumption on the support of the target distribution $\nu$ and even applies when the second moment of $\nu$ is not finite. This is a major improvement over the corresponding result in \cite[Theorem 3.1]{Cher17}. Also,~\cite[Theorem 3.1]{Cher17} only shows in probability convergence compared to a.s.~convergence in Theorem~\ref{thm:GCProp}. Although~\citet{dCHM} show the a.s.~convergence of the empirical \emph{center-outward rank functions}, our notion of ranks and quantiles are different from theirs. Furthermore, unlike \cite{dCHM}, $\mu$ can be supported on any compact convex domain with minor restrictions on its boundary. See Section~\ref{sec:Comparison} for a detailed discussion where we compare and contrast the notions of multivariate ranks and quantiles of \cite{Cher17} and~\cite{dCHM} with ours. \noindent \textbf{(ii)} In Theorem~\ref{thm:GCProp}, one of the assumptions is that the quantile map $Q$ of $\nu$ (w.r.t.~$\mu$) is a homeomorphism (i.e., continuous bijection) from $\mathrm{Int}(\mathcal{S})$ to $\mathrm{Int}(\mathcal{Y})$. Proposition~\ref{thm:Q_prop} provides a sufficient condition for $Q$ to be a homeomorphism. 
It is shown that when $\mathcal{S}$ and $\mathcal{Y}$ are convex sets and $\nu$ has bounded nonvanishing density on compacts contained in $\mathrm{Int}(\mathcal{Y})$, then, $Q:\mathrm{Int}(\mathcal{S})\to \mathrm{Int}(\mathcal{Y})$ is indeed a homeomorphism. This is an improvement over the result in~\citet{Figalli18} which shows a similar result under the assumption that $\nu$ is supported on the whole of $\mathbb{R}^d$. Since \cite{dCHM} appeals to~\cite{Figalli18} for the consistency result of their center-outward ranks, their target distribution $\nu$ has to be supported on the whole of $\mathbb{R}^d$. \noindent \textbf{(iii)} Our result (see \eqref{eq:Asymptot} of Theorem~\ref{thm:GCProp}) implies that when the population rank map is a homeomorphism, the tail limits of the estimated rank maps $\hat{R}_n$ depend neither on $\nu$ nor on $\mu$; rather they depend on the geometry of $\mathcal{S}$ --- the domain that supports the reference distribution $\mu$. This is reminiscent of the case when $d=1$ where the limits of the distribution (rank) function towards $-\infty$ and $+\infty$ are always $0$ and $1$, respectively. \noindent \textbf{(iv)} To prove Theorem~\ref{thm:GCProp}, one needs to develop tools that deal with the convergence of (sub)-gradients of a sequence of convex functions and their Legendre-Fenchel duals. These tools are summarized in three deterministic lemmas in Section~\ref{sec:UnifConv}. Lemma~\ref{KeyLemma} demonstrates how the (sub)-gradients of a sequence of convex functions and their Legendre-Fenchel duals behave when the corresponding convex functions converge uniformly on some compact subset of $\mathbb{R}^d$. It sets the stage for the application of the last two lemmas in this section, namely~Lemmas~\ref{KeyLemma2} and~\ref{lem:UnifConv}. These results, proved using tools from convex analysis, might also be of independent interest.
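As an informal numerical illustration of the phenomenon underlying these lemmas (our own sketch, not part of the formal development): the smooth convex functions $f_n(x) = \sqrt{x^2 + 1/n}$ converge uniformly to $f(x) = |x|$ on compacts, their derivatives converge to the gradient of the limit at every differentiability point, and at the kink the derivative lands inside the subdifferential $[-1,1]$.

```python
import numpy as np

# Toy check: uniform convergence of convex functions forces their gradients
# to converge at every differentiability point of the limit.
# Here f_n(x) = sqrt(x^2 + 1/n)  ->  f(x) = |x|  uniformly on [-2, 2].
def f_n(x, n):
    return np.sqrt(x ** 2 + 1.0 / n)

def grad_f_n(x, n):
    return x / np.sqrt(x ** 2 + 1.0 / n)

xs = np.linspace(-2.0, 2.0, 401)
n = 100_000
gap = np.max(np.abs(f_n(xs, n) - np.abs(xs)))    # sup-norm gap, attained at 0
assert gap <= 1.0 / np.sqrt(n) + 1e-12           # uniform convergence at rate n^(-1/2)

# x = 0.5 is a differentiability point of |x| with gradient sign(0.5) = 1
assert abs(grad_f_n(0.5, n) - 1.0) < 1e-4
# at the kink x = 0, grad_f_n(0, n) = 0 lies in the subdifferential [-1, 1]
assert -1.0 <= grad_f_n(0.0, n) <= 1.0
```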
Theorem~\ref{thm:GCProp} naturally leads to the following question: ``What is the pointwise rate of convergence of the empirical quantile and rank maps?". This is indeed a hard question and not much is known in the literature. We address this question in Section~\ref{sec:RateSec}. We consider Theorem~\ref{ppn:RateProp1} as a first step towards understanding the local behavior of transportation maps. In Theorem~\ref{ppn:RateProp1}, we show that the local (uniform) rate of convergence of the empirical quantile and rank maps is tied to their local behavior under the $L_2$-loss. In the special case $\mu \equiv \nu$, in Corollary~\ref{thm:RateTheo}, we obtain an upper bound on this local uniform rate using the rate of convergence of the $L_2$-Wasserstein distance between the empirical and the true measure $\nu$. The rate of convergence of the $L_2$-Wasserstein distance has been well-studied in the literature (see e.g.,~\cite{Tala1,Tala2,FG15,BL14,WB17}). In Section~\ref{sec:Goodness-Fit-Test}, we investigate some statistical applications of the multivariate ranks and quantiles studied in this paper --- we propose methodology for nonparametric multivariate two-sample goodness-of-fit testing and testing of mutual independence. In Section~\ref{sec:2S-Goodness-Fit-Test} we propose a test-statistic --- motivated by the Cram\'{e}r--von Mises one-sample statistic (when $d=1$) --- for two-sample testing, based on the empirical multivariate ranks and quantiles. In Section~\ref{sec:IndepTest} we propose a method for testing the mutual independence of the coordinates of a random vector, given i.i.d.~data. Applying the uniform convergence results of Theorem~\ref{thm:GCProp}, we prove the consistency of these proposed tests, i.e., the power of these tests converges to 1 under fairly general assumptions on the underlying distribution (see Propositions~\ref{lem:Power1} and~\ref{lem:Power2}).
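As an aside on the Wasserstein rates mentioned above: in $d=1$ the squared $L_2$-Wasserstein distance between Uniform$([0,1])$ and the empirical measure of an i.i.d.~sample from it has a closed form via the quantile coupling, which makes the decay in $n$ easy to check numerically (our own sketch, not from the paper):

```python
import numpy as np

def w2_sq_to_uniform(sample):
    """Exact squared 2-Wasserstein distance between the empirical measure of
    `sample` and Uniform([0,1]) in d = 1, via the quantile coupling:
    W_2^2 = integral_0^1 (F_n^{-1}(u) - u)^2 du, computed cell by cell."""
    X = np.sort(sample)
    n = len(X)
    a = np.arange(n) / n          # left cell endpoints (i-1)/n
    b = np.arange(1, n + 1) / n   # right cell endpoints i/n
    # integral of (X_(i) - u)^2 over [(i-1)/n, i/n], summed over i
    return float((((X - a) ** 3 - (X - b) ** 3) / 3).sum())

rng = np.random.default_rng(0)
small = w2_sq_to_uniform(rng.uniform(size=100))
large = w2_sq_to_uniform(rng.uniform(size=10_000))
assert large >= 0.0
assert large < small              # E[W_2^2] decays like 1/n in d = 1
```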
This leads to omnibus nonparametric tests that are computationally feasible, and being rank based, do not depend on moment assumptions on the underlying distribution (cf.~\cite{BF04, SR13, EquivRKHS13, SzekelyCorrDist07}). Further, being based on ranks, we suspect that these test statistics, properly normalized, may be (asymptotically) distribution-free --- in Section~\ref{sec:Simul} we provide simulation studies that illustrate this phenomenon. The paper is organized as follows. We introduce notation and some basic notions from convex analysis and optimal transportation in Section~\ref{sec:prelim}. Section~\ref{sec:Q-R} defines the multivariate quantile and rank maps and their empirical counterparts and investigates some of their properties. The asymptotic results on the uniform a.s.~convergence of the empirical quantile and rank maps and their local rates of convergence are given in Section~\ref{sec:Asymptotics}. Two statistical applications --- multivariate two-sample testing and testing for mutual independence --- based on the studied multivariate ranks and quantiles are given in Section~\ref{sec:Goodness-Fit-Test}. The paper concludes with a discussion (Section~\ref{sec:Disc}). The proofs of our main results, some remarks and plots are given in Appendix~\ref{sec:Proofs}. Additional technical results used in the proofs are relegated to Appendix~\ref{Appendix-B}. \section{Preliminaries}\label{sec:prelim} We start with some notation and recall some important concepts from convex analysis that will be relevant for the rest of the paper. For $u, v \in \mathbb{R}^d$, we use $\langle u , v \rangle$ to denote the dot product of $u$ and $v$ and $\|\cdot\|$ denotes the usual Euclidean norm in $\mathbb{R}^d$. For $y_1, \ldots, y_k \in \mathbb{R}^d$ we write $\mathrm{Conv}(y_1,\ldots, y_k)$ to denote the convex hull of $\{y_1, \ldots, y_k\} \subset \mathbb{R}^d$. A {\it convex polyhedron} is the intersection of finitely many closed half-spaces. 
A {\it convex polytope} is the convex hull of a finite set of points. The interior, closure and boundary of a set $\mathcal{X} \subset \mathbb{R}^d$ will be denoted by Int($\mathcal{X}$), Cl($\mathcal{X}$), and Bd($\mathcal{X}$), respectively. The Lebesgue measure on $\mathbb{R}^d$ is denoted by $\lambda_d$. The Dirac delta measure at $x$ is denoted by $\delta_x$. For $\delta >0$ and $x \in \mathbb{R}^d$, $B_{\delta}(x) := \{y \in \mathbb{R}^d: \|y - x\| < \delta\}$ denotes the open ball of radius $\delta$ around $x$. The set of natural numbers will be denoted by $\mathbb{N}$. The {\it domain} of a function $f : \mathbb{R}^d \to \mathbb{R} \cup \{+\infty\}$, denoted by $\operatorname{dom}(f)$, is the set $\{x \in \mathbb{R}^d: f(x) < + \infty\}$. The function $f$ is called {\it proper} if $\operatorname{dom}(f) \ne \emptyset$. We say that $f$ is lower semi-continuous (l.s.c.) at $x_0 \in \mathbb{R}^d$ if $\liminf _{x\to x_{0}}f(x)\geq f(x_{0})$. For a proper function $f : \mathbb{R}^d \to \mathbb{R} \cup \{+\infty\}$, the {\it Legendre-Fenchel dual} (or convex conjugate or simply the dual) of $f$ is the proper function $f^* : \mathbb{R}^d \to \mathbb{R} \cup \{+\infty\}$ defined by \begin{equation}\label{eq:LF-dual} f^*(y) := \sup_{x \in \mathbb{R}^d} \left\{ \langle x,y\rangle - f(x) \right\}, \qquad \mbox{for all } y \in \mathbb{R}^d. \end{equation} It is well known that $f^*$ is a proper, l.s.c.~convex function. The Legendre-Fenchel duality theorem says that for a proper l.s.c.~convex function $f$, $(f^*)^* = f$. Given a convex function $f : \mathbb{R}^d \to \mathbb{R} \cup \{+\infty\}$ we define the \emph{subdifferential} set of $f$ at $x\in \operatorname{dom}(f)$ by $$\partial f(x) := \{\xi \in \mathbb{R}^d: f(x) + \langle y-x, \xi\rangle \le f(y), \quad \mbox{for all } y \in \mathbb{R}^d\}.$$ Any element in $\partial f(x)$ is called a \emph{subgradient} of $f$ at $x$. If $f$ is differentiable at $x$ then $\partial f(x) = \{\nabla f(x)\}$. 
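The conjugate in \eqref{eq:LF-dual} and the biconjugation identity $(f^*)^* = f$ can be checked numerically on a grid; the sketch below (ours, for $d=1$) uses the self-dual function $f(x) = x^2/2$:

```python
import numpy as np

# Discrete Legendre-Fenchel conjugate on a grid: f*(y) = sup_x { x*y - f(x) }.
# Brute-force sup over the grid, one dual point at a time (d = 1 sketch).
def conjugate(f_vals, x_grid, y_grid):
    return np.array([np.max(y * x_grid - f_vals) for y in y_grid])

x = np.linspace(-5.0, 5.0, 2001)
f = 0.5 * x ** 2                        # f(x) = x^2/2 is its own conjugate

y = np.linspace(-3.0, 3.0, 61)
f_star = conjugate(f, x, y)
assert np.allclose(f_star, 0.5 * y ** 2, atol=1e-3)      # self-duality

# Legendre-Fenchel duality theorem: (f*)* = f for proper l.s.c. convex f
f_bistar = conjugate(conjugate(f, x, x), x, y)
assert np.allclose(f_bistar, 0.5 * y ** 2, atol=1e-3)
```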
We follow the convention that if $f(x)= + \infty$ for some $x\in \mathbb{R}^d$, then, $\partial f(x) = \emptyset$. Moreover, for $A \subset \mathbb{R}^d$, we use the following notation: \begin{eqnarray*} \partial f(A) & := &\{x \in \mathbb{R}^d: x \in \partial f(u) \;\; \mbox{for some}\;\; u \in A\}, \qquad \mbox{and} \\ (\partial f)^{-1}(A) &:= & \{u \in \mathbb{R}^d: x \in \partial f(u)\;\; \mbox{for some}\;\; x \in A\}. \end{eqnarray*} A convex function is a.e.~differentiable (w.r.t.~Lebesgue measure) on Int($\operatorname{dom}(f)$). As a consequence, a convex function is continuous in the interior of its domain. For a convex function $f : \mathbb{R}^d \to \mathbb{R} \cup \{+\infty\}$ we sometimes just write $\nabla f(x)$ to denote the (sub)-differential of $f$ at $x$ with the understanding that when $f$ is not differentiable at $x$ we can take $\nabla f(x)$ to be any point in the set $\partial f(x)$. This avoids the need to deal with the set-valued function $\partial f$. However, sometimes we will need to view $\partial f$ as a multi-valued mapping, i.e., a mapping from $\mathbb{R}^d$ into the power set of $\mathbb{R}^d$, and we will use the notation $\partial f$ in that case. We will find the following results useful (see e.g.,~\cite[Proposition 2.4]{V03}). \begin{lemma}[Characterization of subdifferential]\label{lem:SubD} Let $f:\mathbb{R}^d \to \mathbb{R} \cup \{+ \infty\}$ be a proper l.s.c.~convex function. Then for all $x,y \in \mathbb{R}^d$, \begin{equation}\label{eq:Charac-Sub} \langle x,y\rangle = f(x) + f^*(y) \Longleftrightarrow y \in \partial f(x) \Longleftrightarrow x \in \partial f^*(y). 
\end{equation} \end{lemma} \begin{remark}\label{rem:dualrem} For a proper l.s.c.~convex function $f:\mathbb{R}^d \to \mathbb{R} \cup \{+ \infty\}$, using \eqref{eq:Charac-Sub}, one can see that \begin{align} \big\{y\in \mathbb{R}^d: y\in \partial f(x) \text{ for some }x\in \mathbb{R}^d\big\} &= \big\{y\in \mathbb{R}^d: x\in \partial f^{*}(y)\text{ for some }x\in \mathbb{R}^d \big\}\nonumber\\&= \big\{y\in \mathbb{R}^d: f^{*}(y)< +\infty\big\} \end{align} where the last equality follows since $\partial f^{*}(y) = \emptyset$ if and only if $f^{*}(y) = +\infty$. \end{remark} Throughout the paper, we will assume that all the convex functions that we will be dealing with are l.s.c.~Lemma~\ref{lem:SubD} shows a one-to-one relation between the subdifferential set of a convex function and its Legendre-Fenchel dual. For instance, if the domain of the Legendre-Fenchel dual of a convex function $f$ is bounded then Remark~\ref{rem:dualrem} shows that the subdifferentials of $f$ are contained in a bounded set. \begin{definition}[Set convergence]\label{bd:SetConv} Let $K_1\subset K_2 \subset \ldots$ be an increasing sequence of sets in $\mathbb{R}^d$. We say that $K_n$ \emph{increases} to $K\subset \mathbb{R}^d$, and write $K_n \uparrow K$, if for any compact set $A \subset \mathrm{Int}(K)$ there exists $n_0=n_0(A) \in \mathbb{N}$ such that $A \subseteq K_n$ for all $n\geq n_0$. \end{definition} The above notion is slightly stronger than just assuming $K_1 \subset K_2 \subset \ldots$ and $\liminf_{n \to \infty} K_n = K$. A supporting hyperplane of a set $S \subset \mathbb{R}^{d}$ is a hyperplane that has both of the following two properties: (i) $S$ is entirely contained in one of the two closed half-spaces bounded by the hyperplane, and (ii) $S$ has at least one boundary-point on the hyperplane. Let $\mu$ and $\nu$ be two Borel probability measures supported on $\mathcal{X} \subset \mathbb{R}^d$ and $\mathcal{Y}\subset \mathbb{R}^d$ respectively.
The goal of optimal transport (Monge's problem), under the $L_2$-loss, is to find a measurable transport map $T \equiv T_{\mu;\nu} : \mathcal{X} \to \mathcal{Y}$ solving the (constrained) minimization problem \begin{equation}\label{eq:Meas_Trans-1} \inf_T \int_{\mathcal{X}} \|x - T(x)\|^2 d\mu(x) \qquad \quad \mbox{subject to }\quad T\#\mu = \nu \end{equation} where the minimization is over $T$ (a {\it transport map}), a measurable map from $\mathcal{X}$ to $\mathcal{Y}$, and $T\#\mu$ is the {\it push forward} of $\mu$ by $T$, i.e., \begin{align}\label{eq:PushMeasure-1} T\#\mu(B) = \mu(T^{-1}(B)), \qquad \mbox{for all } B \subset \mathcal{Y}\;\; \mbox{Borel}. \end{align} A map $T_{\mu;\nu}$ that attains the infimum in~\eqref{eq:Meas_Trans-1} is called an {\it optimal transport} map from $\mu$ to $\nu$. We state an important result in this theory, namely Brenier-McCann's theorem (\cite{B91},~\cite{McCann95}); this result will be very useful to us. \begin{theorem}\label{thm:Brenier} Let $\mu$ and $\nu$ be two Borel probability measures on $\mathbb{R}^d$. Suppose further that $\mu$ has a Lebesgue density. Then there exists a convex function $\psi: \mathbb{R}^d \to \mathbb{R} \cup \{+\infty\}$ whose gradient $G = \nabla \psi : \mathbb{R}^d \to \mathbb{R}^d$ pushes $\mu$ forward to $\nu$. In fact, there exists only one such $G$ that arises as the gradient of a convex function, i.e., $G$ is unique $\mu$-a.e. Moreover, if $\mu$ and $\nu$ have finite second moments, $G$ uniquely minimizes Monge's problem~\eqref{eq:Meas_Trans-1}. \end{theorem} The above result implies that, for the $L_2$-loss, the solution of Monge's problem exists (if $\mu$ and $\nu$ have finite second moments), is ($\mu$-a.e.) unique, and is given by the {\it gradient of a convex function}. See Section~\ref{sec:OT} in the Appendix for a brief introduction to the field of optimal transportation. 
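When $\mu$ and $\nu$ are both uniform over the same number of atoms, Monge's problem \eqref{eq:Meas_Trans-1} reduces to a linear assignment problem over permutations. The sketch below (an illustration only, not part of the paper's development) computes such a discrete optimal transport map with SciPy's assignment solver:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Discrete stand-in for Monge's problem: transport the uniform measure on
# n "reference" atoms to the uniform measure on n "target" atoms under the
# squared Euclidean cost; the optimal coupling is a permutation.
rng = np.random.default_rng(0)
n, d = 64, 2
mu_atoms = rng.uniform(size=(n, d))      # atoms of the reference measure
nu_atoms = rng.normal(size=(n, d))       # atoms of the target measure

cost = ((mu_atoms[:, None, :] - nu_atoms[None, :, :]) ** 2).sum(axis=2)
rows, cols = linear_sum_assignment(cost)  # optimal permutation (Monge map)
opt_cost = cost[rows, cols].sum()

# optimality: no worse than the identity coupling, and cols is a permutation
assert opt_cost <= cost[np.arange(n), np.arange(n)].sum()
assert len(set(cols.tolist())) == n
```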
\section{Quantile and rank maps in $\mathbb{R}^d$ when $d \ge 1$}\label{sec:Q-R} Suppose that $X \sim \nu$ is supported on $\mathcal{Y} \subset \mathbb{R}^d$. Let $\mu$ be a known absolutely continuous distribution on $\mathbb{R}^d$ (i.e., $\mu$ has a density w.r.t.~Lebesgue measure $\lambda_d$ on $\mathbb{R}^d$) with support $\mathcal{S}$ --- a compact convex subset of $\mathbb{R}^d$ with nonempty interior; e.g., we can take $\mu$ to be Uniform$([0,1]^d)$. Other natural choices of $\mu$ are the uniform distribution on the unit ball $B_1(0)$ in $\mathbb{R}^d$ (\cite{Cher17}), and the spherical uniform distribution ($V$ has the spherical uniform distribution if $V = L \varphi$ where $\varphi$ is uniformly distributed on the unit sphere around $0 \in \mathbb{R}^d$ and $L$ has the uniform distribution on the interval $[0,1]$ and $L$ and $\varphi$ are mutually independent); see~\cite{dCHM, Figalli18}. In the following we define the multivariate {\it quantile} and {\it rank} maps for $\nu$ w.r.t.~the distribution $\mu$ using the theory of optimal transportation. We first define the quantile function for $\nu$ and then use it to define the rank map. Recall that, when $d=1$ the (univariate) quantile map $Q(\cdot)$ (for $\nu$) is the monotone nondecreasing map that pushes forward (i.e., transports) the uniform distribution on $[0, 1]$ to $\nu$; i.e., the quantile map $Q(\cdot)$ is the unique nondecreasing map such that if $U \sim $ Uniform($[0, 1]$), then $Q(U) \sim \nu$. This point of view will be the basis of our multivariate generalization of the notion of quantiles; also see~\cite{Galichon16, Galichon17, Cher17}. Our approach is essentially the same as outlined in~\cite{Cher17} although there are some important and subtle differences; see Section~\ref{sec:Comparison} for a detailed discussion. 
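For concreteness, the spherical uniform distribution mentioned above is straightforward to sample from (a sketch of our own; note that it differs from the uniform distribution on the unit ball, whose radius has density $d\,r^{d-1}$):

```python
import numpy as np

# Sampling the spherical uniform reference distribution: V = L * phi, where
# phi is uniform on the unit sphere and L ~ Uniform([0,1]), independent.
rng = np.random.default_rng(0)
d, m = 3, 100_000
g = rng.normal(size=(m, d))
phi = g / np.linalg.norm(g, axis=1, keepdims=True)   # uniform direction
L = rng.uniform(size=(m, 1))                         # uniform radius
V = L * phi

# the radius ||V|| = L is Uniform([0,1]); under the uniform ball it would
# instead have density d * r^(d-1)
r = np.linalg.norm(V, axis=1)
assert abs(r.mean() - 0.5) < 0.005                   # E[L] = 1/2
assert np.all(r < 1.0)
```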
\begin{definition}[Quantile function]\label{def:Quantile} The quantile function of the probability measure $\nu$ (w.r.t.~$\mu$) is defined as the $\mu$-a.e.~unique map $Q :\mathcal{S} \to \mathbb{R}^d$ which pushes $\mu$ to $\nu$ and has the form \begin{equation}\label{eq:Quantile} Q := \nabla \psi \end{equation} where $\psi:\mathbb{R}^d \to \mathbb{R}\cup \{+ \infty\}$ is a convex function. We call such a $\psi(\cdot)$ a potential function. \end{definition} \begin{remark}[Uniqueness of $Q$]\label{rem:QUnique} As the convex function $\psi$ in Definition~\ref{def:Quantile} need not be differentiable everywhere, there is a slight ambiguity in the definition of $Q$. When $\psi$ is not differentiable, say at $u \in \mathcal{S}$, we can define $Q(u)$ to be any element of the subdifferential set $\partial \psi(u)$ (see Section~\ref{sec:prelim} for its formal definition). As a convex function is differentiable a.e.~(on its domain) this convention does not affect the $\mu$-a.e.~uniqueness of $Q$. Further, this convention bypasses the need to define quantiles as a multi-valued map. \end{remark} The existence and $\mu$-a.e.~uniqueness of the quantile map $Q(\cdot)$, for any probability measure $\nu$ on $\mathbb{R}^d$, is guaranteed by Theorem~\ref{thm:Brenier}. Further, by Theorem~\ref{thm:Brenier}, if $\nu$ has finite second moment, then $Q(\cdot)$ can be expressed as in~\eqref{eq:Multi-Q-F}. As discussed in the Introduction, the above notion of quantiles obviously extends our usual definition of quantiles when $d=1$; see Section~\ref{sec:d=1} in the Appendix for a more detailed discussion. A few remarks are in order now. \begin{remark}[Non-uniqueness of $\psi$] Although $Q$ is $\mu$-a.e.~unique it is easy to see that $\psi$ (as in Definition~\ref{def:Quantile}) is not unique; in fact, $\psi(\cdot) + c$ where $c\in \mathbb{R}$ is a constant would also suffice (as $\partial (\psi+c) = \partial \psi$).
Further, we can change $\psi(\cdot)$ outside the set $\mathcal{S}$ and this does not change $Q$ (as $Q$ has domain $\mathcal{S}$). For this reason, we will consider \begin{equation}\label{eq:Conv-Cvx} \psi(u) = +\infty, \qquad \mbox{for } u \in \mathbb{R}^d\setminus \mathcal{S}. \end{equation} The above convention will be useful in the subsequent discussion. \end{remark} \begin{definition}[Rank map]\label{def:Rank} Recall the convex function $\psi:\mathbb{R}^d \to \mathbb{R} \cup \{+\infty\}$ whose gradient yields the quantile map (see~\eqref{eq:Quantile}; also see~\eqref{eq:Conv-Cvx}). We define the rank function $R:\mathbb{R}^d \to \mathcal{S}$ of $\nu$ (w.r.t.~$\mu$) as \begin{equation}\label{eq:Rank} R := \nabla \psi^* \end{equation} where $\psi^*:\mathbb{R}^d \to \mathbb{R} \cup \{+\infty\}$ is the Legendre-Fenchel dual of the convex function $\psi$, i.e., $\psi^*(x) := \sup_{y \in \mathbb{R}^d} \{\langle x, y\rangle - \psi(y)\}$, for $x \in \mathbb{R}^d$. \end{definition} As $Q \in \partial \psi$, and $R \in \partial \psi^*$ are (sub)-gradients of two convex functions that are convex conjugates of each other, we have the following as a direct consequence of Lemma~\ref{lem:SubD}; see Section~\ref{sec:SubD-1} for a proof. \begin{lemma}\label{lem:SubD-1} We have \begin{align}\label{eq:EssR-Inv} x\in \partial \psi(\partial \psi^*(x)), \;\mbox{for} \; x \in \mathbb{R}^d \qquad \mbox{and} \qquad u\in \partial \psi^*(\partial \psi(u)),\;\mbox{for} \;u \in \mathcal{S}. \end{align} Moreover, for every Borel set $B \subset \mathbb{R}^d$, we have $(\partial \psi)^{-1}(B)= \partial \psi^*(B)$. \end{lemma} A few remarks are in order now. \begin{remark}[The domain of $R$] Although not immediately clear from its definition, the rank map $R(x)$ is finite for a.e.~$x$; cf.~the quantile map $Q(\cdot)$ which is $\mu$-a.e.~uniquely defined. 
To see this observe that $\psi^*(x)$ is finite for all $x \in \mathbb{R}^d$ as $\psi^*(x) = \sup_{y \in \mathbb{R}^d} \{\langle x, y\rangle - \psi(y)\} = \sup_{y \in \mathcal{S}} \{\langle x, y\rangle - \psi(y)\}.$ The last equality is a consequence of the fact that for $y \notin \mathcal{S}$, $\psi(y) = + \infty$ (by our convention~\eqref{eq:Conv-Cvx}) and as $\mathcal{S}$ is compact we see that $\psi^*(x) <\infty$ for every $x \in \mathbb{R}^d$ (as $\langle x, \cdot\rangle - \psi(\cdot)$ is continuous on the compact set $\mathcal{S}$ and hence bounded). \end{remark} \begin{remark}[The range of the rank map] Using Lemma~\ref{lem:SubD} one can argue that $R(x) \in \mathcal{S}$ for a.e.~$x \in \mathbb{R}^d$. This follows from the fact that $R(x) \in \partial \psi^*(x)$ exists for every $x \in \mathbb{R}^d$ (as $\psi^*$ is convex), and by Lemma~\ref{lem:SubD}, $y \in \partial \psi^*(x) \Leftrightarrow x \in \partial \psi(y).$ Note that as $\partial \psi(y)$ exists we must have $\psi(y) <+\infty$, which in turn implies that $y \in \mathcal{S}$ (as $\psi(y) = +\infty$, for $y \in \mathbb{R}^d\setminus \mathcal{S}$ by~\eqref{eq:Conv-Cvx}). \end{remark} \begin{remark}[When $\psi^*$ is not differentiable] As $\psi^*$ is a convex function it has a gradient a.e. Thus, $R(x)$ is uniquely defined for a.e.~$x$. For $x\in \mathbb{R}^d$ where $\psi^*(x)$ is not differentiable, $R(x)$ is not uniquely defined. Although for such an $x$ we can define $R(x)$ to be any element in the subdifferential set $\partial \psi^*(x)$ (as was done in~\cite{Cher17}), in Section~\ref{sec:R_n} we give a randomized choice of $R(x)$ that leads to the map $R$ having appealing theoretical properties. \end{remark} The following result, proved in Section~\ref{sec:QRrelation}, shows that absolute continuity of $\nu$ is a sufficient condition for the rank map $R$ to be the right-inverse of the quantile map $Q$ $\mu$-a.e. This justifies the definition of $R$ via~\eqref{eq:Rank}. 
\begin{lemma}\label{lem:QRrelation} Suppose that $\mu$ and $\nu$ are absolutely continuous w.r.t.~Lebesgue measure. Let $Q$ and $R$ be the quantile and rank maps of $\nu$ (w.r.t.~$\mu$), as defined in~\eqref{eq:Quantile} and \eqref{eq:Rank}. Then: \begin{enumerate} \item[(a)] The rank map $R(\cdot)$ is the essential right-inverse of the quantile function, i.e., \begin{equation}\label{eq:Ess-Inv} R \circ Q (u)= u,\qquad\mbox{for}\;\mu\mbox{-a.e.}~u, \end{equation} and $R\# \nu= \mu$. \item[(b)] Let $f$ and $g$ be the densities of $\mu$ and $\nu$ respectively. For any Borel set $A\subset \mathbb{R}^d$, define $\rho_{Q}(A) := \int_{Q(A)} dx$. Then, $\rho_{Q}$ satisfies the Monge-Amp\`{e}re differential equation (see~\cite{PF14}), i.e., $ \rho_{Q}(A) = \int_{A}\frac{f(x)}{g(Q(x))} dx. $ \end{enumerate} \end{lemma} Compare part (a) of the above result with~\cite[Theorem 2.1 and Equation (8)]{Cher17} where slightly stronger conditions are assumed on $\nu$ for obtaining similar conclusions. Observe that the rank map, as in Definition~\ref{def:Rank}, clearly extends the notion of the distribution function beyond $d=1$. Part (b) of Lemma~\ref{lem:QRrelation} shows the connection between the quantile map and the celebrated Monge-Amp\`{e}re differential equation; one can find similar results in \cite{Ca1,PF14,CF19}. For the sake of completeness, we have added a proof of this result. In many statistical applications it is often useful to know the regularity of the quantile and rank maps. Due to part (b) of Lemma~\ref{lem:QRrelation}, the regularity of our multivariate notions of quantiles and ranks follows from the regularity theory of the solution of the Monge-Amp\`{e}re equation which has been extensively studied by many authors in the past (see e.g.,~\cite{Cafa90,PF13,PF15,F10,FRV11,GO17}). Although we know that $R = Q^{-1} \;\mu$-a.e.~when $\nu$ is absolutely continuous, we may ask if the equality holds everywhere (as opposed to a.e.).
Several results have been obtained in this direction that provide sufficient conditions for such an equality. Caffarelli (see \cite{Ca1,Ca2,Ca3}) showed that when $\mathcal{S}$ and $\mathcal{Y}$ are two bounded convex sets in $\mathbb{R}^d$ and $\mu$ and $\nu$ are absolutely continuous with positive densities (on their supports), then, the corresponding optimal transport maps $T:\mathcal{S}\to \mathcal{Y}$ (such that $T\# \mu = \nu$) and $T^{*}:\mathcal{Y}\to \mathcal{S}$ (such that $T^{*}\# \nu= \mu$) are continuous homeomorphisms and $T^*= T^{-1}$ everywhere in $\mathcal{Y}$; see \cite[Pages 317--323]{V09} for other sufficient conditions. The following result, proved in Section~\ref{sec:Q_prop}, gives another such sufficient condition that may be particularly useful in statistical applications. \begin{proposition}\label{thm:Q_prop} Let $\mathcal{S} \subset \mathbb{R}^d$ be a compact convex set and let $\mathcal{Y} \subset \mathbb{R}^d$ be a convex set. Let $\mu$ be a probability measure supported on $\mathcal{S}$ such that the density of $\mu$ (w.r.t.~Lebesgue measure) is bounded away from zero and bounded above by a constant (on $\mathcal{S}$). Let $\nu$ be a probability measure supported on $\mathcal{Y}$ with density $p_{\mathcal{Y}}$ satisfying the following: there exists a sequence of convex compact sets $\{K_n\}_{n\geq 1}$ with $K_n\uparrow \mathcal{Y}$ and constants $\{\lambda_n, \Lambda_n\}_{n\ge1} \subset \mathbb{R}$ such that \begin{align}\label{eq:sandwitch} 0<\lambda_n\leq p_{\mathcal{Y}}(x) \leq \Lambda_{n}, \qquad \quad \mbox{for all }\; x\in K_n. \end{align} Let $\psi:\mathbb{R}^d\to \mathbb{R}\cup \{+\infty\}$ be a convex function such that $\psi(x) = +\infty$ for $x\notin \mathcal{S}$, $\partial \psi(\mathrm{Int}(\mathcal{S})) = \mathrm{Int}(\mathcal{Y})$ and $\partial \psi \# \mu = \nu$. Let $\psi^{*}:\mathbb{R}^d \to \mathbb{R}\cup \{+\infty\}$ be the Legendre-Fenchel dual of $\psi$. 
Then: \begin{enumerate} \item[(a)] $\nabla \psi^{*}$, restricted on $\mathrm{Int}(\mathcal{Y})$, is a homeomorphism from $\mathrm{Int}(\mathcal{Y})$ to $\mathrm{Int}(\mathcal{S})$. \item[(b)] $\nabla \psi$ is a homeomorphism from $\mathrm{Int}(\mathcal{S})$ to $\mathrm{Int}(\mathcal{Y})$. Furthermore, we have $\nabla \psi = (\nabla \psi^{*})^{-1}$ in $\mathrm{Int}(\mathcal{S})$. \end{enumerate} \end{proposition} \begin{remark}[Convexity of $\mathcal{S}$ and $\mathcal{Y}$]\label{rem:Cvx-S-Y} Convexity of the domains, $\mathcal{S}$ and $\mathcal{Y}$, is one of the important conditions for the existence of continuous transport maps. Caffarelli constructed a counterexample (see e.g.,~\cite[pp.~283--285]{V09}) where he showed that the transport map may fail to be continuous when the two measures are absolutely continuous with bounded densities on two smooth and simply connected non-convex domains. \end{remark} \begin{remark}[On condition~\eqref{eq:sandwitch}]\label{rem:Cond-11} We would like to point out that condition \eqref{eq:sandwitch} is important for our proof of Proposition~\ref{thm:Q_prop}. It is one of the sufficient conditions, similar in flavor to Caffarelli~\cite{Ca1} (also see~\cite{Cher17}) but weaker, for showing that the transport map $\nabla \psi^{*}:\mathrm{Int}(\mathcal{Y}) \to \mathrm{Int}(\mathcal{S})$ is continuous on $\mathrm{Int}(\mathcal{Y})$. Recently, \cite[Theorem~1.1]{Figalli18} used a condition like \eqref{eq:sandwitch} to show that the center-outward quantile function (see~\cite{dCHM} for its definition) is a homeomorphism. However,~\cite{Figalli18} assumed that $\mathcal{Y} = \mathbb{R}^d$. In contrast,~\eqref{eq:sandwitch} is more flexible and covers the case when $\mathcal{Y}$ is a compact convex subset of $\mathbb{R}^d$ as well as when $\mathcal{Y}=\mathbb{R}^d$, and $\nu$ is a probability measure with positive bounded density on $\mathcal{Y}$.
For example, any unimodal multivariate density supported on a convex domain $\mathcal{Y}\subset \mathbb{R}^d$ satisfies \eqref{eq:sandwitch}; in particular, this includes the family of multivariate normal distributions. \end{remark} Next we illustrate that the rank map (as defined in~\eqref{eq:Rank}) possesses many properties similar to the univariate distribution function. Our first result, Lemma~\ref{lem:Monotonicity} (proved in Section~\ref{sec:Monotonicity}), states that the one-dimensional projection of the rank map, along any direction, is nondecreasing. \begin{lemma}[Monotonicity of the rank map]\label{lem:Monotonicity} Let $R$ be the rank map of $\nu$ w.r.t. $\mu$. For $x,y\in \mathbb{R}^d$, define $R_{x,y}:\mathbb{R}\to \mathbb{R}$ as \begin{align*} R_{x,y}(t):=(x-y)^{\top}R(tx+(1-t)y), \qquad t \in \mathbb{R}. \end{align*} Then, $R_{x,y}(\cdot)$ is a nondecreasing function. \end{lemma} Note that a univariate distribution function is not only nondecreasing but takes the value 0 or 1 as one approaches $-\infty$ or $+\infty$, irrespective of the probability measure $\nu$. Under mild regularity assumptions on $\mathcal{S}$ and the rank map $R(\cdot)$, we show in Lemma~\ref{cor:RankProp} (proved in Section~\ref{sec:RankProp}) that $R(\cdot)$ is continuous on the whole of $\mathbb{R}^d$ and it approaches a limit along every ray that depends only on the geometry of $\mathcal{S}$ (and not on the measure $\nu$). \begin{lemma}\label{cor:RankProp} Let $\mathcal{S}$ be a compact convex set in $\mathbb{R}^d$ such that all the supporting hyperplanes of $\mathcal{S}$ touch the boundary of $\mathcal{S}$ at most once. Let $\mu$ and $\nu$ be two probability measures on $\mathcal{S}$ and $\mathcal{Y} \subset \mathbb{R}^d$, respectively, where $\mathcal{Y}$ has nonempty interior. Let $R$ be the rank map of $\nu$ w.r.t. $\mu$. Suppose that $R$ is a homeomorphism from $\mathrm{Int}(\mathcal{Y})$ to $\mathrm{Int}(\mathcal{S})$.
Then, \begin{enumerate} \item[(a)] $R$ is everywhere continuous in $\mathbb{R}^d$, \item[(b)] for any $x\in \mathbb{R}^d$, $\lim_{\lambda \to +\infty} R(\lambda x) = \argmax_{v\in \mathcal{S}} \langle x, v\rangle$. \end{enumerate} \end{lemma} Note that the above required condition on $\mathcal{S}$ is certainly satisfied, for example, when $\mathcal{S}$ is the unit ball in $\mathbb{R}^d$, i.e., $\mathcal{S} = B_1(0)$; unfortunately when $\mathcal{S} = [0,1]^d$, the condition is not satisfied. \subsection{The sample quantile and rank maps}\label{sec:Emp-Q-R} \begin{figure} \centering \includegraphics{4-Points} \caption{The left plot shows a data set with four two-dimensional points $X_1,X_2,X_3$ and $X_4$. The right plot shows the four cells (each with area 1/4) marked 1, 2, 3, 4, and the four data points in blue (appropriately scaled to lie in $[0,1]^2$) along with dashed lines connecting each data point to the centroid (in red) of the corresponding cell. The two points $A$ and $B$ in the right plot correspond to the intersection of three cells --- $1, 2, 3$ and $1, 3, 4$.} \label{fig:Q-map} \end{figure} As before, we fix an absolutely continuous distribution $\mu$ with compact convex support $\mathcal{S} \subset \mathbb{R}^d$. Given a random sample $X_1,\ldots, X_n$ from a distribution $\nu$ (on $\mathbb{R}^d$), we now consider estimating the population quantile and rank maps $Q(\cdot)$ and $R(\cdot)$, respectively (w.r.t.~$\mu$). We simply define the sample versions of the quantile and rank maps as those obtained by replacing the unknown distribution $\nu$ with its empirical counterpart $\hat \nu_n$ --- the empirical distribution of the data, i.e., $$ \hat \nu_n(A) = \frac{1}{n} \sum_{i=1}^n \delta_{X_i}(A), \qquad \mbox{ for any Borel set $A \subset \mathbb{R}^d$}. 
$$ \subsubsection{Empirical quantile function}\label{sec:Q_n} By Theorem~\ref{thm:Brenier} there exists a $\mu$-a.e.~unique map $\hat Q_n$ which pushes $\mu$ to $\hat \nu_n$ and can be expressed as \begin{equation}\label{eq:SampQ_n} \hat Q_n = \nabla \hat \psi_n, \end{equation} where $\hat \psi_n:\mathbb{R}^d \to \mathbb{R} \cup \{+\infty\}$ is a convex function. Further, by Theorem~\ref{thm:Brenier}, $\hat Q_n$ can be computed via: \begin{equation}\label{eq:Emp_Q} \hat Q_n = \argmin_T \int \|u - T(u)\|^2 d\mu(u) \qquad \quad \mbox{subject to }\quad T\#\mu = \hat \nu_n. \end{equation} Note that $\hat Q_n= \nabla \hat \psi_n$ is $\mu$-a.e.~unique; when $\hat \psi_n$ is not differentiable at $u$ we can define $\hat Q_n(u)$ to be any point in $\partial \hat \psi_n(u)$. As $\hat Q_n = \nabla \hat \psi_n$ pushes $\mu$ to $\hat \nu_n$, $\hat \psi_n$ is a convex function whose gradient takes $\mu$-a.e.~finitely many values (in the set $\{X_1,\ldots, X_n\}$). Thus $\hat \psi_n$ is piecewise linear (affine), and hence, there exists $\hat h = (\hat h_1,\ldots, \hat h_n) \in \mathbb{R}^n$ (unique up to adding a scalar multiple of $(1,\ldots, 1) \in \mathbb{R}^n$) such that $\hat{\psi}_n:\mathbb{R}^d \to \mathbb{R} \cup \{+\infty\}$ can be represented as \begin{equation}\label{eq:Emp_G} \hat{\psi}_n(u) := \begin{cases} \max_{i=1,\ldots, n} \{u^\top X_i + \hat h_i\} & \mbox{for } u \in \mathcal{S} \\ +\infty & \mbox{for } u \notin \mathcal{S}. \end{cases} \end{equation} The vector $\hat h$ can be computed by solving a convex optimization problem; see Section~\ref{sec:Comp}. Note that from the form of $\hat{\psi}_n$ above it is clear that $\hat Q_n(u)$, for any $u \in \mathrm{Int}(\mathcal{S})$, belongs to the convex hull of the data.
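In $d=1$ with $\mu = $ Uniform$([0,1])$, the vector $\hat h$ in \eqref{eq:Emp_G} admits a closed-form recursion: each cell must carry $\mu$-mass $1/n$, so the kinks of $\hat{\psi}_n$ sit at $i/n$, and matching consecutive affine pieces there pins down $\hat h$ up to an additive constant. A minimal sketch (ours, not the general algorithm of Section~\ref{sec:Comp}):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10
X = np.sort(rng.normal(size=n))          # order statistics X_(1) <= ... <= X_(n)

# psi_hat(u) = max_i { u * X_(i) + h_i } must have its kinks at u = i/n, so
# continuity of consecutive affine pieces at i/n gives a recursion for h:
h = np.zeros(n)
for i in range(1, n):
    h[i] = h[i - 1] - (X[i] - X[i - 1]) * (i / n)

def Q_hat(u):
    # gradient of the active affine piece = the empirical quantile map
    return X[np.argmax(u * X + h)]

# sanity check: the cell ((i-1)/n, i/n] is mapped to the i-th order statistic
assert all(Q_hat((i + 0.5) / n) == X[i] for i in range(n))
```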
\begin{remark}[Form of the subdifferential set $\partial \hat{\psi}_n(u)$]\label{rem:Q-d} As $\hat{\psi}_n$ is piecewise linear (affine) and convex (and thus a finite pointwise maximum of affine functions), we can explicitly write its subdifferential, i.e., for any $u \in \mathcal{S}$, $$\partial \hat{\psi}_n(u) = \mathrm{Conv}(\{X_i: \langle u,X_i \rangle + \hat h_i = \hat{\psi}_n(u)\}).$$ \end{remark} The function $\partial \hat{\psi}_n(\cdot)$ induces a cell decomposition of $\mathcal{S}$: Each cell is a convex set and is defined as \begin{equation}\label{eq:W_i} W_i(\hat h) := \{u \in \mathcal{S}: \nabla \hat{\psi}_n(u) = X_i\}; \end{equation} see~\cite{Gu12} for the details. In defining $W_i(\hat h)$, as in~\eqref{eq:W_i}, we only consider points $u \in \mathcal{S}$ where $\hat{\psi}_n$ is differentiable. Note that, for a.e.~sequence $X_1,\ldots, X_n$, each cell $W_i(\hat h)$ has $\mu$ measure $1/n$ and $\cup_{i=1}^n W_i(\hat h) \subset \mathcal{S}$. Figure~\ref{fig:Q-map} illustrates this with four points $X_1,X_2,X_3$ and $X_4$ and $\mu = $ Uniform$([0,1]^2)$. Each point in the four cells (labelled 1, 2, 3 and 4) is mapped to the corresponding data point ($X_1,X_2,X_3$ and $X_4$) by the sample quantile function $\hat Q_n \equiv \nabla \hat \psi_n$. The convex function $\hat \psi_n$ is not differentiable at the boundary of the 4 cells (marked by the black lines in the right panel of Figure~\ref{fig:Q-map}). Remark~\ref{rem:Q-d=1} in the Appendix illustrates the above ideas when $d=1$ and $\mu = $ Uniform$([0,1])$. \subsubsection{Empirical rank map}\label{sec:R_n} Let us define $\hat{\psi}_n^*:\mathbb{R}^d \to \mathbb{R} \cup \{+ \infty\}$ to be the {\it Legendre-Fenchel} dual of $\hat{\psi}_n$, i.e., \begin{equation}\label{eq:Sup} \hat{\psi}^{*}_n(x) :=\sup_{y\in \mathbb{R}^d}\big\{\langle x,y\rangle - \hat{\psi}_n(y)\big\} = \sup_{u\in \mathcal{S}}\big\{\langle x,u\rangle - \hat{\psi}_n(u)\big\}, \quad \mbox{for } x \in \mathbb{R}^d. 
\end{equation} We define the {\it multivariate sample rank function} $\hat{R}_n:\mathbb{R}^d \to \mathcal{S}$ as \begin{equation}\label{eq:SampR_n} \hat{R}_n := \nabla \hat{\psi}^{*}_n. \end{equation} The following result (proved in Section~\ref{sec:RAltDef}) expresses $\hat{R}_n$ in another form, which is especially convenient for computation; cf.~\cite[Definition 3.1]{Cher17}. \begin{lemma}\label{lem:RAltDef} Consider the multivariate sample rank function $\hat{R}_n$ defined in \eqref{eq:SampR_n}. Then, for all $y\in \mathbb{R}^d$, \begin{align}\label{eq:RAltDef} \hat{R}_n(y) \in \argmax_{u\in \mathcal{S}}\{\langle u, y\rangle - \hat{\psi}_n(u)\}. \end{align} \end{lemma} The following result, proved in Section~\ref{Pf:lem:ExtremePt}, expresses the value of $ \hat{\psi}^{*}_n$ at the data points in terms of $\hat {h}_i$ (see~\eqref{eq:Emp_G}). \begin{lemma}\label{lem:ExtremePt} Fix $i \in \{1,\ldots, n\}$. Consider $\hat{\psi}^{*}_n$ as defined in~\eqref{eq:Sup}. Then, $\hat{\psi}^{*}_n(X_i) = -\hat {h}_i$. \end{lemma} \begin{figure} \includegraphics{2-Dist} \caption{The two plots show the cell decomposition of $\mathcal{S} = [0,1]^2$ (each cell with area $1/n$ where $n=100$) induced by the estimated quantile function (w.r.t.~$\mu = $ Uniform$([0,1]^2)$) and the data cloud (appropriately scaled to lie in $[0,1]^2$) along with dashed lines indicating which cell corresponds to which data point. The data sets are drawn from the following distributions: (i) $X \sim N_2((0,0),I_2)$; (ii) $X \sim N_2((0,0),\Sigma)$ where $\Sigma_{1,1} = \Sigma_{2,2} = 1$ and $\Sigma_{1,2} = \Sigma_{2,1} = 0.99$.} \label{fig:4Plots} \end{figure} It is a simple fact that the Legendre-Fenchel dual of a piecewise affine convex function is again piecewise affine and convex.
In fact, for $x \in \mathrm{Conv}(X_1, \ldots, X_n)$, \begin{equation}\label{eq:Dual-2} \hat{\psi}^{*}_n(x) = \min \left\{\sum_{i=1}^n t_i \hat{\psi}^{*}_n(X_i): t_i \ge 0, \sum_{i=1}^n t_i =1, \sum_{i=1}^n t_i X_i = x\right\}; \end{equation} see e.g.,~\cite[Theorem 2.2.7]{H94}. Thus, $\hat R_n(\cdot)$ a.e.~takes finitely many distinct values (as it is the gradient of a piecewise affine convex function). Remark~\ref{rem:R-d=1} in the Appendix shows that when $d=1$, the rank function $\hat{R}_n$ is not defined uniquely at the data points $X_i$'s. Note that the non-uniqueness of the rank function when $d=1$ was finessed by enforcing right-continuity, which is hard to do as we go beyond $d=1$. Indeed, for any $d \ge 1$, $\hat R_n(X_{i})$ could be defined as any element in the (closure of the) cell $W_i(\hat h)$; this follows from Lemma~\ref{lem:SubD}. Figure~\ref{fig:Q-map} illustrates this when $\mu = $ Uniform$([0,1]^2)$. Using the next result, Lemma~\ref{lem:Char3} (proved in Section~\ref{sec:Char3}), we see in Figure~\ref{fig:Q-map} that any point in the interior of the triangle formed by $X_1, X_2$ and $X_3$ (or $X_1, X_3$ and $X_4$) is mapped to the point $A$ (or $B$) by the sample rank map $\hat R_n$. \begin{lemma}\label{lem:Char3} Fix $x\in \mathbb{R}^d$. Suppose that for indices $\{i_1,\ldots, i_{d+1}\} \subset \{1,\ldots, n\}$: (i) $x\in \mathrm{Int}\big(\mathrm{Conv}\big(X_{i_1}, \ldots , X_{i_{d+1}}\big)\big)$, and (ii) there exists a unique $u\in \mathcal{S}$ such that $\{u\} = \mathrm{Cl}(W_{i_1}(\hat h)) \cap \ldots \cap \mathrm{Cl}(W_{i_{d+1}}(\hat h))$ (see~\eqref{eq:W_i}). Then, $u$ is the unique point in $\mathcal{S}$ such that $x\in \partial \hat{\psi}_n (u)$. Furthermore, $\partial\hat{\psi}^{*}_n(x) = \{u\}$ and $u = \hat{R}_n(x)$. \end{lemma} \subsubsection{Ranks}\label{sec:Ranks} By the ``ranks" of the data points we mean the rank function evaluated at the data points $X_i$'s.
When $d=1$ and the underlying distribution is continuous, the usual ranks, i.e., $\{\mathbb{F}_n(X_i)\}_{i=1}^n$ (here $\mathbb{F}_n$ is the empirical distribution function), are identically distributed on the discrete set $\{1/n,2/n,\ldots, n/n\}$ with probability $1/n$ each. As a consequence, the usual ranks are {\it distribution-free} (in $d=1$), i.e., the distribution of $\mathbb{F}_n(X_i)$ does not depend on the distribution of $X_i$ (as long as $X_i$ comes from a continuous distribution). We may ask: ``Does a similar property hold for the multivariate ranks $\hat R_n(X_i)$?". From the discussion in Section~\ref{sec:R_n} it is clear that the multivariate ranks $\hat R_n(X_{i})$ are non-unique. In fact, we can choose $\hat R_n(X_i)$ to be any point in the set $W_i(\hat h)$ (see~\eqref{eq:W_i}). In the sequel we will use a special choice of $\hat R_n(X_i)$ which will lead to a distribution-free notion. We define $\hat R_n(X_i)$ as a random point drawn from the uniform distribution on the cell $W_i(\hat h) \subset \mathcal{S}$, i.e., \begin{equation}\label{eq:Rank-X_i} \hat R_n(X_i)|X_1,\ldots, X_n \sim \mathrm{Uniform}(W_i(\hat h)). \end{equation} Thus, our choice of the empirical ranks $\{\hat R_n(X_i)\}_{i=1}^n$ is random. However, this external randomization leads to the following interesting consequence --- the multivariate ranks are distribution-free (see Section~\ref{pf:Uniform} for the proof). \begin{lemma}\label{lem:Uniform} Suppose that $X_1,\ldots, X_n$ are i.i.d.~$\nu$, an absolutely continuous distribution on $\mathbb{R}^d$. Let $\mu = $ Uniform$(\mathcal{S})$, where $\mathcal{S}$ is a compact convex set in $\mathbb{R}^d$. 
Then, for any $i =1,\ldots, n$, $\hat R_n(X_i) \sim \mu = \mathrm{Uniform}(\mathcal{S}).$ \end{lemma} Compare Lemma~\ref{lem:Uniform} with the result that $R(X) \sim \mu$ where $R(\cdot)$ is the population rank map of $X \sim \nu$, a consequence of the fact that $R(\cdot)$ pushes forward $\nu$ to $\mu$ (by Lemma~\ref{lem:QRrelation}). Note that the above result may not hold if $\mu$ is not the uniform distribution on $\mathcal{S}$. If we do not want a randomized choice of ranks, then we can define $\hat R_n(X_i) := \argmax_{u \in \mathrm{Cl}(W_i(\hat h))}\|u\|;$ this choice is convenient for computational purposes. \subsection{Computation of the sample quantile and rank functions}\label{sec:Comp} The computation of the empirical quantile function (via~\eqref{eq:Emp_Q}) is often referred to as the semi-discrete optimal transport problem. The computation of~\eqref{eq:W_i}, which is obtained from solving~\eqref{eq:Emp_Q}, leads to a ``partition'' of $\mathcal{S}$ into $n$ convex sets (each with $\mu$-measure $1/n$) and is usually called the power diagram~\cite{Auren87} --- a type of weighted Voronoi diagram. Several authors have worked on the computation of the power diagram; see e.g.,~\cite{Auren87, Auren98, Merigot11, Levy18}. In particular, it is known that the computation of~\eqref{eq:Emp_Q} can be formulated as a convex program with $n$ variables. Moreover, several computationally efficient approximate algorithms (useful especially when $d \ge 4$) have recently been investigated; see e.g.,~\cite{Genevay16} and the references therein. Once the empirical potential function $\hat{\psi}_n$ is computed, the computation of the empirical rank map at a data point, via~\eqref{eq:RAltDef}, involves solving a convex program.
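Both computations above can be sketched schematically. In the sketch below the convex program for $\hat h$ is solved by averaged stochastic gradient descent on the semi-discrete dual objective, in the spirit of the stochastic approach of~\cite{Genevay16}, with $\mu = $ Uniform$([0,1]^2)$; the rank map is then evaluated via~\eqref{eq:RAltDef}, with the convex program replaced by a crude grid search over $\mathcal{S}$. This is an illustrative sketch under these simplifying choices, not the solvers of~\cite{Merigot11, Levy18}; all function names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def solve_h(X, n_iters=20000):
    """Averaged SGD on F(h) = E_mu[max_i(<U,X_i> + h_i)] - mean(h),
    whose i-th partial derivative is mu(W_i(h)) - 1/n; at the minimizer
    every cell W_i(h) has mu-measure 1/n.  Here mu = Uniform([0,1]^2)."""
    n, d = X.shape
    h = np.zeros(n)
    h_avg = np.zeros(n)
    for t in range(1, n_iters + 1):
        u = rng.random(d)                    # U ~ Uniform([0,1]^2)
        grad = np.full(n, -1.0 / n)          # unbiased gradient estimate:
        grad[np.argmax(X @ u + h)] += 1.0    # e_{argmax} - (1/n) * ones
        h -= grad / np.sqrt(t)               # step size 1/sqrt(t)
        h_avg += (h - h_avg) / t             # Polyak-Ruppert averaging
    return h_avg

def R_hat(y, X, h, m=201):
    """Sample rank map: grid-search argmax over S = [0,1]^2 of
    <u, y> - psi_hat(u), as a stand-in for the convex program."""
    g = np.linspace(0.0, 1.0, m)
    U = np.stack(np.meshgrid(g, g), axis=-1).reshape(-1, 2)
    vals = U @ y - np.max(U @ X.T + h, axis=1)
    return U[np.argmax(vals)]
```

For two data points $X_1 = (0,0)$ and $X_2 = (1,1)$, the optimal weights satisfy $\hat h_1 - \hat h_2 = 1$ (the two cells are separated by the line $u_1 + u_2 = 1$, each of $\mu$-measure $1/2$), and `solve_h` recovers this difference up to the common additive constant.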
The two plots in Figure~\ref{fig:4Plots} show different convex polyhedral partitions of $[0,1]^2$ obtained from the potential function corresponding to two different simulated data sets --- notice the different shapes and structure of the cells $W_i(\hat h)$ for the two types of data distributions. As the (empirical) rank and quantile functions take values in $\mathbb{R}^d$, they are a bit difficult to visualize. In Figure~\ref{fig:Banana} we plot the estimated depth function --- defined as $\hat D_n:\mathbb{R}^d \to \mathbb{R}$ such that $\hat D_n(x) := 1/2 - \|\hat R_n(x) - \mathbf{1}/2\|_\infty$ where $\hat R_n(\cdot)$ is the estimated rank function and $\mathbf{1} = (1,1,\ldots, 1) \in \mathbb{R}^d$ --- for the banana-shaped distribution; cf.~\cite{Cher17} where the authors motivate the use of multivariate ranks/quantiles based on optimal transportation using this data (see~\cite[Figure 2]{Cher17}). The banana-like geometry of the data cloud is correctly picked up by the non-convex contours in Figure~\ref{fig:Banana}; cf.~\cite[Figure 2]{Cher17}. \begin{figure} \includegraphics{Banana-Dist} \caption{Left: A random sample of size $n=1000$ drawn from the banana-shaped distribution. Right: The estimated depth function for the banana-shaped data obtained from a sample of 10000 observations.} \label{fig:Banana} \end{figure} \subsection{Properties of the quantile and rank maps}\label{sec:Q-R-Prop} In this section we describe some important properties of the defined quantile and rank functions. Our first result, proved in Section~\ref{Pf:lem:LinTrans}, shows that the quantile/rank function of $Y := c X + b$, where $X \sim \nu$ is a random vector in $\mathbb{R}^d$ ($b \in \mathbb{R}^d$ and $c >0$), can be easily obtained from the quantile/rank function of $X$. \begin{lemma}\label{lem:LinTrans} Suppose that $X \sim \nu$ where $\nu$ is a distribution on $\mathbb{R}^d$. Let $\mu$ be an absolutely continuous distribution on $\mathbb{R}^d$ with support $\mathcal{S}$. 
Suppose that $c >0$ is a scalar and $b \in \mathbb{R}^d$. Let $Y := c X + b$. Let $Q_X:\mathcal{S} \to \mathbb{R}^d$ and $R_X: \mathbb{R}^d \to\mathbb{R}^d$ be the quantile and rank maps of $X$ (w.r.t.~$\mu$). Let $Q_Y:\mathcal{S} \to \mathbb{R}^d$ and $R_Y: \mathbb{R}^d \to \mathbb{R}^d$ be the quantile and rank maps of $Y$ (w.r.t.~$\mu$). Then, for $\mu$-a.e.~$u$ and for a.e.~$y \in \mathbb{R}^d$, $$Q_Y(u) = c\,Q_X(u) + b, \qquad \mbox{and} \qquad R_Y(y) = R_X((y -b)/c).$$ \end{lemma} The above lemma holds for any probability measure $\nu$ (discrete or continuous); it justifies the fact that we can rescale the data (by adding a constant vector and multiplying by a positive scalar) in Figures~\ref{fig:Q-map} and~\ref{fig:4Plots} and the cell decomposition (of $\mathcal{S}$) does not change (see~\eqref{eq:W_i}) as the transformed (piecewise linear and convex) potential function is obtained by adding a constant to a positive multiple of the previous (piecewise linear) potential function. It is natural to ask if it is possible to relate the quantile and rank functions of $Y := AX$, where $A_{d \times d}$ is a matrix, to those of $X$. The following result (proved in Section~\ref{Pf:lem:LinTrans-2}) shows that the quantile map is equivariant under any orthogonal transformation if $\mu$ is spherically symmetric. \begin{lemma}\label{lem:LinTrans-2} Suppose that $A$ is an orthogonal matrix, i.e., $A A^\top = A^\top A = I_d$. Let $\mu$ be a spherically symmetric absolutely continuous distribution on $\mathbb{R}^d$ (e.g., the uniform distribution on the unit ball around $0 \in \mathbb{R}^d$). Let us denote by $\psi_X$ the potential function linked to the random variable $X\sim \nu$, i.e., $\nabla \psi_X \# \mu = \nu$ and $\psi_X$ is convex. Then a potential function of $Y:=AX$ is given by $\psi_Y(u) = \psi_X(A^\top u)$, for $u \in \mathbb{R}^d$. 
As a consequence, for $\mu$-a.e.~$u$ and for a.e.~$y \in \mathbb{R}^d$, $$Q_Y(u) = A Q_X(A^\top u), \qquad \mbox{and} \qquad R_Y(y) = A R_X(A^\top y).$$ \end{lemma} For the next result (proved in Section~\ref{pf:prop:Indep}) we take $\mu =$ Uniform$([0,1]^d)$; note that the choice of $\mu =$ Uniform$([0,1]^d)$ has been studied before (see e.g.,~\cite{D14, Galichon16}). This has implications for testing mutual independence between random vectors; see Section~\ref{sec:IndepTest} for more details. \begin{proposition}\label{prop:Indep} Suppose that $\nu$ is a distribution on $\mathbb{R}^d$ and let $\mu= $ Uniform$([0,1]^d)$. Suppose that $X =(X_1,X_2, \ldots, X_k) \sim \nu$, where $k \ge 2$ and $X_i \sim \nu_i$ is a random vector in $\mathbb{R}^{d_i}$, for $i=1,\ldots, k$ (here $d_1 + \ldots + d_k = d$). Let $Q$ and $Q_i$ be the quantile maps of $X$ and $X_i$, for $i = 1,\ldots, k$, respectively (w.r.t.~$\mu$ and $\mu_i = $ Uniform$([0,1]^{d_i})$). Let $R$ and $R_i$, for $i = 1,\ldots,k$, be the corresponding rank maps. If $X_1, \ldots, X_k$ are mutually independent then \begin{equation}\label{eq:Indep-Q} Q(u_1,\ldots, u_k) = (Q_1(u_1), \ldots, Q_k(u_k)), \quad \mbox{for $\mu$-a.e. } (u_1,\ldots,u_k) \in \mathbb{R}^{d}, \end{equation} and \begin{equation}\label{eq:Indep} R(x_1,\ldots, x_k) = (R_1(x_1), \ldots, R_k(x_k)), \qquad \mbox{for a.e.~}(x_1,\ldots,x_k) \in \mathbb{R}^{d}. \end{equation} Conversely, suppose that~\eqref{eq:Indep-Q} or~\eqref{eq:Indep} holds. Then $X_1,\ldots,X_k$ are mutually independent. \end{proposition} \section{Asymptotic Properties of Empirical Quantile and Rank maps}\label{sec:Asymptotics} \subsection{Uniform convergence}\label{sec:UnifConv} The rank and quantile functions in one dimension enjoy many interesting asymptotic properties.
For example, if $X_1, \ldots , X_n\sim \nu$, where $\nu$ is a probability measure over $\mathbb{R}$, then by the Glivenko-Cantelli theorem, the empirical rank function (which is the same as the empirical distribution function when $d=1$) converges uniformly to the population rank function a.s. Similarly, for $d=1$, the empirical quantile function converges uniformly (on compacts $[a,b] \subset (0,1)$) to the population quantile function, when the underlying distribution function is continuous and strictly increasing. One may wonder if such results also hold for the multidimensional empirical quantile and rank functions studied in this paper. In Theorem~\ref{thm:GCProp} below we show that this is indeed the case. Suppose that $\nu$ is absolutely continuous with support $\mathcal{Y} \subset \mathbb{R}^d$; here $\nu$ is the target distribution. Let $\mu$ be an absolutely continuous distribution supported on a compact convex set $\mathcal{S} \subset \mathbb{R}^d$. Let $Q$ and $R$ be the quantile and rank maps of $\nu$ (w.r.t.~$\mu$); see~\eqref{eq:Quantile} and~\eqref{eq:Rank} respectively. Let $X_1, X_2, \ldots$ be a sequence of i.i.d.~random vectors with distribution $\nu$. Let $\{\hat{\nu}_n\}_{n\ge1}$ be a sequence of random probability distributions (computed from $X_1,\ldots, X_n$) such that $\hat{\nu}_n$ converges weakly to $\nu$ a.s., i.e., \begin{equation}\label{eq:Conv-Emp-Meas} \hat{\nu}_n \stackrel{d}{\to} \nu \quad \mbox{a.s.} \end{equation} For example, we can take $\hat{\nu}_n$ to be the empirical distribution obtained from the first $n$ data points, i.e., $\hat{\nu}_n = \frac{1}{n} \sum_{i=1}^n \delta_{X_i}$; in this case we know that~\eqref{eq:Conv-Emp-Meas} holds (see e.g.,~\cite[Theorem~11.4.1]{Dudley}). Denote the multivariate quantile and rank functions for $\hat{\nu}_n$ (w.r.t.~$\mu$) by $\hat{Q}_n$ and $\hat{R}_n$.
In particular, when the underlying potential functions (see Definition~\ref{def:Quantile}) are not differentiable, we define $\hat{Q}_n$ and $\hat{R}_n$ to be any point in the corresponding subdifferential set. The following is a main result of this paper (see Section~\ref{proof:thm-GC} for its proof). \begin{theorem}\label{thm:GCProp} Consider the notation introduced above and suppose that~\eqref{eq:Conv-Emp-Meas} holds. Suppose that $Q:\mathrm{Int}(\mathcal{S}) \to \mathrm{Int}(\mathcal{Y})$ is a homeomorphism. Let $K_1\subset \mathrm{Int}(\mathcal{S})$ and $K_2\subset \mathrm{Int}(\mathcal{Y})$ be any two compact sets. \begin{enumerate} \item[(a)] Then, we have \begin{equation}\label{eq:GC} \sup_{u\in K_1} \|\hat{Q}_n(u) - Q(u)\| \stackrel{a.s.}{\rightarrow} 0. \end{equation} \item[(b)] Further, \begin{align}\label{eq:GC-Rank} \sup_{x\in K_2} \|\hat{R}_n(x) - R(x)\| \stackrel{a.s.}{\rightarrow} 0. \end{align} \item[(c)] Suppose that all the supporting hyperplanes of $\mathcal{S}$ touch the boundary of $\mathcal{S}$ at most once. Let $\{\lambda_n\}_{n\in \mathbb{N}}\subset \mathbb{R}$ be a sequence such that $\lambda_n\to \infty$ as $n\to \infty$. Then, \begin{align}\label{eq:GC-RankFiner} \sup_{x\in \mathbb{R}^d} \|\hat{R}_n(x) - R(x)\| \stackrel{a.s.}{\rightarrow} 0, \end{align} and \begin{align}\label{eq:Asymptot} \qquad \lim_{\lambda_{n}\to \infty}\hat{R}_n(\lambda_n x) \stackrel{a.s.}{=} \argmax_{v \in \mathcal{S}} \langle x, v\rangle, \quad \mbox{for all}\;\; x\in \mathbb{R}^d. \end{align} \end{enumerate} \end{theorem} Theorem~\ref{thm:GCProp}-$(a)$ extends the uniform convergence of the empirical quantile function (on compacts in the interior of $[0,1]$, for $d=1$) beyond $d=1$. Theorem~\ref{thm:GCProp}-$(b)$ shows the uniform convergence of the estimated rank map on any compact set inside $\mathrm{Int}(\mathcal{Y})$. 
One may notice that Theorem~\ref{thm:GCProp}-$(a)$ and~$(b)$ improve upon the result of \cite[Theorem 3.1]{Cher17}, where the authors prove a similar convergence result for the estimated quantile/rank maps under the additional assumption of compactness of $\mathcal{Y}$. Also, compared to~\cite[Theorem~3.1]{Cher17}, where only convergence in probability of the empirical quantile/rank maps is established, in Theorem~\ref{thm:GCProp} we show a.s.~convergence. Theorem~\ref{thm:GCProp}-$(c)$ (see \eqref{eq:GC-RankFiner}) can be thought of as the proper generalization of the Glivenko-Cantelli theorem beyond $d=1$, where we show the a.s.~convergence of the estimated rank map uniformly over the whole of $\mathbb{R}^d$. \begin{remark}[On the sufficient condition for~\eqref{eq:GC-RankFiner}] In~\eqref{eq:GC-RankFiner} we show that the empirical rank map converges to the population rank function uniformly on $\mathbb{R}^d$, under a certain assumption on $\mathcal{S}$. This sufficient condition is certainly satisfied, for example, when $\mathcal{S}$ is the unit ball in $\mathbb{R}^d$, i.e., $\mathcal{S} = B_1(0)$. Unfortunately, when $\mathcal{S} = [0,1]^d$, this condition is not satisfied. \end{remark} \begin{remark}[Necessity of $Q$ to be a homeomorphism] One of the main assumptions in Theorem~\ref{thm:GCProp} is that the population quantile $Q$ is a homeomorphism; for $d=1$ this corresponds to assuming that the distribution function is continuous and strictly increasing. It is in fact a necessary condition for the uniform convergence of $\hat{Q}_n$ (the sample quantile function) to $Q$; the same is true, more generally, for the uniform convergence of the (sub)-gradients of any sequence of convex functions. To see this, consider the example of a sequence of convex functions $\phi_n:\mathbb{R} \to \mathbb{R}$ defined as $\phi_n(x) := (x^2+n^{-1})^{1/2}$. Note that $\phi^{\prime}_n(x)=x(x^2+n^{-1})^{-1/2}$ and $\phi^{\prime \prime}_n(x)= n^{-1}(x^2+n^{-1})^{-3/2}$, for all $x\in \mathbb{R}$.
This shows that $\phi_n$ is convex for all $n\geq 1$. As $n\to \infty$, $\phi_n(x)$ converges pointwise to $\phi(x) := |x|$. However, the subdifferential set of $\phi$ at $x=0$ is equal to $[-1,1]$, whereas $\phi^{\prime}_n(0)=0$ for all $n\geq 1$. Hence, $\phi^{\prime}_n(\cdot)$ does not converge uniformly on any compact set containing $0$. \end{remark} \begin{remark}[When is $Q$ a homeomorphism?] From Proposition~\ref{thm:Q_prop} it follows that when the support of $\nu$ is convex and $\nu$ is absolutely continuous with density satisfying \eqref{eq:sandwitch} then the quantile map $Q$ will be a homeomorphism; also see Remarks~\ref{rem:Cvx-S-Y} and~\ref{rem:Cond-11}. Recently,~\cite[Proposition~4.5 and Corollary~4.6]{KM17} provided results showing that the transport map (quantile map) can be a homeomorphism even when the support of $\nu$ is a union of convex domains. \end{remark} \begin{remark}[Connection to~\citet{VS18}] During the final stages of the preparation of this paper, we came across the recent paper~\cite{VS18}, whose results imply ``graphical convergence" of the estimated quantile maps (see~\cite[Theorem 4.2 and Corollary 4.4]{VS18}). Their result does not need absolute continuity of $\nu$, and no restrictions are placed on the supports of the measures $\mu$ and $\nu$. However, note that graphical convergence, which implies a form of local uniform convergence, is weaker than the uniform convergence on compacta stated in Theorem~\ref{thm:GCProp}. Moreover, since the empirical rank maps $\hat{R}_n$ are not strictly transport maps, it is not clear if~\cite{VS18} implies any notion of convergence for $\hat{R}_n$. \end{remark} One of the key ingredients in the proof of Theorem~\ref{thm:GCProp} is the following lemma, which investigates some limiting properties of (sub)-gradients of a sequence of convex functions. Before proceeding to the main statement, let us provide some motivation for such a result.
Suppose that $\phi_n:\mathbb{R}^d \to \mathbb{R} \cup \{+\infty\}$, for ${n\ge 1}$, is a sequence of convex functions such that the sets $\{\phi^{*}_n<\infty\}$ are uniformly bounded; here $\phi^{*}_n$ denotes the Legendre-Fenchel dual of $\phi_n$. We may think of $\phi_n$ as $\hat{\psi}^*_n$ (see~\eqref{eq:Sup}), in which case $\phi_n^* = \hat{\psi}_n$. Then, owing to Lemma~\ref{lem:SubD}, the sets $\{\partial \phi_n(\mathbb{R}^d): n \ge 1\}$ are uniformly bounded. Further, on any bounded set $\mathfrak{K} \subset \mathbb{R}^d$, $\{\phi_n\}_{n\ge 1}$ will be a sequence of uniformly bounded, uniformly equicontinuous\footnote{We say a sequence of functions $\{f_n:\mathbb{R}^d\to \mathbb{R}\}_{n\ge 1}$ is uniformly equicontinuous if for any $\epsilon>0$ there exists $\delta>0$ such that $|f_n(x)-f_n(y)|\leq \epsilon$, for all $n \ge 1$, whenever $\|x-y\|\leq \delta$.} functions on $\mathfrak{K}$; see~Lemma~\ref{lem:Bd}. Therefore, by the Arzel\`{a}-Ascoli theorem, there exists a convex function $\phi$ such that $\phi_n$ converges to $\phi$ uniformly on $\mathfrak{K}$ (along a subsequence). However, this does not guarantee the uniform convergence of the corresponding subdifferential sets. The following lemma, proved in Section~\ref{pf:KeyLemma}, addresses the mode of convergence of the subdifferential sets when the underlying convex functions converge uniformly. The result may also be of independent interest. \begin{lemma}\label{KeyLemma} For $n \ge 1$, let $\phi_n:\mathbb{R}^d \to \mathbb{R}$ be a sequence of convex functions with $\phi_n(0)=0$ and $\partial \phi_n(\mathbb{R}^d) \subset S$ for some compact set $S\subset \mathbb{R}^d$. Let $K\subset \mathbb{R}^d$ be a compact set. Assume that there exists a convex function $\phi:\mathbb{R}^d \to \mathbb{R} \cup \{+\infty\}$ such that $$\sup_{u\in K^{\prime}} |\phi_n(u) - \phi(u) |\to 0 \quad \mbox{ as } \quad n \to \infty,$$ where $K^{\prime}\subset \mathbb{R}^d$ is a compact set such that $\mathrm{Int}(K') \supset K$.
Then, the following hold: \begin{enumerate} \item[(a)] Fix $\delta>0$. For a set $A\subset \mathbb{R}^d$, define $B(\delta, A) := \cup_{x\in A} B_{\delta}(x)$. There exists $n_0=n_0(\delta) \in \mathbb{N}$ such that for all $n\geq n_0$, we have $\partial \phi_n(K)\subset B(\delta, \partial \phi(K))$. \item[(b)] $\partial\phi(K) \subset S$. \item[(c)] Furthermore, if $\phi$ is differentiable everywhere on $K$, then, \begin{equation} \sup_{u\in K} \sup_{\xi\in \partial \phi_n(u)} \|\xi- \nabla \phi(u)\| \to 0, \quad \text{as }n\to \infty. \vspace{-0.12in} \nonumber \end{equation} \item[(d)] If $\phi$ is strictly convex on $K^{\prime}$, then, for any open set $U\subset K$, there exists $n_0=n_0(U)\in \mathbb{N}$ such that $\partial \phi(U)\subset \partial \phi_n(K^{\prime})$ for all $n\geq n_0$. \item[(e)] Let $K^{\prime\prime} \subset S$ be a compact convex set. Suppose that $\phi^{*}_n$, the Legendre-Fenchel dual of $\phi_n$, converges uniformly to another differentiable convex function $\psi$ on $K^{\prime\prime}$, i.e., $\sup_{u\in K^{\prime\prime}} |\phi_n^*(u) - \psi(u) |\to 0$ as $n \to \infty$. Also suppose that $\nabla \psi(K^{\prime\prime}) \subset K$. For any $x\in K^{\prime\prime}$, if $\nabla\psi(x) =y\in K$, then $x\in \partial \phi(y)$. \end{enumerate} \end{lemma} Lemma~\ref{KeyLemma}-(e) shows that if a sequence of convex functions and their Legendre-Fenchel duals converge uniformly to the convex functions $\phi$ and $\psi$ respectively, and if one of these two functions is everywhere differentiable, then,~\eqref{eq:Charac-Sub} of Lemma~\ref{lem:SubD} holds partially between $\phi$ and $\psi$. This raises the following question (which is also relevant to the proof of Theorem~\ref{thm:GCProp}): If \eqref{eq:Charac-Sub} holds partially between any two convex functions $\phi$ and $\psi$, then, is it true that $\phi$ is the Legendre-Fenchel dual of $\psi$? 
As one can guess, this is not true in general, and the main reason is the invariance of \eqref{eq:Charac-Sub} under the addition of constants to $\phi$ or $\psi$. However, one may still hope that the subgradients of $\phi$ will be the same as the subgradients of the Legendre-Fenchel dual of $\psi$. Our next result, proved in Section~\ref{pf:KeyLemma2}, provides a sufficient condition under which we are able to validate this claim. \begin{lemma}\label{KeyLemma2} Let $\mathcal{S}$ and $\mathcal{Y}$ be two sets in $\mathbb{R}^d$. Suppose that $\mathcal{S}$ is bounded and there is a differentiable convex function $\psi: \mathcal{S}\to \mathbb{R}$ such that $\nabla \psi:\mathrm{Int}(\mathcal{S}) \to \mathrm{Int}(\mathcal{Y})$ is a homeomorphism. Then $\psi^{*}$, the convex conjugate of $\psi$, is differentiable everywhere on $\mathrm{Int}(\mathcal{Y})$. Now, let $\phi:\mathbb{R}^d\to \mathbb{R}$ be another convex function such that: (i) $\partial \phi(\mathbb{R}^d)\subset \mathcal{S}$, and (ii) $y= \nabla \psi(x)$ for some $x\in \mathcal{S}\; \Rightarrow \; x\in\partial \phi(y)$. Then, $\phi$ is differentiable everywhere on $\mathrm{Int}(\mathcal{Y})$, and $\nabla\phi(y) = \nabla \psi^{*}(y)$ for all $y\in \mathrm{Int}(\mathcal{Y})$. \end{lemma} Suppose that a sequence of convex functions $\{\phi_n\}_{n\geq 1}$ converges to another convex function $\phi$ uniformly on every compact subset of $\mathbb{R}^d$. Our last result of this section, Lemma~\ref{lem:UnifConv} (proved in Section~\ref{pf:UnifConv}), provides a sufficient condition under which one can show the uniform convergence of the subgradients of $\phi_n$ to those of $\phi$, over the whole of $\mathbb{R}^d$. \begin{lemma}\label{lem:UnifConv} Let $\mathcal{S}$ be a compact convex set in $\mathbb{R}^d$ such that all the supporting hyperplanes of $\mathcal{S}$ touch the boundary of $\mathcal{S}$ at most once. Let $\mathcal{Y} \subset \mathbb{R}^d$ have nonempty interior.
Let $\phi_n:\mathbb{R}^d \to \mathbb{R}$ be a sequence of convex functions such that $\partial \phi_n(\mathbb{R}^d)\subset \mathcal{S}$ for all $n\geq 1$. Suppose that $\{\phi_n\}_{n\geq 1}$ converges uniformly to a convex function $\phi:\mathbb{R}^d \to \mathbb{R}$ on every compact set of $\mathbb{R}^d$ and $\nabla\phi$ is a homeomorphism from $\mathrm{Int}(\mathcal{Y})$ to $\mathrm{Int}(\mathcal{S})$. Then, \begin{enumerate} \item[(a)] $\phi$ is everywhere differentiable in $\mathbb{R}^d$, \item[(b)] $\sup_{x\in \mathbb{R}^d} \sup_{y\in \partial\phi_n(x)}\|y - \nabla \phi(x)\|\to 0, \quad \text{ as }n\to \infty,$ \item[(c)] for any $x\in \mathbb{R}^d$, $\lim_{\lambda \to +\infty} \nabla \phi(\lambda x) = \argmax_{v\in \mathcal{S}} \langle x, v\rangle$. \end{enumerate} \end{lemma} We observe that \cite[Section~3.2.3]{dCHM} contains a result similar to Lemma~\ref{lem:UnifConv} for the case when $\mathcal{S}$ is the closed unit ball in $\mathbb{R}^d$. Lemma~\ref{lem:UnifConv} generalizes \cite{dCHM} by giving a sufficient condition on $\mathcal{S}$ under which a similar conclusion holds. \subsection{Comparison with~\citet{Cher17} and~\citet{dCHM}}\label{sec:Comparison} In the papers~\cite{Cher17} and~\cite{dCHM} the authors use ideas from the theory of optimal transportation to define multivariate quantiles and ranks. Although our approach is similar in spirit to that of~\cite{Cher17}, there are subtle and important differences. As opposed to~\cite{Cher17}, we completely avoid the dual formulation of Kantorovich and Brenier (see~\cite[Theorem 2.2]{Cher17}) and define the multivariate quantiles and ranks through Monge's primal problem; we hope that this makes the exposition more accessible to the uninitiated reader. \citet{Cher17} studied multivariate quantiles and ranks to obtain notions of statistical depth, whereas we study them to construct nonparametric goodness-of-fit and mutual independence tests.
The approach to defining multivariate ranks and quantiles proposed and studied in~\cite{dCHM} (also see~\cite{BSS18}) is quite different from ours.~\citet{dCHM} choose a set of $n$ representative grid points within the set $\mathcal{S}$ (that approximates the measure $\mu$) and then solve the discrete optimal transport problem (between the sample data points and the $n$ chosen grid points) to define the empirical rank map. Thus the ``ranks'' of the data points are forced to be the points in the chosen grid. This approach immediately leads to many attractive features for the empirical ranks, e.g., the distribution-freeness of the ranks. However this approach also has many obvious drawbacks: (i) the choice of the $n$ grid-points is ad-hoc (see~\cite{dCHM}); (ii) it does not automatically give rise to a quantile function (or quantile contours) and special smoothing interpolation is required (which again involves the choice of tuning parameters). In comparison, our approach (and that of~\cite{Cher17}) is completely automated and tuning parameter-free. However, this necessarily entails dealing with non-unique ranks at the data points. In a sense, our approach yields an elegant and useful notion of quantiles while the approach of~\cite{dCHM} yields a notion of ranks with attractive properties. Theorem~\ref{thm:GCProp} aims towards a unification of the asymptotic results of the Monge-Kantorovich ranks and quantiles of \cite[Theorem 3.1]{Cher17} and the center-outward ranks and quantiles of \cite[Propositions~1.5.1 and 1.5.2]{dCHM}. In~\cite{Cher17}, the authors show the convergence (in probability) of the Monge-Kantorovich ranks and quantiles by relating them to the solutions of the dual problem of the Kantorovich relaxation (to Monge's problem). 
This correspondence works under the assumption that both the reference and target measures have finite second moments.~\citet{dCHM} avoided this dependence on the finite second moment assumption by defining the center-outward ranks and quantiles on the basis of McCann's construction \cite{McCann95} of transport maps as limits of cyclically monotone maps. For proving our uniform consistency result, we marry the weak convergence theory of Monge-Amp\`{e}re measures (see~\cite{G16}) and the recently introduced theory of graphical convergence of transport maps (see~\cite{del2019}); the connections between these two notions of convergence are made precise in Lemmas~\ref{KeyLemma},~\ref{KeyLemma2} and~\ref{lem:UnifConv}. By doing so we neither have to impose any finite moment assumptions on the target measure nor have to interpret the quantiles or ranks as limits of cyclically monotone maps. \subsection{Local rate of convergence}\label{sec:RateSec} When $d=1$, the pointwise limiting distributions (under appropriate scaling and centering) of the empirical rank and quantile functions are easy to derive. In fact, the central limit theorem plays a key role in obtaining these limiting distributions; this is tied to the fact that the one-dimensional rank and quantile functions can be approximated by sums of i.i.d.~random variables. However, it is not yet clear if the multidimensional ranks and quantiles, studied in this paper, enjoy such properties. This makes it difficult to even find the pointwise rate of convergence of the empirical rank and quantile maps. As a first step towards finding the local uniform rate of convergence of the empirical quantile and rank functions, we give the following result (proved in Section~\ref{sec:RateProp1}) that provides a deterministic upper bound on the local uniform rate of convergence of the subdifferentials (of a sequence of convex functions and their Legendre-Fenchel duals) in terms of a local $L_2$-loss of the subdifferentials.
\begin{theorem}\label{ppn:RateProp1} Let $\mu$ be an absolutely continuous probability measure supported on a compact convex set $\mathcal{S} \subset \mathbb{R}^d$. Let $\nu$ be a probability measure supported on $\mathcal{Y}\subset \mathbb{R}^d$ and let $\psi$ be a convex function such that $\nabla \psi \# \mu = \nu$. Suppose that $\{\hat{\nu}_n\}_{n \ge 1}$ is a sequence of probability distributions on $\mathbb{R}^d$. Let $\{\hat{\psi}_n\}_{n \ge 1}$ be a sequence of convex functions such that $\nabla \hat{\psi}_n\# \mu = \hat{\nu}_n$, for all $n \ge 1$. Fix $u_0\in \mathrm{Int}(\mathcal{S})$ and $\delta_0\equiv\delta_0(u_0)>0$ such that $B_{\delta_0}(u_0) \subset \mathcal{S}$. Suppose that $\mu$ has a bounded (from below and above) nonvanishing density on $B_{\delta_0}(u_0)$. Suppose that $\psi$ is differentiable everywhere in $\mathrm{Int}(\mathcal{S})$ and let $\nabla\psi$ be locally uniformly Lipschitz in $B_{\delta_0}(u_0)$ with Lipschitz constant $K$. Define \begin{align}\label{eq:DeltaDef} \delta_n:= \Big(\int_{B_{\delta_0}(u_0)} \|\nabla \hat{\psi}_n(u)- \nabla\psi(u)\|^2 d\mu(u)\Big)^{\frac{1}{d+2}}. \end{align} Then, there exists $C=C(\mu, d, K) >1/2$ such that {\small \begin{align}\label{eq:SupBd} \quad \;\; \sup_{u\in B_{\delta_0/3}(u_0)} \sup_{y \in \partial \hat{\psi}_n(u)}\|y- \nabla\psi(u) \| \leq \begin{cases} C\delta_n & \text{if }\delta_n \leq \delta_0/3, \\ C \delta_n^{d+2}\delta^{-(d+1)}_{0} + \frac{1}{2}\delta_0& \text{if } \delta_n> \delta_0/3. \end{cases} \end{align}} Now, assume that $\mu$ has bounded nonvanishing density everywhere on $\mathcal{S}$ and that $\nabla \psi$ is uniformly Lipschitz on $\mathcal{S}$ with Lipschitz constant $K$. Suppose that there exists $\tilde{\delta}_0>0$ such that $\nabla\psi^{*}(\mathrm{Cl}(B_{\tilde{\delta}_0}(\nabla \psi(u_0))))\subset B_{\delta_{0}/6}(u_0)$. 
Define \begin{align*} \delta^{\prime}_n:= \Big(\int_{\mathcal{S}}\|\nabla \hat{\psi}_n(u) - \nabla \psi(u)\|^2 d\mu(u)\Big)^{\frac{1}{d+2}}, \end{align*} and {\small $$ \tilde{\delta}_n: = \sup_{v \in \mathrm{Cl}(B_{\tilde{\delta}_0}(\nabla \psi(u_0))), \; w\in \nabla\psi(\mathrm{Cl}(B_{\delta_0/3}(u_0)))}\Big\{\|\nabla\psi^{*}(v)-\nabla\psi^{*}(w)\|: \|v-w\|\leq C\delta^{\prime}_n\Big\} $$} where $C$ is the same constant as in \eqref{eq:SupBd}. If $\max\{C \delta^{\prime}_n, \tilde{\delta}_n\}<\delta_0/6$, then, \begin{align}\label{eq:SupBd1} \sup_{x\in B_{\tilde{\delta}_0}(\nabla \psi(u_0))} \sup_{w \in \partial \hat{\psi}^{*}_n(x)}\|w -\nabla\psi^{*}(x)\| \leq \tilde{\delta}_n. \end{align} \end{theorem} Note that the existence of $\tilde \delta_0$ is guaranteed if $\nabla \psi$ is a homeomorphism from $\mathrm{Int}(\mathcal{S})$ to $\mathrm{Int}(\mathcal{Y})$. In~\eqref{eq:SupBd} of Theorem~\ref{ppn:RateProp1} we provide a deterministic upper bound on the pointwise difference between $\partial \hat{\psi}_n(\cdot)$ and $\nabla\psi(\cdot)$, uniformly on the local ball $B_{\delta_0/3}(u_0)$, in terms of the $L_2$-loss~\eqref{eq:DeltaDef}. Similarly,~\eqref{eq:SupBd1} bounds the pointwise difference between $\partial \hat{\psi}_n^*(\cdot)$ and $\nabla\psi^*(\cdot)$, uniformly over a local ball around $\nabla\psi(u_0)$, in terms of $\tilde{\delta}_n$. As $\tilde{\delta}_n$ is defined implicitly in terms of $\delta_0,\tilde{\delta}_0$ and $\delta^{\prime}_n$ it is natural to ask how $\tilde{\delta}_n$ varies with $\delta^{\prime}_n$. 
If $\nabla \psi^{*}$ is uniformly $\beta$-H\"older continuous, for $\beta \in (0,1]$, in the neighborhood $\mathrm{Cl}(B_{\tilde{\delta}_0}(\nabla \psi(u_0)))\cup \nabla\psi(\mathrm{Cl}(B_{\delta_0/3}(u_0)))$, then, for all $v,w\in \mathrm{Cl}(B_{\tilde{\delta}_0}(\nabla \psi(u_0)))\cup \nabla\psi(\mathrm{Cl}(B_{\delta_0/3}(u_0)))$ \begin{align} \|\nabla \psi^{*}(v)- \nabla \psi^{*}(w)\| \leq \kappa (\delta^{\prime}_n)^{\beta}, \quad \text{whenever } \|v-w\|\leq C\delta^{\prime}_n, \nonumber \end{align} where $\kappa = \kappa(C,\beta, \mu, \delta_0)>0$ is a constant. As a consequence, $\tilde{\delta}_n$ is also at most $\kappa (\delta^{\prime}_n)^{\beta}$. This shows that the local rate of convergence of the rank map adapts to the local smoothness of the transport maps $\nabla \psi$ and $\nabla \psi^{*}$. Suppose that $\hat{\nu}_n$ is the empirical measure of $X_1,\ldots ,X_n$. Suppose that $\psi$ and $\hat{\psi}_n$ are two convex functions such that $\nabla \psi\# \mu =\nu$ and $\nabla \hat{\psi}_n\#\mu =\hat{\nu}_n$. By an application of the deterministic result Theorem~\ref{ppn:RateProp1}, we can bound the local uniform rate of convergence of $\partial \hat{\psi}_n$ and $\partial \hat{\psi}_n^*$ by the local $L_2$-rate of convergence of $\partial \hat{\psi}_n$. It is natural to ask if useful bounds on the local $L_2$-rate of convergence of the transport map $\partial \hat{\psi}_n$ are known. Obviously, $\delta^{(d+2)/2}_n$, as defined in \eqref{eq:DeltaDef}, is bounded from above by the global $L_2$-loss of $\hat{\psi}_n$, i.e., $\big[\int_{\mathcal{S}} \|\nabla \hat{\psi}_n(u)- \nabla \psi(u)\|^2 d\mu(u)\big]^{{1}/{2}}$. Recently,~\cite{HR} studied the rate of convergence of the $L_2$-loss function of transport maps. Under additional smoothness assumptions on $\psi$ (and $\nabla \psi$), \cite{HR} found the minimax rate of convergence of transport maps in the $L_2$-loss (when $n$ independent samples are available from $\mu$). 
We believe that a similar approach will provide a rate of convergence for $\delta_n$. However, a complete analysis of the rate of convergence of $\delta_n$ will require new techniques; it is beyond the scope of the present paper and will be pursued in follow-up work. In the special case when $\mu=\nu$, the analysis is much simpler, and we can use known results to obtain upper bounds on the rate of convergence of the global $L_2$-loss for the transport map $\nabla \hat{\psi}_n$, thereby yielding a rate for the local uniform loss via Theorem~\ref{ppn:RateProp1}. Indeed, this special case is important in its own right and has received much attention in the literature; see Section~\ref{sec:LR} in the Appendix for motivation and a detailed review. Our final result of this section, Corollary~\ref{thm:RateTheo} (proved in Section~\ref{sec:RateTheo}), demonstrates a local-global correspondence to yield a rate of convergence of the empirical quantile/rank maps when $\mu \equiv \nu$ and is a consequence of Theorem~\ref{ppn:RateProp1} and the main result of~\citet{FG15}. \begin{corollary}\label{thm:RateTheo} Let $\mu\equiv \nu$ be an absolutely continuous probability measure supported on $\mathcal{S} \subset \mathbb{R}^d$ ($d > 1$) such that $\int \|x\|^{q}d\nu(x)<\infty$, for some $q>2$. Suppose that $X_1, \ldots , X_n$ are i.i.d.~$ \nu$. Let $\hat{\psi}_n$ be a convex function such that $\nabla \hat{\psi}_n\# \mu = \hat \nu_n \equiv \frac{1}{n}\sum_{i=1}^{n}\delta_{X_i}$. Define \begin{align} \Psi(n,d,q):= \begin{cases} n^{-\frac{1}{2(d+2)}} + n^{-\frac{q-2}{q(d+2)}} & \text{if } d<4, q\neq 4,\\ n^{-\frac{1}{2(d+2)}} (\log(1+n))^{\frac{1}{d+2}} + n^{-\frac{q-2}{q(d+2)}} & \text{if }d=4, q\neq 4, \\ n^{-\frac{2}{d(d+2)}} + n^{-\frac{q-2}{q(d+2)}} & \text{if }d>4, q\neq \frac{d}{d-2}. \end{cases} \end{align} Fix $u_0\in \mathrm{Int}(\mathcal{S})$ and $\delta_0\equiv\delta_0(u_0)>0$ such that $B_{\delta_0}(u_0) \subset \mathcal{S}$. 
Then, there exists $C=C(\mu,d,q)>0$ such that, for all $n\geq 1$, \begin{align}\label{eq:RateAc} \qquad \mathbb{E}\Big[\sup_{u\in B_{\delta_0/3}(u_0)}\sup_{z \in \partial \hat{\psi}_n(u)}\|z-u\|\Big]\leq C \Psi(n,d,q). \end{align} Now, suppose that the support of $\mu$ is bounded. Then, \begin{align}\label{eq:RateAc2} \qquad \mathbb{E}\Big[\sup_{x\in B_{\delta_0/6}(u_0)}\sup_{w \in \partial \hat{\psi}^{*}_n(x)}\|w-x\|\Big]\leq C \Psi(n,d,q). \end{align} \end{corollary} To the best of our knowledge, the above result is the first attempt to study the local uniform behavior of transport (quantile/rank) maps. However, it is not clear to us whether the above bounds are tight when $d\ge 2$. We believe that it may be possible to improve our rate of convergence result under further assumptions on $\mu$. We hope to address this in future work. \section{Applications to nonparametric testing}\label{sec:Goodness-Fit-Test} \subsection{Two-sample goodness-of-fit testing in $\mathbb{R}^d$}\label{sec:2S-Goodness-Fit-Test} Suppose that we observe $X_1,\ldots, X_m$ i.i.d.~$\nu_X$ and $Y_1,\ldots, Y_n$ i.i.d.~$\nu_Y$, where $m, n \ge 1$, and $\nu_X$ and $\nu_Y$ are unknown absolutely continuous distributions on $\mathbb{R}^d$. We also assume that the two samples are drawn independently of each other. In this section we consider the two-sample equality of distribution hypothesis testing problem: \begin{equation}\label{eq:2-Sample-Test} H_0: \nu_X = \nu_Y\quad \mbox{ versus }\quad H_1: \nu_X \ne \nu_Y. \end{equation} The two-sample problem for multivariate data has been extensively studied, beginning with the works of~\cite{Weiss60, Bickel68}. Several graph-based methods have been proposed in the literature for this problem; see e.g.,~\cite{FR79, Sc86, Rosenbaum05, B15} and the references therein. Also see~\cite{BF04, SR13, EquivRKHS13} for distance- and kernel-based methods for the two-sample problem when $d \ge 1$. 
In this section we introduce another multivariate two-sample test that uses ideas from optimal transportation and, in particular, the quantile and rank functions defined in Section~\ref{sec:Q-R}. Let $\mu$ be an absolutely continuous distribution supported on a compact convex set $\mathcal{S} \subset \mathbb{R}^d$ having a density (w.r.t.~Lebesgue measure), e.g., $\mu = $ Uniform$([0,1]^d)$ or $\mu = $ Uniform$(B_1(0))$. Let $\hat Q_X$ and $\hat Q_Y$ be the sample quantile functions (as defined in~\eqref{eq:SampQ_n}) of the $X_i$'s and $Y_j$'s, respectively (w.r.t.~$\mu$). Let $\hat R_{X,Y}$ be the empirical rank map of the combined sample $X_1,\ldots, X_m, Y_1,\ldots, Y_n$ (w.r.t.~$\mu$). Note that, as in Section~\ref{sec:Ranks}, we define the rank at any data point as a randomized value (as defined in~\eqref{eq:Rank-X_i}). We use the following test statistic for testing~\eqref{eq:2-Sample-Test}: \begin{align}\label{eq:2-S-TS} T_{X,Y} & := \int_\mathcal{S} \|\hat R_{X,Y}(\hat Q_X(u)) - \hat R_{X,Y}(\hat Q_Y(u))\|^2 d \mu(u) \\ & = \mathbb{E}_{U}\Big[\big\|\hat R_{X,Y}(\hat Q_{X}(U))- \hat R_{X,Y}(\hat Q_{Y}(U))\big\|^2\Big] \nonumber \end{align} where $U\sim \mu$ is independent of $X_1, \ldots , X_m, Y_1, \ldots ,Y_n$ and the above expectation is taken w.r.t.~$U$. Although exactly computing $T_{X,Y}$ is possible (as the above integral reduces to a finite sum; see Section~\ref{sec:Simul-2S} for the details) it is computationally involved. One can easily approximate $T_{X,Y}$ using Monte Carlo. We reject $H_0$ when $T_{X,Y}$ is large. To motivate the form of the above test statistic consider the one-sample Cram\'{e}r-von Mises statistic when $d=1$. Let $\mathbb{F}_n$ be the empirical distribution of the data (when $d=1$) and $F$ be the true distribution function (assumed to be absolutely continuous). 
Then the Cram\'{e}r-von Mises statistic can be written as $$ \int_{\mathbb{R}} \{\mathbb{F}_n(x) - F(x)\}^2 dF(x) = \int_0^1 \{\mathbb{F}_n(F^{-1}(u)) - u\}^2 du.$$ Indeed,~\eqref{eq:2-S-TS} is similar to the right side of the above display; however as we are now in the two-sample case, $F^{-1}$ is unknown and is replaced by the sample quantile function. The connection to the Cram\'{e}r-von Mises statistic above immediately raises the following question: Is $T_{X,Y}$ distribution-free under $H_0$ (as the Cram\'{e}r-von Mises statistic when $d=1$)? Unfortunately, we do not know the exact answer to this question; see Section~\ref{sec:Simul-2S} for a detailed discussion with simulation studies. Our simulations suggest that this might very well be the case, at least for $d=2$. We plan to study this in the future. In the following lemma (proved in Section~\ref{pf:Distfree}) we show that $\hat R_{X,Y}(\hat Q_{X}(U))$ and $\hat R_{X,Y}(\hat Q_{Y}(U))$ (as in~\eqref{eq:2-S-TS}) are both marginally distributed as $\mu$ under $H_0$ (when $\mu$ is the uniform distribution on $\mathcal{S}$), and are thus distribution-free. \begin{lemma}\label{lem:Distfree} Let $\mathcal{S}$ be a compact convex set in $\mathbb{R}^d$. Let $\mu = $ Uniform$(\mathcal{S})$, i.e., $\mu$ is the uniform distribution on $\mathcal{S}$. Suppose that $H_0$ is true, i.e., $\nu_X = \nu_Y$. Then $\hat R_{X,Y}(\hat Q_{X}(U)) \sim \mu$ and $\hat R_{X,Y}(\hat Q_{Y}(U)) \sim \mu$, and hence their distributions do not depend on $\nu_X \equiv \nu_Y$. \end{lemma} The following result (proved in Section~\ref{pf:Power1}) shows that our proposed test has asymptotic power 1 for any two distributions $\nu_X \ne \nu_Y$, as $m,n \to \infty$. \begin{proposition}[Consistency]\label{lem:Power1} Suppose that $H_0: \nu_X = \nu_Y = \nu$ holds. Assume that $\nu$ is supported on a domain $\mathcal{Y} \subset \mathbb{R}^d$ such that the quantile map $Q:\mathrm{Int}(\mathcal{S})\to \mathrm{Int}(\mathcal{Y})$ is a homeomorphism. 
Also, assume that $m,n\to \infty$ such that $\frac{m}{m+n}\to \theta\in (0,1)$. Then, under $H_0$, as $m,n\to \infty$, \begin{align}\label{eq:Consis1} T_{X,Y} \stackrel{a.s.}{\longrightarrow} 0. \end{align} Now, suppose $X_1, \ldots , X_m\stackrel{i.i.d}{\sim} \nu_X$ and $Y_1, \ldots , Y_n\stackrel{i.i.d}{\sim} \nu_Y$ where $\nu_X \ne \nu_Y$ are two distinct measures supported on domains $\mathcal{Y}_X$ and $\mathcal{Y}_Y$ respectively. Denote the quantile maps of the measures $\nu_X$, $\nu_Y$ and $\theta \nu_X+(1-\theta)\nu_Y$ by $Q_X$, $Q_Y$ and $Q_{X,Y}$ respectively. Assume that $Q_X,Q_Y, Q_{X,Y}:\mathrm{Int}(\mathcal{S})\to \mathrm{Int}(\mathcal{Y}_X \cup \mathcal{Y}_Y)$ are homeomorphisms. Then, there exists $c>0$ such that \begin{align}\label{eq:Consis2} T_{X,Y} \stackrel{a.s.}{\longrightarrow} c, \qquad \mbox{ as } m,n\to \infty. \end{align} \end{proposition} \begin{remark}[Finding the critical value of $T_{X,Y}$] Although we have shown (in Lemma~\ref{lem:Distfree}) that $\hat R_{X,Y}(\hat Q_{X}(U)) \sim \mu$ and $\hat R_{X,Y}(\hat Q_{Y}(U)) \sim \mu$ (and thus both quantities are distribution-free) it is not immediately clear if the test statistic $T_{X,Y}$ is distribution-free. In Section~\ref{sec:Simul-2S} we provide simulation evidence that illustrates that a properly normalized version of $T_{X,Y}$ may be asymptotically distribution-free, at least when $d=2$. In any case, the critical value of the test under $H_0$ can always be computed using the following permutation principle: Under the null hypothesis $X_1,\ldots, X_m, Y_1, \ldots, Y_n$ are i.i.d., and thus we can consider any partition of the $m+n$ data points into two sets of sizes $m$ and $n$ and recompute our test statistic to simulate its null distribution. \end{remark} Our test statistic $T_{X,Y}$ (see~\eqref{eq:2-S-TS}) is inspired by the form of the Cram\'{e}r-von Mises (one-sample) goodness-of-fit statistic. One can, of course, use other test statistics. 
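To make the rank computation and the permutation calibration concrete, the following sketch implements a simplified rank-based two-sample permutation test. It is our illustration, not the procedure used in the paper (whose maps are computed via semi-discrete optimal transport in the testOTM R package): here the empirical ranks are obtained by a discrete assignment between the pooled sample and i.i.d.~draws from $\mu$, a proxy in the spirit of the grid-based construction of~\cite{dCHM}, and the statistic compares mean ranks instead of composing $\hat R_{X,Y}$ with the quantile maps as in $T_{X,Y}$; all function names are ours.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def assignment_ranks(points, rng):
    """Multivariate ranks of `points` w.r.t. mu = Uniform([0,1]^d).

    The n data points are matched to n reference draws from mu by solving
    the discrete optimal-transport (assignment) problem with squared
    Euclidean cost; the "rank" of a data point is its matched reference
    point.  (A discrete proxy for the semi-discrete maps in the paper.)
    """
    n, d = points.shape
    ref = rng.random((n, d))                  # discretization of mu
    cost = cdist(points, ref, metric="sqeuclidean")
    rows, cols = linear_sum_assignment(cost)  # optimal matching
    ranks = np.empty_like(ref)
    ranks[rows] = ref[cols]
    return ranks

def rank_permutation_test(X, Y, n_perm=200, seed=0):
    """Permutation two-sample test built on pooled multivariate ranks.

    Statistic: ||mean pooled rank of the X's - mean pooled rank of the
    Y's||^2, a simplified stand-in for T_{X,Y}.  The p-value follows the
    permutation principle of the Remark above.
    """
    rng = np.random.default_rng(seed)
    m = len(X)
    R = assignment_ranks(np.vstack([X, Y]), rng)

    def stat(r):
        return float(np.sum((r[:m].mean(axis=0) - r[m:].mean(axis=0)) ** 2))

    observed = stat(R)
    # Under H_0 the pooled ranks are exchangeable, so label permutations
    # simulate the null distribution of the statistic.
    exceed = sum(stat(R[rng.permutation(len(R))]) >= observed
                 for _ in range(n_perm))
    return observed, (1 + exceed) / (1 + n_perm)
```

The permutation p-value is valid here precisely because, under $H_0$, the pooled ranks are exchangeable.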
A key observation for constructing other tests is to realize that, under $H_0$, $\hat R_{X,Y}(X_1), \ldots, \hat R_{X,Y}(X_m), \hat R_{X,Y}(Y_1), \ldots, \hat R_{X,Y}(Y_n)$ are {\it exchangeable} and are all marginally distributed as $\mu$. \subsection{Mutual independence testing}\label{sec:IndepTest} Let $X = (X^{(1)}, \ldots, X^{(k)}) \sim \nu$ be a random vector in $\mathbb{R}^d$ where $k \ge 2$ and $X^{(j)}\sim \nu_j$ is a random vector in $\mathbb{R}^{d_j}$, for $j = 1, \ldots, k$, with $\sum_{j=1}^k d_j = d$. In this section we consider the problem of testing the mutual independence of $X^{(1)},\ldots, X^{(k)}$. Specifically, we consider testing whether $\nu$ is equal to the product measure $\nu_1 \otimes \ldots \otimes \nu_k$, for some $\nu_1,\ldots, \nu_k$, i.e., \begin{equation}\label{eq:Ind-Test} H_0: \nu = \nu_1 \otimes \ldots \otimes \nu_k, \qquad \mbox{ versus } \qquad H_1: \nu \ne \nu_1 \otimes \ldots \otimes \nu_k, \end{equation} when we observe i.i.d.~data from $\nu$. This is again a fundamental problem in statistics, and many approaches have been investigated in the literature; see e.g.,~\cite{Blomqvist50, BlumEtAl61},~\cite[Chapter 8]{HW99} and the references therein. Recently, the use of kernel (see e.g.,~\cite{GrettonKernelMeasInd05, EquivRKHS13, Lyons13}) and distance covariance (see e.g.,~\cite{SzekelyBDCov09, SzekelyCorrDist07, 4-Axioms-MS19, SR13}) based methods has become very popular. In this section, we use ideas from optimal transportation to construct a test for~\eqref{eq:Ind-Test}. For simplicity of notation, let us first assume that $k=2$. Let $\{Z_i \equiv (X_i,Y_i): 1 \le i \le n\}$ be $n$ i.i.d.~{$\nu$}, assumed to be absolutely continuous on $\mathbb{R}^{d_X} \times \mathbb{R}^{d_Y}$; here $d_X, d_Y \ge 1$ and $d_X + d_Y =d$. Further, we assume that $X \sim \nu_X$ and $Y \sim \nu_Y$. 
In this section we want to test the hypothesis of mutual independence between $X$ and $Y$, i.e., we want to test: \begin{equation}\label{eq:Ind-Test-2} \qquad {H_0: \nu = \nu_X \otimes \nu_Y}\quad \mbox{ versus }\quad {H_1: \nu \ne \nu_X \otimes \nu_Y}. \end{equation} Let ${\mu_X }= $ Uniform($[0,1]^{d_X}$), $ {\mu_Y} = $ Uniform($[0,1]^{d_Y}$) and let {$\mu := \mu_X \otimes \mu_Y$} = Uniform($[0,1]^d$). We define $\hat R: \mathbb{R}^d \to [0,1]^d$ and $\hat Q:[0,1]^d \to \mathbb{R}^d$ to be the empirical rank and quantile maps of the joint sample $(X_1,Y_1),\ldots, (X_n, Y_n)$. Let ${\hat R_X}: \mathbb{R}^{d_X} \to \mathbb{R}^{d_X}$ be the empirical rank map of $X_1,\ldots, X_n$; similarly let {$\hat R_Y: \mathbb{R}^{d_Y} \to \mathbb{R}^{d_Y}$} be the sample {rank map} obtained from $Y_1,\ldots, Y_n$. Define ${\tilde R := (\hat R_X, \hat R_Y)}:\mathbb{R}^d \to [0,1]^d$. We consider the following test statistic: \begin{equation}\label{eq:Ind-TS} T_{n} := {\int_{[0,1]^d} \|\hat R(\hat Q(u)) - \tilde R(\hat Q(u))\|^2 d u } = \frac{1}{n} \sum_{i=1}^n \|\hat R(Z_i) - \tilde R(Z_i)\|^2. \end{equation} Note that the above integral reduces to a finite average as $\hat Q(\cdot)$ can only take $n$ distinct values a.s. We reject the null hypothesis in~\eqref{eq:Ind-Test} when $T_{n}$ is large. As in Section~\ref{sec:2S-Goodness-Fit-Test}, the critical value of the above test can be computed using the permutation principle. Although it is beyond the scope of the present paper, we illustrate through simulations in Section~\ref{sec:Simul-Ind} that a properly normalized version of $T_{n}$ may be (asymptotically) distribution-free (at least when $d=2$). The following result, proved in Section~\ref{sec:Pf:lem:Power2}, describes the asymptotic behavior of the proposed test statistic under the null and alternative hypotheses; in particular, it shows that the power of the test converges to 1, under mild regularity conditions. 
\begin{proposition}[Consistency]\label{lem:Power2} We have $\hat R(\hat Q(U)) \sim \mu$, where $U \sim \mu = $ Uniform$([0,1]^d)$. Suppose $H_0$ holds, i.e., $\nu = \nu_X \otimes \nu_Y$. Then, $\tilde R(\hat Q(U)) \sim \mu$. Assume further that $\nu_{X}$ and $\nu_{Y}$ are two probability measures supported on the domains $\mathcal{Y}_X\subset \mathbb{R}^{d_{X}}$ and $\mathcal{Y}_{Y}\subset \mathbb{R}^{d_{Y}}$ respectively. Denote the quantile maps of the measures $\nu_{X}$, $\nu_{Y}$ and $\nu$ w.r.t. the measures $\mathrm{Uniform}([0,1]^{d_X})$, $\mathrm{Uniform}([0,1]^{d_Y})$ and $\mathrm{Uniform}([0,1]^{d})$ by $Q_X$, $Q_Y$ and $Q$ respectively, where $d=d_X+d_Y$. Assume that $Q_X:(0,1)^{d_X}\to\mathrm{Int}(\mathcal{Y}_X)$, $Q_Y:(0,1)^{d_Y}\to\mathrm{Int}(\mathcal{Y}_Y)$ and $Q:(0,1)^{d}\to\mathrm{Int}(\mathcal{Y}_X \times \mathcal{Y}_Y)$ are homeomorphisms. Then, under $H_0$, as $n\to\infty$, \begin{align}\label{eq:Consis21} T_{n} \stackrel{a.s.}{\longrightarrow} 0. \end{align} Furthermore, if $\nu \ne \nu_X \otimes \nu_Y$, then, there exists $c>0$ such that \begin{align}\label{eq:Consis22} T_{n} \stackrel{a.s.}{\longrightarrow} c, \qquad \mbox{ as } n \to \infty. \end{align} \end{proposition} \section{Discussion}\label{sec:Disc} In this paper we have studied a notion of multivariate ranks and quantiles based on the theory of optimal transportation. We have also proposed multivariate goodness-of-fit tests based on these empirical ranks and quantiles. One of the main motivations for proposing such test statistics (see e.g.,~\eqref{eq:2-S-TS} and~\eqref{eq:Ind-TS}) is that the resulting tests can be asymptotically distribution-free, borrowing the analogy from one dimension and the established distribution-freeness of the multivariate ranks (see Lemma~\ref{lem:Distfree}, Proposition~\ref{lem:Power2} and the simulation studies in Section~\ref{sec:Simul}); see Sections~\ref{sec:2S-Goodness-Fit-Test} and~\ref{sec:IndepTest} for the details. 
However, we do not have a proof of the distribution-freeness of the proposed tests, and believe that establishing this would be an interesting future problem. In Theorem~\ref{thm:GCProp} we establish the uniform a.s.~consistency of the empirical rank and quantile maps under the assumption that their population analogues are homeomorphisms. In Proposition~\ref{thm:Q_prop} we also provide a sufficient condition for this to hold. It is worth exploring if other sufficient conditions that arise naturally in statistics also imply the same conclusions. Note that when $d=1$, the empirical rank function converges to its population counterpart under no assumptions on the target measure. It is not clear to us if such a result holds beyond $d=1$ without further assumptions. Corollary~\ref{thm:RateTheo} shows a local uniform rate of convergence of the empirical quantile and rank functions when the reference measure is the same as the target measure. However, our proof strategy fails to generalize when the reference measure is different from the target measure. We consider this an important open problem; also see~\cite{AST19}. \section*{Acknowledgements} We are extremely grateful to Peng Xu for creating the R-package \url{https://github.com/Francis-Hsu/testOTM} (see~\cite{OTM}) for the computation of all the estimators studied in this paper. In particular, all the plots in the paper are obtained from his R-package. The second author would like to thank Marc Hallin for sharing an early version of his manuscript and Johan Segers for helpful discussions.
\section{Introduction} In the inner $\sim$100~kpc of a galaxy cluster, the hot (10$^7-10^8$~K) intracluster medium is often sufficiently dense that the cooling time, which is roughly $t_{cool} \propto T_{\rm{x}}^{1/2} n_e^{-1}$, is shorter than a Hubble time. In these so-called ``cool core'' clusters, cooling gas should sink toward the center of the cluster, establishing a cooling flow which could deposit as much as $\sim$1000 M$_{\odot}$ yr$^{-1}$ of cold gas onto the central brightest cluster galaxy \citep[for a review, see][]{fabian94}. The fact that brightest cluster galaxies (hereafter BCGs) are rarely forming stars at such prodigious rates (with the exception of the newly-discovered Phoenix cluster; McDonald et~al.~ 2012, 2013) is prime evidence that some form of feedback offsets this cooling. The most likely culprit is mechanical feedback from the central active galactic nucleus \citep[AGN; see][]{churazov01, mcnamara07,mcnamara12, fabian12}, although other heat sources such as particle heating \citep{mathews09}, blazars \citep{pfrommer12}, and mergers \citep{gomez02} are also viable. If the balance between energy input from feedback and energy loss due to cooling is not exact, one would expect a residual cooling flow to develop. There is substantial evidence for such ``reduced cooling flows''. Clumps and filaments of cooling intracluster gas have been detected at 10$^6$--10$^7$~K in the cores of clusters via high resolution X-ray spectroscopy \citep[e.g.,][]{peterson06,sanders10} and OVI emission in the far ultraviolet \citep[e.g.,][]{bregman01,oegerle01,bregman06}. \cite{sparks12} recently reported evidence for 10$^5$~K gas (as traced by the C IV $\lambda$1549\AA\ emission line) in the core of the Virgo cluster. 
Warm (10$^4$~K) gas is nearly ubiquitous in cool core clusters \citep[e.g.,][]{Hu85,johnstone87, heckman89, crawford99, edwards07, hatch07, mcdonald10, mcdonald11a}, as are both warm \citep[e.g.,][]{jaffe05, edge10, donahue11,lim12} and cold \citep[e.g.,][]{edge01,edge03,salome03,salome08,lim08,mcdonald12b} molecular gas components. Finally, perhaps the most convincing evidence that a fraction of the cooling intracluster medium (ICM) is reaching low temperatures is the fact that nearly all cool core clusters have star-forming BCGs, with star formation rates that correlate with the ICM cooling rate \citep[e.g.,][]{mcnamara89,odea08,rafferty08,donahue10,hicks10,mcdonald11b}. Thus, while there is significant evidence that some form of feedback is offsetting a large fraction of energy loss due to cooling in the ICM, it is also clear that this balance is imperfect and likely to vary on both short (periodic outbursts) and long (evolution) timescales. While the physical processes that conspire to prevent or allow the formation of a dense, cool core in the ICM are not fully understood, there has been significant effort towards understanding the overall properties of these systems. Early, large surveys, including those by \cite{white97}, \cite{peres98}, and \cite{allen00}, have formed the basis of our understanding of cooling flows (or lack thereof). These studies established the distribution of cooling properties, including the mass deposition rate, the cooling radius, and the central cooling time, for large X-ray flux-limited samples of nearby galaxy clusters. These studies showed that, among other things, clusters with strong cooling signatures tend to have multi-phase (i.e., H$\alpha$-emitting) gas, radio-loud BCGs, and cooling rates that correlate with the total X-ray luminosity. Studies mentioned in the previous paragraph have largely built upon these early, pioneering works to classify the cooling properties of the intracluster medium. 
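The cooling-time scaling $t_{cool} \propto T_{\rm{x}}^{1/2} n_e^{-1}$ quoted at the start of this section can be turned into rough numbers. The following back-of-the-envelope sketch is our own illustration (not taken from this work); it assumes a fully ionized hydrogen plasma cooling purely by thermal bremsstrahlung, with an approximate cooling function $\Lambda(T) \approx 2.5\times 10^{-27}\sqrt{T}\ \mathrm{erg\,cm^3\,s^{-1}}$, so that $t_{cool} = 3 k_B T / (n_e \Lambda(T))$.

```python
import math

K_B = 1.380649e-16          # Boltzmann constant [erg/K]
YR = 3.156e7                # seconds per year
HUBBLE_TIME_YR = 1.4e10     # ~1/H0 for H0 = 70 km/s/Mpc

def t_cool_yr(T, n_e, lam0=2.5e-27):
    """Bremsstrahlung cooling time [yr] for a fully ionized H plasma.

    Lambda(T) ~ lam0 * sqrt(T) erg cm^3 s^-1 is a rough free-free
    emissivity (assumed normalization), giving
    t_cool = 3 k_B T / (n_e Lambda), which reproduces the scaling
    t_cool proportional to T^{1/2} / n_e.
    """
    lam = lam0 * math.sqrt(T)
    return 3.0 * K_B * T / (n_e * lam) / YR

# Dense cool core: cooling time well below a Hubble time ...
core = t_cool_yr(T=5e7, n_e=0.05)
# ... but not at the tenuous outskirts of the same cluster.
outskirts = t_cool_yr(T=5e7, n_e=1e-3)
```

For core-like values ($T = 5\times10^{7}$~K, $n_e = 0.05$~cm$^{-3}$) this gives $t_{cool} \sim 7\times10^{8}$~yr, well below a Hubble time, whereas at outskirt-like densities ($n_e \sim 10^{-3}$~cm$^{-3}$) it exceeds a Hubble time; this contrast is what separates cool cores from the bulk of the ICM.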
While the properties of nearby ($z\lesssim0.3$) cool core clusters are well documented, very little is presently known about how cooling flows have evolved. Early work by \cite{donahue92} reported that, while the general properties of cooling flows appear to be unchanged since $z\sim0.3$, cool cores were more common by roughly a factor of two at this epoch. More recently, utilizing higher quality data from the \emph{Chandra X-ray Observatory}, as well as ground-based optical data, on much larger, more complete samples, various studies have found evidence that there may be a decline in the fraction of clusters harboring strong (cuspy) cool cores with increasing redshift \citep{vikhlinin07, santos10, samuele11,mcdonald11c}. These studies all report cool core fractions $\lesssim$10\% at $z\gtrsim0.5$, indicating that cool core clusters are a recent phenomenon. Indeed, only a small number of clusters with strong cool cores are known at $z>0.5$ \citep[e.g.,][]{siemiginowska10, russell12, santos12, mcdonald12c}. It has been suggested that, since most of these samples were drawn from early surveys with the ROSAT X-ray telescope, they may be biased against cool cores at high redshifts due to their point-like appearance compared to the ROSAT resolution. The fact that optically-selected samples \citep{mcdonald11c} show the same evolution suggests that such a bias may not be a serious issue. One significant issue affecting our understanding of the evolution of ICM cooling is the lack of large samples of high-redshift clusters with a well understood selection. The South Pole Telescope \citep[SPT;][]{carlstrom11} recently completed a 2500 square degree survey that has discovered hundreds of massive, high-redshift clusters using the Sunyaev-Zel'dovich \citep[SZ;][]{sunyaev72} effect. 
Unlike X-ray and optical surveys, which have strong surface brightness biases, the SPT selection is nearly redshift-independent \citep[at $z>0.3$, see][]{song12,reichardt13} and, based on simulations, is not expected to be significantly biased by the presence of cool cores \citep[][Lin et~al.~ in prep]{motl05,pipino10}. In principle, such a survey should be able to trace the evolution of cool cores in the most massive clusters out to $z>1$. Indeed, \cite{semler12} showed, in a pilot study of 13 SPT-selected clusters, that there is a significant population of cool core clusters at $z>0.5$, contrary to the majority of the results reported in the literature at the time. Furthermore, the most extreme cool core cluster known is at $z=0.597$, the Phoenix cluster \citep[SPT-CLJ2344-4243;][]{mcdonald12c}, and was discovered by the SPT. Taken together, these results suggest that ICM cooling has not changed drastically in the past $\sim$8 Gyr. In this work, we expand significantly on \cite{semler12}, presenting \emph{Chandra} X-ray observations of 83 massive, SPT-selected clusters. The majority of these observations were completed as part of a recent \emph{Chandra X-ray Visionary Project} (PI B. Benson). With these data we are able to address two outstanding questions about the evolution of the cooling intracluster medium: i) Were cool cores less common at $z>0.5$? and ii) How have the properties of cooling flows evolved in the most massive galaxy clusters over the past $\sim$8 Gyr? In \S2 we present the sample, describing first the selection and observations, followed by the analysis. In \S3 we present the major results of this work, following in spirit the early works of \cite{white97} and \cite{peres98} which identified the cooling flow properties of low-redshift, X-ray selected clusters. The implications of these results are discussed in \S4. Throughout this work, we assume H$_0$=70 km s$^{-1}$ Mpc$^{-1}$, $\Omega_M$ = 0.27, and $\Omega_{\Lambda}$ = 0.73. 
\section{Data and Analysis} \subsection{Sample Definition} The clusters used in this work were selected based on their Sunyaev-Zel'dovich (SZ) signature in the 2500 deg$^2$ SPT-SZ survey. The SPT-SZ survey was completed in November 2011, producing maps in three frequency bands (95, 150, and 220 GHz), with a key science goal of discovering clusters via the SZ effect \citep{staniszewski09, vanderlinde10, williamson11, benson13,reichardt13}. The clusters considered in this work have additionally been observed with the \emph{Chandra X-ray Observatory}, with exposures typically sufficient to obtain $\sim$2000 X-ray source counts. The majority of the clusters have been observed through a Chandra X-ray Visionary Project to obtain X-ray imaging of the 80 clusters detected with the highest SZ significance ($\xi$) in the first 2000 deg$^2$ of the 2500 deg$^2$ SPT-SZ survey at $z > 0.4$ (hereafter B13; Benson et~al.~ in prep.) While B13 analyze the full XVP sample, we exclude six of the 80 clusters only observed with XMM-\emph{Newton}, which does not have sufficient angular resolution to resolve the cool cores in typical high-redshift clusters. In addition, we include nine clusters at $z > 0.3$ also detected in the SPT-SZ survey that were observed by Chandra, a sub-sample that primarily consists of clusters observed either in previous Chandra GO and GTO proposals from the SPT-SZ collaboration, or in other proposals to observe SZ-selected clusters from the Atacama Cosmology Telescope \citep[ACT;][]{marriage11} and Planck \citep{planck11} collaborations. We note that every SPT-selected cluster that was targeted with \emph{Chandra} yielded an X-ray detection -- perhaps unsurprising due to the dependence of both techniques on a rich ICM. The final sample used in this work, referred to hereafter as SPT-XVP, is summarized in Table 1. 
The sample consists of 83 clusters, spanning a redshift range of $0.3 < z < 1.2$ and a mass range of $\sim 2 \times 10^{14} < $~M$_{500} < 20 \times 10^{14}$~M$_{\odot} / h_{70}$. The clusters were all identified in the SPT-SZ survey maps with a SPT detection significance, $\xi$, spanning a range from $5.7 < \xi < 43$. As was done in \cite{vanderlinde10}, we predict the SPT survey completeness using cosmological and scaling relation constraints of the $\xi$-mass relation. We assume the $\Lambda$CDM cosmological constraints from \cite{reichardt13} when using a CMB data set and the SPT cluster catalog. At our median redshift of $z\sim 0.7$, the SPT-XVP sample is expected to be $\sim$50\% complete at $M_{500} = 4 \times 10^{14} M_{\odot} / h_{70}$ and nearly 100\% complete at $6 \times 10^{14} M_{\odot} / h_{70}$. These completeness thresholds are nearly redshift independent, varying by $\lesssim$15\% over the redshift range of the sample. \begin{longtable}{@{\extracolsep{\fill}}l c c c c} \hline\hline Name & $\alpha$ [$^{\circ}$] & $\delta$ [$^{\circ}$]& OBSIDs \\ \hline SPT-CLJ0000-5748 & 0.250 & -57.809 & 9335\\ SPT-CLJ0013-4906 & 3.331 & -49.116 & 13462\\ SPT-CLJ0014-4952 & 3.690 & -49.881 & 13471\\ SPT-CLJ0033-6326 & 8.469 & -63.444 & 13483\\ SPT-CLJ0037-5047 & 9.447 & -50.788 & 13493\\ SPT-CLJ0040-4407 & 10.208 & -44.132 & 13395\\ SPT-CLJ0058-6145 & 14.586 & -61.768 & 13479\\ SPT-CLJ0102-4603 & 15.677 & -46.072 & 13485\\ SPT-CLJ0102-4915 & 15.734 & -49.266 & 12258\\ SPT-CLJ0123-4821 & 20.796 & -48.358 & 13491\\ SPT-CLJ0142-5032 & 25.546 & -50.540 & 13467\\ SPT-CLJ0151-5954 & 27.857 & -59.908 & 13480\\ SPT-CLJ0156-5541 & 29.042 & -55.698 & 13489\\ SPT-CLJ0200-4852 & 30.141 & -48.872 & 13487\\ SPT-CLJ0212-4657 & 33.108 & -46.950 & 13464\\ SPT-CLJ0217-5245 & 34.304 & -52.763 & 12269\\ SPT-CLJ0232-5257 & 38.202 & -52.953 & 12263\\ SPT-CLJ0234-5831 & 38.677 & -58.523 & 13403\\ SPT-CLJ0236-4938 & 39.258 & -49.637 & 12266\\ SPT-CLJ0243-5930 & 40.865 & -59.515 & 
13484,15573\\ SPT-CLJ0252-4824 & 43.212 & -48.415 & 13494\\ SPT-CLJ0256-5617 & 44.106 & -56.298 & 13481,14448\\ SPT-CLJ0304-4401 & 46.067 & -44.033 & 13402\\ SPT-CLJ0304-4921 & 46.067 & -49.357 & 12265\\ SPT-CLJ0307-5042 & 46.961 & -50.705 & 13476\\ SPT-CLJ0307-6225 & 46.830 & -62.436 & 12191\\ SPT-CLJ0310-4647 & 47.634 & -46.785 & 13492\\ SPT-CLJ0324-6236 & 51.053 & -62.598 & 12181,13137,13213\\ SPT-CLJ0330-5228 & 52.728 & -52.473 & 0893\\ SPT-CLJ0334-4659 & 53.547 & -46.996 & 13470\\ SPT-CLJ0346-5439 & 56.733 & -54.649 & 12270\\ SPT-CLJ0348-4515 & 57.075 & -45.247 & 13465\\ SPT-CLJ0352-5647 & 58.241 & -56.798 & 13490,15571\\ SPT-CLJ0406-4805 & 61.731 & -48.082 & 13477\\ SPT-CLJ0411-4819 & 62.814 & -48.320 & 13396\\ SPT-CLJ0417-4748 & 64.347 & -47.813 & 13397\\ SPT-CLJ0426-5455 & 66.520 & -54.918 & 13472\\ SPT-CLJ0438-5419 & 69.575 & -54.322 & 12259\\ SPT-CLJ0441-4855 & 70.451 & -48.924 & 13475,14371,14372\\ SPT-CLJ0446-5849 & 71.514 & -58.830 & 13482,15560\\ SPT-CLJ0449-4901 & 72.275 & -49.025 & 13473\\ SPT-CLJ0456-5116 & 74.118 & -51.278 & 13474\\ SPT-CLJ0509-5342 & 77.339 & -53.704 & 9432\\ SPT-CLJ0528-5300 & 82.023 & -52.998 & 11747,11874,12092,13126\\ SPT-CLJ0533-5005 & 83.406 & -50.096 & 11748,12001,12002\\ SPT-CLJ0542-4100 & 85.709 & -41.000 & 0914\\ SPT-CLJ0546-5345$^a$ & 86.655 & -53.759 & 9332,9336\\ SPT-CLJ0551-5709 & 87.896 & -57.147 & 11743,11871\\ SPT-CLJ0555-6406 & 88.864 & -64.105 & 13404\\ SPT-CLJ0559-5249 & 89.933 & -52.827 & 12264,13116,13117\\ SPT-CLJ0616-5227 & 94.144 & -52.453 & 12261,13127\\ SPT-CLJ0655-5234 & 103.974 & -52.568 & 13486\\ SPT-CLJ2031-4037 & 307.966 & -40.623 & 13517\\ SPT-CLJ2034-5936 & 308.537 & -59.605 & 12182\\ SPT-CLJ2035-5251 & 308.793 & -52.855 & 13466\\ SPT-CLJ2043-5035 & 310.823 & -50.592 & 13478\\ SPT-CLJ2106-5844$^b$ & 316.518 & -58.743 & 12180\\ SPT-CLJ2135-5726 & 323.912 & -57.439 & 13463\\ SPT-CLJ2145-5644 & 326.468 & -56.749 & 13398\\ SPT-CLJ2146-4632 & 326.645 & -46.549 & 13469\\ SPT-CLJ2148-6116 & 327.181 & 
-61.279 & 13488\\ SPT-CLJ2218-4519 & 334.746 & -45.316 & 13501\\ SPT-CLJ2222-4834 & 335.712 & -48.577 & 13497\\ SPT-CLJ2232-5959 & 338.141 & -59.998 & 13502\\ SPT-CLJ2233-5339 & 338.319 & -53.654 & 13504\\ SPT-CLJ2236-4555 & 339.219 & -45.930 & 13507,15266\\ SPT-CLJ2245-6206 & 341.260 & -62.116 & 13499\\ SPT-CLJ2248-4431 & 342.183 & -44.530 & 4966\\ SPT-CLJ2258-4044 & 344.706 & -40.740 & 13495\\ SPT-CLJ2259-6057 & 344.752 & -60.960 & 13498\\ SPT-CLJ2301-4023 & 345.471 & -40.389 & 13505\\ SPT-CLJ2306-6505 & 346.734 & -65.090 & 13503\\ SPT-CLJ2325-4111 & 351.302 & -41.196 & 13405\\ SPT-CLJ2331-5051 & 352.963 & -50.865 & 9333\\ SPT-CLJ2335-4544 & 353.785 & -45.739 & 13496\\ SPT-CLJ2337-5942 & 354.352 & -59.706 & 11859\\ SPT-CLJ2341-5119 & 355.300 & -51.329 & 11799\\ SPT-CLJ2342-5411 & 355.692 & -54.185 & 11741,11870,12014,12091\\ SPT-CLJ2344-4243$^c$ & 356.183 & -42.720 & 13401\\ SPT-CLJ2345-6405 & 356.250 & -64.099 & 13500\\ SPT-CLJ2352-4657 & 358.068 & -46.960 & 13506\\ SPT-CLJ2355-5055 & 358.948 & -50.928 & 11746\\ SPT-CLJ2359-5009 & 359.933 & -50.170 & 9334,11742,11864,11997\\ \hline \\ \multicolumn{4}{l}{Table 1. Summary of Chandra X-ray observations. Positions listed} \\ \multicolumn{4}{l}{here are of the X-ray centroid (\S2.2). The fourth column provides}\\ \multicolumn{4}{l}{the observational IDs from the Chandra X-ray Observatory.} \\ \multicolumn{4}{l}{$^a$: \cite{brodwin10}}\\ \multicolumn{4}{l}{$^b$: \cite{foley11}}\\ \multicolumn{4}{l}{$^c$: \cite{mcdonald12c}} \label{table:sample} \end{longtable} \subsection{Data Reduction and Analysis} Our basic data reduction and analysis follows closely that outlined in \cite{vikhlinin05} and \cite{andersson11}. Briefly, this procedure includes filtering for background flares, applying the latest calibration corrections, and determining the appropriate blank sky background. 
In addition to using blank-sky backgrounds, we simultaneously model additional background components from Galactic sources as well as unresolved cosmic X-ray background (CXB) sources in off-source regions. Point sources were identified using an automated routine following a wavelet decomposition technique \citep{vikhlinin98}, and then visually inspected. Clumpy, asymmetric substructure was masked by hand, and excluded in calculations of the global temperature. The center of the cluster was chosen by iteratively measuring the centroid in a 250--500~kpc annulus. This choice, rather than the peak of emission, can play a significant role in whether the cluster is ultimately classified as a cool core -- a subject we will return to in \S4. Global cluster properties (L$_{X,500}$, M$_{500}$, T$_{X,500}$, M$_{g,500}$) used in this work are derived in B13, following closely the procedures described in \cite{andersson11}. For each of these quantities, the subscript refers to the quantity measured within R$_{500}$ -- the radius within which the average enclosed density is 500 times the critical density. We estimate R$_{500}$ by requiring the measured quantities (T$_X$, M$_g$, Y$_X$) to satisfy a set of scaling relations between T$_{X,500}$, M$_{g,500}$, and Y$_{X,500}$ and M$_{500}$ \citep{vikhlinin09}. Each of these three scaling relations is individually satisfied by iteratively adjusting R$_{500}$. In this paper, we use R$_{500}$ from the Y$_X$--M relation only. Further details on both the data reduction and the derivation of global X-ray properties can be found in \cite{vikhlinin05} and \cite{andersson11}, respectively. 
\subsection{Surface Brightness Profiles and Concentration Measurements} \label{sec:sb} The surface brightness profile for each cluster, extracted in the energy range 0.7--2.0~keV, is measured in a series of 20 annuli, with the outer radii for each annulus defined as: \begin{equation} r_i=1.5\textrm{R}_{500}\left(\frac{i}{20}\right)^{1.5} ~~~i=1\ldots20~. \end{equation} Following the techniques described in \cite{vikhlinin06}, we correct these surface brightness profiles for spatial variations in temperature, metallicity, and the telescope effective area. Calibrated surface brightness profiles (see Appendix A) are expressed as a projected emission measure integral, $\int n_en_p dl$, where $n_e$ and $n_p$ are the electron and proton densities, respectively. We model the calibrated surface brightness profile with a modified beta model: \begin{equation} n_en_p = n_0^2\frac{(r/r_c)^{-\alpha}}{(1+r^2/r_c^2)^{3\beta-\alpha/2}}\frac{1}{(1+r^3/r_s^3)^{\epsilon/3}} , \label{eq:ne} \end{equation} where $n_0$ is the core density, and $r_c$ and $r_s$ are scaling radii of the core and extended components, following \cite{vikhlinin06}. This 3-dimensional model is numerically projected along the line of sight, yielding a model emission measure profile that is fit to the data. We estimate the 3-dimensional gas density assuming $n_e=Zn_p$ and $\rho_g=m_pn_eA/Z$, where $A=1.397$ and $Z=1.199$ are the average nuclear mass and charge, respectively, for a plasma with metal abundance 30\% of solar (0.3$Z_{\odot}$). The calibrated surface brightness profiles and best-fit projected gas density models for the full sample are shown in Appendix A. In recent studies of high-redshift cool core clusters \citep[e.g.,][]{vikhlinin07,santos08,santos10,semler12}, the presence of a cool core has been quantified solely by the central cuspiness of the surface brightness profile. 
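Before turning to cool-core classification, the binning scheme of Eq.~1 and the density model of Eq.~2 can be made concrete numerically. The following is a minimal Python sketch (function names and the conversion constants follow the text above; the call signatures are our own convention, not from the paper):

```python
import numpy as np

def annulus_outer_radii(R500, n=20):
    """Outer radii r_i = 1.5 R500 (i/n)^1.5 for i = 1..n (Eq. 1)."""
    i = np.arange(1, n + 1)
    return 1.5 * R500 * (i / n) ** 1.5

def ne_np(r, n0, rc, rs, alpha, beta, epsilon):
    """Modified beta model for n_e * n_p (Eq. 2; Vikhlinin et al. 2006)."""
    return (n0**2 * (r / rc) ** (-alpha)
            / (1.0 + r**2 / rc**2) ** (3.0 * beta - alpha / 2.0)
            / (1.0 + r**3 / rs**3) ** (epsilon / 3.0))

def gas_density(r, *pars, A=1.397, Z=1.199, m_p=1.6726e-24):
    """rho_g = m_p n_e A / Z [g/cm^3]; with n_e = Z n_p,
    n_e = sqrt(Z * (n_e n_p))."""
    n_e = np.sqrt(Z * ne_np(r, *pars))
    return m_p * n_e * A / Z
```

In practice this 3-dimensional model would be projected along the line of sight before comparison to the measured emission measure profile; the projection step is omitted here.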
While measuring the central deprojected temperature and cooling time typically requires $>$10,000 X-ray counts, the surface brightness profile can often be constrained in the central region with as few as $\sim$500 counts, making this an inexpensive method of classifying cool cores. To classify a sample of high-redshift clusters as cool core or non-cool core, \cite{vikhlinin07} defined a ``cuspiness'' parameter, \begin{equation} \alpha \equiv \left. \frac{d\log\rho_g}{d\log r}\right|_{r=0.04\textrm{R}_{500}} ~. \end{equation} This parameter has been shown to correlate well with the central cooling time for galaxy clusters at $z\sim0$ \citep{vikhlinin07,hudson10}. \cite{vikhlinin07} showed that $\alpha$ was typically higher for a low-redshift sample of clusters ($z<0.5$) than for their high-redshift counterparts, suggesting a rapid evolution in cool core strength. While easily measurable, this parameter has the drawback that it assumes that the cool core radius evolves at the same rate as the cluster radius (R$_{500}$). An alternative measure of the surface brightness cuspiness is the ``concentration'' parameter, as defined by \cite{santos08}: \begin{equation} c_{SB} \equiv \frac{F_{0.5-5.0\rm{keV}}(r <40~\rm{kpc})}{F_{0.5-5.0\rm{keV}}(r<400~\rm{kpc})} , \end{equation} where $F_{0.5-5.0\rm{keV}}$ is the X-ray flux in the energy bandpass 0.5--5.0 keV. This value can range from $\sim$0 (no flux peak) to 1 (all flux in central 40~kpc). This choice of parameter is relatively insensitive to redshift effects, such as worsening spatial resolution, reduced counts, and $k$-corrections \citep{santos08}, but has the potential drawback that it assumes no evolution in the cooling radius. We will use both the full 3-dimensional density profile ($n_e(r)$) and the commonly-used single-parameter estimates of profile peakedness ($\alpha$, $c_{SB}$) to trace the evolution of cool cores in this unique sample. 
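Both single-parameter diagnostics are simple to evaluate once a density profile and an enclosed-flux curve are available. A minimal sketch (the callable interfaces are our own convention; Eq.~3 is evaluated as written, i.e.\ the raw logarithmic slope, which is negative for declining profiles):

```python
import numpy as np

def cuspiness(rho_g, R500, frac=0.04, dlnr=1e-3):
    """alpha = dlog(rho_g)/dlog(r) at r = 0.04 R500 (Eq. 3), via a
    centred finite difference in log space; rho_g is a callable
    density profile of radius."""
    r = frac * R500
    lo, hi = r * np.exp(-dlnr), r * np.exp(dlnr)
    return (np.log(rho_g(hi)) - np.log(rho_g(lo))) / (2.0 * dlnr)

def concentration(flux_within, r_core=40.0, r_outer=400.0):
    """c_SB = F(<40 kpc) / F(<400 kpc) in the 0.5-5.0 keV band (Eq. 4);
    flux_within is a callable returning the enclosed flux at r [kpc]."""
    return flux_within(r_core) / flux_within(r_outer)
```

For a pure power-law density $\rho_g \propto r^{-1.2}$ the finite difference is exact and returns $-1.2$; the sign convention (slope vs.\ its absolute value) varies between works, so care is needed when comparing published $\alpha$ values.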
\subsection{Deprojecting Radial X-ray Profiles} \label{sec:deprojection} \subsubsection{$\rho_g(r)$, $\Phi(r)$, T$_X(r)$} Many recent works have verified the presence, or lack, of high-redshift cool cores via surface brightness quantities, as discussed in the previous section. We wish to extend this analysis further and quantify the cooling properties of the ICM. With only $\sim$2000 X-ray counts per cluster, we cannot perform a full temperature and density deprojection analysis, as is typically done at low redshift \citep[e.g.,][]{vikhlinin06, sun09a}. Instead, motivated by earlier studies with the Einstein X-ray observatory \citep{white97, peres98}, we combine our knowledge of the X-ray surface brightness and a coarse temperature profile with assumptions about the underlying dark matter distribution to produce best-guess 3-dimensional temperature profiles. This procedure, which will be described in complete detail in an upcoming paper (McDonald et~al.~ in prep), is summarized below. First, a 3-bin temperature profile is derived by extracting X-ray spectra in logarithmically-spaced annuli over the range $0 < r < \rm{R}_{500}$. These spectra were fit in {\sc xspec} \citep{arnaud96} with a combined {\sc phabs(mekal)} model\footnote{\url{http://heasarc.gsfc.nasa.gov/docs/xanadu/xspec/manual/\\XspecModels.html}}. In cases requiring additional background components, as determined from off-source regions of the field (see B13 for more details), an additional {\sc mekal} (Galactic) or {\sc bremss} (CXB) component was used, with temperatures fixed to 0.18~keV and 40~keV, respectively. For the source model, we fix the abundance to $0.3Z_{\odot}$ and the hydrogen absorbing column, n$_H$, to the average from the Leiden-Argentine-Bonn survey \citep{kalberla05}. The resulting temperature profiles for the full sample are shown in Appendix A. 
In order to model the underlying dark matter potential, we use a generalized Navarro-Frenk-White \citep[GNFW;][]{zhao96,wyithe01} profile: \begin{equation} \rho_{D}=\frac{\rho_{D,0}}{(r/r_s)^{\beta_{D}}(1+r/r_s)^{3-\beta_{D}}} , \label{eq:nfw} \end{equation} \noindent{}where $\rho_{D,0}$ is the central dark matter density, $r_s$ is a scale radius related to the halo concentration by $C=\rm{R}_{200}/r_s$, and $\beta_{D}$ is the inner slope of the dark matter profile. This model is similar to the NFW \citep{nfw} profile at large radii, but has a free ``cuspiness'' parameter at small radii. We estimate the initial values of $\rho_{D,0}$ and $r_s$ using the measured M$_{500}$ and the mass-concentration relation \citep{duffy08}. Given an assumed 3-dimensional functional form of both the dark matter (Eq.\ \ref{eq:nfw}) and gas density profiles (\S\ref{sec:sb}, Eq.\ \ref{eq:ne}), and further assuming a negligible contribution from stars to the total mass, we can derive the 3-dimensional temperature profile by combining hydrostatic equilibrium, \begin{equation} \frac{d\textrm{P}}{dr} = -\frac{\textrm{GM}(r)\rho(r)}{r^2} , \end{equation} with the ideal gas law (P$=n_Tk$T, where $n_T = n_e + n_p$). This temperature profile is projected along the line of sight (weighted by $n_e^2$T$^{1/2}$), producing a 2-dimensional temperature profile which is compared to the data, allowing a calculation of $\chi^2$. We repeat this process, varying both the normalization of the GNFW halo ($\rho_{D,0}$), and thus the total dark matter mass, as well as the inner slope ($\beta_{D}$), while requiring that the mass--concentration \citep{duffy08} and M$_{500}$--P$_{500}$ \citep{nagai07} relations are always satisfied (removing $r_s$ and P$_{500}$ as free parameters), until a stable minimum in the $\chi^2$($\rho_{D,0}$, $\beta_{D}$) plane is found. 
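The inward pressure integration described above can be sketched as follows, in cgs units, treating the outer boundary temperature as a free input. This is a simplified illustration only: the grid, parameter values, and function names are ours, and the actual fit additionally ties $r_s$ and P$_{500}$ to the mass--concentration and M$_{500}$--P$_{500}$ relations, which is omitted here.

```python
import numpy as np

G = 6.674e-8        # gravitational constant [cm^3 g^-1 s^-2]
m_p = 1.6726e-24    # proton mass [g]
keV = 1.602e-9      # erg per keV

def gnfw_density(r, rho0, rs, beta_d):
    """GNFW dark-matter density profile (Eq. 5)."""
    x = r / rs
    return rho0 / (x ** beta_d * (1.0 + x) ** (3.0 - beta_d))

def enclosed_mass(r_grid, rho):
    """Cumulative M(<r) = int 4 pi r'^2 rho dr' by trapezoidal sums."""
    integrand = 4.0 * np.pi * r_grid**2 * rho
    dM = 0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r_grid)
    return np.concatenate(([0.0], np.cumsum(dM)))

def hse_temperature(r_grid, n_e, M_enc, kT_out_keV, A=1.397, Z=1.199):
    """Integrate dP/dr = -G M(r) rho(r) / r^2 inward from an assumed
    outer temperature, then kT = P / n_T (ideal gas, n_T = n_e + n_p)."""
    rho_g = m_p * n_e * A / Z
    n_T = n_e * (1.0 + 1.0 / Z)          # n_p = n_e / Z
    g = G * M_enc * rho_g / r_grid**2
    P = np.empty_like(r_grid)
    P[-1] = n_T[-1] * kT_out_keV * keV   # boundary condition
    for i in range(len(r_grid) - 2, -1, -1):
        P[i] = P[i + 1] + 0.5 * (g[i] + g[i + 1]) * (r_grid[i + 1] - r_grid[i])
    return P / (n_T * keV)               # kT(r) in keV
```

The resulting 3-dimensional temperature would then be projected along the line of sight (weighted by $n_e^2$T$^{1/2}$) and compared to the coarse measured profile inside a $\chi^2$ loop over ($\rho_{D,0}$, $\beta_{D}$).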
The net result of this process is a 3-dimensional model of the gas density, gas temperature, and gravitational potential for each cluster (see Appendix A). \subsubsection{$t_{\textrm{cool}}(r)$, K$(r)$, \.{M}$(r)$} While a centrally-concentrated surface brightness profile is an excellent indicator that the ICM is cooling rapidly \citep[e.g.,][]{hudson10}, we ultimately would like to quantify, in an absolute sense, the strength of this cooling. Classically, clusters have been identified as ``cooling flows'' if the cooling time in the central region is less than the age of the Universe, with the cooling time defined as: \begin{equation} t_{\textrm{cool}} = \frac{3}{2}\frac{n_Tk\rm{T}}{n_en_H\Lambda(\rm{T,Z})} , \end{equation} \noindent{}where $\Lambda(\rm{T,Z})$ is the cooling function for an optically-thin plasma \citep{sutherland93}. Similarly, the specific entropy of the gas is defined as: \begin{equation} \textrm{K} = \frac{k\textrm{T}}{n_e^{2/3}} . \end{equation} Both the central cooling time and central entropy are smallest in the centers of cool core clusters, and are distributed bimodally over the full cluster population \citep[e.g.,][]{cavagnolo09,hudson10}. In nearby, well-studied clusters, the central cooling time and entropy are well-defined, as both of these functions tend to flatten at radii less than $\sim$10 kpc. However, for lower signal-to-noise data, the measurement of central cooling time is strongly dependent on the choice of bin size \citep[see e.g.,][]{white97,peres98}, and inwards extrapolation can be risky if a flattening of the surface brightness profile is not observed due to poor spatial sampling. For this work, we choose as our central bin $r<0.012\textrm{R}_{500}$ ($r\lesssim10$ kpc), which roughly corresponds to the first data point in our surface brightness profiles. 
While these quantities are not truly ``central'', this choice allows us to avoid the increasingly large uncertainty associated with extrapolating our temperature and density fits as $r\rightarrow0$. Following \cite{white97}, we estimate the classical mass deposition rate, \.M(r), using the following formula: \begin{equation} \frac{d\textrm{M}}{dt}(r_i)=\frac{\textrm{L}_X(r_i)-(\Delta\phi(r_i)+\Delta h(r_i))\dot{\textrm{M}}(<r_{i-1})}{h(r_i)+f(r_i)\Delta\phi(r_i)} , \end{equation} where L$_X(r_i)$ is the X-ray luminosity in shell $i$, $\Delta\phi(r_i)$ is the change in the gravitational potential across shell $i$, $h(r_i)$ is the temperature in units of energy per particle mass, $h(r_i)=\frac{5}{2}\,k\textrm{T}(r_i)/(\mu m_p)$, and $f(r_i)$ is the fraction of the shell that the gas crosses before dropping out of the flow. This equation calculates the cooling rate due to X-ray radiation ($\frac{dM}{dt} \propto \frac{L_X}{kT}$) corrected for the gravitational work done on the gas as it falls inwards towards the center of the cluster's gravitational potential. There are currently no constraints on what $f(r_i)$ should be, so we choose the mid-point ($f(r_i)=0.5$). We note that varying $f(r_i)$ from 0 to 1 typically alters the estimate of $d$M$/dt$ by only $\sim$5\%. We integrate the mass deposition rate out to the radius at which the cooling time equals the age of the Universe \emph{at the epoch of the cluster}. The resulting $\frac{dM}{dt}(r<r_{\textrm{cool}})$ represents the time-averaged cooling rate if the cluster as we currently observe it has been in equilibrium for all time. We note that, by this definition, our sample ought to have overall smaller cooling radii due to the fact that these high-redshift clusters have had less time to cool than their low-redshift counterparts. 
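The three diagnostics defined in this subsection (Eqs.~7--9) reduce to short expressions once radial profiles are in hand. A schematic Python sketch, in cgs units, assuming $n_H \approx n_p = n_e/Z$ for a 0.3~$Z_{\odot}$ plasma and taking the per-shell inputs to the White et al. recursion as precomputed (the function names and input layout are illustrative, not from the paper):

```python
import numpy as np

keV = 1.602e-9      # erg per keV
m_p = 1.6726e-24    # proton mass [g]

def cooling_time(n_e, kT_keV, Lambda_cgs, Z=1.199):
    """t_cool = (3/2) n_T kT / (n_e n_H Lambda)  [s]  (Eq. 7),
    with n_H ~ n_p = n_e / Z and n_T = n_e + n_p."""
    n_p = n_e / Z
    n_T = n_e + n_p
    return 1.5 * n_T * kT_keV * keV / (n_e * n_p * Lambda_cgs)

def entropy(n_e, kT_keV):
    """Specific entropy K = kT / n_e^(2/3)  [keV cm^2]  (Eq. 8)."""
    return kT_keV / n_e ** (2.0 / 3.0)

def mass_deposition_rates(L_x, dphi, dh, h, f=0.5):
    """Shell-by-shell dM/dt following the White et al. (1997) recursion
    (Eq. 9). Shells are ordered from the centre outward; dphi and dh are
    the changes in potential and specific enthalpy across each shell, and
    h is the specific enthalpy (5/2) kT / (mu m_p). Returns the per-shell
    rates and their running sum (the cumulative dM/dt)."""
    mdot_inner = 0.0                     # Mdot(<r_{i-1})
    rates = []
    for i in range(len(L_x)):
        rate = (L_x[i] - (dphi[i] + dh[i]) * mdot_inner) / (h[i] + f * dphi[i])
        rates.append(rate)
        mdot_inner += rate
    return np.array(rates), mdot_inner
```

For a single shell with no gravitational term ($\Delta\phi = \Delta h = 0$) the recursion collapses to the familiar $\dot{\textrm{M}} = \textrm{L}_X / h$, i.e.\ $d$M$/dt \propto$ L$_X/k$T.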
\subsection{Comparing Aperture and 3-D Model Temperatures} In previous studies \citep[e.g.,][]{hudson10}, the central entropy and cooling time are calculated from a combination of the 3-dimensional, central electron density, $n_{e,0}$, and a 2-dimensional temperature measured in some small aperture (e.g., $r\lesssim0.05$R$_{500}$). For clusters with only $\sim$2000 X-ray counts, this central aperture may only contain $\sim$100 counts, making the estimate of a central temperature complicated. However, in cool core clusters, where a significant fraction of the flux originates from this small aperture, we can measure a reliable temperature and compare to our 3-dimensional models described above. In Figure \ref{fig:compare_kT0}, we show the measured spectroscopic temperature (kT$_{0,2D}$) in an aperture of $r<0.1$R$_{500}$ (with AGN masked), where the outer radius was chosen to maximize the number of X-ray counts, while still capturing the central temperature drop in cool core clusters, following the universal profile shown in \cite{vikhlinin06}. The modeled 3-dimensional temperature profile was projected onto this same aperture (kT$_{0,3D}$) to enable a fair comparison of the two quantities. While the uncertainty in the 2-D temperature for these small X-ray apertures is high, there appears to be good agreement between the models and data ($\chi^2$/dof = 87.2/83), suggesting that this technique is able to recover the ``true'' central temperature of the cluster. \begin{figure}[htb] \centering \includegraphics[width=0.49\textwidth]{plots/compare_kT0.pdf} \caption{Comparison of 2-dimensional spectroscopic temperatures measured in an aperture of $r<0.1$R$_{500}$ (with AGN masked) to the 3-dimensional model temperature projected onto the same annulus. The one-to-one correspondence (dashed line) between the data and models suggests that our mass-modeling approach (\S2.4) yields reliable estimates of the central temperature for clusters that are relaxed. 
Point color corresponds to redshift, from $z=0.3$ (blue) to $z=1.2$ (red), indicating that the scatter in this plot is largely independent of redshift. } \label{fig:compare_kT0} \end{figure} Based on this agreement, we feel confident extrapolating inwards into a regime without sufficient X-ray counts to measure the spectroscopic temperature and proceed throughout \S3 utilizing the central ($r<0.012$R$_{500}$), deprojected model temperatures to calculate $t_{\textrm{cool},0}$ and K$_0$. In \S4 we will return to the comparison between 2-D and 3-D quantities to determine the dependence of our results on this extrapolation and our choice of models. \section{Results} This sample represents the largest and most complete sample of galaxy clusters with X-ray observations at $z\gtrsim0.4$. Given the depth of our X-ray exposures, combined with the high angular resolution of the \emph{Chandra X-ray Observatory}, we are in a unique position to study the evolution of cooling in the ICM for similar-mass clusters over timescales of $\sim$8~Gyr. In this section we will present the broad results of this study, drawing comparisons to samples of nearby clusters \citep[e.g.,][]{white97,vikhlinin06, vikhlinin09,cavagnolo09,hudson10}. The interpretation of these results, as well as systematic errors that may affect them, are discussed in \S4. \subsection{Cooling Time and Entropy Profiles} In Figure \ref{fig:kprof}, we present the radial entropy and cooling time profiles for our full sample, based on the deprojection procedures described in \S\ref{sec:deprojection}. For comparison, we show the average entropy profiles for low-redshift cool core and non-cool core clusters from \cite{cavagnolo09}. Perhaps surprisingly, there is no qualitative difference in the entropy profiles between this sample of high-redshift, massive, SZ-selected clusters and the low-redshift sample of groups and clusters presented in \cite{cavagnolo09}. 
Naively, one might expect the mean central entropy to decrease with time, as clusters have had more time to cool. However, the similarity between low-redshift clusters and this sample, which has a median redshift of 0.63 and a median age nearly half of the $z\sim0$ sample, indicates that the entropy and cooling time profiles are unchanging. This suggests that the characteristic entropy and cooling time profiles, having minimal core entropies of $\sim$10 keV cm$^2$ and cooling radii of $\sim$100 kpc, were established at earlier times than we are probing with this sample ($z\gtrsim1$). In order to look for evolution in the entropy profile, we plot in Figure \ref{fig:kprof_stack} the average entropy profile for cool core (K$_0<30$ keV cm$^2$) and non-cool core (K$_0>30$ keV cm$^2$) clusters in the SPT-XVP sample, divided into two redshift bins corresponding to $z<0.75$ and $z>0.75$. These are compared to the average profiles from \cite{cavagnolo09}, for clusters at $z\lesssim0.1$. In general, the average profiles are indistinguishable in the inner few hundred kiloparsecs, with high-redshift clusters having slightly higher entropy at small radii ($r\lesssim200$~kpc) than their low-redshift counterparts. At larger radii, the profiles vary according to the self-similar E$(z)^{4/3}$ scaling \citep{pratt10}. These results suggest that the outer entropy profile is following the gravitational collapse of the cluster, while the inner profile has some additional physics governing its evolution, most likely baryonic cooling. The mild central entropy evolution in the right panel of Figure \ref{fig:kprof_stack} could be thought of as the effect of ``forcing'' an evolutionary scaling in a regime where the profile is unevolving. The combination of Figures \ref{fig:kprof} and \ref{fig:kprof_stack} suggests that the cooling properties of the intracluster medium in the inner $\sim$200 kpc have remained relatively constant over timescales of $\sim$8 Gyr. 
The short cooling times at these radii imply that the core entropy profile should change on short timescales. The fact that this is not observed suggests that some form of feedback has offset cooling on these exceptionally long timescales, keeping the central entropy at a constant value. There is evidence that mechanical feedback from AGN is stable over such long periods of time \citep{hlavacek-larrondo12}, perhaps maintaining the observed entropy floor of 10 keV cm$^2$ since $z\sim1.2$. \begin{figure*}[h!] \centering \includegraphics[width=0.95\textwidth]{plots/kprof.pdf} \caption{Radial entropy (K) and cooling time (t$_{cool}$) profiles for our full sample of SPT-selected clusters. The average entropy and cooling time curves from \cite{cavagnolo09} for both cool core (blue) and non-cool core (red) clusters are shown in thick dashed lines, which are found to be in good qualitative agreement with our high-$z$, more massive clusters. Overall, these profiles have similar shapes and normalization to low-redshift clusters, suggesting little evolution in the cooling properties of massive clusters over the past $\sim$8 Gyr.} \label{fig:kprof} \end{figure*} \begin{figure*}[h!] \centering \begin{tabular}{c c} \includegraphics[width=0.48\textwidth]{plots/kprof_stack_noscale.pdf}& \includegraphics[width=0.48\textwidth]{plots/kprof_stack_scale.pdf}\\ \end{tabular} \caption{Average entropy profiles for low-redshift ($z<0.1$) clusters \citep{cavagnolo09}, as well as intermediate ($0.3<z<0.75$) and high ($0.75<z<1.1$) redshift clusters from this work, divided into cool core and non-cool core bins based on their central entropy. This plot demonstrates that the inner $\sim$200~kpc of the cluster experiences very little evolution in both the shape and normalization of the entropy profile over $\sim$8~Gyr. At large radii, the entropy appears to decrease with increasing redshift, leading to overall shallower entropy profiles at early times. 
In the right panel, we apply the self-similar scaling from \cite{pratt10}, $E(z)^{4/3}$, which shows that the central entropy is becoming slightly higher \emph{relative to the outer entropy profile} as a function of redshift. The central entropy evolution in the right panel could be thought of as the effect of ``forcing'' an evolutionary scaling in a regime where the profile is unevolving. Combined, these two plots suggest that the outer profile is evolving as expected based on cosmological models, while the inner $\sim$100~kpc has no measurable evolution.} \label{fig:kprof_stack} \end{figure*} \subsection{Distribution of Central Entropy, Cooling Time, and Cooling Rate} In Figure \ref{fig:tc_ent}, we compare the derived central entropy and cooling time (see \S\ref{sec:deprojection} for details on deriving central quantities) for the SPT-XVP sample to those for the low-redshift clusters in the Chandra Cluster Cosmology Project \citep[hereafter CCCP;][]{vikhlinin06,vikhlinin09}. Overall, we find excellent agreement between the two samples. While it is unsurprising that these two quantities are correlated, due to their similar dependence on both T$_X$ and n$_e$, the normalization and distribution of points along the sequence is reassuring. Both the SPT-XVP and the CCCP clusters have a slightly higher normalization than found by \cite{hudson10}, which can be accounted for by the fact that both of these samples target more massive clusters. Indeed, \cite{hudson10} showed that the scatter about the $t_{\textrm{cool},0}$--K$_0$ relation correlates with the cluster temperature, with high-T$_X$ clusters lying above the relation and low-T$_X$ groups lying below the relation. Similar to previous low-redshift studies \citep[e.g.,][]{cavagnolo09,hudson10}, we see hints of multiple peaks in both $t_{\textrm{cool},0}$ and K$_0$, with minima at $\sim$1 Gyr and 50 keV cm$^2$, respectively. 
This threshold, separating cool core from non-cool core clusters, appears to be unchanged between the low-redshift and high-redshift samples. We will return to this point in \S3.3. \begin{figure}[htb] \centering \includegraphics[width=0.48\textwidth]{plots/tc_ent_hist.pdf} \caption{Central entropy ($K_0$) versus central cooling time ($t_{\textrm{cool},0}$) for our full sample of SPT-selected clusters, with the typical uncertainty shown in the bottom right. Low-redshift clusters from the CCCP \citep{vikhlinin09} are shown as blue squares, and the best fit for a low-$z$ sample from \cite{hudson10} is shown in red. Both the SPT-XVP and CCCP data lie slightly above this line, as is expected for higher-mass samples. \cite{cavagnolo09} found a bimodal distribution of both $t_{cool,0}$ and K$_0$ around $\sim$1 Gyr and 30 keV cm$^2$, respectively. We find similar, though less significant, minima in our cluster distributions around these same values.} \label{fig:tc_ent} \end{figure} \begin{figure}[htb] \centering \begin{tabular}{c} \includegraphics[width=0.4\textwidth]{plots/dmdthist_white.pdf} \\ \includegraphics[width=0.4\textwidth]{plots/dmdthist_const.pdf} \\ \end{tabular} \caption{Distribution of classical mass deposition rates (dM/dt) for the SPT-XVP sample, divided into intermediate-redshift (blue) and high-redshift (red) bins. For comparison, we also show a sample of nearby clusters \citep[gray histogram;][L$_X \ge 1.5\times10^{44}$ erg s$^{-1}$]{white97}. In the top (a) and bottom (b) panels, we consider two different definitions of the cooling radius: (a) based on the age of the Universe at the epoch of the cluster (\S2.4.2), and (b) a constant value of 7.7 Gyr \citep[e.g.,][]{odea08}. 
We find that the evolution observed in the mass deposition rate in the top panel (a) is due to our definition of the cooling radius, which is based on the cluster age -- if we assume that all clusters have been cooling for the same amount of time, the three samples are statistically identical. In the insets we show the cumulative distribution, which further highlights the similarities between the three samples.} \label{fig:dmdt} \end{figure} In Figure \ref{fig:dmdt}, we plot the distribution of mass deposition rates, $d$M$/dt$, for our SPT-selected sample. We show the integrated cooling rate within two radii: $r(t_{\textrm{cool}} = t_{\textrm{Univ}})$ and $r(t_{\textrm{cool}} = 7.7$ Gyr). The former is more physically motivated, representing the amount of gas that has had time to cool since the cluster formed. The latter is motivated by the desire to have the definition of the cooling radius be independent of redshift -- the choice of a 7.7 Gyr timescale is arbitrary and was chosen simply to conform with the literature \citep[e.g.,][]{odea08}. We find that clusters at high-$z$ have overall smaller time-averaged cooling rates, which is unsurprising given that they have had less time to cool. If we remove this factor by instead computing the cooling rate within a non-evolving aperture ($r[t_{\textrm{cool}} = 7.7$ Gyr]), we find no significant difference between the mass deposition rates measured in intermediate redshift ($0.4<z<0.75$) and high redshift ($0.75<z<1.1$) clusters. These sub-samples have median mass deposition rates of 49 M$_{\odot}$ yr$^{-1}$ and 57 M$_{\odot}$ yr$^{-1}$ (excluding non-cooling systems), respectively. For comparison, we also show the distribution of cooling rates for nearby clusters from \cite{white97}, for which the distribution is cut off at $d$M$/dt$ $\lesssim$ 50 M$_{\odot}$ yr$^{-1}$ due to poorer sampling and, thus, reduced sensitivity to modest cooling rates. 
However, as evidenced by the cumulative distribution, at $d$M$/dt$ $>$ 50 M$_{\odot}$ yr$^{-1}$ the three samples are nearly identical, suggesting very little evolution in the rate of cooling in the ICM over timescales of $\sim$8 Gyr. \subsection{Evolution of Cooling Flow Properties} \begin{figure}[htb] \centering \includegraphics[width=0.49\textwidth]{plots/tcK_evol.pdf} \caption{Redshift evolution of the mass deposition rate ($d$M$/dt$), central cooling time ($t_{c,0}$), and central entropy (K$_0$) for the sample presented in this paper. Blue and red points represent cooling and non-cooling clusters, respectively, with divisions following Figure \ref{fig:kprof}. For comparison, we show nearby X-ray selected samples \citep{white97, vikhlinin09}, with cuts made to mimic the SPT selection (L$_X>1.5\times10^{44}$ ergs s$^{-1}$, M$_{500}>3\times10^{14}$ M$_{\odot}$). This plot suggests that there has been little evolution in the cooling properties of cluster cores over the range $0<z<1.2$. The lower panel shows the expectation for self-similar evolution of the central entropy \citep[$E(z)^{4/3}$;][]{pratt10}, which is consistent with the observations. The fact that the gas in the central $\sim$100~kpc appears not to be cooling suggests that the balance between cooling and feedback has been stable for several Gyr. } \label{fig:dmdt_evolution} \end{figure} Figures \ref{fig:kprof_stack} and \ref{fig:dmdt} suggest that the cooling properties of the ICM vary little from $z\sim0$ to $z\sim1$. In order to directly quantify this, we show in Figure \ref{fig:dmdt_evolution} the evolution of $d$M$/dt$, K$_0$, and $t_{\textrm{cool},0}$ with redshift. For each quantity, we separate cool core and non-cool core clusters using the following thresholds: dM/dt $> 0$ M$_{\odot}$ yr$^{-1}$, K$_0 < 30$ keV cm$^2$, and $t_{\textrm{cool},0} < 1$ Gyr. 
This figure more clearly shows that there is very little, if any, evolution in the cooling properties of SPT-selected clusters over the range $0.3<z<1.2$. We compare the range of $d$M$/dt$, K$_0$, and t$_{c,0}$ observed for these clusters to samples of nearby clusters from \cite{white97} and the CCCP and find no appreciable change. The entropy floor, at K$_{0} \sim 10$ keV cm$^2$, is constant over the full redshift range of our sample, consistent with earlier work by \cite{cavagnolo09} which covered clusters at $0 < z\lesssim0.5$. The data are also consistent with the self-similar expectation \citep[$E(z)^{4/3}$;][]{pratt10}, which predicts only a factor of $\sim$1.6 change in central entropy from $z=1$ to $z=0$. This self-similar evolution is based on gravity alone -- the fact that it is an adequate representation of the data in the central $\sim$100~kpc of clusters, where cooling processes should be responsible for shaping the entropy profile, suggests that cooling has been offset exceptionally well over the past $\sim$7 Gyr. In the absence of feedback, cool core clusters at $z\sim$1 should have $K_0\rightarrow0$ in $<$1 Gyr. \subsection{Evolution of Cluster Surface Brightness Profiles} Figures \ref{fig:kprof}--\ref{fig:dmdt_evolution} suggest that there is little change in the ICM cooling properties in the cores of X-ray- and SZ-selected clusters since $z\sim1.2$, in agreement with earlier studies at lower redshift \citep[e.g.,][]{cavagnolo09}. However, several recent studies have argued that there are fewer cool core clusters at high redshift \citep{vikhlinin07, santos08, santos10} based on measurements of surface brightness concentration, suggesting that ``cool cores'' and ``cooling flows'' may not have the same evolution and, thus, are not necessarily coupled at high redshift as they are now. 
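The self-similar rescaling invoked throughout this section is a one-line operation once a cosmology is fixed. A minimal sketch (the $\Omega_m = 0.27$, $\Omega_\Lambda = 0.73$ values are illustrative placeholders, not necessarily those adopted in this work):

```python
import numpy as np

def E(z, Omega_m=0.27, Omega_L=0.73):
    """Dimensionless Hubble parameter E(z) = H(z)/H0 for flat LCDM."""
    return np.sqrt(Omega_m * (1.0 + z) ** 3 + Omega_L)

def rescale_entropy(K, z):
    """Bring an entropy measured at redshift z to its self-similar z=0
    equivalent via the E(z)^(4/3) scaling of Pratt et al. (2010)."""
    return K * E(z) ** (4.0 / 3.0)
```

Since $E(z)$ grows with redshift, the predicted self-similar change in central entropy between $z=1$ and $z=0$ is a modest multiplicative factor, which is why an unevolving entropy floor remains broadly consistent with this scaling.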
\begin{figure}[htb] \centering \includegraphics[width=0.49\textwidth]{plots/sb_evol.pdf} \caption{Evolution of surface brightness quantities from \cite{vikhlinin07}, $\alpha$, and \cite{santos08}, $c_{SB}$. For comparison we show measurements for low redshift clusters \citep{vikhlinin09, hudson10, santos10}. This figure confirms the strong evolution in cool core strength reported by both \cite{vikhlinin07} and \cite{santos08}, suggesting that the cuspy surface brightness profiles associated with nearby cooling flows were not present at $z\gtrsim0.7$. } \label{fig:csb_evol} \end{figure} In Figure \ref{fig:csb_evol}, we duplicate the analyses of \cite{vikhlinin07}, \cite{santos08}, and \cite{semler12} in order to look for evolution in the cool core properties. We find that the number of galaxy clusters classified as ``cool core'' by both $\alpha$ and $c_{SB}$ (see \S2.3) decreases with redshift, from $\sim$40\% at $z\sim0$ to $\sim$10\% at $z\gtrsim0.75$. These results confirm the evolution in cool core strength reported by \cite{vikhlinin07} and \cite{santos08} for X-ray selected samples, although the evolution appears to be somewhat slower for the SZ-selected sample, with several strong cool cores in the range $0.5<z<0.75$ \citep{semler12}. The higher fraction of ``moderate'' cool cores in Figure \ref{fig:csb_evol} at higher redshift is consistent with recent work by \cite{santos10}. All samples agree that there is a lack of strong, classical cool cores at $z>0.75$, which seems to be in opposition to the results presented in Figures \ref{fig:kprof}--\ref{fig:dmdt_evolution} which suggest no evolution in the cooling properties. One possible explanation for this discrepancy is that both the concentration parameter, $c_{SB}$, and the cuspiness parameter, $\alpha$ (see eqs. 3 \& 4), assume no evolution in the scale radius of the cool core: $c_{SB}$ assumes a radius of 40~kpc, while $\alpha$ uses $0.04$R$_{500}$.
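To make the fixed-aperture assumption concrete: the concentration parameter of \cite{santos08} is the ratio of integrated surface brightness within 40~kpc to that within 400~kpc. The sketch below evaluates $c_{SB}$ for an illustrative single-$\beta$ surface brightness profile; the core radii and slope are arbitrary choices for demonstration, not fits to this sample.

```python
import math

# Surface-brightness concentration (Santos et al. 2008):
#   c_SB = SB(< 40 kpc) / SB(< 400 kpc),
# computed here for an illustrative single-beta profile
#   SB(r) = SB0 * (1 + (r/rc)^2)^(-3*beta + 0.5).
# The fixed 40 kpc aperture encodes the "no core-size evolution"
# assumption discussed in the text.

def sb_beta(r_kpc, rc_kpc, beta):
    return (1.0 + (r_kpc / rc_kpc) ** 2) ** (-3.0 * beta + 0.5)

def enclosed_sb(r_out, rc, beta, n=4000):
    """Integrate 2*pi*r*SB(r) dr from 0 to r_out (trapezoid rule)."""
    dr = r_out / n
    total = 0.0
    for i in range(n):
        r1, r2 = i * dr, (i + 1) * dr
        total += 0.5 * dr * (2 * math.pi * r1 * sb_beta(r1, rc, beta)
                             + 2 * math.pi * r2 * sb_beta(r2, rc, beta))
    return total

def c_sb(rc, beta):
    return enclosed_sb(40.0, rc, beta) / enclosed_sb(400.0, rc, beta)

# A cuspy (small-core) profile concentrates far more light within
# 40 kpc than a flat (large-core) profile:
print(c_sb(rc=20.0, beta=0.7), c_sb(rc=150.0, beta=0.7))
```

Because the 40~kpc aperture is fixed in physical size, a cool core whose scale radius grows with time would register as increasingly "concentrated" even at fixed shape, which is the potential bias noted above.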
To further investigate the surface brightness evolution, we move away from single-parameter measures of surface brightness concentration and, instead, directly compare the X-ray surface brightness profiles for low- and high-redshift clusters in Figure \ref{fig:sb_evol}. At $z<0.75$, we confirm that, overall, clusters with low central entropy (K$_0<30$ keV cm$^2$) have more centrally-concentrated surface brightness profiles than those with high central entropy (K$_0>30$ keV cm$^2$). However, at high redshift ($z>0.75$) we find a lack of strongly-concentrated clusters, with only a weak increase in concentration for the clusters with low central entropy, consistent with earlier studies of distant X-ray-selected clusters \citep[e.g.,][]{vikhlinin07, santos08, santos10}. This difference in surface brightness concentration is not a result of increased spatial resolution for low-redshift clusters -- the difference in spatial resolution between the centers of these redshift bins is only $\sim$30\%. \begin{figure*}[htb] \centering \includegraphics[width=0.99\textwidth]{plots/finesb.pdf} \caption{ X-ray surface brightness profiles in the 0.5--2.0 keV energy band for the sample presented in this paper, normalized such that they are self-similar at large radii. Profiles are separated into cool core (blue; K$_0 < 30$ keV cm$^2$) and non-cool core (red; K$_0 >30$ keV cm$^2$). The centrally-peaked surface brightness profile, characteristic of a cooling-flow cluster, is present only in the low-redshift ($z<0.75$) sample. At high redshift ($z>0.75$) both cool-core and non-cool core clusters, as defined by their central entropy, are indistinguishable from their surface brightness profiles alone. In the right panel, average surface brightness profiles are shown, which demonstrate the similarity between low-redshift non-cool cores, high-redshift non-cool cores, and high-redshift cool cores. 
} \label{fig:sb_evol} \end{figure*} \begin{figure*}[htb] \centering \begin{tabular}{c} \includegraphics[width=0.99\textwidth]{plots/rhog_pcrit2.pdf}\\ \end{tabular} \caption{ Gas density profiles for an X-ray-selected sample of nearby clusters \citep[CCCP;][]{vikhlinin09}, as well as the sample of SPT-selected clusters presented in this work. In the upper panels we show all of the profiles, scaled in terms of the critical density ($\rho_{crit}$) and R$_{500}$. In the middle row, we show the median profiles for clusters with K$_0< 30$ keV cm$^2$ (blue) and K$_0>30$ keV cm$^2$ (red). In the bottom row we classify ``non-cool cores'' as having K$_0>150$ keV cm$^2$ \citep{hudson10}, which further highlights the difference between cool cores and non-cool cores. In the right-most column, we show all of the median profiles together, demonstrating the substantial evolution in the median gas density profile as a function of redshift. This figure shows clearly that the 3-dimensional gas density is becoming more centrally concentrated over time in cool core clusters, while remaining nearly constant in non-cool core clusters. } \label{fig:rho_evol} \end{figure*} This result becomes even more dramatic when the data are deprojected into gas density, rather than surface brightness, and when $z\sim0$ clusters are included for comparison. In Figure \ref{fig:rho_evol}, we compare the ICM gas density profiles for the sample presented in this work to a low-redshift sample \citep[Chandra Cluster Cosmology Project;][]{vikhlinin09}. We restrict the low-redshift sample to clusters with M$_{500}>3\times10^{14}$ M$_{\odot}$, in order to approximate the SPT-selection cut. This comparison is particularly appropriate since our reduction and analysis pipeline is identical to that used by \cite{vikhlinin09}.
Figure \ref{fig:rho_evol} shows that the 3-dimensional gas density profiles become more centrally concentrated with decreasing redshift, with nearly an order of magnitude difference in central gas density between cool core clusters at $z\sim0.1$ and $z\sim0.9$. In contrast, non-cooling clusters (K$_0>30$ keV cm$^2$) experience no appreciable evolution in the central physical density over the same timescale. The combination of Figures \ref{fig:sb_evol} and \ref{fig:rho_evol} seems to suggest that the dense cores which are associated with cooling flows have built up slowly over the past $\sim$8 Gyr. This scenario would explain the lack of centrally-concentrated, or ``cuspy'', clusters at high redshift. \section{Discussion} Figures \ref{fig:kprof}--\ref{fig:rho_evol} present an interesting story. The cooling properties of the intracluster medium in the most massive galaxy clusters appear to be relatively constant -- that is, classical cooling rates are not getting any higher and central entropies are not getting any lower -- since $z\sim1.2$. Over the same timescale, the central gas density has increased by roughly an order of magnitude in clusters exhibiting cooling signatures, leading to considerably more concentrated surface brightness profiles in low-redshift cool core clusters. Below, we discuss potential explanations for these results, along with systematics that may be confusing the issue. \begin{figure}[htb] \centering \includegraphics[width=0.49\textwidth]{plots/ccev.pdf}\\ \caption{Cool core gas mass within 0.1R$_{500}$ as a function of redshift for clusters with K$_0<30$~keV cm$^2$ from the CCCP (open squares) and SPT-XVP (filled squares). Here, the cool core mass represents the volume-integrated difference between the median ``non-cool core'' (K$_0>150$ keV cm$^2$) profile and the cluster density profile. Individual clusters are shown as filled grey squares, while the averages in three redshift bins are shown as black circles.
Black circles represent the median values in three bins: $z<0.1$, $0.3<z<0.75$, and $0.75<z<1.2$. The median growth of cool cores is well-modeled by a constant cooling flow ($d$M$/dt$ = 150 M$_{\odot}$ yr$^{-1}$) that began at $z=1$, with the full range of points being consistent with cooling flows beginning at $0.8<z<2$. This plot suggests that cooling flows do bring low-entropy material into the core of the cluster, but that some form of feedback prevents this gas from cooling completely, leading to the build-up of low-entropy gas in the cluster core. This scenario is in agreement with the lack of evolution of dM/dt and its peak value reported in Figure \ref{fig:dmdt}, coupled with the constant entropy floor of 10 keV cm$^2$ shown in Figure \ref{fig:dmdt_evolution}. } \label{fig:ccev} \end{figure} \begin{figure}[htb] \centering \includegraphics[width=0.49\textwidth]{plots/ktcore.pdf}\\ \caption{Central temperature, measured within $r<0.05$R$_{500}$, for cool core clusters as a function of redshift. Black circles represent the mean redshift and central temperature in three redshift bins. The binned points are inconsistent with the hypothesis of no evolution ($d$kT$/dz=0$) at the $\gtrsim1\sigma$ level ($\gtrsim$67\% confidence). The long-dashed line here represents the self-similar expectation (i.e., factoring in that high-$z$ clusters are lower mass and, thus, cooler in general), while the short-dashed line represents no evolution in the central entropy, K$_0$ (see Figure \ref{fig:dmdt_evolution}). This figure suggests that present day cool cores are warmer than their high-$z$ counterparts, although we are unable to distinguish if this is purely due to self-similar evolution, or if there may be some contribution from feedback in order to prevent the central entropy from reaching $<$10 keV cm$^2$.
} \label{fig:ktcore} \end{figure} \subsection{The Origin of Cool Cores} The results presented thus far suggest that the dense cores associated with cooling flow clusters were not as pronounced $\sim$8 Gyr ago, despite the fact that clusters at these early times had similar cooling rates and central entropy (Figure \ref{fig:dmdt_evolution}). We propose that these cores have grown over time as a direct result of cooling flows being halted by feedback. In this scenario, gas from larger radii cools and flows inwards, but it hits a ``cooling floor'' at $\sim$10 keV cm$^2$, below which cooling is less efficient. This floor is likely a result of some form of feedback, with the most promising explanation currently being mechanical energy injection from the central AGN \citep[e.g.,][]{fabian12, mcnamara12}. This concept of a cooling floor is supported by observations. \cite{peterson06} show, using high resolution X-ray spectroscopy of nearby galaxy clusters, that gas at temperatures less than $\sim$1/3 of the ambient ICM temperature is cooling orders of magnitude less effectively than predicted. Further, in agreement with \cite{cavagnolo09}, we show in Figure \ref{fig:dmdt_evolution} that there is a lower entropy limit in the cores of galaxy clusters of $\sim$10 keV cm$^2$ which is roughly constant over the range $0<z<1.2$. The fact that the ICM appears unable to cool efficiently below $\sim$1/3 of the ambient temperature, or $\sim$10 keV cm$^2$, implies that, if material is indeed flowing into the cluster core, then we should observe a build up of low-entropy gas in the core. To test this hypothesis, we measure the ``cool core mass'' for all clusters in our sample with K$_0<30$ keV cm$^2$. 
The cool core mass is defined as: \begin{equation} \textrm{M}_{cool} = 4\pi\int_0^{0.1\textrm{R}_{500}} (\rho_{g}-\left<\rho_{g,\textrm{NCC}}\right>)r^2dr , \end{equation} \noindent{}where $\left<\rho_{g,\textrm{NCC}}\right>$ represents the median non-cool core (K$_0>150$ keV cm$^2$) density profile (Figure \ref{fig:rho_evol}) and the outer radius of 0.1R$_{500}$ is roughly where the uncertainty in $\rho_g$ is similar in scale to the difference between the median cool core and non-cool core profiles. In Figure \ref{fig:ccev}, we plot the cool core mass, M$_{cool}$, versus redshift for the full sample of SPT-XVP clusters with K$_0<30$ keV cm$^2$, including 5 nearby ($z<0.1$) clusters from \cite{vikhlinin06}. We find a rapid evolution in the total cool core mass, with an order of magnitude increase in M$_{cool}$ between $z\sim1$ and $z\sim0.5$. As we show in Figure \ref{fig:ccev}, the median growth is fully consistent with a constant cooling flow since $z=1$ with $d$M$/dt$ = 150 M$_{\odot}$ yr$^{-1}$. The range of cool core masses is consistent with cooling flows initiating at $0.8\lesssim z \lesssim 2$, providing the first constraints on the onset of cooling in galaxy clusters. Figure \ref{fig:ccev} suggests that cool cores at $z\sim0$ are a direct result of long-standing cooling flows (Figure \ref{fig:dmdt}) coupled with a constant entropy floor (Figure \ref{fig:dmdt_evolution}) -- most likely the result of AGN feedback. The long-standing balance between cooling and feedback prevents gas from cooling completely and, instead, leads to an accumulation of cool gas in the core of the cluster. This simple evolutionary scenario, which we offer as an explanation for the increase in central gas density in clusters from $z\sim1$ to $z\sim0.5$, is based on a sample of massive (M$_{500} > 2\times10^{14}$ M$_{\odot}$), rich galaxy clusters.
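The two numbers quoted above -- the cool-core mass integral and the constant $d$M$/dt = 150$ M$_{\odot}$ yr$^{-1}$ flow since $z=1$ -- can be cross-checked with a short calculation. Everything below is illustrative: the $\beta$-model density profiles are toy stand-ins for the measured profiles, and a flat $\Lambda$CDM cosmology with $H_0=70$ km s$^{-1}$ Mpc$^{-1}$ and $\Omega_m=0.3$ is an assumption.

```python
import math

# Sketch of the cool-core mass integral (equation above) for a toy
# pair of density profiles, plus the constant-cooling-flow mass budget.
# Profile shapes and cosmological parameters are assumptions, not the
# paper's fitted values.

M_SUN = 1.989e33                   # g
KPC = 3.086e21                     # cm
M_P = 1.673e-24                    # g
H0 = 70.0 * 1.0e5 / (KPC * 1.0e3)  # Hubble constant [1/s]

def n_e(r_kpc, n0, rc_kpc, beta=0.7):
    """Toy single-beta electron density profile [cm^-3]."""
    return n0 * (1.0 + (r_kpc / rc_kpc) ** 2) ** (-1.5 * beta)

def m_cool(r500_kpc=1200.0, n=2000):
    """4*pi * integral of (rho_CC - rho_NCC) r^2 dr out to 0.1*R500 [M_sun]."""
    r_out = 0.1 * r500_kpc
    dr = r_out / n
    total = 0.0
    for i in range(n):
        r = (i + 0.5) * dr
        # excess electron density of a cuspy cool core over a flat
        # non-cool core; rho_gas ~ mu_e * m_p * n_e with mu_e ~ 1.17
        d_ne = n_e(r, 0.10, 30.0) - n_e(r, 0.005, 200.0)
        total += 4.0 * math.pi * (r * KPC) ** 2 * (1.17 * M_P * d_ne) * (dr * KPC)
    return total / M_SUN

def lookback_gyr(z, n=1000):
    """Lookback time in a flat LCDM cosmology (Om=0.3) [Gyr]."""
    total, dz = 0.0, z / n
    for i in range(n):
        zi = (i + 0.5) * dz
        e = math.sqrt(0.3 * (1.0 + zi) ** 3 + 0.7)
        total += dz / ((1.0 + zi) * e)
    return total / H0 / 3.156e16   # seconds -> Gyr

# mass deposited by a steady 150 M_sun/yr flow between z=1 and z=0.5:
deposited = 150.0 * (lookback_gyr(1.0) - lookback_gyr(0.5)) * 1e9
print(m_cool(), deposited)
```

With these toy profiles, both the integrated cool-core mass and the mass deposited by the steady flow land in the $10^{11}$--$10^{12}$ M$_{\odot}$ range, the same order as the masses in Figure \ref{fig:ccev}.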
While the cooling rate (dM/dt) is proportional to cluster mass \citep{white97}, there is no evidence that the presence of a cool core is dependent on whether the host is a rich galaxy cluster or a poor group \citep{mcdonald11a}. Thus, if these trends hold at $z\gg0.1$, we would expect future surveys of low-mass, high-redshift clusters, via optical \citep[e.g., LSST;][]{lsst} or X-ray \citep[e.g., eRosita;][]{erosita} detection, to see a similar decline in the cuspiness of cool cores at high redshift. At this point, however, such an extrapolation to lower masses is purely speculative. \subsection{The Evolution of ICM Cooling} In Figure \ref{fig:dmdt_evolution}, we show that there is no measurable evolution in the minimum central entropy, K$_0$, over the range $0.3<z<1.2$. Coupled with the apparent increase in central density, this would imply that cool cores today are \emph{warmer} than their high-$z$ counterparts. Since a detailed spatial comparison, such as we did for density, is more challenging with a spectroscopically-measured quantity, we reduce the problem to a single measurement of ``central'' temperature. Here we define the central temperature as the spectroscopically-measured temperature within 0.05R$_{500}$, with central point sources masked. We consider here only systems with K$_0<30$ keV cm$^2$, which are the most centrally concentrated systems in our sample, allowing us to use a smaller aperture than in Figure 1. In Figure \ref{fig:ktcore}, we see that, indeed, for clusters classified as cool core on the basis of their central entropy (K$_{0}<30$ keV cm$^2$), central temperature increases with decreasing redshift. The slope of this relation is equally consistent with the expected self-similar evolution, as well as what is required to have no evolution in K$_0$ over this redshift interval (dashed line, see also Figure \ref{fig:dmdt_evolution}). 
This figure seems to suggest that there may be some additional heating in low-redshift cluster cores above the self-similar expectation, perhaps resulting from AGN feedback. However, we stress that these data are insufficient to distinguish between these two scenarios, and so we defer any further speculation on the evolution of the central temperature to an upcoming paper which will perform a careful stacking analysis of these clusters. The combined evidence presented in \S4.1 and \S4.2 supports the scenario that cooling material has been gradually building up in cluster cores over the last $\sim$8 Gyr, but has been prevented from completely cooling by an almost perfectly balanced heating source. While at first glance it would appear that the amount of energy injected by feedback must increase rapidly to offset increased cooling (L$_{\textrm{cool}} \propto n_e^2$T$^{1/2}$), much of this is offset by the fact that the gravitational potential in the core is increasing, leading to more heating as cooling material falls into the potential well. So, while cool cores are becoming denser with decreasing redshift, the actual cooling rates, and by extension the energy needed for feedback to offset cooling, have remained nearly constant since $z\sim1$. Recent observations of the ``Phoenix Cluster'' \citep{mcdonald12c, mcdonald13} suggest that some clusters may undergo episodes of runaway cooling, perhaps before the feedback responsible for regulating the cooling flow was fully established, or that the feedback mechanism is strongly episodic. \subsection{The Cool Core / Cooling Flow Fraction} Much effort has recently focused on determining the fraction of high-redshift clusters which harbor a cool core \citep{vikhlinin07,santos08,santos10,samuele11,mcdonald11c,semler12}. However, based on the results of this paper, we now know that the inferred evolution in the cool core fraction depends strongly on the criteria that are used to classify cool cores.
In Figure \ref{fig:ccfrac} we demonstrate this point, showing that the measured fraction of high-$z$ cool cores is drastically different if the classification of cool cores is based on the presence of \emph{cooling} (K$_0$, t$_{\textrm{cool},0}$, $d$M$/dt$; $\sim$35\%) or \emph{cooled} ($\alpha$, c$_{\rm{SB}}$; $\sim$5\%) gas. This figure shows that, at high redshift, it is important to differentiate between ``cooling flows'' and ``cool cores'' when classifying galaxy clusters as cooling or not -- a distinction which is unnecessary in nearby clusters. \begin{figure}[htb] \centering \includegraphics[width=0.49\textwidth]{plots/ccfrac.pdf} \caption{The fraction of clusters harboring a cooling flow or cool core, as determined by a variety of indicators, versus redshift. This figure demonstrates the difficulty in classifying cool core clusters at $z>0.75$, where the cooling rate is high, but the density profile is not cuspy. Low redshift points here come from Figures \ref{fig:dmdt_evolution} and \ref{fig:csb_evol}. We have chosen slightly different thresholds for K$_0$ and c$_{\rm{SB}}$ than in previous plots, in order to reduce the scatter at $z<0.5$ where these parameters should all agree on the cool core fraction. We show in the lower left corner the typical uncertainty on the cool core fraction for each bin. For comparison, we also show the fraction of low-redshift clusters with strong emission line nebulae, which are generally indicative of a cool core \citep[gray shaded region;][]{mcdonald11c}.} \label{fig:ccfrac} \end{figure} Figure \ref{fig:ccfrac} shows that the fraction of strongly-cooling clusters ($d$M$/dt>50$ M$_{\odot}$ yr$^{-1}$, K$_0<35$ keV cm$^2$, $t_{\textrm{cool},0}<1$ Gyr) undergoes little evolution over the range $0.3<z<1.2$. This is qualitatively apparent in Figure \ref{fig:dmdt_evolution}. The fraction of strongly-cooling clusters increases from 25$^{+11}_{-5}$\% to 35$^{+10}_{-10}$\%, consistent at the $1\sigma$ level with no change. 
This figure shows that, not only is the rate at which the ICM is cooling roughly constant since $z\sim1$ (Figure \ref{fig:dmdt} and \ref{fig:dmdt_evolution}), but the fraction of clusters which experience strong cooling is also nearly constant over similar timescales. It is also worth noting here the overall agreement in Figure \ref{fig:ccfrac} between the evolution of ICM cooling inferred from X-ray properties (this work) and optical properties \citep{mcdonald11c} at $z\leq0.5$. The latter sample was drawn from optically-selected catalogues, and used emission-line nebulae as a probe of ICM cooling. This overall agreement suggests that the evolution of cooling properties is relatively independent of how the clusters are selected (optical vs SZ). The steep rise in the cooling fraction at $z<0.5$ was interpreted by \cite{mcdonald11c} as being due to timing -- we are seeing clusters transitioning from ``weak'' to ``strong'' cool cores as the central cooling time drops over time. \subsection{Potential Biases} While the observed cool core evolution presented here is interesting, it may suffer from a combination of several biases in both the sample selection and in the analysis. Below we address four biases and quantify their effects on the observed cool core evolution. \subsubsection{3-Dimensional Mass Modeling} \begin{figure}[htb] \centering \includegraphics[width=0.49\textwidth]{plots/compare_k0.pdf}\\ \caption{Comparison of central entropy (K$_0$) measured in a 2-dimensional aperture with $r<0.1$R$_{500}$ to our 3-dimensional models projected onto the same annulus. In the lower panel the mean residuals in three different redshift bins are shown as black crosses. This figure demonstrates that the 2-dimensional and 3-dimensional measurements agree well with each other, and any differences are redshift independent. 
} \label{fig:compare_k0} \end{figure} The results presented thus far rely on our ability to estimate the central temperature based on assumptions about the dark matter halo and hydrostatic equilibrium (see \S2.3). While we established in \S2.5 that these estimates are reliable, it is worthwhile to investigate whether any of the observed evolutionary trends are due to this approach. In Figure \ref{fig:compare_k0}, we compare the central entropy calculated using a 2-dimensional, spectroscopically-measured temperature (see \S2.5) to our 3-dimensional model calculation. We find very good one-to-one agreement between these two quantities, with the scatter being uncorrelated with redshift. This suggests that the observed lack of evolution in K$_0$ is not a result of our modeling technique. We further demonstrate this in Figure \ref{fig:compare_k0evol}, showing the evolution of the central entropy based on the 2-dimensional central temperature. Comparing to low-redshift clusters from the CCCP, with K$_{0,2D}$ calculated in the same way, we confirm the lack of evolution in the central entropy from Figure \ref{fig:dmdt_evolution} over the range $0<z<1.2$. \begin{figure}[htb] \centering \includegraphics[width=0.45\textwidth]{plots/compare_k0evol2.pdf}\\ \caption{Similar to Figure \ref{fig:dmdt_evolution}, but with the central entropy calculated using a 2-dimensional aperture. Open triangles show low-redshift clusters from the CCCP, while filled crosses represent data from this work (SPT-XVP). This figure demonstrates that, regardless of the method used to estimate the central entropy, there appears to be no evolution over the range $0<z<1.2$. } \label{fig:compare_k0evol} \end{figure} \subsubsection{Increased SZ Signal in Cool Cores} In the central regions of cool core clusters, the increased density leads to a substantial increase in the inner pressure profile \citep{planck12}. 
This should, in turn, lead to an increase in the SZ detection significance, biasing SZ-selected samples towards detecting clusters with dense cores. However, given the small relative volume and mass of cool cores to the rest of the cluster, we expect this bias to be small. This bias was first quantified via detailed numerical simulations by \cite{motl05}. These authors found that integrated SZ quantities were relatively unbiased. In simulations that allowed unrestricted radiative cooling, the logarithmic slope, $\alpha$, of the $M_{500}-y_{SZ}$ relation increased by $\sim$7.5\% over their non-cooling counterparts. It is well understood that galaxy clusters which are simulated without feedback or star formation become too centrally concentrated \citep[the ``over-cooling problem'';][]{balogh01}, so this represents the upper limit of the SZ bias due to the presence of ICM cooling. When \cite{motl05} include star formation and stellar feedback -- which yields the most realistic-looking cool core clusters -- the difference in $\alpha$ between cooling and non-cooling clusters is reduced to $\sim$1\%. The relatively small bias of SZ integrated quantities was confirmed by \cite{pipino10}, who explicitly simulated the bias for a SPT-like survey and found that, at masses above $\sim2\times10^{14}$ M$_{\odot}$, the observed fraction of non-cool cores with the SPT should be nearly identical to the true fraction. In an upcoming publication (Lin et~al.~ in prep), we further show that this small bias is nearly redshift independent. Thus, while there is a small bias in the SZ signal due to the presence of a low-entropy core, we do not expect this bias to seriously alter our results, due both to its small magnitude and weak redshift dependence. \subsubsection{X-ray Centroid Determination} In \S2.2 we describe our method of determining the cluster center, which is based on the X-ray emission in an annulus of 250--500~kpc.
This method, which reduces scatter in X-ray scaling relations by finding the large-scale center rather than what may be the displaced core, will yield less centrally-concentrated density profiles than if we chose the X-ray peak as the cluster center. Regardless of how the center is defined, we can investigate how our choice can affect the resulting density profile. \begin{figure}[htb] \centering \includegraphics[width=0.49\textwidth]{plots/csb_compare.pdf} \caption{Comparison of X-ray surface brightness concentration (c$_{SB}$) measured around the X-ray peak and the large-scale centroid. Dotted lines represent the thresholds for weak cool cores (c$_{SB}>0.075$) and strong cool cores (c$_{SB}>0.155$ from \cite{santos08}). This figure demonstrates that, while c$_{SB}$ is biased high when measuring around the X-ray peak, this bias appears to have no redshift dependence and is likely not responsible for the observed evolution in cool core density profiles. The three outliers in this plot are SPT-CLJ0102-4915, SPT-CLJ0411-4819, and SPT-CLJ0307-6226 -- all of which have cores displaced from the centroid of the large-scale emission.} \label{fig:csb_compare} \end{figure} In Figure \ref{fig:csb_compare}, we compare the measured surface brightness concentration \citep[c$_{SB}$; ][]{santos08} based on our large-scale centroid and the X-ray peak. This figure confirms that, for relaxed strong cool cores, the X-ray peak and the large-scale centroid are nearly equivalent. By switching to c$_{SB,peak}$, the number of high-$z$ strong cool cores (c$_{SB} > 0.155$) would remain constant at 1, while the number of low-$z$ strong cool cores would increase from 5 to 7. Repeating this exercise for moderate cool cores ($0.075 < c_{SB} < 0.155$) we find increases of 5 (from 1 to 6) and 2 (from 3 to 5) for high and low redshift clusters, respectively. 
This large difference is primarily due to the arbitrary definition of moderate cool cores -- if we switched to a threshold of 0.07 rather than 0.075, the number of high-$z$ clusters which would be re-classified as moderate cool cores by re-defining the center would decrease from 5 to 2. Perhaps most importantly, the increase in c$_{SB}$ resulting from changing the center to the X-ray peak appears to have no dependence on redshift, as the lower panel of Figure \ref{fig:csb_compare} demonstrates. Thus, while the choice of center certainly affects the shape of the density profile, there is no evidence that this could result in low-$z$ clusters being measured to be more centrally concentrated than their high-$z$ counterparts. \subsubsection{Radio-Loud and Star-forming BCGs} There is a strong correlation in the local Universe between the presence of a cool core and radio emission from the BCG \citep{sun09b}, which may conspire to fill in the SZ signal for the strongest cool cores. Intuitively, this bias should tend to be strongest for nearby clusters (since the SZ signal is nearly redshift independent, but radio flux is not), which would lead to a bias \emph{against} detecting strong cool cores at low redshift with the SZ effect. This is exactly the opposite of what we observe -- the strongest cool cores in our sample are all at $z<0.75$, with a general lack of such systems at high redshift. We can further quantify this bias by appealing to Figure 3 from \cite{sayers13}, which presents a correlation between radio flux density and the SZ bias for Bolocam, which has a similar frequency coverage to SPT. This figure demonstrates that a 140 GHz flux density of $>$0.5 mJy is required to produce more than a 1\% change in the SZ S/N measurement. 
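The flux-density estimate in the following paragraph can be reproduced from the luminosity distance and spectral index alone. A sketch, assuming that the quoted radio luminosity is defined at a rest-frame frequency of 1.4 GHz and adopting a flat $\Lambda$CDM cosmology ($H_0=70$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_m=0.3$):

```python
import math

# 140 GHz flux density of a cluster-core radio source, sketched for
# L(1.4 GHz) = 1e32 erg/s/Hz and spectral index alpha = -0.8
# (S_nu ~ nu^alpha).  The 1.4 GHz reference frequency and the
# cosmological parameters are our assumptions.

C_KM_S = 2.998e5
H0 = 70.0            # km/s/Mpc
OM, OL = 0.3, 0.7
MPC_CM = 3.086e24

def lum_dist_cm(z, n=2000):
    """Luminosity distance for a flat universe (midpoint quadrature)."""
    dz, dc = z / n, 0.0
    for i in range(n):
        zi = (i + 0.5) * dz
        dc += dz / math.sqrt(OM * (1.0 + zi) ** 3 + OL)
    dc *= C_KM_S / H0            # comoving distance [Mpc]
    return (1.0 + z) * dc * MPC_CM

def flux_mjy(z, L_ref=1e32, nu_ref=1.4, nu_obs=140.0, alpha=-0.8):
    """Observed flux density [mJy], including the (1+z)^(1+alpha) K-correction."""
    L_nu = L_ref * (nu_obs / nu_ref) ** alpha        # rest-frame 140 GHz
    dl = lum_dist_cm(z)
    s_cgs = L_nu * (1.0 + z) ** (1.0 + alpha) / (4.0 * math.pi * dl ** 2)
    return s_cgs / 1.0e-26       # 1 mJy = 1e-26 erg/s/cm^2/Hz

print(flux_mjy(0.3))   # ~0.9 mJy at z = 0.3; falls rapidly at higher z
```

Because the flux falls with the square of the luminosity distance while the SZ signal is nearly redshift independent, the bias is strongest at low redshift, as argued in the text.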
Assuming a typical radio luminosity for strong cool cores of 10$^{32}$ erg s$^{-1}$ Hz$^{-1}$ \citep{sun09b}, and a spectral index of $\alpha=-0.8$, we find a typical 140 GHz flux at $z=0.3$ of 0.9 mJy, corresponding to an SZ bias of $\sim3$\%. While there certainly may be systems with higher radio luminosity in this sample, we note that this bias becomes substantially weaker with increasing redshift. This conclusion qualitatively agrees with estimates from radio observations of clusters which found that correlated radio emission is negligible relative to the SZ signal at 150 GHz for typical clusters in the SPT mass and redshift range \citep{lin09,sehgal10}. Thus, we conclude that radio-loud BCGs should not substantially bias our sample against cool core clusters, and certainly cannot drive the growth of cool cores that we observe. The most star-forming BCG in this sample is in the Phoenix cluster \cite[SPT-CLJ2344-4243;][]{mcdonald12c,mcdonald13}, with a star formation rate of $\sim$800 M$_{\odot}$ yr$^{-1}$. In \cite{mcdonald12c} we demonstrated that, at 1.5mm and 2.0mm, the flux of this source would be $\sim$0.5 mJy and $\sim$0.1 mJy, respectively. This is significantly lower than the detection limit of the SPT ($\sim$20 mJy), suggesting that star formation has a negligible effect on the SZ signal. Since the other 82 clusters in this sample have \emph{significantly} lower star formation rates, we conclude that star-forming BCGs are not biasing our selection for or against cool cores. \subsubsection{X-ray Underluminous Clusters} If there is a population of galaxy clusters at high-$z$ which meet our mass threshold (M$_{500}>2\times10^{14}$ M$_{\odot}$) but are gas-poor ($f_{gas} \ll 0.125$) and, as a result, go undetected in the SPT survey, then our estimate of the cool core fraction (e.g., Figure \ref{fig:ccfrac}) would be biased high.
Such ``X-ray underluminous'' clusters have been identified in large optical surveys, and may be the result of either delayed assembly of the ICM or strong interactions that strip a substantial fraction of the hot gas. However, these systems are, in general, lower mass than the clusters which we consider here. Indeed, \cite{koester07} show, using a sample of 13,823 optically-selected galaxy clusters, that there is a near 1-to-1 correspondence between optically-selected and X-ray selected clusters at the high mass end, with the fraction of X-ray underluminous clusters increasing with decreasing cluster richness. Thus, assuming we can extrapolate this to high redshift, we do not expect that our results are seriously biased by the presence of a significant population of massive, high-redshift, gas-poor galaxy clusters. \section{Summary} We present X-ray observations of 83 massive SZ-selected clusters from the 2500 deg$^2$ South Pole Telescope SZ (SPT-SZ) survey, which includes the first results of a large \emph{Chandra X-ray Observatory} program to observe the 80 most-significant clusters detected at $z > 0.4$ from the first 2000 deg$^2$ of the SPT-SZ survey. This uniformly-selected sample provides a unique opportunity to study the evolution of the cooling intracluster medium in clusters from $z=0.3$ to $z=1.2$. We find no evolution in the cooling properties of the intracluster medium over this large redshift range, with the average entropy and cooling time profiles remaining roughly constant in the inner $\sim$100 kpc despite the outer profile ($r>200$ kpc) following self-similar evolution. The distribution of the central entropy (K$_0$), central cooling time ($t_{cool,0}$), and mass deposition rate ($d$M$/dt$) in cool core clusters remains unchanged from $z=0$ to $z=1.2$. Further, the fraction of clusters experiencing strong cooling ($\sim$30\%) has not changed significantly over the 8~Gyr sampled here. 
The fact that the cooling properties of galaxy clusters are not evolving suggests that feedback is balancing cooling on very long ($\sim$8 Gyr) timescales. We observe a strong evolution in the central density of galaxy clusters over this same timescale, with the average $\rho_{g,0}$/$\rho_{crit}$ increasing by a factor of $\sim$10 in this same redshift interval. We find a general lack of centrally concentrated cool cores at $z>0.75$, consistent with earlier reports of a lack of cool cores at high redshift from X-ray surveys. We show that this steady growth of cool cores from $z>1$ to $z=0$ is consistent with a cooling flow of $\sim$150 M$_{\odot}$ yr$^{-1}$ which is unable to reach entropies below 10 keV cm$^2$, leading to an accumulation of cool gas in the central $\sim$100~kpc. In order to build cool cores of the observed masses at $z\sim0$, we estimate that cooling flows would need to begin at $0.8<z<2.0$ in most massive galaxy clusters. This work represents the first observations of galaxy clusters that span a broad enough redshift range and are sufficiently well-selected to track the growth of cool cores from their formation. These measurements give further evidence that stable, long-standing feedback is required to both halt cooling of the ICM to low temperatures and grow cool, dense cores. This dataset, which contains dozens of new clusters, both cooling and non-cooling, at $0.3<z<1.2$ will prove invaluable for understanding the complex interplay between cooling and feedback in galaxy cluster cores, the formation and evolution of galaxy clusters, and the growth of massive central galaxies in cluster cores. \section*{Acknowledgements} M. M. acknowledges support by NASA through a Hubble Fellowship grant HST-HF51308.01-A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA, under contract NAS 5-26555. 
The South Pole Telescope program is supported by the National Science Foundation through grant ANT-0638937. Partial support is also provided by the NSF Physics Frontier Center grant PHY-0114422 to the Kavli Institute of Cosmological Physics at the University of Chicago, the Kavli Foundation, and the Gordon and Betty Moore Foundation. Support for X-ray analysis was provided by NASA through Chandra Award Numbers 12800071, 12800088, and 13800883 issued by the Chandra X-ray Observatory Center, which is operated by the Smithsonian Astrophysical Observatory for and on behalf of NASA. Galaxy cluster research at Harvard is supported by NSF grant AST-1009012 and at SAO in part by NSF grants AST-1009649 and MRI-0723073. Argonne National Laboratory's work was supported under U.S. Department of Energy contract DE-AC02-06CH11357.
\section{Introduction} The description of inhomogeneous charge and spin phases in the Hubbard model is a topic of current interest, mainly because it is now generally accepted that the high-T$_c$ compounds, at least in the underdoped regime, are intrinsically electronically inhomogeneous systems. A powerful tool for the investigation of such inhomogeneous electronic states is the unrestricted Hartree-Fock (HF) scheme which allows for the diagonalization of reasonable cluster sizes. Recently an extension of this approach based on the Gutzwiller wave function \cite{GSH} has been shown to significantly improve the HF solutions, which strongly underestimate the attraction between charge carriers. In this paper we extend the approach of Ref. \cite{GSH} to include transversal spin degrees of freedom, which permits the description of coplanar and three-dimensional inhomogeneous spin textures. Among coplanar spin structures, homogeneous spiral solutions have been studied by a large variety of methods (see e.g. \cite{FLECK,FRESARD}) which show that a small number of holes doped into the half-filled system leads to a ${\bf Q}\sim (1,1)$ spiral phase which changes its direction to ${\bf Q}\sim (1,0)$ above some critical concentration. Relaxing the constraint of a homogeneous charge distribution, the formation of coplanar, vortexlike phases has been investigated in Ref. \cite{BISHOP} using an unrestricted HF approach. These are configurations where the antiferromagnetic (AF) spin order rotates by multiples of $2\pi$ around the localized hole. Due to this twist in the magnetization their energy increases $\sim \ln(L^2)$, which implies their instability in large clusters. A three-dimensional spin texture which is known to be topologically stable is the skyrmion as a solution of the O(3) non-linear $\sigma$-model \cite{BELAVIN}. It has also been studied by several authors for an AF background \cite{SHRAIMAN,GOODING}, mainly concentrating on small clusters.
However, it is still controversial whether skyrmion solutions also exist on discrete lattices. Whereas unrestricted HF theory for the two-dimensional Hubbard model predicts the decay of skyrmions into conventional spin-polarons \cite{BISHOP}, exact diagonalization studies of a small cluster within the t-J model \cite{HAAS} seem to support their existence even when one takes the contribution of the skyrmion far field into account. Within our slave-boson approach we will show below that skyrmion textures are stable solutions of the discrete two-dimensional Hubbard model when the system is doped with two holes away from half-filling. Connected with the discovery of static stripe order in La$_{2}$NiO$_4$ and La$_{1.48}$Nd$_{0.4}$Sr$_{0.12}$CuO$_{4}$ \cite{TRAN,TRAN1}, a lot of work has been done in order to understand the different domain wall structures in these compounds \cite{VARIOUS,OLES,SCCG,WHITE}. Whereas in the Ni-doped compounds one finds the stripes along the diagonal with one hole per Ni site, the charge and spin order in the Nd-doped cuprates is along the copper-oxygen bond direction with only one hole per every second copper site. Since half-filled horizontal walls are at odds with HF calculations, the inclusion of correlations turns out to be a necessary ingredient for the study of domain walls in these systems \cite{SCCG,WHITE}. In addition, long-range Coulomb interactions can play an important role in stabilizing half-filled vertical stripes \cite{SCCG}. Based on a Landau free-energy analysis of coupled charge and spin-density-wave order parameters, it has been shown in Ref. \cite{ZACHAR} that within some region of parameter space domain walls may have a spiral component. Whereas this type of ordering is not observed in the Ni-oxides, some spiral contribution cannot be rigorously excluded in the Nd-doped compounds \cite{TRAN1}.
According to our analysis presented below, elliptical stripes are not stable for concentrations around $1/8$ but may be formed in the very low doping limit. The rest of the paper is organized as follows: In Sec. II we give a detailed description of the formalism, in Sec. III we present the results for vortex, skyrmion and elliptical domain wall solutions, respectively, and in Sec. IV we summarize our conclusions. \section{Model and Formalism} We consider the two-dimensional Hubbard model on a square lattice, with hopping restricted to nearest neighbors (indicated by the bracket $<i,j>$) \begin{equation}\label{HM} H=-t\sum_{<ij>,\sigma}c_{i,\sigma}^{\dagger}c_{j,\sigma} + U\sum_{i} n_{i,\uparrow}n_{i,\downarrow} \end{equation} where $c_{i,\sigma}^{(\dagger)}$ destroys (creates) an electron with spin $\sigma$ at site $i$, and $n_{i,\sigma}=c_{i,\sigma}^{\dagger}c_{i,\sigma}$. $U$ is the on-site Hubbard repulsion and $t$ the transfer parameter. For the calculations in Sec. III we take $t=1$. In the following we use a spin-rotation-invariant form \cite{WH} of the slave-boson representation introduced by Kotliar and Ruckenstein in Ref. \cite{KOTLIAR}. The subsidiary boson fields $e_{i}^{(\dagger)}$, $d_{i}^{(\dagger)}$ stand for the annihilation (creation) of empty and doubly occupied sites, respectively, whereas the matrix \begin{equation}\label{PI} {\bf p_{i}} = \left( \begin{array}{cc} p_{i,\uparrow} & \frac{1}{\sqrt{2}}(p_{i,x}-i p_{i,y}) \\ \frac{1}{\sqrt{2}}(p_{i,x}+i p_{i,y}) & p_{i,\downarrow} \end{array}\right) \end{equation} represents the case of a singly occupied site. Since we consider the mean-field limit, all boson operators will be approximated as numbers.
Besides the completeness condition \begin{equation}\label{CONST1} e_{i}^{2}+tr({\bf p_{i}^{*}p_{i}})+d_{i}^{2}=1 \end{equation} the boson fields are constrained by the following relations \begin{equation}\label{CONST2} tr({\bf \tau_{\mu} p_{i}^{*}p_{i}})+2 \delta_{\mu,0} d_{i}^{2} =\sum_{\sigma \sigma'}c_{i,\sigma}^{\dagger}({\bf \tau_{\mu}}) _{\sigma \sigma'} c_{i,\sigma'} \end{equation} where ${\bf \tau_{\mu=1,2,3}}$ are the Pauli spin matrices and ${\bf \tau_{\mu=0}} \equiv {\bf 1} $. Then, in the physical subspace defined by Eqs. (\ref{CONST1},\ref{CONST2}) the Hamiltonian (\ref{HM}) takes the form \begin{equation} \tilde{H}= -t\sum_{<ij>,\sigma \sigma_1 \sigma_2} z_{i,\sigma \sigma_1}^{*}c_{i,\sigma_1}^{\dagger} c_{j,\sigma_2}z_{j,\sigma_2 \sigma} + U\sum_{i}d_{i}^{2} \label{SB} \end{equation} where \begin{eqnarray} {\bf z_{i}} &=& {\bf L_{i}}(e_i{\bf p_i} + {\bf \tilde{p}_i} d_i) {\bf R_i} \label{zdef} \\ {\bf L_i} &=& \left\lbrack (1-d_i^2){\bf 1} - {\bf p_{i}^{*}p_{i}} \right\rbrack^{-1/2} \\ {\bf R_i} &=& \left\lbrack (1-e_i^2){\bf 1} - {\bf \tilde{p}_{i}^{*} \tilde{p}_{i}} \right\rbrack^{-1/2} \end{eqnarray} The matrices ${\bf L_i}$ and ${\bf R_i}$ guarantee the correct behavior in the limit $U \rightarrow 0$ within the mean-field approximation, and ${\bf \tilde{p}_i}=\hat{T} {\bf p_i} \hat{T}^{-1}$ is the time-reversal transform of ${\bf p_i}$. The matrix elements of ${\bf z_i}$ can be calculated by transforming to a diagonal representation for the ${\bf p_i}$ (see Appendix). The resulting effective one-particle Hamiltonian describes the dynamics of particles which, upon hopping between sites, are subjected to a modulation of their spin amplitude and spin direction, respectively.
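As a consistency check of Eq. (\ref{zdef}) (a sketch for the collinear, spin-diagonal case), the hopping renormalization indeed reduces to unity at $U=0$: for an uncorrelated state with occupations $n_{i\sigma}$ one has $d_i^2=n_{i\uparrow}n_{i\downarrow}$, $p_{i\sigma}^2=n_{i\sigma}(1-n_{i,-\sigma})$ and $e_i^2=(1-n_{i\uparrow})(1-n_{i\downarrow})$, so that
\[ z_{i\sigma}=\frac{e_i p_{i\sigma}+p_{i,-\sigma}d_i}{\sqrt{(1-d_i^2-p_{i\sigma}^2)(1-e_i^2-p_{i,-\sigma}^2)}} =\frac{\sqrt{n_{i\sigma}(1-n_{i\sigma})}\,\left\lbrack(1-n_{i,-\sigma})+n_{i,-\sigma}\right\rbrack}{\sqrt{(1-n_{i\sigma})\,n_{i\sigma}}}=1\,, \]
which is precisely the property enforced by the matrices ${\bf L_i}$ and ${\bf R_i}$. We now return to the effective one-particle Hamiltonian (\ref{SB}).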
It can be diagonalized by the transformation \begin{equation} c_{i,\sigma}=\sum_{k}\Phi_{i,\sigma}(k)a_{k} \end{equation} where the orthogonality of the transformation requires \begin{equation}\label{CONST3} \sum_{i,\sigma}\Phi^{\ast}_{i,\sigma}(k)\Phi_{i,\sigma}(q)=\delta_{kq}. \end{equation} Given a system with $N_{el}$ particles, we finally obtain for the total energy \begin{eqnarray} E_{tot}&=&-t\sum_{<ij>,\sigma \sigma_1 \sigma_2} z_{i,\sigma \sigma_1}^{*} z_{j,\sigma_2 \sigma}\sum_{k=1}^{N_{el}} \Phi^{\ast}_{i,\sigma_1}(k)\Phi_{j,\sigma_2}(k) \nonumber \\ &+&U\sum_{i}d_{i}^{2} \label{E1} \end{eqnarray} which has to be evaluated within the constraints (\ref{CONST1},\ref{CONST2}, \ref{CONST3}). This is achieved by adding these constraints quadratically to Eq. (\ref{E1}) following the procedure already applied in the Gutzwiller limit \cite{GSH}. The resulting energy functional then has to be minimized with respect to the fermionic and bosonic fields, which is most conveniently done using a standard conjugate-gradient algorithm, since the gradients of the energy functional can be calculated analytically. In order not to end up in pathological side minima we have generally started the minimization from an HF Ansatz for the amplitudes $\Phi_{i,\sigma}(k)$. \section{Results} Since in a previous publication \cite{GSH} the unrestricted slave-boson approximation has already been applied to the description of collinear spin structures, we will restrict ourselves here to textures with two- and three-dimensional spin ordering. In this section we discuss the spin structure of vortex, skyrmion and elliptical domain wall textures, which turn out to be stable energy minima within our slave-boson approach. Obviously, on finite lattices one has to use open boundary conditions in order to describe higher-dimensional spin structures, and the cluster sizes we consider in the following range from $6\times 6$ up to $10\times 10$.
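The minimization strategy described in Sec. II can be illustrated with a toy sketch (our invented example: a simple quadratic energy with one linear constraint added quadratically, not the functional of Eq. (\ref{E1})), using a conjugate-gradient routine fed with analytic gradients:

```python
import numpy as np
from scipy.optimize import minimize

# Toy problem: minimize E(x, y) = x^2 + y^2 subject to x + y = 1,
# with the constraint added quadratically via a penalty weight c.
c = 1.0e3

def energy(v):
    x, y = v
    g = x + y - 1.0                       # constraint residual
    return x**2 + y**2 + c * g**2

def gradient(v):                          # analytic gradient of the penalized functional
    x, y = v
    g = x + y - 1.0
    return np.array([2*x + 2*c*g, 2*y + 2*c*g])

res = minimize(energy, x0=np.array([0.3, -0.2]), jac=gradient, method='CG')
x_opt, y_opt = res.x                      # penalty solution: x = y = c/(1 + 2c)
```

With penalty weight $c=10^3$ the minimizer lands within $10^{-3}$ of the constrained optimum $(1/2,1/2)$; the same pattern (penalized functional plus analytic gradients passed to a conjugate-gradient routine) is what is described above for Eq. (\ref{E1}).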
The incorporation of transversal spin degrees of freedom allows for the definition of spin currents $\nabla \cdot {\bf j} = -\partial_t{\bf S}$. The flow direction of these currents is along the bonds of the lattice; however, they are additionally vectorial in spin space, and within the present approach we obtain for the $i$-th component of the spin current flowing between sites $<nm>$: \begin{equation} j^{i}_{n m} \sim Im \sum_{\sigma_1 \sigma_2 \sigma_3 \sigma_4} \sum_{k=1}^{N_{el}} \Phi^{\ast}_{n,\sigma_1}(k) \tau^i_{\sigma_1 \sigma_2} z_{n,\sigma_3 \sigma_2}^{\ast}z_{m,\sigma_4 \sigma_3} \Phi_{m,\sigma_4}(k) \end{equation} where $\tau^i$ are the Pauli matrices and the hopping factors $z_{n,\sigma \sigma'}$ are defined in Eq.\ (\ref{zdef}). The $i$-th component of ${\bf j}_{nm}$ can be thought of as measuring the spin-twist in the orthogonal directions $l,k \ne i$ so that the total current components ${j^i_n}$, which are plotted in the results, visualize the direction of maximal twist in the spin components $l,k \ne i$ at lattice site $n$. \subsection{Vortex States} The structure of vortex solutions, where the magnetization rotates in a plane by some multiples of $2\pi$ around the localized holes, has already been studied in Ref. \cite{BISHOP} within unrestricted HF theory. We also obtain vortex states as local minima of the energy functional (\ref{E1}), where the total energy is about $5\%-10\%$ lower than in the HF approach, depending on lattice size and on-site repulsion $U$. However, for one hole away from half-filling, vortex solutions are always higher in energy than the conventional N\'{e}el ordered spin polaron. Moreover, their total energy increases logarithmically with the cluster size as a consequence of the twist between neighboring magnetization vectors, in agreement with Ref. \cite{BISHOP}. \begin{figure} {\hspace{0.7cm}{\psfig{figure=fig1.ps,width=6.5cm}}} \vspace*{4.5cm} {\small FIG. 1.
Spin structure and spin currents for a vortex-antivortex pair on a $10 \times 10$ lattice. The Hubbard on-site repulsion is $U=10t$.} \end{figure} This logarithmic divergence can be compensated when two holes form a vortex-antivortex pair. According to our calculations, the vortex cores are located at the center of diagonally next-nearest-neighbor plaquettes, thus separated by the plaquette where the two holes are localized. Fig.\ 1 shows the spin structure and spin current of such a vortex-antivortex pair on a $10 \times 10$ cluster for $U=10t$. All the spins lie in the xy-plane, which means that only the z-component of the spin current has a nonzero contribution. The spin field of this texture can be described by \begin{equation} \label{VAEQ} {\bf S}_i = S_0 e^{i {\bf Q R}_i} \lbrack \cos(\phi_1-\phi_2){\bf e_x} + \sin(\phi_1-\phi_2){\bf e_y}\rbrack \end{equation} where $\phi_{1}$ ($\phi_{2}$) refer to the angles between the x-axis and the vectors connecting vortex (antivortex) core and site ${\bf R}_i$. The AF wave vector is denoted by ${\bf Q}$. Concerning the stability of the vortex-antivortex pair, we obtain for the $10 \times 10$ cluster at $U=10t$ a binding energy of $E^{b}=-0.047t$ with respect to the N\'{e}el-type bipolaron. Taking into account the far field energy, calculated within the xy-model for the spin structure Eq.\ (\ref{VAEQ}) and an exchange constant of $J=4t^2/U=0.4t$, still results in a negative binding energy of $E^{b}_{tot}=-0.016t$. We note that the absolute value of $E^{b}_{tot}$ slightly decreases for smaller cluster sizes since the boundary spins do not properly adjust to the solution Eq.\ (\ref{VAEQ}). One can therefore safely conclude that vortex-antivortex pairs are also stable in the thermodynamic cluster limit.
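The winding structure implied by Eq.\ (\ref{VAEQ}) can be verified numerically. The short sketch below (our illustration, ignoring the staggered factor $e^{i{\bf QR}_i}$ and with hypothetical core positions) accumulates the wrapped increments of the angle $\phi_1-\phi_2$ along small loops around each core:

```python
import numpy as np

def winding(theta_fn, center, radius=0.9, npts=400):
    """Winding number of the angle field theta_fn along a circle around center."""
    t = np.linspace(0.0, 2.0*np.pi, npts, endpoint=False)
    pts = np.stack([center[0] + radius*np.cos(t),
                    center[1] + radius*np.sin(t)], axis=1)
    th = np.array([theta_fn(p) for p in pts])
    # wrap each increment into (-pi, pi] before summing
    dth = np.angle(np.exp(1j*np.diff(np.concatenate([th, th[:1]]))))
    return dth.sum() / (2.0*np.pi)

c1 = np.array([0.0, 0.0])     # vortex core (hypothetical position)
c2 = np.array([2.0, 2.0])     # antivortex core

def theta(p):
    # smooth part of Eq. (VAEQ): phi_1 - phi_2
    phi1 = np.arctan2(p[1]-c1[1], p[0]-c1[0])
    phi2 = np.arctan2(p[1]-c2[1], p[0]-c2[0])
    return phi1 - phi2

w1 = winding(theta, c1)   # +1 around the vortex core
w2 = winding(theta, c2)   # -1 around the antivortex core
```

The opposite windings $\pm 1$ around the two cores are what makes the logarithmic far-field energies of the pair cancel.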
\subsection{Skyrmions} On a discrete two-dimensional AF lattice, the spin structure of skyrmions, originally obtained as solutions of the O(3) non-linear $\sigma$-model \cite{BELAVIN}, has the form \cite{SHRAIMAN,GOODING} \begin{eqnarray} S_x &=& (-1)^{i_x+i_y}\frac{\lambda i_x}{i_x^2+i_y^2+\lambda^2}\nonumber\\ S_y &=& (-1)^{i_x+i_y}\frac{\lambda i_y}{i_x^2+i_y^2+\lambda^2} \label{SKF}\\ S_z &=& (-1)^{i_x+i_y}\frac{1}{2}\frac{i_x^2+i_y^2-\lambda^2} {i_x^2+i_y^2+\lambda^2}\nonumber \end{eqnarray} where $\lambda$ denotes the core size of the skyrmion and its center is located at $i_x=i_y=0$. In order to enhance convergence we initialized our minimization procedure with (non-self-consistent) HF wave functions corresponding to the spin fields Eqs.\ (\ref{SKF}). Despite an intensive search we could not obtain skyrmion states for one hole doped in the half-filled system. These solutions always converged towards a spin-polaron embedded into a collinear AF N\'{e}el state, as already observed in Ref. \cite{BISHOP}. \begin{figure} {\hspace{0.7cm}{\psfig{figure=fig2.ps,width=5.5cm}}} \vspace*{4cm} {\small FIG. 2. Charge ($\langle n_{i} \rangle$) and spin distribution for a skyrmion texture on an $8\times 8$ lattice for $U=10t$. } \end{figure} \end{multicols} \begin{figure} {\hspace{3cm}{\psfig{figure=fig3.ps,width=10.0cm}}} \vspace*{4.7cm} {\small FIG. 3. xy-, yz- and xz-projections of the spins together with the respective spin currents for the skyrmion shown in Fig.\ 2 } \end{figure} \begin{multicols}{2} The situation changes when removing two particles from the half-filled system. Fig.\ 2 displays the charge and spin structure in the case of 62 particles on an $8\times 8$ lattice for $U=10t$. The two holes are then localized on a $2\times 2$ plaquette at the skyrmion center and in their vicinity the spins show a remarkable deviation from the z-direction.
This is more easily seen in Fig.\ 3, where we have plotted the xy-, xz- and yz-spin projections, respectively, together with the corresponding spin currents. The xy-spin components rotate by $360^{\circ}$ around the skyrmion center, resulting in a circular spin current for $j^z$. However, all current components strongly decay for sites far away from the skyrmion center, indicating that the core size parameter $\lambda$ is small. Upon fitting our solutions to the skyrmion field Eq.\ (\ref{SKF}) we obtain $\lambda \approx 0.7$ for $U=10t$ and cluster sizes $6 \times 6$, $8 \times 8$ and $10 \times 10$, respectively. This already indicates that the skyrmion state should survive in the limit of large clusters. To assess the question of stability in more detail, we have also calculated the total energy using skyrmionic boundary conditions \cite{HAAS}. These are defined through an exchange field $J {\bf \cal S}(R,\lambda)$ to which the spins at the boundary are coupled, and ${\bf \cal S}(R,\lambda)$ has the form of Eq.\ (\ref{SKF}). For the exchange constant we take the strong-coupling value of the Hubbard model $J=4t^2/U$. The total energy is then evaluated as a function of $\lambda$, and from the result we subtract the energy contribution of the exchange field. This energy has to be compared with the corresponding value of a collinear bipolaron, where two holes are localized on neighboring sites within the N\'{e}el ordered system. The results are plotted in Fig.\ 4, again for $U=10t$ and three different cluster sizes. As can be seen, all curves display a clear minimum at some value of $\lambda$, indicating the presence of a stable skyrmion solution with a significantly lower energy than the collinear bipolaron. It should be mentioned that these minima for the corresponding one-hole doped systems are always at $\lambda=0$, i.e. the configuration of a conventional spin-polaron.
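The texture of Eq.\ (\ref{SKF}) carries unit topological charge. A quick numerical sketch (our illustration: the field is destaggered and normalized to $|{\bf n}|=1$, noting that the lattice field has $|{\bf S}|=1/2$; the grid extent and $\lambda$ are chosen arbitrarily) evaluates $Q=\frac{1}{4\pi}\int {\bf n}\cdot(\partial_x{\bf n}\times\partial_y{\bf n})\,dx\,dy$ by finite differences:

```python
import numpy as np

lam = 8.0                                  # core size (illustrative value)
x = np.arange(-80, 81, dtype=float)        # finite grid; far field slightly truncated
X, Y = np.meshgrid(x, x, indexing='ij')
r2 = X**2 + Y**2

# Smooth (destaggered) skyrmion field of Eq. (SKF), rescaled so |n| = 1:
# n = 2 S without the (-1)^(ix+iy) factor.
nx = 2.0 * lam * X / (r2 + lam**2)
ny = 2.0 * lam * Y / (r2 + lam**2)
nz = (r2 - lam**2) / (r2 + lam**2)
n = np.stack([nx, ny, nz])

dn_dx = np.gradient(n, axis=1)             # central differences, unit spacing
dn_dy = np.gradient(n, axis=2)
density = np.einsum('iab,iab->ab', n, np.cross(dn_dx, dn_dy, axis=0))
Q = density.sum() / (4.0 * np.pi)          # ~ -1 in this sign convention
```

Up to the truncation of the far field, $|Q|=1$: the configuration wraps the sphere exactly once, which is the origin of its topological stability.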
From the results shown in Fig.\ 4 one further sees that a lowering of energy with respect to the AF bipolaron is already obtained for $\lambda=0$, indicating that despite the imposed N\'{e}el boundaries the system has a skyrmion-like core. This energy shift increases with the system size since the central spins of large lattices can more properly adjust to the skyrmion state. From the fact that the $\lambda=0$ results for the $8\times 8$ and $10 \times 10$ clusters coincide within the numerical error, we conclude that the skyrmion solution should also survive in the thermodynamic limit. Also plotted in Fig.\ 4 with filled symbols are the energy differences between bipolarons and skyrmions calculated with open boundary conditions. As already mentioned, one obtains the same core parameter $\lambda$ for all three cluster sizes, which agrees with the position of the minimum of the solid line ($10 \times 10$ system). This feature also demonstrates that our largest cluster should already correctly describe the skyrmion structure of infinite clusters. Since the skyrmion on small lattices is very much influenced by the boundary conditions, the minimum of the dotted curve ($6 \times 6$ cluster) is shifted to a higher value of $\lambda \approx 1$. \begin{figure} {\hspace{1.5cm}{\psfig{figure=fig4.ps,width=4.0cm}}} {\small FIG. 4. Total energy of a two-hole skyrmion with respect to the energy of a N\'{e}el-type bipolaron as a function of the skyrmion core size parameter $\lambda$. The boundary spins have been coupled to the skyrmion solution Eq.\ (\ref{SKF}) via a mean-field exchange field. Solid line and circles: $10 \times 10$ lattice; Dashed line and squares: $8 \times 8$ lattice; Dotted line and diamonds: $6 \times 6$ lattice. The full symbols mark the energies for open boundary conditions. $U=10t$.} \end{figure} In order to compare the stability of skyrmion states with the vortex-antivortex solutions, one has to take into account the far field contribution to the total energy.
Considering again a $10 \times 10$ system and $U=10t$, the skyrmion binding energy with respect to the N\'{e}el-type bipolaron (cf. Fig.\ 4) is $E^{b}=-0.0265t$. Including the far field energy (taking again $J=0.4t$) one obtains $E^{b}_{tot}=-0.0071t$, which is approximately half the value of the vortex-antivortex binding energy. \subsection{Elliptical domain walls} The possibility that charge-spin coupling in correlated systems may induce the formation of elliptical domain walls was proposed by an analysis of the Landau free-energy functional for coupled charge- and spin-density waves \cite{ZACHAR}. These are coplanar spin structures where the spin components in the first harmonic are modulated as \begin{eqnarray} S_x({\bf r})&=& S_0 e^{i {\bf Q r}}\cos(\alpha) \cos({\bf qr}) \nonumber \\ S_y({\bf r})&=& \pm S_0 e^{i {\bf Q r}}\sin(\alpha) \sin({\bf qr}) \label{EDW} \end{eqnarray} The eccentricity of the elliptical domain wall is determined by $\alpha$, and ${\bf Q}$, ${\bf q}$ correspond to the wave vectors of commensurate AF and the domain wall periodicity, respectively. The case $\alpha=\pi/4$ describes an ideal spiral solution, whereas $\alpha=0$ reduces the spin structure to a collinear 'classical' domain wall. We have used (non-self-consistent) HF states corresponding to Eq.\ (\ref{EDW}) as starting fields for our minimization for different values of $\alpha$. Only vertical domain walls have been considered, and periodic boundary conditions were applied in the x- and y-direction, respectively. In the case of a completely filled domain wall (i.e. one hole per site along the wall) we found only collinear solutions, whereas coplanar structures become stable for half-filled walls. \begin{figure} {\hspace{-0.2cm}{\psfig{figure=fig5.ps,width=7cm}}} \vspace*{4.7cm} {\small FIG. 5. Two possible spin structures for elliptical domain walls together with the corresponding spin currents.
The on-site repulsion is $U=6t$ and periodic boundary conditions in x- and y-direction have been used. Shown are the results for a $9 \times 8$ lattice doped with 4 holes. } \end{figure} This is shown in Fig.\ 5 for a $9\times 8$ lattice with 68 particles, where we plot two kinds of possible spin structures that are local minima of the energy functional Eq.\ (\ref{E1}). Also shown are the respective spin currents (only z-components since the magnetization vector is completely in the xy-plane). Spin fields and currents display a period-quadrupled structure along the wall, which is a necessary condition for stability within any mean-field approach \cite{OLES}. Due to the stripe charge structure, the current flows are more complex than for the elliptical solution Eq.\ (\ref{EDW}), which predicts currents flowing in a single direction orthogonal to the wall. Instead, we also observe currents along the wall (Fig.\ 5a) and a vortex-antivortex structure in Fig.\ 5b. Although the elliptical stripes are local minima of the energy functional Eq.\ (\ref{E1}), they are slightly higher in energy ($\approx 5\%$) than collinear domain walls. These have been shown to correspond to the ground state when one takes long-range Coulomb interactions into account \cite{SCCG}. However, we do not expect the long-range contribution to differ significantly between collinear and elliptical stripe solutions. The structures shown in Fig.\ 5 correspond to systems with hole doping $1/18$, and we could not obtain elliptical solutions for higher doping. However, one can also study the very low doping limit by using open boundaries in the x-direction. Indeed, in this case elliptical half-filled stripes become favored with respect to collinear domain walls for not too large values of the on-site repulsion $U$ ($U<9t$). \begin{figure} {\hspace{-0.2cm}{\psfig{figure=fig6.ps,width=7.0cm}}} \vspace*{4.7cm} {\small FIG. 6. The same as in Fig. 5 but now for open boundaries in x-direction.
} \end{figure} In Fig.\ 6 we show the same spin structures as in Fig.\ 5 but for open boundaries in x-direction. The main difference between these two figures concerns the angle of spin rotation $\Theta$ across the domain wall, which is only $\approx 3/4$ of the expected value of $\pi$ according to Eq.\ (\ref{EDW}). This is due to the fact that the system can now acquire a state which has zero total spin current, whereas for periodic boundaries the spiral component always requires a net flow in x-direction. However, since for a regular array of stripes the charge periodicity and the spin modulation have to be related by $k^{charge}=2 k^{spin}$ \cite{ZACHAR}, the domain-wall-induced kink-type spin rotation has to be supplemented by an additional spiral field. Assuming an exponential relaxation of the spin rotation from $\Theta$ to $\pi$ by this spiral field gives an additional energy per site of $\Delta E \approx \rho_s (\pi - \Theta)^2 \lambda /L$, where $\rho_s=J S^2$ is the spin stiffness, $\lambda$ defines the length scale of relaxation, and $L$ is the stripe separation. Consequently, this additional energy can become small enough in the low doping regime so that elliptical stripe configurations are more stable than collinear solutions for small hole concentrations. \section{Conclusion} Summarizing, we have presented the structure of spin textures which we obtained by applying an unrestricted slave-boson mean-field approximation in its spin-rotation invariant form to the 2D Hubbard model. This approach is suited for calculating the charge and spin distribution of electrons (holes) in inhomogeneous and strongly correlated systems, since it incorporates correlation effects beyond the standard unrestricted HF theory. These correlation effects turn out to be especially important for the interaction among holes in the 2D Hubbard model, where the HF approximation strongly underestimates their attractive potential \cite{GSH}.
Including the effect of transversal degrees of freedom, we have shown that within our slave-boson mean-field approach two holes in the half-filled two-dimensional Hubbard model are bound by forming a vortex-antivortex pair, oriented along the diagonal direction. This texture has significantly lower energy than a conventional collinear bipolaron, even when the far-field contribution is taken into account. Additionally, we have found that skyrmion states can be stable even on discrete lattices. We attribute this to the inclusion of correlation effects within our approach, since unrestricted Hartree-Fock theory cannot account for skyrmions as self-consistent solutions (or local minima of energy). Indeed, it has also been observed by the authors of Ref. \cite{HAAS} that within the t-J model a semiclassical description cannot account for the occurrence of skyrmions, but that it is the quantum fluctuations that stabilize this texture. Considering the formation of elliptical domain walls, it turned out that these structures only appear for half-filled walls in the low doping regime. Similar to the collinear stripes, they are stabilized by a quadrupling of the period along the wall. This can be realized either by alternating on-wall spin currents (Fig.\ 5a) or by forming a vortex-antivortex structure (Fig.\ 5b). \acknowledgements We would like to thank V. Hizhnyakov for valuable discussions and a critical reading of the manuscript.
\section{Introduction} Over the past decade or so, there has been a surge of interest in the non-equilibrium dynamics of closed quantum systems following a switch of a Hamiltonian parameter. This is primarily due to a series of spectacular experiments in ultra-cold atoms, whereby the high degree of isolation permits the study of coherent dynamics over timescales typically inaccessible in conventional condensed matter physics~\cite{bloch2008,polkovnikov2011,cazalilla2011}. These experiments have raised new fundamental questions in the realm of non-equilibrium statistical mechanics, but have also revived a number of important theoretical issues such as the relationship between thermalisation and integrability~\cite{polkovnikov2011,eisert2015,borgonovi2016,gogolin2016,alessio2016} and the universality of defect generation following evolution across a critical point~\cite{dziarmaga2010}. Over the same period of time, there has also been a great deal of activity in the statistical mechanics community surrounding the development of stochastic thermodynamics~\cite{sekimoto2010} and the study of non-equilibrium fluctuation relations in both the classical~\cite{jarzynski2011,seifert2012} and quantum domain~\cite{esposito2009,campisi2011,hanggi2015}. Loosely speaking, the name of the game is to study the thermodynamics of both classical and quantum systems beyond the linear response and to describe and understand the usual thermodynamic quantities such as work, heat and entropy as stochastic variables described by probability distributions. The fluctuation relations, then, encode the full non-linear response of a system to a time-dependent change of a Hamiltonian parameter. 
One particular feature is that the formalism permits the definition of irreversible entropy production of a unitarily evolving system following a thermodynamic transformation; and, as such, it allows us to understand the emergence of thermodynamic behaviour in systems where the microscopic laws are inherently reversible~\cite{dorner2012}. These ideas have been cross-fertilized by the emergence of another community studying what has become known as quantum thermodynamics~\cite{goold2016}, aiming at understanding the relationship between quantum mechanics and thermodynamics from first principles. Given the current experimental interest in the non-equilibrium dynamics of quantum many-body systems, and the recent developments in statistical mechanics, along with the emergence of a flourishing community in quantum thermodynamics, it is natural to study the dynamics of quantum many-body systems in this far-from-equilibrium thermodynamical formulation. In fact, this endeavour was initiated a decade ago by Silva, who focused on explicit calculations of work statistics in a quantum critical many-body system~\cite{silva2008}; the universal features that can be uncovered in this way were further elucidated in a series of subsequent works~\cite{gambassi2011,gambassi2012, shchadilova2014,smacchia2012, smacchia2013,Kolodrubetz2013,palmai2015}.
In fact, throughout this decade, there has been quite a remarkable amount of activity uncovering the features of work statistics in a range of physical models, including spin chains~\cite{dorosz2008,dorner2012,mascarenhas2014,fusco2014,zhong2015work,apollaro2015,zhong2015,hoang2015, sharma2015, bayocbob2015, mazza2015,bayat2016nonequilibrium}, fermionic systems~\cite{goold2011,Heyl2012b,Heyl2012a,knap2012,plastina2013decoherence,sindona2013orthogonality,campbell2014quenching,Schiro2014,Sindona2014,schiro2014transient}, bosonic systems and Luttinger liquids~\cite{Roux2009,Dora2012,Sotiriadis2013,Dora2013, Bacsi2013,Dechiara2015,johnson2016thermometry,lena2016work,villa2018cavity}, and periodically driven quantum systems~\cite{Bunin2011,Russomanno2015,Dutta2015,Lorenzo2017}, among many others~\cite{Paraan2009,Wisniacki2013,Palmai2014,Gong2014,Deffner2015,liu2016quantum, eth,Jordi2017,cosco2017nonequilibrium,rotondo2018singularities}. Work statistics have also proved to be useful in the analysis of dynamical quantum criticality~\cite{Heyl2013,heyl2018} and more recently to shed light on the phenomenon of information scrambling~\cite{campisi2017,chenu2017}. The purpose of this brief review is to motivate and pontificate on the view that such non-equilibrium manipulations of quantum many-body systems can be seen, primarily, as thermodynamic transformations. In particular, we would like to focus our efforts on singling out what we consider to be the advantages of this line of reasoning and to highlight some interesting features of work statistics in many-body physics, which may not be apparent or appreciated across different communities. For the purposes of this contribution, we will primarily focus on the paradigm of sudden quenches. We shall begin with an overview of work statistics and associated quantities such as the irreversible entropy production.
We then move on to the issue of sudden quenches and show how the characteristic function is related to the partition function of a higher-dimensional statistical model. From here, we show how it is possible to understand universal features of quench problems through connections with the well-known concepts of fidelity, fidelity susceptibility and large deviation theory~\cite{touchette2009}. Furthermore, we highlight the connection with the historically important problem of Anderson orthogonality catastrophe~\cite{anderson1967infrared} and the closely related Fermi edge singularity~\cite{mahan1967,nozieres1969} and explain how ongoing experiments in ultra-cold atoms are, in fact, linked to this problem and in principle can and should be used as a platform in order to extract work statistics in the many-body domain. \section{Quantum Work Statistics and Thermodynamics} Consider a quantum system described by a Hamiltonian $H(\lambda)$ that depends on an external work parameter $\lambda$, i.e. an externally controlled parameter whose value determines the equilibrium configuration of the system. The system is prepared at time $t\le 0$ by allowing it to equilibrate with a heat reservoir at inverse temperature $\beta$ for a fixed value of the time-dependent work parameter $\lambda(t\leq0)=\lambda_0$. The initial state of the system is, thus, the Gibbs state $\rho_\textrm{G}(\lambda_0)$, where \begin{equation} \rho_\textrm{G}(\lambda):=\frac{\e{-\beta H(\lambda)}}{\mathcal{Z}(\lambda)}, \label{eq:tmanifold} \end{equation} and the partition function $\mathcal{Z}(\lambda):=\tr{\e{-\beta H(\lambda)}}$. At time $t=0$ the system-reservoir coupling is removed and a protocol is performed on the system taking the work parameter $\lambda(t)$ from its initial value $\lambda_0$ to a final value $\lambda_\tau$ at a later time $t=\tau$. 
The initial and final Hamiltonians connected by the protocol $\lambda_0\to\lambda_\tau$ have the spectral decompositions $H(\lambda_0)=\sum_n \epsilon_n(\lambda_0) \ket{\epsilon_n}\bra{\epsilon_n}$ and $H(\lambda_\tau)=\sum_m \epsilon'_m(\lambda_\tau) \ket{\epsilon'_m}\bra{\epsilon'_m}$, respectively, where $\ket{\epsilon_n}$ ($\ket{\epsilon'_m}$) is the $n$-th ($m$-th) eigenstate of the initial (final) Hamiltonian with eigenvalue $\epsilon_n$ ($\epsilon'_m$). Work in the quantum domain results from a process and is not an observable in the sense that one cannot ascribe a Hermitian operator to it~\cite{talkner2007}. The definition of the work done on the system as a consequence of the protocol, $W$, requires two projective measurements; the first projects onto the eigenbasis of the initial Hamiltonian $H(\lambda_{0})$ at $t=0$, with the system in thermal equilibrium, and renders a certain value $\epsilon_n$ with probability $p_n^0 = e^{-\beta \epsilon_n}/\mathcal{Z}(\lambda_0)$. The system, then, evolves under the unitary dynamics $U(\tau,0)$ generated by the protocol $\lambda_0\to\lambda_\tau$ before the second measurement projects onto the eigenbasis $\{|\epsilon'_m\rangle \}$ of the final Hamiltonian $H(\lambda_\tau)$ and yields the values $\{\epsilon'_m\}$ with probability $p_{m|n}^\tau = |\langle \epsilon'_m|U(\tau,0)|\epsilon_n\rangle|^2$. The probability of obtaining $\epsilon_n$ for the first measurement outcome followed by $\epsilon'_m$ for the second measurement is then $p_n^0p_{m|n}^\tau$ and, accordingly, the work distribution is given by \begin{equation} P(W)=\sum_{n,m\ge 0} p^0_n\; p^\tau_{m \vert n} \delta\left(W-(\epsilon_m'-\epsilon_n)\right). \label{eq:qworkdist} \end{equation} Equation~\eqref{eq:qworkdist} therefore encodes the fluctuations in the work that arise from both the thermal statistics $p_n^0$ and the quantum measurement statistics $p^\tau_{m \vert n}$ over many identical realizations of the protocol. 
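Since Eq.~\eqref{eq:qworkdist} is fully constructive, it can be checked directly on a small system. The following minimal Python sketch (illustrative only: the random Hamiltonians, the inverse temperature and the sudden-quench choice $U=\mathds{1}$ are arbitrary assumptions, not taken from the text) builds the two-point-measurement statistics and verifies normalisation together with the Jarzynski equality $\langle e^{-\beta W}\rangle=\mathcal{Z}(\lambda_\tau)/\mathcal{Z}(\lambda_0)$:

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_herm(n):
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (a + a.conj().T) / 2

beta, n = 1.0, 4
H0, Hf = rand_herm(n), rand_herm(n)     # arbitrary initial/final Hamiltonians

e0, v0 = np.linalg.eigh(H0)             # initial eigenbasis {eps_n}
ef, vf = np.linalg.eigh(Hf)             # final eigenbasis {eps'_m}

p0 = np.exp(-beta * e0); Z0 = p0.sum(); p0 /= Z0   # thermal weights p_n^0
U = np.eye(n)                           # sudden quench: no evolution between measurements
pcond = np.abs(vf.conj().T @ U @ v0) ** 2          # p^tau_{m|n} = |<eps'_m|U|eps_n>|^2

W = ef[:, None] - e0[None, :]           # work values eps'_m - eps_n
P = pcond * p0[None, :]                 # joint probabilities p_n^0 p^tau_{m|n}

assert np.isclose(P.sum(), 1.0)         # P(W) is normalised
Zf = np.exp(-beta * ef).sum()
# Jarzynski equality: <e^{-beta W}> = Z(lambda_tau)/Z(lambda_0)
assert np.isclose((P * np.exp(-beta * W)).sum(), Zf / Z0)
```

Both checks hold to machine precision for any choice of the two Hamiltonians and of the intermediate unitary, as the derivation requires.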
The first moment $\langle W\rangle$ of the distribution is the average work done and can be easily shown to be $\langle W\rangle=\tr{H(\lambda_{\tau})\rho_\tau}-\tr{H(\lambda_{0})\rho_{G}(\lambda_{0})}$ where $\rho_\tau=U(\tau,0)\rho_{G}(\lambda_{0})U^{\dagger}(\tau,0)$, i.e., nothing more than the energy change along the driven unitary process. Now compare this transformation $\lambda_0 \to \lambda_\tau$ with an ideal quasi-static isothermal one, which, unlike an adiabatic transformation, is not unitary in general, and would bring the system through a path within the manifold of equilibrium states described by Eq.~\eqref{eq:tmanifold}. The work performed in the isothermal process is given by the free energy change $\Delta F$. When this is subtracted from the actual work done $\langle W\rangle$ one obtains the so-called irreversible work, which, when multiplied by the initial inverse temperature, defines the average irreversible entropy change \begin{equation} \langle S_{irr} \rangle = \beta\langle W_{irr} \rangle := \beta(\langle W \rangle - \Delta F) = D(\rho_\tau || \rho_G(\lambda_{\tau})). \label{sirr} \end{equation} The energetic deviation $\langle W_{irr} \rangle$ is often called the dissipated work rather than the irreversible work. The name irreversible rests on the assumption that somewhere in the background there is a canonical thermal bath to which, after the driving, the system relaxes back, reaching a thermal state at the initial temperature. The last equality in Eq.~(\ref{sirr}) expresses the irreversible entropy as the quantum relative entropy between the actual final state $\rho_\tau$ and the final reference state $\rho_G (\lambda_{\tau})$, $D(\rho_\tau || \rho_G(\lambda_{\tau}))= - S(\rho_\tau) - \Tr{\rho_\tau \ln \rho_G(\lambda_{\tau})}$, with $S(\rho)$ being the von Neumann entropy $S(\rho)=-\Tr{\rho\ln \rho}$. 
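The equality between the thermodynamic and the information-theoretic expressions in Eq.~\eqref{sirr} can also be verified numerically. The sketch below is purely illustrative (random Hamiltonians and an arbitrary inverse temperature; a sudden quench is assumed, so that $\rho_\tau=\rho_G(\lambda_0)$) and compares $\beta(\langle W\rangle-\Delta F)$ with the relative entropy $D(\rho_\tau||\rho_G(\lambda_\tau))$:

```python
import numpy as np
from scipy.linalg import expm, logm

rng = np.random.default_rng(1)

def rand_herm(n):
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (a + a.conj().T) / 2

beta, n = 0.7, 5
H0, Hf = rand_herm(n), rand_herm(n)

def gibbs(H):
    rho = expm(-beta * H)
    return rho / np.trace(rho).real

rho0, rhof = gibbs(H0), gibbs(Hf)
rho_tau = rho0                          # sudden quench: U = identity

W_avg = np.trace(rho_tau @ Hf).real - np.trace(rho0 @ H0).real
dF = (-np.log(np.trace(expm(-beta * Hf)).real)
      + np.log(np.trace(expm(-beta * H0)).real)) / beta
S_irr = beta * (W_avg - dF)             # thermodynamic definition

# quantum relative entropy D(rho_tau || rho_G(lambda_tau))
D = np.trace(rho_tau @ (logm(rho_tau) - logm(rhof))).real

assert np.isclose(S_irr, D)             # the two sides of Eq. (sirr) agree
assert S_irr >= 0                       # Klein's inequality
```

The agreement holds for any intermediate unitary, since the von Neumann entropy of $\rho_\tau$ is invariant under the driving.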
$D$ is the quantum analogue of the Kullback-Leibler divergence and a very stringent measure of the distinguishability of two quantum states via a result known as quantum Stein's lemma. While not itself a metric, it still upper bounds the trace distance via Pinsker's inequality, \begin{equation} \langle S_{irr} \rangle = D[\rho_\tau|| \rho_G(\lambda_{\tau})] \geq \|\rho_{\tau} - \rho_G(\lambda_{\tau}) \|_1^2/2, \end{equation} which captures the optimal distinguishability of quantum states with a single measurement. It can also be seen as a type of generalized second law for unitary processes~\cite{deffner2010}. A non-zero $\langle S_{irr} \rangle$, thus, signals the fact that the system has been brought out of equilibrium through a thermodynamically irreversible process, and it also gives a quantification of how far from equilibrium it has gone, as it marks the difference between the actual final state $\rho_{\tau}$ and the equilibrium state $\rho_G(\lambda_{\tau})$. It is also directly connected to the fact that the work has a stochastic nature. Indeed, the existence of work fluctuations implies that the cumulants $C_n$ of the work distribution $P(W)$ are in general non-zero. In particular, the fact that $C_n\ne0$ for $n\geq 2$ means that $W$ does not have a well-defined value, as is instead the case in the macroscopic thermodynamic context. It is possible to show that the irreversible entropy is, in fact, related to these cumulants by \begin{equation} \langle S_{irr} \rangle = \sum_{n=2}^{\infty} (-1)^n \frac{\beta^n}{n !} C_n \, , \end{equation} which, in the linear response regime, under a Gaussian approximation, reduces to $\langle S_{irr} \rangle = \beta^2 \sigma^2/2$, where $\sigma^2=C_2$ is the variance of the work distribution function. Being cast in the form of a relative entropy, the strict positivity of $\langle S_{irr}\rangle$ is guaranteed via Klein's inequality. 
In fact, one could also work directly with the fluctuation theorems to demonstrate this positivity: for example, Jarzynski's equality simply states that $\langle e^{-S_{irr}}\rangle=1$, and an application of Jensen's inequality then gives $\langle S_{irr}\rangle\ge 0$. Quantum fluctuation theorems and their important physical consequences are covered in many excellent and comprehensive overviews~\cite{esposito2009,campisi2011,hanggi2015} and we direct the reader towards them for a more in-depth analysis. For more information on work statistics we recommend the original paper detailing the non-observable nature of work~\cite{talkner2007} and an excellent overview paper on many aspects of quantum work distributions~\cite{talkner2016aspects}. \section{Sudden quench from the ground state} \subsection{Quantum-Classical Correspondence and Universality} \label{sec:silva} One particular type of protocol that is very popular in ultra-cold atomic experiments is the so-called sudden quench, in which the change in the work parameter $\lambda$ is performed on a vanishingly small time scale. A very appealing thermodynamic description of such processes is given in terms of the characteristic function of work. Consider, first, the characteristic function of the work distribution as the Fourier transform of Eq.~\eqref{eq:qworkdist} for a general time-dependent driving process, \begin{align} g(u,\tau) &:= \integral{W}{}{}\e{iuW}P(W), \nonumber \\ &=\tr{U^\dag(\tau,0)\e{iuH(\lambda_\tau)}U(\tau,0)\e{-iuH(\lambda_0)}\rho_\textrm{G}(\lambda_0)}. \label{eq:loschmidt} \end{align} In the case of a sudden quench, $\lambda_{0}\rightarrow\lambda_{f}$, the expression simplifies due to $U(\tau=0,0)=\mathds{1}$ such that $g(u,\tau=0)=g(u)=\tr{\e{iuH(\lambda_f)}\e{-iuH(\lambda_0)}\rho_\textrm{G}(\lambda_0)}$. 
This expression is very similar, in particular in the case of a pure initial state, to the core-hole Green's function, a quantity typically studied in the context of X-ray Fermi edge singularities in condensed matter physics (see Sec. IV). We now assume that the system is prepared in the ground state $|\epsilon_{0}\rangle$ of $H(\lambda_{0})$, i.e., $\rho_\textrm{G}(\lambda_0) = |\epsilon_0\rangle\langle\epsilon_0|$, and a sudden quench $\lambda_0 \to \lambda_f$ is performed so that the characteristic function now takes the form \begin{align} g(u) &= e^{-i\epsilon_0 u} \langle \epsilon_0| e^{iH(\lambda_f)u}|\epsilon_0 \rangle\\ &=\sum_{m\ge 0}e^{i(\epsilon^{\prime}_{m}-\epsilon_{0})u}|\langle \epsilon^{\prime}_{m}| \epsilon_{0}\rangle|^{2}. \label{eq:lem} \end{align} If we interpret the conjugate variable $u$ as a time scale, we see that this expression represents, up to a phase, a vacuum persistence amplitude, i.e., the probability amplitude to remain in the initial ground state at time $t=u$. As such, it is obviously related to the survival probability $L(t)=|g(t)|^2$ which is often studied in quantum chaos~\cite{peres1984stability}. Most importantly, with the characteristic function written as a matrix element of the evolution operator $\exp[iH(\lambda_f)u]$, we can map, upon analytic continuation $u \rightarrow iR$, the sudden quantum quench problem onto the problem of a classical system in a film geometry of thickness $R$. This was first pointed out by Silva in Ref.~\cite{silva2008}, but here we will outline the approach fleshed out in a later work~\cite{gambassi2011}. 
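The equivalence between the matrix-element and Lehmann forms of $g(u)$ in Eq.~\eqref{eq:lem} is easy to confirm on a toy model. The sketch below (random real symmetric Hamiltonians chosen purely for illustration) evaluates both forms and checks $g(0)=1$ and $|g(u)|\le 1$, as required for a vacuum persistence amplitude:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
a = rng.normal(size=(6, 6)); H0 = (a + a.T) / 2   # arbitrary pre-quench Hamiltonian
b = rng.normal(size=(6, 6)); Hf = (b + b.T) / 2   # arbitrary post-quench Hamiltonian

e0, v0 = np.linalg.eigh(H0)
ef, vf = np.linalg.eigh(Hf)
gs = v0[:, 0]                                     # pre-quench ground state |eps_0>

def g(u):
    # matrix-element form: e^{-i eps_0 u} <eps_0| e^{i H(lambda_f) u} |eps_0>
    return np.exp(-1j * e0[0] * u) * (gs.conj() @ expm(1j * Hf * u) @ gs)

def g_spectral(u):
    # Lehmann form: sum_m e^{i(eps'_m - eps_0) u} |<eps'_m|eps_0>|^2
    ov = np.abs(vf.conj().T @ gs) ** 2
    return np.sum(ov * np.exp(1j * (ef - e0[0]) * u))

for u in (0.0, 0.3, 1.7):
    assert np.isclose(g(u), g_spectral(u))        # the two forms coincide
assert np.isclose(g(0.0), 1.0)
assert abs(g(1.3)) <= 1.0 + 1e-12                 # |g(u)| bounded by 1
```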
Let us define the difference $\Delta\epsilon_{0}=\epsilon^{\prime}_0-\epsilon_0$ between the ground state energies of the pre- and post-quench Hamiltonians and perform the analytic continuation to the imaginary axis mentioned above, obtaining \begin{align} g(R)&=e^{-\Delta\epsilon_{0} R}\times \mathcal{Z}(R),\nonumber \\ \mathcal{Z}(R)&=\langle \epsilon_{0}|e^{-[H(\lambda_{f})-\epsilon'_{0}]R}|\epsilon_{0}\rangle. \label{eq:partition} \end{align} For a $d$-dimensional quantum system possessing a $d+1$ dimensional classical correspondent, $\mathcal{Z}(R)$ can be seen as the partition function of the latter on a film of thickness $R$, with two boundary states $|\epsilon_{0}\rangle$ and a transverse ``area'' $L^d$ set by the extension of the quantum system, assumed to be characterised by the large linear size $L$. With the partition function form now in place, we can appeal to traditional statistical mechanics and take the logarithm of the expression in order to get a free energy $F(R)=-\ln(g(R))$ of the film and its corresponding density per transverse area, i.e., $f(R) = F(R)/L^d$. For large $R$ it is possible to separate this out into three contributions based on their dependence on decreasing powers of $R$, i.e., \begin{equation} f(R)= R\times f_{b}+2f_{s}+f_C(R), \label{eq:free} \end{equation} where $f_{b}=\Delta\epsilon_{0}/L^{d}$ is the bulk free energy density of the classical system and $f_{s}$ is the surface free energy per unit area associated with the two identical boundaries of the film. The remaining contribution $f_{C}(R)$ represents an effective finite-size interaction per unit area between the two confining surfaces which generically decays to zero at large separation $R$. In particular, one can write \begin{equation} \mathcal{Z}(R)=e^{-L^{d}[2f_{s}+f_C(R)]}. \end{equation} Let us now translate the information contained in the three components of the free energy density $f(R)$ into information about the statistics of the work. 
As stated above, the bulk free energy determined by $f_b$ is just related to a global phase in front of the vacuum persistence amplitude. Since $f_C(R) \rightarrow 0$ as $R\rightarrow +\infty$, the surface component $f_s$ of the free energy is, instead, connected to the large-$R$ limit of the matrix element $\langle\epsilon_{0}|e^{-[H(\lambda_f)-\epsilon^{\prime}_0]R} |\epsilon_{0} \rangle$ which defines ${\mathcal Z}(R)$ in Eq.~\eqref{eq:partition}. It is then easy to see that \begin{equation} e^{-2L^d\;f_s}=|\langle\epsilon_{0}|\epsilon^{\prime}_{0}\rangle|^2={\cal F}^2, \end{equation} where the quantity ${\cal F}$ introduced on the right-hand side is the so-called fidelity between the ground states of the post- and pre-quench Hamiltonians, a quantity which is intensively studied in quantum information and many-body physics. In particular, it is a useful tool for analysing quantum critical systems~\cite{venuti2007quantum,zanardi2007information,gu2010fidelity}. In the context of the work distribution for the sudden quench protocol at zero temperature, the squared fidelity ${\cal F}^2$ is the probability to measure the adiabatic work $W=\Delta\epsilon_0$ in the two-time measurement scheme. If the post-quench Hamiltonian $H(\lambda_f)$ of the quantum many-body system in question has a critical point at $\lambda_f=\lambda_c$, then the generalised susceptibilities $\chi_{n}(\lambda_{0},\lambda_{f})=-L^{-d}\partial^{n}_{\lambda_f}\ln{\mathcal{F}(\lambda_{0},\lambda_{f})}$ can develop a non-analytic behaviour~\cite{venuti2007quantum}. In particular, it is known that the fidelity susceptibility $\chi_{2}$ scales as \begin{equation} \label{eq:fidelitys} \chi_{2}\propto|\lambda_f-\lambda_{c}|^{\nu d-2} \, , \end{equation} where $\nu$ is the correlation length critical exponent. 
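The statement that ${\mathcal Z}(R)$ saturates at ${\cal F}^2$ for large $R$ can be illustrated directly. In the toy sketch below (all matrices are arbitrary; the post-quench spectrum is fixed by hand to guarantee a spectral gap), ${\mathcal Z}(R)=\langle\epsilon_0|e^{-[H(\lambda_f)-\epsilon'_0]R}|\epsilon_0\rangle$ decays monotonically towards $|\langle\epsilon_0|\epsilon'_0\rangle|^2$:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(3)
a = rng.normal(size=(5, 5)); H0 = (a + a.T) / 2
# post-quench Hamiltonian with spectrum {0,1,2,3,4} fixed by hand (guaranteed gap)
Q, _ = np.linalg.qr(rng.normal(size=(5, 5)))
Hf = Q @ np.diag(np.arange(5.0)) @ Q.T

e0, v0 = np.linalg.eigh(H0)
ef, vf = np.linalg.eigh(Hf)
gs0, gsf = v0[:, 0], vf[:, 0]            # pre- and post-quench ground states

def Z(R):
    # Z(R) = <eps_0| e^{-[H(lambda_f) - eps'_0] R} |eps_0>
    return gs0 @ expm(-(Hf - ef[0] * np.eye(5)) * R) @ gs0

fidelity_sq = (gsf @ gs0) ** 2           # F^2 = |<eps_0|eps'_0>|^2
assert np.isclose(Z(40.0), fidelity_sq, atol=1e-10)
assert Z(40.0) <= Z(4.0) <= Z(0.0)       # monotone decay toward F^2, Z(0) = 1
```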
The generalised susceptibilities $\chi_n$ have a straightforward interpretation in terms of the analogy with boundary statistical mechanics, which is particularly suggestive when the quenched parameter $\lambda$ of $H(\lambda)$ can be interpreted as a ``temperature'' in the $d+1$ classical correspondent of the quantum model. This is the case, for example, for the transverse field in the quantum Ising chain~\cite{sachdev}. Since in general the generalised susceptibilities can be written as derivatives of the surface free energy, \begin{equation} \label{eq:fidelitys2} \chi_{n}(\lambda_0,\lambda_f)=\partial^{n}_{\lambda_{f}}f_{s}(\lambda_0,\lambda_{f}), \end{equation} it is evident that, when $\lambda$ is a temperature-like variable, they have a clear physical interpretation, $\chi_{1}$ being the excess internal energy and $\chi_2$ the excess specific heat of the corresponding $d+1$ classical confined system. The excess specific heat is well known to scale as $\chi_2\propto |\lambda_f -\lambda_c|^{-\alpha_{s}}$ close to the critical point, where the exponent $\alpha_s$ is related to the bulk critical exponents of the correlation length $\nu$ and of the specific heat $\alpha$ of the classical $d+1$ dimensional system by $\alpha_{s}=\alpha+\nu$, with the hyperscaling relation $\alpha+\nu(d+1)=2$ \cite{D-86,D-861}. Relating quantum quenches to the statistical physics of classical confined systems provides useful information not only on the singularities of the generalised susceptibilities, but also on the emergent universal features of the probability distribution function of the work when, as above, the post-quench Hamiltonian is close to a critical point. Before proceeding, we note that for a quench starting from the ground state $|\epsilon_0\rangle$ of the pre-quench Hamiltonian, the quantity $\Delta\epsilon_0$ represents the \emph{reversible} work one would do on the system by performing the change $\lambda_0\to\lambda_f$ adiabatically. 
This also sets the minimal possible value of the stochastic variable $W$ and, accordingly, the irreversible work $W_{irr} = W - \Delta\epsilon_0$ can take only positive values. This means that its probability distribution $P(W_{irr})$ displays a lower edge and, based on the correspondence highlighted above, we can finally demonstrate that \begin{equation} \label{genericpw} \begin{split} P(W_{irr})\simeq {\cal F}^2 \, [&\delta(W_{irr}) \\ &+{\cal C}\,(W_{irr}-q m)^{1-a}u_s(W_{irr}-q m) + \dots], \end{split} \end{equation} where $u_s(\cdots)$ is the Heaviside unit step and the parameters appearing in this expression are the fidelity ${\cal F}$ (defined above), the mass $m$ of the lightest quasi-particle in the model, and three \emph{universal} constants ${\cal C}$, $q$, and $a$. This universality stems ultimately from the fact that the leading part of the finite-size free energy $f_C(R)$ of the $d+1$-dimensional classical system in a finite-size geometry \cite{FSS,FSS1} acquires a universal character whenever it --- and thus the post-quench Hamiltonian $H(\lambda_f)$ of the $d$-dimensional quantum system --- is close to a bulk \emph{critical point}. In particular, this so-called critical Casimir effect \cite{G-09} is characterised by the scaling behaviour \begin{equation} f_{C}(R) = R^{-d} \Theta_{\mathcal B}(\pm R/\xi_+), \label{eq:CCF} \end{equation} where $+$ and $-$ refer to the disordered and ordered phases, respectively, and $\xi_+ \propto |\lambda_f-\lambda_c|^{-\nu} $ is the exponential correlation length associated with the critical fluctuations of the relevant order parameter of the transition, which grows upon approaching it. Both $\xi_+$ and $R$ are assumed to be much larger than any microscopic length scale of the system such as, e.g., a possible lattice spacing. 
The scaling function $\Theta_{\mathcal B}$ is characterised by a certain degree of \emph{universality} \cite{G-09,G-10}, as are many other properties close to critical points, which become independent of the microscopic features of the system. In particular, $\Theta_{\mathcal B}$ depends only on the universality class of the classical critical point and, because of the presence of the boundaries set by the quantum state $|\epsilon_0\rangle$, it depends also on their \emph{surface universality class} \cite{D-86,D-861} or, equivalently, on which of the few effective boundary states ${\mathcal B} \in \{|\phi^*_i\rangle\}_i$ the initial/boundary state $|\epsilon_0\rangle$ flows to as the critical point is approached. Accordingly, $\Theta_{\mathcal B}$ is also largely independent of the specific values assumed by $\lambda_0$ and $\lambda_f\simeq \lambda_c$. In view of its numerous applications to the physics of soft matter, the critical Casimir effect has been extensively studied both theoretically and experimentally in the past few years (see, e.g., Ref.~\cite{G-09}) and many of its features are known, including the scaling function $\Theta_{\mathcal B}$ for a variety of bulk and surface universality classes. While the large-$R$ decay of $f_C$ is $\propto R^{-d}$ at criticality ($\xi_+ = \infty$), away from the critical point it is dictated by the asymptotic expansion of $\Theta_{\mathcal B}(x)$ for $|x|\gg 1$. Generically, it takes the form \begin{equation} \label{asympt} \Theta_{\mathcal B}(x\rightarrow \pm \infty) = {\cal C}_\pm |x|^{a_\pm}\; e^{-q_\pm |x|}+ \dots, \end{equation} where ${\cal C}_\pm$, $a_\pm$ and $q_\pm>0$ are universal constants (dependent only on ${\mathcal B}$), which take different values within the ordered ($-$) and disordered $(+)$ phases. 
For the quantum (classical) Ising model in $d=1$ ($d=2$) one has $q_+=a_+=1$, while $q_-=2$ and $a_-=-1/2$, corresponding to the two possible instances of quenches, i.e., within the same phase or across the quantum phase transition. These constants directly enter Eq.~\eqref{genericpw}, while the mass $m$ of the lightest quasi-particle of the quantum model in the paramagnetic phase has to be identified with the inverse $\xi_+^{-1}$ of the correlation length of the classical system. Performing the analytic continuation $R\mapsto -iu$, the behaviour of $P(W)$ for $W$ close to the threshold is determined by the asymptotic behaviour of the characteristic function $g(u)$ for large $u$. Therefore, performing a large-$u$ expansion of $g(u)$ and taking its Fourier transform, we readily obtain that $P(W)$ takes the form in Eq.~\eqref{genericpw}. \subsection{Large Deviations} Work is an extensive quantity in thermodynamics. Standard statistical mechanics textbooks tell us that the mean of a generic extensive quantity such as the average work $\langle W\rangle$ done on a system will grow proportionally to its number $N$ of degrees of freedom, where $N=L^d$ for a system of typical size $L$ in $d$ spatial dimensions. Accordingly, it is natural to define an associated \emph{intensive} quantity $w$ by dividing $W$ by $N$ and, assuming weak correlations among the $N$ degrees of freedom, the central limit theorem tells us that the distribution of $w$ will be generically Gaussian for large $N$, with fluctuations $\Delta w =\sqrt{ \langle (w-\bar w)^2\rangle}$ around the average value $\bar w \equiv \langle w \rangle$ suppressed as $1/\sqrt{N}$. This means that, generically, $w-\bar w \sim N^{-1/2}$ and that the distribution of $w$ concentrates around its average and most probable value $\bar w$. However, rare as they may be, large deviations of $w$ with $w-\bar w \sim 1$ are well known to be able to probe specific features of statistical systems~\cite{touchette2009}. 
As shown in Ref.~\cite{gambassi2012}, the work statistics of a sudden quantum quench can be cast in the framework of large deviation theory and may display some \emph{universal} features when the post-quench Hamiltonian is close to an equilibrium critical point. Consider now, as the stochastic variable, the intensive work $w=W/N$ as opposed to the extensive one $W$. As shown in Ref.~\cite{gambassi2012}, the probability $P(w)$ of a large fluctuation is expected to be exponentially small in the size $N$, i.e., \begin{equation} P(w)\propto\exp[-NI(w)], \label{eq:largedev} \end{equation} where $I(w)$ is the non-negative rate function which characterises the large deviations and which vanishes for $w=\bar w$. The quadratic approximation of $I(w)$ around $w=\bar w$ describes the typical Gaussian fluctuations $w-\bar w \sim N^{-1/2}$ expected on the basis of the central limit theorem. Let us consider the case of a quench from the ground state $|\epsilon_0\rangle$ of the pre-quench Hamiltonian: as discussed in the previous section, the irreversible work $W_{irr} = W - \Delta\epsilon_0$ can take only positive values. For convenience, we focus on the large deviations of the irreversible intensive work $W_{irr}/N$, which is denoted below, for simplicity, by $w$, with $P(w< 0)=0$ and therefore $I(w<0) = +\infty$. In order to compute the $I(w)$ appearing in Eq.~\eqref{eq:largedev} it is convenient first to focus on the moment generating function of $W_{irr}$, i.e., on $\langle e^{- R W_{irr}}\rangle$, which is related to $g(R)$ in Eq.~\eqref{eq:partition}. 
In particular, because of the shift in the definition of $W_{irr}$ compared to $W$, the contribution $N R \times f_b$ corresponding to the bulk free energy on the r.h.s.~of that equation is cancelled, and only the so-called \emph{excess free energy} $N f_{ex}(R)$, with density $f_{ex}(R)=f(R)-Rf_b$, contributes to the moment generating function \cite{gambassi2012}: \begin{equation} \langle e^{- R W_{irr}}\rangle = \exp[-N f_{ex}(R)]. \label{eq:moment} \end{equation} Note that the generating function on the l.h.s.~is certainly defined for all possible non-negative values of $R \in {\mathbb R}^+$ and it is actually related to the excess free energy density of the corresponding classical system in a film geometry $L^d\times R$, as discussed in Sec.~\ref{sec:silva}, only in this case. Depending on the behaviour of $P(W_{irr})$ for large $W_{irr}$, however, the domain ${\mathcal D}$ within which the generating function is defined can also include negative values of $R$ and therefore the equality in Eq.~\eqref{eq:moment} is understood after an analytic continuation of $f_{ex}(R)$ on the r.h.s.~towards $R<0$. With the generating function in the form \eqref{eq:moment} we can apply the formalism of large deviation theory, which gives us the prescription to evaluate the rate function through the Legendre-Fenchel transformation of $f_{ex}(R)$, \begin{equation} I(w)= - \inf_{R \in {\mathcal D}}[R\,w-f_{ex}(R)], \label{eq:LF} \end{equation} i.e., via a saddle-point evaluation of $P(w)$ in Eq.~\eqref{eq:largedev} as the inverse Laplace transform of Eq.~\eqref{eq:moment} for $N\to\infty$. From the r.h.s.~of Eq.~\eqref{eq:moment} one sees that $f_{ex}(0)=0$, $f'_{ex}(0)=\bar w$, and that $f_{ex}(R)$ is a concave function of $R$, since the l.h.s., being a moment generating function, is log-convex in $R$. In addition, $f_{ex}$ approaches $2f_{s}$ as $R\rightarrow\infty$. We can infer some properties of the rate function $I(w)$ on the basis of these qualitative features of $f_{ex}$. 
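The Legendre-Fenchel prescription of Eq.~\eqref{eq:LF} is easy to test on an exactly solvable toy case (not one of the quench models above): if $W_{irr}$ were a sum of $N$ i.i.d.\ unit-mean exponential variables, then $\langle e^{-RW_{irr}}\rangle=(1+R)^{-N}$, i.e., $f_{ex}(R)=\ln(1+R)$, and the rate function is known in closed form, $I(w)=w-1-\ln w$. A grid-based evaluation of Eq.~\eqref{eq:LF} (with the domain extended to $R>-1$, mirroring the analytic continuation towards $R<0$ discussed above) recovers it:

```python
import numpy as np

# toy excess free energy density: W_irr a sum of N iid Exp(1) variables,
# for which <e^{-R W_irr}> = (1+R)^{-N}, i.e. f_ex(R) = ln(1+R), defined for R > -1
R = np.linspace(-0.99, 200.0, 400001)
f_ex = np.log1p(R)

def rate(w):
    # I(w) = -inf_R [R w - f_ex(R)]  (Eq. eq:LF), evaluated on the grid
    return -np.min(R * w - f_ex)

for w in (0.3, 0.7, 1.5):
    exact = w - 1.0 - np.log(w)          # closed-form rate function of the toy model
    assert np.isclose(rate(w), exact, atol=1e-3)
```

Note that for $w$ above the mean ($w>1$) the infimum is attained at negative $R$, which is precisely why the analytically continued domain ${\mathcal D}$ matters.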
For $w<0$, for example, the infimum on the r.h.s.~of Eq.~\eqref{eq:LF} is attained for $R\to\infty$ and therefore $I(w<0) = +\infty$, as expected because $W_{irr} \ge 0$. Similarly, the behaviour of $I(w)$ close to the threshold $w\to 0^+$ is determined by that of $f_{ex}(R\to\infty)$ and, in particular, $I(0) = 2 f_s$. The way this value is approached depends on the finite-size contribution $f_{C}(R)$ in Eq.~\eqref{eq:free}. As discussed in Sec.~\ref{sec:silva}, this contribution acquires a universal character whenever the corresponding $d+1$-dimensional classical system --- and therefore the post-quench Hamiltonian $H(\lambda_f)$ of the $d$-dimensional quantum system --- is close to a bulk critical point. In this case, $f_C(R)$ takes the universal scaling form in Eq.~\eqref{eq:CCF}, characterised by the scaling function $\Theta_{\mathcal B}$, which is known in a variety of cases (see, e.g., Ref.~\cite{G-09}). Based on this knowledge of $\Theta_{\mathcal B}$, the rate function $I(w)$ can be readily calculated via Eq.~\eqref{eq:LF}, finding \begin{equation} I(w) = 2 f_s + \xi^{-d} \vartheta(w\xi^{d+1}), \end{equation} where $\vartheta(y)$ is the Legendre-Fenchel transform (see Eq.~\eqref{eq:LF}) of $x^{-d}\Theta_{\mathcal B}(x)$ and is as universal as $\Theta_{\mathcal B}$. Accordingly, not only are the edge singularities of the extensive work $W$ discussed in Sec.~\ref{sec:silva} above determined by universal features of the system and of the quench, but the large deviations of the intensive variable associated with the irreversible work $W_{irr}$ also display universal properties in their rate function $I(w)$ close to the threshold $w=0$. For larger values of $w$, the rate function $I(w)$ depends on the excess free energy $f_{ex}(R)$ of films with increasingly smaller thickness $R$, which even becomes negative for $w\ge \bar w$. In this case, the correspondence with the physics of classical systems in film geometry breaks down and universality is generically lost. 
However, concrete examples show that interesting phenomena analogous to Bose-Einstein condensation \cite{gambassi2012} as well as non-analyticities of the rate function $I(w)$ \cite{rotondo2018singularities} similar to non-equilibrium phase transitions may occur. \subsection{Thermal Quenches} The previous two sub-sections, strictly speaking, describe only sudden quenches starting from the ground state, which are characterised by the fact that the work $W$ cannot be smaller than the value $\Delta \epsilon_0$. Accordingly, the probability distribution $P(W)$ of $W$ features an edge which acquires the universal features discussed above. Here we will demonstrate that interesting information can still be obtained from both the average work and the irreversible entropy production when an initial thermal state $\rho_{G}(\lambda_{0})$ is assumed and the edge is absent. The average work done $\langle W\rangle=\int P(W)W dW$ by a sudden quench $\lambda_0\to \lambda_f$ of a parameter $\lambda$ which couples linearly in $H(\lambda)$, starting from a thermal state, can be written in the following form~\cite{Sotiriadis2013,mascarenhas2014} \begin{align} \langle W\rangle &=\frac{\Tr{e^{-\beta H(\lambda_{0})}[H(\lambda_f)-H(\lambda_0)] }}{\Tr{e^{-\beta H(\lambda_{0})}}} \nonumber \\ &=(\lambda_{f}-\lambda_{0}) F'_{\beta}(\lambda_{0}), \label{eq:firstd} \end{align} which follows from the fact that $H(\lambda_f) - H(\lambda_0) = (\lambda_f-\lambda_0)\partial{H(\lambda_{0})}/\partial{\lambda_{0}}$, and equals the derivative of the equilibrium free energy with respect to the parameter $\lambda_{0}$ times the quench amplitude. From this, we can see that, for sudden quenches, the expression for the irreversible work takes the interesting form $\langle W_{irr}\rangle=\langle W\rangle-\Delta F=(\lambda_{f}-\lambda_{0}) F'_{\beta}(\lambda_{0})- F_{\beta}(\lambda_{f})+F_{\beta}(\lambda_{0})$. 
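Equation~\eqref{eq:firstd} can be checked numerically for a linearly coupled model $H(\lambda)=H_0+\lambda V$ (the matrices, temperature and quench amplitude below are arbitrary illustrative choices). The direct trace expression for $\langle W\rangle$ is compared with the quench amplitude times a finite-difference estimate of $F'_\beta(\lambda_0)$:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(4)
a = rng.normal(size=(5, 5)); H0 = (a + a.T) / 2
b = rng.normal(size=(5, 5)); V = (b + b.T) / 2   # lambda couples linearly to V

beta, lam0, lamf = 0.9, 0.2, 0.5
H = lambda lam: H0 + lam * V

def free_energy(lam):
    # F_beta(lambda) = -(1/beta) ln Tr e^{-beta H(lambda)}
    return -np.log(np.trace(expm(-beta * H(lam))).real) / beta

rho0 = expm(-beta * H(lam0)); rho0 /= np.trace(rho0).real
W_direct = np.trace(rho0 @ (H(lamf) - H(lam0))).real   # trace form of <W>

h = 1e-5                                 # central finite difference for F'_beta
Fp = (free_energy(lam0 + h) - free_energy(lam0 - h)) / (2 * h)
W_from_F = (lamf - lam0) * Fp            # (lambda_f - lambda_0) F'_beta(lambda_0)

assert np.isclose(W_direct, W_from_F, atol=1e-6)
```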
Let us restrict ourselves to small quenches, such that $\delta\lambda=\lambda_f-\lambda_0\ll 1$; in this case, the irreversible entropy production defined by Eq.~\eqref{sirr} can be seen to be proportional to the second derivative of the equilibrium free energy, \begin{equation} \langle S_{irr}\rangle=-(\delta\lambda)^{2}\beta F''_{\beta}(\lambda_{0})/2+\mathcal{O}(\beta(\delta \lambda)^{3}). \end{equation} Generically, possible non-analytic behaviour in $F''_{\beta}$ is characterised by the critical exponent $\alpha$ of the specific heat $C\propto-F_{\beta}''(\lambda_{0})\propto|\lambda_{0}-\lambda_{c}|^{-\alpha}$. At finite temperature there is no quantum phase transition but, as shown in \cite{dorner2012} for quenches of the Ising model, the irreversible entropy production starts to diverge as the temperature decreases, thus signalling the proximity to a quantum critical point. At a first-order quantum phase transition, the average work instead becomes discontinuous. A comparison between first- and second-order phase transitions has been performed in \cite{mascarenhas2014}. We note that we have been explicitly considering sudden quench problems in the preceding sections; from the many-body perspective very little work has been done on more generic time-dependent processes~\cite{smacchia2013} and it would be interesting to extend studies in this direction. In addition, there has recently been some interesting work in the direction of finite-time charging of quantum batteries with connections to quantum correlations~\cite{alicki,binder1,barcelona,binder2,pisa}; given that these studies are finite-time manipulations of interacting quantum systems, it would be interesting to explore connections with the prescription outlined here. 
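The small-quench expansion above can also be tested directly: for an arbitrary linearly coupled toy model (all parameters below are illustrative assumptions), the exact $\langle S_{irr}\rangle=\beta(\langle W\rangle-\Delta F)$ for a small $\delta\lambda$ should agree with the leading term $-(\delta\lambda)^2\beta F''_\beta(\lambda_0)/2$ up to corrections of relative order $\delta\lambda$:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(5)
a = rng.normal(size=(5, 5)); H0 = (a + a.T) / 2
b = rng.normal(size=(5, 5)); V = (b + b.T) / 2

beta, lam0, dlam = 1.2, 0.4, 1e-2        # small quench amplitude dlam
H = lambda lam: H0 + lam * V

def free_energy(lam):
    return -np.log(np.trace(expm(-beta * H(lam))).real) / beta

# exact irreversible entropy for the sudden quench lam0 -> lam0 + dlam
rho0 = expm(-beta * H(lam0)); rho0 /= np.trace(rho0).real
W = np.trace(rho0 @ (H(lam0 + dlam) - H(lam0))).real
S_irr = beta * (W - (free_energy(lam0 + dlam) - free_energy(lam0)))

# leading-order estimate from the second derivative of the free energy
h = 1e-4
Fpp = (free_energy(lam0 + h) - 2 * free_energy(lam0) + free_energy(lam0 - h)) / h**2
S_lead = -dlam**2 * beta * Fpp / 2

assert np.isclose(S_irr, S_lead, rtol=0.1)   # agree up to O(dlam) relative corrections
```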
\section{Anderson Orthogonality Catastrophe and current experiments} \subsection{Orthogonality Catastrophe} In this section, we would like to discuss how the concept of quantum work statistics in sudden quenches is actually connected to the historically important problem of the orthogonality catastrophe (OC) in condensed matter physics. This phenomenon was discovered by Phil Anderson in 1967~\cite{anderson67}. Anderson was studying the seemingly innocuous problem of a single scatterer in the presence of a non-interacting Fermi gas. He considered the ground states of $N$ Fermions in a spherical box of radius $R$ in the presence and in the absence of a single local scattering potential with only an s-wave ($l=0$) contribution. In the presence of a scattering potential, $V$, the single-particle states in the box acquire a phase shift $\delta(E)$. Since the fermions are non-interacting, the ground states are expressed as Slater determinants of the single-particle eigenstates. Let the ground state of the unperturbed system be $\Psi_{i}(x_{1},x_{2},\dots x_{N})$ and the ground state of the perturbed system be $\Psi_{f}(x_{1},x_{2},\dots x_{N})$; Anderson proved that the overlap (fidelity) of these two states scales as \begin{align} F &=\int dx_{1}dx_{2}\dots dx_{N}\Psi^*_{f}(x_{1},x_{2},\dots x_{N})\Psi_{i}(x_{1},x_{2},\dots x_{N}) \nonumber \\ &\propto N^{-\alpha}, \label{eq:oc} \end{align} where $\alpha=\delta^{2}/\pi^2$. This implies that the ground states become orthogonal as the system size increases, with a power law that depends universally on the phase shift $\delta$. This phenomenon is precisely the OC. Innocuous as this result may first seem, it actually has several deep implications for the physics of Fermi gases. For example, there is an immediate consequence for the situation where a local impurity is made to interact with the gas. 
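The shrinking of the many-body overlap with system size is straightforward to observe numerically. The sketch below is a hedged illustration, not Anderson's original spherical-box setup: we use a tight-binding chain with a single on-site scatterer at half filling (an arbitrary lattice stand-in), and compute the $N$-particle Slater-determinant overlap as the determinant of the matrix of occupied single-particle overlaps:

```python
import numpy as np

def overlap(L, V0=2.0):
    # 1D tight-binding chain with open boundaries; scatterer = on-site potential V0
    h0 = -(np.eye(L, k=1) + np.eye(L, k=-1))
    h1 = h0.copy(); h1[L // 2, L // 2] += V0
    _, u0 = np.linalg.eigh(h0)
    _, u1 = np.linalg.eigh(h1)
    N = L // 2                                   # half filling
    # overlap of the two N-particle Slater determinants:
    # det of the N x N matrix of occupied-orbital overlaps
    return abs(np.linalg.det(u1[:, :N].conj().T @ u0[:, :N]))

f = [overlap(L) for L in (40, 80, 160, 320)]
assert 0.0 < f[-1] < f[0] < 1.0                  # overlap shrinks with system size
```

Plotting $\ln F$ against $\ln N$ for such data exhibits the power-law decay of Eq.~\eqref{eq:oc}, with an exponent set by the phase shift of the scatterer.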
Generically, the interaction between an impurity and a Fermi gas will lead to a dressing of the impurity by the excitations of the gas. In particular, when the mass of the impurity is finite, one may talk about the formation of a well defined quasi-particle: the Fermi polaron. In condensed matter physics, this effect is quantified by the quasi-particle residue, which, mathematically speaking, is equivalent to the fidelity between the perturbed and unperturbed ground states. In the limit of a heavy particle, instead, the problem may be framed in the infinite mass approximation and the dressed particle loses its quasi-particle description as the fidelity goes to zero due to the manifestation of the OC. At this point one might ask: what is the connection between work statistics and the previously recalled formalism? Well, going beyond ground-state physics, perhaps the most dramatic consequence of the OC is in non-equilibrium dynamics. Historically, this consequence was discovered, very soon after the original realisation by Anderson, by Nozieres and de~Dominicis~\cite{nozieres1969}, who were considering the many-electron response to the sudden switching on of a core hole in a metal. Physically, this occurs after an X-ray photon has been absorbed, creating a deep hole via the promotion (emission) of a core electron in a metal. Nozieres and de~Dominicis considered the core-hole Green's function, which is defined as \begin{equation} \mathcal{G}(t)=-ie^{-i\omega_{T}t}u_s(t)\langle e^{itH_{i}}e^{-it(H_{i}+V)}\rangle, \end{equation} where $\omega_{T}$ is the threshold frequency for the creation of a core hole in the valence band, $u_s(t)$ is the Heaviside step function and $H_{i}$ and $H_{i}+V$ are the unperturbed and perturbed Hamiltonians, respectively. We note that for the purposes of illustration we are assuming an initial thermal state but the calculation can be performed directly in the zero temperature limit~\cite{mahan2010}. 
The key quantity of this problem is the vacuum persistence amplitude, \begin{equation} \nu_{\beta}(t>0)=\left\langle e^{\frac{i}{\hbar}\hat{H}_{i}t}e^{-\frac{i}{\hbar}\left( \hat{H}_{i}+\hat{V}\right) t}\right\rangle. \label{Eq:VacAmp} \end{equation} This gives the response of the gas to the switching on of the localized impurity potential and, as pointed out previously, it coincides with the complex conjugate of the characteristic function of work defined by Eq.~\eqref{eq:lem}, thus giving an immediate connection to the work statistics formalism. We will return to this shortly. In the interaction picture, the expression for the vacuum persistence amplitude (or, if you like, the complex conjugate of the characteristic function of work) reads: \begin{equation} \nu_{\beta}(t)=\left\langle Te^{-\frac{i}{\hbar}\int_{0}^{t}dt^{\prime}\tilde{V}(t^{\prime})}\right\rangle \text{,\quad }\tilde{V}(t)=e^{\frac{i}{\hbar}\hat{H}_{i}t}\hat{V}e^{-\frac{i}{\hbar}\hat{H}_{i}t}, \label{eq:Vbeta} \end{equation} which, by virtue of the linked cluster theorem, reduces to an exponential of a sum of connected Feynman diagrams, $\nu_{\beta}(t)=\exp\{\sum_{n}\Lambda_{n}(t)\}$, containing an increasing number of interaction vertices:\\ \scalebox{0.95}{\includegraphics{graphs2.pdf}}\\ If the reader is interested in the precise details of the calculation of the linked cluster expansion, we recommend the textbook by Mahan~\cite{mahan2010}. It turns out that the first term in the sum is nothing but a first-order shift of the Fermi gas energy, which contributes only a phase factor to $\nu_{\beta}(t)$. The second term, instead, gives the dominant (singular) contribution, which is a direct manifestation of the OC. In the $\beta\rightarrow\infty$ limit, an explicit calculation gives \begin{equation} \Lambda_2^{\beta\rightarrow\infty}(t)=-g \, \ln(it/\tau_{0}+1), \label{historicMND} \end{equation} where $g$ is a rescaled impurity interaction strength parameter, proportional to the exponent $\alpha$ in the Anderson overlap, and $\tau_{0}$ is a short-time cutoff.
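For non-interacting fermions the vacuum persistence amplitude can also be evaluated exactly, with no diagrammatics, because $e^{-i(H_i+V)t}$ maps a Slater determinant onto another Slater determinant. The following sketch (a toy lattice model of our own choosing, not the original continuum problem) computes $|\nu(t)|$ as an $N\times N$ determinant and exhibits its slow decay, the time-domain face of the OC.

```python
import numpy as np

# Toy model: open tight-binding chain; the scatterer V0 on site 0 is
# suddenly switched on at t = 0 (a lattice stand-in for the s-wave problem).
L, V0 = 200, 2.0
h0 = -(np.eye(L, k=1) + np.eye(L, k=-1))
hf = h0.copy()
hf[0, 0] += V0
n = L // 2                                   # half filling
_, c = np.linalg.eigh(h0)
occ = c[:, :n]                               # occupied orbitals of |Psi_i>

ef, cf = np.linalg.eigh(hf)

def nu_abs(t):
    """|nu(t)| = |<Psi_i| e^{i H_i t} e^{-i H_f t} |Psi_i>|.
    The first factor is a pure phase on the eigenstate |Psi_i>, and
    e^{-i h_f t} acts orbital by orbital, so the many-body overlap is
    again an N x N determinant of single-particle overlaps."""
    u = (cf * np.exp(-1j * ef * t)) @ cf.conj().T    # e^{-i h_f t}
    return abs(np.linalg.det(occ.conj().T @ u @ occ))

for t in (0.0, 2.0, 8.0, 32.0):
    print(t, nu_abs(t))
```

At times short compared with the finite-size revival scale, the printed values follow the slow, power-law-like decay implied by Eq.~\eqref{historicMND}.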
The mathematical consequence of this result is a power-law decay of $\nu(t)$. However, the real physical implication becomes apparent if we consider the absorption spectrum, which is precisely what is measured in X-ray scattering experiments and is defined as \begin{equation} A(\omega)=2 Re \int^{+\infty}_{0}\!\!dt\, e^{i\omega_{T}t}\nu(t). \end{equation} The power-law decay of $\nu(t)$ leads to a power-law threshold singularity in the absorption spectrum. This iconic non-equilibrium effect is known as the Fermi edge singularity in condensed matter physics. The physics is crucially encapsulated by the Anderson overlap and it is illuminating now to rather interpret the phenomenon from a thermodynamic perspective. Since $\nu_{\beta\rightarrow\infty}$ is nothing more than the complex conjugate of the characteristic function, then the absorption spectrum can be interpreted as the probability to do work on the system as they are mathematically equivalent. Writing the absorption spectrum in the Lehmann representation, indeed, we have \begin{equation} A(\omega)\propto \sum_{m}|\langle \epsilon'_{m}|\epsilon_{0}\rangle|^{2}\delta(W-\epsilon'_{m}+\epsilon_{0}). \end{equation} The thermodynamic implication of these considerations should be immediately obvious. Interpreting the creation of a core hole as thermodynamic work on a metal, and interpreting the absorption spectrum as the work distribution $P(W)$, we see that due to the OC there is no possibility that this process can be adiabatic. The probability to do adiabatic work $W=\epsilon'_{0}-\epsilon_{0}$ goes to zero as a power-law due to the OC between $|\epsilon_{0}\rangle$ and the ground state of the system. 
In \cite{sindona2015statistics} the authors have undertaken a very careful examination of the moments of the work statistics and found that it is in fact the third moment of the work distribution, which quantifies the skewness or asymmetry of the process, that scales with the universal exponent $g$ of the edge singularity. In summary, we would like to stress that, although the motivation is very different, from the operational perspective the mathematics of the Fermi-edge singularity problem is, at its core, identical to the formal framework needed to describe the quantum work statistics of a sudden {\it local} quench of a Fermi gas. This is, in fact, quite important and not just a mere curiosity, as it paves the way for a detailed experimental study of quantum work statistics in ultra-cold atom setups, where orthogonality catastrophe and Fermi-edge singularity physics are currently being probed. This connection is not fully appreciated in either the ultra-cold atom or the thermodynamics community. Indeed, it has been argued that trapped ultra-cold fermionic atoms could constitute an almost ideal set-up to investigate this physics in a controlled fashion \cite{goold2011,knap2012,schmidt18}, and a detailed treatment of the Fermi-edge singularity problem for a harmonically trapped gas has recently been performed \cite{sindona2013orthogonality,Sindona2014}, generalising the result in Eq.~\eqref{historicMND}. \subsection{Experiments with Ultra-cold Fermions} The experimental extraction of quantum work statistics requires not only the preparation of a well-defined initial state and the implementation of controlled unitary operations, but also, apparently, the stringent necessity of performing two non-destructive projective measurements in the eigenbasis of the system. This is in contrast to classical work statistics and rendered the experimental acquisition of quantum work statistics elusive until relatively recently.
The first proposal was for a clever phonon shelving technique for direct extraction of work statistics in an ion trap~\cite{huber2008}. As it turned out, two papers appearing at the same time proposed the use of Ramsey interferometry on an ancillary qubit in order to extract the characteristic function of work~\cite{Dorner2013,Mazzola2013} (see also \cite{Campisi2013}), thus avoiding the difficult projective measurements. This led to the first experimental extraction of work statistics and experimental verification of the fluctuation theorems in a quantum system in a liquid-state NMR setup~\cite{Batalhao2014}. An experimental work confirming the first proposal for direct measurement in the energy domain appeared soon after in an ion trap setup~\cite{an2015experimental}. More recently, following the realization that a work distribution can in principle be reformulated as a positive operator-valued measure (POVM)~\cite{povm,Dechiara2015}, an experiment has been performed to reconstruct the distribution $P(W)$ with rubidium atoms in an atom chip~\cite{cerisola2017using}. More directly related to our discussion of the relation between the work distribution and the OC, two experiments have been performed with cold Li atoms, forming a Fermi sea to which an impurity K atom is coupled \cite{cetina15,cetina16}. After preparing the impurity in a superposition of its (two lowest) internal energy states, a Feshbach resonance has been used to tune the coupling with the gas and to switch it on only in the excited state. The theoretical blueprint for this idea was first outlined in \cite{goold2011} and \cite{knap2012}. Using Ramsey interferometry of the impurity internal state and monitoring the decoherence dynamics, the vacuum persistence amplitude has been experimentally measured, both in amplitude and phase, together with the absorption spectrum, for both repulsive and attractive impurity-gas interactions.
Given the direct connection between work statistics and the OC and associated Fermi-edge singularity problem, we would like to strongly emphasise that these experimental platforms of dilute Fermi mixtures are ideal playgrounds for the controlled exploration of universal features in the quantum thermodynamics of many-body systems. \begin{acknowledgements} J.G. is supported by a SFI Royal Society University Research Fellowship. \end{acknowledgements}
\section{Introduction} Optimal control, which aims at devising ideal control pulses to optimize a given physical process, is finding wide application in the fields of theoretical quantum information science~\cite{PhysRevLett.89.188301, PhysRevA.74.022312, schirmer09, PhysRevA.84.042315, PhysRevLett.103.110501, 0953-2048-27-1-014001, PhysRevA.90.052331, PhysRevLett.106.190501, PhysRevA.84.022307}, quantum optics~\cite{glaser2015training} and quantum chemistry~\cite{shapiro2003principles}, amongst other quantum fields~\cite{brif10review}. Quantum optimal control theory has also found applications in the laboratory, in particular with nuclear magnetic resonance~\cite{nmrReview2007}, trapped ions~\cite{PhysRevA.77.052334} and superconducting qubits~\cite{PhysRevA.82.040305, PhysRevLett.112.240504}. In most instances, optimal control is applied to unitary processes, where dissipation is a nuisance and is considered to be detrimental to the process. If properly engineered, dissipation can, however, be a useful resource for tasks ranging from quantum state preparation in circuit QED~\cite{didier:2014a, liu:2016a} to universal quantum computation~\cite{verstraete:2009a}. While not as widespread as its dissipation-less version, open quantum optimal control has also been studied~\cite{PhysRevA.78.012358, PhysRevA.84.022305, schulte11opengrape, floether12open, goerz14open}. The most widely used algorithms are the open-system versions of GRAPE (Gradient Ascent Pulse Engineering)~\cite{Khaneja2005296,schulte11opengrape} and Krotov~\cite{krotov1995global, yvon03krotov}, although other optimization algorithms~\cite{PhysRevA.84.022326, engel2009local} may also prove useful in the context of open systems. An important difficulty when dealing with open quantum systems is that the Schr\"odinger equation is replaced by a master equation and the wavefunction by a density matrix~\cite{gardiner:2004b}.
For a system of dimension $d$, described by the master equation $\dot\rho = \hat{\mathcal{L}} \rho$, a standard approach is then to express the density matrix $\rho$ in Liouville space as a vector $\rho_L$ of dimension $d^2\times1$ and the superoperator $\hat{\mathcal{L}} \cdot$ representing the master equation as a matrix $L$ of size $d^2 \times d^2$~\cite{schulte11opengrape}. With this representation, time evolution is obtained by either direct matrix exponentiation or it can be implemented with optimized time propagators, for example using expansion in Newton polynomials~\cite{goerz2015optimizing,ashkenazi95newton} or by projection onto Krylov subspace~\cite{gutknecht2007brief, tal2007restart}. Given the large size of $L$ even for moderate $d$, this approach rapidly becomes numerically intensive. Optimal control in open quantum systems has therefore been mostly limited to systems with small Hilbert space size. Here, we present an alternative implementation of the open GRAPE algorithm that eliminates the passage to Liouville space and which is thus well suited for large open quantum systems. This approach avoids matrix exponentiation and rather relies on using standard Runge-Kutta time-integration of the master equation. As an example, we apply this open GRAPE implementation to a problem of current experimental interest: resonator reset in circuit QED. In this architecture, qubit readout consists in injecting microwave photons in a resonator which is dipole coupled to qubits. After readout and before further coherent manipulations or subsequent readout of the qubit can be performed, it is essential to reset the system by removing the measurement photons from the resonator. The usual approach is to wait for several photon decay times $T_\kappa = 1/\kappa$, with $\kappa$ the resonator decay rate, for the photons to leak out of the resonator~\cite{mcclure2015rapid, bultink:2016a}. 
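To make the two representations concrete, the following sketch (a toy damped-qubit example of our own, not tied to the paper's system) propagates the same master equation once via the $d^2\times d^2$ Liouvillian matrix and matrix exponentiation, and once by direct fixed-step Runge-Kutta integration of the $d\times d$ density matrix; the two final states agree, but only the first route ever builds the $d^2\times d^2$ matrix.

```python
import numpy as np
from scipy.linalg import expm

# Toy system: driven, decaying qubit.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sm = np.array([[0, 1], [0, 0]], dtype=complex)   # |g><e|
H = 0.7 * sx
gamma = 0.4
d = 2

def lindblad(rho):
    """Right-hand side of the master equation, rho_dot = L[rho]."""
    ad = sm.conj().T
    decay = sm @ rho @ ad - 0.5 * (ad @ sm @ rho + rho @ ad @ sm)
    return -1j * (H @ rho - rho @ H) + gamma * decay

# Liouville-space route: with column stacking, vec(A X B) = (B^T kron A) vec(X).
I = np.eye(d)
ad = sm.conj().T
Lmat = (-1j * (np.kron(I, H) - np.kron(H.T, I))
        + gamma * (np.kron(sm.conj(), sm)
                   - 0.5 * np.kron(I, ad @ sm)
                   - 0.5 * np.kron((ad @ sm).T, I)))

rho0 = np.array([[0, 0], [0, 1]], dtype=complex)   # qubit initially excited
T = 1.5
rho_liou = (expm(Lmat * T) @ rho0.reshape(-1, order='F')).reshape(d, d, order='F')

# Direct route: fixed-step 4th-order Runge-Kutta on the d x d matrix.
rho, n_steps = rho0.copy(), 300
dt = T / n_steps
for _ in range(n_steps):
    k1 = lindblad(rho)
    k2 = lindblad(rho + 0.5 * dt * k1)
    k3 = lindblad(rho + 0.5 * dt * k2)
    k4 = lindblad(rho + dt * k3)
    rho += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

print(np.abs(rho - rho_liou).max())   # the two propagators agree
```

For $d=2$ the Liouvillian is only $4\times4$, but the $d^3$ versus $d^6$ gap between the two routes opens quickly as $d$ grows.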
In practice, this is, however, often too slow as a fast repetition time of qubit measurements is critical, e.g., for quantum error correction~\cite{PhysRevA.86.032324}. With this standard passive approach, this need for fast decay is in contradiction with the necessity to use high-Q resonators to avoid qubit Purcell decay~\cite{houck:2008a}. Alternatively, active reset can be performed, where a microwave tone is used to empty the resonator in a shorter time. Such a reset tone can be either conditional on the readout result~\cite{bultink:2016a} or unconditional~\cite{mcclure2015rapid, bultink:2016a} using no knowledge of the resonator and qubit states. Devising an active unconditional reset protocol is an ideal test problem for our open GRAPE implementation since it is an intrinsically dissipative process requiring a large Hilbert space size due to the many resonator photons used for qubit measurement. Moreover, active resonator reset in circuit QED was recently explored experimentally~\cite{mcclure2015rapid, bultink:2016a}, giving us the opportunity to compare our numerical results to experimental data. The paper is organized as follows: We first present a brief overview of open GRAPE in Sec.~\ref{sec:grape}. We then discuss our implementation of this algorithm in Sec.~\ref{sec:implementation}. Section~\ref{sec:reset} is devoted to the application of the algorithm to active resonator reset. Finally, Sec.~\ref{sec:conclusion} summarizes our work. \section{Optimal control for open quantum systems} \label{sec:grape} Before discussing our implementation of open GRAPE, we first present an overview of the problem solved by the GRAPE algorithm~\cite{Khaneja2005296} and of open GRAPE~\cite{schulte11opengrape}. The reader familiar with these concepts can immediately skip to Sec.~\ref{sec:implementation}. 
\subsection{The control problem} Consider a system with the free Hamiltonian $H_0$ and subject to $R$ independent control fields, each described by a Hamiltonian $H_k$, such that the full system Hamiltonian reads~\cite{PhysRevA.84.022307,Khaneja2005296} \begin{equation} H(t) = H_0 + \sum_{k=1}^R u_{k}(t) H_k. \label{eq:Hamiltonian} \end{equation} The classical parameters $u_k(t)$ in the above expression can be continuously adjusted to change the strength of the control fields on the system. In the context of circuit QED, these $u_k(t)$ can, for example, correspond to the time-dependent amplitude of different microwave drives on the resonator or the qubit. The objective of the control problem is to find the optimal set $\left\{ u_k(t)\right\}$ to accomplish a specific task, most typically implementing quantum gates~\cite{PhysRevLett.103.110501, 0953-2048-27-1-014001}. This can be expressed as an optimization problem where the goal is to maximize the performance index $\Phi[\lbrace u_k \rbrace]$, a measure of the success of the desired task and a functional of the control parameters. As the optimization problem must be of finite dimension, the control amplitudes, $u_k(t)$, are taken to be piecewise constant. For a process of duration $T$, each $u_k(t)$ is divided into $N$ time steps of duration $\Delta t = T/N$ as illustrated in Fig.~\ref{fig:filtering}(a). In this way, for the $j^{\mathrm{th}}$ step, i.e. for \mbox{$t \in [(j-1)\Delta t; \; j\Delta t [\,$}, the function $u_{k}(t)$ is a constant of amplitude $u_k(j)$ with $j \in \lbrace 1,2, \ldots\, , N\rbrace$. The elements of the set $\left\{ u_k(j) \right\}$ are referred to as the controls. In practice, these sharp controls are smoothed out by the finite bandwidth of the control lines.
Following Ref.~\cite{PhysRevA.84.022307} and as illustrated in Fig.~\ref{fig:filtering}(b), this important experimental consideration can be taken into account by filtering the controls in the evaluation of the performance index and its gradient. This filtering procedure maps the piecewise constant functions described by the set $\left\{ u_k(j) \right\}$ to smoother piecewise constant functions defined by the larger set $\left\{ s_k(l) \right\}$ with $l= 1,2, \dots M$ and $M = T / \delta t\gg N$. For completeness, details of this filtering procedure can be found in Appendix~\ref{sec:gaussian}. \begin{figure}[t] \includegraphics[width=\linewidth]{Figure_grape_filter_v2.pdf} \caption{Schematic of a gradient-based optimization step with GRAPE-type controls update, and Gaussian filtering to account for experimental constraints. Starting from initial controls shown in (a), we calculate the filtered experimental pulse shape in (b). From this filtered field, the gradient, $\partial \Phi / \partial u_k$, is calculated using the chain rule (see Ref.~\cite{PhysRevA.84.022307} or App.~\ref{sec:gaussian} for details) and the controls are updated in (c), which leads to a new filtered field in (d). The boundary conditions of the field are taken into account by fixing the first and last control.} \label{fig:filtering} \end{figure} An approach to optimize the performance index is to update the controls by using a gradient-based optimization algorithm such that~\cite{Khaneja2005296} \begin{align} u_k(j) \rightarrow u_k(j) + \sum_{lm}B_{kj,lm} \frac{\partial \Phi }{\partial u_l(m)}, \label{eq:update} \end{align} where $B_{kj,lm}$ are the elements of a step matrix which depends on the details of the chosen optimization algorithm. 
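One simple realisation of this bandwidth filtering (our own minimal stand-in for the procedure of Appendix~\ref{sec:gaussian}) is a zero-order-hold upsampling of the controls $\{u_k(j)\}$ to the subpixel grid followed by convolution with a normalized Gaussian kernel:

```python
import numpy as np

def gaussian_filter_controls(u, dt_pixel, dt_sub, omega_B):
    """Map piecewise-constant controls u(j) to smoothed subpixel values
    s(l): zero-order-hold upsampling followed by convolution with a
    normalized Gaussian of time-domain width ~ 1/omega_B.
    (Illustrative stand-in for the paper's Appendix procedure.)"""
    reps = int(round(dt_pixel / dt_sub))
    s = np.repeat(u, reps)                     # zero-order hold to M subpixels
    sigma = 1.0 / omega_B                      # kernel width in time
    half = int(4 * sigma / dt_sub)             # truncate at 4 sigma
    tk = np.arange(-half, half + 1) * dt_sub
    kernel = np.exp(-0.5 * (tk / sigma) ** 2)
    kernel /= kernel.sum()                     # unit DC gain
    return np.convolve(s, kernel, mode='same')

u = np.array([0.0, 1.0, 1.0, -0.5, 0.0])       # N = 5 coarse pixels
s = gaussian_filter_controls(u, dt_pixel=1.0, dt_sub=0.1, omega_B=10.0)
```

Since the map from $\{u_k(j)\}$ to $\{s_k(l)\}$ is linear, the chain rule propagates the gradient computed on the filtered field back to the coarse controls, as described in Ref.~\cite{PhysRevA.84.022307}.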
Simple gradient descent optimization corresponds to the choice $B_{kj,lm} \propto \delta_{kl}\delta_{jm}$, while for more sophisticated methods, such as the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm, $B_{kj,lm}$ is related to the inverse of the Hessian matrix~\cite{Fletcher:2000uq}. Since the BFGS algorithm leads to improved convergence~\cite{Fouquieres:2011fk}, it will be used in the numerical computations presented below. A non-trivial step in the update rule Eq.~\eqref{eq:update} is the evaluation of the gradient of the performance index. While this can be done by numerical derivatives, this approach becomes intractable for problems with a large set of controls. Using an analytical result described below for open systems, the GRAPE algorithm allows for an efficient calculation of this gradient. \subsection{Open GRAPE} We consider an open quantum system whose dynamics is described by the Markovian master equation \begin{align}\label{eq:master} \dot{\rho} = -i [H,\rho] + \hat\Gamma \rho \equiv \hat{\mathcal{L}} \rho. \end{align} In this expression, $\hat\Gamma \cdot$ is the superoperator for the different possible dissipation channels acting on the system, which can be expressed in standard Lindblad form as~\cite{gardiner:2004b} \begin{align} \hat\Gamma \rho = \sum_j \gamma_j \hat{ \mathcal{D}}[a_j]\rho, \label{eq:disspation} \end{align} with $\hat{\mathcal{D}}[a_j]\rho = a_j \rho a_j^\dag - \{a_j^\dag a_j, \rho \}/2$ and $\gamma_j$ the damping rate for channel $j$ associated to the system operator $a_j$. The formal solution to this equation can be expressed as the time-ordered exponential \begin{align} \rho(t) = \mathcal{T}\! \exp \left\{ \int_{0}^{t} \mathrm{d} t'\, \hat{\mathcal{L}}(t') \right\} \rho(0).
\end{align} Taking advantage of the piecewise constant nature of the controls, this can be written more simply as \begin{align} \rho(T) = \hat{L}_N \ldots \hat{L}_j \ldots \hat{L}_1 \rho(0), \end{align} with the evolution superoperator defined from time $(j-1)\Delta t$ to time $j\Delta t$ as \begin{align} \hat{L}_j\cdot = \exp \left\{ -i \Delta t \,(\,[H_j\, , (\cdot) ] + i\hat\Gamma\cdot\,)\, \right\} \end{align} where $H_j = H_0 + \sum_k u_k( j ) H_k$ is the time-independent Hamiltonian associated to the $j^\mathrm{th}$ time step. For many control problems, the performance index can be expressed as a function of operator averages or, alternatively, as the overlap between a final state $\rho(T)$ and a target state. In both cases the resulting performance index takes the form \begin{align} \Phi = \text{Tr} \Big( \sigma \hat{L}_N \ldots \hat{L}_1 \rho(0) \Big), \label{eq:phi} \end{align} where $\sigma$ is either the target state or an operator whose expectation value is evaluated. In the former case, this figure of merit is bounded between 0 and 1, with $\Phi = 1$ for $\rho(T) = \sigma$. Taking advantage of the piecewise constant character of the evolution, the derivative of the performance index takes the form~\cite{Khaneja2005296} \begin{align} \frac{\partial \Phi }{\partial u_k(j)} &= \mathrm{Tr} \left\{ \lambda_j(\sigma) \frac{\partial \hat{L}_j }{\partial u_k(j)} \rho_{j-1} \right\} \end{align} where \begin{align} \rho_j = \hat{L}_j \ldots \hat{L}_1 \rho(0) \label{eq:rho_j} \end{align} is a forward-in-time evolved density matrix, while \begin{align}\label{eq:lambda} \lambda_j(\sigma) = \hat{L}_{j+1}^\dag \ldots \hat{L}_N^\dag \sigma \end{align} is the backward-in-time evolution from the final target state. To first order in $\Delta t$ the derivative of the $j^\mathrm{th}$ time-evolution operator is~\cite{schulte11opengrape} \begin{equation} \frac{\partial \hat{L}_j\cdot }{\partial u_k(j)} \approx -i \Delta t \, [ H_k, \, (\hat{L}_j\cdot) ] .
\label{eq:partialuk} \end{equation} Approximation of the gradient to higher-order in $\Delta t$ can improve convergence of the optimization~\cite{Fouquieres:2011fk}. Moreover, for simplicity, we have considered the controls to be parameters of the Hamiltonian only. This approach can, however, be adapted to allow for control over the dissipation rates $\gamma_j$~\cite{PhysRevA.90.052331}. Finally, the derivative of the performance index is \begin{equation} \frac{\partial \Phi }{\partial u_k(j)} = -i \Delta t \, \mathrm{Tr} \big\{ \lambda_j(\sigma) [ H_k, \, \rho_j ] \big\}. \label{eq:dphidu} \end{equation} Thus, evaluating the gradient of the performance index requires the calculation of the forward-in-time evolved states $\rho_j$ and of the backward-in-time evolved targets $\lambda_j(\sigma)$. The analytical result of Eq.~\eqref{eq:dphidu} is the core of the GRAPE algorithm~\cite{Khaneja2005296}. The standard approach to obtain these states, $\rho_j$ and $\lambda_j(\sigma)$, is to express the density matrices and the master equation in Liouville space~\cite{schulte11opengrape}. For a system with Hilbert space dimension $d$, the superoperators then take the form of $d^2 \times d^2$ matrices and the $N$ evolution operators $\hat L_j$ are obtained by computing matrix exponentials of these matrices. While simple to implement, this procedure is numerically intensive for moderate to large system sizes. \section{Open GRAPE with Runge-Kutta integration} \label{sec:implementation} Rather than moving to Liouville space and relying on matrix exponentiation, we present here an approach based on direct integration of the master equation using a standard Runge-Kutta routine. As we argue below, this leads to computational speedups even for moderate Hilbert space dimension, $d$. 
With this method, the forward-in-time propagation is performed by numerical integration of the differential equation \begin{align} \label{eq:rkmaster} d\rho = \hat{\mathcal{L}}\rho \, dt \end{align} starting from the initial state $\rho(0)$ using standard Runge-Kutta routines. In practice, the integration is done in a stepwise manner to obtain $\rho_j$ for all values of $j$. In other words, Eq.~\eqref{eq:rkmaster} is integrated for a time $\Delta t$ from the initial state of $\rho_0$ to obtain $\rho_1$, which is saved for later use. Then $\rho_1$ is used as initial state and integrated for a time $\Delta t$ to obtain $\rho_2$ and so on. Similarly, the backward-in-time propagation is performed by numerical integration of the master equation \begin{align} -d\lambda = \hat{\mathcal{L}}^\dagger\lambda \, (-dt) , \end{align} which is also solved stepwise but backwards in time, such that $\lambda(t-\delta t) = \lambda(t) + \hat{\mathcal{L}}^\dagger \lambda(t)(- \delta t)$ with $\delta t$ as a small numerical step, from the initial (target) state $\lambda_N = \lambda(T) = \sigma$. Integration for a time $\Delta t$ backwards in time leads to $\lambda_{N-1}$ which is then used as the next initial state and, continuing this way, all $\lambda_j$ are obtained. With $\rho_j$ and $\lambda_j$ calculated, the derivative given in Eq.~\eqref{eq:dphidu} is readily evaluated using the saved $\rho_j$ and $\lambda_j$. We now turn to a simple analysis of the scaling with system size $d$ of the standard approach versus the present Runge-Kutta integration method. For simplicity, we neglect the efficiency gain that can be obtained in both cases from taking advantage of the sparse character of matrices. We also take the complexity of the multiplication and exponentiation of $n\times n$ matrices to be $\mathcal{O}(n^3)$. Better scaling can be obtained from state-of-the-art algorithms, resulting in improvements for both the standard approach and the present Runge-Kutta integration method. 
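The forward/backward recipe above can be verified directly against brute-force differentiation. The sketch below implements Eqs.~\eqref{eq:rho_j}, \eqref{eq:lambda} and \eqref{eq:dphidu} for a toy problem of our own choosing (a decaying qubit with a single $\sigma_x$ control, not the resonator-reset system considered later); the GRAPE gradient matches central finite differences up to the first-order-in-$\Delta t$ error of Eq.~\eqref{eq:partialuk}.

```python
import numpy as np

# Toy open system: decaying qubit, one sigma_x control, H_0 = 0.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sm = np.array([[0, 1], [0, 0]], dtype=complex)
num = sm.conj().T @ sm
gamma, dt, N = 0.3, 0.02, 50

def L_rho(rho, u):          # master-equation right-hand side
    H = u * sx
    return (-1j * (H @ rho - rho @ H)
            + gamma * (sm @ rho @ sm.conj().T - 0.5 * (num @ rho + rho @ num)))

def L_dag(lam, u):          # adjoint generator, for the costates
    H = u * sx
    return (1j * (H @ lam - lam @ H)
            + gamma * (sm.conj().T @ lam @ sm - 0.5 * (num @ lam + lam @ num)))

def rk4(f, x, u, h):
    k1 = f(x, u); k2 = f(x + 0.5 * h * k1, u)
    k3 = f(x + 0.5 * h * k2, u); k4 = f(x + h * k3, u)
    return x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

rho0 = np.array([[0, 0], [0, 1]], dtype=complex)   # start excited
sigma = np.array([[1, 0], [0, 0]], dtype=complex)  # target |g><g|
rng = np.random.default_rng(1)
u = rng.uniform(-1, 1, N)                          # random controls

def phi(u):
    rho = rho0
    for j in range(N):
        rho = rk4(L_rho, rho, u[j], dt)
    return np.trace(sigma @ rho).real

# Forward states rho_j and backward costates lambda_j.
rhos = [rho0]
for j in range(N):
    rhos.append(rk4(L_rho, rhos[-1], u[j], dt))
lams = [None] * (N + 1)
lams[N] = sigma
for j in range(N - 1, -1, -1):
    lams[j] = rk4(L_dag, lams[j + 1], u[j], dt)

# GRAPE gradient: dPhi/du(j) = -i dt Tr{ lambda_j [sigma_x, rho_j] }.
grad = np.array([(-1j * dt * np.trace(lams[j] @ (sx @ rhos[j] - rhos[j] @ sx))).real
                 for j in range(1, N + 1)])

# Brute-force central finite differences for comparison.
eps = 1e-6
fd = np.empty(N)
for j in range(N):
    up, um = u.copy(), u.copy()
    up[j] += eps; um[j] -= eps
    fd[j] = (phi(up) - phi(um)) / (2 * eps)

print(np.linalg.norm(grad - fd) / np.linalg.norm(fd))
```

Note that the analytic gradient needs only one forward and one backward sweep, whereas the finite-difference check costs $2N$ full propagations.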
In the standard Liouville-space approach, the matrix exponentiation involved in computing the superoperators $\hat L_j$ of dimensions $d^2 \times d^2$ has a complexity $\mathcal{O}(d^6)$. For the ${N}$ piecewise constant steps of the controls, the total complexity is therefore \begin{equation} \mathcal{C}_{\mathrm{exp}} = \mathcal{O}\left( {N} \times d^6 \right). \label{eq:Cexp} \end{equation} In contrast, the Runge-Kutta integration approach described here requires the products of operators represented by $d \times d$ matrices. One caveat of this method is that the calculation is specific to the given input state $\rho(0)$. The complexity of this approach can then be estimated as \begin{equation} \mathcal{C}_{\mathrm{RK}} = \mathcal{O}\left( n_s \, n_{\mathrm{RK}} \times d^3 \right), \label{eq:RK} \end{equation} where $n_s$ is the number of input states to be considered and $n_{\mathrm{RK}}$ the number of Runge-Kutta steps. Improvement over the standard Liouville space approach is thus expected for system size \mbox{$d \gg (n_s n_{\mathrm{RK}}/N)^{1/3}$}. Importantly, the numbers $n_s$, $n_{\mathrm{RK}}$ and $N$ are often independent of system size, suggesting a computational speedup for large Hilbert spaces. When considering bandwidth filtered controls, where the $N$ controls are replaced by $M\gg N$ sub-pixels in order to approximate a smooth function~\cite{PhysRevA.84.022307} (see Appendix~\ref{sec:gaussian}), computational speedup is expected for even smaller Hilbert space sizes. A second advantage of the present approach, not captured by this simple analysis, is the reduced memory usage since superoperators in Liouville space are never created nor stored in memory. The optimization of an arbitrary process requires averaging the performance index over $n_s = d^2$ input states spanning the full Liouville space~\cite{goerz14open}. However, many processes of general interest are simpler and, as a consequence, can lead to $n_s \ll d^2$. 
In particular, an average over only three appropriately chosen input states is required to optimize a unitary process in the presence of dissipation~\cite{goerz14open}. Estimating $n_{\mathrm{RK}}$ is a more difficult task since, with adaptive integration step size, the number of integration steps is parameter and problem dependent~\cite{Galassi:2011uq}. As an example, for the reset process described in Sec.~\ref{sec:clear}, we observe that $n_{\mathrm{RK}}/M \sim 10-100$ depending on the chosen value of $M$. Given that $n_s = 2$ for the reset problem, we expect significant speedup even for moderate Hilbert space size of $d\sim 10$. Finally, we note that the Runge-Kutta approach presented here is only efficient if we perform a GRAPE-type concurrent update of the controls. In the case of a Krotov-type update, where only one control is updated at each step of the optimization algorithm~\cite{PhysRevA.84.022305}, the complexity of the present and the Liouville space approaches is expected to be similar. Indeed, the latter approach allows one to reuse most of the calculated exponentials between updates. Here, we consider a GRAPE-type update where all controls are updated concurrently; it has been shown that, while the initial convergence of such a concurrent update can be slower in some cases, the resulting pulse sequences are usually smoother~\cite{Jager2014}. \section{Application to resonator reset} \label{sec:reset} As an application of this open GRAPE implementation, we consider the problem of active reset following qubit readout in circuit QED \cite{wallraff2004strong, PhysRevA.69.062320}. Before presenting numerical results, we first briefly review qubit readout in this system and present the active reset problem.
\subsection{Readout and reset in circuit QED}\label{sec:clear} Circuit QED is characterized by the strong electric-dipole coupling $g$ between a superconducting qubit of frequency $\omega_a$ and a microwave resonator of frequency $\omega_r$. In the dispersive regime, where the qubit-resonator detuning $|\Delta| = |\omega_a-\omega_r| \gg g$, the system is described by the effective Hamiltonian ($\hbar=1$) \begin{align} \label{eq:Hdrive} H_0 = ( \omega_r + \chi \sigma_z) a^\dag a + \frac{\omega_a}{2}\sigma_z + \varepsilon(t) \left[ a^\dag e^{-i \omega_d t} + \mathrm{h.c.} \right], \end{align} where $\chi = g^2/\Delta$ is the dispersive shift and $\mathrm{h.c.}$ stands for hermitian conjugate. The last term represents a drive on the cavity of amplitude $\varepsilon(t)$ and frequency $\omega_d$. Because of the dispersive coupling, the cavity frequency is shifted by $\pm \chi$ depending on the state of the qubit. Under drive, the time-evolution leads to a qubit-state dependent population and/or phase of the cavity state. This dependency can be resolved by homodyne detection of the cavity output field, leading to a qubit measurement. In order to include cavity damping in our calculations, we use the master equation \begin{equation} \dot \rho = -i \left[H \,,\, \rho\right] + \kappa \hat{\mathcal{D}}\left[a \right]\rho, \label{eq:ME} \end{equation} where $\kappa$ is the cavity decay rate associated to the dissipator $\hat{\mathcal{D}}[a]\rho = a \rho a^\dag - \{a^\dag a, \rho \}/2$. Under a constant drive of amplitude $\varepsilon$, the steady-state solution in the dispersive regime (i.e. $H = H_0$) of this master equation leads to the qubit-state dependent intracavity average photon number \begin{equation} \bar n_{g/e} = \frac{ \varepsilon ^2}{(\omega_r \pm \chi - \omega_d)^2 + (\kappa/2)^2}. \end{equation} Here, we are concerned with the return to vacuum state once the measurement is completed.
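This steady-state expression is easily checked numerically: for a fixed qubit state the cavity master equation is linear, and the steady state is the null vector of the (truncated) Liouvillian. A minimal check in the frame rotating at the drive frequency, with arbitrary illustrative parameters rather than the experimental ones:

```python
import numpy as np

def steady_photon_number(delta_c, eps, kappa, nmax=30):
    """Steady-state <a^dag a> for H = delta_c a^dag a + eps (a + a^dag)
    with decay rate kappa, obtained from the null vector of the
    Liouvillian in a truncated Fock space (drive rotating frame)."""
    d = nmax
    a = np.diag(np.sqrt(np.arange(1, d)), k=1)
    ad = a.conj().T
    H = delta_c * ad @ a + eps * (a + ad)
    I = np.eye(d)
    # With column stacking, vec(A X B) = (B^T kron A) vec(X).
    L = (-1j * (np.kron(I, H) - np.kron(H.T, I))
         + kappa * (np.kron(a.conj(), a)
                    - 0.5 * np.kron(I, ad @ a)
                    - 0.5 * np.kron((ad @ a).T, I)))
    # The steady state spans the null space of L; extract it via the SVD.
    _, _, vh = np.linalg.svd(L)
    rho = vh[-1].conj().reshape(d, d, order='F')
    rho /= np.trace(rho)
    return np.trace(ad @ a @ rho).real

chi, kappa, eps = 1.3, 1.1, 0.8           # illustrative values only
for sign in (+1, -1):                      # qubit ground/excited
    delta_c = sign * chi                   # drive at the bare cavity frequency
    print(sign, steady_photon_number(delta_c, eps, kappa),
          eps**2 / (delta_c**2 + (kappa / 2) ** 2))
```

The numerical null-space result reproduces $\bar n_{g/e}$ to numerical precision, as it must for a linear cavity, whose steady state is a coherent state.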
The common approach of passive reset is to wait for a time $T \gg 1/\kappa$ for the photons to naturally escape from the resonator. We use our implementation of open GRAPE to find an optimal $\varepsilon(t)$ to speed up this process to times smaller than $1/\kappa$ through an active process. When driving at a frequency $\omega_r \approx \omega_d$, the average number of photons is independent of the qubit state and an active reset is easily obtained by changing the phase of the drive. However, active reset is not as simple when considering the nonlinear corrections to the dispersive Hamiltonian. The first of these corrections is a qubit-induced nonlinearity of the cavity described by the Hamiltonian\footnote{In the two-level approximation of circuit QED, the sign of this nonlinear correction is qubit-state dependent, with $H_K \propto \sigma_z$. However, in the more complete multilevel treatment, the Kerr nonlinearities $K_{g}$ ($K_e$) of the resonator for a qubit in the ground (excited) state can have the same sign~\cite{PhysRevLett.105.100504}. In particular, for the parameters considered the Kerr nonlinearities have the same sign and are of similar amplitudes for both qubit states~\cite{mcclure2015rapid}. For simplicity, we consider $K_e \approx K_g$. }~\cite{PhysRevLett.105.100504, bourassa:2012a, nigg:2012a} \begin{equation} H_K = K (a^\dag a)^2, \end{equation} with $K$ the Kerr nonlinearity. This correction makes exact analytical solutions of the active reset problem difficult as it leads to nonlinear equations of motion for the resonator state. This nonlinearity can moreover lead to vastly different qubit-state dependent resonator states, something that has been exploited for qubit readout, e.g., in the Josephson bifurcation amplifier~\cite{JBA_review}. Here, because of this nonlinearity, a reset pulse more complicated than in the purely dispersive case is found to be necessary~\cite{mcclure2015rapid}.
In the next section, we present numerical results for active cavity reset based on the experimental parameters reported in Ref.~\cite{mcclure2015rapid}. For these calculations, we use the master equation of Eq.~\eqref{eq:ME} with Hamiltonian $H = H_0 + H_K$ and the parameters $\chi = 2\pi \times 1.3$~MHz, $K = -2\pi \times 2.1$~kHz and $\kappa = 2\pi \times 1.1$~MHz, corresponding to a photon decay time of $T_\kappa = 1/\kappa = 145$~ns. Moreover, to help in making comparisons, we will express the drive strength in terms similar to those of Ref.~\cite{mcclure2015rapid}. We therefore introduce the normalized drive power $P_\mathrm{norm} = P/P_\mathrm{1ph}$, where $P$ is the applied drive power and $P_\mathrm{1ph}$ is the drive power leading to an average steady-state resonator population of one photon. With the above parameters, we numerically identify the corresponding driving amplitude $\sqrt{P_\mathrm{1ph}} = 2\pi \times 1.595$ MHz such that $\varepsilon = \sqrt{P_{\mathrm{norm}}P_\mathrm{1ph}}$. \begin{figure}[t] \includegraphics[width=0.99\linewidth]{Figure_exp_clear_grape_v2.pdf} \caption{ (a) Average photon number during resonator reset procedures following a readout process with drive power $P_{\mathrm{norm}} = 4$. The solid (dashed) lines indicate results for the qubit in the ground (excited) state. See panel~(b) for legend. The inset shows the average photon number for $t/T_\kappa >1$ on a logarithmic scale to allow for better comparison of the reset schemes. (b) Pulse shapes for the resonator reset procedures used in panel~(a). System parameters and the Hamiltonian are described in Sec.~\ref{sec:clear}.
Additional parameters for the GRAPE algorithm include a control duration $\Delta t = 1$~ns and Gaussian filtering with bandwidth $\omega_B/2\pi = 100$~MHz and subpixel duration $\delta t = 0.1$~ns (see Appendix~\ref{sec:gaussian} for parameter definitions).} \label{fig:expcleargrape} \end{figure} \subsection{Active reset using open GRAPE} \label{sec:fasterreset} We now turn to a numerical study of active resonator reset using the open GRAPE implementation introduced in Sec.~\ref{sec:implementation}. For simplicity, we assume the measurement preceding the resonator reset to be quantum non-demolition and, thus, consider the qubit's state to be fixed throughout the process. As a result, we can replace the operator $\sigma_z$ by the number $\pm 1$ in Eq.~\eqref{eq:Hdrive}. As we seek an active reset protocol independent of measurement outcomes, the performance index used for the open GRAPE optimization is averaged over the two qubit states. Following Eq.~\eqref{eq:phi}, the simplest performance index is \begin{align}\label{eq:Phi0} \Phi = \sum_{i=g,e} \text{Tr}\left\{\rho_T \rho_{i}(T) \right\}, \end{align} with $\rho_{i=g,e}(t)$ the qubit-dependent resonator state and $\rho_T = \ket{0}\bra{0}$ the target (vacuum) state. Here, $\rho_{i=g,e}(t=0)$ are the qubit-dependent resonator states following a measurement pulse $\varepsilon(t)$ of duration $T_m$ similar to that used in Ref.~\cite{mcclure2015rapid}. In our simulations, the resonator is initialized to the vacuum state at time $t = -T_m$; the state is then time evolved using the master equation, Eq.~\eqref{eq:ME} with $H = H_0 + H_K$, leading to the qubit-dependent states $\rho_{i=g,e}(t=0)$. Starting from these states and using the same master equation, the open GRAPE algorithm is then used to optimize the unconditional reset pulse shape $\varepsilon(t)$ for $t \in ]0,T[$, with $\varepsilon(t=0)$ fixed by the measurement pulse and $\varepsilon(t=T)=0$.
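The performance index of Eq.~\eqref{eq:Phi0} is simply the summed overlap of the two final resonator density matrices with the vacuum projector. A toy Python sketch (hypothetical names; in the actual optimization $\rho_{g/e}(T)$ come from propagating the master equation):

```python
import numpy as np

def performance_index(rho_g_T, rho_e_T, dim):
    """Summed overlap of the two final resonator states with the
    vacuum target rho_T = |0><0|, as in the performance index Phi."""
    rho_target = np.zeros((dim, dim), dtype=complex)
    rho_target[0, 0] = 1.0  # vacuum projector
    return sum(np.trace(rho_target @ rho).real
               for rho in (rho_g_T, rho_e_T))

dim = 4
vac = np.zeros((dim, dim), dtype=complex); vac[0, 0] = 1.0
one = np.zeros((dim, dim), dtype=complex); one[1, 1] = 1.0

phi_perfect = performance_index(vac, vac, dim)  # both branches reset
phi_partial = performance_index(vac, one, dim)  # excited branch failed
```

The index reaches its maximum of 2 only when both qubit-conditioned resonator states are in the vacuum, which is what makes the optimized reset unconditional.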
Using the parameters of the previous section, Fig.~\ref{fig:expcleargrape}(a) compares the average intracavity photon number as a function of time under various resonator reset schemes. In particular, the passive reset (orange curves) is compared to GRAPE optimized active reset (blue curves) of duration $T = 300~\mathrm{ns} \approx 2 T_\kappa$. While there is still significant resonator population after a wait time $T\gtrsim 2 T_\kappa$ in the passive case, the GRAPE optimized pulse empties the cavity independently of the qubit state. More precisely, the log-scale inset shows that the optimized pulse shape brings the photon number below $10^{-4}$, while over the same time passive reset leads to a residual average photon population close to 1. The numerically found pulse shape corresponding to these results is the blue line in Fig.~\ref{fig:expcleargrape}(b). It shows a fast oscillating behaviour on top of a slowly evolving envelope. Importantly, the quality of the reset is only marginally affected by these rapid oscillations. Indeed, as shown by the red lines in both panels, a polynomial fit to the optimized pulse shape essentially leads to changes in the average photon number that are only visible on the logscale inset of Fig.~\ref{fig:expcleargrape}(a). This indicates that a complex pulse shape is not essential to obtain good performance, and that the solution may be amenable to regularization, whereby penalties are added to the objective function (for instance to penalize rapid changes in time) in order to make the result simpler and/or more robust. As a comparison, the green lines in Fig.~\ref{fig:expcleargrape} correspond to the average photon number and pulse shape used in an optimized two-step active reset similar to the so-called CLEAR pulse introduced in Ref.~\cite{mcclure2015rapid}. Compared to CLEAR, the GRAPE pulse shape leads to a smaller residual photon population of the cavity in $T = 300$~ns $\sim 2 T_\kappa$.
Importantly, because photon decay under GRAPE optimized pulse shapes is far from exponential, in the example of Fig.~\ref{fig:expcleargrape} the cavity is already close to having reached its final state at a time $\sim 220$ ns. This suggests that faster resets are possible. \begin{figure}[tb] \includegraphics[width=0.99\linewidth]{Figure_Nend.pdf} \caption{ Average photon number at the end of the active reset pulse as a function of the readout power $P_\mathrm{norm}$. The results are shown for pulses of duration {$T = $} 150 ns, 110 ns, 70 ns and 40 ns, corresponding to $T \approx 1.04~T_\kappa, 0.76~T_\kappa, 0.48~T_\kappa, 0.28~T_\kappa$. The solid (dashed) line is the average final photon number for the qubit in the ground (excited) state. The gray lines indicate the failure points for the 70~ns and the 40~ns optimizations.} \label{fig:finaln} \end{figure} \begin{figure}[tb] \includegraphics[width=0.99\linewidth]{Figure_speed_limit_fit.pdf} \caption{ The blue dots are the numerical speed limit extracted from the open GRAPE optimizations. We define here the speed limit as the time where the optimization fails, corresponding to the branching points indicated by gray lines in Fig.~\ref{fig:finaln} for the 40~ns and 70~ns curves. The dashed gray line is a power law fit,~$\propto (P_{\mathrm{norm}})^\alpha$, to the data with $\alpha = 0.65$. } \label{fig:speed} \end{figure} To further speed up the process, we follow the insight from DRAG and optimize over two quadratures of the drive~\cite{PhysRevLett.103.110501}. In a frame rotating at the drive frequency, the last term of the dispersive Hamiltonian of Eq.~\eqref{eq:Hdrive} is then replaced by \begin{align} H_{d} = \varepsilon_X(t) (a\dag + a) + i\varepsilon_Y(t) (a\dag - a). \end{align} Results from the optimization of these two quadratures are presented in Fig.~\ref{fig:finaln}, which shows the average photon number at the final pulse time $T$ for increasing measurement power $P_\mathrm{norm}$.
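The two-quadrature drive $H_d$ above is straightforward to construct in a truncated Fock basis. A short Python sketch (the truncation dimension and amplitudes are illustrative); note that $H_d$ is Hermitian for real $\varepsilon_X$, $\varepsilon_Y$, as a Hamiltonian must be:

```python
import numpy as np

def drive_hamiltonian(eps_x, eps_y, dim):
    """Two-quadrature drive H_d = eps_X (a^dag + a) + i eps_Y (a^dag - a),
    written in a Fock basis truncated at `dim` photons."""
    a = np.diag(np.sqrt(np.arange(1.0, dim)), k=1)  # annihilation operator
    ad = a.conj().T                                 # creation operator
    return eps_x * (ad + a) + 1j * eps_y * (ad - a)

# one time slice of the piecewise-constant controls (illustrative values)
H = drive_hamiltonian(0.3, 0.7, 8)
```

In the optimization, one such matrix is built for each subpixel of the piecewise-constant controls $\varepsilon_X(t)$ and $\varepsilon_Y(t)$.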
As an initial guess, the $X$ quadrature is set to the CLEAR pulse shape and the $Y$ quadrature is set randomly. These results are shown for four different values of $T$, ranging from $150~\mathrm{ns}\sim1.04 T_\kappa$ (red lines) to times as short as $40~\mathrm{ns}\sim0.3 T_\kappa$ (green lines). Following the convention of Fig.~\ref{fig:expcleargrape}(a), the full lines correspond to the qubit ground state and the dashed lines to the qubit excited state. Unsurprisingly, the general trend is an increase of the residual photon number with $P_\mathrm{norm}$. However, for $150~\mathrm{ns}\sim1.04~T_\kappa$, the optimization results in residual population as small as $10^{-3}$ at high power $P_\mathrm{norm} = 10$. The difficulty of the open GRAPE algorithm in converging with decreasing $T$ is made apparent by the large fluctuations of the residual photon number with $P_\mathrm{norm}$. Despite this, and quite remarkably, final populations of less than $10^{-3}$ photons are obtained for reset times under $T_\kappa$ and all $P_\mathrm{norm}$ values considered. The complexity in converging becomes more apparent at very short times, where we observe large fluctuations and large separations between the results obtained for the two qubit states. These branchings, corresponding to a change in the optimization landscape as a function of $T$ and $P_{\mathrm{norm}}$, are illustrated by vertical gray lines for the two shortest values of $T$. \begin{figure}[b] \includegraphics[width=0.99\linewidth]{Figure_110_70.pdf} \caption{Photon number as a function of time during the active reset pulse for a pulse duration of 110 ns and 70 ns. The solid (dashed) line is for the qubit in the ground (excited) state. The inset shows the corresponding Gaussian filtered drives.
The solid lines of the inset are the $X$-drive, while the dotted lines are the $Y$-drive.} \label{fig:timeevo} \end{figure} \begin{figure}[tb] \includegraphics[width=\linewidth]{Figure_PhotonPenalty.pdf} \caption{Photon number as a function of time for optimized drives with (orange curves) and without (blue curves) the photon number penalty, $\Phi_p$, included. The solid (dashed) curves are for the qubit in the ground (excited) state. The inset displays the same data with a logarithmic photon number axis. The parameters are the same as in Fig.~\ref{fig:expcleargrape}. We use the penalty weight $\beta = 0.2/T$.} \label{fig:photonapp} \end{figure} Fig.~\ref{fig:speed} presents this branching time as a function of $P_{\mathrm{norm}}$. As illustrated by the dashed line, this failure time follows a simple power-law behaviour. This is reminiscent of a quantum speed limit, which here corresponds to the minimal time $T$ in which the optimization can be successful~\cite{PhysRevLett.103.240501, goerz11speed}. For pure state evolution, the quantum speed limit can be expressed analytically in terms of the mean value and the variance of the energy~\cite{PhysRevA.67.052109, PhysRevLett.103.160502}. Expressions have also been obtained for open processes~\cite{PhysRevLett.110.050403, PhysRevLett.110.050402}. The observed simple behaviour with $P_{\mathrm{norm}}$ suggests that analytical expressions could also be obtained for the reset problem. We note, however, that variations in the initial guess for the controls, cost function or optimization algorithm could lead to faster reset times~\cite{sorensen2015exploring}, and that the results of Fig.~\ref{fig:speed} therefore do not represent an absolute speed limit. To gain more insight into the optimization, Fig.~\ref{fig:timeevo} presents the average photon number as a function of time and the corresponding pulse shapes obtained from GRAPE (inset).
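The power-law fit quoted above, $\propto (P_\mathrm{norm})^\alpha$ with $\alpha = 0.65$, amounts to a linear least-squares fit in log-log space. A sketch on synthetic data (the numbers below are illustrative, not the data of Fig.~\ref{fig:speed}):

```python
import numpy as np

def fit_power_law(p_norm, t_fail):
    """Least-squares power-law fit t = c * p**alpha, done as a linear
    fit in log-log space; returns (alpha, c)."""
    slope, intercept = np.polyfit(np.log(p_norm), np.log(t_fail), 1)
    return slope, np.exp(intercept)

# synthetic data generated with the exponent quoted in the text
p = np.array([2.0, 4.0, 6.0, 8.0, 10.0])
t = 20.0 * p**0.65
alpha, c = fit_power_law(p, t)
```

On noisy data the same fit returns the best-fit exponent, which is how a value such as $\alpha = 0.65$ is extracted from the branching times.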
These results are shown for $T= 70$ ns (orange lines) and $T=110$ ns (blue lines) with a readout power of $P_\mathrm{norm} = 6$. Both pulse shapes are similar and are reminiscent of a smoothed CLEAR pulse~\cite{mcclure2015rapid}. The $Y$ quadrature also appears to have minimal impact and is always close to zero. For both of the final times $T$, the average photon number first increases from its initial value of $\sim 6$ before decreasing to the value shown in Fig.~\ref{fig:finaln}. This increase is particularly notable for the short pulse time $T=70$ ns and points to the difficulty in converging as the reset time $T$ is decreased. In practice, this large photon population can lead to a breakdown of the dispersive approximation used here and to a departure from the quantum non-demolition character of the dispersive readout~\cite{PhysRevA.79.013819}. This breakdown is expected to occur for $T=70$ ns, where the average photon number exceeds the critical photon number $n_\mathrm{crit} = (\Delta/2g)^2 \sim 29$ for a short period of time. To prevent this large photon number increase, a penalty $\Phi_p$ related to the intracavity photon number can be added to the performance index such that \begin{align}\label{eq:PerformancePenalty} \Phi = \Phi_0 - \beta \Phi_p, \end{align} with $\Phi_0$ defined by Eq.~\eqref{eq:Phi0} and $\beta$ a constant weighting the penalty, determined by trial and error. To penalize large photon populations we take \begin{align} \Phi_p = \sum_{i=g,e} \int_0^T \text{Tr} \left\{a\dag a \,\rho_i(t) \right\} dt. \label{eq:phi_p} \end{align} Details about the numerical implementation of $\Phi_p$ and its derivative with respect to the controls can be found in Appendix~\ref{app:photonpenalty}. Results for optimization with this modification of the performance index are presented in Fig.~\ref{fig:photonapp} for $T= 80$~ns and $P_\mathrm{norm} = 6$.
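Numerically, the time integral in Eq.~\eqref{eq:phi_p} is evaluated as a discrete sum over the time steps of the evolution, as detailed in Appendix~\ref{app:photonpenalty}. A toy sketch of this discretization (hypothetical names, a small Hilbert space, and hand-built density matrices in place of the propagated states):

```python
import numpy as np

def photon_penalty(rhos_g, rhos_e, n_op, dt):
    """Discretized photon-number penalty: dt * sum_n Tr{a^dag a rho_i(n)},
    summed over the two qubit-conditioned trajectories."""
    return sum(dt * np.trace(n_op @ rho).real
               for rhos in (rhos_g, rhos_e) for rho in rhos)

dim, steps, dt = 3, 5, 0.1
n_op = np.diag(np.arange(dim)).astype(float)  # a^dag a in the Fock basis
vac = np.diag([1.0, 0.0, 0.0])                # vacuum, contributes 0
one = np.diag([0.0, 1.0, 0.0])                # one photon, contributes dt each step

phi_p = photon_penalty([vac] * steps, [one] * steps, n_op, dt)
```

Trajectories that linger at high photon number accumulate a large penalty, which is exactly what the weighted term $-\beta\Phi_p$ suppresses.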
For these values, the optimization without penalty reaches a final photon population of $10^{-4}$ but reaches close to 25 photons in the transient dynamics. On the other hand, using Eq.~\eqref{eq:PerformancePenalty} with the initial value of the pulse given by the results obtained without penalty, the transient photon number can be kept well below $n_\mathrm{crit}$. This is however achieved at the cost of an increase of the final photon number to $\sim10^{-1}$. These proof-of-principle results for the photon penalty could be improved by considering more diverse initial pulse shapes probing larger regions of the optimization space. In addition, a more systematic study of the role and optimal value of the weight $\beta$ could improve the results. \section{Conclusion and outlook} \label{sec:conclusion} We have presented an implementation of the GRAPE algorithm for open quantum systems that circumvents the usual transformation to Liouville space. This implementation is advantageous when optimizing quantum processes in large open quantum systems. As an example of this approach, we have demonstrated an optimized reset protocol for a readout resonator in circuit QED. Rapid qubit reset after readout is of high practical importance. Indeed, the reset time limits the repetition time of current experiments and rapid qubit recycling can, moreover, be advantageous in the implementation of quantum algorithms~\cite{martin2012experimental}. Furthermore, the results of our optimization may be directly applied to protocols that rely on repetitive qubit readout in circuit QED, e.g., in quantum feedback schemes \cite{riste2013deterministic, andersen2015closing, PhysRevLett.112.170501} or in quantum error correction protocols \cite{PhysRevA.86.032324,corcoles2015demonstration}. While resonator reset in the dispersive regime of circuit QED serves as an instructive study, we emphasize that this implementation of GRAPE may have much broader use.
Following recent experimental results, our work could be expanded to study resonator reset in the strongly nonlinear regime of circuit QED~\cite{bultink:2016a}. Our approach appears ideally suited to simulate the large Hilbert space that is needed to simulate these experiments. Another interesting application is the optimization of qubit measurement in circuit QED~\cite{PhysRevA.90.052331,PhysRevLett.114.200501}. Finally, our implementation may prove useful in optimizing unitary gates that not only work in the qubit subspace but also rely on the full Hilbert space of a resonator and multiple qubits~\cite{PhysRevA.91.032325}. \begin{acknowledgments} The authors acknowledge valuable feedback from F.~Motzoi. CKA and JV thank Universit\'{e} de Sherbrooke for their hospitality. SB and AB acknowledge financial support from NSERC. CKA acknowledges financial support from the Villum Foundation Center of Excellence, QUSCOPE, and from the Danish Ministry of Higher Education and Science. JV would like to thank MITACS Globalink Program for financial assistance. Computations were made on the supercomputer Mammouth parallele II from Universit\'{e} de Sherbrooke, managed by Calcul Qu\'{e}bec and Compute Canada. The operation of this supercomputer is funded by the Canada Foundation for Innovation (CFI), NanoQu\'{e}bec, RMGA and the Fonds de recherche du Qu\'{e}bec - Nature et technologies (FRQ-NT). This research was undertaken thanks in part to funding from the Canada First Research Excellence Fund. \end{acknowledgments} \begin{appendix} \section{Gaussian filter} \label{sec:gaussian} In this Appendix, we present the Gaussian filtering procedure developed by Motzoi \textit{et~al.} in Ref.~\cite{PhysRevA.84.022307}, and mentioned in Sec.~\ref{sec:grape}. This approach allows one to incorporate into the GRAPE algorithm experimental constraints such as the limited bandwidths of control lines and pulse generators.
In circuit QED, while typical electronics limits the controls $\{u_k(j)\}$ to a minimal duration $\Delta t$ of a \mbox{few ns}, these effects lead to a smoothed drive which can significantly modify the dynamics. The main idea of Ref.~\cite{PhysRevA.84.022307} is to calculate the dynamics using a new smoothed pulse $s_k(t) \equiv s_k[\{u_k(j)\},t]$ which is a functional of the set of controls, while still performing the optimization on the $N$ controls $\{u_k(j)\}$. As the GRAPE algorithm requires a piecewise constant field, this new smoothed drive $s_k(t)$ is approximated as a piecewise constant drive, with each step a subpixel of amplitude $s_{k,n}$ and duration $\delta t \ll \Delta t$. The set of controls, $\{u_k(j)\}$, now translates into a set of drive amplitudes, $s_{k}(n)$, for a time $t \in [(n-1) \delta t ; \, n\delta t [$ with $n \in \{1, 2, \dots M\}$ and $M = T/\delta t \gg N$ the number of subpixels. The controls and the smoothed drive are related by \begin{align} s_{k}(n) = \sum_{j=1}^{N} T_{k,n,j}\, u_{k}(j), \end{align} with $T_{k,n,j}$ a transfer function matrix which acts as a filter on the controls. The derivatives of the performance index can be calculated using the chain rule \begin{align} \pfrac{\Phi}{u_k(j)} = \sum_{n=1}^M \pfrac{\Phi}{s_{k}(n)} \pfrac{s_{k}(n)}{u_{k}(j)}, \end{align} where the derivative with respect to $s_{k}(n)$ can be found using Eq.~\eqref{eq:partialuk}, while $\partial s_{k}(n) / \partial u_k(j)$ comes directly from the transfer matrix. In this paper, all numerics use transfer functions based on Gaussian filters since most experimental hardware constraints can be approximated well by such a filter~\cite{PhysRevA.84.022307}. Hardware components are typically characterized by their 3dB attenuation bandwidth, $\omega_B$.
Using a filter function \begin{align} F(\omega) = \exp (- \omega^2 / \omega_{0}^2 ), \end{align} with the reference bandwidth for a given control field given by \mbox{$\omega_0 = \omega_B / (-\text{ln}(1/\sqrt{2}))^{1/2} \approx \omega_B /0.5887$}, the transfer matrix can now be calculated as~\cite{PhysRevLett.103.110501} \begin{align} T_{k,n,j} &= \int_{-\infty}^{\infty} \hspace{-0.1cm} \frac{F(\omega){}\cos\big(\omega\frac{2(n{-}1)\delta t {-}(2j{-}1)\Delta t}{2}\big)\sin(\frac{\omega\Delta t}{2})}{\pi \omega} d\omega \nonumber \\ &= \frac{ \text{erf}\Big[\omega_{0} \frac{(n{-}1)\delta t{-}(j{-}1)\Delta t}{2}\Big] - \text{erf}\Big[\omega_{0} \frac{(n{-}1)\delta t{-}j\Delta t}{2}\Big]}{2}, \end{align} with erf being the error function. \section{Photon number penalty} \label{app:photonpenalty} In this Appendix, we detail the numerical calculation of the gradient $\partial \Phi_p / \partial s_k(j)$ of the photon number penalty to the performance index $\Phi_p$ defined in Eq.~\eqref{eq:phi_p} of Sec.~\ref{sec:fasterreset}. Using Appendix~\ref{sec:gaussian}, this can be translated into $\partial \Phi_p / \partial u_k(j)$ needed for the update rule, Eq.~\eqref{eq:update}. We show that, even though $\Phi_p$ is the result of a time integration over the full duration of the reset process, the gradient can still be calculated using a single forward and a single modified backward evolution. In order to calculate $\Phi_p$ numerically, we approximate the continuous integral of Eq.~\eqref{eq:phi_p} by a discrete sum over the subpixels defined in Appendix~\ref{sec:gaussian}, \begin{align} \Phi_p \approx \sum_{i=e,g} \sum_{n=0}^M \delta t \text{Tr}\big( a\dag a \hat{L}_n \ldots \hat{L}_1 \rho_i(0) \big). \end{align} Now, we need to find $\partial \Phi_p / \partial u_k(j)$. 
In general, we need the gradient of the time integral of the mean value of an operator $A$, that is of the sum $\sum_{n=0}^M \delta t\, \exv{A}_n$ with $\exv{A}_n = \text{Tr}\big(A\, \hat{L}_n \ldots \hat{L}_1 \rho(0)\big)$. Using the linearity of the derivative, this sum is \begin{align} \sum_{n=0}^M \delta t \frac{\partial \exv{A}_n }{\partial s_k(j)} =& \sum_{n=0}^M \delta t\, \text{Tr} \Big( A \frac{\partial (\hat{L}_n \ldots \hat{L}_1)}{\partial s_k(j)} \rho(0) \Big) \\ =& \sum_{n=0}^M \delta t \, \text{Tr} \Big[ A\, \hat{L}_n \ldots \hat{L}_{j+1} \Big( \frac{\partial \hat{L}_j}{\partial s_k(j)} \Big) \nonumber \\ & \times \hat{L}_{j-1} \ldots \hat{L}_1 \rho(0) \Big] \Theta(n-j) \end{align} where we have used the Heaviside step function \begin{align} \Theta(n) = \begin{cases}0 & \text{ if } n< 0 \\ 1 & \text{ if } n \geq 0 \end{cases}. \end{align} Using the linearity of the trace, we see that \begin{align} \sum_{n=0}^M \delta t \frac{\partial \exv{A}_n }{\partial s_k(j)} =& \text{Tr} \Big[ \Big( \sum_{n=0}^M \delta t \Theta(n-j) A \,\hat{L}_n \ldots \hat{L}_{j+1} \Big) \nonumber \\ & \phantom{\text{Tr} \Big[} \times \frac{\partial \hat{L}_j}{\partial s_k(j)} (\hat{L}_{j-1} \ldots \hat{L}_1 \rho(0)) \Big], \end{align} such that the last parenthesis of the trace is the same as the forward evolution used for the calculation of $\Phi_0$, while the first parenthesis is a stepwise backward evolution starting from the operator $A$. This backward evolution is equivalent to a sum over backward evolutions starting at all time steps. For example, for $j=M-2$ the parenthesis reads $ A \hat L_M \hat L_{M-1} + A \hat L_{M-1} + A = (A \hat L_M + A)\hat L_{M-1} + A$.
Therefore, we can rewrite the gradient of the photon number penalty as \begin{align} \frac{\partial \Phi_p}{\partial s_k(j)} = \delta t \sum_{i=e,g} \text{Tr} \Big( \zeta_{M-j} \frac{\partial \hat{L}_j}{\partial s_k(j)} \rho_{j-1}\Big), \end{align} with the quantities $\zeta_{M-j} = a\dag a + \hat{L}_{j+1}^{\dagger} \zeta_{M-j+1}$ defined recursively starting from $\zeta_M = a^\dagger a$ and $\rho_{j} = \hat{L}_{j} \ldots \hat{L}_1 \rho(0)$ as defined in Eq.~\eqref{eq:rho_j}. The derivative $ \partial \hat{L}_j/\partial s_k(j)$ is calculated as in Eq.~\eqref{eq:partialuk}. Thus, by adding $a^\dagger a $ to the result of the backward evolution at each timestep, the scaling of the GRAPE algorithm is not affected by this more complicated performance index and the gradient of the penalty function is obtained by the calculation of only one forward and one modified backward evolution. \end{appendix}
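The recursive structure of $\zeta$ can be checked in a toy model where the propagators $\hat L_j$ are plain matrices acting by multiplication (simple stand-ins for the actual superoperators, with the adjoint action replaced by right multiplication). The sketch below verifies that a single backward pass, adding $A$ at each step, reproduces the explicit sum $\sum_{n} A\,\hat L_n \cdots \hat L_{j+1}$:

```python
import numpy as np

rng = np.random.default_rng(0)
M, dim = 6, 3
A = rng.standard_normal((dim, dim))                      # stands in for a^dag a
L = [None] + [rng.standard_normal((dim, dim)) for _ in range(M)]  # L[1..M]

def zeta_direct(j):
    """Explicit sum  sum_{n=j}^{M} A L_n ... L_{j+1}  (the n = j term is A)."""
    total = np.zeros((dim, dim))
    for n in range(j, M + 1):
        prod = A.copy()
        for k in range(n, j, -1):
            prod = prod @ L[k]
        total += prod
    return total

def zeta_recursive(j):
    """Single backward pass: zeta(M) = A, zeta(j) = A + zeta(j+1) L_{j+1}."""
    z = A.copy()
    for k in range(M, j, -1):
        z = A + z @ L[k]
    return z
```

The agreement of the two routines is the content of the worked example $(A \hat L_M + A)\hat L_{M-1} + A$ above, and it is what keeps the penalized gradient at the cost of one forward and one backward evolution.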
\section{Introduction} Recent progress in black-hole observations in the gravitational and electromagnetic spectra, as well as theoretical efforts to test strong gravity via black holes \cite{Abbott:2016blz,TheLIGOScientific:2016src,Goddi:2016jrs,Bambi:2015kza,Konoplya:2016pmh}, makes it important to understand possible correlations between characteristics of both fields in the vicinity of a black hole. In \cite{Cardoso:2008bp} it was stated that parameters of the unstable circular null geodesics around any stationary, spherically symmetric, asymptotically flat black hole, such as the angular velocity $\Omega_c$ and the principal Lyapunov exponent $\lambda$, are in remarkable correspondence with the quasinormal modes \cite{QNMreviews} that the black hole emits in the eikonal (short wavelengths or high multipole number $\ell$) part of its spectrum. There it was shown that the eikonal quasinormal frequencies of the four- and higher-dimensional Schwarzschild black hole are \begin{equation}\label{QNM} \omega_n=\Omega_c\,\ell-i(n+1/2)\,|\lambda|, \end{equation} where $n$ is the overtone number. In addition, it was argued that the above formula must be valid for all stationary, spherically symmetric non-asymptotically flat black holes, allowing for the outgoing wave boundary condition in the far region (for example, asymptotically de Sitter black holes). The issue of rotating black holes was also addressed in \cite{Cardoso:2008bp}. For slowly rotating black holes, the eikonal real oscillation frequencies are linear combinations of the orbit's precessional and orbital frequencies, while for Kerr black holes of arbitrary spin the link between photon spheres and eikonal quasinormal modes is more complicated \cite{Yang:2012he}.
At the same time, it has been recently noticed that the association of the characteristics of the null geodesics with quasinormal modes is based more on the history of the specific black-hole models than on an actual and generic constraining link \cite{Khanna:2016yow}. The essential element of the correspondence is the event horizon: when the event horizon is replaced by a reflecting surface \cite{Price:2017cjr} or a wormhole throat \cite{Khanna:2016yow}, the correspondence (\ref{QNM}) is not observed. The arguments of \cite{Cardoso:2008bp} for spherically symmetric black holes implied the applicability of the WKB formula developed in \cite{Schutz:1985zz} for a particular, though quite wide, class of effective potentials, which have the form of a potential barrier with a single extremum outside the event horizon and approach constant values at the horizon and spatial infinity (or the de Sitter horizon). This requirement certainly cannot be guaranteed ad hoc, so that, if one supposes that this initial setting is not valid for some black hole, then a counterexample would be straightforward. At the same time, there are a number of cases where the correspondence does work and even more cases where it is erroneously believed to be working (examples of both can be found in \cite{EikonalWork,Hod,Gallo} and references therein). Therefore, here we are interested in testing the possible correspondence in an even narrower setup: Assuming that radiation of gravitational waves by a spherical black hole is governed by ``the WKB-well-behaved'' effective potential with a single extremum, we would like to learn how broad the set of situations is in which the relation (\ref{QNM}) between null geodesics and quasinormal modes is guaranteed. With this aim we shall consider the situation when the WKB formula is accurate and even \emph{exact} in the eikonal regime, and, nevertheless, the relation (\ref{QNM}) is not fulfilled.
We shall show that there is a counterexample (suggested by the Einstein-Lovelock theory) to the claimed correspondence. The Einstein-Lovelock theory of gravity \cite{Lovelock:1971yv} is the most general mathematically consistent metric theory leading to second order equations of motion in an arbitrary number of spacetime dimensions $D$. It is a natural generalization of the Einstein theory in $D >4$ and may represent string-theory-motivated quantum corrections to the classical geometry in higher dimensions. Thus, this discussion gives us also an excuse to find analytic formulas for the eikonal quasinormal modes for gravitational perturbations of higher curvature corrected black holes and to complement, in this way, a recent WKB analysis of the quasinormal spectrum of Lovelock black holes, which was done in \cite{Yoshida:2015vua}. The paper is organized as follows: Sec. II gives the basic formulas for the calculation of the principal Lyapunov exponent and the angular velocity for the unstable null geodesics in spherically symmetric spacetimes. In Sec. III the Lyapunov exponents and the angular velocity are found for the asymptotically flat Einstein-Gauss-Bonnet black hole. Sec. IV shows that the frequencies predicted by the Lyapunov exponent and angular velocity are different from those given by the WKB formula for the generic Einstein-Lovelock black hole. In Sec. V, analytical formulas for quasinormal modes in the eikonal (i.e. high multipole numbers $\ell$) regime are written down in terms of black-hole parameters for the Einstein-Gauss-Bonnet case. In Sec. VI we discuss the obtained results and formulate the actual limits of the correspondence.
\section{Null geodesics in the background of spherically symmetric black holes} A static, spherically symmetric metric in $D$-dimensional spacetime has the form: \begin{equation}\label{metric} d s^2 = f(r) d t^2 - \frac{1}{g(r)} d r^2 -r^2 d\Omega_{n}^2, \end{equation} where the functions $f(r)$ and $g(r)$ represent solutions of the field equations under consideration and $d\Omega_n^2$ is the line element of an $(n=D-2)$-dimensional sphere. Let us consider geodesic particle motion around such a black hole and restrict attention to the stability of null circular orbits. The stability can be analyzed in terms of the so-called Lyapunov exponents \cite{Lyapunov}. This kind of analysis for the Schwarzschild black hole was developed for the first time in \cite{Cornish:2001jy}. In \cite{Dettmann} it was shown that when a system consisting of any finite number of particles moves under the action of a scalar potential at a constant kinetic energy, then the Lyapunov exponents come in pairs which sum to the same constant. The equations of motion can be written in the following schematic way \begin{equation} \frac{d X_{i}}{d t} = H_{i}(X_{j}). \end{equation} A small deviation from a given orbit to a nearby curve through the small perturbation $\delta X_{i}$, \begin{equation} X_{i} \rightarrow X_{i} + \delta X_{i}, \end{equation} implies the linearization of the equation of motion \begin{equation} \frac{d \delta X_{i}(t)}{dt} = K_{ij}(t) \delta X_{j}(t), \end{equation} where \begin{equation} K_{ij}(t) = \left.\frac{\partial H_{i}}{\partial X_{j}} \right\vert_{X_{i}(t)} \end{equation} is called the \emph{infinitesimal evolution matrix}. The solution to the linearized equation can be expressed in terms of the \emph{evolution matrix} $L_{ij}$: \begin{equation} \delta X_{i} (t) = L_{ij} (t) \delta X_{j}(0). \end{equation} The evolution matrix obeys the relations \begin{equation} \dot{L}_{ij} (t) = K_{im} L_{mj} (t), \quad L_{ij} (0) = \delta_{ij}.
\end{equation} The principal Lyapunov exponents are given by \begin{equation} \lambda = \lim_{t \rightarrow \infty} \frac{1}{t} \log\left(\frac{L_{jj}(t)}{L_{jj}(0)}\right). \end{equation} The general conditions for the existence of the above limit are given by the Oseledets theorem \cite{Oseledets}. Following Cardoso \textit{et~al.} \cite{Cardoso:2008bp}, one can see that the principal Lyapunov exponent for null geodesics around a static, spherically symmetric metric (\ref{metric}) is \begin{equation}\label{GenLyap} \lambda = \frac{1}{\sqrt{2}}\sqrt{-\frac{r_c^2}{f_c}\left(\frac{d^2}{dr_*^2}\frac{f}{r^2}\right)_{r=r_c}}, \end{equation} where the tortoise coordinate is defined as $dr/dr_*=\sqrt{g(r)f(r)}$. The coordinate angular velocity for the null geodesics is \begin{equation}\label{angularvel} \Omega_c = \frac{f_c^{1/2}}{r_c}, \end{equation} where $r_{c}$ is the radius of the circular null geodesics, satisfying the equation \begin{equation}\label{circulareq} 2f_c=r_cf'_c. \end{equation} With the above formulas at hand, one is able to analyze the stability and angular velocity of particles orbiting around an arbitrary static spherically symmetric black hole. A recent discussion of the general features and instabilities of null geodesics in arbitrary spherically symmetric spacetimes, and of the Lyapunov exponents, can be found in \cite{Jia:2017nen}. \section{Null geodesics in the background of the Einstein-Gauss-Bonnet black hole} Here we shall consider the null geodesics in the black-hole background within the Einstein-Gauss-Bonnet theory. The Lagrangian of the $D$-dimensional Einstein-Gauss-Bonnet theory has the form: \begin{equation}\label{gbg3} \mathcal{L}=-2\Lambda+R+ k (R_{\mu\nu\lambda\sigma}R^{\mu\nu\lambda\sigma}-4\,R_{\mu\nu}R^{\mu\nu}+R^2), \end{equation} where $k= \alpha/((D-3)(D-4))$.
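Before specializing to Gauss-Bonnet, Eqs.~(\ref{GenLyap})--(\ref{circulareq}) can be sanity-checked in the simplest case, the four-dimensional Schwarzschild metric $f(r)=g(r)=1-2M/r$ (the $\alpha\to0$, $D=4$ limit of the solution below), for which the photon sphere sits at $r_c=3M$ and $\lambda=\Omega_c=1/(3\sqrt{3}M)$. A minimal Python sketch, using central finite differences for the tortoise-coordinate derivative (all names illustrative):

```python
import numpy as np

M = 1.0
f = lambda r: 1.0 - 2.0 * M / r          # Schwarzschild lapse, f = g

def d(fun, r, h=1e-4):
    """Central finite difference."""
    return (fun(r + h) - fun(r - h)) / (2.0 * h)

r_c = 3.0 * M                             # solves 2 f(r_c) = r_c f'(r_c)
Omega_c = np.sqrt(f(r_c)) / r_c           # coordinate angular velocity

# principal Lyapunov exponent; with f = g, d/dr_* = f(r) d/dr
V = lambda r: f(r) / r**2
d2V = f(r_c) * d(lambda r: f(r) * d(V, r), r_c)
lam = np.sqrt(-(r_c**2 / f(r_c)) * d2V / 2.0)
# For Schwarzschild, lam and Omega_c coincide: both equal 1/(3*sqrt(3)*M)
```

This degeneracy $\lambda=\Omega_c$ is special to $D=4$ Schwarzschild; the expansions below show how the two quantities split once the Gauss-Bonnet coupling is turned on.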
The metric function of the asymptotically flat Einstein-Gauss-Bonnet black hole is given by \cite{Boulware:1985wk} \begin{equation}\label{metricfunction} f(r) = g(r) = 1+ \frac{r^2}{2 \alpha} - \frac{r^2}{2 \alpha} \sqrt{1 + \frac{8 \alpha \mu}{(D-2) r^{D-1}}}, \end{equation} where $\mu$ is the mass parameter. It is well known that Gauss-Bonnet black holes, as well as their Lovelock generalizations, are gravitationally unstable when the coupling constant $\alpha$ (and the higher order constants in the case of the Lovelock theory) is not small enough. Therefore, in order to obtain concise and easily interpretable analytical expressions, one can expand all the necessary relations in terms of the small parameter $\alpha$. Thus, the radius of the circular geodesics can be written as follows \begin{equation}\label{rexpansion1} r_{c} = r_{c0} + r_{c1} \alpha + r_{c2} \alpha^2 + {\cal O}(\alpha^3). \end{equation} When expanding in terms of the Gauss-Bonnet coupling, from here on we shall imply that the corresponding dimensionless parameter is $\alpha/r_{H}^2$, where $r_{H}$ is the black hole radius. In order to measure everything in terms of the black-hole radius it is sufficient to re-parameterize the mass $\mu$ as a function of the radius $r_{H}$ as in eq. 4 of \cite{Konoplya:2010vz}. The equation for the null circular orbits (\ref{circulareq}) for the metric (\ref{metricfunction}) reads \begin{equation}\label{circulareq2} r^3 \mu(1 - D)+r^{D/2} \sqrt{(D-2) \left((D-2) r^D+8 r \alpha \mu \right)}=0. \end{equation} Substituting (\ref{rexpansion1}) into (\ref{circulareq2}), one can find the coefficients of the expansion (\ref{rexpansion1}), which are: $$ r_{c0} = \left(\frac{D-2}{(D-1) \mu }\right)^{\frac{1}{3-D}}, \quad r_{c1} = -\frac{4 \left(\frac{D-2}{(D-1) \mu }\right)^{\frac{1}{D-3}}}{D^2-4 D+3}, $$ \begin{equation}\label{rc0} r_{c2} = -\frac{24 \left(\frac{D-2}{(D-1) \mu }\right)^{\frac{3}{D-3}}}{\left(D^2-4 D+3\right)^2}.
\end{equation} In the same way one can expand the angular velocity $\Omega_c$ (given by (\ref{angularvel})) of the null geodesics in terms of $\alpha$: $$ \Omega_c = \sqrt{\frac{D-3}{D-1}} \left(\frac{D-2}{(D-1) \mu }\right)^{\frac{1}{D-3}} + $$ \begin{equation} \frac{2 \alpha \left(\frac{D-2}{(D-1) \mu }\right)^{\frac{3}{D-3}}}{\sqrt{D-3} (D-1)^{3/2}} + {\cal O}(\alpha^2). \end{equation} Using (\ref{GenLyap}) we can show that the principal Lyapunov exponent has the form $$ \lambda =\frac{(D-3) \left(\frac{D-2}{(D-1) \mu }\right)^{\frac{1}{D-3}}}{\sqrt{D-1}}-\frac{2 \mu \left(\frac{D-2}{(D-1) \mu }\right)^{\frac{D}{D-3}}}{\sqrt{D-1}} \alpha + $$ \begin{equation}\label{Lyapunov-explic} \frac{2 (3 (D-8) D+28) \left(\frac{D-2}{(D-1) \mu }\right)^{\frac{5}{D-3}}}{(D-3) (D-1)^{5/2}} \alpha ^2 +O\left(\alpha ^3\right). \end{equation} Here we expanded the Lyapunov exponent up to second order in $\alpha$, because there will be situations in which the difference between $\lambda$ and the eikonal quasinormal modes appears only at the second order. Notice that the Lyapunov exponents are not invariant measures and should be interpreted with care \cite{Cornish:2003ig}. \section{Gravitational perturbations of the Einstein-Lovelock black hole} The natural generalization of the second order in curvature Gauss-Bonnet term to arbitrary order is given by the Lovelock theory \cite{Lovelock:1971yv}. A static spherically symmetric black-hole solution in the Einstein-Lovelock gravity is given by the general form (\ref{metric}), where (see \cite{Boulware:1985wk}, \cite{Wheeler:1985qd}) \begin{equation}\label{Lfdef} f(r)=1-r^2\,\psi(r).
\end{equation} The function $\psi(r)$ satisfies the following relation $$ W(\psi(r))\equiv $$ \begin{equation}\label{LWdef} \frac{D-2}{2}\left(\psi(r)+\sum_{m=2}^\infty\a_m\psi(r)^m\right) = \frac{\mu}{r^{D-1}}\,, \end{equation} where $$\a_m=\frac{\alpha_m}{m}\prod_{p=1}^{2m-2}(D-2-p)=\frac{\alpha_m}{m}\frac{(D-3)!}{(D-1-2m)!},$$ and $\a_m=0$ for any $D-2\leq2m$, implying that $W(\psi)$ is a finite polynomial in $\psi$. Here we are interested only in the solutions to the above algebraic equation which describe the branch having the Einsteinian limit. In other words, we require that our black-hole metric goes over into the corresponding Tangherlini metric \cite{Tangherlini:1963bw} when $\alpha_m \rightarrow 0$. Following \cite{Takahashi:2010ye}, we shall define a new function $T(r)$ as: $$ T(r)\equiv r^{D-3}\frac{dW}{d\psi}= $$ \begin{equation}\label{LTdef} \frac{(D-2) r^{D-3}}{2}\left(1+\sum_{m=2}^\infty m\a_m\psi(r)^{m-1}\right). \end{equation} The gravitational perturbation equations can be treated separately for irreducible representations, so that scalars, vectors and tensors with respect to the $(D-2)$-dimensional rotation group obey separate sets of equations. In \cite{Takahashi:2010ye} it was shown that, after the decoupling of the angular variables, the perturbation equations are reduced to the corresponding second-order master differential equations \begin{equation} \left(\frac{\partial^2}{\partial t^2}-\frac{\partial^2}{\partial r_*^2}+V_i(r_*)\right)\Psi_{i}(t,r_*)=0, \end{equation} where $\Psi_i$ are the wave functions for each type of perturbation: scalar, vector and tensor.
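The small-$\alpha$ expansions of the previous section are easy to cross-check numerically. The sketch below (sympy; the sample values $D=5$, $\mu=1$, $\alpha=10^{-3}$ are our own choice, not taken from the text) solves Eq. (\ref{circulareq}) for the metric function (\ref{metricfunction}) and compares $r_c$, $\Omega_c$ and $\lambda$ with the quoted series coefficients specialized to $D=5$.

```python
# Numerical cross-check of the small-alpha expansions of r_c, Omega_c and
# lambda for the Gauss-Bonnet metric function; D = 5, mu = 1, alpha = 10^-3
# are assumed sample values chosen for this illustration.
import sympy as sp

r = sp.symbols('r', positive=True)
D, mu, alpha = 5, 1, sp.Rational(1, 1000)
f = 1 + r**2/(2*alpha) \
      - r**2/(2*alpha)*sp.sqrt(1 + 8*alpha*mu/((D - 2)*r**(D - 1)))

# Null circular orbit: 2 f(r_c) = r_c f'(r_c)
rc = sp.nsolve(2*f - r*sp.diff(f, r), r, 1.15)
fc = f.subs(r, rc)

Om = sp.sqrt(fc)/rc                                  # angular velocity
h = f/r**2                                           # d/dr_* = f d/dr (g = f)
d2h = (f*sp.diff(f*sp.diff(h, r), r)).subs(r, rc)
lam = sp.sqrt(-(rc**2/fc)*d2h)/sp.sqrt(2)            # Lyapunov exponent

# Truncated series with the coefficients of the text, evaluated at D = 5
a = 1e-3
rc_ser = (4/3)**0.5 - 4*(3/4)**0.5/8*a - 24*(3/4)**1.5/64*a**2
Om_ser = (2/4)**0.5*(3/4)**0.5 + 2*(3/4)**1.5/(2**0.5*8)*a
lam_ser = 2*(3/4)**0.5/2 - (3/4)**2.5*a + 2*(-17)*(3/4)**2.5/64*a**2
print(float(rc), rc_ser, float(Om), Om_ser, float(lam), lam_ser)
```

All three quantities agree with the truncated series at the expected level, i.e. up to the first neglected order in $\alpha$.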
In the eikonal regime, the effective potentials for all three types of gravitational perturbations can be approximated as follows $$ V_t(r)=\ell^2\left(\frac{f(r)T''(r)}{(D-4)rT'(r)}+{\cal O}\left(\frac{1}{\ell}\right)\right),$$ \begin{equation} \label{dominant} V_v(r)=\ell^2\left(\frac{f(r)T'(r)}{(D-3)rT(r)}+{\cal O}\left(\frac{1}{\ell}\right)\right), \end{equation} \begin{equation}\nonumber V_s(r)=\ell^2\left(\frac{f(r)(2T'(r)^2-T(r)T''(r))}{(D-2) rT'(r)T(r)}+{\cal O}\left(\frac{1}{\ell}\right)\right). \end{equation} For further calculations it is useful to re-write the above formulas for the effective potentials symbolically as \begin{equation} V_{i} = \ell^2 \left(\frac{f_{i}(r)}{r^2} + {\cal O}\left(\frac{1}{\ell}\right)\right), \end{equation} where $i$ stands for the tensor (t), vector (v) and scalar (s) types of gravitational perturbations. Thus, $$ f_{t}(r) = \frac{f(r)r T''(r)}{(D-4)T'(r)}, \quad f_{v}(r) = \frac{f(r)r T'(r)}{(D-3)T(r)}, $$ \begin{equation} f_{s}(r) = \frac{rf(r)(2T'(r)^2-T(r)T''(r))}{(D-2) T'(r)T(r)}. \end{equation} \begin{figure} \resizebox{0.9 \linewidth}{!}{\includegraphics*{GBpotential.eps}} \caption{Effective potentials for the scalar-type gravitational perturbations of the asymptotically flat Einstein-Gauss-Bonnet black hole. Here, the black-hole radius $r_{H} =1$, $D=6$, $\alpha =1/10$; $\ell=3$ (black, bottom), $\ell=4$ (blue), $\ell=10$ (red, top).}\label{GBpotential} \end{figure} At high $\ell$, once the effective potential has the form of a potential barrier, falling off at the event horizon and spatial infinity, the WKB formula found in \cite{Schutz:1985zz} (for improvements and extensions of this formula, see \cite{Iyer:1986np,Konoplya:2003ii,Matyjasek:2017psv}) can be applied for finding quasinormal modes: \begin{equation} \frac{Q_0(r_0)}{\sqrt{2Q_0^{(2)}(r_0)}}=i(n+1/2). \label{wkb} \end{equation} Here, the second derivative $Q_0^{(2)}\equiv d^2Q_0/dr_*^2$ is evaluated at the extremum $r_{0}$ of the function $Q_0$.
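In the Einsteinian limit all $\a_m$ vanish and $T(r)\propto r^{D-3}$ is a pure power law; a short symbolic check (sympy, with $f(r)$ kept as an unspecified function) confirms that the three functions $f_t$, $f_v$, $f_s$ then all collapse to $f(r)$, which is why the three gravitational eikonal branches degenerate to the test-field one only in that limit.

```python
# Symbolic check: with all Lovelock couplings zero, T(r) = (D-2) r^(D-3) / 2
# and the three potential functions f_t, f_v, f_s all reduce to f(r).
import sympy as sp

r, D = sp.symbols('r D', positive=True)
f = sp.Function('f')(r)                      # metric function, left generic

T = sp.Rational(1, 2)*(D - 2)*r**(D - 3)
T1, T2 = sp.diff(T, r), sp.diff(T, r, 2)

f_t = f*r*T2/((D - 4)*T1)
f_v = f*r*T1/((D - 3)*T)
f_s = r*f*(2*T1**2 - T*T2)/((D - 2)*T1*T)

checks = [sp.simplify(expr - f) for expr in (f_t, f_v, f_s)]
print(checks)
```

Each difference simplifies to zero for symbolic $D$, so any splitting between the three channels is entirely due to the higher-curvature couplings.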
An example of such a ``good'' effective potential is shown in Fig. \ref{GBpotential}. Notice that in the Einstein-Lovelock theory such behavior of the potential barrier takes place only for sufficiently small values of the coupling constants, which correspond to a stable black hole. Otherwise, the effective potential may have a negative gap near the event horizon, which becomes deeper when $\ell$ is increased. It is important that in the eikonal regime $\ell \rightarrow \infty$ the WKB formula (\ref{wkb}) for potentials, like the one in Fig. \ref{GBpotential}, is \emph{exact}. In the eikonal limit, for each type of perturbations, \begin{equation} Q_0\simeq \omega^2-f_{i}\frac{\ell^2}{r^2}. \end{equation} Then, at the extremum of $Q_0$, we observe that \begin{equation}\label{extremum} 2 f_{i}(r_0)=r_0 f_{i}'(r_0), \end{equation} i.e. since $f(r)$ does not coincide with $f_{i}(r)$, the position of the effective potential's extremum $r_0$ does not coincide (in the general case) with the location of the null circular geodesic $r_c$. The WKB formula for quasinormal modes is also different from the Einsteinian one, as it now includes $f_{i}(r)$ instead of $f(r)$: \begin{equation}\label{main} \omega_{\rm QNM i}=\ell \sqrt{\frac{f_{i0}}{r_0^2}} -i\frac{(n+1/2)}{\sqrt{2}} \sqrt{-\frac{r_0^2}{f_{i0}}\,\left (\frac{d^2}{dr_*^2}\frac{f_{i}}{r^2}\right )_{r_0}}. \end{equation} Thus, it is evident that, even when the effective potential has the form of a barrier, i.e.
the WKB formula (\ref{wkb}) can be applied and is exact, in the general case: \begin{itemize} \item The radius of the circular null geodesics $r_{c}$ does not coincide with the position of the extremum of the effective potential $r_{i 0}$ in the eikonal regime; \item The WKB formula for quasinormal modes now includes the functions $f_{i}$, which are not identical to $f(r)$, so that the eikonal quasinormal frequencies are different for each type of gravitational perturbations (scalar, vector, tensor) and different from the ones expected for the test scalar field. \end{itemize} Each of the above two reasons is sufficient for the breakdown of the proposed correspondence. Thus, it is evident from our general consideration of the Lovelock black holes that the characteristics of the null geodesics and the eikonal quasinormal modes are not necessarily linked by the formula (\ref{QNM}). In the next section we shall write down analytical formulas for the eikonal quasinormal modes in terms of parameters of the Einstein-Gauss-Bonnet black holes and show the discrepancy between QN modes and null geodesics explicitly. \section{Eikonal quasinormal modes in the Einstein-Gauss-Bonnet theory} Here we shall derive analytical expressions for quasinormal modes in the regime of large multipole number $\ell$ for all three types of gravitational perturbations of the Einstein-Gauss-Bonnet black hole. \textbf{Tensor type.} Let us start by finding the position $r_0$ of the extremum of the effective potential, which can be expanded in terms of small $\alpha$: \begin{equation}\label{rexpansion2} r_{0} = r_{00} + r_{01} \alpha + r_{02} \alpha^2 + {\cal O}(\alpha^3). \end{equation} Then, eq. (\ref{extremum}) expanded in $\alpha$ gives us the values of the coefficients $r_{0i}$.
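The shift of the extremum can also be seen by direct root finding. In the sketch below (sympy; $D=5$, $\mu=1$, $\alpha=10^{-3}$ are sample values of our choosing) the extremum of the tensor-type $f_t$ and the null-geodesic radius are computed numerically; their difference is first order in $\alpha$, with $(r_0-r_c)/\alpha\rightarrow-\sqrt{3}$ in this example, consistent with the expansion coefficients quoted in this section.

```python
# Numerical illustration for D = 5, mu = 1 (assumed sample values): at
# alpha = 10^-3 the extremum r_0 of the tensor-type potential function f_t
# and the null-geodesic radius r_c differ at first order in alpha.
import sympy as sp

r = sp.symbols('r', positive=True)
D, mu, alpha = 5, 1, sp.Rational(1, 1000)
f = 1 + r**2/(2*alpha) \
      - r**2/(2*alpha)*sp.sqrt(1 + 8*alpha*mu/((D - 2)*r**(D - 1)))

psi = (1 - f)/r**2                    # from f = 1 - r^2 psi
T = sp.Rational(1, 2)*(D - 2)*r**(D - 3)*(1 + 2*alpha*psi)
f_t = f*r*sp.diff(T, r, 2)/((D - 4)*sp.diff(T, r))

rc = sp.nsolve(2*f - r*sp.diff(f, r), r, 1.15)       # null geodesic
r0 = sp.nsolve(2*f_t - r*sp.diff(f_t, r), r, 1.15)   # potential extremum

shift = float((r0 - rc)/alpha)
print(shift)   # close to -sqrt(3) ~ -1.732
```

The two radii coincide at $\alpha=0$ and split linearly in $\alpha$, which is the first of the two obstructions listed above.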
Thus, for the tensor type of perturbations one has \begin{equation}\label{r00} r_{00} = \left(\frac{D-2}{(D-1) \mu }\right)^{\frac{1}{3-D}},~ r_{01} =-\frac{4 (2 D-5) \left(\frac{D-2}{(D-1) \mu }\right)^{\frac{1}{D-3}}}{D^3-8 D^2+19 D-12}, \end{equation} \begin{equation}\nonumber r_{02} = \frac{8 (D (D (D (2 D-19)+49)+5)-64) \left(\frac{D-2}{(D-1) \mu }\right)^{\frac{3}{D-3}}}{(D-3)^2 \left(D^2-5 D+4\right)^2}. \end{equation} In a similar way one can find coefficients for the two other types of gravitational perturbations. From the above we can see that while $r_{00}$ given by (\ref{r00}) coincides with $r_{c0}$ given by (\ref{rc0}), that is not so for $r_{01}$ and $r_{c1}$ and all the higher corrections. In other words, while the positions of the null circular orbit and extremum of the effective potential coincide in the $D$-dimensional Schwarzschild space-time, they do not when the $\alpha$-correction is turned on. Expanding the real part of (\ref{main}) in $\alpha$, one can see that the real oscillation frequency over the multipole number $\ell$ is $$ \frac{Re (\omega)}{\ell} = \sqrt{\frac{D-3}{D-1}} \left(\frac{D-2}{(D-1) \mu }\right)^{\frac{1}{D-3}}+ $$ \begin{equation} \frac{6 (D-2) \sqrt{\frac{D-3}{D-1}} \left(\frac{D-2}{(D-1) \mu }\right)^{\frac{3}{D-3}}}{D^3-8 D^2+19 D-12} \alpha +O\left(\alpha ^2\right), \end{equation} while the damping rate, characterised by $Im (\omega)$, obeys the relation: \begin{widetext} $$ \frac{Im (\omega)}{ \left(n+\frac{1}{2}\right)} = \frac{(D-3) \left(\frac{D-2}{(D-1) \mu }\right)^{\frac{1}{D-3}}}{\sqrt{D-1}}- \frac{2 \mu \left(\frac{D-2}{(D-1) \mu }\right)^{\frac{D}{D-3}}}{\sqrt{D-1}} \alpha + $$ \begin{equation} \frac{2 (D (D ((D-4) D (4 D-21)+144)-616)+484) \left(\frac{D-2}{(D-1) \mu }\right)^{\frac{5}{D-3}}}{(D-4)^2 (D-3) (D-1)^{5/2}} \alpha ^2 +O\left(\alpha ^3\right) \end{equation} \end{widetext} It is interesting to notice that while $Re (\omega)/\ell$ differs from the one expected from the angular velocity of null 
geodesics already in the linear order in $\alpha$, the value of $Im (\omega)/(n+1/2)$ differs from the Lyapunov exponent, given by (\ref{Lyapunov-explic}), only at the second and higher orders in $\alpha$. \textbf{Vector type.} In a similar fashion, the position of the extremum of the effective potential is given by: $$ r_{00} = \left(\frac{D-2}{(D-1) \mu }\right)^{\frac{1}{3-D}}, \quad r_{01} = \frac{2 \left(\frac{D-2}{(D-1) \mu }\right)^{\frac{1}{D-3}}}{D-1},$$ \begin{equation} r_{02} = -\frac{2 (D ((D-2) D-23)+36) \left(\frac{D-2}{(D-1) \mu }\right)^{\frac{3}{D-3}}}{\left(D^2-4 D+3\right)^2}. \end{equation} The real oscillation frequency $Re (\omega)$ obeys the following relation $$ \frac{Re(\omega)}{\ell} = \sqrt{\frac{D-3}{D-1}} \left(\frac{D-2}{(D-1) \mu }\right)^{\frac{1}{D-3}}- $$ \begin{equation} \frac{2 \alpha (D-2) \sqrt{\frac{D-3}{D-1}} \left(\frac{D-2}{(D-1) \mu }\right)^{\frac{3}{D-3}}}{D^2-4 D+3}+O\left(\alpha ^2\right). \end{equation} The relation for the damping rate $Im (\omega)$ reads $$ \frac{Im (\omega)}{ \left(n+\frac{1}{2}\right)} = \frac{(D-3) \left(\frac{D-2}{(D-1) \mu }\right)^{\frac{1}{D-3}}}{\sqrt{D-1}}-\frac{2 \alpha \mu \left(\frac{D-2}{(D-1) \mu }\right)^{\frac{D}{D-3}}}{\sqrt{D-1}}- $$ \begin{equation} \frac{2 \alpha ^2 (D ((D-18) D+51)-41) \left(\frac{D-2}{(D-1) \mu }\right)^{\frac{5}{D-3}}}{(D-3) (D-1)^{5/2}}+O\left(\alpha ^3\right). \end{equation} Here, again, we see that the damping rate differs from the one expected from the Lyapunov exponent only at the second and higher orders in $\alpha$. \textbf{Scalar type.} The position of the extremum of the effective potential is given by: \begin{equation}\nonumber r_{00} = \left(\frac{D-2}{(D-1) \mu }\right)^{\frac{1}{3-D}}, \quad r_{01} = \frac{4 (D-2) \left(\frac{D-2}{(D-1) \mu }\right)^{\frac{1}{D-3}}}{D^2-4 D+3}, \end{equation} \begin{equation}\nonumber r_{02} =-\frac{8 (D (D ((D-6) D-5)+39)-32) \left(\frac{D-2}{(D-1) \mu }\right)^{\frac{3}{D-3}}}{(D-2) \left(D^2-4 D+3\right)^2}. 
\end{equation} The real oscillation frequency $Re (\omega)$ obeys the relation: $$ \frac{Re(\omega)}{\ell} = \sqrt{\frac{D-3}{D-1}} \left(\frac{D-2}{(D-1) \mu }\right)^{\frac{1}{D-3}}- $$ \begin{equation} \frac{2 \alpha \sqrt{\frac{D-3}{D-1}} (2 D-3) \left(\frac{D-2}{(D-1) \mu }\right)^{\frac{3}{D-3}}}{D^2-4 D+3}+O\left(\alpha ^2\right). \end{equation} The damping rate $Im (\omega)$ can again be found as a series expansion in small $\alpha$: $$ \frac{Im (\omega)}{ \left(n+\frac{1}{2}\right)} = \frac{(D-3) \left(\frac{D-2}{(D-1) \mu }\right)^{\frac{1}{D-3}}}{\sqrt{D-1}}-\frac{2 \alpha \mu \left(\frac{D-2}{(D-1) \mu }\right)^{\frac{D}{D-3}}}{\sqrt{D-1}}+ $$ $$ \frac{2 (D (3 D (13 D-54)+232)-116) \alpha ^2 \left(\frac{D-2}{(D-1) \mu }\right)^{\frac{5}{D-3}-1}}{(D-3) (D-1)^{7/2} \mu } $$ \begin{equation} +O\left(\alpha ^3\right). \end{equation} The damping rate of the scalar type of gravitational perturbations differs from those of the vector and tensor ones, again, only at the second and higher orders in $\alpha$. Notice that the higher the order of the correction one wishes to find for the QN frequencies, the higher the order one needs to reach in the expansion of the position of the extremum of the effective potential. \textbf{A test scalar field.} For a test scalar field in the background of the Einstein-Gauss-Bonnet or Einstein-Lovelock black hole, the dominant centrifugal term in the effective potential is simply $f (r) \ell (\ell+1)/r^2$, so that up to a different function $f(r)$ (which now includes the Lovelock coupling constants) all the deductions of \cite{Cardoso:2008bp} are strict and valid at all steps. Thus, the quasinormal frequencies of the test scalar field will evidently satisfy (\ref{QNM}), while the frequencies of gravitational perturbations are different for all three types and are different from those for the test scalar field even in the eikonal regime.
One should also remember that all the above formulas are obtained in the dominant (in terms of the $1/\ell$-expansion) order of the eikonal regime. In order to use them for accurate estimations of quasinormal modes with sufficiently low $\ell$, one must take into consideration the next order of the $1/\ell$-expansion everywhere. At $\alpha =0$, we reproduce the eikonal formulas found for the $D$-dimensional Schwarzschild black holes \cite{Konoplya:2003ii}. However, we used here different units and, in order to reproduce, for example, eqs. (12, 13) of \cite{Konoplya:2003ii}, one should take in our formulas $\mu \rightarrow (1/2)(D-2) \mu $. \section{Discussion} Though perturbations and quasinormal modes of black holes and branes in the Einstein-Gauss-Bonnet and Lovelock theories were considered in a number of papers \cite{LovelockQNM,Brigante:2007nu,Grozdanov} for various types of asymptotical behavior (flat, dS, AdS), no explicit analytical formula for the eikonal quasinormal frequencies of the gravitational perturbations of an asymptotically flat black hole had been presented. At the same time, the eikonal regime is special in the Gauss-Bonnet and Lovelock theories, because, at sufficiently large values of the coupling constants, it brings a special kind of instability. Here we have found analytical expressions for quasinormal modes of gravitational perturbations of the Einstein-Gauss-Bonnet black hole. When the Gauss-Bonnet coupling constant approaches zero, the analytic expressions for $\omega$ obtained here describe eikonal quasinormal modes of the $D$-dimensional Schwarzschild black hole. The gravitational quasinormal modes coincide with those for a test scalar field \cite{Konoplya:2003ii},\cite{Konoplya:2003dd} \emph{only} in the Einsteinian limit. In our opinion, the broad belief that the eikonal quasinormal modes and unstable null geodesics are necessarily linked by eq.
(\ref{QNM}), at least for any spherically symmetric stationary black holes, has brought a number of misinterpretations into the current literature. For example, ``the universal upper bound'' for quasinormal modes of arbitrary spherical black holes given by eq. 39 in \cite{Hod}, as well as its extension to the Gauss-Bonnet theory suggested by eq. 69 in \cite{Gallo}, does not take into account the possibility of different features of the eikonal regime of gravitational perturbations, so that their arguments hold only for test fields. Therefore, the borders of such an association between the two phenomena must be clearly stated. Here we have learnt that although the association of the null geodesics with eikonal quasinormal modes exists in some cases, the range of its applicability is considerably constrained. Namely, the correspondence can be guaranteed for any stationary, spherically symmetric, asymptotically flat black holes only provided the two following conditions are fulfilled: \begin{itemize} \item Perturbations are described by a ``good'' (from the WKB point of view developed in \cite{Schutz:1985zz}) effective potential, i.e. a potential barrier with a single extremum, implying two turning points, and decaying at the event horizon and at infinity. \item One is limited to perturbations of test fields only, and not of the gravitational field itself or other fields which are non-minimally coupled to gravity. \end{itemize} In principle, the first condition must be satisfied for a test field in the background of a black hole with a well-defined horizon, once $f(r)$ is positive everywhere outside the event horizon, so that the second condition alone is sufficient. This may not be true for more exotic objects, such as wormholes, naked singularities etc.
Rather unexpectedly, we have found that the damping rate of all three types of gravitational perturbations differs from that of a test scalar field (and consequently from the one predicted by the Lyapunov exponent) only at the second and higher orders of the Gauss-Bonnet coupling $\alpha$. This certainly cannot be interpreted in favour of the correspondence, first, because it concerns only the imaginary part of $\omega$, and, second, because the relatively small difference in $Im(\omega)$ at small $\alpha$ simply means that the damping rate is less sensitive to small curvature corrections than $Re(\omega)$. It would be interesting to find analytical expressions for eikonal quasinormal modes in terms of black-hole parameters in the most general case of the Lovelock theory, in a similar way to what was done here for the Gauss-Bonnet black hole. However, as in the general case even the metric coefficients cannot be easily written in explicit form, the final expressions may appear to be too involved. Notice that perturbations of a black hole in nonlinear electrodynamics also show non-standard behavior in the eikonal regime \cite{Chaverra:2016ttw}, so that it would be reasonable to check whether the correspondence works in this case. At the same time, for example, when analyzing test fields in the conformal gravity \cite{Toshmatov:2017bpx}, the correspondence is fulfilled according to our conclusions above. \acknowledgments{R. K. was supported by ``Project for fostering collaboration in science, research and education'' funded by the Moravian-Silesian Region, Czech Republic and by the Research Centre for Theoretical Physics and Astrophysics, Faculty of Philosophy and Science of Silesian University at Opava. Z. S. acknowledges the Albert Einstein Centre for Gravitation and Astrophysics supported under the Czech Science Foundation (Grant No. 14-37086G)}
\section{Introduction} Over 400 compact roundish objects, presenting diffuse emission, were identified at $24\mic{m}$ from visual inspection of the MIPSGAL Legacy Survey (\citealt{Carey2009}; \citealt{Mizuno2010}) mosaic images, obtained with MIPS\footnote{The Multiband Imaging Photometer for Spitzer.} \citep{Rieke2004} on board the \textit{Spitzer Space Telescope}. These small ($\leq1'$) rings, disks or shells (hereafter denoted as ``bubbles'') are pervasive throughout the entire Galactic plane in the mid-infrared (IR). Their distribution is approximately uniform in Galactic latitude and longitude, and the average density is found to be around 1.5 bubbles per square degree. A further analysis of the GLIMPSE\footnote{The Galactic Legacy IR Mid-Plane Survey Extraordinaire, conducted with the InfraRed Array Camera (IRAC) on board the Spitzer Space Telescope.} ($3.6\mic{m}$ to $8.0\mic{m}$) and MIPSGAL ($70\mic{m}$) images indicates that the bubbles are mostly detected at $24\mic{m}$ only. The absence, for most of these objects, of a counterpart at wavelengths shorter than $24\mic{m}$ could be interpreted either as a sign of extreme extinction, which would explain the non-detection of these objects in previous visible or near-IR surveys, or as an intrinsic property of the objects. The main hypothesis about the nature of the bubbles is that they are different types of evolved stars (planetary nebulae, supernova remnants, Wolf--Rayet stars, asymptotic giant branch stars, etc.). Some bubbles present a central source in the middle of the nebula in the MIPSGAL images. Studies by \citet{Wachter2010} show how this central source is usually well detected at shorter wavelengths (down to 2MASS \textit{J} band or even optical for exceptional cases). In particular, the authors spectroscopically examined 62 bright sources surrounded by a 24-$\umu$m shell, being able to characterize the nature of 45 central sources.
They found that 19 of them are compatible with Oe/WN, Wolf--Rayet (WR) and luminous blue variable (LBV) stars. Furthermore, they also pointed out that it is possible to explain why many bubbles emit only at $24\mic{m}$ by assuming that this emission is not a continuum from warm dust but arises from an intense \mbox{[O\,{\sc iv}]} line emission at $25.89\mic{m}$, as found by \citet{Morris2006}, resulting in an almost pure gas nebula. The presence of very massive stars can also be inferred from the morphology of the nebula. \citet{Gvaramadze2010} found that many bubbles, showing central sources, resemble known nebulae surrounding blue supergiant (BSG), LBV, or WR stars. They confirmed the nature of some bubbles, inferred by a morphological analysis, by means of spectroscopic identification of their central sources, showing that the mere presence and shape of the nebula can point to the presence of such massive stars. Mid-IR spectroscopic observations with IRS\footnote{The IR Spectrograph on board the Spitzer Space Telescope.} were carried out for 14 bubbles, 4 in high-resolution mode \citep{Flagey2011} and 10 in low-resolution (Nowak et al., \textit{in prep.}). Among the 4 bubbles observed in high-resolution mode, two show a dust-poor spectrum dominated by highly ionized gas lines of [O \textsc{iv}], [Ne \textsc{iii}], [Ne \textsc{v}], [S \textsc{iii}], and [S \textsc{iv}], typical of planetary nebulae with a very hot central white dwarf ($\gtrsim 200\ 000\um{K}$). The other two spectra are dominated by a dust continuum and lower-excitation lines. These two bubbles also show a central source and are, respectively, a nebula surrounding a WR star \citep{Stringfellow2012} and a LBV candidate \citep{Wachter2010}. An extensive search of available catalogues has allowed us to identify less than 15 per cent of these objects. The majority of the already known bubbles were found to be planetary nebulae (PNe).
Three supernova remnants (SNRs) and one post-asymptotic giant branch (AGB) star were also identified. Therefore, about 90 per cent of the objects within the MIPSGAL bubbles were new discoveries. Further studies on the bubble catalogue allowed us to extend the number of classified bubbles to, presently, about 30 per cent. Massive stars play a pivotal role in the evolution of their host galaxies. They are among the major contributors to the interstellar ultraviolet radiation and, via their strong stellar winds and final explosion, provide enrichment of processed material (gas and dust) and mechanical energy to the interstellar medium, strongly influencing subsequent local star formation. Still, the details of post-main sequence (MS) evolution of massive stars are poorly understood. On one side, theoretical modelling depends on mass-loss from the stars, which in turn is a function of poorly constrained parameters such as metallicity and rotation (e.g. \citealt{Leitherer1991}; \citealt{Chieffi2013}). On the other side, empirical studies have relied on a small number of objects at different stages of post-MS evolution \citep{Clark2005}, and only recently IR observations have permitted the discovery of hundreds of new WR and LBV stars (e.g. \citealt{Shara2009}; \citealt{Shara2012}; \citealt{Wachter2010}; \citealt{Wachter2011}; \citealt{Mauerhan2011}; \citealt{Stringfellow2012}; \citealt{Stringfellow2012b}). Besides being a powerful `game reserve' for evolved massive stars, the mid-IR bubbles catalogue is the right place to look for the missing Galactic population of embedded PNe. The small number of known PNe (i.e. $\sim2000$) compared to those expected to populate the Galaxy disk ($\sim23\,000$, \citealt{Zijlstra1991}) is usually explained in terms of strong interstellar extinction in the Galactic plane. Part of the missing PNe population is thought to consist of rarer objects embedded in thick circumstellar envelopes.
Such heavily obscured PNe may descend from the most massive AGB stars and they might be the key objects for understanding the late evolution of the most massive ($M\geq2M_\odot$) PNe progenitors. In this paper we present results of a radio continuum study of a sub-sample of the bubbles aimed at understanding their nature. Spectral information, as derived from multi-frequency radio observations, is a unique tool for a first assessment of the content of non-thermal and thermal radio emitters in our sample, sorting out SNRs or, more generally, shocked nebulae (synchrotron emission), from nebulae associated with evolved massive stars (LBV and WR) and PNe (thermal free-free emission). It is eventually shown how radio and IR observations can be combined to establish more exhaustive classification schemes. \section{Observations and data reduction} \subsection{Sample selection} From the original sample of the MIPSGAL bubbles only sources with $\delta\geq-40^\circ$ (to be visible with the EVLA\footnote{The Expanded Very Large Array.}) were selected, resulting in a total `northern sample' of 367 sources. We then checked a $1'\times1'$ field centred on each of the MIPSGAL positions in both the NVSS\footnote{http://www.cv.nrao.edu/nvss/NVSSlist.shtml} catalogue and in MAGPIS\footnote{http://third.ucllnl.org/gps/catalogs.html} for radio emission, ending up with a total of 55 sources possibly detected at $20\um{cm}$. Despite the fact that, for our targeted sources, either NVSS or MAGPIS (or both) data already exist, these cannot be used for the purpose of identifying the $24\mic{m}$ MIPSGAL bubbles.
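The thermal/non-thermal discrimination rests on the two-point spectral index between the two observing bands: for $S_\nu\propto\nu^{\alpha}$, optically thin free--free emission gives $\alpha\simeq-0.1$, while synchrotron-dominated shells typically show $\alpha\lesssim-0.5$. A minimal sketch, with purely illustrative flux values (not measurements from this work):

```python
# Two-point spectral index alpha (S ~ nu^alpha) between the L band
# (1.4675 GHz) and the C band (4.959 GHz) used in this study. The flux
# densities passed below are invented, illustrative numbers only.
import math

def spectral_index(s_l, s_c, nu_l=1.4675e9, nu_c=4.959e9):
    """Spectral index from the L- and C-band flux densities (Jy)."""
    return math.log(s_c/s_l)/math.log(nu_c/nu_l)

# optically thin free-free: alpha ~ -0.1; synchrotron shell: alpha ~ -0.5
print(spectral_index(10.0e-3, 8.9e-3))   # thermal candidate
print(spectral_index(10.0e-3, 5.4e-3))   # non-thermal candidate
```

In practice the uncertainty on $\alpha$ propagates from the flux errors in both bands, so sources near the dividing value remain ambiguous without additional (e.g. IR) information.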
In fact, the existing NVSS/MAGPIS data suffer from three main issues: the available data were obtained with a typical rms of $0.3-0.45\um{mJy/beam}$, on average one order of magnitude worse than what is achievable with the EVLA; the existing observations were taken at a different time with respect to ours and time variability effects could potentially affect the spectral index analysis; the combination of VLA and EVLA data can be, in principle, very problematic from a technical point of view. The available NVSS and MAGPIS data nevertheless provided very useful indications regarding the size and flux of our selected sample of sources, and this information was used to guide our observing strategy in terms of configuration and time request. Remarkably, 11 of the objects selected for EVLA observations were already classified, according to the SIMBAD\footnote{http://simbad.u-strasbg.fr} database. \subsection{Observing strategy} Observations of the bubbles sample were made with the EVLA at $6\um{cm}$ (central frequency $4.959\um{GHz}$ -- $C$ band) in configuration D during March 2010 and at $20\um{cm}$ ($1.4675\um{GHz}$ -- $L$ band) in configurations C and CnB during, respectively, March and May 2012. For $C$-band observations the sample was split into four subsets, observed on four different days. Each bubble was observed for slightly less than 10 minutes and in two 128-MHz wide spectral windows (resulting therefore in a total bandwidth of $256\um{MHz}$), allowing us to achieve a theoretical noise level of $\sim\!10\mic{Jy/beam}$. We note that calibration errors and required flagging introduce further sources of noise that eventually dominate over the theoretical thermal noise. For observations in $L$ band, the previous 6-cm observations were used to select a sub-sample on which to focus our attention.
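The quoted theoretical noise can be reproduced, to order of magnitude, with the standard radiometer equation. In the sketch below the per-antenna SEFD ($\sim$310 Jy at $C$ band) and $N=27$ antennas are assumed values, not taken from the text:

```python
# Order-of-magnitude check of the ~10 uJy/beam theoretical noise quoted for
# ~10 min on source with 256 MHz of bandwidth. SEFD ~ 310 Jy per antenna at
# C band and N = 27 antennas are assumptions for this sketch.
import math

def image_rms(sefd_jy, n_ant, bw_hz, t_s, n_pol=2, eff=1.0):
    """Point-source thermal noise (Jy/beam) from the radiometer equation."""
    return sefd_jy/(eff*math.sqrt(n_ant*(n_ant - 1)*n_pol*bw_hz*t_s))

rms = image_rms(310.0, 27, 256e6, 600.0)
print(rms*1e6, 'uJy/beam')
```

This gives a few tens of $\umu$Jy/beam, the same order as the quoted $\sim\!10\mic{Jy/beam}$; the exact SEFD and the correlator efficiency move the number by factors of order unity.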
In particular, we selected a sub-sample of 34 bubbles detected or possibly detected at $6\um{cm}$, excluding some bubbles that appeared too extended at $6\um{cm}$ or whose classification was certain. The larger field-of-view at $20\um{cm}$ allowed us to include 6 further bubbles as field sources, resulting in a total sample of 40 bubbles. Though the total bandwidth was as wide as $1\um{GHz}$, a large amount of radio-frequency interference (RFI) contaminated our data and the signal-to-noise ratio was much lower than we expected, sometimes by one order of magnitude or more. Since observations were made toward the Galactic plane, it was also necessary to check the confusion limit. At $6\um{cm}$ we expected a value around $7\mic{Jy/beam}$, while at $20\um{cm}$ slightly less than $20\mic{Jy/beam}$. Both limits were well below our expected noise levels. In Table \ref{tab:obsVLA} all observed objects are reported along with their coordinates and the date and duration of each observation. Besides the official designation (MGE$l\!\pm\!b$), for each bubble we list a shorter identification name (second column), derived from shorthands used during the identification phase, which will be used in this work as a compact notation. \begin{table*} \caption{Observations summary. Source dimensions at $24\mic{m}$ are from the bubbles catalogue \citep{Mizuno2010}.} \begin{minipage}{\textwidth} \begin{center} \begin{tabular}{cccccccccc} \hline Designation & Bubble & RA & DEC & Obs. day & Obs. time & Obs. day & Obs.
time & Dimension & Classified\\ {[MGE]} & ID & (J2000) & (J2000) & (2010) & (min) & (2012) & (min) & at $24\mic{m}$ & in SIMBAD?\\ & & & & $C$ band & $C$ band & $L$ band & $L$ band\\ \hline 010.5569+00.0188 & 3153 & $\HAng{18}{08}{50.5}$ & $-\DAng{19}{47}{39}$ & 13--Mar & 12 & -- & -- & $\phantom{0}25''$\\ 013.5944+00.2139 & 3173 & $\HAng{18}{14}{17.1}$ & $-\DAng{17}{02}{16}$ & 14--Mar & 12 & -- & -- & $\phantom{0}24''$\\ 014.1176+00.0816 & 3177 & $\HAng{18}{15}{48.9}$ & $-\DAng{16}{38}{27}$ & 14--Mar & 12 & 06--Mar\footnote{Observed as field source.} & 10 & $\phantom{0}15''$\\ 016.1871+00.1202 & 3188 & $\HAng{18}{19}{45.1}$ & $-\DAng{14}{48}{02}$ & 14--Mar & 10 & 06--Mar & 10 & $\phantom{0}16''$\\ 015.9774+00.2955 & 3192 & $\HAng{18}{18}{42.2}$ & $-\DAng{14}{54}{09}$ & 14--Mar & 12 & 06--Mar$^{\textstyle a}$ & 10 & $\phantom{0}18''$\\ 016.1274+00.3327 & 3193 & $\HAng{18}{18}{51.7}$ & $-\DAng{14}{45}{10}$ & 14--Mar & 12 & 06--Mar & 10 & $\phantom{0}18''$\\ 019.6492+00.7740 & 3214 & $\HAng{18}{24}{04.0}$ & $-\DAng{11}{26}{16}$ & 14--Mar & 10 & -- & -- & $\phantom{0}18''$ & PN\footnote{[1] \citealt{Miszalski2008}; [2] \citealt{Green2009}; [3] \citealt{Kerber2003}; [4] \citealt{Chevalier2005}; [5] \citealt{Parker2006}; [6] \citealt{Kohoutek2001}.}$^{\textstyle [1]}$\\ 030.1503+00.1237 & 3222 & $\HAng{18}{45}{55.2}$ & $-\DAng{02}{25}{08}$ & 14--Mar & 10 & 06--Mar & 10 & $\phantom{0}26''$\\ 023.4499+00.0820 & 3259 & $\HAng{18}{33}{43.3}$ & $-\DAng{08}{23}{35}$ & 14--Mar & 10 & -- & -- & $\phantom{0}25''$\\ 023.6857+00.2226 & 3269 & $\HAng{18}{33}{39.5}$ & $-\DAng{08}{07}{08}$ & 13--Mar & 10 & -- & -- & $\phantom{0}44''$\\ 026.4700+00.0209 & 3282 & $\HAng{18}{39}{32.2}$ & $-\DAng{05}{44}{20}$ & 14--Mar & 10 & -- & -- & $\phantom{0}80''$\\ 027.5373+00.5473 & 3309 & $\HAng{18}{39}{37.4}$ & $-\DAng{04}{32}{56}$ & 14--Mar & 10 & 06--Mar & \phantom{0}9 & $\phantom{0}15''$\\ 027.3891--00.0079 & 3310 & $\HAng{18}{41}{19.9}$ & $-\DAng{04}{56}{06}$ & 13--Mar & \phantom{1}5 & 
06--Mar$^{\textstyle a}$ & \phantom{0}9 & $250''$ & SNR$^{\textstyle b[2]}$\\ 028.4451+00.3094 & 3313 & $\HAng{18}{42}{08.2}$ & $-\DAng{03}{51}{03}$ & 14--Mar & 10 & 06--Mar & \phantom{0}9 & $\phantom{0}80''$\\ 029.0784+00.4545 & 3328 & $\HAng{18}{42}{46.8}$ & $-\DAng{03}{13}{17}$ & 14--Mar & 10 & 06--Mar$^{\textstyle a}$ & \phantom{0}9 & $\phantom{0}28''$ & PN$^{\textstyle b[3]}$\\ 028.7440+00.7076 & 3333 & $\HAng{18}{41}{16.0}$ & $-\DAng{03}{24}{11}$ & 14--Mar & 10 & 06--Mar & 10 & $\phantom{0}23''$\\ 030.8780+00.6993 & 3347 & $\HAng{18}{45}{12.0}$ & $-\DAng{01}{30}{32}$ & 14--Mar & 10 & 06--Mar & 10 & $\phantom{0}18''$\\ 031.7290+00.6993 & 3354 & $\HAng{18}{46}{45.2}$ & $-\DAng{00}{45}{06}$ & 13--Mar & 10 & 06--Mar\footnote{Observed also on 13--May.} & 18 & $\phantom{0}44''$\\ 032.8593+00.2806 & 3362 & $\HAng{18}{50}{18.3}$ & $\phantom{-}\DAng{00}{03}{48}$ & 13--Mar & 10 & 06--Mar & 10 & $\phantom{0}15''$\\ 032.4982+00.1615 & 3367 & $\HAng{18}{50}{04.3}$ & $-\DAng{00}{18}{45}$ & 13--Mar & 10 & 06--Mar$^{\textstyle c}$ & 15 & $\phantom{0}16''$\\ 034.8961+00.3018 & 3384 & $\HAng{18}{53}{56.8}$ & $\phantom{-}\DAng{01}{53}{08}$ & 13--Mar & 10 & 06--Mar$^{\textstyle c}$ & 18 & $\phantom{0}21''$\\ 042.0787+00.5084 & 3438 & $\HAng{19}{06}{24.6}$ & $\phantom{-}\DAng{08}{22}{02}$ & 13--Mar & 10 & 06--Mar$^{\textstyle c}$ & 21 & $\phantom{0}21''$\\ 042.7665+00.8222 & 3448 & $\HAng{19}{06}{33.6}$ & $\phantom{-}\DAng{09}{07}{20}$ & 13--Mar & 10 & 13--May & \phantom{0}9 & $\phantom{0}33''$\\ 065.9141+00.5966 & 3558 & $\HAng{19}{55}{02.4}$ & $\phantom{-}\DAng{29}{17}{20}$ & 13--Mar & 10 & -- & -- & $\phantom{0}33''$ & PN$^{\textstyle b[3]}$\\ 040.3704--00.4750 & 3654 & $\HAng{19}{06}{45.8}$ & $\phantom{-}\DAng{06}{23}{53}$ & 13--Mar & 10 & 13--May & \phantom{0}4 & $\phantom{0}27''$ & PN$^{\textstyle b[3]}$\\ 031.9075--00.3087 & 3706 & $\HAng{18}{50}{40.1}$ & $-\DAng{01}{03}{09}$ & 13--Mar & \phantom{1}6 & 13--May & \phantom{0}4 & $\phantom{0}25''$ & PN$^{\textstyle b[3]}$\\ 
029.4034--00.4496 & 3724 & $\HAng{18}{46}{35.9}$ & $-\DAng{03}{20}{43}$ & 14--Mar & 10 & 06--Mar & 10 & $\phantom{0}16''$\\ 027.3839--00.3031 & 3736 & $\HAng{18}{42}{22.5}$ & $-\DAng{05}{04}{29}$ & 14--Mar & 10 & 06--Mar & 10 & $\phantom{0}37''$\\ 016.2280--00.3680 & 3866 & $\HAng{18}{21}{36.9}$ & $-\DAng{14}{59}{41}$ & 14--Mar & 10 & 06--Mar & \phantom{0}5 & $\phantom{0}18''$\\ 011.1805--00.3471 & 3910 & $\HAng{18}{11}{28.9}$ & $-\DAng{19}{25}{29}$ & 13--Mar & \phantom{1}6 & -- & -- & $260''$ & SNR$^{\textstyle b[4]}$\\ 010.6846--00.6280 & 3915 & $\HAng{18}{11}{30.8}$ & $-\DAng{19}{59}{41}$ & 13--Mar & 12 & -- & -- & $\phantom{0}15''$\\ 001.0178--01.9642 & 4409 & $\HAng{17}{55}{43.1}$ & $-\DAng{29}{04}{04}$ & 27--Mar & 10 & 08--May$^{\textstyle a}$ & 10 & $\phantom{0}28''$ & PN$^{\textstyle b[5]}$\\ 003.5533--02.4421 & 4422 & $\HAng{18}{03}{18.4}$ & $-\DAng{27}{06}{22}$ & 13--Mar & 12 & -- & -- & $\phantom{0}30''$ & PN$^{\textstyle b[3]}$\\ 356.7168--01.7246 & 4436 & $\HAng{17}{44}{29.6}$ & $-\DAng{32}{38}{11}$ & 27--Mar & 10 & 08--May & 10 & $\phantom{0}18''$\\ 359.5381--01.0838 & 4443 & $\HAng{17}{48}{46.6}$ & $-\DAng{29}{53}{34}$ & 21--Mar & 10 & -- & -- & $\phantom{0}23''$\\ 001.2920--01.4680 & 4452 & $\HAng{17}{54}{23.6}$ & $-\DAng{28}{34}{51}$ & 21--Mar & 10 & 08--May & 10 & $\phantom{0}15''$\\ 002.0599--01.0642 & 4463 & $\HAng{17}{54}{34.4}$ & $-\DAng{27}{42}{51}$ & 27--Mar & 10 & 08--May & 10 & $\phantom{0}27''$\\ 002.2128--01.6131 & 4465 & $\HAng{17}{57}{03.9}$ & $-\DAng{27}{51}{30}$ & 27--Mar & 10 & 08--May & 10 & $\phantom{0}18''$\\ 003.4305--01.0738 & 4467 & $\HAng{17}{57}{42.5}$ & $-\DAng{26}{32}{05}$ & 13--Mar & 12 & -- & -- & $\phantom{0}19''$\\ 005.6102--01.1516 & 4473 & $\HAng{18}{02}{48.4}$ & $-\DAng{24}{40}{54}$ & 13--Mar & 12 & 08--May & 10 & $\phantom{0}20''$\\ 009.4257--01.2294 & 4479 & $\HAng{18}{11}{10.6}$ & $-\DAng{21}{23}{15}$ & 13--Mar & 12 & -- & -- & $\phantom{0}20''$ & PN$^{\textstyle b[5]}$\\ 351.2381--00.0145 & 4485 & 
$\HAng{17}{23}{04.4}$ & $-\DAng{36}{18}{20}$ & 21--Mar & 10 & 08--May & 10 & $\phantom{0}40''$\\ 352.3117--00.9711 & 4486 & $\HAng{17}{29}{58.3}$ & $-\DAng{35}{56}{56}$ & 21--Mar & 10 & 08--May & 10 & $\phantom{0}18''$\\ 356.8155--00.3843 & 4497 & $\HAng{17}{39}{21.3}$ & $-\DAng{31}{50}{44}$ & 27--Mar & 10 & 08--May & 10 & $\phantom{0}15''$\\ 006.5850--00.0135 & 4530 & $\HAng{18}{00}{35.2}$ & $-\DAng{23}{16}{18}$ & 21--Mar & \phantom{1}5 & 08--May$^{\textstyle a}$ & 10 & $\phantom{0}12''$\\ 349.7294+00.1747 & 4534 & $\HAng{17}{17}{59.3}$ & $-\DAng{37}{26}{09}$ & 21--Mar & \phantom{1}5 & -- & -- & $\phantom{0}68''$\\ 356.1447+00.0550 & 4552 & $\HAng{17}{35}{54.5}$ & $-\DAng{32}{10}{35}$ & 27--Mar & 10 & 08--May & 10 & $\phantom{0}25''$\\ 355.7638+00.1424 & 4555 & $\HAng{17}{34}{35.2}$ & $-\DAng{32}{26}{58}$ & 27--Mar & 10 & -- & -- & $\phantom{0}16''$\\ 001.5280+00.9171 & 4580 & $\HAng{17}{45}{40.7}$ & $-\DAng{27}{09}{15}$ & 21--Mar & 10 & 08--May & 10 & $\phantom{0}18''$\\ 001.9965+00.1976 & 4583 & $\HAng{17}{49}{32.1}$ & $-\DAng{27}{07}{32}$ & 21--Mar & 10 & 08--May & 10 & $\phantom{0}12''$\\ 001.6982+00.1362 & 4584 & $\HAng{17}{49}{04.9}$ & $-\DAng{27}{24}{47}$ & 21--Mar & 10 & 08--May & 10 & $\phantom{0}18''$\\ 005.2641+00.3775 & 4589 & $\HAng{17}{56}{13.4}$ & $-\DAng{24}{13}{13}$ & 27--Mar & 10 & 08--May & 10 & $\phantom{0}16''$\\ 006.9367+00.0497 & 4595 & $\HAng{18}{01}{06.4}$ & $-\DAng{22}{56}{05}$ & 21--Mar & 10 & 08--May & 10 & $\phantom{0}20''$\\ 009.3523+00.4733 & 4602 & $\HAng{18}{04}{38.9}$ & $-\DAng{20}{37}{27}$ & 13--Mar & 12 & 08--May & 10 & $\phantom{0}28''$ & PN?$^{\textstyle b[6]}$\\ 008.9409+00.2532 & 4607 & $\HAng{18}{04}{36.3}$ & $-\DAng{21}{05}{26}$ & 13--Mar & 12 & 08--May & 10 & $\phantom{0}18''$\\ \hline \end{tabular} \label{tab:obsVLA} \end{center} \end{minipage} \end{table*} \subsection{Data reduction} The entire data reduction process was performed using the package \textsc{casa}. 
As a first step, the data were edited and flagged in order to identify and delete malfunctioning antennas, bad baselines and border (and usually noisy) channels. For the $C$-band observations the editing process revealed no great corruption in our data, while for the $L$-band observations a large amount of flagging was needed in order to filter out the conspicuous RFI, leaving less than 33 per cent of the data usable. For all the observations, the bandpass and flux calibrations were done using 3C286 as calibrator. In order to improve the quality of our gain calibration, depending on the distance from the source (typically within $10^\circ$), we used a variety of standard calibrators spanning a range of flux densities. \subsection{Imaging} \label{sec:ima} Data imaging was performed using the Clark implementation \citep{Clark1980} of the CLEAN algorithm \citep{Hogbom1974}, convolving the resulting `clean components' with a Gaussian PSF. For $C$ band, since all observations were carried out with the EVLA in the same configuration (D), no significant differences were found in the synthesized beam sizes. Therefore all the images were built using a $4''$ pixel and a total size of $256\times256$ pixels, in such a way that each map covers approximately a $17'\times17'$ area (the primary beam is about $9'$ FWHM). In some maps we were able to clean down to $\mathrm{rms}\sim30\mic{Jy/beam}$, with an average beam size around $25''\times15''$. The typical noise was one order of magnitude greater than the confusion limit. For $L$ band instead, since multiple configurations were used, for each image the best choice between a pixel size of $3''$ or $4''$ was adopted. Also, the size of the images was allowed to vary so as to best accommodate the cleaning of field sources. The typical rms was about $0.5-1\um{mJy/beam}$ with an average beam size of $18''\times12''$. The typical noise was two orders of magnitude greater than the confusion limit.
In $C$ band we expected that only sources with dimensions significantly less than $2'$ (the EVLA largest angular scale) could be reasonably well imaged, which would also permit a total flux-density recovery. The more extended a source is, the less reliable is its flux-density measurement. Therefore, at the end of the imaging process, we cautiously excluded eight bubbles (namely 3259, 3282, 3310, 3328, 3558, 3910, 4485 and 4595) from the remainder of this work, since they were suspected to be resolved out by the EVLA. A single-dish analysis for these bubbles is in progress. Radio maps and 24-$\umu$m images of some bubbles are presented in appendix A (online only). \section{Spectral index analysis} \label{sec:detsrc} \subsection{Detections and flux densities calculation} \label{sec:det} The majority of the bubbles observed were detected in both bands. In particular, in $C$ band we detected 44 bubbles out of 55, with 3 uncertain detections and 8 non-detections. In $L$ band we detected 23 bubbles out of 40, with 3 uncertain detections and 14 non-detections. Since one of the main goals of this work was to characterise the radio emission of the bubbles as an important aid to their classification, a very accurate flux-density determination was needed. To avoid introducing methodological errors or biases, a single uniform procedure was adopted for this calculation. First of all, the sources were divided into two classes depending on whether they were resolved or not. For point (unresolved) sources the flux density was determined using the \textsc{casa} task \texttt{imfit}, which fits an elliptical Gaussian component to an image. Given that the map units are jansky/beam, the total flux density of a point source is equal to the peak value of the fitted Gaussian, i.e. $S=S_p$.
The error was computed as the quadratic sum of the error derived from the fit, the map rms and the calibration error (the latter negligible in both bands): \begin{equation} \Delta S=\sqrt{\sigma_\mathrm{fit}^2+\sigma_\mathrm{rms}^2+\sigma_\mathrm{cal}^2}. \end{equation} The flux-density calculation for extended sources proved much more difficult. For extended sources detected or resolved in one band only, the strategy was to localise the source boundary as the lowest brightness level at which we were confident to encompass only our object. Theoretically, one should go down to $\sigma_\mathrm{rms}$, below which the source becomes indistinct from the background. However, the artefacts in interferometric images usually do not allow one to go so deep and, for many bubbles, we were forced to stop at higher levels. Having then selected an appropriate region for each object, the flux density was calculated by means of an integration over this area, performed directly with the \textsc{casa} \texttt{viewer}. The total error was estimated as the map rms multiplied by the square root of the integration area expressed in beams. For sources resolved in both bands we proceeded as follows. First, the map with the higher angular resolution was degraded by convolving the clean components with the lower-resolution beam and adding back the residual map. Then, for each bubble, we selected a region large enough to cover the source in both bands, and used this to estimate the flux and corresponding error as in the previous case. Furthermore, an approximate size for resolved bubbles was calculated as follows: the observed size of the source, $\Omega_o$, is expressed as \begin{equation} \Omega_o=\Omega_s+\Omega_b \end{equation} where $\Omega_s$ is the `real' angular size of the source and $\Omega_b$ is the beam solid angle.
The quantity $\Omega_o$ can also be expressed in terms of the number of beams, $N_b$, a quantity already computed for the determination of the flux densities: \begin{equation} \Omega_o=N_b\Omega_b, \end{equation} hence \begin{equation} \Omega_s=(N_b-1)\Omega_b \end{equation} and we calculated the corresponding mean size as \begin{equation} \langle\theta_s\rangle=\sqrt{b_\mathrm{maj}b_\mathrm{min}(N_b-1)}, \end{equation} where $b_\mathrm{maj}$ and $b_\mathrm{min}$ are, respectively, the beam major and minor axes. The results obtained are listed, along with some useful characteristics of each map, in Table \ref{tab:fluxC} for $C$ band and in Table \ref{tab:fluxL} for $L$ band. \begin{table*} \caption{Flux densities at $6\um{cm}$. Among the 44 bubbles detected at this frequency 8 are, likely, resolved out (see section \ref{sec:ima}) and for one (Bubble 3173) the flux density measurement is not reliable. Therefore only 35 bubbles are listed.} \begin{tabular}{cccccccc} \hline Bubble & Map rms & Beam & PA & Flux density & Resolved?
& $\langle\theta_s\rangle$ & Notes\\ & (mJy/beam) & & & (mJy)\\\hline 3188 & 0.24 & $23.6''\times15.8''$ & $\phantom{0}{-5}^\circ$ & $\phantom{0}1.0\pm0.3$ & no?\\ 3192 & 0.53 & $26.7''\times15.5''$ & $-34^\circ$ & $\phantom{0}1.2\pm0.6$ & no?\\ 3193 & 0.61 & $22.5''\times15.0''$ & $-18^\circ$ & $\phantom{0}1.4\pm0.6$ & no\\ 3214 & 0.11 & $22.4''\times16.0''$ & $\phantom{-0}0^\circ$ & $\phantom{0}4.0\pm0.2$ & no\\ 3222 & 0.82 & $21.3''\times14.4''$ & $\phantom{-}41^\circ$ & $22.9\pm1.5$ & no\\ 3309 & 0.21 & $19.8''\times15.5''$ & $\phantom{-}14^\circ$ & $\phantom{0}3.4\pm0.4$ & yes & $26''$\\%b03.2 3313 & 0.30 & $19.6''\times15.1''$ & $\phantom{-}24^\circ$ & $\phantom{0}5.1\pm0.7$ & yes & $34''$\\%b04.8 3333 & 0.15 & $19.3''\times15.1''$ & $\phantom{-}22^\circ$ & $\phantom{0}6.4\pm0.3$ & yes & $24''$\\%b03.0 3347 & 0.14 & $20.2''\times14.1''$ & $\phantom{-}40^\circ$ & $\phantom{0}1.5\pm0.3$ & no\\ 3354 & 0.04 & $21.1''\times18.5''$ & $\phantom{-}51^\circ$ & $12.3\pm0.1$ & yes & $23''$\\%b01.9 3362 & 2.35 & $21.8''\times15.6''$ & $\phantom{-}34^\circ$ & $12.1\pm2.5$ & no\\ 3367 & 0.79 & $21.3''\times15.4''$ & $\phantom{-}31^\circ$ & $\phantom{0}4.7\pm0.9$ & no\\ 3384 & 0.16 & $27.2''\times21.4''$ & $\phantom{0}{-2}^\circ$ & $17.8\pm0.4$ & yes & $54''$ & \textit{Self-calibrated}\\%b06.1 3438 & 0.09 & $19.8''\times16.1''$ & $\phantom{-}46^\circ$ & $10.5\pm0.1$ & no?\\ 3448 & 0.13 & $20.6''\times16.1''$ & $\phantom{-}51^\circ$ & $12.7\pm0.4$ & no\\ 3654 & 0.18 & $22.8''\times16.2''$ & $\phantom{-}52^\circ$ & $59.7\pm0.5$ & yes & $38''$\\%b04.4 3706 & 0.40 & $23.1''\times15.7''$ & $\phantom{-}37^\circ$ & $19.6\pm0.8$ & yes & $22''$\\%b01.8 3724 & 0.18 & $19.9''\times14.0''$ & $\phantom{-}36^\circ$ & $\phantom{0}3.2\pm0.4$ & yes & $31''$\\%b04.4 3736 & 0.14 & $20.2''\times14.9''$ & $\phantom{-}27^\circ$ & $18.1\pm0.5$ & yes & $56''$\\%b11.6 3866 & 0.30 & $24.0''\times16.0''$ & $\phantom{0}{-1}^\circ$ & $10.3\pm0.6$ & no\\ 4409 & 0.03 & $68.2''\times12.7''$ & $-25^\circ$ 
& $\phantom{0}7.3\pm0.1$ & no\\ 4422 & 0.06 & $35.1''\times13.5''$ & $\phantom{-}15^\circ$ & $40.3\pm0.2$ & no\\ 4436 & 0.06 & $74.4''\times11.3''$ & $-15^\circ$ & $\phantom{0}5.8\pm0.1$ & no\\ 4452 & 0.18 & $39.6''\times13.9''$ & $-30^\circ$ & $\phantom{0}2.1\pm0.4$ & yes? & $43''$\\%b04.4 4465 & 0.04 & $52.3''\times12.2''$ & $-13^\circ$ & $\phantom{0}1.6\pm0.1$ & no\\ 4473 & 0.52 & $31.8''\times13.4''$ & $-12^\circ$ & $38.1\pm0.8$ & no\\ 4479 & 0.08 & $31.2''\times13.5''$ & $\phantom{-}23^\circ$ & $16.0\pm0.2$ & no\\ 4486 & 0.58 & $43.9''\times14.2''$ & $-16^\circ$ & $15.5\pm0.9$ & no\\ 4497 & 0.18 & $63.8''\times11.7''$ & $-16^\circ$ & $15.1\pm0.4$ & no\\ 4552 & 0.48 & $66.0''\times11.7''$ & $-18^\circ$ & $15.0\pm0.7$ & no\\ 4580 & 0.35 & $46.5''\times14.2''$ & $-37^\circ$ & $\phantom{0}2.0\pm0.4$ & no\\ 4584 & 0.32 & $41.0''\times14.5''$ & $-33^\circ$ & $\phantom{0}2.1\pm0.5$ & yes? & $28''$\\%b02.3 4589 & 0.08 & $47.4''\times12.3''$ & $-15^\circ$ & $\phantom{0}9.0\pm0.2$ & no\\ 4602 & 0.25 & $30.1''\times13.8''$ & $\phantom{-}18^\circ$ & $17.7\pm0.7$ & no\\ 4607 & 0.26 & $29.6''\times13.7''$ & $\phantom{-}16^\circ$ & $\phantom{0}8.2\pm0.4$ & no & & \textit{Self-calibrated}\\ \hline \end{tabular} \label{tab:fluxC} \end{table*} \begin{table*} \caption{Flux densities at $20\um{cm}$. The flux densities (or their upper limits) for 7 bubbles were not reliable and are not listed.} \begin{tabular}{cccccccc} \hline Bubble & Map rms & Beam & PA & Flux density & Resolved? 
& $\langle\theta_s\rangle$ & Notes\\ & (mJy/beam) & & & (mJy)\\\hline 3188 & 1.46 & $25.1''\times14.4''$ & $-23^\circ$ & $<4.5$ & -- & & \textit{Upper limit only}\\ 3192 & 0.74 & $29.1''\times14.1''$ & $-21^\circ$ & $<2.1$ & -- & & \textit{Upper limit only}\\ 3193 & 0.74 & $29.1''\times14.1''$ & $-21^\circ$ & $<2.1$ & -- & & \textit{Upper limit only}\\ 3222 & 2.26 & $18.4''\times14.4''$ & $-17^\circ$ & $21.5\pm3.1$ & no\\ 3309 & 0.88 & $19.0''\times14.2''$ & $-12^\circ$ & $<5.2$ & -- & & \textit{Upper limit only}\\ 3313 & 0.98 & $18.7''\times14.2''$ & $-10^\circ$ & $<6.9$ & -- & & \textit{Upper limit only}\\ 3328 & 0.87 & $20.1''\times14.5''$ & $-27^\circ$ & $11.3\pm4.4$ & yes & $85''$ & \textit{Resolved-out at $6\um{cm}$}\\%b25.6 3333 & 0.69 & $20.1''\times14.5''$ & $-27^\circ$ & $\phantom{0}4.0\pm1.2$ & yes & $24''$\\%b03.0 3347 & 1.88 & $18.7''\times14.4''$ & $-21^\circ$ & $<5.7$ & -- & & \textit{Upper limit only}\\ 3354 & 0.58 & $15.5''\times12.0''$ & $-36^\circ$ & $12.0\pm1.1$ & yes & $23''$\\%b03.9 3362 & 1.88 & $17.1''\times13.2''$ & $\phantom{0}{-4}^\circ$ & $<5.7$ & -- & & \textit{Upper limit only}\\ 3367 & 0.92 & $16.1''\times11.9''$ & $-42^\circ$ & $\phantom{0}6.8\pm0.9$ & no\\ 3384 & 0.82 & $15.6''\times12.0''$ & $-38^\circ$ & $\phantom{0}2.1\pm0.8$ & yes & & \textit{Peak intensity}\\ 3438 & 0.39 & $15.0''\times12.0''$ & $-26^\circ$ & $10.2\pm0.8$ & yes & $24''$\\%b04.2 3448 & 0.73 & $14.9''\times10.7''$ & $-77^\circ$ & $12.8\pm1.1$ & yes & $14''$\\%b02.3 3654 & 0.24 & $13.5''\times10.8''$ & $-43^\circ$ & $64.1\pm0.8$ & yes & $38''$\\%b11.1 3706 & 1.93 & $14.3''\times11.6''$ & $-63^\circ$ & $10.8\pm3.7$ & yes & $22''$\\%b03.8 3724 & 1.50 & $18.7''\times14.3''$ & $\phantom{0}{-8}^\circ$ & $<9.6$ & -- & & \textit{Upper limit only}\\ 3736 & 0.64 & $20.8''\times14.6''$ & $-23^\circ$ & $<6.1$ & -- & & \textit{Upper limit only}\\ 3866 & 3.31 & $25.1''\times14.5''$ & $-22^\circ$ & $\phantom{0}9.7\pm3.5$ & no\\ 4409 & 0.46 & $17.3''\times12.0''$ & 
$\phantom{0}{-3}^\circ$ & $<1.5$ & -- & & \textit{Upper limit only}\\ 4436 & 0.33 & $21.0''\times12.2''$ & $-29^\circ$ & $\phantom{0}6.6\pm0.4$ & no\\ 4452 & 0.57 & $17.3''\times12.0''$ & $\phantom{0}{-3}^\circ$ & $<6.1$ & -- & & \textit{Upper limit only}\\ 4465 & 0.55 & $18.4''\times12.1''$ & $-25^\circ$ & $\phantom{0}1.0\pm0.6$ & no\\ 4473 & 3.94 & $16.8''\times12.0''$ & $\phantom{-0}5^\circ$ & $34.7\pm3.9$ & no?\\ 4486 & 0.53 & $24.1''\times12.1''$ & $-30^\circ$ & $20.9\pm1.1$ & yes & $31''$\\%b04.3 4497 & 0.62 & $21.0''\times11.8''$ & $-31^\circ$ & $15.0\pm0.9$ & no\\ 4552 & 0.66 & $23.9''\times11.7''$ & $-36^\circ$ & $15.9\pm1.1$ & yes & $22''$ & \textit{Self-calibrated}\\%b02.8 4580 & 1.00 & $17.5''\times12.2''$ & $-12^\circ$ & $<3.0$ & -- & & \textit{Upper limit only}\\ 4584 & 1.21 & $17.4''\times12.1''$ & $-11^\circ$ & $<9.2$ & -- & & \textit{Upper limit only}\\ 4589 & 0.55 & $17.8''\times11.7''$ & $-29^\circ$ & $\phantom{0}8.9\pm0.6$ & no\\ 4602 & 1.16 & $15.7''\times11.7''$ & $\phantom{-}18^\circ$ & $14.7\pm1.7$ & no\\ 4607 & 0.51 & $15.4''\times11.7''$ & $\phantom{-}15^\circ$ & $\phantom{0}5.9\pm0.6$ & no\\ \hline \end{tabular} \label{tab:fluxL} \end{table*} As mentioned in the previous section, the determination of a spectral index for as many bubbles as possible was critical for this work. Once the flux densities were estimated as described above, the spectral index $\alpha$ is defined as \begin{equation} S_\nu\propto\nu^\alpha, \end{equation} with an associated error given by \begin{equation} \Delta\alpha\simeq\frac{\displaystyle\sqrt{\left(\frac{\Delta S_L}{S_L}\right)^2+\left(\frac{\Delta S_C}{S_C}\right)^2}}{\displaystyle\ln\frac{\nu_C^{\phantom{x}}}{\nu_L}}, \end{equation} where subscripts $C$ and $L$ refer, respectively, to $6\um{cm}$ and $20\um{cm}$ observations. The error on frequencies was neglected. 
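For concreteness, the spectral-index and mean-size formulas above can be collected into a short sketch (a minimal Python example, not the actual reduction code; the two frequencies are the band centres quoted in the observing strategy). Applied to the tabulated fluxes of Bubble 4607 ($S_L=5.9\pm0.6$, $S_C=8.2\pm0.4\um{mJy}$), it reproduces $\alpha=0.27\pm0.09$.

```python
import math

NU_C = 4.959   # GHz, central frequency of the 6-cm (C-band) observations
NU_L = 1.4675  # GHz, central frequency of the 20-cm (L-band) observations

def spectral_index(s_l, ds_l, s_c, ds_c):
    """Spectral index alpha (S_nu proportional to nu**alpha) between the
    two bands, with the error-propagation formula of the text
    (frequency errors neglected)."""
    log_ratio = math.log(NU_C / NU_L)
    alpha = math.log(s_c / s_l) / log_ratio
    dalpha = math.hypot(ds_l / s_l, ds_c / s_c) / log_ratio
    return alpha, dalpha

def mean_size(b_maj, b_min, n_beams):
    """Mean angular size <theta_s> = sqrt(b_maj * b_min * (N_b - 1)),
    with the integration area expressed in beams (N_b);
    beam axes in arcsec."""
    return math.sqrt(b_maj * b_min * (n_beams - 1))

# Bubble 4607: S_L = 5.9 +/- 0.6 mJy, S_C = 8.2 +/- 0.4 mJy (flux tables)
alpha, dalpha = spectral_index(5.9, 0.6, 8.2, 0.4)
print(round(alpha, 2), round(dalpha, 2))  # 0.27 0.09
```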
\subsection{Results} The analysis of the spectral indices, obtained as described above, suggests that many bubbles are free-free emitters, with the majority optically thick at $20\um{cm}$ (see Table \ref{tab:spInPS} and Figure \ref{fig:hist}). Only Bubbles 3367 and 4486 may have spectral index values compatible with non-thermal emission. \begin{table} \caption{Spectral indices for sources detected in both bands.} \begin{tabular}{ccccc}\hline Bubble & Flux density & Flux density & $\alpha$ & Resolved?\\ & at $20\um{cm}$ (mJy) & at $6\um{cm}$ (mJy)\\\hline 3222 & $21.5\pm3.1$ & $22.9\pm1.5$ & $\phantom{-}0.05\pm0.13$ & no\\ 3333 & $\phantom{0}4.0\pm1.2$ & $\phantom{0}6.4\pm0.3$ & $\phantom{-}0.39\pm0.26$ & yes\\ 3354 & $12.0\pm1.1$ & $12.3\pm0.1$ & $\phantom{-}0.02\pm0.08$ & yes\\ 3367 & $\phantom{0}6.8\pm0.9$ & $\phantom{0}4.7\pm0.9$ & $-0.30\pm0.19$ & no\\ 3438 & $10.2\pm0.8$ & $10.5\pm0.1$ & $\phantom{-}0.02\pm0.07$ & yes\\ 3448 & $12.8\pm1.1$ & $12.7\pm0.4$ & $-0.01\pm0.08$ & yes\\ 3654 & $64.1\pm0.8$ & $59.7\pm0.5$ & $-0.06\pm0.01$ & yes\\ 3706 & $10.8\pm3.7$ & $19.6\pm0.8$ & $\phantom{-}0.49\pm0.28$ & yes\\ 3866 & $\phantom{0}9.7\pm3.5$ & $10.3\pm0.6$ & $\phantom{-}0.05\pm0.30$ & no\\ 4436 & $\phantom{0}6.6\pm0.4$ & $\phantom{0}5.8\pm0.1$ & $-0.11\pm0.05$ & no\\ 4465 & $\phantom{0}1.0\pm0.6$ & $\phantom{0}1.6\pm0.1$ & $\phantom{-}0.39\pm0.48$ & no\\ 4473 & $34.7\pm3.9$ & $38.1\pm0.8$ & $\phantom{-}0.08\pm0.09$ & no\\ 4486 & $20.9\pm1.1$ & $15.5\pm0.9$ & $-0.25\pm0.06$ & yes\\ 4497 & $15.0\pm0.9$ & $15.1\pm0.4$ & $\phantom{-}0.01\pm0.06$ & no\\ 4552 & $15.9\pm1.1$ & $15.0\pm0.7$ & $-0.04\pm0.07$ & yes\\ 4589 & $\phantom{0}8.9\pm0.6$ & $\phantom{0}9.0\pm0.2$ & $\phantom{-}0.01\pm0.06$ & no\\ 4602 & $14.7\pm1.7$ & $17.7\pm0.7$ & $\phantom{-}0.15\pm0.10$ & no\\ 4607 & $\phantom{0}5.9\pm0.6$ & $\phantom{0}8.2\pm0.4$ & $\phantom{-}0.27\pm0.09$ & no\\ \hline \end{tabular} \label{tab:spInPS} \end{table} \begin{figure} \includegraphics[width=\columnwidth]{fig1.pdf}
\caption{Spectral index statistical distribution.} \label{fig:hist} \end{figure} Among all the observed objects, two bubbles, 3654 and 3706, were already classified as PNe \citep{Kerber2003}. These two sources appear resolved in both bands in our images. As mentioned in Section \ref{sec:det}, not all the bubbles were detected, especially in $L$ band. It is very likely that the bubbles detected in $C$ band but not in $L$ band are characterised by positive spectral indices and also, due to the higher rms in $L$ band, are simply below the detection limit. It is possible to estimate a minimum spectral index for each non-detected bubble by assuming an upper limit for its flux density as follows: (1) for point sources in $C$ band, the flux-density upper limit at $20\um{cm}$ is simply assumed to be three times the rms of the respective $L$-band map; (2) for extended sources, the size of the source as imaged in $C$ band is expressed as a number of beams of the $L$-band map, and the square root of this number is multiplied by three times the map rms. Assuming pure black-body emission ($\alpha=2$), a minimum flux density at $20\um{cm}$ was also computed so that, for each bubble, it is possible to define a range of possible $L$-band flux density values. In Table \ref{tab:soloC} we provide the results of this estimate. \begin{table} \caption{Bubbles detected only in $C$ band.
For the flux densities in $L$ band a possible range is provided as described in the text.} \begin{tabular}{cccccc}\hline Bubble & \multicolumn{2}{c}{$S(L)$ (mJy)} & $S(C)$ & $\alpha$ & Resolved?\\\cline{2-3} & min & max & (mJy) & &\\\hline 3188 & 0.1 & 4.5 & $\phantom{0}1.0\pm0.3$ & $\gtrsim-1.2$ & no\\ 3192 & 0.1 & 2.1 & $\phantom{0}1.2\pm0.6$ & $\gtrsim-0.4$ & no\\ 3193 & 0.1 & 2.1 & $\phantom{0}1.4\pm0.6$ & $\gtrsim-0.3$ & no\\ 3309 & 0.3 & 5.2 & $\phantom{0}3.4\pm0.4$ & $\gtrsim-0.3$ & yes\\ 3313 & 0.4 & 6.9 & $\phantom{0}5.1\pm0.7$ & $\gtrsim-0.3$ & yes\\ 3347 & 0.1 & 5.7 & $\phantom{0}1.5\pm0.3$ & $\gtrsim-1.1$ & no\\ 3362 & 1.1 & 5.7 & $12.1\pm2.5$ & $\gtrsim+0.6$ & no\\ 3724 & 0.3 & 9.6 & $\phantom{0}3.2\pm0.4$ & $\gtrsim-0.9$ & yes\\ 3736 & 1.6 & 6.1 & $18.1\pm0.5$ & $\gtrsim+0.9$ & yes\\ 4409 & 0.6 & 1.5 & $\phantom{0}7.3\pm0.1$ & $\gtrsim+1.2$ & no\\ 4452 & 0.2 & 6.1 & $\phantom{0}2.1\pm0.4$ & $\gtrsim-0.8$ & yes\\ 4580 & 0.2 & 3.0 & $\phantom{0}2.0\pm0.4$ & $\gtrsim-0.3$ & no\\ 4584 & 0.2 & 9.2 & $\phantom{0}2.1\pm0.5$ & $\gtrsim-1.2$ & yes\\ \hline \end{tabular} \label{tab:soloC} \end{table} \section{Classification} The determination of the radio spectral index in the previous section has allowed us to make preliminary hypotheses about the nature of the bubbles. However, a multi-wavelength approach is necessary to fully characterise these objects.
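As an aside, the range estimates of Table \ref{tab:soloC} above can be reproduced with a short sketch (a minimal Python example, not the actual analysis code; the band central frequencies are those quoted in the observing strategy, and the exact tabulated values differ slightly because of rounding). It is illustrated with Bubble 3188, a point source with $S_C=1.0\um{mJy}$ and an $L$-band map rms of $1.46\um{mJy/beam}$.

```python
import math

NU_C, NU_L = 4.959, 1.4675  # GHz, C- and L-band central frequencies

def l_band_range_and_alpha_min(s_c, rms_l, n_beams=1.0):
    """Possible L-band flux-density range and minimum spectral index
    for a bubble detected only in C band.
    Upper limit: 3 x rms for a point source, or sqrt(N_beams) x 3 x rms
    for an extended source; lower limit from pure black-body emission
    (alpha = 2) extrapolated from the C-band flux."""
    s_l_max = 3.0 * rms_l * math.sqrt(n_beams)
    s_l_min = s_c * (NU_L / NU_C) ** 2          # alpha = 2 extrapolation
    alpha_min = math.log(s_c / s_l_max) / math.log(NU_C / NU_L)
    return s_l_min, s_l_max, alpha_min

# Bubble 3188 (point source): S_C = 1.0 mJy, L-band rms = 1.46 mJy/beam
smin, smax, amin = l_band_range_and_alpha_min(1.0, 1.46)
print(round(smin, 1), round(smax, 1), round(amin, 1))  # 0.1 4.4 -1.2
```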
In addition to the MIPSGAL and GLIMPSE observations, many bubbles were detected in other IR bands, from $1.25\mic{m}$ to $160\mic{m}$.\footnote{Herschel observations detected bubbles also at longer wavelengths, but they will not be discussed in this work.} In particular we took into account data from the on-line catalogues of: the 2-Micron All Sky Survey (2MASS) at $1.25\mic{m}$ ($J$ band), $1.65\mic{m}$ ($H$ band) and $2.17\mic{m}$ ($K_s$ band) \citep{Cutri2003}; the Wide-field IR Survey Explorer (WISE) at $3.4\mic{m}$, $4.6\mic{m}$, $12\mic{m}$ and $22\mic{m}$ \citep{Cutri2012}; the Midcourse Space Experiment (MSX) at $8.3\mic{m}$, $12\mic{m}$, $15\mic{m}$ and $21\mic{m}$ \citep{Egan2003}; the IR Astronomical Satellite (IRAS) at $12\mic{m}$, $25\mic{m}$ and $60\mic{m}$; and the Japanese satellite AKARI at $9\mic{m}$, $18\mic{m}$, $65\mic{m}$, $90\mic{m}$, $140\mic{m}$ and $160\mic{m}$. In Table \ref{tab:synopt}, for each bubble listed in Table \ref{tab:spInPS}, except 3654 and 3706, a brief summary of all the available IR observations is presented. In the last column we report a possible classification for each bubble, as reported in the literature or derived in this work. Besides the IR archive search, we also looked for possible detections in H$\alpha$ using the SuperCOSMOS H-alpha Survey (SHS; \citealt{Parker2005}). The survey detects all the known PNe in Table \ref{tab:obsVLA} (except 3558 and 3654, not covered by the survey), but also bubbles 3193, 4436, 4602 and 4607. Our radio spectral index analysis has shown that these four bubbles are thermal emitters (see Tables \ref{tab:spInPS}, \ref{tab:soloC} and \ref{tab:synopt}). If we assume that the H$\alpha$ emission is a good tracer of the radio free-free continuum, the detection of these four bubbles in the SHS corroborates our classification. However, only Bubble 4602 is clearly detected in H$\alpha$, while the other three nebulae appear very faint and barely visible (we cannot even exclude spurious detections).
We therefore cautiously avoid a quantitative analysis at this stage. In the following subsections, we will make use of this information to attempt a classification of the bubbles whose nature is still uncertain. \begin{landscape} \begin{table} \caption{Synoptic table of IR observations. Legend: `C' only central source, `N' only diffuse emission, `B' both central source and diffuse emission, `P' point source due to low resolution, `--' no source detected. In the last column the `?' indicates a candidate, while `\textit{RadTh}' indicates that we can only state that we are observing a radio thermal emitter.} \begin{tabular}{cccccccl}\hline Bubble & 2MASS & WISE & IRAC\footnote{From GLIMPSE.} & MSX & IRAS & AKARI & Comments\\ & $J$/$H$/$K_s$ & [3.4]/[4.6]/[12]/[22] & [3.6]/[4.5]/[5.8]/[8] & [8.3]/[12]/[15]/[21] & [12]/[25]/[60] & [9]/[18]/[65]/[90]/[140]/[160]\\\hline 3222 & --/C/C & C/C/N/N & C/C/B/B & P/P/P/P & --/P/-- & --/--/--/--/--/-- & PN? \citep{Urquhart2009}\\ 3333 & --/--/-- & --/--/--/N & --/--/--/-- & --/--/--/-- & --/--/-- & --/--/--/--/--/-- & \textit{RadTh} \textbf{(This work)}\\ 3354 & --/--/-- & --/--/N/N & --/--/N/N & --/--/--/-- & N/--/P & --/--/P/--/P/P & H \textsc{ii} region? \citep{Anderson2011}\\ 3367 & --/C/C & C/C/N/N & C/C/C/N & --/--/--/-- & --/--/-- & --/P/--/--/--/-- & PN? \textbf{(This work)}\\ 3438 & C/C/C & C/C/C/N & C/C/C/C & P/P/P/P & P/P/-- & P/--/--/--/--/-- & \textit{RadTh} \textbf{(This work)}\\ 3448 & C/C/C & C/C/N/N & C/C/C/N & --/--/--/-- & --/P/P & --/P/--/--/--/-- & PN? \citep{Gvaramadze2010}\\ 3866 & --/--/-- & --/--/--/-- & --/--/--/-- & --/--/--/-- & --/--/-- & --/--/--/--/--/-- & PN? \citep{Anderson2011}\\ 4436 & --/--/-- & --/--/N/N & --/--/--/-- & --/--/--/-- & --/P/P & --/P/--/--/--/-- & PN? \textbf{(This work)}\\ 4465 & --/--/-- & --/--/N/N & --/--/--/-- & --/--/--/-- & --/--/-- & --/--/--/--/--/-- & \textit{RadTh} \textbf{(This work)}\\ 4473 & --/--/-- & --/N/N/N & --/N/N/N & --/--/--/-- & --/P/-- & --/P/--/--/--/-- & PN?
\textbf{(This work)}\\ 4486 & --/--/-- & --/--/N/N & --/--/--/-- & --/--/--/-- & --/--/-- & --/--/--/--/--/--\\ 4497 & --/--/-- & --/--/N/N & --/--/--/-- & --/--/--/-- & --/--/-- & --/--/--/--/--/-- & \textit{RadTh} \textbf{(This work)}\\ 4552 & --/--/-- & --/--/N/N & --/--/--/-- & --/--/--/-- & --/--/-- & --/--/--/--/--/-- & \textit{RadTh} \textbf{(This work)}\\ 4589 & --/--/-- & --/--/N/N & --/--/--/-- & --/--/--/-- & --/--/-- & --/P/--/--/--/-- & \textit{RadTh} \textbf{(This work)}\\ 4602 & --/--/-- & N/N/N/N & N/N/N/N & P/--/P/P & --/P/P & --/P/--/P/P/-- & PN? \citep{Kohoutek2001}\\ 4607 & --/--/-- & --/--/N/N & --/--/--/N & --/--/--/-- & --/--/-- & --/--/--/--/--/-- & \textit{RadTh} \textbf{(This work)}\\\hline \end{tabular} \label{tab:synopt} \end{table} \end{landscape} \subsection{Radio emission characterization} \label{sec:radiochar} In Section \ref{sec:detsrc} we discussed the derivation of the radio spectral index between $20\um{cm}$ and $6\um{cm}$ for all those bubbles whose flux density is well determined. We found that most of the bubbles have a positive or slightly negative spectral index, indicating that we are very likely observing thermal free-free emission, typically in the optically thick regime, with a large number of sources presenting a spectral index close to 0. This behaviour was somewhat expected, since the majority of the already classified bubbles are PNe (see Table \ref{tab:obsVLA}). Furthermore, other kinds of evolved stars (such as LBVs or WRs) are also characterized by radio free-free emission, with only SNRs showing clear non-thermal features. For 5 bubbles, a potential classification is available from the literature, according to which 4 are PN candidates (denoted as squares in Figure \ref{fig:radio_color}) and 1 is an H \textsc{ii} region candidate (denoted as a triangle in Figure \ref{fig:radio_color}). For these sources, the spectral index derived from our analysis is consistent with the existing classification. Two sources, i.e.
Bubbles 3367 and 4486, are characterized by rather negative spectral index values. Their spectral indices were estimated as $-0.30$ and $-0.25$ respectively, values too low to be ascribed to pure free-free emission. However, the errors associated with these measurements are significant, so the thermal emission hypothesis cannot be entirely ruled out. \begin{figure*} \begin{center} \includegraphics[width=10cm]{fig2_bw.pdf} \caption{Comparison of the flux densities at $20\um{cm}$ and at $6\um{cm}$. The red (lower) area delimits the range of expected values for free-free emission, with the red line (bottom) representing pure black-body emission ($\alpha=2$) and the green line (top) pure optically thin free-free emission ($\alpha=-0.1$). The blue (upper) area delimits spectral indices between $-0.5$ and $-1$, typical of optically thin synchrotron emission. Noticeably, the majority of the points lie close to the green line.} \label{fig:radio_color} \end{center} \end{figure*} \subsection{Relation between radio and MIPS 24 micron emission} \label{sec:radioMIPSGAL} The emission at $24\mic{m}$ and at $6\um{cm}$ has different origins. In fact, the emission at $24\mic{m}$ originates both from warm thermal dust emission and from gas forbidden lines, such as [O \textsc{iv}] at $25.89\mic{m}$ \citep{Flagey2011}. The radio emission at $6\um{cm}$, instead, originates from either thermal free-free emission or synchrotron emission. However, it was shown by several authors that a strong correlation between mid-/far-IR and radio emission exists (\citealt{deJong1985}; \citealt{Helou1985}; \citealt{Pinheiro2011}). In Figure \ref{fig:MIra} the flux density at $24\mic{m}$ from MIPSGAL is plotted against the flux density at $6\um{cm}$ from our observations (Table \ref{tab:fluxC}), for all the bubbles with a measured 6-cm flux density, with the only exception of Bubble 3313 (see below). The figure shows a clear correlation between the emission in the two bands.
If we define for each bubble the quantity \begin{equation} q=\log\frac{S_{\mathrm{IR}}}{S_{\mathrm{ra}}} \end{equation} we find that $\overline{q}=1.9\pm0.4$, where the error is computed as the standard deviation of the distribution. A linear fit to the ensemble of the $\log S_\mathrm{IR}$ vs. $\log S_\mathrm{ra}$ values yields \begin{equation} \log S_{\mathrm{IR}}=0.9\log S_{\mathrm{ra}}+2.0 \end{equation} from which, despite the small size of the sample, it is clear that the relation is almost perfectly linear (slope 0.9 instead of 1); therefore, the mean value $\overline{q}$ is a good representation of $\log(S_\mathrm{IR}/S_\mathrm{ra})$. Bubble 3313 has a much higher $S_\mathrm{IR}/S_\mathrm{ra}$ value ($\sim\!4000$) with respect to the rest of the sample. At $24\mic{m}$ this source appears very extended (about $80''$) and might be interacting with Bubble 3312 (\citealt{Gvaramadze2010}; \citealt{Wachter2010}). Spectroscopic near-IR studies of the central sources of these two bubbles reveal that both can be classified as WR stars of the same spectral type WN9h \citep{Burgemeister2013}. Our radio observations at $6\um{cm}$ show a very faint irregular nebula around the central star of Bubble 3313, less extended than the 24-$\umu$m nebula, with no emission around the other bubble or in any other region where the 24-$\umu$m emission is present (see Figure A17 in Appendix A). Despite the fact that this bubble is detected in the MAGPIS 20-cm tile, no emission is visible in our maps at $20\um{cm}$. Indeed, it is possible that the extended emission is below our detection limit (especially at $20\um{cm}$) and/or that it was resolved out (especially at $6\um{cm}$). For these reasons, the flux density computation is not considered reliable enough, and the bubble was not included in this part of the analysis. Although the emission at $24\mic{m}$ is well correlated with the emission at $6\um{cm}$, we cannot use this effect to classify our sources.
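As a minimal numerical sketch of the quantities above (the flux-density values below are purely hypothetical, and \texttt{q\_value} is our own helper, not code associated with this work):

```python
import math

def q_value(s_ir, s_ra):
    """Logarithmic IR-to-radio flux density ratio, q = log10(S_IR / S_ra)."""
    return math.log10(s_ir / s_ra)

# Purely hypothetical 24-um and 6-cm flux densities (same units, e.g. mJy).
fluxes = [(800.0, 10.0), (1500.0, 20.0), (300.0, 5.0)]

q_values = [q_value(s_ir, s_ra) for s_ir, s_ra in fluxes]
q_mean = sum(q_values) / len(q_values)

# The fitted relation log S_IR = 0.9 log S_ra + 2.0 predicts the IR flux
# density from the radio one:
def s_ir_from_fit(s_ra):
    return 10 ** (0.9 * math.log10(s_ra) + 2.0)
```

Because the fitted slope is close to 1, the predicted ratio $S_\mathrm{IR}/S_\mathrm{ra}$ is nearly constant over the sampled flux range, which is why the single number $\overline{q}$ summarizes the correlation well.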
For example, if we compute $\overline{q}$ for 8 known PNe, we find a value of $1.7\pm0.4$, which is consistent with the value for the whole sample. \begin{figure*} \begin{center} \includegraphics[width=10cm]{fig3_bw.pdf} \caption{Correlation between MIPSGAL flux densities at $24\mic{m}$ and radio data at $6\um{cm}$ from our EVLA observations. The grey dotted lines represent flux density ratios of 10, 100 and 1000.} \label{fig:MIra} \end{center} \end{figure*} \subsection{Relation between radio and IRAS 25 micron emission} \label{sec:radioIRAS} Combining our radio observations with IRAS archive data, it is possible to discriminate whether a source is a PN candidate or not. Although the poor resolution of IRAS did not allow us to resolve individual PNe, its sensitivity was sufficient to detect these objects at least at the distance of the Galactic center \citep{Pottasch1988}. Unfortunately, only a few of the bubbles studied here have archival IRAS fluxes, and none has a flux density determination in more than two bands. Using the IRAS Point Source Catalogue and archival VLA 6-cm data \citep{Becker1994} for a sample of known PNe and H \textsc{ii} regions, we were able to generate color plots useful for our classification purposes. As a first step, it is important to notice that, following the discussion in Section \ref{sec:radioMIPSGAL}, the IRAS flux densities at $25\mic{m}$ are well-correlated with the radio flux densities at $6\um{cm}$ (Figure \ref{fig:I25ra}). \begin{figure*} \begin{center} \includegraphics[width=10cm]{fig4_bw.pdf} \caption{Correlation between IRAS flux densities at $25\mic{m}$ and radio data at $6\um{cm}$. Small crosses are archive PNe and small points archive H \textsc{ii} regions; larger markers represent our bubbles with IRAS archive values and our radio data.
It is possible to notice that the two flux densities are well-correlated and that the PNe are usually characterized by lower flux density values.} \label{fig:I25ra} \end{center} \end{figure*} This plot is quite similar to Figure \ref{fig:MIra}. It is, however, interesting to notice how the two plots span a different range of values in flux parameter space, with the MIPSGAL and EVLA observations extending the coverage towards lower flux densities. We also notice that, though PNe and H \textsc{ii} regions partly overlap in Figure \ref{fig:I25ra}, H \textsc{ii} regions become dominant at very high flux densities. All 6 of our bubbles for which both flux density values are available are located in the lower-left region of the plot, so they are all compatible with both PNe and H \textsc{ii} regions. A more interesting result can be obtained by plotting the IRAS flux density values at $60\mic{m}$ against the radio flux densities at $6\um{cm}$ (Figure \ref{fig:I60ra}). \begin{figure*} \begin{center} \includegraphics[width=10cm]{fig5_bw.pdf} \caption{Correlation between IRAS flux densities at $60\mic{m}$ and radio data at $6\um{cm}$. Small crosses are archive PNe and small points archive H \textsc{ii} regions; larger markers represent our bubbles with IRAS archive values and our radio data. It is possible to notice that the PN population is characterized by a lower value of the ratio of the two flux densities and is well-separated from the H \textsc{ii} regions.} \label{fig:I60ra} \end{center} \end{figure*} In this plot it is still evident how the IR and radio flux density values correlate, but it is also possible to notice how the PNe represent a population clearly separated from the H \textsc{ii} regions (despite some exceptions). From this plot, we might be tempted to classify Bubble 4436 as a PN candidate. However, this hypothesis is not supported by the distribution of IRAS 60 micron vs. 25 micron fluxes (Figure \ref{fig:I60I25}).
\begin{figure*} \begin{center} \includegraphics[width=10cm]{fig6_bw.pdf} \caption{Correlation between IRAS flux densities at $60\mic{m}$ and at $25\mic{m}$. Small crosses are archive PNe and small points archive H \textsc{ii} regions; larger markers represent our bubbles with IRAS archive values. Also in this plot it is possible to notice that the PN population is characterized by a lower value of the ratio of the two flux densities and is well-separated from the H \textsc{ii} regions.} \label{fig:I60I25} \end{center} \end{figure*} In this case, PNe still occupy a well-defined region of the diagram, separate from the H \textsc{ii} regions, but bubbles and PNe do not share the same region in the plot, with bubbles having a much lower flux density than both PNe and H \textsc{ii} regions. Indeed, their low surface brightness is likely the reason these sources were not detected by the IRAS survey. Therefore, it is difficult to say which classification is more appropriate for Bubble 4436, given its outlier behaviour when compared to already classified objects. Using all IRAS bands combined with 6-cm data, we also generated color-color diagrams. However, none of them was useful for our classification attempt, since no particular trend was observed. \subsection{The importance of GLIMPSE data} \label{sec:claGLIMPSE} As we discussed in the introduction, one of the characteristics of the bubbles is that they are mostly detected only at $24\mic{m}$. The GLIMPSE survey, in fact, failed to detect extended emission for the majority of the bubbles, despite the great sensitivity of IRAC. However, in seven cases, a faint nebular emission appears in the GLIMPSE data, and for five of these we performed aperture photometry using the Aperture Photometry Tool\footnote{http://www.aperturephotometry.org}.
For Bubbles 3222 and 4607 it was impossible to derive a reliable flux density: in fact, the first nebula is very small and dominated by its central source, while the second is faint and immersed in a confused fore- and background. Before performing the photometry, we subtracted the foreground point sources, interpolated the empty pixels using the information from the surrounding background, and then estimated the sky background as the median value of a sufficiently large region in proximity of the source. In addition to aperture photometry, when the central source is visible within the bubble, we extracted point-source photometry from the online GLIMPSE catalogue. Information on the nature of a source detected in all IRAC bands comes directly from the 3-color image obtained by superposition of the monochromatic maps at $8\mic{m}$, $5.8\mic{m}$ and one of the other two bands. As discussed in \citet{Murphy2010}, PNe usually appear red, while H \textsc{ii} regions appear either yellow or white. This is due, for H \textsc{ii} regions, to PAH emission (yellow) or to broad-band thermal emission by dust (white) \citep{Cohen2011}. An inspection of the GLIMPSE 3-color images for Bubbles 3367, 3448, 4473 and 4602 reveals a red color for all of them. Of these, two, namely 3448 and 4602, are classified in the literature as PN candidates (\citealt{Kohoutek2001}; \citealt{Gvaramadze2010}), while nothing is found about the nature of the other two. From what emerges from this discussion and from what follows in the next section, it can be concluded that Bubbles 3367 and 4473 could also be considered PN candidates. It is remarkable, in particular, how Bubble 4473 morphologically resembles Bubble 4602 in the GLIMPSE images. On the other hand, Bubble 3354, classified as an H \textsc{ii} region by \citet{Anderson2011}, shows the expected yellow appearance.
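The background-estimation step of the photometry described above can be sketched as follows. This is a toy implementation with our own data structures, not the API of the Aperture Photometry Tool, and the point-source interpolation step is omitted:

```python
import statistics

def aperture_photometry(image, aperture, sky_region):
    """Toy sketch of the procedure in the text: estimate the sky background
    as the median of a nearby region, then sum the background-subtracted
    pixel values inside the source aperture.

    `image` maps pixel coordinates (x, y) to values; `aperture` and
    `sky_region` are sets of coordinates (hypothetical data structures)."""
    sky = statistics.median(image[p] for p in sky_region)
    return sum(image[p] - sky for p in aperture)

# A 2x2 toy image: three "background" pixels at 1.0 and one source pixel.
toy_image = {(0, 0): 1.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 6.0}
flux = aperture_photometry(toy_image, {(1, 1)}, {(0, 0), (0, 1), (1, 0)})
```

Using the median rather than the mean for the sky estimate makes the background robust against residual point sources in the sky region, which is the motivation for the procedure in the text.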
\begin{figure*} \begin{center} \includegraphics[width=5cm]{./3354_030508.pdf} \includegraphics[width=5cm]{./3367_030508.pdf} \includegraphics[width=5cm]{./3448_030508.pdf}\\ \includegraphics[width=5cm]{./4473_030508.pdf} \includegraphics[width=5cm]{./4602_030508.pdf} \caption{Three-color superposition of GLIMPSE tile cut-outs at $3.6\mic{m}$ (blue), $5.8\mic{m}$ (green) and $8\mic{m}$ (red) for Bubbles 3354, 3367, 3448, 4473 and 4602. It is remarkable how Bubble 3354 appears different in shape and color with respect to the others and how 4473 and 4602 are morphologically and chromatically similar.} \label{fig:GLIMPSEimg} \end{center} \end{figure*} All 5 of the bubbles considered show nebular emission at $8\mic{m}$, while only for one of them (Bubble 4602) is this nebular emission detected in all four bands. It was shown that the ratio between the flux density at $8\mic{m}$ and at $20\um{cm}$ falls within a well-determined interval and that different kinds of PNe are characterized by different values of this ratio \citep{Cohen2011}. In Figure \ref{fig:I8R20} we plot the GLIMPSE flux densities against the radio values from our data. \begin{figure*} \begin{center} \includegraphics[width=10cm]{fig8_bw.pdf} \caption{GLIMPSE flux densities at $8\mic{m}$ against radio flux densities at $20\um{cm}$ derived from our observations. The coloured area represents the ratio interval where PNe are usually located according to \citet{Cohen2011}, with a confidence level of $1\sigma$ (darker area) and $3\sigma$ (lighter area).} \label{fig:I8R20} \end{center} \end{figure*} It is possible to notice how Bubble 3354 clearly does not satisfy the selection criterion, in agreement with a classification as an H \textsc{ii} region and not a PN. The other 4 bubbles all lie inside the area where PNe should be found. In particular, the unclassified Bubble 4473 is very close to the median ratio value of 4.7, with a calculated ratio of 4.5.
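The ratio test above reduces to a simple computation. A minimal sketch, where the flux densities are hypothetical and only the median ratio of 4.7 is taken from the text:

```python
# Flux-density-ratio test for PN candidates. The median S(8um)/S(20cm)
# ratio of 4.7 is the value from Cohen et al. (2011) quoted in the text;
# the example flux densities below are hypothetical.
PN_MEDIAN_RATIO = 4.7

def ir_to_radio_ratio(s_8um, s_20cm):
    """Ratio of the 8-um and 20-cm flux densities (same units)."""
    return s_8um / s_20cm

ratio = ir_to_radio_ratio(45.0, 10.0)    # hypothetical source, ratio 4.5
offset = abs(ratio - PN_MEDIAN_RATIO)    # distance from the PN median
```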
These 4 bubbles can be divided into two groups, according to their ratio value: the first group comprises Bubbles 4473 and 4602 and the second group Bubbles 3367 and 3448. We have already discussed the morphological similarities of Bubbles 4473 and 4602: the result found here may suggest that these two objects could share many of their physical characteristics. The other two bubbles appear different from the first two. Indeed, for Bubble 3367 no morphological consideration can be made, while Bubble 3448 seems to have a bipolar structure. If all these bubbles are confirmed to be PNe, their morphological and physical differences may be due to intrinsic properties or to their evolutionary stage. \section{Summary and conclusions} The classification of bubbles is very complicated and a definitive answer on this topic is far from being given here. However, from this analysis it has clearly emerged that the multi-wavelength approach that we presented is a powerful tool for achieving a sensible classification. For at least 21 bubbles, previously unclassified, the spectral index analysis suggests that they are thermal free-free emitters. Important results have been obtained when our radio data have been combined with archival data from IR observations with Spitzer and IRAS. We have shown that correlation and color-color plots can help to discriminate among different types of objects. A word of caution is necessary concerning the IR-radio correlation. Although we have demonstrated that such a correlation -- which is known to characterize various classes of astronomical objects -- holds true also for Galactic bubbles, it cannot be used alone for classification purposes. We have discussed the morphology of the bubbles at different wavelengths, considering a peculiar shape as indicative of some kind of circumstellar envelope. These considerations are applicable only to a few sources.
Indeed, many bubbles are barely resolved, and their lack of significant features may be either an intrinsic property or an instrumental limitation. \section*{Acknowledgments} This work is based on observations made with the Very Large Array of the National Radio Astronomy Observatory, a facility of the National Science Foundation operated under cooperative agreement by Associated Universities Inc., and on data products from the \textit{Spitzer Space Telescope}, which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under a contract with NASA. The archive search made use of the SIMBAD database and the VizieR catalogue access tool, operated by the Centre de Donn\'ees astronomique de Strasbourg.
\section{Introduction} Since their discovery in 1991 \cite{Iijima1}, carbon nanotubes have, due to their unique mechanical and electronic properties, been the subject of tremendous scientific and technological interest. In the field of mesoscopic physics, carbon nanotubes offer an easily accessible experimental platform for studying the physics of the textbook example of a particle trapped inside a box, a so-called quantum dot or artificial atom~\cite{kastner,bockrath}. Single quantum dots can simply be realized by contacting a nanotube with two metallic contacts (normally made of palladium); the contacts between the nanotube and the metallic leads usually act as tunnel barriers, characterized by the nanotube-lead tunneling rate $\Gamma$ and a capacitance $C$. The energy scales for nanotube quantum dots are given by a typical single-electron charging energy $U_{C}\approx 3$~meV~$\approx 30$~K and a quantum-mechanical level spacing $\delta E=\frac{h v_F}{2L}$, where $h$ is Planck's constant. Using the Fermi velocity $v_{F}=8 \times 10^{5}$~m/s and an effective nanotube length $L=1\,\mu$m, the level spacing amounts to $\delta E \approx 2$~meV. In contrast, the standard approach for manufacturing quantum dot devices has relied on structures in GaAs-based 2-dimensional electron gases (2-DEGs), which can be defined using etching and gating techniques. The main advantage of this system is the high degree of control over the quantum dot properties, which has been achieved over the last years. These quantum dots allow for a precise tuning of the coupling to the leads by energizing locally acting gate electrodes, see e.g. \cite{KouwenhovenReview} and references therein. Additionally, center gates can be used in order to define double quantum dot structures with a tunable inter-dot coupling.
This tunability is an essential ingredient for further experiments exploring the nature of electronic states in quantum dots - or, even more ambitiously, for realizing quantum electronic devices such as spin- or charge-based quantum bits~\cite{loss,schoen,Burkard}. Whereas this high degree of control has been lacking in nanotube-based quantum dots so far, using carbon nanotubes offers fascinating opportunities. For example, new physical phenomena such as superconducting correlations or spin injection into quantum dots can be studied in carbon nanotube quantum dots~\cite{buit2,Sahoo}. In contrast to carbon nanotubes, it has so far not been possible to attach ferromagnetic or superconducting contacts to GaAs-based quantum dots. Moreover, the influence of the surrounding nuclear spins is expected to limit electron spin dephasing times in GaAs (double) quantum dots~\cite{hyperfine}. In carbon nanotubes, on the other hand, nuclear spins are predominantly absent and hyperfine interactions are thus strongly reduced. The question to which degree carbon nanotube quantum dots can be tuned using locally acting gate electrodes is therefore an important issue to address. In this article we describe a technique for implementing local top-gate electrodes onto a single-walled carbon nanotube (SWNT). After characterizing the functionality of the top-gates, we will then make use of them in order to define and control double quantum dots inside SWNTs. \section{Local gating of carbon nanotubes} \subsection{Strategies for gating nanotube quantum dots} With the nanotube lying on an oxidized Si substrate, a natural way of gating such a single quantum dot is to apply a voltage to the doped Si substrate. The Si then acts as a back-gate globally affecting the whole quantum dot. In order to create multiple dots in such a device and control them independently, however, one will need to find a way of locally gating a nanotube.
By using such local gates, one can either create a barrier or simply shift the chemical potential within a small part of the nanotube. In the following, we will briefly review two different strategies for local gating of nanotubes that have been reported in the literature, and will describe in detail the technique that has been developed in our lab. At the end of this section, measurements of electrical transport through nanotube devices with local gates will be presented. \begin{figure} \begin{center} \includegraphics*[width=0.8\linewidth]{figure1.eps} \caption{\label{figure1} (a) Side-view schematic of a SWNT device with three top-gates. (b) Scanning electron micrograph of the device. Gates are labelled gate~1, center~gate, and gate~2 (from source to drain).} \end{center} \end{figure} In order to fabricate gate electrodes locally acting on a nanotube, side-gates represent a straightforward option~\cite{BiercukPRL}. Besides the source and drain contacts, additional electrodes are patterned in the vicinity of the nanotube. The advantage of this technique is that contacts and side-gates can be fabricated within the same processing step. It is, however, difficult to place the side-gates as close to the nanotube as possible without contacting it electrically. Thus, side-gates are typically spaced by approximately 100~nm from the SWNT, making the gating less efficient and their action less local. More efficient are gates made by directly evaporating the gate electrode on top of the nanotube, with a thin gate oxide underneath. These so-called top-gates are spaced from the SWNT only by the thickness of the gate oxide ($\approx 1$--$10$~nm), making them act more efficiently and (depending on their width) more locally as compared to side-gates. Despite the fact that this method has drawbacks as well (additional processing steps, and the nanotube properties may be modified underneath the top-gates), top-gates are the most promising approach for creating local barriers in SWNTs.
Therefore, we have developed a reliable method for fabricating top-gate electrodes in our laboratory, which we will now discuss in more detail. \subsection{Experimental} SWNTs were grown on a degenerately doped Si/SiO$_2$ substrate by means of chemical vapor deposition (CVD). Details of the CVD process can be found elsewhere~\cite{Juerg}. After the initial preparation of SiO$_2$/Ti/Au bond pads and alignment markers, SWNTs were then localized with a scanning electron microscope~(SEM). In the following step, the gate electrodes were defined by e-beam lithography. This was followed by electron-gun evaporation of SiO$_2$ as the gate oxide, Ti as the gate metal, and Pd serving as an anti-oxidant cover layer. The gate-oxide film thickness was chosen to be 10~nm, the Ti film thickness 30~nm, and that of the Pd layer 25~nm. The materials were evaporated at a pressure of $\approx$~10$^{-7}$~mbar. In a final lithography and evaporation step, the source and drain electrodes of the nanotube, consisting of 40~nm Pd, were defined. The evaporation conditions were the same as described above, except that the substrate was kept at a constant temperature of $\approx$ 0$^{\circ}$C by cooling the sample holder inside the evaporation chamber. This cooling helps to reduce outgassing of materials inside the vacuum chamber due to heating during the evaporation. After lift-off of the remaining PMMA, the samples were glued into a 20-lead chip carrier and bonded. Figures~\ref{figure1}(a) and (b) show a side-view schematic and a scanning electron micrograph of a typical SWNT device with three top-gates in addition to the source and drain electrodes. The spacing between the source and drain electrodes amounts to 2.2~$\mu$m, and the width of the gates was chosen to be 200~nm. The back-gate oxide has a commonly used thickness of 400~nm.
\begin{figure}[t] \begin{center} \includegraphics*[width=\linewidth]{./figure2.eps} \caption[Bending of bands due to local gates]{(a) Linear conductance G on a logarithmic scale for a device with three top-gate electrodes (oxide thickness 10 nm) versus top-gate voltage at T = 300~K. The gates that are not being swept are connected to ground potential. Inset: (i) - (iii) illustrate the band structure for increasing top-gate voltage. (b) Colorscale plot (dark=0, bright=0.008~e$^2$/h) of the conductance versus gate~1 and gate~2 for constant center-gate voltage at 4.2 K.} \label{figure2} \end{center} \end{figure} \subsection{Effect of local gate electrodes at 300~K and at 4.2~K} In Fig.~\ref{figure2}(a) the linear conductance versus gate voltage of a device with three top-gate electrodes is plotted. The gate-dependence identifies the semiconducting nature of the SWNT. At a voltage of roughly 0.6~V applied to either of the three top-gates, the conductance through the device is suppressed, indicating that the chemical potential is shifted locally into the semiconducting gap of the SWNT. After a decrease of the conductance for increasing gate voltage, the conductance rises again for more positive gate voltages. This behavior is explained by the band diagram sketched in the inset of Fig.~\ref{figure2}(a). Intrinsically, the tube is p-doped and the chemical potential $\mu$ resides in the valence band (i). For increasing voltage at the top-gate, the potential landscape is changed locally, making $\mu$ lie within the energy gap below the gate (ii). In this scenario the conductance through the nanotube reaches its minimum. With this technique it should thus be possible to create local barriers inside a carbon nanotube, allowing one to create artificial potential landscapes. If the gate voltage is increased even further, the lower edge of the conduction band will eventually reach the upper edge of the valence band (iii).
Now thermally activated band-to-band processes, indicated by the green arrows, are possible and the conductance increases again. We have observed such behavior only at 300~K, indicating the large activation barriers involved in these band-to-band charge transfer processes. Band-to-band charge transfer processes have also been reported in Ref.~\cite{Appenzeller}. In Fig.~\ref{figure2}(b) the linear differential conductance at 4.2~K is plotted on a colorscale (bright=more conductive) versus the voltages applied at top-gates~1~and~2 for a constant center-gate voltage of $V_{C}=-1$~V. At voltages of around 0.5~V applied to either of the top-gates, the chemical potential is shifted into the energy gap of the nanotube and electrical transport is suppressed. For lower top-gate voltages, sweeping gate~1 and gate~2 leads to pronounced oscillations of the conductance due to single-electron charging and finite-size effects of the nanotube, which are accessible at low temperatures. \section{Nanotube double quantum dots} \subsection{Previous work} Recently, enormous progress has been achieved in the field of double quantum dots in carbon nanotubes. In 2004, Mason et al. first demonstrated the local gate control of an intrinsic double quantum dot inside a carbon nanotube \cite{Mason}. This work was then extended by the same group in Ref.~\cite{Biercuk}, where a tunable mutual capacitance was demonstrated. In a recent work, Sapmaz~et~al. observed electronic transport through excited states as seen in finite-bias triangles in a SWNT double dot~\cite{sapmaz}. In Ref.~\cite{graeber}, molecular eigenstates of a strongly coupled carbon nanotube double quantum dot were observed and analyzed. \subsection{Experimental data} In this section we will show that it is possible to reliably define clean double quantum dots in SWNTs by using top-gate electrodes. We focus on three devices, labelled A, B and C, with three top-gates each.
Samples A and B were fabricated according to Fig.~\ref{figure1}(a). In the case of device C, the source-drain spacing was reduced to 1.4~$\mu$m and the top-gate width to 100~nm. Whereas devices A and B are based on a semiconducting SWNT (operated in the hole regime), device~C is metallic. In Fig.~\ref{figure6}(a)-(c) the differential conductance versus the voltages applied at two top-gates is plotted on a colorscale (bright=more conductive). For devices A and C, the center gate has been set to a constant value of -0.1~V and 0~V, respectively, and gate~1 and gate~2 are swept. In the case of device B, the center gate and gate~1 are swept, while gate~2 was kept at a constant voltage of $V_{G2}=-0.1$~V. The visible high-conductance ridges observed for all three devices define a charge-stability map that is shaped like a honeycomb. This honeycomb pattern is characteristic of a double quantum dot. Within each cell, the number of holes (n,m) on the two dots is constant. Energizing gate~1~(2) to more negative voltages successively fills holes into dot~1~(2), whereas a more positive voltage pushes holes out of the dot. The fact that all three devices can be tuned to exhibit a honeycomb pattern shows that the double quantum dots are indeed defined by the local gates and not intrinsic to the nanotube. Common to all three devices is that the applied gate voltages are close to 0~V, i.e. far from the pinch-off voltage. In such a regime, we expect a smooth modulation of the electronic potential rather than sharp and steep barriers. \begin{figure}[t] \begin{center} \includegraphics*[width=\linewidth]{./figure3.eps} \caption{(a) Colorscale plot of the conductance versus top-gate voltages at 300~mK for device A. Bright corresponds to 0.4~e$^2$/h. The obtained honeycomb pattern is the charge stability map of a double quantum dot. (b) Same for device B at 500 mK; bright corresponds to 0.08~e$^2$/h. (c) Same for device C at 50 mK; bright corresponds to 0.035 e$^2$/h.
(d) Zoom into the triple point region marked by the dashed box in (c) at a bias voltage of $V_{sd}$=500$\mu$V. (e) Capacitive model of a double quantum dot.} \label{figure6} \end{center} \end{figure} The honeycomb charge stability map allows for a quantitative determination of the double dot capacitances as defined in the electrostatic double dot model in Fig.~\ref{figure6}(e), following the work of van der Wiel et al.~\cite{VanderWiel}. As an example we will determine the capacitances of the double dot defined in device C, see Fig.~\ref{figure6}(c). From the dimensions of the honeycomb cell one can extract the gate capacitances: \begin{equation} C_{G1/2}=\mid e \mid/\Delta\,V_{G1/2}\;\;\;, \end{equation} yielding $C_{G1}\approx30$~aF and $C_{G2}\approx25$~aF. Of particular importance are the points where three charge states are degenerate, so-called triple points. Two such points are marked by dashed circles in Fig.~\ref{figure6}(b) for clarity. When applying a finite bias voltage, the triple points transform into triangles, in which transport is enabled. Fig.~\ref{figure6}(d) shows the triple point region within the dashed box of Fig.~\ref{figure6}(c) at an applied source-drain voltage of $V_{sd}=500\mu V$. From the dimensions of these triangles $\delta V_{G1(G2)}$ and \begin{equation} C_{G1(G2)}/C_{1(2)}=\mid V_{sd}\mid/\delta V_{G1(G2)}\;\;\;, \end{equation} we obtain the total capacitance $C_{1}=C_s+C_{G1}+C_m\approx 60$~aF and $C_{2}=C_d+C_{G2}+C_m\approx 75$~aF of dot~1 and dot~2, respectively. Here $C_m$ denotes the mutual capacitance and $C_{s(d)}$ the capacitance of the tunnel barrier to source (drain). In a purely electrostatic model the mutual capacitance can be evaluated from the spacing of two adjacent triple points. This spacing, however, is influenced by the tunnel coupling $t$ in between the two dots as well. This quantum mechanical effect leads to a level anti-crossing, resulting in curved wings in the vicinity of the triple points. 
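The two capacitance relations above can be inverted numerically. A minimal sketch, where the honeycomb-cell and triangle dimensions are hypothetical values chosen only to illustrate the extraction (they give capacitances of the order quoted for device C, but are not measured data):

```python
E_CHARGE = 1.602176634e-19   # elementary charge (C)

# Hypothetical honeycomb dimensions (in volts), for illustration only.
delta_V_g1 = 5.3e-3    # honeycomb cell width along gate 1
delta_V_g2 = 6.4e-3    # honeycomb cell width along gate 2
dv_g1 = 1.0e-3         # finite-bias triangle size along gate 1
dv_g2 = 1.5e-3         # finite-bias triangle size along gate 2
v_sd = 500e-6          # applied source-drain bias

# Gate capacitances from the cell dimensions: C_G = |e| / Delta V_G
c_g1 = E_CHARGE / delta_V_g1
c_g2 = E_CHARGE / delta_V_g2

# Total dot capacitances from C_G / C = |V_sd| / delta v_G
c_1 = c_g1 * dv_g1 / v_sd
c_2 = c_g2 * dv_g2 / v_sd

a_F = 1e-18  # one attofarad, for converting to the units used in the text
```

The mutual capacitance $C_m$ can then be extracted analogously from the triple-point splitting, as described next.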
A rough estimate of the mutual capacitance, however, can be achieved by drawing the asymptotes to the curved borders of the honeycomb, see the bottom left triple point region of Fig.~\ref{figure6}(c). From the vertical (horizontal) distance $\Delta V_{G1}^{m}$ ($\Delta V_{G2}^{m}$) it is then possible to extract $C_{m}$ by using \begin{equation} \Delta V_{G1,2}^{m} = |e| C_{m} / C_{G1,2} C_{2,1} \;\;\;. \end{equation} We obtain a mutual capacitance of $C_{m}\approx 5$~aF. Additionally, analyzing the curvature of the honeycomb borders allows one to precisely evaluate the tunnel coupling~$t$. For a detailed description on this we refer to Ref.~\cite{graeber}, where it was found that $t$ can exceed the electrostatic nearest-neighbor interaction by as much as an order of magnitude. This fact reflects the one-dimensional geometry of a nanotube; electrostatic interactions are reduced due to the large separation of the `center of mass' of the charges (while still allowing a significant overlap of the wavefunctions). \subsection{Where exactly are the two dots?} So far we have seen that it is possible to reliably define and control double quantum dots in SWNTs - the question where precisely the two dots are located, however, has not been addressed yet. As we will point out, from Fig.~\ref{figure6}(a) and (b) it follows that the dots are separated by the center gate electrode. Recall that devices A and B are identical except that for device B the center gate (instead of gate~2) is used to control dot~2. The dashed lines in Fig.~\ref{figure6}(a) and (b) connect triple points corresponding to a constant charge $Q_{1(2)}=const.$, residing on dot~1(2). A non-zero slope of these lines indicates a cross capacitance, i.e. the gate controlling one of the two dots also affects the chemical potential of the other. A non-zero slope is observed for the $Q_1=const.$-lines in (b). Hence, the center~gate affects both dot~1 and dot~2. 
On the other hand, this is not the case for gate~1 or gate~2 in Fig.~\ref{figure6}(a) and (b). Such behavior can be explained assuming that the two dots are separated by the center~gate, screening the cross-action of gates~1~and~2. The center~gate, however, located in between the dots and creating a barrier, is not screened and thus acts on the two dots. If the center gate is capable of creating a tunnel barrier inside the nanotube, gate~1 and gate~2 will be as well. Also recall that the voltages applied to the top-gates are all within the same range, $V_{Top-gate}\approx0$~V. Consequently, dot~1 is located between gate~1 and center~gate, whereas dot~2 extends from the center~gate to gate~2. The scenario suggested implies that the part of the SWNT between gate~1(2) and source (drain) electrode has an effectively energy-independent transmission. In fact, this assumption is quite reasonable, taking into account the high quality of Pd-nanotube electrical contacts~\cite{daiPd}. Very transparent contacts ($\Gamma\approx\delta E$) lead to a constant, or at least only slightly modulated, transmission. Transport through our device will be dominated by the bottleneck in transmission - the gate-defined double quantum dot. \section{Conclusions} In this article we have presented a reliable approach to define and control double quantum dots in SWNTs by using locally acting top-gate electrodes. That the double quantum dots are not intrinsic to the carbon nanotubes is confirmed by the presented measurements of honeycomb patterns for three different devices. Furthermore, using an electrostatic model, we have been able to characterize the double-dot system quantitatively by extracting its capacitances. Despite these encouraging results, further research is necessary. Challenges to master include the gate-control of the quantum-mechanical tunnel coupling of the two quantum dots and the access to regimes of only a few charge carriers per dot.
Carbon nanotubes may, due to their unique properties and their experimental ease, then play an important role in future information technology. \section*{Acknowledgements} For theoretical support we are gratefully indebted to W.A. Coish and D. Loss. We acknowledge experimental contributions by J. Furer and C. Hoffmann, and thank J. Gobrecht for oxidized Si substrates. Financial support from the Swiss NSF, the NCCR on Nanoscience, and the `C. und H. Dreyfus Stipendium'~(MRG) is greatly appreciated. \section*{References} \bibliographystyle{apsrev}
\section{Introduction} The increase in the density of partons with the energy of a hadron collider increases the number of interactions in which more than one parton in each hadron participates in hard interactions. Multi-parton interactions are therefore expected to play a larger role at the LHC than ever before. They contain information about the proton structure not available from single-parton interactions, and they can be important backgrounds to other signal processes, such as Higgs production \cite{DelFabbro:1999tf}. Multi-parton interactions have an experimental history stretching from the ISR \cite{Akesson:1987ab} to the LHC \cite{Dobson:2011}, and have long been modeled in event generators \cite{Buckley:2011ms}. However, the treatment of multi-parton interactions from the point of view of perturbative QCD has only recently started to accelerate. The hard interactions are connected via parton distributions, and the partons originating from the same hadron can be correlated. This leads to interference and correlation effects not present in single-parton scatterings. These effects are often neglected in studies of multiple hard scatterings, but the validity of such approaches still has to be investigated. A suitable scene for such a study is set by the double Drell-Yan process \cite{Mekhfi:1983az,Gaunt:2010pi,Manohar:2012jr}, where two quark/anti-quark pairs annihilate into two vector bosons ($\gamma^*$, $Z$, $W^\pm$) which decay leptonically. This process has the advantage of being theoretically clean and well understood in the single-parton scattering case. We calculate the differential cross section, taking initial parton correlations into account, and derive constraints on the size of the spin correlations following from the positivity of double parton densities. \section{Double Parton Interactions} When two partons in a proton interact, only the sums of the momenta and quantum numbers have to be the same in the amplitude and its conjugate.
This allows for a momentum difference, $r$, between a parton in the amplitude and its partner in the conjugate amplitude, see figure \ref{figures:DYmomentum}. This difference has to be balanced by the parton in the other interaction. Similarly, the color and flavor of the partons can be matched inside each hard collision or as an interference effect between them. There can even be fermion number interference between quarks and anti-quarks, but we do not include that in our calculations. Furthermore, the spin of the interacting partons can be correlated, much in the same way as in single-parton collisions with polarized protons \cite{Boer:1999mm}. The correlations between the initial state partons are described by double parton distributions, DPDs. \begin{figure} \centering \includegraphics[width=0.7\textwidth]{DoubleDYMomentumSmaller.pdf} \caption{\label{figures:DYmomentum}The double Drell-Yan process where two quarks in the right moving proton interact with two anti-quarks from the left moving proton. $q_i$ is the momentum of the vector boson from interaction $i$. $k_1$ and $k_2$ ($\bar{k}_1$ and $\bar{k}_2$) are average momenta carried by the partons from the right (left) moving proton taking part in hard interactions 1 and 2, respectively, and $r$ is the momentum difference.} \end{figure} \subsection{Double Parton Distributions} In transverse position space the DPDs depend on the collinear momentum fractions $x_1$ and $x_2$ of the two partons taking part in hard collisions, on their differences in transverse positions between the amplitude and its conjugate, $\vek{z}_1$ and $\vek{z}_2$, and on the transverse distance between the hard vertices, $\vek{y}$. $\vek{z}_1$ ($\vek{z}_2$) is the Fourier conjugate of the average transverse momentum $\vek{k}_1$ ($\vek{k}_2$) and $\vek{y}$ of the momentum difference $\vek{r}$, see figure \ref{figures:DYmomentum}.
The DPDs describing the different quark polarizations are labeled by $q$ for unpolarized, $\Delta q$ for longitudinally polarized and $\delta q$ for transversely polarized quarks \cite{Diehl:2011yj}. For unpolarized and longitudinally polarized quarks the possible combinations are \begin{equation}\label{eq:DPD1}\begin{aligned} F_{qq} &= f_{qq}(x_1,x_2,\vek{z}_1,\vek{z}_2,\vek{y}), & F_{q \Delta q} &= g_{q \Delta q} (x_1,x_2,\vek{z}_1,\vek{z}_2,\vek{y}), \\ F_{\Delta q \Delta q} &= f_{\Delta q \Delta q}(x_1,x_2,\vek{z}_1,\vek{z}_2,\vek{y}), & F_{\Delta q q} &= g_{\Delta q q}(x_1,x_2,\vek{z}_1,\vek{z}_2,\vek{y}), \end{aligned}\end{equation} where the $f$'s are scalar and the $g$'s are pseudoscalar functions. Distributions with transversely polarized quarks in one of the two hard interactions carry an open index corresponding to the transverse spin vector, and have to be expanded in a basis spanning the transverse plane \begin{equation}\begin{aligned} F_{\Delta q \delta q}^i &= M\left(y^if_{\Delta q \delta q} + \tilde{y}^ig_{\Delta q \delta q}\right),& F_{q \delta q}^i &= M\left(\tilde{y}^if_{q\delta q} + y^ig_{q\delta q}\right) \end{aligned}\end{equation} and similarly with the subscripts interchanged. $M$ is the proton mass and $\tilde{y}^i=y^j\epsilon^{ij}$ is a transverse vector orthogonal to $y^i$. Transversely polarized quarks in both interactions give two transverse indices, and we need a tensor basis \begin{equation}\begin{split} \label{eq:DPD3} F_{\delta q \delta q}^{ij} &= \delta^{ij}f_{\delta q \delta q} + \left( 2y^i y^j - y^2\delta^{ij}\right)M^2f_{\delta q\delta q}^{\,t} + \left(y^i\tilde{y}^j + \tilde{y}^iy^j\right)M^2g_{\delta q\delta q}^{\,s} \\ &\hspace{0.3cm}+ \left(y^i\tilde{y}^j-\tilde{y}^iy^j\right)M^2g_{\delta q\delta q}^{\,a}. \end{split}\end{equation} Taking color interference into account gives one color singlet and one color interference distribution for each flavor combination and thus doubles the number of DPDs.
When the flavors of the quarks are different there are flavor square and interference distributions. The DPDs for the left moving proton will be denoted by a bar, $\bar{f}_{qq}$, which should not be confused with the bar appearing over subscripts (indicating anti-particles). \begin{figure} \centering \includegraphics[width=0.7\textwidth]{3Dangles.pdf} \caption{\label{figures:angles}Reference frame for each interaction is the rest frame of the vector boson. $\theta_i$ is the polar and $\varphi_i$ the azimuthal angle of the lepton. $\vec{l}_i$ ($\vec{\bar{l}}_i$) is the three momentum of the lepton (anti-lepton) from interaction $i$.} \end{figure} \section{Cross Section} The partonic cross sections are calculated to leading order for unpolarized, longitudinally and transversely polarized quarks and the results are combined with the DPDs describing the corresponding parton densities. The cross section formula, assuming transverse momentum dependent factorization, derived in \cite{Diehl:2011yj}, can be expressed as \begin{equation}\begin{split} \label{eq:cross} &\frac{d\sigma}{\prod_{i=1}^2dx_id\bar{x}_id^2\vek{q}_id\Omega_i}=\frac{1}{C}\sum_{q_1q_2\bar{q}_3\bar{q}_4}\sum_{a_1a_2\bar{a}_3\bar{a}_4}\frac{d\hat{\sigma}_{a_1\bar{a}_3}}{d\Omega_1}\frac{d\hat{\sigma}_{a_2\bar{a}_4}}{d\Omega_2}\int\frac{d^2\vek{z}_1}{(2\pi)^2}e^{-i \vek{z}_1\vek{q}_1}\int\frac{d^2\vek{z}_2}{(2\pi)^2}e^{-i \vek{z}_2\vek{q}_2}\\ &\hspace{1.2cm}\times\int d^2\vek{y}\left[F_{a_1a_2}\bar{F}_{\bar{a}_3\bar{a}_4}+F_{a_1\bar{a}_4}\bar{F}_{\bar{a}_3a_2}+F_{\bar{a}_3a_2}\bar{F}_{a_1\bar{a}_4}+F_{\bar{a}_3\bar{a}_4}\bar{F}_{a_1a_2}\right]+\left\{\text{\small flavor interference}\right\}. \end{split}\end{equation} $C$ equals 2 when the final states of the two hard interactions are identical and 1 otherwise. In the first sum $q_1$ ($q_2$) labels the flavor of the quark and $\bar{q}_3$ ($\bar{q}_4$) of the anti-quark entering the first (second) hard scattering. 
$a_i$ and $\bar{a}_i$ label the different combinations of unpolarized ($q_i$), longitudinally ($\Delta q_i$) and transversely ($\delta q_i$) polarized quarks and anti-quarks. The color square and interference distributions enter the cross section with equal weights and we can therefore keep color square/interference labels implicit. $d\hat{\sigma}_i/d\Omega_i$ is the partonic cross section differential in the direction of the outgoing lepton. To describe the final state we use angles defined in the rest frames of the vector bosons, with $\hat{z}$-axes defined as the direction bisecting the angle between $\vec{p}$ and $-\vec{\bar{p}}$ ($\vec{p}$ and $\vec{\bar{p}}$ are the three momenta of the protons), see figure \ref{figures:angles}. The $\hat{x}$-axis is an arbitrary transverse direction, which we for definiteness choose to point towards the center of the LHC ring. The differences between the axes defined for the two vector bosons are of order $\vek{q}_i/Q_i$ and can be neglected. We display explicit final results only for the cross sections integrated over the transverse boson momenta, not including flavor interference, and refer to \cite{Diehl:2012me} for the cross sections depending on the transverse boson momenta. Integrating over the transverse momenta of the bosons yields collinear double parton distributions \cite{Mekhfi:1983az}. After integration the DPDs depend on $x_1$, $x_2$ and only one transverse vector, $\vek{y}$, and therefore no pseudo-scalar functions can contribute. Due to time reversal symmetry, functions with one longitudinally and one transversely polarized quark vanish, for example $f_{\Delta q \delta q}=0$. We are then left with six collinear double parton distributions for each combination of quark flavors.
For unpolarized and longitudinally polarized quarks the cross section is \begin{equation}\begin{split} \frac{d\sigma^{(0)}}{dx_id\bar{x}_id\Omega_i} & = \frac{1}{C}\sum_{q_1q_2\bar{q}_3\bar{q}_4} \bigg\{ \left[K_{q_1\bar{q}_3}(1+\cos^2\theta_1)+K_{q_1\bar{q}_3}'\cos\theta_1\right]\left[K_{q_2\bar{q}_4}(1+\cos^2\theta_2)+K_{q_2\bar{q}_4}'\cos\theta_2\right]\hspace{-0.5cm} \\ & \hspace{1.8cm}\times \int d^2\vek{y} \hspace{0.1cm}\left[f_{q_1q_2}\bar{f}_{\bar{q}_3\bar{q}_4}+f_{\Delta q_1\Delta q_2}\bar{f}_{\Delta\bar{q}_3\Delta \bar{q}_4} + \text{perm.}\right]\\ &\hspace{0.9cm}+ \left[K_{q_1\Delta \bar{q}_3}(1+\cos^2\theta_1)+K_{q_1\Delta \bar{q}_3}'\cos\theta_1\right]\left[K_{q_2\Delta \bar{q}_4}(1+\cos^2\theta_2)+K_{q_2\Delta \bar{q}_4}'\cos\theta_2\right] \\ & \hspace{1.8cm}\times \int d^2\vek{y} \hspace{0.1cm}\left[f_{q_1q_2}\bar{f}_{\Delta \bar{q}_3\Delta \bar{q}_4}+f_{\Delta q_1\Delta q_2}\bar{f}_{\bar{q}_3\bar{q}_4}+ \text{perm.}\right]\bigg\}. \end{split}\end{equation} $K_{q \bar{q}}$, $K_{q \bar{q}}'$, $K_{q\Delta \bar{q}}$ and $K_{q\Delta \bar{q}}'$ are $Q^2$ dependent coupling factors for different polarizations, defined in \cite{Diehl:2012me}. The $\Omega_i$'s are the solid angles and the $\theta_i$'s the polar angles of the leptons. 'perm' stands for permutations of the quark/anti-quark subscripts in the DPDs. For $W^\pm$ the coupling factors are polarization independent and including partonic spin correlations therefore only changes the rate, while for the neutral current the longitudinal polarization changes both the rate and the angular distribution of the process.
The part of the cross section with one interaction involving transversely polarized quarks vanishes upon integration over the transverse boson momenta, while the cross section with both interactions containing transversely polarized quarks is non-zero for the neutral current, \begin{equation}\begin{split} \frac{d\sigma^{(2)}}{dx_id\bar{x}_id\Omega_i} =&\, \frac{1}{C}\sum_{q_1q_2} \sin^2\theta_1\sin^2\theta_2 \int d^2\vek{y} \bigg\{\Big[(K_{\delta q_1 \delta q_1}K_{\delta q_2 \delta q_2}-K_{\delta q_1 \delta q_1}'K_{\delta q_2 \delta q_2}')\cos2(\varphi_1-\varphi_2)\hspace{-1cm}\\ &\hspace{-1cm}-(K_{\delta q_1 \delta q_1}'K_{\delta q_2 \delta q_2}-K_{\delta q_1 \delta q_1}K_{\delta q_2 \delta q_2}')\sin2(\varphi_1-\varphi_2) \Big]2\left[f_{\delta q_1\delta q_2}\bar{f}_{\delta\bar{q}_1\delta\bar{q}_2}+\text{perm.}\right]\bigg\}. \end{split}\end{equation} $K_{\delta q\delta \bar{q}}$ and $K'_{\delta q\delta \bar{q}}$ are $Q^2$ dependent coupling factors for two transversely polarized quarks and the $\varphi_i$'s are the azimuthal angles of the final state leptons. For $W$ bosons the coupling factors are zero for transversely polarized quarks, since the $W$ only couples to left handed quarks. The cross section for the neutral current depends on the azimuthal angle between the two outgoing leptons ($\varphi_1-\varphi_2$). This dependence originates in the correlations between the spins of the initial state quarks.
The projection operators $\Gamma_{++}$ ($\Gamma_{--}$) project out quarks with positive (negative) helicities while $\Gamma_{-+}$ and $\Gamma_{+-}$ give helicity interferences. These can be expressed in terms of operators projecting out un- ($\Gamma_q$), longitudinal- ($\Gamma_{\Delta q}$) and transversely- ($\Gamma_{\delta q}$) polarized quarks \begin{equation}\begin{aligned} \Gamma_{++} &= \frac{\gamma^+}{2}(1+\gamma_5) = \Gamma_q+\Gamma_{\Delta q}, & \Gamma_{-+} &= -\frac{i\sigma^{+1}}{2}(1+\gamma_5) = \Gamma_{\delta q}^1-i\Gamma_{\delta q}^2, \\ \Gamma_{--} &= \frac{\gamma^+}{2}(1-\gamma_5) = \Gamma_q-\Gamma_{\Delta q}, & \Gamma_{+-} &= \frac{i\sigma^{+1}}{2}(1-\gamma_5) = \Gamma_{\delta q}^1+ i\Gamma_{\delta q}^2. \end{aligned}\end{equation} The DPDs can then be organized as a matrix in the light-cone helicity basis \begin{equation}\begin{split} \hspace{-0.5cm} \left( \begin{matrix}\vspace{0.1cm} f_{qq}+f_{\Delta q\Delta q} & -i|y|Mf_{\delta qq} & -i|y|Mf_{q\delta q} & 2y^2M^2f_{\delta q\delta q}^{\,t}\\\vspace{0.1cm} i|y|M f_{\delta q q} & f_{qq}-f_{\Delta q\Delta q} & 2f_{\delta q \delta q} & -i|y|Mf_{q \delta q} \\ \vspace{0.1cm}i|y|Mf_{q\delta q} & 2f_{\delta q \delta q} & f_{qq}-f_{\Delta q\Delta q} & -i|y|Mf_{\delta qq} \\\vspace{0.1cm} 2y^2M^2f_{\delta q \delta q}^{\,t} & i|y|Mf_{q\delta q} &i|y|Mf_{\delta qq} & f_{qq}+f_{\Delta q \Delta q} \end{matrix}\right) \end{split}\end{equation} where the rows (columns) correspond to $++,-+,+-,--$ helicities of the two quarks in the amplitude (conjugate amplitude). The non-interference parton distributions can be interpreted as the probability of finding quarks having specific helicities inside a proton. 
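As a numerical illustration of this matrix, the sketch below builds it for hand-picked sample values of the distributions (assumptions, not fit results) and checks that it is Hermitian with non-negative eigenvalues, as required for a spin density matrix:

```python
import numpy as np

# Numerical illustration of the helicity matrix above, with sample DPD
# values chosen by hand (NOT from any fit) to respect positivity.
f_qq, f_DD, f_dd = 1.0, 0.2, 0.1   # f_{qq}, f_{Delta q Delta q}, f_{delta q delta q}
f_dq, f_qd = 0.1, 0.1              # f_{delta q q}, f_{q delta q}
f_t = 0.05                         # f^t_{delta q delta q}
yM = 0.5                           # |y| M (dimensionless sample value)

a = 1j * yM
rho = np.array([
    [f_qq + f_DD,      -a * f_dq,    -a * f_qd,   2 * yM**2 * f_t],
    [a * f_dq,       f_qq - f_DD,     2 * f_dd,         -a * f_qd],
    [a * f_qd,           2 * f_dd, f_qq - f_DD,         -a * f_dq],
    [2 * yM**2 * f_t,   a * f_qd,     a * f_dq,      f_qq + f_DD],
])

assert np.allclose(rho, rho.conj().T)   # Hermitian by construction
print(np.linalg.eigvalsh(rho))          # all eigenvalues non-negative here
```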
The helicity matrix is therefore positive semi-definite and calculating the eigenvalues we obtain necessary and sufficient positivity conditions \vspace{-0.2cm} \begin{equation}\begin{split} f_{qq} \geq |f_{\delta q\delta q}-y^2M^2f_{\delta q\delta q}^{\,t}| \end{split}\end{equation} \begin{equation}\begin{split} \Big(f_{qq}\pm (f_{\delta q\delta q}-y^2M^2f_{\delta q\delta q}^{\,t})\Big)^2-\Big(f_{\Delta q\Delta q}\mp(f_{\delta q \delta q}+y^2M^2f_{\delta q\delta q}^{\,t})\Big)^2 \geq y^2M^2\Big(f_{\delta q q}\pm f_{q\delta q}\Big)^2. \end{split}\end{equation} This implies the somewhat more transparent but weaker inequalities \begin{equation}\begin{aligned} f_{qq}+f_{\Delta q \Delta q} & \geq 2y^2M^2f^{\,t}_{\delta q\delta q} & f_{qq}-f_{\Delta q \Delta q} & \geq 2f_{\delta q \delta q}. \end{aligned}\end{equation} \section{Summary} Spin correlations as well as color and flavor interference result in a large number of double parton distributions. Many of these distributions contribute to similar terms in the double Drell-Yan cross section. Longitudinal polarization changes the overall rate for the charged bosons, while for the neutral bosons the angular distribution is also affected. The spin vectors of transversely polarized quarks lead to a dependence on the azimuthal angle between the two outgoing leptons. Similar features as those appearing in the double Drell-Yan cross section are expected to be present also in other types of processes, such as double dijet production, with larger cross sections but a dramatic increase of complexity due to their color structure. We used the probability interpretation of double parton distributions to derive constraints on the polarized parton distributions. Similar constraints have proven very useful for single parton distributions. \section{Acknowledgments} I am grateful to Markus Diehl for collaboration and helpful discussions.
\section{Introduction} The Central Molecular Zone (CMZ) at Galactocentric radii less than $\sim$500 pc contains the most massive, dense, and turbulent molecular clouds in the Galaxy \citep{RodriguezFernandez2008,Ferriere2007, RodriguezFernandez2006, Pierce-Price2000, MorrisSerabyn1996, BallyGC1987,BallyGC1988} along with some of the most compact and massive star clusters \citep{Figer2002,Habibi2013}. \citet{LisCarlstrom1994} found an exceptionally massive and dense cloud seen in projection against the bright near- and mid-infrared emission from the CMZ, G0.253+0.016. Located towards the brightest portion of the Galactic plane at infrared wavelengths, this cloud was clearly seen in silhouette against the CMZ in the IRAS mid-infrared images and is the most prominent infrared dark cloud (IRDC) in the sky \citep{Menten_etal2005,Arendt2008,Ramirez2008}. \citet{Lis_etal_1994} and \citet{LisMenten1998} demonstrated that this cloud has an unusually high mass ($M > 10^5$ M{$_{\odot}$}) and density ($n(H_2) > 10^5$ cm$^{-3}$) and is very compact ($r < $ 3 pc). \citet{Lis2001} used ISO data and millimeter-wave spectra to show that the dust temperature is low but the gas is hot. In the cloud center, 15 K $< T_{dust} <$ 22 K, increasing to $\sim$27 K at the cloud edges \citep{Longmore2012}. On the other hand, the gas is hot with $T_{gas} \sim$ 80 K \citep{Ao2013}. The mid-infrared fine-structure cooling lines measured with ISO indicated that the near-UV interstellar radiation field surrounding the cloud is about $10^3$ times stronger than in the Solar vicinity \citep{Lis2001}, consistent with warmer dust at the cloud periphery. \citet{Kauffmann2013} presented interferometric SiO and N$_2$H$^+$ maps of G0.253+0.016 that exhibit some localized narrow spectral lines with central velocities covering the wide velocity range seen in single-dish data \citep{Rathborne2014_MALT90}.
The large line-widths, the bright emission in tracers such as SiO, and the complicated kinematics were interpreted as evidence for shocks, possibly indicating an early stage of a cloud-cloud collision \citep{Higuchi2014}. G0.253+0.016 (also known as the `Lima Bean' or the `Brick') has been proposed to be a possible progenitor to a massive star cluster \citep{Longmore2012}. Fitting the dust continuum SED to the Herschel 60, 160, 250, 350, and 500 $\mu$m data obtained from the Herschel Space Observatory Hi-GAL survey \citep{Molinari2011, Molinari2010a,Molinari2010b} and 1100 $\mu$m from the Bolocam Galactic Plane Survey (BGPS; Bally et al. 2010) reveals a dust temperature of $\sim$20 K, H$_2$ column density of $\sim 3.3 \times 10^{23}$~cm$^{-2}$, and mass of $\sim 10^5$ M{$_{\odot}$}\ \citep{Longmore2012,Longmore2013}. Absorption in the near- and far-infrared and application of the virial theorem to spectral line data \citep{Rathborne2014_MALT90} give similar results for the mass and column density. With a mean radius of about 3 pc and a mean density $n(H_2) \approx 7 \times 10^{4}$ cm$^{-3}$, G0.253+0.016 has sufficient mass to potentially form a young massive cluster \citep{PortegiesZwart2010}. However, previous studies found no evidence for internal heating sources or embedded HII regions \citep{Lis2001}. Only one H$_2$O maser has been found in G0.253+0.016 \citep{Lis_etal_1994,BreenEllingsen2011}. Thus, G0.253+0.016 may either be in the very earliest stages of high-mass star or cluster formation, or because of the extreme conditions in the CMZ, may fail to form many stars. Most star formation in the CMZ occurs on a twisted elliptical ring-like structure with a projected radius of about 100 pc and offset towards positive Galactic longitudes with respect to the dynamical center of the Galaxy \citep{RodriguezFernandez2006,Molinari2011}.
The ring contains the Sgr C, Sgr A (which contains the supermassive black hole marking the center of the Galaxy), Sgr B1, and Sgr B2 star forming complexes. G0.253+0.016 is the most prominent member of a chain of IRDCs at positive Galactic longitudes between Sgr A and Sgr B2 \citep{Lis2001,Longmore2013}. \citet{Longmore2012} and \citet{Rathborne2014_MALT90} found that G0.253+0.016 exhibits a line-width near zero intensity of about $\sim$50 to 60 km~s{$^{-1}$} , and an internal velocity dispersion of at least 16 km~s{$^{-1}$}\ typical for CMZ clouds that implies a crossing time of about 0.17 Myr. Models of orbits in barred, tri-axial potentials indicate that gas in the inner CMZ is likely to be moving on $x_2$ orbits elongated along the minor axis of the bar \citep{Contopoulos1980,Athanassoula1992b,Athanassoula1992a}. The major axis of the bar is oriented between 20 and 45 degrees with respect to our line of sight \citep{Binney1991,RodriguezFernandez2008}. G0.253+0.016 is located at a projected distance of about 50 pc from Sgr A and has a radial velocity V{$_{LSR}$}\ $\approx$ 40 km~s{$^{-1}$}. Assuming that it is moving close to the plane of the sky with a velocity of about 150 to 180 km~s{$^{-1}$}\ along an elongated $x_2$ orbit (this is the speed expected for a circular orbit given the enclosed mass), it would have passed near Sgr A about 0.25 to 0.3 Myr ago \citep{Kruijssen2014}. Clouds injected onto $x_2$ orbits near apocenter with velocities lower than the circular orbit speed at that location tend to plunge deeper into the potential well. Following pericenter passage near Sgr A, they climb out of the potential to a second apocenter located on the opposite side of the nucleus, and on the opposite side of the Galaxy from the first apocenter. Such plunging, elongated orbits precess at rates which depend on the details of the enclosed mass distribution and orbit eccentricity. 
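The timescales quoted above can be reproduced with rounded constants; the following back-of-the-envelope sketch treats the crossing time as $R/\sigma$ and uses the projected 50 pc distance as a lower limit on the path length since pericenter:

```python
# Reproducing the quoted timescales with rounded constants (a sketch).
PC_KM = 3.086e13   # km per parsec
MYR_S = 3.156e13   # seconds per Myr

# Crossing time t_cross ~ R / sigma for R ~ 3 pc, sigma ~ 16 km/s:
sigma = 16.0                       # km/s, internal velocity dispersion
R = 3.0                            # pc, cloud radius
t_cross = R * PC_KM / sigma / MYR_S
print(round(t_cross, 2))           # ~0.18 Myr, close to the quoted 0.17 Myr

# Time since pericenter passage near Sgr A, using the projected 50 pc
# distance as a lower limit on the path length and a mid-range speed:
d, v_orb = 50.0, 165.0             # pc, km/s (150-180 km/s quoted)
t_since = d * PC_KM / v_orb / MYR_S
print(round(t_since, 2))           # ~0.3 Myr, consistent with 0.25-0.3 Myr
```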
\citet{Longmore2013} proposed that the encounter of G0.253+0.016 with Sgr A may have compressed this cloud. In this paper, high-angular resolution, high dynamic range, and sensitive ALMA observations of J = 1$-$0 {HCO$^+$}\ are presented that reveal an extensive network of filaments seen in absorption toward G0.253+0.016. The most prominent filament (Figure \ref{fig1}) has an absorption line-width of over 30 km~s{$^{-1}$}\ and is nearly parallel to the bright, non-thermal radio filaments which cross the Galactic plane at longitude 0.18\arcdeg\ a few arc minutes from G0.253+0.016. Dozens of filaments with less than 20 km~s{$^{-1}$}\ line-widths are predominantly seen on the blueshifted side of the {HCO$^+$}\ emission from G0.253+0.016. In this paper, it is shown that these absorption features are produced by very low-density, subthermally excited {HCO$^+$}\ absorbing optically thick, but nevertheless subthermally excited, background {HCO$^+$}\ emission from this cloud. If located on the near surface of G0.253+0.016, their blueshifted velocities with respect to the background {HCO$^+$}\ emission would indicate that the cloud surface layers are expanding, contrary to the naive expectation of global collapse if it were a progenitor to a massive cluster. \section{Observations} Observations presented here were obtained with the Atacama Large Millimeter Array (ALMA) located on the Chajnantor plateau in the Northern Chilean Andes. Full details of the observations and data reduction will be presented in another paper \citep{Rathborne2014_ALMA}. A short summary is presented below. G0.253+0.016 was imaged in the 3 mm wavelength band (ALMA band 3) using 25 of ALMA's 12-meter diameter antennas. The synthesized beam produced by ALMA at this frequency has a diameter of 1.7\arcsec\ corresponding to a linear scale of 0.07 pc at the 8.5 kpc distance of the CMZ.
Because the primary-beam field-of-view of ALMA is small ($\sim$ 70\arcsec) and G0.253+0.016 subtends a 1\arcmin\ by 3\arcmin\ region, a mosaic consisting of 13 separate pointings was required. Baselines ranged from 13 to 455 meters. G0.253+0.016 was imaged on six separate occasions over the period 29 July to 1 August 2012. Each data set was independently calibrated prior to merging. The ALMA correlator was configured to measure the continuum in a 1.875 GHz bandpass, and to observe 10 different molecular emission lines with a resolution of 0.488 MHz (about 1.5 km~s{$^{-1}$}\ velocity resolution) covering a velocity range from about V{$_{LSR}$}\ = -200 km~s{$^{-1}$}\ to about +240 km~s{$^{-1}$} . Single-dish observations of G0.253+0.016 in the same transitions as observed with ALMA were obtained with the 22 meter Mopra radio telescope in Australia as part of the MALT90 Survey \citep{Foster2011,Jackson2013,Foster2013}. The resulting maps were used to restore 0-spacings in the ALMA data in the Fourier domain \citep{Rathborne2014_ALMA}. Final data cubes were gridded to a pixel scale of 0.35\arcsec\ per pixel. Because the flux in adjacent channels in the data is correlated, the data were re-sampled to 3.4 km~s{$^{-1}$}\ per channel. The rms noise in each frequency channel is about 5 mJy (0.26 K) in the 1.7\arcsec\ beam. At the rest frequency of the HCO$^+$ J = 1-0 transition (89.1885 GHz) a flux of 1 Jy in the 1.7\arcsec\ diameter beam (or a surface brightness of 1 Jy/beam) corresponds to a brightness temperature of 53.5 K. All temperatures and fluxes refer to values above the cosmic microwave background or any other smooth background which are resolved out by ALMA and removed from the single-dish observations by beam-switching. \section{Results} The blueshifted emission in {HCO$^+$}\ shows an extensive network of curved and linear filamentary structures seen in absorption against the warmer background emission from G0.253+0.016. 
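The quoted conversion between Jy/beam and brightness temperature follows from the Rayleigh-Jeans relation for a Gaussian beam; a quick consistency check (rounded constants, so the result differs from the quoted 53.5 K at the percent level):

```python
import math

# Rayleigh-Jeans Jy/beam -> Kelvin conversion for the quoted beam and
# line frequency (a consistency check with rounded constants).
c = 2.998e8            # m/s
k_B = 1.381e-23        # J/K
nu = 89.1885e9         # Hz, HCO+ J = 1-0
theta = 1.7 * math.pi / (180 * 3600)             # beam FWHM, radians
omega = math.pi * theta**2 / (4 * math.log(2))   # Gaussian beam solid angle, sr

S = 1e-26              # 1 Jy in W m^-2 Hz^-1
T_B = S * c**2 / (2 * k_B * nu**2 * omega)
print(T_B)             # ~53 K per Jy/beam; the paper quotes 53.5 K
print(0.005 * T_B)     # 5 mJy rms -> ~0.26 K, matching the stated noise
```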
Figure \ref{fig1} shows an integrated intensity map of G0.253+0.016 in the J = 1-0 {HCO$^+$}\ transition. While the ALMA {HCO$^+$}\ integrated intensity map is dominated by area-filling emission, the optically thinner {H$^{13}$CO$^+$}\ cubes show bright arcs resembling bows moving towards increasing R.A. (left in the figures). This overall morphology is also seen in other optically thin (or at least thinner than {HCO$^+$}) tracers such as HNCO, H$_2$CS, SiO, and SO \citep{Rathborne2014_ALMA}. Figures 2 through 9 show individual 3.4 km~s{$^{-1}$}\ wide channel maps in {HCO$^+$} . The yellow arrows mark the location of a pair of filaments having a length of over $L$ $>$ 60\arcsec\ and a radial velocity dispersion greater than 20 km~s{$^{-1}$} . Dozens of filaments with line-widths less than 10 to 20 km~s{$^{-1}$}\ are located on the blueshifted (low velocity) side of the {HCO$^+$}\ emission. These filaments also have widths of only a few arcseconds, but their lengths tend to be shorter than the broad-line filaments. Some filaments form clusters of nearly parallel strands which at extreme blueshifted radial velocities (V{$_{LSR}$}\ = 5 to 30 km~s{$^{-1}$} ) blend into a complex network of absorption covering most of the spatial extent of G0.253+0.016. The morphology of the absorption filaments do not show any correlation with the underlying filamentary structure of G0.253+0.016 seen in optically thin emission lines. The absorption filaments can be subdivided into two categories based on their observed properties: {\it Broad-line absorption (BLA)} filaments have widths of less than a few arcseconds ($<$ 0.04 to 0.08 pc), lengths of 30 to 50 arcseconds ($\sim$ 1 to 2 pc), and absorption profiles extending over a velocity range larger than 20 km~s{$^{-1}$}\ around V{$_{LSR}$}\ = 10 to 40 km~s{$^{-1}$} . 
The two most evident BLAs (marked by yellow arrows in Figures 1 through 9) are nearly parallel to the nearby G0.18 non-thermal filaments seen in 20 and 6 cm wavelength radio continuum images of \citet{Yusef-Zadeh2004}. These BLAs have position angles (measured from North to East) of PA $\sim$ 125\arcdeg\ to 140\arcdeg , nearly perpendicular to the Galactic plane. The nearby non-thermal filaments have PA $\sim$ 125\arcdeg . Analysis of spatial-velocity cuts and spectra orthogonal to the most prominent BLA filament in Figure 1 shows that it extends from below 17 km~s{$^{-1}$}\ to about 45 km~s{$^{-1}$}\ (Figure \ref{fig10}). The intensity in the faintest part of the filament ranges from about 0.17 to greater than 0.8 times the {HCO$^+$}\ intensity in the surrounding region, which has a brightness temperature of 3 to 4 K (0.05 to 0.08 Jy/beam). Thus, the corresponding brightness temperatures at the most opaque portions of the BLA filaments range from $\sim$ 0.5 K (0.01 Jy/beam) at the darkest part of the main BLA to about 2 K (0.04 Jy/beam) at more typical locations. The lowest opacity occurs near the ends of the major BLA filaments. The deepest absorption occurs at the redshifted end of the feature near V{$_{LSR}$}\ $\approx$ 37 km~s{$^{-1}$}\ at J2000 $\approx$ 17:46:09.79, $-$28:42:03.5. The mean flux along cross-cuts is about 0.5 times the surrounding cloud flux. Thus, the optical depth of the absorbing gas ranges from $\sim$ 0.2 to about 2 with a mean value of 0.7. The {HCO$^+$}\ BLA filaments have no counterparts in any of the other spectral lines. Specifically, they are not seen in other optically thick species such as HCN. {\it Narrow-line absorption (NLA)} {HCO$^+$}\ filaments, some of which are also seen in absorption in other species with high optical depth such as HCN, have line-widths of less than 20 km~s{$^{-1}$} . Some tend to cluster into filament bundles which, taken together, have velocity extents up to 20 km~s{$^{-1}$} .
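The optical depths quoted here follow from the depth of the absorption relative to the surrounding cloud emission via $\tau = -\ln(I/I_0)$; a quick check:

```python
import math

# The quoted optical depths follow from the absorption depth relative to
# the surrounding cloud emission: tau = -ln(I / I_0).
def tau(depth_ratio):
    return -math.log(depth_ratio)

print(tau(0.8))    # ~0.22: shallowest absorption, quoted as ~0.2
print(tau(0.5))    # ~0.69: mean cross-cut flux, quoted mean of 0.7
print(tau(0.17))   # ~1.77: deepest absorption, quoted as "about 2"
```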
These filaments are mostly located at V{$_{LSR}$}\ $\sim$ 10 to 40 km~s{$^{-1}$}, blueshifted relative to the median radial velocity of G0.253+0.016. Table~1 lists several examples of NLA filaments. Near the Northwest portion of the cloud, several arcs form nearly concentric loops with radii of about 6\arcsec\ and 13\arcsec\ between V{$_{LSR}$}\ = 11 and 21 km~s{$^{-1}$}\ (Figures \ref{fig2} and \ref{fig3}). NLA1 is a narrow line-width absorption feature about 20\arcsec\ south of the two BLAs. NLA2 consists of several parallel east-west strands near the middle of the cloud around $\delta$ = $-$28:42:50 (Figures \ref{fig5}, \ref{fig6}, and \ref{fig7}). NLA3 is a cluster of bent filaments near the redshifted southern part of G0.253+0.016 around $\delta$ = $-$28:43:00 to $-$28:44:00 (Figures \ref{fig4}, \ref{fig5}, and \ref{fig6}). Many NLA filaments blend into broad regions of absorption on the blueshifted, low-radial velocity side of the {HCO$^+$}\ data cube. Figure \ref{fig11} shows spectra of an NLA filament near the northern side of the NLA3 cluster. Figure \ref{fig12} shows the {HCO$^+$}\ integrated intensity map of G0.253+0.016 with the locations of the BLA and NLA filaments listed in Table~1 marked. At the extreme blueshifted end of the data cube, the NLA filament network blends into an area-filling absorption. The channel maps between V{$_{LSR}$}\ = 14 and 21 km~s{$^{-1}$}\ indicate that most of the surface of G0.253+0.016 is covered by NLA filaments. The fractional area covered diminishes toward higher radial velocities. Most NLAs are shorter than the BLAs and many are curved. A few, such as those centered near J2000 = 17:46:10.6, $-$28:41:55 trace nearly complete circles. Others are partial arcs. The distinction between BLA and NLA filaments is not sharp; they form a continuum with decreasing line-widths and lengths from BLA1 and 2 (Table~1) to the narrower and shorter NLA filaments. 
Comparison with the HCN and HNCO data (not shown) indicates that some {HCO$^+$}\ NLAs are also seen as absorption features in these tracers (Figure \ref{fig13}). A few {HCO$^+$}\ NLA segments go into emission where their surroundings are dim in HCN or HNCO, such as near the projected edges of the cloud. These filaments are confined to within a few arc-seconds of the cloud's edge, indicating that they may be close to the surface of G0.253+0.016. The distribution of {HCO$^+$}\ absorption features has no correlation with the underlying structure of G0.253+0.016 seen in other tracers. G0.253+0.016 is filamentary in optically thin emission lines such as {H$^{13}$CN}\ where G0.253+0.016 resembles a bow shock propagating towards the east. Figure \ref{fig14} shows the optically thin {H$^{13}$CO$^+$}\ emission in three broad velocity ranges: V$_{LSR}$ = 10.8 to 27.8 km~s{$^{-1}$}\ (blue), 31.2 to 41.4 km~s{$^{-1}$}\ (green), and 44.8 to 61.8 km~s{$^{-1}$}\ (red). The east-facing bow shapes are not visible in the optically thick {HCO$^+$}\ (or HCN) data, presumably because this emission is area-filling and these tracers only probe cloud structure to the $\tau \approx$ 1 surface. The emission-line structure of G0.253+0.016 is discussed in \citet{Rathborne2014_ALMA}. For reference, Figure \ref{fig15} shows the 3 mm continuum emission from \citet{Rathborne2014_ALMA}. It is unclear what fraction of this emission is produced by dust and what fraction is produced by free-free emission. 
\subsection{Opacity and Excitation Conditions for {HCO$^+$} } The typical {HCO$^+$}\ brightness temperature in both the ALMA and single dish observations is only about 2 to 4 K \citep{Rathborne2014_ALMA, Rathborne2014_MALT90}, considerably lower than the gas temperatures of 60 to 80 K inferred from NH$_3$ or H$_2$CO observations \citep{Lis2001,MillsMorris2013,Ao2013} and even lower than the dust temperature, which has been estimated to be around 19 K in the cloud center and about 27 K at the cloud surface \citep{Lis2001,Pierce-Price2000,Longmore2012}. The ALMA data show that the {HCO$^+$}\ emission is area filling at arcsecond scales. Thus, the low brightness temperature must be due to a low excitation temperature at the $\tau_{12}$ = 1 surface, implying densities much lower than the critical density for the J=1$-$0 89 GHz transition of {HCO$^+$}\ (for which the critical density is $\sim 2 \times 10^5$ cm{$^{-3}$} ). Comparison with the {H$^{13}$CO$^+$}\ ALMA images demonstrates that the {HCO$^+$}\ emission is optically thick at most locations. The area-integrated intensity of {HCO$^+$}\ divided by that of {H$^{13}$CO$^+$}\ is $R_{12/13}$ = I({HCO$^+$} )/I({H$^{13}$CO$^+$} ) = $10\pm 1$ for the northern part of G0.253+0.016 and $R_{12/13} = 11\pm 1$ for the southern part for integration areas ranging from 0.3 to 2.1 square arc-minutes. The maximum {HCO$^+$}\ brightness (0.23 Jy/beam or 12.3 Kelvin) occurs in a small 3\arcsec\ by 8\arcsec\ elongated knot near the southern part of G0.253+0.016 at J2000 = 17:46:08.5, $-$28:43:33 at V{$_{LSR}$}\ = 55 km~s{$^{-1}$} . At this location, the ratio $R_{12/13} \approx 12$. Thus, the observed brightness ratio, $R_{12/13}$, is lower than the abundance ratio, $X_{12/13}$ = $[^{12}C] / [^{13}C]$. \citet{LangerPenzias1990} measured the abundance ratio $X_{12/13}$ as a function of Galactocentric radius, finding that in the CMZ $X_{12/13} \approx $ 24. 
In a more recent study, \citet{Riquelme2010} found a similar ratio in most parts of the CMZ. However, near cloud edges, some ratios as high as 40 were found. \citet{Milam2005} found a lower value in the CMZ of $X_{12/13} \approx $ 17 $\pm$ 7. Thus, there may be some intrinsic variation, possibly due to local enrichment or depletion, such as might be caused by local stellar processing and injection into the ISM or isotope-selective photo-dissociation \citep{RolligOssenkopf2013,LyonsYoung2005, BallyLanger1982}. Assuming that $X_{12/13}$ is 24 in G0.253+0.016, that the excitation temperatures of {H$^{13}$CO$^+$}\ and {HCO$^+$}\ are the same, and that for {HCO$^+$}\ the observed brightness temperature is equal to the excitation temperature, as expected for an optically thick emission line, the $R_{12/13}$ ratio of $\sim$10 indicates that the mean emission line optical depth of {HCO$^+$}\ is $$ \tau _{12} \approx X_{12/13} \ln \left[ R_{12/13} / (R_{12/13} - 1) \right] \sim 2.5 $$ For $X_{12/13}$ ranging from 17 to 40, $R_{12/13}$ = 10 implies that $\tau_{12}$ ranges from about 1.8 to 4.2. In the {HCO$^+$}\ photosphere (where $\tau_{12} \sim$ 1), the density must be much lower than the critical density, implying sub-thermal excitation, and densities much lower than the mean density of G0.253+0.016. The brightness temperature of {HCO$^+$}\ implies that $T_{ex} <$ 10 K over most of the G0.253+0.016 photosphere. \citet{Clark2013} modeled both the gas and dust temperatures in G0.253+0.016, finding that high gas temperatures and lower dust temperatures can be explained by a very high cosmic-ray flux and ionization rate. The enhanced cosmic-ray flux may also enhance the abundances of certain molecules such as methanol \citep{Yusef-Zadeh2013} and {HCO$^+$} . {HCO$^+$}\ is the most abundant molecular ion next to H$_3^+$ in the molecular phase of the interstellar medium. 
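As a quick numerical check of this relation (a minimal Python sketch, not part of the analysis; the function name is illustrative):

```python
import math

def tau_12(R_12_13, X_12_13):
    """Optical depth of the HCO+ J=1-0 line inferred from the line ratio,
    assuming HCO+ is optically thick, H13CO+ is thin with tau13 = tau12/X,
    and both share one excitation temperature: R = 1 / (1 - exp(-tau12/X))."""
    return X_12_13 * math.log(R_12_13 / (R_12_13 - 1.0))

print(tau_12(10, 24))              # ~2.5, as quoted above
print(tau_12(10, 17), tau_12(10, 40))
```

With $X_{12/13}$ = 24 and $R_{12/13}$ = 10 this reproduces the quoted $\tau_{12} \sim 2.5$.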
Collisional excitation of its J=1$-$0 transition requires a collision partner density (H$_2$ and He) of $n > 10^4$ cm$^{-3}$ to thermalize. {HCO$^+$}\ has a relatively high abundance of $n({\rm HCO^+}) / n({\rm H_2}) \sim 10^{-9}$ to $10^{-7}$ in nearby molecular clouds. {HCO$^+$}\ is created by the reaction $\rm{CO + H_3^+ \rightarrow HCO^+ + H_2}$ and destroyed by reactions with H$_2$, $\rm {HCO^+ + H_2 \rightarrow H_2CO + H^+}$, or by recombination with electrons, $\rm{ HCO^+ + e^- \rightarrow CO + H}$. The large abundance of $\rm { H_3^+}$ in the CMZ \citep{Goto2008, Goto2011} may drive the {HCO$^+$}\ abundance in the CMZ to values near the upper-end of the above range. The column densities of the absorption filaments can be estimated from the absorption optical depth. Following \citet{LisztLucas2004}, the column density of a linear molecule having dipole moment $\mu$ (about 4 Debye for {HCO$^+$} ) absorbing from its ground state is $$ N_0 = {{8.0 \times 10^{12} \int \tau_{10} dV } \over { \mu^2 (1 - \exp(-h \nu_{10} / k T_{ex}))}} $$ where $\nu_{10}$ is the J = 1$-$0 frequency (89.1885 GHz) for {HCO$^+$}\ and $T_{ex}$ is the excitation temperature. Measuring the {HCO$^+$}\ fluxes in the filament and comparing these with the surrounding off-filament pixels indicates that typically about one-half of the background light is absorbed, implying a typical absorption optical depth of about 0.7. For the broad-line filaments, $dV \sim$ 10 to 30 km~s{$^{-1}$}\ and $\tau_{10} \sim $0.2 to 2 with a mean value of 0.7. For a typical equivalent width (integral of absorption optical depth over the line width) of 10 km~s{$^{-1}$} , the column density of {HCO$^+$}\ absorbers is $N_0 \approx $ $10^{13}$ cm$^{-2}$ for $T_{ex}$ = 2.73 K, and $6 \times 10^{13}$ cm$^{-2}$ for $T_{ex}$ = 10 K. 
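The quoted columns can be reproduced approximately as follows (a sketch with rounded constants; the rotational partition function correction $Q_{rot} \approx k T_{ex}/hB + 1/3$, with $B \approx 44.6$ GHz for {HCO$^+$}, is an assumption made here to convert the ground-state column to a total column):

```python
import math

H_NU_OVER_K = 4.28   # h*nu/k in Kelvin for the 89.1885 GHz J=1-0 line
H_B_OVER_K = 2.14    # h*B/k in Kelvin (B ~ 44.6 GHz), an assumption
MU_DEBYE = 4.0       # dipole moment in Debye, as in the text

def n_ground(eq_width_kms, t_ex):
    """Ground-state column density [cm^-2] from the formula quoted above
    (equivalent width in km/s)."""
    return 8.0e12 * eq_width_kms / (
        MU_DEBYE**2 * (1.0 - math.exp(-H_NU_OVER_K / t_ex)))

def n_total(eq_width_kms, t_ex):
    """Total column, scaling the ground state by an approximate
    rotational partition function (assumed here)."""
    q_rot = t_ex / H_B_OVER_K + 1.0 / 3.0
    return n_ground(eq_width_kms, t_ex) * q_rot

print(n_total(10.0, 2.73))   # ~1e13 cm^-2
print(n_total(10.0, 10.0))   # ~7e13 cm^-2, order of the 6e13 quoted
```

The residual disagreement at $T_{ex}$ = 10 K reflects the crudeness of the assumed partition function and rounding of constants.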
Table~2 lists the excitation temperatures ($T_{ex}$), opacities ($\tau$), and brightness temperatures ($T_R$) for the J = 1$-$0 transition of {HCO$^+$}\ for a range of molecular hydrogen volume densities ($n$) and {HCO$^+$}\ column densities ($N$) using the radiative transfer code RADEX \citep{vanderTak2007}. These calculations show that an H$_2$ density of $1 \times 10^3$ cm$^{-3}$, an {HCO$^+$}\ column density of $1 - 10 \times 10^{13}$ cm$^{-2}$, and a line width $\Delta$V = 10 km~s{$^{-1}$}\ in a gas with a kinetic temperature of 60 K result in an excitation temperature of about 2.8 $-$ 3.2 K and an optical depth of 0.8 $-$ 7 in the J = 1$-$0 line. These parameters would produce the absorption seen in the NLA filaments. The BLA filaments require a slightly lower H$_2$ density to produce the low observed brightness temperature and an opacity of order unity required to explain the amount of background attenuation. The {HCO$^+$}\ radiation field of G0.253+0.016 may contribute to the excitation of the upper state by effectively raising the background radiation temperature (radiative pumping). Detailed modeling of the excitation conditions and radiative transfer using realistic cloud geometries is needed to determine if the low observed brightness temperature of the NLA and BLA filaments can be used to place a lower bound on the separation between the absorbing gas and G0.253+0.016. The width of these filaments is less than about 2\arcsec\ or about 0.08 pc. (If only 1\arcsec\ wide, the filaments must be completely dark and only a lower bound can be placed on their optical depth. Higher angular resolution ALMA observations will be obtained during ALMA Cycle 1 to measure their actual widths.) 
From the estimated column density and {HCO$^+$}\ abundance $X( HCO^+ ) = 10^{-7}$, assuming that the depth of the filament is comparable to its projected width, the higher excitation temperature (T$_{ex}$ = 10 K) would imply an H$_2$ column density of $6 \times 10^{20}$ cm$^{-2}$ and a volume density of around $2.5$ to $5 \times 10^3$~cm$^{-3}$. A density larger than this value would drive the absorption filaments into emission in {HCO$^+$} . A lower {HCO$^+$}\ abundance increases the estimated volume density proportionally and, conversely, a higher {HCO$^+$}\ abundance decreases the H$_2$ volume density. Thus, either the {HCO$^+$}\ abundance must be higher than $X( HCO^+ ) = 10^{-7}$, or the line-of-sight depth of the filaments must be larger than their projected widths. It is possible that the filaments are sheets seen edge-on so that the line-of-sight depth is much larger than the projected width of the filament. In such a sheet, the column density to volume density ratio can be much larger than for a cylinder. For a given {HCO$^+$}\ abundance, the observed column density requires a volume density that is lower by the ratio of the line-of-sight depth of the sheet divided by its projected width on the plane of the sky. Such edge-on sheets may be produced by shocks propagating into molecular media at right angles to our line-of-sight. With n(H$_2$) $< 5 \times 10^3$~cm$^{-3}$, the densities in the absorption filaments must be lower than the density at the $\tau \sim$ 1 surface in {HCO$^+$} , which is sub-thermally excited, and well below the mean density of G0.253+0.016, $n(H_2) \approx 7 \times 10^4$ cm$^{-3}$ \citep{Lis2001, Longmore2012,Rathborne2014_MALT90}. Inspection of the {HCO$^+$}\ and HCN data shows that both tracers are heavily absorbed on the blueshifted, approaching side of the spectral line profiles. Thus, there is a pervasive layer of sub-thermally excited, low density {HCO$^+$}\ and HCN bearing gas in front of G0.253+0.016. 
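The column and volume density arithmetic above can be sketched as follows (assuming 1\arcsec\ corresponds to about 0.04 pc at the distance of the Galactic center, consistent with the filament widths quoted earlier):

```python
PC_CM = 3.086e18     # parsec in cm

N_HCOP = 6e13        # HCO+ column density for T_ex = 10 K [cm^-2]
X_HCOP = 1e-7        # assumed HCO+ abundance relative to H2
N_H2 = N_HCOP / X_HCOP   # H2 column density -> 6e20 cm^-2

def n_h2(width_pc):
    """H2 volume density if the line-of-sight depth equals the width."""
    return N_H2 / (width_pc * PC_CM)

print(N_H2)                     # 6e20 cm^-2
print(n_h2(0.08), n_h2(0.04))   # ~2.4e3 and ~4.9e3 cm^-3 for 2" and 1"
```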
If associated with G0.253+0.016, this gas would indicate the presence of an extended layer expanding from G0.253+0.016 at velocities of 10 to 20 km~s{$^{-1}$}\ over the entire face of this cloud. While NLA filaments near V{$_{LSR}$}\ = 0 km~s{$^{-1}$}\ could be located anywhere between the Sun and G0.253+0.016, the higher radial velocity NLA filaments with V{$_{LSR}$}\ $>$ 10 km~s{$^{-1}$}\ are most likely to be in the CMZ (local gas is found between V{$_{LSR}$}\ $\sim$ $-$10 to +10 km~s{$^{-1}$}\ and the Galactic spiral arms between the Sun and the CMZ are at V{$_{LSR}$}\ $\sim$ $-$ 10 to $-$65 km~s{$^{-1}$} ). Near the projected edge of G0.253+0.016 some NLA filaments go faintly into emission in HCN and HNCO. Because these features disappear a few arc seconds beyond the cloud's edge, they must be physically close to G0.253+0.016, suggesting that they trace sub-thermal {HCO$^+$}\ and HCN molecules on the near surface of the cloud. \section{The Dynamic State of G0.253+0.016: Collapse, Expansion, or Both?} \citet{Rathborne2014_MALT90} analyzed millimeter and sub-millimeter-wave emission in multiple transitions from G0.253+0.016 obtained with the Mopra and APEX single-dish telescopes. In addition to the large north-south velocity gradient, they found that the optically thick dense-gas tracers were systematically redshifted with respect to the optically thin and hot gas tracers, indicating radial motions which could be interpreted as either due to expansion or collapse. If the brighter, redshifted emission in optically thick species such as {HCO$^+$}\ and HCN originates in an externally-heated layer at the {\it front surface}, then the cloud is contracting. \citet{Rathborne2014_MALT90} refer to this as their `Baked Alaska' model. The bright blueshifted emission expected from the far-side heated layer would be absorbed by {HCO$^+$}\ molecules in the supersonically turbulent, colder, cloud interior. 
This interpretation requires that the excitation temperature of the {HCO$^+$}\ J=1$-$0 transition in the cloud surface layer producing the redshifted line component must be higher than the excitation temperature of the component producing the lower-velocity and lower brightness-temperature emission in the cloud interior. Alternatively, if the excitation temperature in the layer where the optical depths reach a value of order unity is low compared to the excitation temperature deep in the cloud interior, then these observations can be interpreted as evidence for expansion of the cloud surface layers. For sub-thermal excitation of molecules at the cloud surface, collapse would produce redshifted absorption while expansion would produce blueshifted absorption. At lower, blueshifted radial velocities, the {HCO$^+$}\ J=1$-$0 transition is seen as a network of absorption filaments against brighter background emission. The brighter, background {HCO$^+$}\ emission appears to fill the synthesized ALMA beam and to cover most of the projected surface area of G0.253+0.016 with a peak brightness temperature of order 2 to at most 10 K, well below the gas kinetic temperature, $T_K > $ 60 K. A peak line temperature lower than $T_K$ requires that either the emitting region fills only a small portion of the telescope beam (`beam dilution'), or a small fraction of the spectral channel (`frequency dilution'), or that the excitation temperature $T_{ex}$ at the $\tau \approx$ 1 surface be much lower than $T_K$. The bright {HCO$^+$}\ emission between the absorption filaments appears smooth on scales larger than the 1.7\arcsec\ ALMA beam, and continuous from one velocity channel to the next, indicating that beam and frequency dilution of the {HCO$^+$}\ emission is unlikely. 
Thus, the relatively low brightness temperature of {HCO$^+$}\ is most likely due to low excitation temperature, implying that the H$_2$ density near the $\tau$ = 1 surface is much lower than the critical density for the {HCO$^+$}\ J = 1$-$0 transition. Because the {HCO$^+$}\ filaments are seen in absorption against this background emission, they must have an even lower excitation temperature than the regions seen in emission. If these blueshifted features are at the cloud surface, the G0.253+0.016 surface layers must be expanding away from the cloud center of mass despite the large mass and gravity field. This is consistent with the second (`P-Cygni') interpretation of the single-dish data presented by \citet{Rathborne2014_MALT90}. The mass-loss rate is difficult to estimate because the density of the absorbing layer is an upper bound, the column density is a lower bound (based on an assumed {HCO$^+$}\ abundance), and the thickness of the absorbing layer is not known. For a given density $n(H_2)$, and assuming that all sides of the cloud are expanding in a manner similar to that side facing the Sun, the mass-loss rate is given by $\dot M \sim 4 \pi f R^2_{eff} \mu m_H n(H_2) V$ where $f$ is the fraction of the cloud surface area occupied by expanding gas, $R_{eff} \approx 2.7$ pc is the mean cloud radius, $n(H_2)$ is the area averaged mean density of the NLA filaments at the cloud's surface, and $V \sim$ 10 to 20 km~s{$^{-1}$}\ is the expansion speed of the absorbing layer. For $f$ = 0.5 and $n(H_2) = 10^3$ cm$^{-3}$, set by the constraint that {HCO$^+$}\ in the absorbing gas is severely sub-thermally excited, $\dot M \sim $ 0.03 to 0.06 M{$_{\odot}$}\ yr$^{-1}$. In the last 0.25 Myr, since its close passage to Sgr A, G0.253+0.016 could have lost 10 to 20\% of its mass. It is possible that G0.253+0.016 is simultaneously undergoing {\it both} collapse and expansion. 
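A minimal numerical check of the mass-loss estimate (rounded cgs constants; the mean mass per H$_2$ molecule, $\mu = 2.8$ including helium, is an assumption made here):

```python
import math

PC_CM = 3.086e18     # parsec [cm]
M_H = 1.67e-24       # hydrogen mass [g]
M_SUN = 1.99e33      # solar mass [g]
YR_S = 3.156e7       # year [s]

def mdot_msun_per_yr(f, r_eff_pc, n_h2, v_kms, mu=2.8):
    """M-dot ~ 4 pi f R_eff^2 mu m_H n(H2) V in solar masses per year."""
    area = 4.0 * math.pi * f * (r_eff_pc * PC_CM) ** 2
    rho = mu * M_H * n_h2
    return area * rho * (v_kms * 1e5) / M_SUN * YR_S

print(mdot_msun_per_yr(0.5, 2.7, 1e3, 10.0))  # ~0.03 Msun/yr
print(mdot_msun_per_yr(0.5, 2.7, 1e3, 20.0))  # ~0.06 Msun/yr
```

Over 0.25 Myr this range corresponds to roughly $10^4$ M{$_{\odot}$}, i.e., on the order of 10\% of a $10^5$ M{$_{\odot}$}\ cloud.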
The presence of a maser and at least one massive clump surrounding it suggests that at least one moderate or high mass star is forming in the cloud interior. The cold and dense cloud interior may thus be experiencing gravitational collapse and star formation. However, the lack of internal heating and the absence of any other signposts of young stars besides the maser source suggest that the cloud is only in the earliest phases of star formation. Blueshifted absorption in optically thick species such as {HCO$^+$}\ over much of the cloud surface area is an indication that outer layers may be expanding. \section{Discussion} The filamentary structure of molecular clouds has been recognized for decades \citep{BallyOrion1987}. However, the extensive arc-second scale {HCO$^+$}\ absorption filaments revealed by ALMA are new. The broad-line (BLA) filaments in G0.253+0.016 are extraordinary because of their narrow minor axis dimension (less than $\sim 10^4$ AU), their 20 km~s{$^{-1}$}\ width in absorption, and their low antenna temperatures. The more abundant narrow-line (NLA) filaments are less extreme in their velocity extent. Comparison of the optically thick {HCO$^+$}\ and HCN cubes with the other ALMA spectral line cubes of optically thinner tracers such as HNCO, SiO, C$_2$H, H$_2$CS, {H$^{13}$CN} , and {H$^{13}$CO$^+$}\ shows that {HCO$^+$}\ and HCN are heavily absorbed between V$_{LSR}$ = $-2.8 $ and about $+30$ km~s{$^{-1}$}. The BLAs are only seen in {HCO$^+$}\ and have no corresponding features in any other species in either emission or absorption. Two models are considered for the BLA filaments. The BLAs may be associated with dense gas in the Galactic Center Ring (GCR), a portion of which has a radial velocity V{$_{LSR}$}\ = 10 to 30 km~s{$^{-1}$}\ towards Galactic longitude 0.18 \citep{RodriguezFernandez2006, RodriguezFernandez2008}. 
Assuming that the GCR has a similar orientation and kinematics but is larger than the 100 pc radius ring proposed by \citet{Molinari2011}, the front side is expected to have positive radial velocities while the far side is expected to have negative velocities. Given the possible association of G0.253+0.016 with the Molinari ring, the front-side of the GCR may be in front of G0.253+0.016. Low density, low-excitation gas could then be seen in absorption against the warmer emission from the cloud. In this scenario, the BLAs are produced by sub-thermal {HCO$^+$}\ located far in front of G0.253+0.016 (a few to as much as $\sim$200 pc). Their narrow widths, highly supersonic velocity extents, and elongated filamentary morphology suggest that they are ram pressure confined. In this model, they may be caustic surfaces produced where shock fronts are moving in the plane of the sky, so that their post-shock layers are sheets observed very close to edge-on. Thus, their depth along the line-of-sight may be much larger than their observed widths. The {HCO$^+$}\ molecules must either survive passage through the shock, or be rapidly formed in the post-shock cooling layers. The crossing time of these filaments is about $10^3$ years. If molecule reformation is responsible for the {HCO$^+$}\ abundance, then this time-scale establishes a limit on the formation time. The alternative model is that the BLAs are related to the non-thermal filaments near Galactic longitude 0.18 and are magnetically confined. This picture is motivated by the similar orientations of the BLAs and the 20-cm non-thermal filaments that lie only a few parsecs in projection from the southeast edge of G0.253+0.016. Both sets of filaments are oriented roughly at right angles to the Galactic plane. Using equipartition arguments applied to the synchrotron emission, \citet{YusefZadehMorris1987} estimated that the magnetic fields in the non-thermal streamers have strengths of about 0.05 to 0.2 milli-Gauss. 
The polarization of the dust continuum emission from G0.253+0.016 has been measured at $\lambda$ = 350 $\mu$m \citep{Dotson2010}. The magnetic field is nominally orthogonal to the 350 $\mu$m polarization vector. Inspection of Figure 39 in \citet{Dotson2010} shows that the magnetic field in the cloud interior, where most of the sub-mm dust emission originates, closely follows the bow-shaped emission-line filaments observed in the optically-thin tracers. The magnetic field in the northern part of G0.253+0.016 is approximately orthogonal to the Galactic plane and parallel to the BLA filaments. The magnetic field in the southern part of G0.253+0.016 is roughly aligned with the Galactic plane. Application of the Chandrasekhar-Fermi method to estimating magnetic field strengths in the molecular gas in the CMZ as traced by sub-millimeter polarimetry of the dust continuum emission leads to field strength estimates of 1 - 3 milli-Gauss \citep{Chuss2003}. Using the scaling relations of magnetic field strength with cloud density based on Zeeman effect measurements \citep{Crutcher2012} and assuming a mean cloud density of a few~$\times 10^5$~cm$^{-3}$ leads to similar field strength estimates. \citet{Ferriere2010} presented a review of CMZ magnetic fields. An ionization fraction of $\sim 10^{-7}$ (the estimated abundance of {HCO$^+$}\ in the absorbing medium) implies very long ambipolar diffusion time-scales and highly elongated structures if magnetic fields are present in the absorption region. Heavy molecular ions such as {HCO$^+$}\ gyrate around the magnetic field lines at a frequency $f_{gyro} \sim e B / (m_{HCO^+} c) \approx 0.3 ~ B_{mG}$ radians per second. Here, $e$ is the electron charge, $c$ is the speed of light, $m_{HCO^+} = 29 ~ m_H$ is the mass of an {HCO$^+$}\ molecule, and $B_{mG}$ is the magnetic field in units of 1 milli-Gauss. 
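The gyro-frequency scaling, and the ambipolar-diffusion estimate that follows from it, can be checked numerically (a sketch with rounded cgs constants; the filament width of $\sim 10^{17}$ cm, density of $10^3$ cm$^{-3}$, and collision cross-section of $10^{-15}$ cm$^2$ are the values adopted in the text):

```python
E_ESU = 4.80e-10    # electron charge [esu]
C_CGS = 3.00e10     # speed of light [cm/s]
M_H = 1.67e-24      # hydrogen mass [g]

# gyro-frequency of HCO+ (29 m_H) in a 1 milli-Gauss field
omega = E_ESU * 1e-3 / (29.0 * M_H * C_CGS)   # ~0.3 rad/s
# gyro-radius for a 10 km/s velocity
r_gyro = 1e6 / omega                          # ~3e6 cm, i.e. ~30 km

# collisional random walk across a ~1e17 cm wide filament
mfp = 1.0 / (1e3 * 1e-15)        # mean free path ~1e12 cm
t_col = mfp / 1e6                # ~1e6 s between collisions
n_steps = (1e17 / mfp) ** 2      # ~1e10 random-walk steps
t_ad = n_steps * t_col           # ~1e16 s to diffuse across
v_perp = 1e17 / t_ad             # ~10 cm/s effective drift speed
print(omega, r_gyro, t_ad, v_perp)
```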
Using the observed spectral line half-width of the most prominent BLA filament, $\Delta V \sim$ 10 km~s{$^{-1}$} , implies a gyro-radius of {HCO$^+$}\ molecules of $\sim$ 30 km. The mean-free path of molecules, $\lambda _{mfp} \sim 1 / n(H_2) \sigma$, for a density $n(H_2) = 10^{3}$ cm$^{-3}$ and a collision cross-section $\sigma \sim 10^{-15}$~cm$^{2}$, is $\lambda _{mfp} \sim 10^{12}$ cm, five orders of magnitude smaller than the width of the BLA filaments ($\sim 10^{17}$ cm). The mean time between molecular collisions which lead to diffusion is $t_{col} \sim 10^{6}$ seconds. To random-walk a distance $10^5$ times larger than the mean free-path requires about $10^{10}$ collisions. Thus, the time for molecules to diffuse over a distance of $r_{filament} \sim 10^4$ AU (the typical width of the absorption filaments) is $t_{AD} \sim 10^{10} t_{col} \sim 10^{16}$ seconds. For the parameters implied by the observations, the magnetic field and the partially ionized molecular gas are tightly coupled. The effective diffusion velocity {\it across} the filaments is $V_{perp} \sim r_{filament} / t_{AD} \sim$ 10 cm~s$^{-1}$. However, atoms and molecules can move unimpeded {\it along} the field lines at the $\Delta V \sim 10$ km~s{$^{-1}$}\ measured velocity dispersion. Thus the aspect ratio (length-to-width) of magnetized filaments can be as high as $\Delta V / V_{perp} > 10^5$. \subsection{Expansion of the Surface of G0.253+0.016: Photo-ablation?} Two mechanisms could be responsible for the apparent expansion of the near-side surface layers of G0.253+0.016: photo-ablation and stored magnetic energy. The first mechanism relies on external heating of the cloud surface layers by the combined effects of UV, soft X-ray, visual, and near-IR radiation that can photo-ablate cloud surfaces, generate large internal motions, and drive shocks which compress the cloud surface layers \citep{Mellema2006,Henney2009M,Arthur2011}. 
Diffuse dust emitting between 8 and 70 $\mu$m tends to be associated with photo-ionized plasmas in HII regions. The extended 20 cm free-free continuum \citep{YusefZadeh2004} and 8 to 70 $\mu$m continuum emitted by diffuse mid-IR emission from warm dust implies that much of the inner 100 pc of the Galaxy contains photo-ionized plasma and warm dust, likely produced by radiation from massive stars. Although the Spitzer and WISE mid-IR images show that the Galactic Center Bubble, located between Sgr A and G0.253+0.016 and likely ionized and heated by the Arches and Quintuplet clusters, dominates the diffuse mid-IR emission in the CMZ, these images also exhibit somewhat fainter and more diffuse warm dust emission centered about 6\arcmin\ (15 pc in projection) due East of G0.253+0.016 at $[l,b]$ = $[0.304, -0.067]$ (J2000 = 17:46:36.2, -28:42:41). Diffuse 20 cm radio continuum and several HII regions within 6\arcmin\ of this region of peak infrared emission (but generally due East of G0.253+0.016) indicate the presence of OB stars whose energy output may irradiate and heat the surface of the G0.253+0.016 cloud. If the Lyman continuum luminosity of these stars is $Q_{50} \sim 10^{50}$ ionizing photons per second, and they are located at a distance $D_{15}$ = 15 pc, the electron density at the surface of G0.253+0.016 would be (neglecting any intervening extinction) about $n_e = (Q / 4 \pi \alpha _B r)^{1/2} D^{-1}$ $\sim 40~Q_{50}^{1/2} r_3^{-1/2} D^{-1}_{15}$ cm$^{-3}$ assuming a spherical cloud of radius $r = r_3 \sim $ 3 pc (here, $\alpha _B = 2.6 \times 10^{-13}$ cm$^3$~s$^{-1}$ is the Case B hydrogen recombination coefficient). However, G0.253+0.016 appears filamentary with local radii of curvature at least an order of magnitude smaller in at least one direction. Thus, the actual electron density may be $n_e > 10^2$ cm$^{-3}$. 
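A quick check of this electron density estimate (a sketch; rounded cgs constants):

```python
import math

PC_CM = 3.086e18      # parsec [cm]
ALPHA_B = 2.6e-13     # Case B recombination coefficient [cm^3/s]

def n_e(q_lyc, r_pc, d_pc):
    """Electron density at an externally irradiated cloud surface:
    n_e = (Q / (4 pi alpha_B r))**0.5 / D."""
    r, d = r_pc * PC_CM, d_pc * PC_CM
    return math.sqrt(q_lyc / (4.0 * math.pi * ALPHA_B * r)) / d

print(n_e(1e50, 3.0, 15.0))   # ~40 cm^-3, as quoted above
```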
The implied mass-loss rate due to heating (assuming that the gas is accelerated to about the sound-speed in ionized gas, $\sim$ 10 km~s{$^{-1}$} ) is about $10^{-3}$ M{$_{\odot}$}\ yr$^{-1}$, or only about $10^2$ to $10^3$ M{$_{\odot}$}\ in 0.25 Myr. This is an order of magnitude lower than the mass-loss rate estimated from the NLA absorption filaments. Unless additional energy input mechanisms operate, photo-ablation is an unlikely source of cloud surface layer expansion. \subsection{Expansion of the Surface of G0.253+0.016: Magnetic Bounce?} \citet{Longmore2013} proposed that G0.253+0.016 passed near Sgr A about 0.25 Myr ago. Passage of the cloud through the deepest part of the Galactic gravitational potential would have compressed the cloud. If its interior magnetic fields were sufficiently tangled, they would act as a spring, storing the energy of compression and limiting the dissipation of internal motions by shocks. As G0.253+0.016 emerged from the potential well, the compressed fields could drive re-expansion of the cloud. While some regions with high density or mass-to-flux ratios may collapse, less dense or relatively weakly bound parts of the cloud, such as its outer layers, may expand as the fields relax. An order of magnitude estimate of the mean field strength required to support such a `bounce' can be obtained by assuming that the stored magnetic energy, $E_B \sim f B^2 r^3$, is comparable to the gravitational self-energy, $E_G \sim g G M^2 / r$. Here, $f$ and $g$ are factors of order unity depending on the magnetic field geometry, the cloud shape, and the density distribution, and for G0.253+0.016, $r \sim$ 3 pc is the mean cloud radius and $M \sim 10^5$ M{$_{\odot}$}\ is the cloud mass. If the gravitational potential energy is similar to the magnetic energy (as required by a `bounce'), these parameters imply a mean magnetic field strength $B \sim$ 1.5 milli-Gauss, consistent with the magnetic field-strength estimates. 
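Setting $E_B = E_G$ and solving for $B$ gives the quoted field strength; the sketch below adopts $f = 1/6$ (i.e., $E_B = (B^2/8\pi)(4\pi r^3/3)$ for a uniform sphere) and $g = 1$, which are assumptions about the geometry factors:

```python
import math

PC_CM = 3.086e18     # parsec [cm]
G_CGS = 6.674e-8     # gravitational constant [cgs]
M_SUN = 1.99e33      # solar mass [g]

def b_equipartition(m_msun, r_pc, f=1.0/6.0, g=1.0):
    """Field strength [Gauss] for magnetic energy ~ gravitational energy:
    f B^2 r^3 = g G M^2 / r."""
    m, r = m_msun * M_SUN, r_pc * PC_CM
    return math.sqrt(g * G_CGS * m**2 / (f * r**4))

print(b_equipartition(1e5, 3.0) * 1e3)   # ~1.5 milli-Gauss
```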
Thus, the adiabatic behavior of a tangled internal magnetic field could drive a re-expansion of G0.253+0.016 following its emergence from the deepest part of the Galactic gravitational potential. Future work should search for evidence of BLAs in other optically-thick ground-state transitions of ions such as N$_2$H$^+$. BLAs detected only in other ions would lend support to the magnetic structure interpretation. BLAs detected in other neutral species, especially ones with enhanced abundances in shocks, would provide support for the shock-wave interpretation. High resolution dust continuum polarization measurements and searches for Zeeman splitting are needed to constrain the magnetic field geometry and strength to determine if the magnetic flux-to-mass ratio is super-critical and allows the cloud to collapse, or sub-critical and may prevent collapse and the formation of a cluster of stars. \section{Conclusions} ALMA Cycle 0 observations of the massive, compact, and dense Galactic center cloud G0.253+0.016 are presented with an angular resolution of about 1.7\arcsec\ in the {HCO$^+$}\ and {H$^{13}$CO$^+$}\ J = 1$-$0 lines. The main results derived from these observations are: [1] The observed intensity ratios of {HCO$^+$}\ and {H$^{13}$CO$^+$}\ indicate that the mean optical depth of {HCO$^+$}\ is around unity or higher. The observed J=1$-$0 {HCO$^+$}\ brightness temperature of 2 to 12 K in a cloud in which the kinetic temperature of the gas is estimated to be around 60 to 80 K indicates sub-thermal excitation at the {HCO$^+$}\ photosphere and an H$_2$ density (n(H$_2$)~$\sim 10^3$~cm$^{-3}$) more than an order of magnitude lower than the critical density for the J=1$-$0 transition and more than an order of magnitude lower than the mean density of G0.253+0.016. [2] The ALMA observations reveal a network of blueshifted (relative to the cloud) absorption filaments seen against the sub-thermally excited {HCO$^+$}\ emission. 
These filaments must have densities even lower than the density at the $\tau_{12} \sim$ 1 photosphere where most of the {HCO$^+$}\ emission is produced. A few broad-line absorption (BLA) filaments have line widths of over 20 km~s{$^{-1}$} , widths of only a few arc seconds, and lengths of nearly 1 arc minute. They are only seen in {HCO$^+$} . The BLA filaments may be foreground magnetic structures possibly associated with the brightest non-thermal filaments that cross the Galactic plane at longitude 0.18\arcdeg . Alternatively, they may be shock fronts in foreground CMZ clouds propagating orthogonal to the line-of-sight so that their compressed layers are seen nearly edge-on with a depth along the line-of-sight much greater than their projected widths. Additionally, G0.253+0.016 is laced with dozens of narrow-line absorption (NLA) filaments with line widths of less than 20 km~s{$^{-1}$} . Most of these features are seen on the blueshifted side of the line profiles. A few have counterparts in HCN. Some absorption filaments seen in both {HCO$^+$}\ and HCN which extend over faint HCN backgrounds go into emission in HCN but not in {HCO$^+$} . HCN also shows extensive absorption on the blueshifted side of G0.253+0.016. These characteristics suggest that the NLA filaments trace sub-thermally excited, optically thick gas on the front surface of G0.253+0.016. [3] The blueshift of the absorption filaments relative to the peak {HCO$^+$}\ emission indicates that the surface layers of G0.253+0.016 are expanding at velocities of 5 to 20 km~s{$^{-1}$}\ or more with respect to the cloud's center of mass. The apparent ``P-Cygni" line profile may be an indication of re-expansion of the G0.253+0.016 surface layers following a recent passage close to Sgr A. Compression of a milli-Gauss magnetic field during the passage, followed by subsequent expansion driven by stored magnetic energy, may be responsible. 
[4] Whether or not G0.253+0.016 will form a star cluster or only a few stars remains unclear. Although the {HCO$^+$}\ absorption indicates that the cloud surface layers are expanding, the cloud interior may be in the earliest stages of star formation as indicated by the presence of at least one cloud core with an associated water maser. The dynamic state of the dozens of other 3 mm dust condensations in G0.253+0.016 remains unclear. Measurements of the amplitude and geometry of the magnetic field are needed to determine if the magnetic pressure is sufficient or insufficient to prevent this massive, high-density, and compact cloud from collapsing to form a cluster or association of stars. \acknowledgments{This research of JB and AG was in part supported by National Science Foundation (NSF) grant AST-1009847. Observations reported here were obtained at the Atacama Large Millimeter Array (ALMA) located in Chile. The authors acknowledge our ALMA Contact Scientist, Dr Crystal Brogan, for assistance in preparing the observations and performing the initial data calibration. J.M.R acknowledges funding support via CSIRO's Julius Career Award that was used to host a week-long team meeting which laid the foundation for this work. J.M.R, S.N.L, J.M.D.K, J.B. and N.B. acknowledge the hospitality of the Aspen Center for Physics, which is supported by the National Science Foundation Grant No.~PHY-1066293. This paper makes use of the following ALMA data: ADS/JAO.ALMA\#2011.0.00217.S. ALMA is a partnership of the European Southern Observatory (ESO) representing member states, Associated Universities Incorporated (AUI) and the National Radio Astronomy Observatory (NRAO) for the National Science Foundation (NSF) in the USA, NINS in Japan, NRC in Canada, and NSC and ASIAA in Taiwan, in cooperation with the Republic of Chile. The Joint ALMA Observatory (JAO) is operated by ESO (Europe), AUI/NRAO (USA), and NAOJ (Japan).
The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. } \clearpage
\section{Introduction} Hysteresis is a nonlinear phenomenon observed in some physical systems under low-frequency excitations. It appears in many areas such as biology, electronics, ferroelasticity, magnetism, mechanics or optics \cite{BM2006,BS1996,JMS2017,MNZ93,V94}. This phenomenon is currently classified into two categories: \emph{rate independent} (RI) and \emph{rate dependent} (RD) hysteresis. For RI hysteresis, the output-versus-input graph of the hysteresis system does not change with the frequency of the input signal. This is the case for example of the Bouc-Wen or the Preisach models, see \cite{IR2007} and \cite{M03} respectively. For RD hysteresis, the output-versus-input graph of the hysteresis system may change with the frequency, but it converges in some sense to a fixed loop called the hysteresis loop when the frequency goes to zero. This is the case for example of the LuGre model and the semilinear Duhem model, see \cite{Naser et al.(2015)} and \cite{Ikhouane2017,OB05} respectively. Research in the field of hysteresis has focused mainly on the study of rate-independent hysteresis; it is only in the last 15 years that the importance of rate-dependent phenomena has been acknowledged, and their study constitutes a challenge in itself. Recent years have witnessed a growing interest in a phenomenon that appears in hysteretic systems under perturbed periodic signals: the hysteresis loop turns out to be composed of a large cycle called the major loop, and one or several small lobes called minor loops located inside the major loop. Figure \ref{plot66yX5y} shows the hysteresis loop of a magnetic system when the input is the one of Figure \ref{f:figugen}, see \cite{HMFA14} for instance. \begin{figure}[H] \centering \includegraphics[scale=0.15]{x5.pdf} \caption{Hysteresis loop of a magnetic system with the input of Figure \ref{f:figugen}. Black: major loop.
Grey: minor loop.} \label{plot66yX5y} \end{figure} \begin{figure}[H] \centering \includegraphics[scale=0.15]{figuprovv3.pdf} \caption{A bimodal $T$--periodic input $u(t)$ versus $t$.} \label{f:figugen} \end{figure} This interest in the study of minor loops is due in part to the fact that minor loops are related to a loss of energy, see \cite{ZRdKAF2017} for instance. From a formal point of view, minor loops have been studied mainly in relation to the Preisach model \cite[p.19]{M03}. Apart from \cite[Sections 10, 11.9]{Ikhouane2017} we are not aware of any mathematical analysis of minor loops of hysteresis systems given by differential equations. The aim of the present paper is to fill this void by providing an explicit analytic description of the minor loops of the Dahl and the LuGre models. The Dahl model is an idealization of dynamic dry friction proposed by Dahl in 1976 \cite{D1976}. This model relates an input displacement $u$ to an output force $y$ as \begin{align*} &y(t)=F_{c}\,w(t), \\ &\dot{w}(t)=\rho\big(\dot{u}(t)-|\dot{u}(t)|w(t)\big), \\ &-1 \leq w(0) \leq 1, \end{align*} where $w$ is an internal state and $\rho>0$, $F_c>0$ are constants. A good introductory text on the relationship between the Dahl model and the Coulomb model of dry friction may be found in \cite{GBI2016}. The LuGre model is a generalization of the Dahl model introduced in 1995 to include the \emph{Stribeck effect}, that is the decrease of friction at low velocities \cite{wc95}.
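For readers who prefer a computational view, the Dahl model above can be integrated with an elementary forward Euler scheme. The sketch below is not part of the original model description: the values $\rho=2$, $F_c=1$, the step size and the sinusoidal input are illustrative choices. It shows that the internal state $w$ remains in $[-1,1]$, so the output force is bounded by $F_c$.

```python
import math

def simulate_dahl(u, t_end, dt=1e-3, rho=2.0, F_c=1.0, w0=0.0):
    """Forward-Euler integration of the Dahl model
    y = F_c * w,  dw/dt = rho * (du/dt - |du/dt| * w)."""
    n = round(t_end / dt)
    w = w0
    us, ys = [], []
    for k in range(n):
        t = k * dt
        du = (u(t + dt) - u(t)) / dt      # numerical derivative of the input
        w += dt * rho * (du - abs(du) * w)
        us.append(u(t + dt))
        ys.append(F_c * w)
    return us, ys

# Sinusoidal displacement input; the state w stays in [-1, 1].
us, ys = simulate_dahl(lambda t: math.sin(t), t_end=20.0)
assert max(abs(y) for y in ys) <= 1.0     # |y| <= F_c since |w| <= 1
```

The update $w_{k+1}=w_k(1-\Delta t\,\rho|\dot u|)+\Delta t\,\rho\,\dot u$ is a convex-type combination whenever $\Delta t\,\rho|\dot u|\le 1$, which is why the discrete state inherits the bound $|w|\le 1$ of the continuous model.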
The LuGre model is given by \cite{JC08}: \begin{equation}\label{equation1} \begin{array}{lcl} \dot{x}\left(t\right) & = & -\sigma_{0}\dfrac{|\dot{u}(t)|}{g\big(\dot{u}(t)\big)}x\left(t\right)+\dot{u}(t), \\ x(0) & = & x_{0}, \\ F\left(t\right) & = & \sigma_{0}x(t)+\sigma_{1}\dot{x}(t)+f\big(\dot{u}(t)\big), \end{array} \end{equation} \noindent where $t\geq0$ denotes time; the parameters $\sigma_{0}>0$ and $\sigma_{1}>0$ are respectively the stiffness and the microscopic damping friction coefficients; the function $g$ is continuous with $g\left(\vartheta\right)>0$ for all $\vartheta\in\mathbb{R}$ and it represents the macrodamping friction; $x(t)\in\mathbb{R}$ is the average deflection of the bristles; $x_{0}\in\mathbb{R}$ is the initial state; $u(t)$ is the relative displacement and is the input of the system; $F(t)$ is the friction force and is the output of the system; and $f$ is continuous and such that $f(0)=0$. When the function $g$ is constant, $\sigma_1=0$ and $f$ is the zero function, the system (\ref{equation1}) reduces to the Dahl model. Both the LuGre and the Dahl models have been used in various applications, see for instance \cite{AIRC2012,FHH2008,GP2017}. The main contribution of this paper is Theorem \ref{t:main} which is stated in Section \ref{ss:statement}. This theorem provides the analytic description of the minor loop of the LuGre and Dahl models when the input is bimodal as in Figure \ref{f:figugen}. The paper is organized as follows: Section \ref{notation} provides the mathematical notation used in the text. In Section \ref{s:main} we present and prove the main result which is the analytical description of the minor loop of the LuGre and Dahl models. Section \ref{s:numsim} has a pedagogical interest: using numerical simulations we present examples that illustrate the constructive process which leads to the hysteresis and minor loops.
Some of these examples are aimed at the reader who may not be familiar with the technicalities that underlie the methodology used here. The conclusions are provided in Section \ref{Conclusions and comments}. \section{Mathematical notation}\label{notation} We say that a subset of $\mathbb{R}$ is measurable when it is Lebesgue measurable. Consider a function $g: I \subseteq \mathbb{R} \rightarrow \mathbb{R}$ where $I$ is an interval. We say that $g$ is measurable if $g^{-1}(B)$ is a measurable set for any set $B$ in the Borel algebra of $\mathbb{R}$ or, equivalently, if $\{x\in I:\, g(x)>a\}$ is a measurable set for all $a\in\mathbb{R}$, \cite{Rudin, YV1966}. For a measurable function $g:I\rightarrow \mathbb{R}$, $\|g\|$ denotes the essential supremum of the function $|g|$ where $|\cdot|$ is the absolute value. We recall that $\mathcal{C}^0(\mathbb{R}^+,\mathbb{R})$ denotes the space of continuous functions defined from $\mathbb{R}^+$ to $\mathbb{R}$ endowed with the norm $\| \cdot\|$. Also $W^{1;\infty}(\mathbb{R}^+,\mathbb{R})$ denotes the Sobolev space of absolutely continuous functions $u:\mathbb{R}^+ \rightarrow \mathbb{R}$. For this class of functions, the derivative $\dot{u}$ is measurable, and we have $\|u\| <\infty$, $\|\dot{u}\|<\infty$. Endowed with the norm $\|u\|_{1,\infty}=\max\left(\|u\|, \|\dot{u}\|\right)$, $W^{1;\infty}(\mathbb{R}^+,\mathbb{R})$ is a Banach space \cite[pp.\,280--281]{Leoni09}. Finally, $L^\infty(I,\mathbb{R})$ denotes the Banach space of measurable functions $u : I \rightarrow \mathbb{R}$ such that $\|u\| <\infty$, endowed with the norm $\| \cdot \|$. For $T>0$ we define $\Omega_T$ as the set of all $T$--periodic functions~$u\in W^{1;\infty}(\mathbb{R}^+,\mathbb{R})$. \section{Main result}\label{s:main} \subsection{Statement of the main result}\label{ss:statement} We consider the LuGre model (\ref{equation1}) with an input $u \in W^{1;\infty}(\mathbb{R}^+,\mathbb{R})$.
In \cite{Naser et al.(2015)} it is proved that for all $x_0 \in \mathbb{R}$, the differential equation (\ref{equation1}) has a unique Carath\'eodory solution $x \in W^{1;\infty}(\mathbb{R}^+,\mathbb{R})$ and that $F \in L^\infty(\mathbb{R}^+, \mathbb{R})$. To present the main result of this work, which is the analytic characterization of the minor loop, we formally define the set of bimodal inputs needed to generate this minor loop. \begin{definition}\label{a:ass1} Let $u_{\min,1},u_{\min,2},u_{\max,1}, u_{\max,2} \in \mathbb{R}$ be such that $u_{\min,1} \leq u_{\min,2}<u_{\max,1}\leq u_{\max,2}$ and at least one of the following holds: $u_{\min,1} \neq u_{\min,2}$ or $u_{\max,1} \neq u_{\max,2}$. Let $t_1,t_2,t_3,t_4 \in \mathbb{R}^+$ be such that $0<t_1<t_2<t_3<t_4$. We define $\mathbb{M}_{u_{\min,1},u_{\min,2},u_{\max,1}, u_{\max,2},t_1,t_2,t_3,t_4}$ as the set of all functions $u \in \Omega_{t_4}$ such that $u$ is strictly increasing on the interval $[0,t_1]$, strictly decreasing on the interval $[t_1,t_2]$, strictly increasing on the interval $[t_2,t_3]$, strictly decreasing on the interval $[t_3,t_4]$; and $u(0)=u_{\min,1}$, $u(t_1)=u_{\max,1}$, $u(t_2)=u_{\min,2}$, $u(t_3)=u_{\max,2}$, $u(t_4)=u(0)$. \end{definition} \begin{teoa}\label{t:main} Let us consider the LuGre model given by Equations \eqref{equation1} with an input $u \in \mathbb{M}_{u_{\min,1},u_{\min,2},u_{\max,1}, u_{\max,2},t_1,t_2,t_3,t_4}$.
Then the following statements hold: \begin{enumerate} \item[\textnormal{(a)}] The hysteresis loop that corresponds to the input $u$ is the set $$ G_u^\circ = \big \{ \big(\psi_u(t),y^\circ(t)\big)\in\mathbb{R}^2,\,t \in [0,\varrho_4] \big \}, $$ where $y^\circ$ is given by $$ y^\circ(t)=\mathrm{e}^{-\frac{\sigma_0}{g(0)}(t-\varrho_i)}\left( y^\circ(\varrho_i)+(-1)^i\, g(0)\left[ \mathrm{e}^{\frac{\sigma_0}{g(0)}(t-\varrho_i)} -1\right]\right), \mbox{ for } t \in [\varrho_i,\varrho_{i+1}] $$ and $i\in\{0,1,2,3\}$, and where \begin{align*} &y^\circ(0) =g(0)\,\frac{\mathrm{e}^{-\frac{\sigma_0}{g(0)}\varrho_4}}{1-\mathrm{e}^{-\frac{\sigma_0}{g(0)}\varrho_4}}\left( 2\mathrm{e}^{\frac{\sigma_0}{g(0)}\varrho_1} -2\mathrm{e}^{\frac{\sigma_0}{g(0)}\varrho_2}+2\mathrm{e}^{\frac{\sigma_0}{g(0)}\varrho_3}-\mathrm{e}^{\frac{\sigma_0}{g(0)}\varrho_4}-1\right) \mbox{ and } \\ &y^\circ(\varrho_i)=\mathrm{e}^{-\frac{\sigma_0}{g(0)}(\varrho_i-\varrho_{i-1})}\left( y^\circ(\varrho_{i-1})+(-1)^{i-1}\, g(0)\left[ \mathrm{e}^{\frac{\sigma_0}{g(0)}(\varrho_i-\varrho_{i-1})} -1\right]\right) \mbox{ for } i\in\{1,2,3,4\}; \end{align*} and $\psi_u$ is given by $$\psi_u(t)=\begin{cases}\begin{array}{ll} t+u_{\min,1}& \text{for } t\in[0,\varrho_1],\\ -t+\varrho_1+u_{\max,1}& \text{for } t\in[\varrho_1,\varrho_2],\\ t-\varrho_2+u_{\min,2}& \text{for } t\in[\varrho_2,\varrho_3],\\ -t+\varrho_3+u_{\max,2}& \text{for } t\in[\varrho_3,\varrho_4], \end{array} \end{cases}$$ where $\varrho_0=0$, $\varrho_1=\;u_{\max,1}-u_{\min,1}>0,$ $\varrho_2=\;\varrho_1+u_{\max,1}-u_{\min,2}>\varrho_1,$ $\varrho_3=\;\varrho_2+u_{\max,2}-u_{\min,2}>\varrho_2,$ and $\varrho_4=\;\varrho_3+u_{\max,2}-u_{\min,1}>\varrho_3$. \item[\textnormal{(b)}] The minor loop that corresponds to the input $u$ is the set $$\mathcal{N}_u=\big\{\big(\psi_u(t),y^\circ(t)\big),t \in [\varrho_1,\varrho_5]\big\},$$ where $\varrho_5=u_{\max,1}-u_{\min,2}+\varrho_2 \in (\varrho_2,\varrho_3]$.
\end{enumerate} \end{teoa} \textbf{Comment.} Observe that the sets $G^{\circ}_{u}$ and $\mathcal{N}_u$ are the geometric loci of parametrized curves. Theorem \ref{t:main}, thus, gives an explicit parametrization of these curves. \subsection{Proof of Theorem \ref{t:main}} \label{ss:proof} The proof of Theorem \ref{t:main} is done in three steps: \begin{enumerate} \item[\textbf{Step 1:}] The hysteresis loop of the LuGre and the Dahl models is derived in Section \ref{ss:hystloop}. \item[\textbf{Step 2:}] A normalized input is presented in Section \ref{ss:normalizedinput}. \item[\textbf{Step 3:}] The determination of the equations of the minor loop is done in Section \ref{Analytical expression of the hysteresis and minor loops}. \end{enumerate} \subsubsection{Hysteresis loop of the LuGre and the Dahl models}\label{ss:hystloop} To prove Theorem \ref{t:main} and therefore to derive the explicit expression of the hysteresis loop of the LuGre and the Dahl models, we follow the methodology presented in \cite{I09,Naser et al.(2015)}. In this section we recall and adapt the main steps of this methodology. The reader unfamiliar with this theoretical framework is first referred to Example 1 in Section \ref{ss:ex1}. Let $u \in \mathbb{M}_{u_{\min,1},u_{\min,2},u_{\max,1}, u_{\max,2},t_1,t_2,t_3,t_4}$ and take $T=t_4$. Also, take $\gamma \in (0,\infty)$ and consider the linear time-scale change $s_\gamma:\mathbb{R} \rightarrow \mathbb{R}$ defined by $s_\gamma(t)=t/\gamma$ for all $t \in \mathbb{R}$. Then $u \circ s_\gamma$ is $\gamma T$--periodic.
The system (\ref{equation1}) for which the input is $u \circ s_\gamma$ can be written as \begin{eqnarray*} \dot{x}_\gamma\left(t\right) & = & -\sigma_{0}\frac{\big|\dot{\aoverbrace[L1R]{{u \circ s_\gamma}}}(t)\big|}{g\big(\dot{\aoverbrace[L1R]{u \circ s_\gamma}}(t)\big)}x_\gamma\left(t\right)+\dot{\aoverbrace[L1R]{{u \circ s_\gamma}}}\left(t\right), \text{ for almost all } t \in \mathbb{R}^+,\\ x_\gamma(0) & = & x_{0}, \\ F_\gamma\left(t\right) & = & \sigma_{0}x_\gamma\left(t\right)+\sigma_{1}\dot{x}_\gamma(t)+f\big(\dot{\aoverbrace[L1R]{{u \circ s_\gamma}}}(t)\big), \text{ for almost all } t \in \mathbb{R}^+. \end{eqnarray*} On the other hand, $$\dot{\aoverbrace[L1R]{{u \circ s_\gamma}}}(t)=\dot{u} \big( s_\gamma(t)\big) \cdot \dot{s}_\gamma(t)=\dot{u}\left(\frac{t}{\gamma}\right) \cdot \frac{1}{\gamma},$$ so that we get \begin{eqnarray*} \dot{x}_\gamma\left(t\right) & = & -\sigma_{0}\frac{\left| \frac{1}{\gamma}\dot{u}\left(\frac{t}{\gamma}\right) \right|}{g\left(\frac{1}{\gamma}\dot{u}\left(\frac{t}{\gamma}\right)\right)}x_\gamma\left(t\right)+\frac{1}{\gamma}\dot{u}\left(\frac{t}{\gamma}\right), \text{ for almost all } t \in \mathbb{R}^+,\\ x_\gamma(0) & = & x_{0}, \\ F_\gamma\left(t\right) & = & \sigma_{0}x_\gamma\left(t\right)+\sigma_{1}\dot{x}_\gamma(t)+f\left(\frac{1}{\gamma}\dot{u}\left(\frac{t}{\gamma}\right)\right), \text{ for almost all } t \in \mathbb{R}^+. \end{eqnarray*} Now, defining $\nu=t / \gamma $ we rewrite these relations in terms of $\nu$ obtaining \begin{equation} \label{ecuacion1explicativa} \begin{split} \gamma \dot{x}_\gamma\left(\gamma \nu \right) & = -\sigma_{0}\frac{| \dot{u}(\nu) |}{g\left(\frac{1}{\gamma}\dot{u}(\nu)\right)}x_\gamma(\gamma \nu)+\dot{u}(\nu), \text{ for almost all } \nu \in \mathbb{R}^+,\\ x_\gamma(0) & = x_{0}, \\ F_\gamma(\gamma \nu) & = \sigma_{0}x_\gamma(\gamma \nu)+\sigma_{1}\dot{x}_\gamma(\gamma \nu)+f\left(\frac{1}{\gamma}\dot{u}(\nu)\right), \text{ for almost all } \nu \in \mathbb{R}^+. 
\end{split} \end{equation} \noindent We define the function $z_\gamma$ by the relation $z_\gamma = x_\gamma \circ s_{1/\gamma}$ so that $$\dot{z}_\gamma (\nu) = \dot{x}_\gamma \big( s_{1/\gamma}(\nu) \big) \cdot \dot{s}_{1/\gamma}(\nu)=\dot{x}_\gamma (\gamma \nu) \cdot \gamma .$$ Substituting in (\ref{ecuacion1explicativa}) provides: \begin{equation} \label{equation13} \begin{split} &\dot{z}_\gamma\left(t\right) = -\sigma_{0}\frac{|\dot{u}(t)|}{g\left(\frac{\dot{u}(t)}{\gamma}\right)}z_\gamma(t)+\dot{u}(t), \text{ for almost all } t \in \mathbb{R}^+, \\ &z_\gamma(0) = x_{0}, \\ &y_\gamma(t) = \sigma_{0}z_\gamma(t)+\frac{\sigma_{1}}{\gamma}\dot{z}_\gamma(t)+f\left(\frac{\dot{u}(t)}{\gamma}\right), \text{ for almost all } t \in \mathbb{R}^+, \end{split}\end{equation} where $y_\gamma=F_\gamma \circ s_{1/\gamma}$. For a given $\gamma>0$, the corresponding output-versus-input graph is the set $G_{u \circ s_\gamma} = \big \{ \big(u \circ s_\gamma(t),F_\gamma(t)\big),t \geq 0 \big \} = \big \{ \big(u(t),F_\gamma \circ s_{1/\gamma}(t)=y_\gamma(t)\big),t \geq 0 \big \}$. The hysteresis loop of system \eqref{equation13} is the output-versus-input graph obtained for very slow inputs (that is when $\gamma \rightarrow \infty$) in steady state. The next result, which is a straightforward combination of \cite[Proposition 5]{FI13} and \cite[Theorem 9]{Naser et al.(2015)}, describes the result of this convergence process. \begin{theorem}[\cite{FI13,Naser et al.(2015)}]\label{t:conv} The following statements hold: \begin{enumerate} \item[\textnormal{(a)}] The sequence of functions $(y_\gamma )_{\gamma >0}$ converges in the space $L^\infty(\mathbb{R}^+,\mathbb{R})$ as $\gamma \rightarrow \infty$. 
Denote $y^{\star}_{u}=\lim_{\gamma \rightarrow \infty} y_\gamma $, then for all $t \in \mathbb{R}^+$ we have \begin{eqnarray} y^{\star}_{u}(t) &=& \sigma_0 \mathrm{e}^{-\frac{\sigma_0}{g(0)}\rho_u(t)}\left(x_0 + \int_0^t \mathrm{e}^{\frac{\sigma_0}{g(0)}\rho_u(\tau)} \dot{u}(\tau)d\tau\right), \label{eqFstar1}\\ \rho_u(t) &=& \int_0^t |\dot{u}(\tau)|d\tau. \label{eqFstar2} \end{eqnarray} \item[\textnormal{(b)}] For any $k \in \mathbb{N}$ define the function $y^{\star}_{u,k}\in L^\infty\big([0,T],\mathbb{R} \big)$ by $y^{\star}_{u,k}(t)=y^{\star}_{u}(t+kT)$, for all $t \in [0,T]$. The sequence of functions $(y^{\star}_{u,k})_{k \in \mathbb{N}}$ converges in the space $L^\infty\big([0,T],\mathbb{R}\big)$ as $k \rightarrow \infty$. Denote $y^{\circ}_u=\lim_{k \rightarrow \infty} y^{\star}_{u,k}$, then \begin{equation} \label{eqFcirc1} y^{\circ}_u(t) =\sigma_0 \mathrm{e}^{-\frac{\sigma_0}{g(0)}\rho_u(t)}\left( \frac{y^{\circ}_u(0)}{\sigma_0}+ \int_0^t \mathrm{e}^{\frac{\sigma_0}{g(0)}\rho_u(\tau)} \dot{u}(\tau)d\tau\right) \mbox{ for all } t \in [0,T]. \end{equation} Moreover, $y^{\circ}_u(T)=y^{\circ}_u(0)$. \end{enumerate} \end{theorem} Statement (a) implies that the graphs $G_{u \circ s_\gamma}$ converge in a sense made precise in \cite[Lemma 9]{I09} to the graph $G^\star_u = \big \{ \big(u(t),y^{\star}_{u}(t)\big),t \geq 0 \big \}$ as $\gamma \rightarrow \infty$. The hysteresis loop is given by the ``steady state'' of the parametrized curve $G^\star_u$ which by statement (b) is the set \begin{equation} \label{eqhystloopLuGre} G_u^\circ = \big \{ \big(u(t),y^{\circ}_u(t)\big),t \in [0,T] \big \}.
\end{equation} Moreover, Theorem \ref{t:conv} (b) gives \begin{eqnarray*} y^{\circ}_u(T)&=&\sigma_0 \mathrm{e}^{-\frac{\sigma_0}{g(0)}\rho_u(T)}\left( \frac{y^{\circ}_u(0)}{\sigma_0}+ \int_0^T \mathrm{e}^{\frac{\sigma_0}{g(0)}\rho_u(\tau)} \dot{u}(\tau)d\tau\right),\\ y^{\circ}_u(T)&=&y^{\circ}_u(0), \end{eqnarray*} which leads to \begin{equation} \label{eqy(0)circ} y^{\circ}_u(0)=\frac{\sigma_0\mathrm{e}^{-\frac{\sigma_0}{g(0)}\rho_u(T)}\int_0^T \mathrm{e}^{\frac{\sigma_0}{g(0)}\rho_u(\tau)} \dot{u}(\tau)d\tau}{1-\mathrm{e}^{-\frac{\sigma_0}{g(0)}\rho_u(T)}}. \end{equation} Equations (\ref{eqFcirc1}) and (\ref{eqy(0)circ}) provide the analytical expression of the hysteresis loop (\ref{eqhystloopLuGre}). This expression includes both the major loop and the minor loop. Observe that for the LuGre model neither $\sigma_1$ nor $f$ intervene in the equation of the hysteresis loop, and only the value $g(0)$ appears in this equation. Also note that Equations (\ref{eqFcirc1})--(\ref{eqy(0)circ}) are also valid for the Dahl model since the latter is a particular case of the LuGre model. In Example 2 of Section \ref{ss:ex2} the reader can find a detailed illustration of the concepts presented in this section. \subsubsection{The normalized input}\label{ss:normalizedinput} The hysteresis loop of the LuGre and the Dahl models is given in \eqref{eqhystloopLuGre}, and it is characterized by the function $y^{\circ}_u$ of Theorem \ref{t:conv} (b). Note that we are considering general input functions $u \in \mathbb{M}_{u_{\min,1},u_{\min,2},u_{\max,1}, u_{\max,2},t_1,t_2,t_3,t_4}$ that may not allow an explicit calculation of the integral present in Equation (\ref{eqFcirc1}). To get an explicit calculation of that integral we follow the approach of \cite{I09} and \cite{Naser et al.(2015)} that leads to the explicit expression of the hysteresis loop by using the so-called \emph{normalized input} $\psi_u$ associated to $u$. 
The use of the normalized input will give another parametrization of the curve in \eqref{eqhystloopLuGre}, an explicit one. According to \cite{I09}, the normalized input associated to $u$ is a piecewise-linear function $\psi_u \in W^{1;\infty}(\mathbb{R}^+,\mathbb{R})$ that satisfies \begin{equation}\label{e:conjugacion} \psi_u\big(\rho_u(t)\big)=u(t)\mbox{ for all } t\in\mathbb{R}^+, \end{equation} where $ \rho_u(t) = \int_0^t |\dot{u}(\tau)|d\tau $ is the \emph{variation function} of $u$. Note that $\rho_u$ is strictly increasing so that it is invertible, and $\rho_u^{-1}$ is also strictly increasing. From equation (\ref{e:conjugacion}) it follows that $\psi_u = u \circ \rho_u^{-1}$ so that $\psi_u$ is strictly increasing on the interval $[0,\varrho_1]$, where $\varrho_1=\rho_u(t_1)$. Thus $\dot{\psi}_u(\varrho) \geq 0$ when $\varrho \in (0,\varrho_1)$ and $\dot{\psi}_u(\varrho)$ exists. On the other hand, by \cite[Lemma 2]{I09}, the set on which $\dot{\psi}_u$ is not defined or is different from $\pm 1$ has measure zero. Thus $\dot{\psi}_u(\varrho) = 1$ for almost all $\varrho \in (0,\varrho_1)$. Using the fact that $\psi_u$ is absolutely continuous we obtain from the Fundamental Theorem of Calculus that $$ \psi_u(\varrho)-\psi_u(0)=\int_0^\varrho \dot{\psi}_u(\tau)\,\text{d}\tau=\int_0^\varrho \text{d}\tau=\varrho, \text{ for all } \varrho \in [0,\varrho_1].$$ Taking into account that $\psi_u\big(\rho_u(0)\big)=u(0)$ it follows that $\psi_u(0)=u_{\min,1}$ so that $$\psi_u(\varrho)=\varrho+u_{\min,1}, \text{ for all } \varrho \in [0,\varrho_1].$$ \noindent The details for the intervals $[\varrho_1,\varrho_2]$, $[\varrho_2,\varrho_3]$, and $[\varrho_3,\varrho_4]$ are given hereafter.
\begin{itemize} \item By definition of $u$ we have that $u$ is strictly increasing on the interval $[0,t_1]$ so that $$\varrho_1 = \rho_u(t_1) = \int_0^{t_1} |\dot{u}(t)| \text{d}t = \int_0^{t_1} \dot{u}(t) \text{d}t = u(t_1)-u(0) = u_{\max,1}-u_{\min,1}.$$ Also, from $\psi_u(\varrho)=\varrho +u_{\min,1}$ for $\varrho \in [0,\varrho_1]$ we get $$\psi_u(\varrho_1)=\varrho_1+u_{\min,1}=u_{\max,1}.$$ Note that $\rho_u$ is strictly increasing so that it is invertible, and $\rho_u^{-1}$ is also strictly increasing. From equation (\ref{e:conjugacion}) it follows that $\psi_u = u \circ \rho_u^{-1}$ so that $\psi_u$ is strictly decreasing on the interval $[\varrho_1,\varrho_2]$, where $\varrho_2=\rho_u(t_2)$. Thus $\dot{\psi}_u(\varrho) \leq 0$ when $\varrho \in (\varrho_1,\varrho_2)$ and $\dot{\psi}_u(\varrho)$ exists. On the other hand, by \cite[Lemma 2]{I09}, the set on which $\dot{\psi}_u$ is not defined or is different from $\pm 1$ has measure zero. Thus $\dot{\psi}_u(\varrho) = -1$ for almost all $\varrho \in (\varrho_1,\varrho_2)$. Using the fact that $\psi_u$ is absolutely continuous we obtain from the Fundamental Theorem of Calculus that $$ \psi_u(\varrho)-\psi_u(\varrho_1)=\int_{\varrho_1}^\varrho \dot{\psi}_u(\tau)\,\text{d}\tau=\int_{\varrho_1}^\varrho -1 \;\text{d}\tau=\varrho_1-\varrho, \text{ for all } \varrho \in [\varrho_1,\varrho_2],$$ which leads to $$\psi_u(\varrho)=\psi_u(\varrho_1)+\varrho_1-\varrho=u_{\max,1}+\varrho_1-\varrho.$$ \item By definition of $u$ we have that $u$ is strictly decreasing on the interval $[t_1,t_2]$ so that \begin{eqnarray*} \varrho_2 &=& \rho_u(t_2) = \int_0^{t_2} |\dot{u}(t)| \text{d}t = \underbrace{\int_0^{t_1} |\dot{u}(t)| \text{d}t}_{\varrho_1}+\int_{t_1}^{t_2} -\dot{u}(t) \text{d}t \\ &=&\varrho_1+ u(t_1)-u(t_2) = \varrho_1+u_{\max,1}-u_{\min,2}.
\end{eqnarray*} Also, from $\psi_u(\varrho)=u_{\max,1}+\varrho_1-\varrho$ for $\varrho \in [\varrho_1,\varrho_2]$ we get $$\psi_u(\varrho_2)=u_{\max,1}+\varrho_1-\varrho_2=u_{\min,2}.$$ From equation (\ref{e:conjugacion}) it follows that $\psi_u = u \circ \rho_u^{-1}$ so that $\psi_u$ is strictly increasing on the interval $[\varrho_2,\varrho_3]$, where $\varrho_3=\rho_u(t_3)$. Thus $\dot{\psi}_u(\varrho) \geq 0$ when $\varrho \in (\varrho_2,\varrho_3)$ and $\dot{\psi}_u(\varrho)$ exists. On the other hand, by \cite[Lemma 2]{I09}, the set on which $\dot{\psi}_u$ is not defined or is different from $\pm 1$ has measure zero. Thus $\dot{\psi}_u(\varrho) = 1$ for almost all $\varrho \in (\varrho_2,\varrho_3)$. Using the fact that $\psi_u$ is absolutely continuous we obtain from the Fundamental Theorem of Calculus that $$ \psi_u(\varrho)-\psi_u(\varrho_2)=\int_{\varrho_2}^\varrho \dot{\psi}_u(\tau)\,\text{d}\tau=\int_{\varrho_2}^\varrho \text{d}\tau=\varrho-\varrho_2, \text{ for all } \varrho \in [\varrho_2,\varrho_3],$$ which leads to $$\psi_u(\varrho)=\psi_u(\varrho_2)+\varrho-\varrho_2=u_{\min,2}+\varrho-\varrho_2.$$ \item By definition of $u$ we have that $u$ is strictly increasing on the interval $[t_2,t_3]$ so that \begin{eqnarray*} \varrho_3 &=& \rho_u(t_3) = \int_0^{t_3} |\dot{u}(t)| \text{d}t = \underbrace{\int_0^{t_2} |\dot{u}(t)| \text{d}t}_{\varrho_2}+\int_{t_2}^{t_3} \dot{u}(t) \text{d}t \\ &=&\varrho_2+ u(t_3)-u(t_2) = \varrho_2+u_{\max,2}-u_{\min,2}. \end{eqnarray*} Also, from $\psi_u(\varrho)=u_{\min,2}+\varrho-\varrho_2$ for $\varrho \in [\varrho_2,\varrho_3]$ we get $$\psi_u(\varrho_3)=u_{\min,2}+\varrho_3-\varrho_2=u_{\max,2}.$$ From equation (\ref{e:conjugacion}) it follows that $\psi_u = u \circ \rho_u^{-1}$ so that $\psi_u$ is strictly decreasing on the interval $[\varrho_3,\varrho_4]$, where $\varrho_4=\rho_u(t_4)$. Thus $\dot{\psi}_u(\varrho) \leq 0$ when $\varrho \in (\varrho_3,\varrho_4)$ and $\dot{\psi}_u(\varrho)$ exists.
On the other hand, by \cite[Lemma 2]{I09}, the set on which $\dot{\psi}_u$ is not defined or is different from $\pm 1$ has measure zero. Thus $\dot{\psi}_u(\varrho) = -1$ for almost all $\varrho \in (\varrho_3,\varrho_4)$. Using the fact that $\psi_u$ is absolutely continuous we obtain from the Fundamental Theorem of Calculus that $$ \psi_u(\varrho)-\psi_u(\varrho_3)=\int_{\varrho_3}^\varrho \dot{\psi}_u(\tau)\,\text{d}\tau=\int_{\varrho_3}^\varrho -1 \;\text{d}\tau=\varrho_3-\varrho, \text{ for all } \varrho \in [\varrho_3,\varrho_4],$$ which leads to $$\psi_u(\varrho)=\psi_u(\varrho_3)+\varrho_3-\varrho=u_{\max,2}+\varrho_3-\varrho.$$ \end{itemize} As a summary, we have \begin{equation}\label{e:uuniversal} \psi_u(\varrho)=\begin{cases}\begin{array}{ll} \varrho+u_{\min,1}& \mbox{for } \varrho\in[0,\varrho_1],\\ -\varrho+\varrho_1+u_{\max,1}& \mbox{for } \varrho\in[\varrho_1,\varrho_2],\\ \varrho-\varrho_2+u_{\min,2}& \mbox{for } \varrho \in[\varrho_2,\varrho_3],\\ -\varrho+\varrho_3+u_{\max,2}& \mbox{for } \varrho \in[\varrho_3,\varrho_4], \end{array} \end{cases} \end{equation} where $\varrho_1=\;u_{\max,1}-u_{\min,1}>0,$ $\varrho_2=\;\varrho_1+u_{\max,1}-u_{\min,2}>\varrho_1,$ $\varrho_3=\;\varrho_2+u_{\max,2}-u_{\min,2}>\varrho_2,$ and $\varrho_4=\;\varrho_3+u_{\max,2}-u_{\min,1}>\varrho_3$. The function $\psi_u$ is continuous and $\varrho_4$--periodic. Its graph in the interval $[0,\varrho_4]$ is displayed in Figure \ref{MLX4}.
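This construction can be checked numerically. The following sketch uses hypothetical sample values $u_{\min,1}=0$, $u_{\max,1}=2$, $u_{\min,2}=1$, $u_{\max,2}=3$ and breakpoints $t_1,\dots,t_4=1,2,3,4$ (so that $\varrho_1,\dots,\varrho_4=2,3,5,8$); it implements the variation function $\rho_u$ and the normalized input $\psi_u$ of Equation \eqref{e:uuniversal}, and verifies the identity $\psi_u\big(\rho_u(t)\big)=u(t)$ on a grid over one period.

```python
# Hypothetical sample data: u_min1=0, u_max1=2, u_min2=1, u_max2=3,
# breakpoints t1..t4 = 1, 2, 3, 4 (piecewise-linear bimodal input).
def u(t):
    t = t % 4.0
    if t <= 1.0:   return 2.0 * t                 # 0 -> 2
    if t <= 2.0:   return 2.0 - (t - 1.0)         # 2 -> 1
    if t <= 3.0:   return 1.0 + 2.0 * (t - 2.0)   # 1 -> 3
    return 3.0 - 3.0 * (t - 3.0)                  # 3 -> 0

def rho(t):
    """Variation function rho_u(t) = int_0^t |u'(tau)| d tau on one period."""
    if t <= 1.0:   return 2.0 * t
    if t <= 2.0:   return 2.0 + (t - 1.0)
    if t <= 3.0:   return 3.0 + 2.0 * (t - 2.0)
    return 5.0 + 3.0 * (t - 3.0)

# Normalized input of Eq. (e:uuniversal) with rho_1..rho_4 = 2, 3, 5, 8.
def psi(r):
    if r <= 2.0:   return r + 0.0             # r + u_min1
    if r <= 3.0:   return -r + 2.0 + 2.0      # -r + rho1 + u_max1
    if r <= 5.0:   return r - 3.0 + 1.0       # r - rho2 + u_min2
    return -r + 5.0 + 3.0                     # -r + rho3 + u_max2

# Check psi_u(rho_u(t)) == u(t) on a grid covering one period.
assert all(abs(psi(rho(t)) - u(t)) < 1e-12
           for t in [0.05 * k for k in range(80)])
```

The check succeeds because $\rho_u$ is affine with slope $|\dot u|$ on each monotonicity interval, so composing it with the unit-slope function $\psi_u$ recovers $u$ exactly.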
\begin{figure}[H] \centering \includegraphics[scale=0.2]{x4.pdf} \caption{$\psi_u(\varrho)$ versus $\varrho$.} \label{MLX4} \end{figure} \subsubsection{Analytic expression of the hysteresis and minor loops} \label{Analytical expression of the hysteresis and minor loops} Applying Theorem \ref{t:conv} (b) (Equation \eqref{eqFcirc1}) to the particular input $\psi_u$, and denoting for simplicity $y^\circ:=y^{\circ}_{\psi_u}$ we obtain that $$ y^{\circ}(\varrho) =\sigma_0 \mathrm{e}^{-\frac{\sigma_0}{g(0)}\varrho}\left( \frac{y^\circ(0)}{\sigma_0}+ \int_0^\varrho \mathrm{e}^{\frac{\sigma_0}{g(0)}\tau} \dot{\psi}_u(\tau)d\tau\right) \mbox{ for } \varrho \in [0,\varrho_4]. $$ Since this expression can be explicitly integrated, we obtain \begin{eqnarray} y^\circ(\varrho)&=& \mathrm{e}^{-\frac{\sigma_0}{g(0)}\varrho}\left( y^\circ(0)+ g(0)\left[ \mathrm{e}^{\frac{\sigma_0}{g(0)}\varrho} -1\right]\right) \mbox{ for } \varrho \in [0,\varrho_1], \mbox{ with } \label{ycircdezeroML}\\ y^\circ(0) &=&g(0)\,\frac{\mathrm{e}^{-\frac{\sigma_0}{g(0)}\varrho_4}}{1-\mathrm{e}^{-\frac{\sigma_0}{g(0)}\varrho_4}}\left( 2\mathrm{e}^{\frac{\sigma_0}{g(0)}\varrho_1} -2\mathrm{e}^{\frac{\sigma_0}{g(0)}\varrho_2}+2\mathrm{e}^{\frac{\sigma_0}{g(0)}\varrho_3}-\mathrm{e}^{\frac{\sigma_0}{g(0)}\varrho_4}-1\right); \notag \end{eqnarray} \begin{eqnarray} y^\circ(\varrho)&=&\mathrm{e}^{-\frac{\sigma_0}{g(0)}(\varrho-\varrho_1)}\left( y^\circ(\varrho_1)- g(0)\left[ \mathrm{e}^{\frac{\sigma_0}{g(0)}(\varrho-\varrho_1)} -1\right]\right) \mbox{ for } \varrho \in [\varrho_1,\varrho_2], \mbox{ with }\label{[rho1rho2ycirc]}\\ y^\circ(\varrho_1)&=&\mathrm{e}^{-\frac{\sigma_0}{g(0)}\varrho_1}\left( y^\circ(0)+ g(0)\left[ \mathrm{e}^{\frac{\sigma_0}{g(0)}\varrho_1} -1\right]\right); \notag \end{eqnarray} \begin{eqnarray} y^\circ(\varrho)&=&\mathrm{e}^{-\frac{\sigma_0}{g(0)}(\varrho-\varrho_2)}\left( y^\circ(\varrho_2)+ g(0)\left[ \mathrm{e}^{\frac{\sigma_0}{g(0)}(\varrho-\varrho_2)} -1\right]\right) \mbox{ for 
} \varrho \in [\varrho_2,\varrho_3], \mbox{ with } \label{[rho2rho3ycirc]}\\ y^\circ(\varrho_2)&=&\mathrm{e}^{-\frac{\sigma_0}{g(0)}(\varrho_2-\varrho_1)}\left( y^\circ(\varrho_1)- g(0)\left[ \mathrm{e}^{\frac{\sigma_0}{g(0)}(\varrho_2-\varrho_1)} -1\right]\right);\notag \end{eqnarray} and \begin{eqnarray} y^\circ(\varrho)&=&\mathrm{e}^{-\frac{\sigma_0}{g(0)}(\varrho-\varrho_3)}\left( y^\circ(\varrho_3)- g(0)\left[ \mathrm{e}^{\frac{\sigma_0}{g(0)}(\varrho-\varrho_3)} -1\right]\right) \mbox{ for } \varrho \in [\varrho_3,\varrho_4], \mbox{ with } \label{[rho3rho4ycirc]}\\ y^\circ(\varrho_3)&=&\mathrm{e}^{-\frac{\sigma_0}{g(0)}(\varrho_3-\varrho_2)}\left( y^\circ(\varrho_2)+ g(0)\left[ \mathrm{e}^{\frac{\sigma_0}{g(0)}(\varrho_3-\varrho_2)} -1\right]\right). \notag \end{eqnarray} Finally, observe that the hysteresis loop of the LuGre model that corresponds to the input $\psi_u$ is the set \begin{equation} \label{analytMLeq} G_{\psi_u}^\circ = \big \{ \big(\psi_u(\varrho),y^\circ(\varrho)\big),\,\varrho \in [0,\varrho_4] \big \}. \end{equation} Taking into account the fact that $y^\circ_u=y^\circ \circ \rho_u$ and $u=\psi_u \circ \rho_u$ it follows from \cite[Lemma 8]{I09} that $G_{\psi_u}^\circ=G_u^\circ = \big \{ \big(u(t),y_u^\circ(t)\big),\,t \in [0,T] \big \}$, thus proving statement (a) of Theorem \ref{t:main}. To prove statement (b) of Theorem \ref{t:main} observe that the minor loop corresponding to the input $\psi_u$ is the part of the hysteresis loop \eqref{analytMLeq} that corresponds to $\varrho \in [\varrho_1,\varrho_5]$, where $$ \varrho_5=\rho_u(t_5)=\int_{0}^{t_5} |\dot{u}(t)|dt=\varrho_2+u_{\max,1}-u_{\min,2}\in(\varrho_2,\varrho_3], $$ and where $t_2<t_5\leq t_3$ is the time such that $u(t_5)=u(t_1)$, see Figure \ref{f:figugen}. This set is the union of the two arcs $\big\{\big(\psi_u(\varrho),y^\circ(\varrho)\big),\, \varrho \in [\varrho_1,\varrho_2]\big\}$ and $\big\{\big(\psi_u(\varrho),y^\circ(\varrho)\big),\,\varrho \in [\varrho_2,\varrho_5]\big\}$.
With this argument, the proof of Theorem \ref{t:main} is complete. \\ We remark that the explicit construction of the hysteresis loop, and therefore the identification of the arcs corresponding to the minor loops given in the proof of Theorem \ref{t:main}, can be generalized to multimodal input functions giving rise to hysteresis loops with many minor loops. This can be done using the normalized input and Equation \eqref{eqFcirc1}. \section{Examples}\label{s:numsim} \subsection{Example 1: an approach to the concept of hysteresis loop}\label{ss:ex1} A hysteresis system is one for which the output-versus-input graph presents a loop in the steady state for slow inputs \cite{Ikhouane2017}. The way to obtain the hysteresis loop that corresponds to a given input is as follows. Consider a periodic input $t \rightarrow u(t)$. Composing this input with the time-scale change $t \rightarrow t/\gamma$ provides a new input $u_\gamma:t \rightarrow u( t/\gamma)$. This new input gives rise to an output $y_\gamma(t)$ such that the corresponding output-versus-input graph $\big\{\big(u_\gamma(t),y_\gamma(t)\big),t \geq 0\big\}$ converges to a fixed curve (the hysteresis loop) in steady state when $\gamma \rightarrow \infty$. Our aim in this section is to illustrate this process using an example. \\ Consider for instance the following system constructed using the Dahl model: \begin{equation}\label{eqPCCp}\begin{split} \dot{x}_\gamma(t)=&\;\dot{u}_\gamma(t)-|\dot{u}_\gamma(t)|x_\gamma(t),\\ \dot{y}_\gamma(t)=&-y_\gamma(t)+x_\gamma(t),\\ x_\gamma(0)=& \;0,\quad y_\gamma(0)=0, \end{split}\end{equation} with input $u_\gamma(t)=\sin(2\pi t/\gamma)$ and output $y_\gamma(t)$. Figure \ref{Figguraseis} provides the output-versus-input graph $\big\{\big(u_\gamma(t),y_\gamma(t)\big),\, t \geq 0\big\}$ of system (\ref{eqPCCp}) for increasing values of $\gamma$. 
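System (\ref{eqPCCp}) is simple enough to reproduce directly. The following Python sketch uses a forward-Euler discretization (the step size and function name are our own choices, not part of the original simulation, which may use a different integrator) and collects the output-versus-input pairs that trace the loop:

```python
import math

def simulate_dahl(gamma, steps_per_period=20000, periods=3):
    """Forward-Euler integration of system (eqPCCp):
       x' = u' - |u'| x,   y' = -y + x,   u(t) = sin(2*pi*t/gamma).
    Returns the output-versus-input pairs (u(t), y(t))."""
    n = steps_per_period * periods
    dt = periods * gamma / n          # gamma is the input period
    x = y = t = 0.0
    graph = []
    for _ in range(n):
        du = (2 * math.pi / gamma) * math.cos(2 * math.pi * t / gamma)
        x += dt * (du - abs(du) * x)  # Dahl state
        y += dt * (-y + x)            # first-order output filter
        t += dt
        graph.append((math.sin(2 * math.pi * t / gamma), y))
    return graph

# slower inputs (larger gamma) trace closed loops closer to the hysteresis loop
loop = simulate_dahl(gamma=200.0)
```

Plotting the second coordinate against the first for increasing `gamma` reproduces the convergence of the steady-state graph to a fixed closed curve described above.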
\begin{figure}[H] \centering \includegraphics[scale=0.15]{figure4.pdf} \caption{Output of system (\ref{eqPCCp}), $y_\gamma$ versus $u_\gamma$. Dotted: transient; solid: steady state for $\gamma = 2000$; dashed: steady state for $\gamma = 200$; dash-dotted: steady state for $\gamma = 20$.} \label{Figguraseis} \end{figure} \noindent It can be seen that, as $\gamma \rightarrow \infty$, the steady-state part of the output-versus-input graph converges to a fixed closed curve. This curve is the hysteresis loop of system (\ref{eqPCCp}). \begin{figure}[H] \centering \includegraphics[scale=0.15]{mlfig2.pdf} \caption{ The macrodamping friction function $g(\nu)$ versus $\nu$.} \label{mlfig2} \end{figure} \subsection{Example 2: the hysteresis loop of the LuGre model}\label{ss:ex2} The aim of this section is to illustrate the concepts presented in Section \ref{ss:hystloop} by means of numerical simulations. Following \cite{JC08}, to approximate the Stribeck effect we set: $$ g(\nu)=F_{c}+\left(F_{s}-F_{c}\right)\mathrm{e}^{-\left|\nu/v_{s}\right|^{\beta}}\,\mbox{ for }\nu\in\mathbb{R}, $$ where $F_{c}>0$ is the Coulomb friction force, $F_{s}>0$ is the stiction force, $v_{s}>0$ is the Stribeck velocity, and $\beta$ is a strictly positive constant. The function $f$ is taken to be zero. The values of the different constants are taken to be $\sigma_{0}=1$, $\sigma_1=1$, $F_c=1$, $F_s=2$, $v_{s}=1$, $\beta=1$, see Figure \ref{mlfig2}. The input is the continuous $2$-periodic piecewise-linear function defined by $u(t)=t$ for $t\in [0,1]$ and $u(t)=2-t$ for $t \in [1,2]$; see Figure \ref{mlfig1}. Observe that $\psi_u=u$. \begin{figure}[H] \centering \includegraphics[scale=0.25]{mlfig1.pdf} \caption{$u(t)$ versus $t$.} \label{mlfig1} \end{figure} We take $x\left(0\right)=x_0=0$. With these values we obtain $y_\gamma$ by a numerical integration of Equations (\ref{equation13}) using the Matlab solver \texttt{ode23s}. 
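Equations (\ref{equation13}) are not reproduced in this section; as an illustration only, the sketch below integrates what we assume to be the standard LuGre equations, $\dot z = v - \sigma_0|v|z/g(v)$ and $F=\sigma_0 z+\sigma_1\dot z$ (with $f\equiv 0$), under the triangular input of Example 2 using forward Euler instead of \texttt{ode23s}. All function names and step sizes are our own choices:

```python
import math

def lugre_step(z, v, dt, sigma0=1.0, sigma1=1.0, Fc=1.0, Fs=2.0, vs=1.0, beta=1.0):
    """One forward-Euler step of the (assumed) standard LuGre equations
       z' = v - sigma0*|v|*z/g(v),   F = sigma0*z + sigma1*z',
    with the Stribeck curve g(v) = Fc + (Fs - Fc)*exp(-|v/vs|**beta)."""
    g = Fc + (Fs - Fc) * math.exp(-abs(v / vs) ** beta)
    zdot = v - sigma0 * abs(v) * z / g
    return z + dt * zdot, sigma0 * z + sigma1 * zdot

def simulate(periods=5, n=4000):
    """Triangular input of Example 2: u(t) = t on [0,1], 2 - t on [1,2],
    so du/dt is the square wave +/-1."""
    dt = 2.0 * periods / n
    z, t, out = 0.0, 0.0, []
    for _ in range(n):
        v = 1.0 if (t % 2.0) < 1.0 else -1.0
        z, F = lugre_step(z, v, dt)
        t += dt
        out.append(F)
    return out

forces = simulate()
```

After the initial transient, plotting the force against $u(t)$ yields a closed curve of the kind shown in the figures of this section.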
Also, using Equations (\ref{eqFstar1})--(\ref{eqFstar2}) we obtain $y^{\star}_{u}$. Figure \ref{mlfig3} provides the plots $y_\gamma(t)$ versus $t$ for $\gamma=1,10,100$ along with the plot $y^{\star}_{u}(t)$ versus $t$. It can be seen that as $\gamma$ increases, $y_\gamma$ converges to $y^{\star}_{u}$. \begin{figure}[H] \centering \includegraphics[scale=0.15]{mlfig3.pdf} \caption{$y_\gamma(t)$ versus $t$. Dashed $\gamma=1$, dash-dotted $\gamma=10$, dotted $\gamma=100$; solid $y^{\star}_{u}(t)$ versus $t$.} \label{mlfig3} \end{figure} The functions $y^{\star}_{u,k}$ are given by $y^{\star}_{u,k}(t)=y^{\star}_{u}(t+kT),t \in [0,T],k\in \mathbb{N}$ whilst $y_u^\circ$ is calculated from Equations (\ref{eqFcirc1}) and (\ref{eqy(0)circ}). Figure \ref{mlfig4} provides the plots $y^{\star}_{u,k}(t)$ versus $t$ for increasing values of $k$. It can be seen that $y^{\star}_{u,k}$ converges to $y_u^\circ$ as $k \rightarrow \infty$. \begin{figure}[H] \centering \includegraphics[scale=0.15]{mlfig4.pdf} \caption{Convergence of the functions $y^{\star}_{u,k}$ to $y_u^\circ$. Dashed $y^{\star}_{u,k}(t)$ versus $t$ for $k=0,1,2,3$; solid $y_u^\circ(t)$ versus $t$.} \label{mlfig4} \end{figure} \begin{figure}[H] \centering \includegraphics[scale=0.15]{mlfig5.pdf} \caption{Convergence of the output-versus-input graph $G^\star_u$ to the hysteresis loop $G^\circ_u$. Dotted $G^\star_u$; solid $G^\circ_u$.} \label{mlfig5} \end{figure} As in Example 1, the hysteresis loop is the output-versus-input graph obtained for very slow inputs (that is when $\gamma \rightarrow \infty$) in steady state (that is when $k \rightarrow \infty$). For a given $\gamma$, the corresponding output-versus-input graph is the set $G_{u \circ s_\gamma} = \big \{ \big(u \circ s_\gamma(t),F_\gamma(t)\big),t \geq 0 \big \} = \big \{ \big(u(t),F_\gamma \circ s_{1/\gamma}(t)=y_\gamma(t)\big),t \geq 0 \big \}$. 
Owing to Theorem \ref{t:conv} (a) and to \cite[Lemma 9]{I09} it follows that the graphs $G_{u \circ s_\gamma}$ converge in a sense detailed in \cite{I09} to the graph $G^\star_u = \big \{ \big(u(t),y^{\star}_{u}(t)\big),t \geq 0 \big \}$ as $\gamma \rightarrow \infty$. Equations (\ref{eqFcirc1}) and (\ref{eqy(0)circ}) provide the analytic expression of the hysteresis loop (\ref{eqhystloopLuGre}). Finally, Figure \ref{mlfig5} provides the graph $G^\star_u$ along with the hysteresis loop $G^\circ_u$. \subsection{Example 3: process of convergence that leads to the hysteresis and minor loops} \label{ML0Numerical simulations} The aim of Sections \ref{ML0Numerical simulations} and \ref{MLNumerical simulations} is to illustrate Theorem \ref{t:main} by means of numerical simulations. Section \ref{ML0Numerical simulations} focuses on the process of convergence that leads to the hysteresis loop. Section \ref{MLNumerical simulations} focuses on the variation of the minor loop with the model's parameters. The function $g$ that characterizes the Stribeck effect is the same as in Section \ref{ss:ex2}. Also, the function $f$ is taken to be zero. The input is the continuous $\varrho_4$-periodic piecewise-linear function defined by Equation (\ref{e:uuniversal}) where $u_{\min,1}=0$, $u_{\min,2}=0.2$, $u_{\max,1}=1$, $u_{\max,2}=1.5$, $\varrho_1=1$, $\varrho_2 = 1.8$, $\varrho_3 = 3.1$, $\varrho_4 = 4.6$, see Figure \ref{figv2_ref4}. Observe that $\psi_u=u$ and that the normalized variable $\varrho$ is equal to time $t$. \begin{figure}[H] \centering \includegraphics[scale=0.15]{figv2_4.pdf} \caption{$u(t)$ versus $t$.} \label{figv2_ref4} \end{figure} We take $x\left(0\right)=x_0=0$. With these values we obtain $y_\gamma$ by a numerical integration of Equations (\ref{equation13}) using the Matlab solver \texttt{ode23s}. Also, using Equations (\ref{eqFstar1})--(\ref{eqFstar2}) we obtain $y^{\star}_{u}$. 
Figure \ref{figv2_ref1} provides the plots $y_\gamma(t)$ versus $t$ for $\gamma=1,10,100$ along with the plot $y^{\star}_{u}(t)$ versus $t$. It can be seen that as $\gamma$ increases, $y_\gamma$ converges to $y^{\star}_{u}$. \begin{figure}[H] \centering \includegraphics[scale=0.15]{figv2_1.pdf} \caption{$y_\gamma(t)$ versus $t$. Dashed $\gamma=1$, dash-dotted $\gamma=10$, dotted $\gamma=100$; solid $y^{\star}_{u}(t)$ versus $t$.} \label{figv2_ref1} \end{figure} The functions $y^{\star}_{u,k}$ are given by $y^{\star}_{u,k}(t)=y^{\star}_{u}(t+k\varrho_4),t \in [0,\varrho_4],k\in \mathbb{N}$ whilst $y_u^\circ$ is calculated from Equations (\ref{eqFcirc1}) and (\ref{eqy(0)circ}). Figure \ref{figv2_ref2} provides the plots $y^{\star}_{u,k}(t)$ versus $t$ for increasing values of $k$. It can be seen that $y^{\star}_{u,k}$ converges to $y_u^\circ$ as $k \rightarrow \infty$. \begin{figure}[H] \centering \includegraphics[scale=0.15]{figv2_2.pdf} \caption{Convergence of the functions $y^{\star}_{u,k}$ to $y_u^\circ$. Dashed $y^{\star}_{u,k}(t)$ versus $t$ for $k=0,1,2$; solid $y_u^\circ(t)$ versus $t$.} \label{figv2_ref2} \end{figure} Figure \ref{figv2_ref3} provides the graphs $\big\{\big(u(t), y^{\star}_{u,k}(t)\big),t \in [0,\varrho_4]\big\}$ for $k=0,1,2$. It can be seen that these graphs converge to the hysteresis loop $\big\{\big(u(t), y_u^{\circ}(t)\big),t \in [0,\varrho_4]\big\}$. \begin{figure}[H] \centering \includegraphics[scale=0.15]{figv2_3.pdf} \caption{Convergence of the graphs $\big\{\big(u(t), y^{\star}_{u,k}(t)\big),t \in [0,\varrho_4]\big\}$ to the hysteresis loop $\big\{\big(u(t), y_u^{\circ}(t)\big),t \in [0,\varrho_4]\big\}$.} \label{figv2_ref3} \end{figure} \subsection{Example 4: Variation of the minor loop with the model's parameters} \label{MLNumerical simulations} We consider the LuGre model of Section \ref{ss:ex2} with the value $\sigma_0=6$. 
The input $u$ is the one given in \eqref{e:uuniversal} (thus a normalized one) with $u_{\min,1}=0$, $u_{\min,2}=0.5$, $u_{\max,1}=1$, $u_{\max,2}=1.5$, with its corresponding values of $\varrho_i=t_i$ for $i=1,\ldots,5$, see Figure \ref{mlfig6}. The hysteresis loop which is given in Figure \ref{mlfig7} is obtained using Equations \eqref{ycircdezeroML}--\eqref{[rho3rho4ycirc]}. Observe that the shape of the minor loop depends greatly on the parameters $\sigma_0$, $F_s$, and on the relative values $u_{\min,2}-u_{\min,1}$, $u_{\max,1}-u_{\min,1}$, and $u_{\max,2}-u_{\min,1}$, see Figures \ref{mlfig8} and \ref{mlfig9}. \begin{figure}[H] \centering \includegraphics[scale=0.25]{mlfig6.pdf} \caption{$u(t)$ versus $t$} \label{mlfig6} \end{figure} \begin{figure}[H] \centering \includegraphics[scale=0.25]{mlfig7.pdf} \caption{Hysteresis loop $G_u^\circ$. The marker \textit{open circle} corresponds to the point $\big(u(t_1),y^\circ(t_1)\big)$ whilst the marker \textit{rectangle} corresponds to the point $\big(u(t_5),y^\circ(t_5)\big)$. The minor loop is plotted in solid line.} \label{mlfig7} \end{figure} \begin{figure}[H] \centering \includegraphics[scale=0.25]{mlfig8.pdf} \caption{Hysteresis loop $G_u^\circ$ for $\sigma_0=1$ (minor loop in solid line).} \label{mlfig8} \end{figure} \begin{figure}[H] \centering \includegraphics[scale=0.25]{mlfig9.pdf} \caption{Hysteresis loop $G_u^\circ$ for $\sigma_0=1$ and $u_{\min,2}=0.2$ (minor loop in solid line).} \label{mlfig9} \end{figure} \section{Conclusions} \label{Conclusions and comments} Although the phenomenon of hysteresis has been studied since the second half of the 19th century, the behavior of minor loops as a specific issue did not emerge as a research subject until the second half of the 20th century. The present paper is framed within the increasing interest in the study of the behavior of minor loops. 
To the best of our knowledge, this work is the first to provide an explicit analytic expression of the minor loops of the LuGre and the Dahl models. Our construction can be generalized to multimodal input functions giving rise to hysteresis loops with many minor loops. The obtained analytic expressions have been illustrated by means of numerical simulations. \section*{Acknowledgment} Funding: The authors are supported by the Ministry of Economy, Industry and Competitiveness -- State Research Agency of the Spanish Government through grant DPI2016-77407-P (MINECO/AEI/FEDER, UE). \section*{Compliance with Ethical Standards} Conflict of Interest: The authors declare that they have no conflict of interest. \section*{References}
\section{Introduction} Deep learning (DL) techniques are suitable for automated analysis of medical images and have been shown to outperform experts in several fields. Examples of applications of such approaches are to perform discriminative retinal image analysis, such as automated classification of fundus images for detection of referable diabetic retinopathy~\cite{gulshan2016development,quellec2017deep,gargeya2017automated,de2018clinically}, retinopathy of prematurity~\cite{brown2018automated}, exudates on the retina~\cite{khojastehCBM:2019}, and age-related macular degeneration (AMD)~\cite{burlina2019assessment,burlina2017automated,burlina2018utility}, and also for granular AMD severity classification~\cite{burlina2018use,grassmann2018deep}. However, the potential of this approach for medical images is limited because it requires large numbers of annotated images that are not currently available. It is also essential that the datasets be balanced to avoid biased training. Usually, annotated medical image datasets are neither sufficiently large nor balanced, often due to issues of privacy of medical data~\cite{esteva2019guide}. \begin{figure}[!ht] \centering \includegraphics[scale=0.11]{figures_pdf/amd_example.pdf} \caption{ Retina fundus image positive to age-related macular degeneration identified by the presence of drusen. The image was extracted from the iChallenge-AMD dataset~\cite{fu2020adam}.} \label{f.amd_example} \end{figure} Different approaches have been used to address this shortcoming, such as transfer learning using previously trained models and data augmentation. Pre-trained networks have limitations, for they are usually trained in a different learning domain. Additional images generated by spatial transformations~\cite{perez2017effectiveness} are acknowledged not to guarantee data variability. A promising solution to generate synthetic images lies in Generative Adversarial Networks (GANs)~\cite{goodfellow2014generative}. 
Such an approach performs data augmentation by competitively creating new samples, i.e., a generator attempts to create synthetic images to fool the discriminator, which then tries to identify whether they are fake or real. GANs have provided exceptional results over a wide variety of medical applications, such as liver lesion classification~\cite{frid2018gan}, Barrett's esophagus identification~\cite{Souza2020:CBM,souza2021fine}, and chest pathology recognition~\cite{salehinejad2018generalization}. However, these studies did not address datasets with multiple lesion types within a class, large class imbalance, or variable image quality. Bellemo et al.~\cite{bellemo2018generative} described the possible advantages and limitations of synthetic retina image generation using GANs. The authors highlighted the potential clinical applications of GANs concerning early- and late-stage AMD classification. Das et al.~\cite{das2020unsupervised} proposed a quick and reliable super-resolution approach for Optical Coherence Tomography (OCT) imagery using GANs, achieving a classification accuracy of $96.54\%$. Burlina et al.~\cite{burlina2019assessment} trained a Progressive GAN~\cite{karras2017progressive} on $133,821$ color fundus images from $4,613$ age-related eye disease individuals to learn how to generate synthetic fundus images with and without AMD. Two retina specialists were asked to distinguish between images with and without AMD for original and synthetic images. Recognition rates varied from around $84\%$ for the first specialist to about $89\%$ for the second. The accuracies between synthetic and real images varied only slightly for both specialists. Although the results are very promising, the authors have patented their methodology, and hence an alternate method is required. 
Age-related macular degeneration is a major cause of vision impairment and affected approximately $200$ million people worldwide in 2020~\cite{jonas2014global}. With an aging population, such numbers are expected to rise to $288$ million by 2040~\cite{wong2014global}. AMD is a progressive disorder of the macular region that causes central vision loss and is one of the most common causes of irreversible vision impairment in people over $50$ years old~\cite{harvey2003common}. Figure~\ref{f.amd_example} depicts an example of a retina fundus affected by AMD. This work introduces an alternative approach for generating synthetic images for training deep networks and tests it for AMD identification. Retina images, positive and negative to AMD, from multiple databases were used to feed a GAN to provide a range of image qualities and lesions. Ten different GAN architectures were compared for generating synthetic eye-fundus images, and the quality was assessed using the Fréchet Inception Distance (FID), two independent label-blinded clinical experts, and deep-learning classification. The primary contributions of this work are fourfold: \begin{itemize} \item To provide an open-source tool that can generate high-quality medical images of both disease and healthy conditions; \item To generate and validate eye-fundus images suitable for training deep-learning networks for computerized detection of AMD; \item To introduce StyleGAN2-ADA~\cite{Karras2020ada} for eye fundus image generation; and \item To present a comparison among several GAN architectures for synthetic image generation. \end{itemize} The remainder of this paper is organized as follows. Sections~\ref{s.background} and~\ref{s.methodology} present the theoretical background and methodology, respectively. Section~\ref{s.experimental_results} describes the experimental results and Section~\ref{s.conclusion} states the conclusions and future work. 
\section{Theoretical Background} \label{s.background} \subsection{Generative Adversarial Networks} \label{ss.gans} GANs are deep-learning-based generative networks. A significant improvement in more recent GANs is training stability that prevents the well-known ``mode collapse'', i.e., the situation in which the generator gets trapped in a local minimum, producing images that are very similar to each other. Gulrajani et al.~\cite{gulrajani2017improved} proposed the Wasserstein GAN with gradient penalty (WGAN-GP) to mitigate this problem, which implements a distinct loss function and improves training convergence stability. Meanwhile, Karras et al.~\cite{karras2017progressive} proposed the Progressive GAN, showing that performance gains and learning capacity can be obtained by increasing the image resolution gradually during training. \subsection{Style-based Generative Adversarial Networks} \label{ss.stylegan} The Style-based Generative Adversarial Network (StyleGAN)~\cite{karras2019style}, depicted in Figure~\ref{f.arq_stylegan}, is a state-of-the-art generative model that aims to create high-resolution images with greater fidelity. Additionally, such an architecture aims to enhance the diversity of outputs and simultaneously control image features. This gives more reliable control over the latent space, which may allow users to edit image features, such as skin color, age, gender, and facial expression. \begin{sloppypar} StyleGAN2 with Adaptive Discriminator Augmentation (StyleGAN2-ADA) has shown promising results in synthesizing high-quality images~\cite{Karras2020ada}. This variant uses an adaptive discriminator augmentation technique that stabilizes training in data-constrained environments. The method does not require modifications to the loss functions or architectures and can be used both to train the network from scratch and for fine-tuning. 
\end{sloppypar} \begin{figure}[!ht] \centering \includegraphics[scale=0.43]{figures_pdf/arq_stylegan.pdf} \caption{Standard StyleGAN architecture. In this figure, FC stands for fully connected layer, `A' denotes an affine transformation that creates the style, and `B' corresponds to noise broadcast operation. Adapted from~\cite{karras2019style}.} \label{f.arq_stylegan} \end{figure} While traditional generators are designed as single neural networks, StyleGAN introduces a dual-network-based architecture, which is described below: \begin{itemize} \item \textbf{Mapping Network}: it maps a latent input to an intermediate representation using eight fully connected layers, mitigating the entanglement issue (e.g., an $n$-dimensional latent space is converted into an $n$-dimensional intermediate space). \item \textbf{Synthesis Network}: the output of the mapping network serves as an input to another network that uses different blocks, one for each image resolution. Each block comprises an upsampling operation (resolutions are increased progressively) followed by a convolutional layer with a $3\times 3$ kernel (CONV $3\times3$). Further, Gaussian noise (B) is added to the output to feed an Adaptive Instance Normalization (AdaIn) layer, which also requires style input (high-level features) obtained by the affine transformation (A). The noise is vital to produce fine variations such as in the hair of a facial image. The aforementioned process, i.e., convolution followed by an AdaIn layer, is executed once more, and the output is used to feed the next block. The only exception concerns the first block, which replaces the upsampling operation with a constant input (CONST $4\times4\times512$), and it has one convolutional layer. 
\end{itemize} \subsection{Style-based Generative Adversarial Networks 2} \label{ss.stylegan2} One recurring problem of StyleGAN lies in creating blob-like pixels (resembling water droplets), which are more visible when examining the intermediate feature maps of the generator. Such a problem initially appears in all feature maps at a low resolution such as $64 \times 64$ and is deepened when employing higher resolutions. Karras et al.~\cite{karras2020analyzing} traced the problem and found out that AdaIn loses some crucial information when performing the instance normalization. They proposed StyleGAN2, an alternative architecture that stabilizes the generation of high-resolution images. It has a simplified first processing step achieved by normalizing the features without their mean, and it removes the noise addition. Figure~\ref{f.arq_stylegan2} illustrates a comparison between the StyleGAN and StyleGAN2 architectures. \begin{figure}[!ht] \begin{center} \begin{tabular}{cc} \includegraphics[scale=0.337]{figures_pdf/2.pdf} & \includegraphics[scale=0.337]{figures_pdf/arq_stylegan2c.pdf} \\ (a) & (b) \end{tabular} \caption{ A closer look at (a) StyleGAN and (b) StyleGAN2 architectures. The AdaIN layer has been expanded for both models for the sake of better visualization. For StyleGAN, the AdaIn layer performs normalization and modulation operations using both mean and standard deviation per feature map; bias `b' that comes from the previous convolutional layer is added to the Gaussian noise and used as an input to the AdaIn layer. The mean operation is not employed in the StyleGAN2 block, and the bias is added to the noise after the standard deviation normalization. 
Adapted from~\cite{karras2019style}.} \label{f.arq_stylegan2} \end{center} \end{figure} \section{Methodology} \label{s.methodology} This section describes the datasets that were used in the study, the techniques employed to generate the synthetic images, and the methodology to evaluate the different neural architectures considered in the experimental section. \subsection{Dataset} \label{ss.datasets} This study used eye-fundus images from three public datasets reported in the literature to capture the typical differences due to equipment and demographics. Below, we summarize the datasets' preliminary information: \begin{itemize} \item \textbf{iChallenge-AMD:} Comprises $1,200$ retinal fundus images that have been annotated for drusen and hemorrhage. The training set was made of 400 images (89 images of eyes with AMD and 311 from eyes without AMD), while the test sets contained the remaining images~\cite{fu2020adam}. \item \textbf{ODIR-2019:} Contains colored fundus images from both left and right eyes of $5,000$ patients obtained from multiple hospitals/medical centers in China, with varying image resolutions and observations from several specialists. The dataset was designed to address normal cases, six diseases (diabetes, glaucoma, cataract, AMD, hypertension, and myopia), and other diseases/abnormalities\footnote{\url{https://odir2019.grand-challenge.org/dataset/}}. The training set is made up of a structured dataset with $7,000$ images, of which $280$ images are labelled as having AMD. The testing set consists of $500$ colored fundus images, with age and gender removed. \item \textbf{RIADD:} Contains $3,200$ fundus images recorded using three different cameras and multiple conditions. The images have been annotated through the consensus of two retina experts. The dataset has been sub-divided based on six diseases/abnormalities: diabetic retinopathy, AMD, media haze, drusen, myopia, and branch retinal vein occlusion~\cite{pachade2021retinal}. 
The dataset was subdivided into three subsets: $60\%$ for training ($1,920$ images), $20\%$ for validation ($640$ images), and the remaining $20\%$ for testing purposes ($640$ images). We have used publicly available online datasets, each of which has described its patient recruitment, clinical outcomes, and human experiments ethics approval in the associated publications. Registration was necessary for access to the data from the three sources. The iChallenge-AMD dataset authors confirm in their publications that they received ethics approval from Sun Yat-sen University, China, and the Sixth People's Hospital Affiliated to Shanghai Jiao Tong University, Shanghai, China (details at \url{https://amd.grand-challenge.org/Home/}). The ODIR-2019 dataset authors confirm that all data are from routine clinical examinations and have been de-identified and published after receiving clearance from the Peking University board (details at \url{https://odir2019.grand-challenge.org/dataset/}). The RIADD dataset authors confirm that data collection was conducted after ethics approval from the Review Board of Shri Guru Gobind Singhji Institute of Engineering and Technology, Nanded, India (details at \url{https://riadd.grand-challenge.org/}). \end{itemize} The datasets mentioned above were designed to address a challenge and hence the labels of the test subset were not available. Therefore, we used only the training subset. These datasets have an imbalanced class distribution, which is an inherent problem for most medical image datasets, and using such a set generally leads to a bias towards the predominant class. Different research groups have developed these datasets with differences in the quality of the images and the demographics. Thus, these datasets offered both data imbalance and a wide range of image quality. The ODIR-2019 and RIADD datasets were organized into two subsets, AMD and non-AMD images. 
The preprocessing methodology was the same as that proposed by Fu et al.~\cite{fu2019evaluation}, which comprises a preprocessing step that detects the retinal mask using the Hough Circle Transform and then crops it to remove the impact of the black background. The cropped region was then scaled to a $224\times224$ resolution. The resultant images were submitted to a three-level manual quality grading: ``Good'', ``Usable'', and ``Reject''. Only those images graded with ``Good'' and ``Usable'' scores were used for further analysis. As a result, the number of images positive to AMD decreased from $89$ to $74$, $280$ to $227$, and $100$ to $79$ images in the iChallenge-AMD, ODIR-2019, and RIADD datasets, respectively, while the number of non-AMD images decreased from $311$ to $290$, $6,720$ to $4,993$, and $1,820$ to $1,143$ images in the iChallenge-AMD, ODIR-2019, and RIADD datasets, respectively. Figure~\ref{f.preprocess} illustrates the steps mentioned above for a sample image from the ODIR-2019 dataset. \begin{figure}[!ht] \centering \begin{tabular}{ccc} \includegraphics[width=2.3cm,height=2.3cm]{figures_pdf/p1.pdf} & \includegraphics[width=2.3cm,height=2.3cm]{figures_pdf/p2.pdf} & \includegraphics[width=2.3cm,height=2.3cm]{figures_pdf/p3.pdf} \\ (a) & (b) & (c) \end{tabular} \caption{ Sample image extracted from ODIR-2019 dataset and its corresponding transformations: (a) original image, (b) background removal using Hough Circle Transform and resizing, and (c) central cropping.} \label{f.preprocess} \end{figure} These images were then resampled to $390\times390$ pixels, followed by a cropping procedure keeping the center of the image to $256\times256$ pixels. Such a procedure is required to drive StyleGAN2-ADA to generate images focused on the macula area (Figure~\ref{f.preprocess}-c). Ultimately, images were resized to $224\times224$ pixels and normalized within the range $[-1,1]$ to be used as proper inputs to the deeper architectures considered in the manuscript. 
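The geometry of the crop-and-normalize steps can be sketched with simple arithmetic (the retinal-mask detection itself, done with the Hough Circle Transform, e.g. OpenCV's \texttt{cv2.HoughCircles}, is omitted here to keep the sketch dependency-free; the function names are ours):

```python
def center_crop_box(size, crop):
    """Return the (left, top, right, bottom) box that keeps the central
    crop x crop region of a size x size image."""
    off = (size - crop) // 2
    return (off, off, off + crop, off + crop)

def normalize_pixel(v):
    """Map an 8-bit intensity in [0, 255] to the [-1, 1] range used as
    input to the generator and classifiers."""
    return v / 127.5 - 1.0

# the pipeline above: resample to 390x390, keep the central 256x256
# macula-centred window, then feed 224x224 tensors normalized to [-1, 1]
box = center_crop_box(390, 256)   # (67, 67, 323, 323)
```

The same box computation applies unchanged to any square resample/crop pair.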
After quality assessment and selecting the images for the final dataset using the criterion described earlier, the resulting single dataset comprised a total of $7,106$ images. Of these, $6,896$ images were used to train the models ($275$ with AMD and $6,621$ without AMD) and the remaining $210$ images ($105$ with AMD) were used as a holdout test set. Figure~\ref{f.val_set} displays the number of images used per dataset to compose the final test set. \begin{figure}[!ht] \centering \begin{tabular}{cc} \includegraphics[scale=0.37]{figures_pdf/amd_valset.pdf} & \includegraphics[scale=0.37]{figures_pdf/nonamd_set.pdf} \\ (a) & (b) \end{tabular} \caption{ Number of images per dataset to compose the test set: (a) images positive to AMD and (b) non-AMD images.} \label{f.val_set} \end{figure} \subsection{Evaluation Measures} \label{ss.evaluation_metrics} We employed three evaluation measures: the Fréchet Inception Distance (FID), a well-known GAN evaluation score~\cite{heusel2017gans}; the experts' ability to identify the synthetic images; and the classification accuracy. FID is often used to assess the quality and variety of the generated images and, even though it was proposed to improve on the standard Inception Score, it still uses the Inception-v3 architecture to extract features from both synthetic and real images. \subsection{Experimental Setup} \label{ss.experimental_setup} The experiments were divided into four rounds: (i) a FID comparison among StyleGAN2-ADA and nine different GAN architectures to assess the quality of the images generated, (ii) evaluation of the human experts' ability to distinguish real and synthetic images, (iii) data perturbation with synthetic images generated by StyleGAN2-ADA and evaluated in three different deep networks, and (iv) a comparison of the best-augmented model obtained in the previous step with human experts when identifying images that have AMD versus those that do not. 
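The FID used above is the Fréchet distance between Gaussian fits to Inception-v3 feature statistics, $\|\mu_1-\mu_2\|^2 + \mathrm{Tr}(\Sigma_1+\Sigma_2-2(\Sigma_1\Sigma_2)^{1/2})$. The sketch below evaluates the diagonal-covariance special case only, so it needs no matrix square root or third-party libraries; the function name is ours, and the real score uses full covariances of Inception features:

```python
import math

def fid_diagonal(mu1, var1, mu2, var2):
    """Frechet distance between two Gaussians with diagonal covariances:
       ||mu1 - mu2||^2 + sum(var1 + var2 - 2*sqrt(var1*var2)).
    For diagonal matrices, (S1*S2)^(1/2) reduces to elementwise sqrt."""
    mean_term = sum((a - b) ** 2 for a, b in zip(mu1, mu2))
    cov_term = sum(v1 + v2 - 2.0 * math.sqrt(v1 * v2)
                   for v1, v2 in zip(var1, var2))
    return mean_term + cov_term

# identical statistics give distance 0; it grows as the statistics diverge
d = fid_diagonal([0.0, 0.0], [1.0, 1.0], [1.0, 0.0], [4.0, 1.0])  # 2.0
```

Lower values indicate that the synthetic feature statistics are closer to the real ones, which is why the architecture with the smallest FID is preferred.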
\begin{sloppypar} The first experiment employed FID to compare StyleGAN2-ADA and nine distinct GAN models. The following models were considered in the experiment: Deep Convolutional GAN (DCGAN)~\cite{radford2015unsupervised}, Least Squares Generative Adversarial Networks (LSGAN)~\cite{mao2017least}, Wasserstein GAN (WGAN)~\cite{arjovsky2017wasserstein}, Wasserstein GAN with Gradient Penalty (WGAN-GP)~\cite{gulrajani2017improved}, Deep Regret Analytic Generative Adversarial Networks (DRAGAN)~\cite{kodali2017convergence}, Energy-based Generative Adversarial Network (EBGAN)~\cite{zhao2016energy}, Boundary Equilibrium Generative Adversarial Networks (BEGAN)~\cite{berthelot2017began}, Conditional GAN (CGAN)~\cite{mirza2014conditional}, and Auxiliary Classifier GAN (ACGAN)~\cite{odena2017conditional}. All models were trained for $50$ epochs, considering samples of size $100\times100$ pixels and a batch size of $32$. The training step used the ADAM~\cite{kingma2017adam} optimizer with a learning rate of $0.0002$ and decay rates of $0.5$ and $0.999$ for the generator and the discriminator, respectively. The experiments were conducted using the training set with an Nvidia RTX 2060 GPU. After training each GAN model, new images were generated and FID was computed. \end{sloppypar} The second experiment determined whether clinical experts, who are very experienced with the analysis of eye fundus images, could distinguish between synthetic and real images. Such a step is essential to evaluate the effectiveness of StyleGAN2-ADA for generating synthetic eye fundus images. The experts were provided with randomly generated image sets, one for AMD-diagnosed images and the other for non-AMD images, each consisting of ten synthetic images and ten real images. They were asked to identify the synthetic images in the mix. 
In the next experiment, we considered three deep architectures pre-trained on the ImageNet dataset~\cite{deng2009imagenet} for performance comparison when the model is perturbed with synthetic images: SqueezeNet~\cite{iandola2016squeezenet}, AlexNet~\cite{krizhevsky2012imagenet}, and ResNet18~\cite{he2016deep}. During training, synthetic and real images were mixed within each batch according to a pre-determined hyperparameter $p\in[0,1]$. For each image, a uniformly distributed number $x\in[0,1)$ was sampled and compared with $p$: if the latter was greater than $x$, the image was replaced by a synthetic one. We considered images generated by StyleGAN2-ADA, since it obtained the best FID values in the first experiment (these outcomes are described in Section~\ref{s.experimental_results}). The deep networks were trained using a learning rate of $0.0001$, a decay rate of $0.9$, a batch size of $32$, $5$ epochs, and samples of size $224\times224$ pixels. To overcome the issue of the unbalanced dataset, a Weighted Random Sampler~\cite{Efraimidis2008} was used, which oversamples the minority class: it assigns a higher weight to the minority class, changing the sampling of training points from a uniform to a multinomial distribution. Further augmentation was performed using classical image transformations, such as resizing, color jittering (which changes the brightness, contrast, and saturation), and horizontal flipping. The test set was kept intact. The final round was aimed at comparing the best deep model obtained in the previous phase, ResNet-18, against human experts on the task of AMD identification. In this step, twenty real images (ten AMD images and ten non-AMD images) were randomly selected from the test set and provided to the human experts for classification. Following that, the same images were submitted to the ResNet-18 model for comparison.
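The per-image replacement rule described above (replace a real image by a synthetic one whenever $p$ exceeds a fresh uniform draw $x$) can be sketched as follows; `synth_sampler` is a hypothetical callable returning one synthetic image, and the seeded generator is only for reproducibility of the sketch:

```python
import random


def mix_batch(real_batch, synth_sampler, p, seed=0):
    """Replace each real image by a synthetic one with probability p."""
    rng = random.Random(seed)
    mixed = []
    for image in real_batch:
        x = rng.random()  # uniformly distributed in [0, 1)
        mixed.append(synth_sampler() if p > x else image)
    return mixed
```

With $p=0$ the batch is purely real, and with $p=1$ it is purely synthetic, matching the two extremes reported in the results.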
To allow a fair comparison between humans and deep models, we considered the following measures: standard accuracy (ACC), sensitivity, and specificity. In this work, we used the official StyleGAN2-ADA source code and the hyperparameters suggested by Karras et al.~\cite{Karras2020ada}. Concerning the StyleGAN2-ADA hyperparameters, we used a batch size of $12$, an ADA target of $0.8$ (the target value used to adapt the augmentation probability), and the Adam algorithm with a learning rate of $0.0025$, decay values of $0$ and $0.99$, and a convergence error of $10^{-8}$ for the generator and discriminator. The StyleGAN2-ADA framework enables different augmentations (rotation, geometric transformations, and color transformations) and class-conditional training. The output image resolution was set to $256\times256$ pixels. \section{Results} \label{s.experimental_results} The experimental results are presented in four sub-sections: (i) FID for synthetic image assessment, (ii) human experts detecting synthetic images, (iii) image perturbation assessment, and (iv) comparison between human experts and deep models for detecting AMD images. \subsection{Synthetic Image Assessment} \label{ss.image_augmentation} Table~\ref{t.uncgans} presents the FID values for each GAN-based architecture. StyleGAN2-ADA achieved the lowest FID value ($166.17$), while EBGAN ranked last with the highest value ($380.18$). The smaller the FID value, the better the quality of the generated images. Therefore, all further experiments only considered the StyleGAN2-ADA architecture for synthetic image generation.
\begin{table}[!ht] \renewcommand{\arraystretch}{0.90} \setlength{\tabcolsep}{9pt} \centering \caption{Mean FID values for image quality assessment (the best result is highlighted in bold).} \label{t.uncgans} \begin{tabular}{ccc} \toprule \textbf{Type} & \textbf{Architecture} & \textbf{FID} \\ \midrule \multirow{7}{*}{Unconditional} & EBGAN & $380.18$ \\ & DCGAN & $326.85$ \\ & DRAGAN & $317.82$ \\ & WGAN & $307.00$ \\ & LSGAN & $305.59$ \\ & WGAN-GP & $295.23$ \\ & BEGAN & $225.89$ \\ \midrule \multirow{3}{*}{Conditional} & CGAN & $342.59$ \\ & ACGAN & $315.36$ \\ & \textbf{StyleGAN2-ADA} & $\bm{166.17}$ \\ \bottomrule \end{tabular} \end{table} Figure~\ref{f.labels} shows some examples of real and synthetic images produced by StyleGAN2-ADA. The trained model yields realistic-looking images for both classes (with and without AMD), generated conditionally by sampling from latent representations. Visual inspection shows that the generated images are similar to the real ones. In the AMD images, macular degeneration is evident. \begin{figure}[!ht] \centering \begin{tabular}{cc} \includegraphics[scale=0.016]{figures_pdf/reals.pdf} & \includegraphics[scale=0.016]{figures_pdf/fakes000601.pdf} \\ (a) & (b) \end{tabular} \caption{Some examples of (a) real and (b) synthetic images (AMD and non-AMD) generated by StyleGAN2-ADA.} \label{f.labels} \end{figure} Figure~\ref{f.samples} provides examples of real and synthetic images from eyes positive and negative to AMD. One can observe the high quality of the generated images for both the AMD and non-AMD classes. \begin{figure}[!ht] \centering \begin{tabular}{cccc} \includegraphics[scale=0.35]{figures_pdf/real_a.pdf} & \includegraphics[scale=0.35]{figures_pdf/synthetic_a.pdf} & \includegraphics[scale=0.35]{figures_pdf/real_n.pdf} & \includegraphics[scale=0.35]{figures_pdf/synthetic_n.pdf} \\ (a) & (b) & (c) & (d) \\ \end{tabular} \caption{Examples of synthetic and real images for AMD and non-AMD.
(a) real, positive AMD, (b) synthetic, positive AMD, (c) real, non-AMD and (d) synthetic, non-AMD.} \label{f.samples} \end{figure} \subsection{Distinguishing between Synthetic and Real Images} \label{ss.synthetic_versus_real} Table~\ref{t.clinical} presents the outcomes of each clinical expert. For AMD images, the accuracy was $50\%$ (standard deviation of $21.91\%$) for clinician \#1 and $55\%$ (standard deviation of $21.80\%$) for clinician \#2. For non-AMD images, clinician \#1 achieved an accuracy of $60\%$ (standard deviation of $21.47\%$), and clinician \#2 obtained an accuracy of $50\%$ (standard deviation of $21.47\%$). These results highlight that neither clinician could reliably differentiate between real and synthetic images. \begin{table}[!ht] \renewcommand{\arraystretch}{0.90} \centering \caption{Synthetic versus real images by human experts.} \label{t.clinical} \begin{tabular}{cc|ccccc} \toprule & & \textbf{ACC} & \textbf{Sensitivity} & \textbf{Specificity} \\ \midrule \multirow{2}{*}{AMD} & \textbf{Clinician \#1} & $0.50$ & $0.50$ & $0.50$ \\ & \textbf{Clinician \#2} & $0.55$ & $0.40$ & $0.70$ \\ \midrule \multirow{2}{*}{\shortstack{Non-\\AMD}} & \textbf{Clinician \#1} & $0.60$ & $0.60$ & $0.60$ \\ & \textbf{Clinician \#2} & $0.50$ & $0.40$ & $0.60$ \\ \bottomrule \end{tabular} \end{table} \subsection{Image Perturbation Assessment} \label{ss.image_perturbation} Figure~\ref{f.performance} shows the accuracy over the test set for different percentages (the $p$ value) of real images that were replaced by synthetic images. Overall, the accuracy improved when combining synthetic and real images. While the accuracy lies between $50\%$ and $55\%$ when using only synthetic images, and between $78\%$ and $81\%$ when using only real images for training, the combination of both types of images gave the best results.
However, this was network dependent: while SqueezeNet accuracy peaked at $81\%$ when using $70\%$ of synthetic images, AlexNet obtained its highest accuracy ($82\%$) when using only $20\%$ of synthetic images. ResNet18 achieved its best accuracy of $83\%$ with $60\%$ of synthetic images. In general, the networks performed poorly when the percentage of synthetic images exceeded $70\%$. \begin{figure}[!ht] \centering \includegraphics[scale=0.27]{figures_pdf/performance.pdf} \caption{ Accuracy over the test set for different percentages of synthetic image for perturbation purposes.} \label{f.performance} \end{figure} Figure~\ref{f.compare} compares the improvement due to mixing synthetic with real images for the three evaluated architectures. The most significant improvement was achieved by AlexNet, whose accuracy increased by approximately $8\%$, while ResNet18 reached the highest accuracy (about $83\%$) using a mix of synthetic and real images. \begin{figure}[!ht] \centering \includegraphics[scale=0.27]{figures_pdf/best_scores.pdf} \caption{ Accuracy over the test set using only real ($p=0$) and also mixed data concerning ResNet18, AlexNet, and SqueezeNet architectures.} \label{f.compare} \end{figure} Tables~\ref{t.conf_reals} and~\ref{t.conf_mixed} present the confusion matrix and class-wise performance measures (precision, recall, and F1-score) when the ResNet18 architecture was trained with (i) real data only and (ii) mixed data ($p=0.6$), respectively. We can observe that ResNet18 trained with mixed data provided more reliable outcomes, with a smaller gap between precision and recall, indicating that the network distinguishes the classes better and is less prone to overfitting on a specific class.
\begin{table}[!ht] \renewcommand{\arraystretch}{1.0} \centering \caption{Confusion matrix and evaluation measures for the ResNet18 architecture trained with real data only.} \label{t.conf_reals} \scalebox{1.0}{ \begin{tabular}{c|cc|ccc} \toprule & \textbf{AMD} & \textbf{Non-AMD} & \textbf{Precision} & \textbf{Recall} & \textbf{F-Score} \\ \midrule \textbf{AMD} & $77$ & $28$ & $0.87$ & $0.73$ & $0.79$ \\ \textbf{Non-AMD} & $12$ & $93$ & $0.77$ & $0.89$ & $0.82$ \\ \bottomrule \end{tabular} } \end{table} \begin{table}[!ht] \renewcommand{\arraystretch}{1.0} \centering \caption{Confusion matrix and evaluation measures for the ResNet18 architecture trained with mixed data ($p=0.6$).} \label{t.conf_mixed} \scalebox{1.0}{ \begin{tabular}{c|cc|ccc} \toprule & \textbf{AMD} & \textbf{Non-AMD} & \textbf{Precision} & \textbf{Recall} & \textbf{F-Score} \\ \midrule \textbf{AMD} & $90$ & $15$ & $0.81$ & $0.86$ & $0.83$ \\ \textbf{Non-AMD} & $21$ & $84$ & $0.85$ & $0.80$ & $0.82$ \\ \bottomrule \end{tabular} } \end{table} One of the main issues regarding deep learning in medicine is the difficulty of interpreting the decision mechanisms. Techniques such as heatmaps from Gradient-weighted Class Activation Mapping (Grad-CAM)~\cite{selvaraju2017grad} can be used to identify and highlight regions of interest for visualization. Figure~\ref{f.gradam} depicts such heatmaps for images with AMD, which can help clinicians better interpret the decision-making process. The results also show that the ResNet18 architecture trained with mixed data highlights the location of the lesion better.
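The class-wise measures reported in the tables above follow directly from the confusion-matrix counts. A minimal sketch, rounding to two decimals as in the tables:

```python
def class_metrics(tp, fn, fp):
    # Precision, recall and F1-score for one class of a binary
    # confusion matrix (tp = correct, fn = missed, fp = false alarms).
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return round(precision, 2), round(recall, 2), round(f1, 2)
```

For instance, for the AMD row of the real-data model ($77$ detected, $28$ missed, $12$ false positives) this reproduces the reported $0.87$/$0.73$/$0.79$.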
\begin{figure}[!ht] \centerline{\begin{tabular}{cccc} \includegraphics[scale=0.35]{figures_pdf/img1.pdf} & \includegraphics[scale=0.35]{figures_pdf/img1_can.pdf} & \includegraphics[scale=0.35]{figures_pdf/img2.pdf} & \includegraphics[scale=0.35]{figures_pdf/img2_can.pdf} \\ (a) & (b) & (c) & (d)\\ \end{tabular}} \caption{Some images positive to AMD in (a) and (c) and their corresponding heatmaps generated by Grad-CAM in (b) and (d).} \label{f.gradam} \end{figure} \subsection{Comparison between Human Experts and Deep Models} \label{ss.comparison_amd_non_amd} Table~\ref{t.diagnose} presents the comparison of AMD detection by human experts and the best deep model, ResNet-18. Overall, the results of the clinicians and the deep model were similar: the deep model achieved the best overall performance, while clinician \#2 had the lowest specificity. This suggests that deep models can match or outperform clinicians in diagnosing AMD in eye fundus images. \begin{table}[!ht] \renewcommand{\arraystretch}{1.0} \centering \caption{Comparison between human experts and deep models to classify AMD and real non-AMD images.} \label{t.diagnose} \begin{tabular}{c|ccc} \toprule & \textbf{ACC} & \textbf{Sensitivity} & \textbf{Specificity} \\ \midrule \textbf{Clinician \#1} & $0.8$ & $0.8$ & $0.8$ \\ \textbf{Clinician \#2} & $0.75$ & $1.00$ & $0.50$ \\ \textbf{Resnet-18} & $0.85$ & $0.9$ & $0.8$ \\ \bottomrule \end{tabular} \end{table} \section{Discussion} \label{ss.discussion} Deep learning networks (DLNs) are suitable for automated and accurate analysis of medical images, and they are faster and often more accurate than human experts.
These have been used for computerized classification of eye fundus images for the detection of referable diabetic retinopathy~\cite{gulshan2016development,quellec2017deep,gargeya2017automated,de2018clinically}, retinopathy of prematurity~\cite{brown2018automated}, exudates on the retina~\cite{khojastehCBM:2019}, AMD~\cite{burlina2019assessment,burlina2017automated,burlina2018utility}, and AMD severity~\cite{burlina2018use,grassmann2018deep}. However, DLNs require large numbers of annotated images, and the datasets should be balanced. Unbalanced datasets used for training can lead to bias and erroneous outcomes. However, most suitably annotated medical image datasets are neither large nor balanced. There are a number of reasons for this, such as commercial interests, the privacy of medical data, and issues with obtaining clearance from hospital ethics boards~\cite{esteva2019guide}. While efforts are being made to develop bigger databases, an alternative that has been proposed is to generate suitable synthetic images to overcome these shortcomings. Earlier efforts to create synthetic images relied on transforming existing images; deep-learning approaches for this purpose have been found to be more effective~\cite{burlina2019assessment,burlina2017automated,burlina2018utility}. Burlina et al.~\cite{burlina2019assessment,burlina2018utility} have successfully developed a method for generating synthetic images. Their method is based on GANs, and they validated it by showing that experts were unable to identify the synthetic images. However, their method has been patented and is hence not available for use by others. We have developed an alternative method based on StyleGAN2, an extension of the progressive GAN, to generate synthetic medical images that human experts were unable to identify. We have also shown that these images are suitable for increasing the performance of deep-learning networks.
The results were similar to those reported in the literature using the patented technology. While Burlina et al. trained a Progressive GAN on a large number of images positive to AMD, our technique has the potential to generate similarly high-quality synthetic images from a small number of images positive to AMD. \begin{sloppypar} The ResNet18 architecture trained on real and synthetic images provided the best results, marginally outperforming the human experts' performance. Therefore, we can conclude that appropriately trained deep models can be as accurate as humans for specific medical image classification applications and datasets, and we can extend this argument to the problem of diagnosing AMD in eye fundus images. On the other hand, one limitation of this study is that it has handled the problem only as a binary classification task, i.e., AMD versus non-AMD images. Medical image classification problems, however, typically involve more than two classes, and we believe that additional classes will make the problem more challenging to cope with. \end{sloppypar} We have made available an online system with this trained network so that anyone can use and test it, simply by uploading images. The software automatically labels the images as positive or negative to AMD. We have also made the source code of the entire software publicly available so that researchers can use it as is, or improve it. We are focused on fostering partnerships to facilitate and conduct research towards the use of deep learning to generate and recognize medical images. \section{Conclusion and Future Works} \label{s.conclusion} Deep-learning-based analysis of medical images has been found to be very useful, for it can learn from examples without requiring prior knowledge of the features that matter. However, its applications are limited because labeled and/or annotated medical image datasets of eye fundus images are usually small and imbalanced.
In this manuscript, we have developed a method to generate synthetic images that is not patented and whose source code is publicly available. We have investigated the use of StyleGAN2-ADA to generate synthetic images using examples from publicly available databases to distinguish between AMD and healthy eyes. We found that experienced clinical experts were unable to differentiate between the synthetic and real images. We also demonstrated that synthetic images generated using StyleGAN2-ADA, when mixed with real images, improved the classification accuracy of deep learning networks, and the resulting models marginally outperformed clinical experts in separating AMD from non-AMD retinal images. Future work on the application of this method to AMD detection will need to broaden the scope to detecting the severity of AMD and to differentiating it from other diseases. For generating medical images, there is a need to consider a broader range of deep architectures and applications. We would also need to investigate the effectiveness of the heatmaps generated by Grad-CAM in helping clinicians. \section{Data availability} \label{s.data_availability} \begin{sloppypar} The iChallenge-AMD dataset can be found at~\url{https://ai.baidu.com/broad/introduction?dataset=amd}, the ODIR-2019 dataset is available at~\url{https://odir2019.grand-challenge.org/dataset/}, and \textbf{RIADD} is available at~\url{https://riadd.grand-challenge.org/Home/}. \end{sloppypar} \section{Code availability} \label{s.code_availability} \begin{sloppypar} The official code for StyleGAN2-ADA is available at~\url{https://github.com/NVlabs/stylegan2-ada-pytorch}.
Our implementations of Deep Convolutional GAN (DCGAN), Least Squares Generative Adversarial Networks (LSGAN), Wasserstein GAN (WGAN), Wasserstein GAN with Gradient Penalty (WGAN-GP), Deep Regret Analytic Generative Adversarial Networks (DRAGAN), Energy-based Generative Adversarial Network (EBGAN), Boundary Equilibrium Generative Adversarial Networks (BEGAN), Conditional GAN (CGAN), and Auxiliary Classifier GAN (ACGAN) are available for download at~\url{https://github.com/GuiCamargoX/gans\_pytorch}. All source code concerning image processing, StyleGAN2-ADA data generation, and the pre-trained networks is available at \url{https://github.com/GuiCamargoX/amd-synthetic}. \end{sloppypar} \section*{Acknowledgments} \begin{sloppypar} The authors thank Dr. Ione F. Alexim from BP Hospital - \textit{A Beneficência Portuguesa de São Paulo}, who provided her eye fundus grading expertise to the project. We are grateful to the S\~ao Paulo Research Foundation (FAPESP) for grants \#2013/07375-0, \#2014/12236-1, \#2018/15597-6, \#2019/00585-5, \#2019/07665-4, and \#2019/02205-5, to the Brazilian National Council for Research and Development (CNPq) for grants \#307066/2017-7, \#427968/2018-6, \#309439/2020-5, and \#606573/2021-0, as well as to the Engineering and Physical Sciences Research Council (EPSRC) for grant EP/T021063/1 and its principal investigator Ahsan Adeel. We also acknowledge the funding from the Promobilia Foundation, Sweden, and the SPARC grant (P-134, 2019) from the Department of Biotechnology (India) for their financial support. \end{sloppypar} \section*{Author Information} \subsection{Contributions} G.O. developed the algorithms and performed the analysis. J.P. and D.K. conceptualized and designed the project. G.R., L.P., and D.P. managed the data and designed the software. H.K. was responsible for the clinical assessment of the images and the script. All authors contributed to writing the manuscript. D.K., G.O., and J.P. finalized the manuscript.
\subsection{Corresponding authors} Correspondence to Guilherme C. Oliveira or Dinesh Kumar. \section*{Conflict of Interest declarations} The authors declare no conflict of interest for this manuscript. \printcredits \bibliographystyle{cas-model2-names}
\section{Introduction} Linear dynamical systems have surprising properties. By a linear dynamical system, we mean a pair $(X,T)$ where $X$ is a (complex) topological vector space and $T\in\mathfrak L(X) $ is a bounded linear operator on $X$. We are interested in transitive linear dynamical systems; namely, we ask that $T$ is hypercyclic: there exists $x\in X$, called a hypercyclic vector for $T$, such that its $T$-orbit $\{T^n x;\ n\geq 0\}$ is dense in $X$. We shall denote by $HC(T)$ the set of hypercyclic vectors for $T$. A comprehensive treatment of linear dynamical systems can be found in the recent books \cite{BM}, \cite{GP}. Linear dynamical systems have rigid properties which are not shared by general dynamical systems. For instance, Ansari has shown in \cite{A} that, for any hypercyclic operator $T$ and for any $q\geq 1$, $T^q$ is hypercyclic and $HC(T^q)=HC(T)$. Some years later, Le\'on and M\"uller were interested in the rotations of a hypercyclic vector. They proved in \cite{LM} that $\{T^n x;\ n\geq 1\}$ is dense in $X$ if and only if $\{\lambda T^n x;\ n\geq 1,\ \lambda\in\mathbb T\}$ is dense in $X$, where $\mathbb T$ is the unit circle. In particular, for any $\theta\in\mathbb R$, $HC(T)=HC(e^{i\theta}T)$. For further results along this line of research we refer to \cite{BG}, \cite{CMP}, \cite{LM}, \cite{LP}, \cite{M}, \cite{S}. \medskip To understand the influence of ``rotations'' in linear dynamics, it would be desirable to fully answer the following question.\medskip {\bf Question.} Let $T\in \mathfrak L(X) $ be a hypercyclic operator and let $x\in X$. For which sequences $(\lambda_n )\in\mathbb T^\mathbb N$ do we have $\overline{ \{ T^nx;\ n\geq 1 \} }=X$ if and only if $\overline{ \{ \lambda_n T^nx;\ n\geq 1 \} }=X$?\medskip Let us first show that an affirmative answer to the above question already imposes some restrictions on the sequence $(\lambda_n)$. Suppose $T\in \mathfrak L(X) $ is a hypercyclic operator and consider $x\in HC(T)$.
Take any non-zero functional $x^*\in X^*$ and define $\lambda_n= |x^*(T^nx)|/x^*(T^nx)$ if $x^*(T^nx)\neq 0$ and $\lambda_n=1$ if $x^*(T^nx)= 0$. Observe that $\lambda_n\in \mathbb{T}$ for every $n\in \mathbb{N}$. We claim that the set $\{ \lambda_nT^nx;\ n\geq 1 \}$ is not dense in $X$. To see this, we argue by contradiction and assume that $\{ \lambda_nT^nx;\ n\geq 1 \}$ is dense in $X$. Then the set $\{ x^*(\lambda_nT^nx);\ n\geq 1 \}$ is dense in $\mathbb{C}$. However, this is a contradiction since, by the construction of $(\lambda_n)$, $x^*(\lambda_nT^nx)\geq 0$ for every $n\in \mathbb{N}$. We may also obtain a counterexample if we moreover require that $(\lambda_n)$ takes values in a finite subset of $\mathbb T$. Indeed, let $T\in \mathfrak L(X) $ be a hypercyclic operator on a Banach space $X$ and take $x\in HC(T)$. Then define $\lambda_n=1$ if $\| T^nx-x\| \geq \| x\| /2$ and $\lambda_n=-1$ if $\| T^nx-x\| < \| x\| /2$. It readily follows that $\| \lambda_nT^nx-x\| \geq \| x\| /2$ for every $n\geq 1$. \medskip In contrast, the Le\'on-Saavedra and M\"uller result shows that the answer is affirmative when $(\lambda_n)=(e^{in\theta})$, $\theta\in\mathbb R$. Our main result says that the rotations of the orbit by unimodular complex numbers with linear phases can be replaced by rotations by unimodular complex numbers with polynomial phases. \begin{theorem} \label{mainthm} Let $X$ be a complex topological vector space, let $T\in \mathfrak L(X) $ and let $x\in X$. The following are equivalent. \begin{itemize} \item [(i)] $x$ is hypercyclic for $T$. \item [(ii)] $\overline{ \{ e^{iP(n)}T^n x;\ n\geq 1\}}=X$ for any polynomial $P\in\mathbb R[t]$. \item [(iii)] $\overline{ \{ e^{iP(n)}T^n x;\ n\geq 1\}}^o\neq \emptyset$ for some polynomial $P\in\mathbb R[t]$.
\end{itemize} \end{theorem} The equivalence of (ii) and (iii) in this theorem is in fact an extension of the Bourdon-Feldman theorem, which says that $\{ T^nx;\ n\geq 0\}$ is dense if and only if it is somewhere dense, see \cite{BF}. Actually, this result holds even for the projective orbit, i.e., if the set $\{ \lambda T^nx;\ \lambda \in \mathbb{C}, n\geq 0 \}$ is somewhere dense, then it is everywhere dense. In this case $T$ is called supercyclic and $x$ is called a supercyclic vector for $T$. The supercyclic version of the Bourdon-Feldman theorem will be used in the proof of Theorem \ref{mainthm}. The fact that (i) implies (ii) will depend on an extension of a result of Shkarin which is of independent interest. \medskip The problem of rotations of hypercyclic vectors can also be studied for stronger forms of hypercyclicity. We recall that the lower density of a set of natural numbers $A$ is defined by $$ \underline{\textrm{dens}}(A):=\liminf_{N\to\infty}\frac{\textrm{card}(A\cap [1,N])}{N}\,\cdot$$ An operator $T\in\mathfrak L(X) $ is called frequently hypercyclic provided there exists $x\in X$ such that, for any $U\subset X$ open and nonempty, the set $\{n\in\mathbb N;\ T^n x\in U\}$ has positive lower density. The vector $x$ is then called frequently hypercyclic for $T$ and the set of $T$-frequently hypercyclic vectors will be denoted by $FHC(T)$. It has been shown in \cite{BM} that frequent hypercyclicity is invariant under rotation: for any $\lambda\in\mathbb T$, $FHC(T)=FHC(\lambda T)$. Here is the polynomial version of this property, to be proved in Section 3 of the paper. \begin{theorem}\label{mainthmfhc} Let $X$ be a complex topological vector space, let $T\in\mathfrak L(X) $ and let $x\in X$. The following are equivalent. \begin{itemize} \item [(i)] $x\in FHC(T)$. \item[(ii)] For any $P\in\mathbb R[t]$, for any $U\subset X$ open and nonempty, $\{n\in\mathbb N;\ e^{iP(n)}T^n x\in U\}$ has positive lower density.
\item[(iii)] There exists $P\in\mathbb R[t]$ such that, for any $U\subset X$ open and nonempty, $\{n\in\mathbb N;\ e^{iP(n)}T^n x\in U\}$ has positive lower density. \end{itemize} \end{theorem} \medskip In Section 4, we investigate other choices of phases. We show that $\{e^{if(n)}T^n x;\ n\geq 1\}$ does not need to be dense for all $x\in HC(T)$ provided $f$ goes sufficiently fast to infinity, for instance if $f$ has exponential growth. On the contrary, we extend Theorem \ref{mainthm} to sequences which do not grow too quickly to infinity, for instance for sequences like $f(n)=n^a\log^b (n+1)$. The polynomial case (Theorem \ref{mainthm}) plays a crucial role in this extension. Finally, in Section 5, we study the link between the problem of rotations of hypercyclic vectors and the theory of uniformly distributed sequences. In particular, we point out that the uniform distribution of $(f(n))$ is not sufficient to ensure that $\{e^{i f(n)}T^n x;\ n\geq 1\}$ is dense for any $x\in HC(T)$. Nevertheless, uniform distribution will be a useful tool to obtain generic statements. \section{Proof of the main result and an extension of a theorem of Shkarin} \subsection{The strategy} As mentioned in the introduction, the equivalence between (i) and (ii) in Theorem \ref{mainthm} is already known when $P$ is a polynomial of degree $1$. This was first done when $P(n)=2\pi n\theta$ with $\theta=\frac pq\in\mathbb Q$, $q\geq 1$. Indeed, in that case, $e^{2\pi iqn\theta}T^{qn}x=T^{qn}x$ and Ansari has shown in \cite{A} that $HC(T)=HC(T^q)$ for any $q\geq 1$. If $\theta$ does not belong to $\mathbb Q$, this result goes back to the paper \cite{LM} by Le\'on-Saavedra and M\"uller: they showed that for any $\theta\in\mathbb R$, $HC(e^{i\theta}T)=HC(T)$, namely that given $x\in HC(T)$ and any $y\in X$, there exists a sequence $(n_k)$ such that $e^{2\pi i n_k\theta}T^{n_k}x\to y$. 
This was later improved by Shkarin in \cite{S}, who proved the following result (see also \cite{M} for a similar abstract result). \begin{theorem} (Shkarin) \label{Shkarin} Let $T$ be a hypercyclic continuous linear operator on a topological vector space $X$ and let $g$ be a generator of a compact topological group $G$. Then $\{ (T^n x, g^n);\ n\geq 1 \}$ is dense in $X\times G$ for any $x\in HC(T)$. \end{theorem} In particular, one may apply this result with $G=\mathbb T$ and $g=e^{2\pi i \theta}$, which generates $\mathbb T$ provided $\theta\notin\mathbb Q$. This implies the Le\'on-Saavedra and M\"uller theorem since, for any $y\in X$, we may pick a sequence $(n_k)$ such that $T^{n_k}x\to y$ and $e^{2\pi in_k\theta}\to 1$. This observation is the starting point of our investigations. We shall prove by induction a polynomial variant of Shkarin's result. Roughly speaking, it will say that, for any polynomial $P\in\mathbb Z[t]$ and for any $(x,y)\in HC(T)\times X$, there exists a sequence of positive integers $(n_k)$ such that $T^{n_k}x\to y$ and $g^{P(n_k)}\to 1_G$. At first glance this seems weaker than Theorem \ref{Shkarin}, since we do not get the density of $\{ (T^n x,g^{P(n)});\ n\geq 1 \}$. And of course we cannot do better, because we do not require that $g$ is a generator of $G$. This allows us to handle the case of a rational phase in the same process. Moreover, this variant is exactly what is needed to deduce Theorem \ref{mainthm}. If we further assume that $\{ g^{P(n)};\ n\geq 1\}$ is dense in $G$, in which case $g$ is necessarily a generator of $G$, then we will be able to show the density of $\{ (T^n x,g^{P(n)});\ n\geq 1 \}$. Thus, we will obtain the full extension of Shkarin's result in our setting. \medskip At this stage we need to introduce some notation. Let $G$ be an abelian compact topological group and let $p\geq 1$.
For $\mathbf g=(g_1,\dots,g_p)\in G^p$, $\mathbf d=(d_1,\dots,d_p)\in\mathbb N^p$, and $u,v\in X$, we set $$\fgd{u}{v}=\left\{h\in G;\ (v,h,1_G,\dots,1_G)\in N_{u}^{\mathbf g,\mathbf d}\right\}$$ where $$N_{u}^{\mathbf g,\mathbf d}=\overline{\left\{(T^n u,g_1^{n^{d_1}},\dots,g_1^{n},g_2^{n^{d_2}},\dots,g_2^{n},\dots,g_p^{n^{d_p}},\dots,g_p^n);\ n\geq 1\right\}}.$$ Our variants of Shkarin's theorem read as follows. \begin{theorem}\label{THMPREP} Let $T$ be a hypercyclic continuous linear operator on a topological vector space $X$, let $p\geq 1$ and let $g_1,\dots,g_p$ be elements of a compact topological group $G$. Let also $\mathbf d=(d_1,\dots,d_p)\in\mathbb N^p$. Then for any $u\in HC(T)$ and any $v\in X$, $1_G\in \fgd{u}{v}$. \end{theorem} \begin{theorem} \label{ExtensionShkarin} Let $T$ be a hypercyclic continuous linear operator on a topological vector space $X$. Let $P\in\mathbb Z[t]$ and let $g\in G$ be such that $\{ g^{P(n)};n\geq 1 \}$ is dense in $G$. Then $\{ (T^n x, g^{P(n)});\ n\geq 1 \}$ is dense in $X\times G$ for any $x\in HC(T)$. \end{theorem} The proof of (iii) implies (i) in Theorem \ref{mainthm} will need the following variant of the Bourdon-Feldman theorem. \begin{theorem} \label{THMBF} Let $T\in\mathfrak L(X) $ and let $x\in X$. If $\overline{ \{ \lambda T^nx;\ \lambda\in \mathbb{T}, n\geq 0 \} }^o\neq \emptyset$ then $\overline{ \{ \lambda T^nx;\ \lambda\in \mathbb{T}, n\geq 0 \} }=X$. \end{theorem} Before proving Theorems \ref{THMPREP}, \ref{ExtensionShkarin} and \ref{THMBF}, let us show how to deduce Theorem \ref{mainthm} from them. For simplicity, throughout this section, we will assume that $X$ and $G$ are metrisable. The same proofs work when $X$ or $G$ is not metrisable, replacing everywhere sequences by nets. To show that (i) implies (ii), let $x\in HC(T)$, let $G=\mathbb T$ and let $P(n)=\theta_pn^p+\dots+\theta_1 n$ (we may always assume that it has no constant term).
Setting $g_k=e^{i\theta_k}$ and $d_k=k$ for $k=1,\dots,p$, we may apply Theorem \ref{THMPREP}. In particular, given any $y\in X$, one may find a sequence $(n_k)_k$ such that $$T^{n_k}x\to y,\ e^{in_k^p\theta_p}\to 1,\dots,e^{in_k\theta_1}\to 1.$$ This clearly implies that $e^{iP(n_k)}T^{n_k}x\to y$. It remains to prove that (iii) implies (i). Assumption (iii) yields $\overline{ \{ \lambda T^nx;\ \lambda\in \mathbb{T}, n\geq 0 \} }^o\neq \emptyset$, so Theorem \ref{THMBF} gives $\overline{ \{ \lambda T^nx;\ \lambda\in \mathbb{T}, n\geq 0 \} }=X$ and we conclude by the Le\'on-Saavedra and M\"uller theorem. \subsection{Preparatory lemmas} Let us turn to the proof of Theorem \ref{THMPREP}. We fix an operator $T$ acting on a topological vector space $X$ and a compact group $G$. We will need the following elementary lemma. \begin{lemma}\label{LEMDOUBLESUITE} Let $g,h\in G$, $d\geq 1$, $m\geq 1$ and let $(n_k)$ be a sequence of integers such that $$g^{n_k}\to 1_G,\ g^{n_k^2}\to 1_G,\dots, g^{n_k^{d-1}}\to 1_G,\ g^{n_k^d}\to h.$$ Then $$g^{(n_k+m)^d}\to hg^{m^d}.$$ \end{lemma} \begin{proof} Write \begin{eqnarray*} g^{(n_k+m)^d}&=&\prod_{j=0}^d g^{n_k^j\binom djm^{d-j}}\\ &=&g^{n_k^d}\left(\prod_{j=1}^{d-1}\left(g^{n_k^j}\right)^{\binom dj m^{d-j}}\right)g^{m^d}\\ &\to&hg^{m^d}. \end{eqnarray*} \end{proof} The sets $\fgd{u}{v}$ share some properties which are summarized below. \begin{lemma} \label{subsem1} Let $u,v,w\in X$, $p\geq 1$, $\mathbf g\in G^p$, $\mathbf d\in \mathbb N^p$. The following hold. \begin{itemize} \item[(i)] $\fgd{u}{v}$ is closed; $\fgd{u}{v}\subset \fgd{Tu}{Tv}$. \item[(ii)] $\fgd{u}{v}\fgd{v}{w}\subset\fgd{u}{w}$. \item[(iii)] Let $(v_k)\subset X$ and $(h_k)\subset G$ be such that $v_k\to v$, $h_k\to h$ and $h_k\in\fgd{u}{v_k}$. Then $h\in\fgd{u}{v}$. \end{itemize} \end{lemma} \begin{proof} (i) is trivial. (ii) follows from Lemma \ref{LEMDOUBLESUITE}.
Indeed, for $\mathcal O$ an open neighbourhood of $1_G$ in $G$ and $h\in G$, let us denote $$\mathcal O_h=(h.\mathcal O)\times\mathcal O\times\dots\times\mathcal O\subset G^{d_1+\dots+d_p}.$$ Let $h_1\in \fgd{u}{v}$, $h_2\in\fgd{v}{w}$ and let $W\times \mathcal O_{h_1h_2}$ be an open neighbourhood of the point $(w,h_1h_2,1_G,\dots,1_G)$ in $X\times G^{d_1+\dots+d_p}$. Since $h_2\in \fgd{v}{w}$, there exists $m\in\mathbb N$ such that $$T^m v\in W\textrm{ and }(g_1^{m^{d_1}},\dots,g_p^m)\in \mathcal O_{h_2}.$$ Let $V$ be an open neighbourhood of $v$ such that $T^m V\subset W$. Since $h_1\in \fgd{u}{v}$, an application of Lemma \ref{LEMDOUBLESUITE} gives the existence of an integer $n$ satisfying \begin{itemize} \item $T^n u\in V\implies T^{n+m}u\in W$; \item $g_1^{(n+m)^{d_1}}\in h_1h_2\mathcal O$; \item $g_l^{(n+m)^k}\in\mathcal O$ provided $l=1$ and $k\leq d_1-1$ or $l\geq 2$ and $k\leq d_l$. \end{itemize} This shows that $h_1h_2\in \fgd{u}{w}$. The proof of (iii) goes along the same lines. Let $V\times\mathcal O_h$ be an open neighbourhood of $(v,h,1_G,\dots,1_G)$ in $X\times G^{d_1+\dots+d_p}$. There exists $k\geq 1$ such that $(v_k,h_k,1_G,\dots,1_G)\in V\times \mathcal O_h$ and since $h_k\in\fgd{u}{v_k}$, there exists $n\geq 1$ such that $(T^n u,g_1^{n^{d_1}},\dots,g_p^n)\in V\times\mathcal O_h$. Thus, $h\in\fgd{u}{v}$. \end{proof} For a proof of the following lemma see, for instance, \cite{HR}. \begin{lemma} \label{hr} A closed subsemigroup of a compact topological group is a subgroup. \end{lemma} \subsection{Proof of Theorem \ref{THMPREP}} We are now ready for the proof of Theorem \ref{THMPREP}. We proceed by induction on $d_1+\dots+d_p$. We first assume that $d_1+\dots+d_p=1$ and let $g\in G$, $u\in HC(T)$, $v\in X$. Define $G_0=\overline{\{g^n;\ n\geq 0\}}$; then $G_0$ is an abelian compact topological group and $g$ is a generator of $G_0$. By applying Shkarin's result, $\{(T^n u,g^n);\ n\geq 1\}$ is dense in $X\times G_0$. In particular, $1_G\in \fgd{u}{v}$.
Suppose now that $d_1+\dots+d_p\geq 2$ and let $u\in HC(T)$, $v\in X$. We set $\mathbf d'=(d_1-1,\dots,d_p)$. We consider any $x,y\in HC(T)$. By the induction hypothesis, $1_G\in F_{x,y}^{\mathbf g,\mathbf d'}$. This yields, by compactness of $G$, that $\fgd{x}{y}$ is nonempty. In particular, Lemma \ref{subsem1} tells us that $\fgd{x}{x}$ is a closed subsemigroup of $G$, hence a closed subgroup of $G$. Moreover, if we use again Lemma \ref{subsem1}, we observe that $$\left\{ \begin{array}{rcl} \fgd{x}{x}\fgd{x}{y}&\subset&\fgd{x}{y}\\[2mm] \fgd{x}{y}\fgd{y}{x}&\subset&\fgd{x}{x}. \end{array}\right.$$ Since $\fgd{x}{y}$ and $\fgd{y}{x}$ are both nonempty, $\fgd{x}{y}$ contains a coset of $\fgd{x}{x}$ and is contained in a coset of $\fgd{x}{x}$. Thus it is a coset of $\fgd{x}{x}$ (at this point, it is important to notice that we need $G$ to be abelian). We apply this for $x=u$ and $y=Tu$: there exists $g\in G$ such that $\fgd{u}{Tu}=g\fgd{u}{u}$. Now, using again (i) and (ii) of Lemma \ref{subsem1}, we get $$ \left\{ \begin{array}{l} \fgd{Tu}{T^2u}\supset \fgd{u}{Tu}\supset g\fgd{u}{u}\\[2mm] \fgd{u}{T^2 u}\supset \fgd{u}{Tu}\fgd{Tu}{T^2 u}\supset g^2\fgd{u}{u}. \end{array}\right.$$ Since $\fgd{u}{T^2 u}$ is a coset of $\fgd{u}{u}$, this in turn yields $\fgd{u}{T^2u}=g^2\fgd{u}{u}$. Repeating the process, we obtain that, for any $n\geq 1$, $\fgd u{T^n u}=g^n \fgd{u}{u}.$ Now, we use again the induction hypothesis, but for $d_1+\dots+d_p=1$. This gives a sequence $(n_k)$ such that $g^{n_k}\to 1_G$ and $T^{n_k}u\to v$. By the last part of Lemma \ref{subsem1}, $1_G\in \fgd{u}{v}$, which completes the proof of Theorem \ref{THMPREP}. \subsection{Proof of Theorem \ref{ExtensionShkarin}} Fix $x\in HC(T)$ and let $y\in X$, $m\in \mathbb{N}$. Let also $d$ be the degree of the polynomial $P$, $d\geq 1$.
Then $T^mx\in HC(T)$ and by Theorem \ref{THMPREP} there exists a sequence of positive integers $(n_k)$ such that $$T^{n_k}(T^mx)\to y,\ g^{n_k}\to 1_G,\ g^{n_k^2}\to 1_G, \ldots,\ g^{n_k^d} \to 1_G.$$ From this and Lemma \ref{LEMDOUBLESUITE} we deduce that $$T^{n_k+m}x\to y, \ g^{(n_k+m)^j}\to g^{m^j} \,\,\, \textrm{for every}\,\,\, j=0,1,\ldots ,d$$ and this in turn implies $T^{n_k+m}x\to y$, $g^{P(n_k+m)}\to g^{P(m)}$. Thus, $$(y, g^{P(m)})\in \overline{ \{ (T^n x, g^{P(n)});\ n\geq 1 \} } \ \textrm{for every pair} \ (y,m)\in X \times \mathbb{N}.$$ Since $\{ g^{P(m)};m\geq 1 \}$ is dense in $G$, the conclusion follows. \subsection{An extension of the Bourdon-Feldman result} The next series of lemmas will be used in the proof of Theorem \ref{THMBF}. This kind of approach has appeared in \cite{CH1} and borrows ideas from \cite{P}. \begin{lemma} \label{l1BF} Let $x,y$ be vectors in $X$. If $$\overline{\{ \lambda T^nx;\ \lambda \in \mathbb{T}, n\geq 0\} }^o\cap \overline{ \{ \lambda T^ny;\ \lambda \in \mathbb{T}, n\geq 0\} }^o\neq \emptyset $$ then $$\overline{\{ \lambda T^nx;\ \lambda \in \mathbb{T}, n\geq 0\} }^o=\overline{\{ \lambda T^ny;\ \lambda \in \mathbb{T}, n\geq 0\} }^o.$$ \end{lemma} \begin{proof} There exist $\alpha \in \mathbb{T}$ and a positive integer $k$ such that $$\alpha T^kx\in \overline{\{ \lambda T^nx;\ \lambda \in \mathbb{T}, n\geq 0\} }^o\cap \overline{ \{ \lambda T^ny;\ \lambda \in \mathbb{T}, n\geq 0\} }^o.$$ From this we deduce that $\overline{ \{ \lambda T^nx;\ \lambda \in \mathbb{T}, n\geq k \} }\subset \overline{ \{ \lambda T^ny;\ \lambda \in \mathbb{T}, n\geq 0\} }$ and since $ \overline{ \{ \lambda T^nx;\ \lambda \in \mathbb{T}, n\leq k \} }^o=\emptyset$ the inclusion $$\overline{ \{ \lambda T^nx;\ \lambda\in \mathbb{T}, n\geq 0 \} }^o \subset \overline{ \{ \lambda T^ny;\ \lambda \in \mathbb{T}, n\geq 0\} }^o$$ follows. Interchanging the roles of $x$ and $y$ in the previous argument we conclude the reverse inclusion and we are done.
\end{proof} \begin{lemma} \label{l2BF} Let $x\in X$. For every non-zero complex number $\mu$, $\overline{\{ \lambda T^nx;\ \lambda\in \mathbb{T}, n\geq 0\} }^o=\overline{ \{ \lambda T^n(\mu x);\ \lambda\in \mathbb{T}, n\geq 0\} }^o$. \end{lemma} \begin{proof} We first assume that $\overline{ \{ \lambda T^n(x);\lambda \in \mathbb{T}, n\geq 0 \} }^o\neq\emptyset$ and let $U$ be a nonempty open subset of $X$ such that $U\subset \overline{ \{ \lambda T^nx;\ \lambda\in \mathbb{T}, n\geq 0\} }^o$. There exist a complex number $\rho $ and a positive integer $m$ such that \begin{equation}\label{EQRHOUNION} \rho U\cap U\neq\emptyset \end{equation} and \begin{equation}\label{EQRHOPOWER} {\rho}^m=\mu . \end{equation} The inclusion $\rho^l U\subset \overline{ \{ \lambda T^n({\rho}^lx);\lambda \in \mathbb{T}, n\geq 0 \} }^o$ trivially holds for every non-negative integer $l$. Because of (\ref{EQRHOUNION}) we get \[ \rho^l U\cap\rho^{l+1}U \neq \emptyset\,\, \textrm{for every}\,\,l=0,1,2,\ldots . \] Hence, for every $l=0,1,\ldots ,m-1$ \[ \overline{ \{ \lambda T^n({\rho}^lx);\lambda \in \mathbb{T}, n\geq 0 \} }^o\cap \overline{ \{ \lambda T^n({\rho}^{l+1}x);\lambda \in \mathbb{T}, n\geq 0 \}}^o \neq\emptyset \] and by (\ref{EQRHOPOWER}) and Lemma \ref{l1BF} we conclude that $$ \overline{\{ \lambda T^nx;\ \lambda\in \mathbb{T}, n\geq 0\} }^o=\overline{ \{ \lambda T^n(\mu x);\ \lambda\in \mathbb{T}, n\geq 0\} }^o. $$ If we now assume that $\overline{ \{ \lambda T^n(x);\lambda \in \mathbb{T}, n\geq 0 \} }^o=\emptyset$, then we shall have $$\overline{ \{ \lambda T^n(\mu x);\lambda \in \mathbb{T}, n\geq 0 \} }^o=\emptyset$$ for any $\mu\neq 0$, otherwise $$\overline{ \{ \lambda T^n(x);\lambda \in \mathbb{T}, n\geq 0 \} }^o=\overline{ \left\{ \lambda T^n\left(\frac1\mu \mu x\right);\lambda \in \mathbb{T}, n\geq 0 \right\} }^o$$ would be nonempty.
\end{proof} \begin{proof}[Proof of Theorem \ref{THMBF}] Since $\overline{ \{ \lambda T^nx;\ \lambda\in \mathbb{T},n\geq 0 \} }^o\neq \emptyset$, Bourdon-Feldman's theorem implies that $T$ is supercyclic, in fact $x$ is a supercyclic vector for $T$. By the density of supercyclic vectors there exists a supercyclic vector $z$ for $T$ such that $z\in \overline{ \{ \lambda T^nx; \lambda\in \mathbb{T},n\geq 0 \} }^o$. Applying Lemma \ref{l2BF} we get $\mu z\in \overline{ \{ \lambda T^nx; \lambda\in \mathbb{T},n\geq 0 \}}^o$ for every non-zero complex number $\mu$. Since $z$ is supercyclic for $T$ and the set $\overline{ \{ \lambda T^nx;\ \lambda\in \mathbb{T},n\geq 0 \}}$ is $T$-invariant, the result follows. \end{proof} \subsection{Consequences and remarks} Theorem \ref{Shkarin} is a particular case of a more general result which can be found in \cite{S}. Let us recall that a continuous map $T:X\to X$, where $X$ is a topological space, is universal provided there exists $x\in X$, called universal vector for $T$, such that $\{T^n x;\ n\geq 1\}$ is dense in $X$. We denote by $\mathcal U(T)$ the set of universal vectors for $T$. Theorem \ref{Shkarin} can be extended to nonlinear dynamical systems whose set of universal vectors satisfies connectedness assumptions. Precisely, Shkarin has proved the following result: \begin{quote} Let $X$ be a topological space, let $T:X\to X$ be a continuous map and let $g$ be a generator of a compact topological group $G$. Assume also that there is a nonempty subset $Y$ of $\mathcal U(T)$ such that $T(Y)\subset Y$ and $Y$ is path connected, locally path connected and simply connected. Then the set $\{(T^n x,g^n);\ n\geq 1\}$ is dense in $X\times G$ for any $x\in Y$. \end{quote} Starting from this result and with exactly the same proof, we can get the following statement. 
\begin{quote} Let $X$ be a topological space, let $T:X\to X$ be a continuous map, let $g_1,\dots,g_p$ be elements of a compact topological group $G$ and let $\mathbf d=(d_1,\dots,d_p)\in\mathbb N^p$. Assume also that there is a nonempty subset $Y$ of $\mathcal U(T)$ such that $T(Y)\subset Y$ and $Y$ is path connected, locally path connected and simply connected. Then $X\times\{(1_G,\dots,1_G)\}\subset N_u^{\mathbf g,\mathbf d}$ for any $u\in Y$. \end{quote} We shall need later a variant of Theorem \ref{ExtensionShkarin}, where we allow the use of several polynomials. \begin{corollary}\label{COREXTENSION} Let $P_1,\dots,P_r$ be real polynomials and let $E$ be the closure of the set $\{(e^{iP_1(n)},\dots,e^{iP_r(n)});\ n\geq1\}$. Let $T\in\mathfrak L(X) $ be hypercyclic and let $x\in HC(T)$. Then $\{(T^n x,e^{iP_1(n)},\dots,e^{iP_r(n)});\ n\geq 1\}$ is dense in $X\times E$. \end{corollary} \begin{proof} Let $d=\max(\deg(P_1),\dots,\deg(P_r))$ and let us write $P_p(x)=\sum_{j=0}^d \theta_{j,p}x^j$ for $p=1,\dots,r$. Let $y\in X$, $m\geq 1$ and let $(n_k)$ be a sequence of integers such that $$T^{n_k}(T^m x)\to y,\ e^{in_{k}^l \theta_{j,p}}\to 1\textrm{ for }1\leq j,l\leq d,\ 1\leq p\leq r.$$ Then, as in the proof of Theorem \ref{ExtensionShkarin}, $T^{n_k+m}x\to y$ and $e^{iP_p(n_k+m)}\to e^{iP_p(m)}$ for any $p$ in $\{1,\dots,r\}$. \end{proof} This corollary is particularly interesting when the closure of $ \{(e^{iP_1(n)},\dots,e^{iP_r(n)});\ n\geq 1\}$ is equal to $\mathbb T^r$. This is a well-known phenomenon in the theory of uniformly distributed sequences. \begin{definition} We say that the real polynomials $P_1,\dots,P_r$ are $\pi\mathbb Q$-independent provided for any $h\in\mathbb Z^r$, $h\neq 0$, the polynomial $h_1P_1+\dots+h_rP_r$ does not belong to $\pi\mathbb Q[t]$.
\end{definition} A fundamental theorem in the theory of uniformly distributed sequences, see \cite{KN}, says that if $P_1,\dots,P_r$ is a $\pi\mathbb Q$-independent family of real polynomials, then the sequence $(P_1(n),\dots,P_r(n))$ is uniformly distributed modulo 1. Hence, $\{(e^{iP_1(n)},\dots,e^{iP_r(n)});\ n\geq 1\}$ is dense in $\mathbb T^r$. \begin{corollary} Let $P_1,\dots,P_r$ be real polynomials which are $\pi\mathbb Q$-independent. Let $T\in\mathfrak L(X) $ be hypercyclic and let $x\in HC(T)$. Then $\{(T^n x,e^{iP_1(n)},\dots,e^{iP_r(n)});\ n\geq 1\}$ is dense in $X\times\mathbb T^r$. \end{corollary} \section{Rotations of frequently hypercyclic vectors} This section is devoted to the proof of Theorem \ref{mainthmfhc}. We first need an elementary lemma on sets with positive lower density. Its proof can be found e.g. in \cite[Lemma 6.29]{BM}. \begin{lemma}\label{LEMROTFHC} Let $A\subset\mathbb N$ have positive lower density. Let also $I_1,\dots, I_q\subset\mathbb N$ with $\bigcup_{j=1}^q I_j=\mathbb N$, and let $n_1,\dots ,n_q\in\mathbb N$. Then $B:=\bigcup_{j=1}^q \left( n_j+A\cap I_j\right)$ has positive lower density. \end{lemma} As recalled before, if $P_1,\dots,P_r$ is a $\pi\mathbb Q$-independent family of real polynomials, then $\{(e^{iP_1(n)},\dots,e^{iP_r(n)});\ n\geq 1 \}$ is dense in $\mathbb T^r$. We need a variant of this result when there exist relations between the polynomials. \begin{proposition} Let $P_1,\dots,P_r$ be real polynomials without constant term. 
Assume that there exist $p\in\{1,\dots,r\}$, integers $m$ and $(a_{j,k})_{\substack{p+1\leq j\leq r\\ 1\leq k\leq p}}$, polynomials $(R_j)_{p+1\leq j\leq r}$ in $\pi\mathbb Z[t]$ so that \begin{itemize} \item[(i)] $P_1,\dots,P_p$ are $\pi\mathbb Q$-independent; \item[(ii)] For any $j$ in $\{p+1,\dots,r\}$, $$mP_j=a_{j,1}P_1+\dots+a_{j,p}P_p+R_j.$$ \end{itemize} Then the closure of $\{(e^{iP_1(n)},\dots,e^{iP_r(n)});\ n\geq 0\}$ is equal to $$\left\{(e^{i\theta_1},\dots,e^{i\theta_r});\ \forall j\geq p+1,\ m\theta_j=a_{j,1}\theta_1+\dots+a_{j,p}\theta_p\right\}.$$ \end{proposition} \begin{proof} Let $(\theta_1,\dots,\theta_r)\in\mathbb R^r$ be such that $ m\theta_j=a_{j,1}\theta_1+\dots+a_{j,p}\theta_p$ for $j\geq p+1$. We define $Q_j$ and $T_j$ by \begin{eqnarray*} Q_j(x)&=&\left\{ \begin{array}{ll} \frac1mP_j(2mx)&\textrm{ if }j\leq p\\ P_j(2mx)&\textrm{ if }j\geq p+1\\ \end{array}\right. \\ T_j(x)&=&\frac1mR_j(2mx),\ j\geq p+1. \end{eqnarray*} We may observe that $T_j(n) \in2\pi\mathbb Z$ for any integer $n$ since $R_j$ has no constant term and belongs to $\pi\mathbb Z[t]$. It is also easy to check that the family $(Q_1,\dots,Q_p)$ remains $\pi\mathbb Q$-independent. Hence we can find a sequence of integers $(n_k)$ such that $e^{iQ_j(n_k)}$ goes to $e^{i\theta_j/m}$ for any $j$ in $\{1,\dots,p\}$. Now, for $j\geq p+1$, $$Q_j=a_{j,1}Q_1+\dots+a_{j,p}Q_p+T_j$$ so that $e^{iQ_j(n_k)}$ goes to $e^{i(a_{j,1}\theta_1+\dots+a_{j,p}\theta_p)/m}=e^{i\theta_j}.$ Finally, we have shown that for any $j\leq r$, $e^{iP_j(2mn_k)}$ converges to $e^{i\theta_j}$, which implies the proposition. \end{proof} We shall use this proposition under the form of the following corollary. \begin{corollary}\label{CORINVROT} Let $P_1,\dots,P_r$ be real polynomials without constant term. Then the closure of $\{(e^{iP_1(n)},\dots,e^{iP_r(n)});\ n\geq 1\}$ is invariant under complex conjugation. 
\end{corollary} \begin{proof} We extract from $(P_1,\dots,P_r)$ a maximal family $(P_j)_{j\in J}$ which is $\pi\mathbb Q$-independent. Without loss of generality, we may assume that $J=\{1,\dots,p\}$. This means that the assumptions of the previous proposition are satisfied. Hence, the result of the proposition describes the closure of $\{(e^{iP_1(n)},\dots,e^{iP_r(n)});\ n\geq 1\}$, and this closure is clearly invariant under complex conjugation. \end{proof} We turn to the proof of Theorem \ref{mainthmfhc}. We shall in fact prove a variant of it which looks significantly stronger, since we control simultaneously several rotated orbits. \begin{theorem}\label{THMFHCSTRONG} Let $P_1,\dots,P_r$ be real polynomials without constant terms. Let also $T\in\mathfrak L(X) $ be frequently hypercyclic and let $x\in FHC(T)$. Then, for any nonempty open set $U\subset X$, $$\big\{n\in\mathbb N;\ \forall l\in\{1,\dots,r\},\ e^{iP_l(n)}T^n x\in U \big\}$$ has positive lower density. \end{theorem} \begin{proof} We denote by $d$ the maximum of the degrees of $P_1,\dots,P_r$ and we argue by induction on $d$. The case $d=0$ is trivial since the polynomials have to be equal to zero. So, let us assume that the theorem has been proved up to rank $d-1$ and let us prove it for $d\geq 1$. Let $V\subset X$ and let $\varepsilon>0$ be such that $V$ is open and nonempty and $D(1,\varepsilon)V\subset U$, where $D(a,\varepsilon)$ denotes the disk $\{z\in\mathbb C;\ |z-a|<\varepsilon\}$. Let us set $$E=\overline{\big\{(e^{iP_1(k)},\dots,e^{iP_r(k)});\ k\geq 0\big\}}\subset\mathbb T^r.$$ By the compactness of $E$, there exist integers $m_1,\dots,m_q$ such that, for any $k\geq 0$, one may find $j\in\{1,\dots,q\}$ so that $|e^{iP_l(k)}-e^{iP_l(m_j)}|<\varepsilon$ for any $l=1,\dots,r$. We then set, for $j=1,\dots,q$, $$I_j=\big\{k\geq 0;\ \forall l\in\{1,\dots,r\},\ |e^{iP_l(k)}-e^{iP_l(m_j)}|<\varepsilon\big\}.$$ Therefore, $\bigcup_{j=1}^q I_j=\mathbb N$.
By Corollary \ref{CORINVROT}, $E$ is invariant under complex conjugation. In particular, for any $j=1,\dots,q$, $(e^{-iP_1(m_j)},\dots,e^{-iP_r(m_j)})$ belongs to $E$. We now apply Corollary \ref{COREXTENSION}. For any $j=1,\dots,q$, we may find an integer $n_j$ such that, for any $l\in\{1,\dots,r\}$, $$e^{iP_l(m_j)}e^{iP_l(n_j)}T^{n_j}x\in V.$$ Since $T$ is continuous, there exists an open neighbourhood $W$ of $x$ such that $$\forall j\in\{1,\dots,q\},\ \forall l\in\{1,\dots,r\},\ e^{iP_l(m_j)}e^{iP_l(n_j)}T^{n_j}W\subset V.$$ Now, there exists a family of polynomials $(Q_{l,j})_{\substack{1\leq l\leq r\\ 1\leq j\leq q}}$ with degree at most $d-1$ and without constant term such that, for any $j$ in $1,\dots,q$, for any $l$ in $1,\dots,r$, for any $k\geq 0$, $$P_l(n_j+k)=P_l(n_j)+P_l(k)+Q_{l,j}(k).$$ We set $$A=\big\{k\geq 0;\ \forall (l,j)\in\{1,\dots,r\}\times\{1,\dots,q\},\ e^{iQ_{l,j}(k)}T^k x\in W\big\}.$$ By the induction hypothesis, $A$ has positive lower density. By Lemma \ref{LEMROTFHC}, this remains true for $$B=\bigcup_{j=1}^q (n_j+A\cap I_j).$$ Now, pick any $n\in B$. There exist $j$ in $\{1,\dots,q\}$ and $k\in A\cap I_j$ such that $n=n_j+k$. This leads to $$e^{iP_l(n_j+k)}T^{n_j+k}(x)=\underbrace{\underbrace{e^{iP_l(k)}e^{-iP_l(m_j)}}_{\in D(1,\varepsilon)}\underbrace{e^{iP_l(m_j)}e^{iP_l(n_j)}T^{n_j}\underbrace{(e^{iQ_{l,j}(k)}T^k x)}_{\in W}}_{\in V}}_{\in U}.$$ Thus, $B\subset \big\{n\in\mathbb N;\ \forall l\in\{1,\dots,r\},\ e^{iP_l(n)}T^n x\in U\big\}.$ \end{proof} \begin{proof}[Proof of Theorem \ref{mainthmfhc}] That $(i)$ implies $(ii)$ is a direct consequence of the previous theorem: when we have a single polynomial, we may allow a constant term since $e^{i\theta}x\in FHC(T)$ iff $x\in FHC(T)$. It remains to prove $(iii)$ implies $(i)$. The proof follows the same lines; let $U,V\subset X$ be open and nonempty and let $\varepsilon>0$ with $D(1,\varepsilon)V\subset U$.
Let $\lambda_1,\dots,\lambda_q\in\mathbb T$ be such that $\mathbb T$ is contained in $\bigcup_{j=1}^q D(\lambda_j^{-1},\varepsilon)$. For $j=1,\dots,q$, let us set $$I_j=\big\{k\geq 0;\ e^{iP(k)}\in D(\lambda_j^{-1},\varepsilon)\big\}.$$ Moreover, for any $j$ in $\{1,\dots,q\}$, one may find an integer $n_j$ such that $T^{n_j}x\in \lambda_jV$ since the assumption and Theorem \ref{mainthm} imply $x\in HC(T)$. Let now $W\subset X$ open and nonempty be such that $T^{n_j}(W)\subset\lambda_j V$ for any $j\in\{1,\dots,q\}$. We finally set \begin{eqnarray*} A&=&\big\{k\geq 0;\ e^{iP(k)}T^k x\in W\big\}\\ B&=&\bigcup_{j=1}^q (n_j+A\cap I_j). \end{eqnarray*} $A$, and thus $B$, has positive lower density. Moreover, if $n=n_j+k$ belongs to $B$, then $$T^{n_j+k}(x)=\underbrace{\underbrace{e^{-iP(k)}}_{\in D(\lambda_j^{-1},\varepsilon)}\underbrace{T^{n_j}\underbrace{(e^{iP(k)}T^k x)}_{\in W}}_{\in \lambda_jV}}_{\in U}.$$ \end{proof} \begin{remark} Even if the result of Theorem \ref{THMFHCSTRONG} looks stronger than condition (ii) of Theorem \ref{mainthmfhc}, it is the natural statement which comes from our proof. Indeed, if you follow the proof for a single polynomial, then you have to apply the induction hypothesis for several polynomials! \end{remark} \section{Other sequences} In this section, we study whether Theorem \ref{mainthm} remains true when we replace the sequence $(P(n))$ by other classical sequences. We first show that this is not the case if the sequence grows to infinity too quickly. The counterexample is very easy. It is just a backward shift on $\ell^2(\mathbb Z_+)$. We denote by $B$ the unweighted backward shift. As usual $(e_n)_{n\geq 0}$ denotes the standard basis on $\ell^2(\mathbb Z_+)$. \begin{proposition} \label{geomrate} Let $T=2B$ acting on $\ell^2(\mathbb Z_+)$ and let $(f(n))$ be a sequence of positive integers with $f(n+1)>af(n)$, $n\in \mathbb{N}$, for some $a>1$.
Then for every $x\in HC(T)$ there exists $\theta\in\mathbb R$ such that the set $\big\{ e^{2\pi if(n)\theta}T^n x;\ n\geq 1\big\} $ is not dense in $\ell^2(\mathbb Z_+)$. \end{proposition} \begin{proof} Let $x\in HC(T)$. Choose $N\geq 1$ such that $\sum_{j\geq N+1}a^{-j}\leq 1/4$ and define the set $$A=\big\{n\in\mathbb N;\ \exists \lambda\in\mathbb T,\ \|\lambda T^n x-e_N\|<1/2\big\}.$$ Observe that if $n$ belongs to $A$, then $n+k$ does not belong to $A$ for any $1\leq k\leq N$. Indeed, one can write $T^n x=\mu e_N+y$ with $\|y\|\leq 1/2$ and $|\mu|=1$, so that $T^{n+k}x=\mu 2^k e_{N-k}+T^k y$. This yields, for any $\lambda\in\mathbb T$, \begin{eqnarray*} \|\lambda T^{n+k}x-e_N\|&=&\|\lambda\mu 2^k e_{N-k}-e_N+\lambda T^k y\|\\ &\geq&2^k-2^{k-1}\geq 1. \end{eqnarray*} We define a sequence $(\alpha_n)$ as follows: \begin{itemize} \item $\alpha_n=0$ provided $n\notin A$; \item if $n\in A$, we set $\theta_{n-1}=\sum_{j=0}^{n-1} \frac{\alpha_j}{f(j)}$. We then choose $\alpha_n\in \{0,1/2\}$ such that $$\Re e\left(e^{2\pi i(f(n)\theta_{n-1}+\alpha_n)}e_N^*(T^n x)\right)\leq 0.$$ \end{itemize} We finally define $\theta=\sum_{n\geq 0}\frac{\alpha_n}{f(n)}$ and we claim that $\{e^{2\pi if(n)\theta}T^n x;\ n\geq 1\}$ is not dense in $\ell^2(\mathbb Z_+)$. Precisely, let us show that $e_N$ does not belong to the closure of $\{e^{2\pi if(n)\theta}T^nx;\ n\geq 1\}$. Indeed, when $n$ does not belong to $A$, we are sure that $\|e^{2\pi if(n)\theta}T^n x -e_N\|\geq 1/2$. Otherwise, when $n$ belongs to $A$, we can write $$\|e^{2\pi if(n)\theta}T^n x-e_N\|\geq |e_N^*(e^{2\pi if(n)\theta}T^n x)-1|.$$ Now, \begin{eqnarray*} e_N^*(e^{2\pi if(n)\theta}T^n x)&=&e^{2\pi i(f(n)\theta_{n-1}+\alpha_n)}e_N^*(T^n x)e^{2\pi i f(n)\left(\sum_{j\geq n+1}\frac {\alpha_j}{f(j)}\right)}\\ &=&e^{2\pi i(f(n)\theta_{n-1}+\alpha_n)}e_N^*(T^n x)e^{2\pi i f(n)\left(\sum_{j\geq n+N+1}\frac {\alpha_j}{f(j)}\right)}.
\end{eqnarray*} We set $z=e^{2\pi i(f(n)\theta_{n-1}+\alpha_n)}e_N^*(T^n x)$ and $\beta=2\pi f(n)\left(\sum_{j\geq n+N+1}\frac {\alpha_j}{f(j)}\right)$. By the construction of $\alpha_n$, $\Re e(z)\leq 0$. Moreover, it is easy to check that $\beta\in[0,\pi/4]$: $$0\leq2\pi f(n)\left(\sum_{j\geq n+N+1}\frac {\alpha_j}{f(j)}\right) \leq \pi f(n)\sum_{j\geq N+n+1}\frac{1}{a^{j-n}f(n)}\leq \frac \pi4.$$ Thus, $e_N^*(e^{2\pi if(n)\theta}T^n x)$ does not belong to the cone $\{\rho e^{i\gamma};\rho>0,\ |\gamma|\leq\pi/4\}.$ In particular, there exists $\delta>0$ such that $$\|e^{2\pi if(n)\theta}T^n x-e_N\|\geq\delta.$$ \end{proof} \begin{remark} In the previous proposition one cannot conclude the stronger assertion that there exists $\theta\in\mathbb R$ such that for every $x\in HC(2B)$ the set $\{ e^{2\pi if(n)\theta}(2B)^n x;\ n\geq 1\} $ is not dense in $\ell^2(\mathbb Z_+)$. The reason for this is very simple and comes from the fact that $2B$ is hypercyclic in a very strong sense, namely it satisfies the hypercyclicity criterion. For a comprehensive discussion on the hypercyclicity criterion and its several equivalent forms we refer to \cite{BM}, \cite{GP}. To explain briefly, take any operator $S\in \mathfrak L(X) $ satisfying the hypercyclicity criterion and let $(\lambda_n)$ be any sequence of unimodular complex numbers. It is then immediate that the sequence of operators $(\lambda_nS^n)$ also satisfies the hypercyclicity criterion; hence, the set of $y\in X$ such that $\{ \lambda_nS^ny;n\geq 1\}$ is dense in $X$ is $G_{\delta}$ and dense in $X$. By an appeal to Baire's category theorem one can find $x\in HC(S)$ such that $\overline{ \{ \lambda_nS^nx;\ n\geq 0\} }=X$. Applying this with $S:=2B\in \mathfrak L(\ell^2(\mathbb Z_+))$ and $\lambda_n:=e^{2\pi if(n)\theta}$, $n\geq 1$, we see that for every $\theta\in\mathbb R$ there exists $x\in HC(2B)$ such that the set $\{ e^{2\pi if(n)\theta}(2B)^n x;\ n\geq 1\} $ is dense in $\ell^2(\mathbb Z_+)$.
\end{remark} On the contrary, we have an analogue of Theorem \ref{mainthm} for sequences which grow slowly to infinity. The growth condition which comes into play here is expressed in terms of the increments $f(n+k)-f(n)$. \begin{theorem} \label{THMSLOWGROWTH} Let $X$ be a Banach space, let $T\in \mathfrak L(X) $, let $x\in X$ and let $(f(n))$ be a sequence of real numbers satisfying the following condition: there exist an integer $d\geq 0$, sequences $(g_l(n))_n$ for $0\leq l\leq d$ and $(\varepsilon_k(n))_n$ for any $k\geq 1$, such that, for any $n,k\geq 1$, $$f(n+k)-f(n)=\sum_{l=0}^{d}g_l(n)k^l+\varepsilon_k(n)$$ and, for a fixed $k\geq 1$, $|\varepsilon_k(n)|\xrightarrow{n\to+\infty}0.$ Then the following are equivalent. \begin{itemize} \item[(i)]$x\in HC(T)$; \item[(ii)]$\{e^{ i f(n)}T^n x;\ n\geq 1\}$ is dense in $X$. \end{itemize} \end{theorem} \begin{proof} We just need to prove that $(i)\implies (ii)$. We are going to apply Theorem \ref{mainthm} (observe that a polynomial $P$ satisfies the assumptions of Theorem \ref{THMSLOWGROWTH} with $d=\deg(P)$ and $\varepsilon_k(n)=0$!) and a compactness argument. \begin{lemma}\label{LEMSLOWGROWTH} Let $d\geq 0$, $x\in HC(T)$, $y\in X$ and $\varepsilon>0$. There exists $K\geq 1$ such that, for any $P\in\mathbb R_d[t]$, for any $\mu\in\mathbb T$, there exists $k\leq K$ such that \begin{eqnarray} \|e^{iP(k)}T^k x-\mu y\|<\varepsilon. \label{EQLEMSLOWGROwTH} \end{eqnarray} \end{lemma} \begin{proof}[Proof of Lemma \ref{LEMSLOWGROWTH}] Let us observe that if $(P,\mu)$ satisfies (\ref{EQLEMSLOWGROwTH}) for a fixed $k$, this inequality is also satisfied with the same $k$ for any $(Q,\lambda)$ in a neighbourhood of $(P,\mu)$. Moreover, by Theorem \ref{mainthm}, given any $(P,\mu)\in \mathbb R_d[t]\times\mathbb T$, we know that we may always find an integer $k$ such that (\ref{EQLEMSLOWGROwTH}) is true.
Define now $$\mathbb R_d^{\pi}[t]=\left\{P=\sum_{j=0}^d \theta_j t^j;\ \theta_j\in[0,2\pi]\right\}.$$ $\mathbb R_d^\pi[t]$ is a compact subset of $\mathbb R_d[t]$. Moreover, for any $P\in\mathbb R_d[t]$, there exists some $Q\in \mathbb R_d^\pi[t]$ such that $e^{iQ(k)}=e^{iP(k)}$ for any $k\in\mathbb Z$. Hence, Lemma \ref{LEMSLOWGROWTH} follows from the compactness of $\mathbb R_d^\pi[t]\times\mathbb T$. \end{proof} We come back to the proof of Theorem \ref{THMSLOWGROWTH}. We fix $\varepsilon>0$ and $y\in X$. We apply Lemma \ref{LEMSLOWGROWTH} to the 4-tuple $(d,x,y,\varepsilon)$. We then set $\delta=\varepsilon/\|T\|^K$. Since $x\in HC(T)$, there exist $n\geq 1$ as large as necessary and $\alpha_n\in\mathbb T$ such that $$e^{if(n)}T^n x=\alpha_n x+z,\ \|z\|<\delta.$$ Then, there exists $k\leq K$ such that, setting $P_n(k)=\sum_{l=0}^d g_l(n)k^l$, $$e^{iP_n(k)}T^k x=\alpha_n^{-1}y+z',\ \|z'\|<\varepsilon.$$ Thus, \begin{eqnarray*} e^{if(n+k)}T^{n+k}x&=&e^{i\big(f(n+k)-f(n)\big)}T^k\big(e^{if(n)}T^n x\big)\\ &=&e^{i\varepsilon_k(n)}e^{iP_n(k)}T^k\big(\alpha_n x+z\big)\\ &=&e^{i\varepsilon_k(n)}\big(y+\alpha_n z'+e^{iP_n(k)}T^k z\big). \end{eqnarray*} Since $\sup_{0\leq k\leq K}|\varepsilon_k(n)|$ goes to 0 as $n$ goes to infinity, we get $$\big\|e^{if(n+k)}T^{n+k}x-y\big\|<3\varepsilon,$$ provided $n$ has been chosen large enough. \end{proof} This theorem covers the cases of many sequences which do not grow too quickly to infinity, like $f(n)=n^a\log^b(n+1)$, $(a,b)\in\mathbb R^2$, or finite linear combinations of such functions. Interestingly, we may also observe that Theorem \ref{THMPREP} does not extend to this level of generality. \begin{example} \label{noTHMPREP} Let $T=2B$ acting on $\ell^2(\mathbb Z_+)$. There exists $x\in HC(T)$ such that $(e_0,1)$ does not belong to $\overline{\{(T^n x,e^{i\log (n)});\ n\geq 1\}}$. 
\end{example} \begin{proof} Observe that $$\Re e(e^{i\log (n)})\geq 0\iff n\in \bigcup_{k\geq 0}[\exp(2k\pi-\pi/2),\exp(2k\pi+\pi/2)]\iff n\in\bigcup_{k\geq 0} (a_k,b_k)$$ where $(a_k)$ and $(b_k)$ are sequences of integers satisfying $a_{k+1}-b_k\to+\infty$. Let $(x_k)$ be a dense subset of $\ell^2(\mathbb Z_+)$ such that for any $k\geq 1$, $\|x_k\|\leq k$ and $x_k$ has a finite support contained in $[0,a_{k+1}-b_k-1]$. We set $$x=\sum_{j\geq 1}\frac1{2^{b_{j^2}}}S^{b_{j^2}}x_j,$$ where $S$ is the forward shift. Provided $n\in \bigcup_k (a_k,b_k)$, we know that $\|T^n x-e_0\|\geq 1$ since $\langle T^n x,e_0\rangle=0$. In particular, $(e_0,1)$ does not belong to $\overline{\{(T^n x,e^{i\log (n)});\ n\geq 1\}}$. Nevertheless, $x$ is a hypercyclic vector for $T$. Indeed, $$\|T^{b_{k^2}}x-x_k\|\leq \sum_{j\geq k+1} \left\|\frac{1}{2^{b_{j^2}-b_{k^2}}}S^{b_{j^2}-b_{k^2}}x_j\right\|\leq \sum_{j\geq k+1}\frac{j}{2^{b_{j^2}-b_{k^2}}}\leq \sum_{j\geq k+1}\frac{j}{2^j}\xrightarrow{k\to+\infty} 0.$$ \end{proof} As a consequence of Theorem \ref{THMSLOWGROWTH}, given $T\in \mathfrak L(X) $ and a sequence of real numbers $(f(n))$ such that $f(n+1)-f(n) \to 0$, then $x\in HC(T)$ if and only if $\{e^{2\pi i f(n)}T^n x;\ n\geq 1\}$ is dense in $X$. Rephrasing this, we can say that for every sequence $(\lambda_n)\subset \mathbb{T}$ with $\lambda_{n+1}/\lambda_n \to 1$, $x\in HC(T)$ if and only if $\{ \lambda_nT^n x;\ n\geq 1\}$ is dense in $X$. One now may wonder whether the assumption ``$(\lambda_n)\subset \mathbb{T}$" can be dropped. However, this is not the case. Indeed, Le\'{o}n-Saavedra proved in \cite{L} that there exists a hypercyclic operator $T\in \mathfrak L(X) $ such that for every $x\in X$ the set $\{ \lambda_nT^nx;\ n\geq 1\}$ is not dense in $X$ where $\lambda_n=1/n$, $n=1,2,\ldots $ and of course $\lambda_{n+1}/\lambda_n\to 1$. 
In general, if we keep the assumption $\lambda_{n+1}/\lambda_n\to 1$ but we move away from the unit circle $\mathbb{T}$, that is $\lambda_n\in \mathbb{C}$, then none of the implications $(ii)\implies (i)$, $(iii)\implies (ii)$ in Theorem \ref{mainthm} holds. This follows from Propositions 2.4 and 2.5 of \cite{CH2}. On the other hand, under the additional assumption that $T\in \mathfrak L(X) $ is hypercyclic it is known \cite{CH1} that for $(\lambda_n)$ a sequence of non-zero complex numbers with $\lambda_{n+1}/\lambda_n\to 1$ and $x\in X$, if the set $\{ \lambda_nT^nx;\ n\geq 1\}$ is somewhere dense then it is everywhere dense. \section{Uniformly distributed sequences and generic statements} In Section 4 we showed that Theorem \ref{mainthm} is no longer true for sequences of unimodular complex numbers whose phases grow to infinity at a geometric rate, see Proposition \ref{geomrate}. In particular, denoting by $B$ the unweighted backward shift on $\ell^2(\mathbb Z_+)$, for every $x\in HC(2B)$ there exists $\theta\in\mathbb R$ such that the set $\big\{ e^{2\pi i2^n\theta}(2B)^n x;\ n\geq 1\big\} $ is not dense in $\ell^2(\mathbb Z_+)$. In this section we shall establish results, both in measure and in category, going in the opposite direction. For instance, a consequence of our result, Proposition \ref{prop1generic}, is that for every $x\in HC(2B)$ the set of $\theta$'s in $\mathbb{R}$ such that $\big\{ e^{2\pi i2^n\theta }(2B)^n x;\ n\geq 1\big\} $ is dense in $\ell^2(\mathbb Z_+)$ is residual and of full Lebesgue measure in $\mathbb{R}$. This is in sharp contrast with Proposition \ref{geomrate}. This kind of behaviour is a natural consequence of the general metric theorem of Koksma, see Theorem 4.3 in Chapter 1 of \cite{KN}. Koksma's theorem generalizes the beautiful result of Weyl \cite{KN}: \textit{if $(n_k)$ is a sequence of distinct integers then the sequence $(n_k\theta )$ is uniformly distributed for almost every $\theta \in \mathbb{R}$ }.
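Let us recall, for the reader's convenience, the classical Weyl criterion which underlies these equidistribution statements (standard material from \cite{KN}, not part of the arguments below): a sequence $(x_n)$ of real numbers is uniformly distributed modulo 1 if and only if, for every integer $h\neq 0$,
$$\frac1N\sum_{n=1}^N e^{2\pi ihx_n}\xrightarrow{N\to+\infty}0.$$
For instance, for $x_n=n\theta$ with $\theta$ irrational, summing the geometric series gives
$$\left|\frac1N\sum_{n=1}^N e^{2\pi ihn\theta}\right|\leq\frac1N\cdot\frac{2}{|1-e^{2\pi ih\theta}|}\xrightarrow{N\to+\infty}0,$$
so that $(n\theta)$ is uniformly distributed modulo 1 and, in particular, $\{e^{2\pi in\theta};\ n\geq 1\}$ is dense in $\mathbb T$.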
Here, we shall need the following corollary of Koksma's theorem, see Corollary 4.3 in Chapter 1 of \cite{KN}, which we state as a theorem. \begin{theorem} (Koksma) \label{Koksma} Let $(f(n))$ be a sequence of real numbers such that for some $\delta>0$, $|f(n)-f(m)|>\delta$ for $n\neq m$. Then the sequence $(f(n)\theta)$ is uniformly distributed for almost every $\theta \in \mathbb{R}$. \end{theorem} \begin{proposition} \label{prop1generic} Let $X$ be a Banach space and let $T\in \mathfrak L(X) $ be a hypercyclic operator. Let also $(f(n))$ be a sequence of real numbers such that for some $\delta>0$, $|f(n)-f(m)|>\delta$ for $n\neq m$. Then for every $x\in HC(T)$ there exists a residual subset $A$ of $\mathbb{R}$ with full measure such that for every $\theta \in A$ the set $\{ (T^nx,e^{2\pi i f(n)\theta });\ n\geq 1\}$ is dense in $ X\times \mathbb{T}$. \end{proposition} \begin{proof} Take $x\in HC(T)$. Let $\{ x_j;\ j\in \mathbb{N} \}$, $\{ t_l;l\in \mathbb{N}\}$ be dense subsets of $X$ and $\mathbb{T}$ respectively and define $A:=\bigcap_{j,l,s \in \mathbb{N}}\bigcup_{n\in \mathbb{N}}A_{j,l,s,n}$, where $$A_{j,l,s,n}:=\{ \theta \in \mathbb{R};\ | e^{2\pi i f(n)\theta }-t_l|<1/s \,\,\, \textrm{and} \,\,\, \| T^nx-x_j\| <1/s \} ,$$ for $j,l,s,n\in \mathbb{N}$. Clearly, for every $\theta \in A$ the set $\{ (T^nx,e^{2\pi i f(n)\theta });\ n\geq 1\}$ is dense in $ X\times \mathbb{T}$. Let us first prove that $A$ is residual in $\mathbb{R}$. It is easy to see that $A_{j,l,s,n}$ is open in $\mathbb{R}$ and by Baire's category theorem it remains to show that for every $j,l,s\in \mathbb{N}$ the set $\cup_{n\in \mathbb{N}}A_{j,l,s,n}$ is dense in $\mathbb{R}$. To this end, fix $j,l,s\in \mathbb{N}$ and let $\alpha \in \mathbb{R}$, $\epsilon >0$. Consider the following set of positive integers $ \{ n_1<n_2<\cdots \} :=\{ n\in \mathbb{N};\ \| T^nx-x_j\| <1/s \}$. 
By Theorem \ref{Koksma} the sequence $(f(n_k)\theta )$ is uniformly distributed for almost every $\theta \in \mathbb{R}$. In particular, there exists $\theta \in \mathbb{R}$ with $|\theta -\alpha |<\epsilon $ such that the set $\{ e^{2\pi i f(n_k)\theta };\ k\in \mathbb{N} \}$ is dense in $\mathbb{T}$. Thus, $|\alpha-\theta |<\epsilon $, $| e^{2\pi i f(n_m)\theta } -t_l|<1/s$ and $\| T^{n_m}x-x_j\| <1/s$ for some positive integer $m$. This proves the desired density. Implicit in the above is that for every $j,l,s\in \mathbb{N}$ the set $\bigcup_{n\in \mathbb{N}}A_{j,l,s,n}$ has full measure. Hence, $A$ has full measure. \end{proof} In view of the above, one may be tempted to ask the following question. \medskip {\bf Question.} Let $X$ be a Banach space, let $T\in \mathfrak L(X) $ be hypercyclic and let $x\in HC(T)$. Consider a sequence of real numbers $(f(n))$ such that the sequence $(f(n)\theta )$ is uniformly distributed for some $\theta \in \mathbb{R}$. Is it true that $\{ e^{2\pi if(n)}T^nx;\ n\geq 1\}$ is dense in $X$? \medskip Although in many cases the above question admits an affirmative reply, for instance when $f(n)$ is a polynomial in $n$, the answer in general is no! To see this, fix a hypercyclic vector $x\in \ell^2(\mathbb Z_+)$ for $2B\in \mathfrak L (\ell^2(\mathbb Z_+))$, where $B$ is the unweighted backward shift. By Proposition \ref{geomrate} there exists $\theta '\in \mathbb{R}$ such that the set $\big\{ e^{2\pi i2^n\theta '}(2B)^n x;\ n\geq 1\big\} $ is not dense in $\ell^2(\mathbb Z_+)$. Define $f(n)=2^n\theta '$. Then, on the one hand, by Theorem \ref{Koksma} there exists $\theta \in \mathbb{R}$ such that the sequence $(f(n)\theta )$ is uniformly distributed and, on the other hand, $\big\{ e^{2\pi if(n)}(2B)^n x;\ n\geq 1\big\} $ is not dense in $\ell^2(\mathbb Z_+)$.
Regarding the previous question it is important to note that there are sequences of real numbers $(f(n))$ such that for no $\theta \in \mathbb{R}$ the sequence $( f(n) \theta )$ is uniformly distributed and yet $\overline{ \{ e^{2\pi if(n)}T^nx;\ n\geq 1\} }=X$ for every $x\in HC(T)$. This is the case for the sequence $f(n)=\log (n)$, $n=1,2,\ldots $. Indeed, this follows from Theorem \ref{THMSLOWGROWTH} and the fact that for every $\theta \in \mathbb{R}$ the sequence $( \log (n)\theta)$ is not uniformly distributed, see Examples 2.4, 2.5 in Chapter 1 of \cite{KN}. It is clear now that the right question along this line is the following. \medskip {\bf Question.} Let $X$ be a Banach space, let $T\in \mathfrak L(X) $ be hypercyclic and let $x\in HC(T)$. Consider a sequence of real numbers $(f(n))$ which is uniformly distributed. Is it true that the set $\{ e^{2\pi if(n)}T^nx;\ n\geq 1\}$ is dense in $X$? \medskip In turn, the answer to this question is negative. \begin{proposition} Let $T=2B$ acting on $X=\ell^2(\mathbb Z_+)$. There exists a uniformly distributed sequence of real numbers $(f(n))$ and a vector $x\in HC(T)$ such that $\{e^{2\pi i f(n)}T^n x;\ n\geq 1\}$ is not dense in $X$. \end{proposition} \begin{proof} We start from any uniformly distributed sequence $(g(n))$. The idea of the proof is to slightly modify the sequence $(g(n))$ into a sequence $(f(n))$ which remains uniformly distributed and such that $e^{2\pi if(n)}$ can be arbitrarily chosen for $n$ belonging to some subset $A\subset \mathbb N$ containing arbitrarily large intervals. We define simultaneously $x\in HC(T)$ such that $$\left\{\begin{array}{rcll} e_0^*(T^n x)&=&0&\textrm{ provided }n\notin A\\ \Re e\big(e_0^*(e^{2\pi if(n)}T^n x)\big)&\geq&0&\textrm{ provided }n\in A. \end{array} \right.$$ Hence, $\{e^{2\pi if(n)}T^n x;\ n\geq 1\}$ will not be dense in $X$. We now proceed with the details.
Let $(n_k)$ be an increasing sequence of integers such that $n_1=1$, $n_{k+1}>n_k+(k+1)$ for any $k\geq 1$ and $k^2/n_k\to 0$. Let $(x_k)$ be a dense sequence in $\ell^2(\mathbb Z_+)$ such that, for any $k\geq 1$, $\|x_k\|\leq k$ and $x_k$ has finite support contained in $[0,k]$. We set $$x=\sum_{j\geq 1}\frac1{2^{n_j}}S^{n_j}x_j$$ and we observe that $x\in HC(T)$. Indeed, $$\|T^{n_k}x-x_k\|=\left\|\sum_{j>k}\frac1{2^{n_j-n_k}}S^{n_j-n_k}x_j\right\|\leq\sum_{j>k}\frac j{2^j}\xrightarrow{k\to+\infty}0.$$ We then define $(f(n))$ by \begin{itemize} \item $f(n)=g(n)$ provided $n\notin\bigcup_{k\geq 1}[n_k,n_k+k]$; \item $f(n)$ is any positive real number such that $\Re e\big(e_0^*(e^{2\pi if(n)}T^n x)\big)\geq0$ provided $n\in \bigcup_{k\geq 1}[n_k,\ n_k+k]$. \end{itemize} As already observed, $\{e^{2\pi if(n)}T^n x;\ n\geq 1\}$ cannot be dense in $\ell^2(\mathbb Z_+)$. Thus it remains to show that $(f(n))$ is uniformly distributed. Let $I$ be a subarc of $\mathbb T$ and let $n\geq 1$. Let $k\geq 1$ be such that $n_k\leq n<n_{k+1}$. Then, since $f(j)$ and $g(j)$ may only differ for $j\leq n$ if $j$ belongs to $\bigcup_{l=1}^k[n_l,n_l+l]$ which has cardinality less than $(k+1)^2$, $$-\frac{(k+1)^2}n+\frac1n\textrm{card}\left(\left\{1\leq j\leq n;\ e^{2\pi ig(j)}\in I\right\}\right)\leq \frac1n\textrm{card}\left(\left\{1\leq j\leq n;\ e^{2\pi if(j)}\in I\right\}\right)$$ and $$\frac1n\textrm{card}\left(\left\{1\leq j\leq n;\ e^{2\pi if(j)}\in I\right\}\right)\leq \frac1n\textrm{card}\left(\left\{1\leq j\leq n;\ e^{2\pi ig(j)}\in I\right\}\right)+\frac{(k+1)^2}n.$$ Since $k^2/n\leq k^2/n_k$ goes to zero, $(f(n))$ remains uniformly distributed. \end{proof} \medskip We conclude the paper with a generic result which is related to the question we asked in the introduction. 
The space $\mathbb{T}^{\mathbb{N}}$ is endowed with the metric $d(\Lambda ,M)=\sum_{n=1}^{+\infty}\frac{|\lambda_n-\mu_n|}{2^n}$ for $\Lambda=(\lambda_n)\in \mathbb{T}^{\mathbb{N}}$, $M=(\mu_n)\in \mathbb{T}^{\mathbb{N}}$, and becomes a complete metric space. \begin{proposition} \label{prop2generic} Let $X$ be a Banach space and let $T\in \mathfrak L(X) $ be a hypercyclic operator. Then for every $x\in HC(T)$ there exists a residual subset $B$ of $\mathbb{T}^{\mathbb{N}}$ such that for every $(\lambda_1, \lambda_2,\ldots )\in B$ the set $\{ \lambda_nT^nx;\ n\geq 1\}$ is dense in $X$. \end{proposition} \begin{proof} Fix $x\in HC(T)$. Let $\{ x_j;\ j\in \mathbb{N}\}$ be a dense set in $X$ and define the set $B_{j,s,n}:=\{ \Lambda=(\lambda_m)\in \mathbb{T}^{\mathbb{N}};\ \| \lambda_nT^nx-x_j\| <1/s\}$ for $j,s,n\in \mathbb{N}$. It is straightforward to check that $B_{j,s,n}$ is open in $\mathbb{T}^{\mathbb{N}}$ for every $j,s,n\in \mathbb{N}$. We then define $B:=\bigcap_{j,s \in \mathbb{N}}\bigcup_{n\in \mathbb{N}}B_{j,s,n}$. Observe that if $\Lambda =(\lambda_n)\in B$ then the set $\{ \lambda_nT^nx;\ n\geq 1\}$ is dense in $X$. Hence, in view of Baire's category theorem it suffices to show that for every $j,s\in \mathbb{N}$ the set $\bigcup_{n\in \mathbb{N}}B_{j,s,n}$ is dense in $\mathbb{T}^{\mathbb{N}}$. Fix $j,s\in \mathbb{N}$ and let $M=(\mu_n)\in \mathbb{T}^{\mathbb{N}}$, $\epsilon >0$. There exists a positive integer $N$ such that $\sum_{n=N+1}^{+\infty}\frac{2}{2^n}<\epsilon$. Define the vector $\Lambda:=(\mu_1, \ldots, \mu_N, 1,1,1, \ldots )\in \mathbb{T}^{\mathbb{N}}$. From the above we get $d(\Lambda ,M)<\epsilon$ and $\| \lambda_nT^nx-x_j\| =\| T^nx-x_j\| <1/s$ for some $n\in \mathbb{N}$ with $n >N$. Hence, the set $\bigcup_{n\in \mathbb{N}}B_{j,s,n}$ is dense in $\mathbb{T}^{\mathbb{N}}$. \end{proof}
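The density step in the proof of Proposition \ref{prop2generic} rests on the elementary fact that matching a target sequence on its first $N$ coordinates forces the distance below $\sum_{n>N}2/2^n=2^{1-N}$. A small numerical sketch of this bound (the sequence length and target phases are arbitrary choices of ours):

```python
import cmath

def d(Lam, M):
    """Metric d(Lambda, M) = sum_n |lambda_n - mu_n| / 2^n, with n starting at 1."""
    return sum(abs(l - m) / 2 ** (n + 1) for n, (l, m) in enumerate(zip(Lam, M)))

# A target sequence M on the torus, and Lambda matching it up to index N
M = [cmath.exp(2j * cmath.pi * 0.1 * n) for n in range(1, 21)]
N = 10
Lam = M[:N] + [1.0 + 0.0j] * (len(M) - N)

# Since |lambda_n - mu_n| <= 2 on the torus, d(Lam, M) <= sum_{n > N} 2/2^n = 2^(1-N)
print(d(Lam, M), 2.0 ** (1 - N))
```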
\section{Introduction} Numerous and complementary cosmological observations indicate that the expansion of the universe is undergoing cosmic acceleration at the present time\cite{observation}. This cosmic acceleration is attributed to a mysterious dominant component, dark energy, with negative pressure. The combined analysis of cosmological observations suggests that the universe is spatially flat, and consists of about $70\%$ dark energy, $30\%$ dust matter (cold dark matter plus baryons), and negligible radiation. Although we can affirm that the ultimate fate of the universe is determined by the feature of dark energy, the nature of dark energy as well as its cosmological origin remain enigmatic at present. Explanations have been sought within a wide range of physical phenomena, including a cosmological constant, exotic fields\cite{quint,phantom,k,quintom,vector}, a new form of the gravitational equation\cite{gt}, etc. Recently, a new model stimulated by the holographic principle has been put forward to explain the dark energy\cite{Hsu:2004ri,Li:2004rb}. According to the holographic principle, the number of degrees of freedom of a physical system scales with the area of its boundary. In this context, Cohen et al.\cite{cohen} suggested that in quantum field theory a short-distance cutoff is related to a long-distance cutoff due to the limit set by the formation of a black hole, which results in an upper bound on the zero-point energy density. In line with this suggestion, Hsu and Li\cite{Hsu:2004ri,Li:2004rb} argued that this energy density could be viewed as the holographic dark energy satisfying \begin{equation} \rho_{de}=3c^2M_P^2L^{-2}~,\label{de} \end{equation} where $c$ is a numerical constant, and $M_P\equiv 1/\sqrt{8\pi G}$ is the reduced Planck mass. If we take $L$ as the size of the current universe, for instance the Hubble scale $H^{-1}$, then the dark energy density will be close to the observed value.
However, Hsu\cite{Hsu:2004ri} pointed out that this yields a wrong equation of state for dark energy. Li\cite{Li:2004rb} subsequently proposed that the IR cutoff $L$ should be taken as the size of the future event horizon \begin{equation} L=R_{eh}(a)=a\int_t^\infty{d\tilde{t}\over a(\tilde{t})}=a\int_a^\infty{d\tilde{a}\over H\tilde{a}^2}~.\label{eh} \end{equation} Then the problem can be solved nicely and the holographic dark energy model can thus be constructed successfully. The holographic dark energy scenario may simultaneously provide natural solutions to both dark energy problems, as demonstrated in Ref.\cite{Li:2004rb}. The only undetermined parameter $c$ should be fixed by the observations. If $c\leq1$, which satisfies the original bound $L^3\rho_{de}\leq LM_p^2$, the equation of state (EOS) of dark energy evolves from the state of $w>-1$ to $w<-1$, and the critical state of $w=-1$ must be crossed. If $c>1$, the EOS of dark energy keeps $w>-1$\cite{Li:2004rb}, which naturally avoids the cosmic big rip; however, the original bound $L^3\rho_{de}\leq LM_p^2$ will be violated. The model we discuss here is only a phenomenological framework, and it is unclear whether it is appropriate to tightly constrain the value of $c$ by means of the analogy to the black hole. As a matter of fact, the possibility of $c>1$ has been seriously dealt with, and a modest value of $c$ larger than one could be favored in the literature\cite{c>1}. In this letter, we consider the general case with $c$ as a free parameter. As for concrete dark energy models, the feature of the EOS crossing $-1$ cannot be realized by a single quintessence, phantom, or k-essence field\cite{-1}. The quintom, which combines a quintessence field $\phi_1$ and a phantom field $\phi_2$, is one of the simplest models whose EOS crosses $-1$.
The hessence is a simple kind of quintom\cite{hessence,zhao}, which has the lagrangian density \begin{equation}\label{8} \L_{he}=\frac{1}{2}(\partial_{\mu}\phi_1)^2-\frac{1}{2} (\partial_{\mu}\phi_2)^2-V(\phi_1^2-\phi_2^2)~, \end{equation} where the potential function $V(\phi_1^2-\phi_2^2)$ is free for the models. Different choices of $V$ lead to different evolutions of the universe. In Ref.\cite{brane}, the authors found that this kind of model may be a local effective approximation of the D3-brane universe. In Ref.\cite{zhao}, we have proved that the evolution of the potential function can be exactly determined by the EOS of the hessence $w_{he}(z)$ and its evolution $w_{he} '(z)$. If the holographic constraint in Eq.(\ref{de}) is imposed, the EOS of the hessence can be exactly determined for a fixed $c$, so the potential function of the holographic hessence only depends on the parameter $c$. In this letter, we first discuss the evolution of the EOS and potential of the holographic hessence models for different values of $c$. Then, considering the constraint on $c$ from the current observations, we reconstruct the potential function of the holographic hessence models. \section{Holographic hessence models} We consider the action \begin{equation}\label{s} S=\int d^4x\sqrt{-g}\left(-\frac{\cal R}{16\pi G}+\L_{he}+\L_m\right), \end{equation} where $g$ is the determinant of the metric $g_{\mu\nu}$, $\cal R$ is the Ricci scalar, and $\L_{he}$ and $\L_m$ are the lagrangian densities of the hessence dark energy and matter, respectively. The lagrangian density of the hessence is given in Eq.(\ref{8}). One can easily find that this lagrangian is invariant under the transformation \begin{equation} \phi_1\rightarrow\phi_1\cosh(i\alpha)-\phi_2\sinh(i\alpha)~, \end{equation} \begin{equation} \phi_2\rightarrow-\phi_1\sinh(i\alpha)+\phi_2\cosh(i\alpha)~, \end{equation} where $\alpha$ is a constant.
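The invariance is immediate to check: the hyperbolic rotation preserves the combination $\phi_1^2-\phi_2^2$, and hence the lagrangian density (\ref{8}). A minimal numerical sketch with a real rotation parameter (the sample field values are ours, for illustration only):

```python
import math

# Hyperbolic rotation with a real parameter alpha: it preserves
# phi1^2 - phi2^2, since cosh^2(alpha) - sinh^2(alpha) = 1.
phi1, phi2, alpha = 0.8, 0.3, 0.7
q1 = phi1 * math.cosh(alpha) - phi2 * math.sinh(alpha)
q2 = -phi1 * math.sinh(alpha) + phi2 * math.cosh(alpha)
print(q1 ** 2 - q2 ** 2, phi1 ** 2 - phi2 ** 2)  # equal up to rounding
```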
This property allows one to rewrite the lagrangian density (\ref{8}) in another form \begin{equation}\label{he} \L_{he}=\frac{1}{2} \left[(\partial_{\mu}\phi)^2-\phi^2(\partial_{\mu}\theta)^2\right]-V(\phi)~, \end{equation} where we have introduced two new variables $(\phi,~\theta)$, i.e. \begin{equation} \phi_1=\phi\cosh\theta~,~~~~~~\phi_2=\phi\sinh\theta~. \end{equation} Consider a spatially flat FRW (Friedmann-Robertson-Walker) universe with metric \begin{equation} ds^2=dt^2-a^2 (t) \gamma_{ij}dx^idx^j~, \end{equation} where $a(t)$ is the scale factor, and $\gamma_{ij}=\delta^i_j$ denotes the flat background space. Assuming $\phi$ and $\theta$ are homogeneous, from the action in (\ref{s}), we obtain the equations of motion for $\phi$ and $\theta$ \begin{equation}\label{1} \ddot{\phi}+3H\dot{\phi}+\phi\dot{\theta}^2+dV/d\phi=0~, \end{equation} \begin{equation}\label{2} \phi^2\ddot{\theta}+(2\phi\dot{\phi}+3H\phi^2)\dot{\theta}=0~, \end{equation} where $H\equiv\dot{a}/a$ is the Hubble parameter, and an overdot denotes the derivative with respect to cosmic time. Eq.(\ref{2}) implies \begin{equation} Q=a^3\phi^2\dot{\theta}={\rm const}~, \end{equation} which is associated with the total conserved charge within the physical volume due to the internal symmetry\cite{hessence}. This relation yields \begin{equation} \dot{\theta}=\frac{Q}{a^3\phi^2}~. \end{equation} Substituting this into Eq.(\ref{1}), we can rewrite the kinetic equation as \begin{equation}\label{kinetic} \ddot{\phi}+3H\dot{\phi}+\frac{Q^2}{a^6\phi^3}+\frac{dV}{d\phi}=0~, \end{equation} which is equivalent to the energy conservation equation of the hessence $\dot{\rho}_{he}+3H(\rho_{he}+p_{he})=0$.
The pressure, energy density and the EOS of the hessence are \begin{equation}\label{24} p_{he}=\frac{1}{2}\dot{\phi}^2-\frac{Q^2}{2a^6 \phi^2}-V(\phi)~,~~~~~~~ \rho_{he}=\frac{1}{2}\dot{\phi}^2-\frac{Q^2}{2a^6 \phi^2}+V(\phi)~, \end{equation} \begin{equation}\label{25} w_{he}=\left.\left[\frac{1}{2}\dot{\phi}^2-\frac{Q^2}{2a^6 \phi^2}-V(\phi)\right]\right/ \left[\frac{1}{2}\dot{\phi}^2-\frac{Q^2}{2a^6 \phi^2}+V(\phi)\right]~, \end{equation} respectively. It is easily seen that $w_{he}\geq-1$ when $\dot{\phi}^2\geq Q^2/(a^6\phi^2)$, while $w_{he}\leq-1$ when $\dot{\phi}^2\leq Q^2/(a^6\phi^2)$. The transition occurs when $\dot{\phi}^2=Q^2/(a^6\phi^2)$. In the case of $Q\equiv0$, the hessence reduces to the quintessence model. From the expression of the EOS of the hessence, we find that it is tied to the potential function $V(\phi)$: if $V(\phi)$ is determined, $w$ is also determined. Conversely, if $w(z)$ is fixed, the potential function $V(\phi)$ can also be solved for. Here we consider the holographic hessence models, which satisfy the holographic constraint in Eq.(\ref{de}). Consider now a spatially flat FRW universe with a matter component $\rho_{m}$ (including both baryon matter and cold dark matter) and a holographic hessence component $\rho_{he}$.
The Friedmann equation reads \begin{equation} 3H^2M_p^2=\rho_{m}+\rho_{he}~, \end{equation} or equivalently, \begin{equation} {H^2\over H_0^2}=\Omega_{m0}a^{-3}+\Omega_{he}{H^2\over H_0^2}~.\label{Feq} \end{equation} Combining the definition of the holographic dark energy (\ref{de}) and the definition of the future event horizon (\ref{eh}), we derive \begin{equation} \int_a^\infty{d\ln \tilde{a}\over H\tilde{a}}={c\over Ha\sqrt{\Omega_{he}}}~.\label{rh} \end{equation} We notice that the Friedmann equation (\ref{Feq}) implies \begin{equation} {1\over Ha}=\sqrt{a(1-\Omega_{he})}{1\over H_0\sqrt{\Omega_m^0}}~.\label{fri} \end{equation} Substituting (\ref{fri}) into (\ref{rh}), we easily obtain the dynamics satisfied by the dark energy, i.e. the differential equation for the fractional density of dark energy, \begin{equation} \Omega'_{he}=\Omega_{he}(1-\Omega_{he})\left(1+{2\over c}\sqrt{\Omega_{he}}\right),\label{deq} \end{equation} where the prime denotes the derivative with respect to $\ln a$. This equation describes the behavior of the holographic dark energy completely, and it can be solved exactly. It is easy to prove that this equation has only one stable attractor solution \begin{equation} \Omega_{he}=1. \end{equation} In this solution, the hessence is dominant in the universe, and the matter component is negligible. Important observables revealing the nature of dark energy are the EOS $w$ and its time derivative in units of Hubble time, $w'$. The SNAP mission is expected to observe about $2000$ SNIa each year, over a period of three years. Most of these SNIa are at redshift $z\in[0.2, ~1.2]$. The SNIa and weak lensing methods conjoined can determine the present equation of state ratio, $w_0$, to $5\%$, and its time variation, $w'$, to $0.11$ \cite{snap}. This gives a powerful ability to differentiate the various dark energy models.
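The attractor behavior of Eq.(\ref{deq}) is easy to verify numerically. A minimal sketch (the step size, initial fraction and value of $c$ are illustrative choices of ours, not fit values):

```python
import math

def evolve_omega(omega0, c, dlna=1e-3, steps=20_000):
    """Euler integration of d(Omega)/d(ln a) = Omega(1-Omega)(1 + 2 sqrt(Omega)/c)."""
    om = omega0
    for _ in range(steps):
        om += dlna * om * (1.0 - om) * (1.0 + 2.0 * math.sqrt(om) / c)
    return om

# Starting from a tiny dark-energy fraction, 20 e-folds of expansion drive
# Omega_he to the attractor Omega_he = 1.
print(evolve_omega(0.01, c=0.9))
```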
From the energy conservation equation of the holographic hessence, the EOS of the dark energy can be obtained \cite{Li:2004rb} \begin{equation}\label{ww} w_{he}=-1-{1\over 3}{d\ln\rho_{he}\over d\ln a}=-{1\over 3}\left(1+{2\over c}\sqrt{\Omega_{he}}\right)~, \end{equation} and its evolution is \begin{equation}\label{wwp} w_{he}'=-\frac{\sqrt{\Omega_{he}}}{3}(1-\Omega_{he})\left(1+\frac{2}{c}\sqrt{\Omega_{he}}\right). \end{equation} It can be seen clearly that the equation of state of the holographic dark energy evolves dynamically and satisfies $-(1+2/c)/3\leq w_{he}\leq -1/3$ due to $0\leq\Omega_{he}\leq 1$. The parameter $c$ plays a significant role in this model. If one takes $c=1$, the behavior of the holographic dark energy will be more and more like a cosmological constant with the expansion of the universe, such that ultimately the universe will enter the de Sitter phase in the far future. As is shown in \cite{Li:2004rb}, putting the parameter $\Omega_{he0}=0.73$ into (\ref{ww}) gives a definite prediction of this model, $w_{he0}=-0.903$. On the other hand, if $c<1$, the holographic dark energy will exhibit the appealing behavior that the equation of state crosses the ``cosmological-constant boundary'' (or ``phantom divide'') $w=-1$ during the evolution. This kind of dark energy is referred to as ``quintom'' \cite{quintom}, which is slightly favored by current observations \cite{obs1}. If $c>1$, the equation of state of dark energy will always be larger than $-1$, such that the universe avoids entering the de Sitter phase and the Big Rip phase. Hence, we see explicitly that the value of $c$ is very important for the holographic dark energy model: it determines the feature of the holographic hessence as well as the ultimate fate of the universe. Now, we discuss the dark energy models in the $w-w'$ plane, which clearly shows the evolution character of the dark energy.
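The numbers quoted above follow immediately from Eq.(\ref{ww}); a one-line check (the values $c=1$, $\Omega_{he0}=0.73$ are those quoted in the text):

```python
import math

def w_he(omega_he, c):
    """Holographic EOS: w = -(1 + 2*sqrt(Omega_he)/c) / 3."""
    return -(1.0 + 2.0 * math.sqrt(omega_he) / c) / 3.0

print(round(w_he(0.73, 1.0), 3))        # the quoted prediction w_he0 = -0.903
print(w_he(0.0, 1.0), w_he(1.0, 1.0))   # the bounds -1/3 and -(1 + 2/c)/3
```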
The simplest model, the cosmological constant, has the effective state of $w=-1$ and $w'=0$, which corresponds to a fixed point in the $w-w'$ plane. Generally, a dynamical dark energy model traces a curve in this plane, which describes the evolution of its EOS\cite{wwp}. The simple quintessence has the state of $w\geq-1$ and only occupies the region of $w'>-3(1+w)(1-w)$. The phantom field ($w\leq-1$) occupies the region of $w'<-3(1+w)(1-w)$. The evolution of the hessence in the $w-w'$ plane was discussed in Ref.\cite{zhao}; here we briefly review it. From the kinetic equation (\ref{kinetic}), one can get \begin{equation}\label{32} 1+\frac{1}{6}\frac{d\ln x}{d\ln a}=-\frac{1}{3HV}\frac{\dot{V}}{1+w_{he}}~, \end{equation} where $a$ is the scale factor, and we have set the present scale factor $a_0=1$. The function $x$ is defined by \begin{equation} x\equiv\left|\frac{1+w_{he}}{1-w_{he}}\right|=\left|\frac{\frac{1}{2} \dot{\phi}^2-\frac{Q^2}{2a^6\phi^2}}{V}\right|~, \end{equation} and \begin{equation} \frac{d\ln x}{d\ln a}=\frac{2w_{he}'}{(1+w_{he})(1-w_{he})}~. \end{equation} This equation can be rewritten as \begin{equation}\label{37}\left. \left(1+\frac{w_{he}'}{3(1+w_{he})(1-w_{he})}\right)\right/\left(\frac{2\dot{V}}{3H(1+w_{he})\rho_{he}}\right)=-\frac{\rho_{he}}{2V}<0~, \end{equation} from which it follows that \begin{equation}\label{FV} F\dot{V}<0, \end{equation} where we have defined $F\equiv w_{he}'+3(1+w_{he})(1-w_{he})$. So the $w-w'$ plane is divided into four parts ~~~~~~~~~~~~~~{\bf I}:~~~~~~ $F>0$ ~~\&~~ $w>-1$;~~~~~~~{\bf II}:~~~~~~~$F>0$ ~~\&~~ $w<-1$; ~~~~~~~~~~~{\bf III}:~~~~~~ $F<0$ ~~\&~~ $w<-1$;~~~~~~{\bf IV}:~~~~~~ $F<0$ ~~\&~~ $w>-1$.\\ This can be seen clearly in Figure 1.
From Eq.~(\ref{FV}), one can easily find that $\dot{V}<0$ is satisfied in Regions \emph{I} and \emph{II} (the rolling-down region), where the field rolls down the potential, and $\dot{V}>0$ is satisfied in Regions \emph{III} and \emph{IV} (the climbing-up region), where the field climbs up the potential. So from the sign of the function $w'+3(1-w)(1+w)$, one can immediately judge how the field evolves at that time. Turning to the holographic hessence models, from equations (\ref{ww}) and (\ref{wwp}), we have \begin{equation} F=\frac{1}{3}\left(\frac{2}{c}\Omega_{he}^2+\Omega_{he}^{\frac{3}{2}} -\frac{4+2c}{c^2}\Omega_{he}-\frac{c+4}{c}\sqrt{\Omega_{he}}+8\right). \end{equation} For a fixed $c$, $F$ only depends on the value of $\Omega_{he}$, so the evolution of the potential function of the hessence is exactly determined by the evolution of $\Omega_{he}$. In order to determine this evolution exactly, one must numerically solve Eq.(\ref{deq}) and obtain the function $\Omega_{he}(z)$. Here we are only concerned with whether the potential function of the holographic hessence is monotonic. If $F<0$ holds for all time, the potential is a monotonically increasing function; but this is not the same as the phantom models, since the cosmological constant boundary can be crossed by the hessence. If $F>0$ holds for all time, the potential is a monotonically decreasing function; since $w=-1$ can also be crossed in this case, the models differ from the simple quintessence. If the state of $F=0$ is crossed during the evolution of the hessence, the potential function of the hessence cannot be monotonic. Here we focus on the initial and final states of the holographic hessence, and investigate them in the $w-w'$ plane.
In the initial stage with $\Omega_{he}\rightarrow0$, one has \begin{equation} (w_{he},w_{he}')\rightarrow\left(-\frac{1}{3},~0\right), \end{equation} \begin{equation} F=w_{he}'+3(1+w_{he})(1-w_{he})\rightarrow\frac{8}{3}>0, \end{equation} which lies in the rolling-down region (Region \emph{I}), independent of the value of $c$. So in all cases the models evolve from Region \emph{I}, where they are quintessence-like. However, in the final stage with $\Omega_{he}\rightarrow1$, one has \begin{equation} w_{he}\rightarrow-{1\over 3}\left(1+{2\over c}\right),~~~~w_{he}'\rightarrow0, \end{equation} and \begin{equation} F=\frac{4}{3}\left(2-\frac{1}{c}-\frac{1}{c^2}\right). \end{equation} For different choices of $c$, the final value of $F$ is different. \begin{eqnarray} c&>&1,~~F>0~~~(~\rm in~~the~~rolling-down~~region~),\\ c&=&1,~~F=0~~~(~\rm at~~the~~critical~~point~),\\ c&<&1,~~F<0~~~(~\rm in~~the~~climbing-up~~region~). \end{eqnarray} In Figure 1, we have plotted the evolution of three different holographic hessence models with $c=1.5,1.0,0.5$, respectively, where the arrows indicate the evolution direction of the hessence with the increasing value of $\Omega_{he}$ from $\Omega_{he}=0$ to $\Omega_{he}=1$. From this figure, we can find that, if $c>1$ is chosen, which violates the holographic constraint, the hessence is in Region \emph{I} (the rolling-down region) for all time, so the potential of the hessence is a monotonically damping function, similar to the quintessence models. This is consistent with the previous conclusion that holographic dark energy with $c>1$ can be described by quintessence fields. However, if the fixed $c$ is smaller than $1$, which satisfies the holographic constraint, the hessence must evolve from Region \emph{I} (the rolling-down region) to Region \emph{IV} (the climbing-up region), and finally to Region \emph{III} (the climbing-up region). So the potential of the hessence is not a monotonic function.
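The limiting values of $F$ derived above can be verified directly from its expression; a minimal numerical sketch (the three values of $c$ follow the text):

```python
import math

def F(omega, c):
    """F = w' + 3(1+w)(1-w) for the holographic hessence, as a function of Omega_he."""
    s = math.sqrt(omega)
    return (2.0 / c * omega ** 2 + omega ** 1.5
            - (4.0 + 2.0 * c) / c ** 2 * omega
            - (c + 4.0) / c * s + 8.0) / 3.0

print(F(0.0, 0.5))              # initial stage: F -> 8/3, for every c
for c in (1.5, 1.0, 0.5):
    # final stage: F(1) = (4/3)(2 - 1/c - 1/c^2) is positive/zero/negative
    print(c, F(1.0, c))
```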
The field $\phi$ rolls down the potential at the earlier stage, and later turns to climb up. With the expansion of the universe, the state of $F=0$ must be crossed. The EOS of the hessence also turns from the region of $w>-1$ to that of $w<-1$, and the state of $w=-1$ must be crossed, so the dark energy is quintom-like. We note that the time of $F=0$ is a little earlier than that of $w=-1$. If $c=1$ is chosen, the holographic hessence is also in Region \emph{I} for all time, and it finally approaches the critical state of $(w,w')=(-1,0)$ with the expansion of the universe, the universe approaching an exact de Sitter expansion. \begin{figure} \centerline{\includegraphics[width=10cm]{1.EPS}} \caption{\small The holographic hessence models evolve in the $w-w'$ plane, where we have considered three models with $c=0.5,1.0,1.5$, respectively. The arrows denote the evolution direction of the models with the expansion of the universe. } \end{figure} \section{Reconstruct the holographic hessence models} From the previous discussion, we find that the value of the parameter $c$ should be fixed by the cosmological observations. This has been discussed by a number of authors\cite{obs1}. In the recent work\cite{zx}, the authors have constrained the holographic dark energy by the current observations of SNIa (Type Ia Supernovae), CMB (cosmic microwave background radiation), and BAO (baryon acoustic oscillation). Setting $c$, $\Omega_{m0}$ and $H_0$ as free parameters, and using only the up-to-date gold sample of SNIa consisting of $182$ data points\cite{snia}, the authors found that the best fit for the analysis of the gold sample of SNIa happens at \begin{equation}\label{fit1} c=0.37,~~\Omega_{m0}=0.43,~~h=0.64. \end{equation} By choosing $h=0.64$, the $1\sigma$ fit values for the parameters are: \begin{equation} c=0.37^{+0.56}_{-0.21},~~ \Omega_{m0}=0.43^{+0.08}_{-0.14}. \end{equation} It is obvious that the SNIa data alone seem insufficient to strictly constrain the holographic dark energy models.
The confidence region of $c$ is very large, and the best fit of $\Omega_{m0}$ is evidently different from other constraints\cite{seljak}. In the previous work\cite{0506310}, the authors found that the holographic dark energy model is very sensitive to the value of the present Hubble parameter $h$. So it is very important to use the results of CMB and LSS (large-scale structure), which are observational quantities independent of $h$, as a complement to the SNIa data. The authors considered the recent observations on the CMB shift parameter $R=1.70\pm0.03$\cite{cmb} and the measurement of the BAO peak in the distribution of SDSS luminous red galaxies\cite{bao}. From the constraints of the combination of SNIa, CMB and BAO, with the prior $h=0.72\pm0.08$ obtained from the \emph{Hubble Space Telescope Key Project} (\emph{HST})\cite{h}, the fit values for the model parameters with $1\sigma$ errors are \begin{equation}\label{fits} c=0.91^{+0.23}_{-0.19},~~\Omega_{m0}=0.29\pm0.03. \end{equation} It is clear that in the joint analysis the derived value for the matter density $\Omega_{m0}$ is very reasonable; what is important for this model is the determination of the value of $c$. As shown in \cite{0506310} and the discussion above, the prior on $h$ can evidently change the constraint results. In order to show how strongly biased constraints can be derived from a factitious prior on $h$, the authors also considered a strong $HST$ prior, fixing $h=0.72$. The constraint in equation (\ref{fits}) then becomes \begin{equation}\label{fit3} c=0.42\pm0.05,~~\Omega_{m0}=0.24^{+0.02}_{-0.03}. \end{equation} We find that the confidence level contours shrink evidently and shift leftward in the $c-\Omega_{m0}$ parameter plane, which also changes the evolution of the EOS parameter of the dark energy and the deceleration parameter of the universe\cite{zx}. These results are all exactly consistent with the previous ones\cite{0506310}.
We also find that the constraints on $c$ in (\ref{fits}) and (\ref{fit3}) do not overlap, because the confidence levels in these results are too low. If the fit values for the model parameters with $3\sigma$ errors are considered, the situation is much improved\cite{zx,0506310}. However, if the Hubble constant is set as a free parameter in the range of $(0.64,0.80)$, the constraint becomes \begin{equation}\label{fit4} c=0.82^{+0.11}_{-0.13},~~\Omega_{m0}=0.28^{+0.03}_{-0.02}, \end{equation} which also shows shrinkage and a left shift in the $c-\Omega_{m0}$ plane, compared with the results in (\ref{fits}). From these joint analyses, we can find that, though the possibility of $c>1$ cannot be excluded in the $1\sigma$ error range, the possibility of $c<1$ is much more favored, which implies that the dark energy is quintom-like, and the EOS crosses $-1$ at some time. \begin{figure} \centerline{\includegraphics[width=10cm]{2.EPS}} \caption{\small From the observations, we solve the EOSs of the holographic hessence models.} \end{figure} \begin{figure} \centerline{\includegraphics[width=10cm]{3.EPS}} \caption{\small The reconstructed holographic hessence models evolve in the $w-w'$ plane, where the arrows denote the evolution direction of the models with the expansion of the universe. } \end{figure} From the differential equation (\ref{deq}), we can get the evolution equation of $\Omega_{he}$ with the redshift $z$ \begin{equation} \frac{d\Omega_{he}}{dz}=-(1+z)^{-1}\Omega_{he}(1-\Omega_{he})\left(1+{2\over c}\sqrt{\Omega_{he}}\right). \end{equation} For the determined parameters $c$ and $\Omega_{he0}=1-\Omega_{m0}$, one can numerically solve this equation and get $\Omega_{he}=\Omega_{he}(z)$. Inserting this into equations (\ref{ww}) and (\ref{wwp}), we can get the EOS of the holographic hessence $w=w(z)$ and its evolution $w'=w'(z)$.
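This procedure is straightforward to carry out numerically. A minimal sketch with simple Euler integration (the parameter values $c=0.91$, $\Omega_{he0}=1-0.29$ are taken from the best fit quoted in the text; the step size is our choice):

```python
import math

def omega_of_z(z_max, c, omega0, dz=1e-3):
    """Euler integration of d(Omega)/dz = -(1+z)^{-1} Omega(1-Omega)(1 + 2 sqrt(Omega)/c)
    from z = 0 (where Omega = omega0) up to z = z_max."""
    om, z = omega0, 0.0
    while z < z_max:
        om -= dz / (1.0 + z) * om * (1.0 - om) * (1.0 + 2.0 * math.sqrt(om) / c)
        z += dz
    return om

def w_of_z(z, c, omega0):
    """EOS w(z) evaluated on the solved Omega_he(z)."""
    return -(1.0 + 2.0 * math.sqrt(omega_of_z(z, c, omega0)) / c) / 3.0

# Best-fit case c = 0.91, Omega_he0 = 1 - 0.29: w rises towards -1/3 in the
# past and approaches the phantom divide w = -1 near the present epoch.
for z in (0.0, 1.0, 2.0):
    print(z, w_of_z(z, c=0.91, omega0=0.71))
```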
In Figures 2 and 3, we have plotted the EOS and its evolution for the holographic hessence with the best-fit parameters in the $w-z$ and $w-w'$ planes. From Figure 2, we find that, in the earlier stage of the universe, $w>-1$ holds in all cases, and the hessence is quintessence-like. But the value of the EOS decreases with time, and it becomes phantom-like at the present time in the cases of $c=0.37$ and $c=0.42$, so the cosmological constant boundary has been crossed. In the cases of $c=0.91$ and $0.82$, though $w>-1$ holds until now, $w<-1$ will occur in the near future, and crossing the cosmological constant boundary is unavoidable. This feature determines that this holographic dark energy cannot be described by the quintessence, phantom, k-essence, or Yang-Mills field models\cite{-1,vector}; but in the hessence models, it can be naturally and simply realized. From Figure 3, we find that, in the earlier stage, the hessence models are all in Region \emph{I} (the rolling-down region). With the expansion of the universe, all these models will cross the line $F=0$ and enter Region \emph{IV}, where, although $w>-1$ is kept, the hessence fields begin to climb up the potentials. At last, the hessence models all cross the cosmological constant bound and stay in Region \emph{III}, where the hessence is phantom-like and the fields climb up the potentials. So the potentials of these holographic hessence models are not monotonic functions, which is consistent with the previous discussion. With these solved EOSs, we can reconstruct the potentials of the holographic hessence models. Consider the FRW universe, which is dominated by the non-relativistic matter and a spatially homogeneous hessence field $\phi$.
The energy conservation equation of the hessence field is \begin{equation} \dot{\rho}_{he}+3H(\rho_{he}+p_{he})=0~, \end{equation} which yields \begin{equation}\label{58} \rho_{he}(z)=\rho_{he0} \exp\left[3\int_0^z(1+w_{he})d\ln(1+\tilde{z})\right]\equiv \rho_{he0}E(z)~, \end{equation} where the subscript $0$ denotes the value of a quantity at the redshift $z=0$ (present). From the expressions for the pressure and energy density of the hessence, we get \begin{equation}\label{55} V(\phi)=\frac{1}{2}\left(1-w_{he}\right)\rho_{he}~, \end{equation} \begin{equation}\label{56} \dot{\phi}^2=\frac{Q^2}{a^6\phi^2}+(1+w_{he})\rho_{he}~. \end{equation} Inserting (\ref{58}) into these two equations, after some straightforward calculation we get \cite{zhao} \begin{equation}\label{63} \frac{d\tilde{\phi}}{dz}=\frac{\sqrt{3}}{(1+z)} \left[\frac{C(1+z)^6\tilde{\phi}^{-2}+(1+w_{he})E(z)}{r_0(1+z)^3+E(z)}\right]^{1/2}~, \end{equation} \begin{equation}\label{64} \tilde{V}[\phi]=\frac{1}{2}(1-w_{he})E(z)~, \end{equation} where $r_0\equiv\Omega_{m0}/\Omega_{he0}$ is the present energy density ratio of matter to hessence, and the dimensionless quantities are defined by \begin{equation}\label{62} \tilde{\phi}\equiv\frac{\phi}{M_p}~,~~~~~\tilde{V}\equiv \frac{V}{\rho_{he0}}~, ~~~~~C\equiv\frac{Q^2}{\rho_{he0}M_p^2}~. \end{equation} These two equations relate the hessence potential $V(\phi)$ to the EOS of the hessence $w_{he}(z)$. Given an effective $w_{he}(z)$, the reconstruction equations (\ref{63}) and (\ref{64}) allow us to construct the hessence potential $V(\phi)$. \begin{figure} \centerline{\includegraphics[width=10cm]{4.EPS}} \caption{\small Evolution of potentials of holographic hessence models. } \end{figure} \begin{figure} \centerline{\includegraphics[width=10cm]{5.EPS}} \caption{\small Constructed potentials of holographic hessence models.
} \end{figure} Using the solved EOSs of the holographic hessence models in Figure 2, we numerically solve equations (\ref{63}) and (\ref{64}); the results are shown in Figures 4 and 5, where we have chosen the initial conditions $C=10.0$ and $\phi_0=0.1$. From Figure 4 we find that, for the cases $c=0.37$ and $c=0.42$, the potentials decrease with the expansion of the universe in the earlier stage, the same as in the quintessence models \cite{guo}. But after $z\sim1$ the potentials begin to increase, and at the present time the potential functions are increasing, similar to a phantom model. For the cases $c=0.91$ and $c=0.82$, although the potential functions decrease monotonically up to the present time, they will all begin to increase in the near future. Once again, from Figure 5 we find that none of these four potentials is a monotonic function, since we have used $c<1$, which is consistent with the previous analysis. The only difference is the position of the minimum of each potential, which is determined by the initial conditions and the parameter $c$. This behavior differs from the simple quintessence, k-essence and tachyon models \cite{guo}. \section{Summary} In this letter, we have investigated hessence models that satisfy the holographic principle, so that the potential of the hessence is determined by this principle. We have discussed the evolution of the holographic hessence in the $w-w'$ plane and found that it depends only on the parameter $c$. If $c\geq1$ is chosen, $w_{he}\geq-1$ holds at all times and the hessence field $\phi$ rolls down its potential, similar to the quintessence models. However, if $c<1$ is chosen, which the observations mildly favor, the EOS of the models evolves from the region $w_{he}>-1$ to that of $w_{he}<-1$, and the state $w_{he}=-1$ must be crossed. The potential of the model is then not a monotonic function.
In the early time, the hessence is quintessence-like, with EOS $w_{he}>-1$ and $\dot{V}<0$. It then enters the region with $w_{he}>-1$ and $\dot{V}>0$, where the field begins to climb up the potential. At last, the model must enter and stay in the phantom-like region with $w_{he}<-1$ and $\dot{V}>0$. Considering the current constraint on the parameter $c$, we have reconstructed the potentials of the holographic hessence models, all of which are non-monotonic functions. ~ \textbf{ACKNOWLEDGMENT}: The author thanks X.~Zhang for helpful discussions. \baselineskip=12truept
\section{Introduction} We assume that the reader is familiar with the elementary Nevanlinna theory; for detailed information, we refer the reader to \cite{Goldberg,Hay-1964,Laine-1993}. Meromorphic functions considered in this paper are always non-constant, unless otherwise specified. For such a function $f$ and $a\in\ol{\mathbb{C}}:=\mathbb{C}\cup\{\infty\}$, each $z$ with $f(z)=a$ will be called an $a$-point of $f$. We will use some standard definitions and basic notations from this theory. In particular, by $N(r,a;f)$ ($\ol N(r,a;f)$) we denote the counting function (reduced counting function) of $a$-points of a meromorphic function $f$, $T(r,f)$ is the Nevanlinna characteristic function of $f$, and $S(r,f)$ denotes any quantity of smaller order than $T(r,f)$ as $r\rightarrow \infty$. We also write $\mathbb{C^{*}}:=\mathbb{C}\setminus\{0\}$. \vspace{1mm} \par For a meromorphic function $ f $, the order $ \rho(f) $ and the hyper-order $ \rho_2(f) $ of $ f $ are defined respectively by \beas \rho(f)=\limsup_{r\rightarrow\infty}\frac{\log^{+}T(r,f)}{\log r}\;\;\mbox{and}\;\;\rho_2(f)=\limsup_{r\rightarrow\infty}\frac{\log^{+}\log^{+}T(r,f)}{\log r}. \eeas \par For $ a\in\mathbb{C}\cup\{\infty\} $, we also define \beas \Theta(a;f)=1-\limsup_{r\rightarrow +\infty}\frac{\ol N\left(r,{1}/{(f-a)}\right)}{T(r,f)}.\eeas \par We denote by $ \mathcal{S}(f) $ the family of all meromorphic functions $ s $ for which $ T(r,s)=o(T(r,f)) $, where $ r\rightarrow\infty $ outside of a possible exceptional set of finite logarithmic measure. Moreover, we also include all constant functions in $ \mathcal{S}(f) $, and let $\hat{\mathcal{S}}(f)=\mathcal{S}(f)\cup\{\infty\} $. For $ s\in\hat{\mathcal{S}}(f) $, we say that two meromorphic functions $ f $ and $ g$ share $ s $ $ CM $ when $ f(z)-s $ and $ g(z)-s $ have the same zeros with the same multiplicities.
If multiplicities are not taken into account, then we say that $ f $ and $ g $ share $ s $ $ IM $.\vspace{1mm} \par In addition, we denote by $ \ol E(s,f) $ the set of zeros of $ f-s $, where a zero is counted only once in the set, and by $\ol E_{k)}(s,f) $ we understand the set of zeros of $ f-s $ with multiplicity $ p \leq k $, where a zero with multiplicity $ p $ is counted only once in the set. Similarly, we denote the reduced counting function corresponding to $ \ol E_{k)}(s,f) $ by $ \ol N_{k)}\left(r,1/(f-s)\right) $.\vspace{1mm} \par In the uniqueness theory of meromorphic functions, the famous classical results are the five-point and four-point uniqueness theorems due to Nevanlinna \cite{Nevanlinna-1929}. The five-point theorem states that if two meromorphic functions $f$, $g$ share five distinct values in the extended complex plane $IM$, then $f\equiv g$. The beauty of this result lies in the fact that it has no counterpart for real-valued functions. On the other hand, the four-point theorem states that if two meromorphic functions $f$, $g$ share four distinct values in the extended complex plane $CM$, then $f\equiv T\circ g$, where $T$ is a M\"{o}bius transformation.\vspace{1mm} \par Clearly, these results initiated the study of the uniqueness of two meromorphic functions $f$ and $g$. The study of such uniqueness theory becomes more interesting if the function $g$ has some expression in terms of $f$.\vspace{1mm} \par Next we explain the following definition, which will be required in the sequel. \begin{defi} Let $f$ and $g$ be two meromorphic functions such that $f$ and $g$ share the value $a$ with weight $k$, where $a\in\mathbb{C}\cup\{\infty\}$. We denote by $\ol N_E^{(k+1}\left(r,1/(f-a)\right)$ the counting function of those $a$-points of $f$ and $g$ where $p=q\geq k+1$, each point in this counting function counted only once. \end{defi}\par In what follows, let $c$ be a non-zero constant.
For a meromorphic function $f$, let us denote its shift $I_{c}f$ and difference operator $\Delta_{c}f$, respectively, by $I_{c}f(z)=f(z+c)$ and $\Delta_{c}f(z)=(I_{c}-1)f(z)=f(z+c)-f(z).$\vspace{1mm} \par Recently, an increasing amount of interest has arisen among researchers in difference analogues of Nevanlinna theory. For meromorphic functions of finite order, {Halburd and Korhonen} \cite{Hal & Kor-JMMA-2006} and {Chiang and Feng} \cite{Chi & Fen-2008} independently developed parallel difference versions of the famous Nevanlinna theory. As applications of this theory, we refer the reader to the articles on set sharing problems (see, for example, \cite{Ahamed-SUBB-2019,Aha-TJA-2021,Ban-Aha-Filomat-2019,Ban-Aha-MS-2020,Che-Che-BMMSS-2012,Zha-JMMA-2010}), on finding solutions to Fermat-type difference equations (see e.g. \cite{Aha-JCMA-2021,Liu-AM-2012,Cao-MJM-2018}), on Nevanlinna theory of the Askey--Wilson divided difference operators (see e.g. \cite{Chiang-Feng-AM-2010}), on meromorphic solutions to difference equations of Malmquist type (see e.g. \cite{Lu-BAMS-2016}), and the references therein.\vspace{1mm} \par Regarding the periodicity of meromorphic functions, {Heittokangas} et al. \cite{Hei & Kor & Lai & Rie-CVTA-2001,Hei & Kor & Lai & Rie-JMMA-2009} considered the problem of value sharing for shifts of meromorphic functions and obtained the following result. \begin{theoA}\cite{Hei & Kor & Lai & Rie-CVTA-2001} Let $ f $ be a meromorphic function of finite order, and let $ c\in\mathbb{C^{*}} $. If $ f(z) $ and $ f(z+c) $ share three distinct periodic functions $ s_1, s_2, s_3\in\hat{\mathcal{S}}(f) $ with period $ c $ $ CM $, then $ f(z)\equiv f(z+c) $ for all $ z\in\mathbb{C} $. \end{theoA} \par In $ 2009 $, Heittokangas \emph{et al.} \cite{Hei & Kor & Lai & Rie-JMMA-2009} improved {Theorem A} by replacing ``sharing three small functions $ CM $" with ``$ 2\; CM + 1\; IM $" and obtained the following result.
\begin{theoB}\cite{Hei & Kor & Lai & Rie-JMMA-2009} Let $ f $ be a meromorphic function of finite order, and let $ c\in\mathbb{C^{*}} $. Let $ s_1, s_2, s_3\in\hat{\mathcal{S}}(f) $ be three distinct periodic functions with period $ c $. If $ f(z) $ and $ f(z+c) $ share $ s_1, s_2\in\hat{\mathcal{S}}(f) $ $ CM $ and $ s_3 $ $ IM $, then $ f(z)\equiv f(z+c) $ for all $ z\in\mathbb{C} $. \end{theoB}\par In $ 2014 $, {Halburd} \emph{et al.} \cite{Hal & Kor & Toh-TAMS-2014} extended some results in this direction to meromorphic functions $ f $ whose hyper-order $ \rho_2(f)$ is less than one. One may obtain much more information from \cite{Aha-JCMA-2021,Aha-TJA-2021,Che & Lin-2016,Hei & Kor & Lai & Rie-CVTA-2001,Hei & Kor & Lai & Rie-JMMA-2009,Liu-JMMA-2009,Liu & Yan-AM-2009} and the references therein about the relationship between a meromorphic function $ f(z) $ and its shift $ f(z+c) $.\vspace{1mm} \par In $ 2016 $, {Li and Yi} \cite{Li & Yi-BKMS-2016} obtained a uniqueness result for meromorphic functions $ f $ sharing four values with their shifts $ f(z+c) $. \begin{theoC}\cite{Li & Yi-BKMS-2016} Let $ f $ be a non-constant meromorphic function of hyper-order $ \rho_2(f)<1 $ and $ c\in\mathbb{C^{*}} $. Suppose that $ f $ and $ f(z+c) $ share $ 0$, $1$, $\eta $ $ IM $, and share $ \infty $ $ CM $, where $ \eta $ is a finite value such that $ \eta\neq 0, 1 $. Then $ f(z)\equiv f(z+c) $ for all $ z\in\mathbb{C} $. \end{theoC} We now recall the definition of partially shared values for two meromorphic functions $ f $ and $ g $. \begin{defi}\cite{Chen-CMFT-2018} Let $ f $ and $ g $ be non-constant meromorphic functions and $ s\in\mathbb{C}\cup\{\infty\} $. Denote the set of all zeros of $ f-s $ by $ E(s,f) $, where a zero of multiplicity $ m $ is counted $ m $ times. If $ E(s,f)\subset E(s,g) $, then we say that $ f $ and $ g $ partially share the value $ s $ $ CM $. Note that $ E(s,f)=E(s,g) $ is equivalent to $ f $ and $ g $ sharing the value $ s $ $ CM $.
Therefore, it is easy to see that the condition ``partially shared values $ CM $" is more general than the condition ``shared values $ CM $". \end{defi} \par In addition, let $ \ol E(s,f) $ denote the set of zeros of $ f-s $, where a zero is counted only once in the set, and $ \ol E_{k)}(s,f) $ denote the set of zeros of $ f-s $ with multiplicity $ l\leq k $, where a zero with multiplicity $ l $ is counted only once in the set. The reduced counting function corresponding to $ \ol E_{k)}(s,f) $ is denoted by $ \ol N_{k)}(r,1/(f-s)) $.\vspace{1mm} \par Charak \emph{et al.} \cite{Cha & Kor & Kum-2016} gave the following definition of partial sharing. \begin{defi}\cite{Cha & Kor & Kum-2016} We say that a meromorphic function $ f $ shares $ s\in\hat{\mathcal{S}} $ partially with a meromorphic function $ g $ if $ \ol E(s,f)\subseteq \ol E(s,g) $, where $ \ol E(s,f) $ is the set of zeros of $ f(z)-s(z) $, each zero counted only once. \end{defi} \par Let $ f $ and $ g $ be two non-constant meromorphic functions and $ s(z)\in\hat{\mathcal{S}}(f)\cap\hat{\mathcal{S}}(g) $. We denote by $ \ol N_0(r,s;f,g ) $ the counting function of common solutions of $ f(z)-s(z)=0 $ and $ g(z)-s(z)=0 $, each counted only once. Put \beas \ol N_{12}(r,s;f,g)=\ol N\left(r,\frac{1}{f-s}\right)+\ol N\left(r,\frac{1}{g-s}\right)- 2\ol N_{0}(r,s;f,g). \eeas It is easy to see that $ \ol N_{12}(r,s;f,g) $ denotes the counting function of distinct solutions of the simultaneous equations $ f(z)-s(z)=0 $ and $ g(z)-s(z)=0 $.\vspace{1mm} \par In $ 2016 $, \textit{Charak} \emph{et al.} \cite{Cha & Kor & Kum-2016} introduced the above notion of partial sharing of values and, applying this notion of sharing, obtained the following interesting result. \begin{theoD}\cite{Cha & Kor & Kum-2016} Let $ f $ be a non-constant meromorphic function of hyper-order $ \rho_2(f)<1 $, and $ c\in\mathbb{C^{*}} $.
Let $ s_1, s_2, s_3, s_4\in\hat{\mathcal{S}}(f) $ be four distinct periodic functions with period $ c $. If $ \delta(s,f)>0 $ for some $ s\in\hat{\mathcal{S}}(f) $ and \beas \ol E(s_{j},f)\subseteq\ol E(s_{j}, f(z+c)), \;\;\; j=1, 2, 3, 4, \eeas then $ f(z)=f(z+c) $ for all $ z\in\mathbb{C} $. \end{theoD} \par In $ 2018 $, \textit{Lin} \emph{et al.} \cite{Lin & Lin & Wu} investigated the result of \textit{Charak} \emph{et al.} \cite{Cha & Kor & Kum-2016} further, replacing the condition ``partially shared value $ \ol E(s,f)\subseteq\ol E(s,f(z+c)) $" by the condition ``truncated partially shared value $ \ol E_{k)}(s,f)\subseteq\ol E_{k)}(s,f(z+c)) $", where $ k $ is a positive integer. By the following example, \textit{Lin} et al. \cite{Lin & Lin & Wu} have shown that the result of \emph{Charak} et al. \cite{Cha & Kor & Kum-2016} is not true for $ k=1 $ if truncated partially shared values are considered. \begin{exm}\cite{Lin & Lin & Wu} Let $ f(z)={2e^z}/{(e^{2z}+1)} $ and $ c=\pi i $, $ s_1=1 $, $ s_2=-1 $, $ s_3=0 $, $ s_4=\infty $ and $ k=1 $. It is easy to see that $ f(z+\pi i)=-{2e^z}/{(e^{2z}+1)}$ and $ f(z) $ satisfies all the other conditions of {Theorem D}, but $ f(z)\not\equiv f(z+c) $. \end{exm}\par However, after a careful investigation, we find that {Theorem D} is in fact not valid for any positive integer $k $, although $ f(z) $ and $ f(z+c) $ share the value $ s\in\{s_1, s_2, s_3, s_4\} $ $ CM $. We give here only two examples, for $ k=2 $ and $ k=3 $. \begin{exm} Let $ f(z)={\left(ae^z(e^{2z}+3)\right)}/{\left(3e^{2z}+1\right)} $, $ c=\pi i $ and $ s_1=a $, $ s_2=-a $, where $ a\in\mathbb{C^{*}} $, $ s_3=0 $, $ s_4=\infty $ and $ k_1=2=k_2 $. It is easy to see that $ f(z+\pi i)=-{\left(ae^z(e^{2z}+3)\right)}/{\left(3e^{2z}+1\right)} $ and $ f(z) $ satisfies all the conditions of {Theorem D}, but $ f(z)\not\equiv f(z+c) $.
\end{exm} \begin{exm} Let $ f(z)={\left(4ae^z(e^{2z}+1)\right)}/{\left(e^{4z}+6e^{2z}+1\right)} $ and $ c=\pi i $, $ s_1=a $, $ s_2=-a $, where $ a\in\mathbb{C^{*}} $, $ s_3=0 $, $ s_4=\infty $ and $ k_1=3=k_2 $. Then clearly $ f(z+\pi i)=-{\left(4ae^z(e^{2z}+1)\right)}/{\left(e^{4z}+6e^{2z}+1\right)} $ and $ f(z) $ satisfies all the conditions of {Theorem D}, but $ f(z)\not\equiv f(z+c) $. \end{exm} \par In $ 2018 $, \textit{Lin} \emph{et al.} \cite{Lin & Lin & Wu} established the following result considering partially shared values. \begin{theoE}\cite{Lin & Lin & Wu} Let $ f $ be a non-constant meromorphic function of hyper-order $ \rho_2(f)<1 $ and $ c\in\mathbb{C^{*}} $. Let $ k_1, k_2 $ be two positive integers, and let $ s_1, s_2\in\mathcal{S}(f)\cup\{0\} $, and $ s_3, s_4\in\hat{\mathcal{S}}(f) $ be four distinct periodic functions with period $ c $ such that $ f $ and $ f(z+c) $ share $ s_3, s_4 $ $ CM $ and \beas \ol E_{k_{j})}(s_{j},f)\subseteq \ol E_{k_{j})}(s_{j},f(z+c)),\;\; j=1, 2. \eeas If $ \Theta(0,f)+\Theta(\infty;f)>{2}/{(k+1)} $, where $ k=\min\{k_1, k_2\} $, then $ f(z)\equiv f(z+c) $ for all $ z\in\mathbb{C} $. \end{theoE} \par As a consequence of {Theorem E}, \textit{Lin} \emph{et al.} \cite{Lin & Lin & Wu} obtained the following result. \begin{theoF} \cite{Lin & Lin & Wu} Let $ f $ be a non-constant meromorphic function of hyper-order $ \rho_2(f)<1 $, $ \Theta(\infty,f)=1 $ and $ c\in\mathbb{C^{*}} $. Let $ s_1, s_2, s_3\in\mathcal{S}(f) $ be three distinct periodic functions with period $ c $ such that $ f(z) $ and $ f(z+c) $ share $ s_3 $ $ CM $ and \beas \ol E_{k)}(s_j,f)\subseteq \ol E_{k)}(s_j,f(z+c)),\;\; j=1, 2. \eeas If $ k\geq 2 $, then $ f(z)\equiv f(z+c) $ for all $ z\in\mathbb{C} $. \end{theoF} \textit{Lin} \emph{et al.} \cite{Lin & Lin & Wu} have shown that the number ``$ k= 2 $" is sharp, using the function $ f(z)=\sin z $ and $ c=\pi $.
It is easy to see that $ f(z+c) $ and $ f(z) $ share the value $ 0 $ $ CM $, and $ \ol E_{1)}(1,f(z))= \ol E_{1)}(1,f(z+c))=\phi $ and $ \ol E_{1)}(-1,f(z))= \ol E_{1)}(-1,f(z+c))=\phi $, but $ f(z+c)\not\equiv f(z) $. Since Theorem F is true for $ k\geq 2 $, \textit{Lin} \emph{et al.} \cite{Lin & Lin & Wu} investigated further the situation when $ k=1 $ and obtained the following result. \begin{theoG} \cite{Lin & Lin & Wu} Let $ f $ be a non-constant meromorphic function of hyper-order $ \rho_2(f)<1 $, $ \Theta(\infty,f)=1 $ and $ c\in\mathbb{C^{*}} $. Let $ s_1, s_2, s_3\in\mathcal{S}(f) $ be three distinct periodic functions with period $ c $ such that $ f(z) $ and $ f(z+c) $ share $ s_3 $ $ CM $ and \beas \ol E_{1)}(s_j,f)\subseteq \ol E_{1)}(s_j,f(z+c)),\;\; j=1, 2. \eeas Then $ f(z)\equiv f(z+c) $ or $ f(z)\equiv - f(z+c) $ for all $ z\in\mathbb{C} $. Moreover, the latter occurs only if $ s_1+s_2=2s_3 .$ \end{theoG} \begin{rem} We find that in the proof of \cite[Theorem 1.6]{Lin & Lin & Wu}, \textit{Lin} \emph{et al.} made a mistake. In {Theorem 1.6}, they obtained $ f(z+c)\equiv -f(z) $ as one of the conclusions under the condition $s_1+s_2=2s_3 $, whereas the correct conclusion is $ f(z+c)\equiv -f(z)+2s_3 $. One can easily see this from the following explanation. In \cite[Proof of Theorem 1.6, page 476]{Lin & Lin & Wu} the authors obtained $ \alpha=-1 $, where $ \alpha $, as they defined it, is eventually numerically equal to ${\left(f(z+c)-s_3\right)}/{\left(f(z)-s_3\right)} $ when $ s_1+s_2=2s_3 $. Hence, combining these, it is easy to see that ${\left(f(z+c)-s_3\right)}/{\left(f(z)-s_3\right)}=-1 $, which implies that $ f(z+c)\equiv -f(z)+2s_3.$ \end{rem} \par In this paper, taking care of these points, our aim is to extend the above results in a suitable general setting.
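The shift identity $f(z+\pi i)=-f(z)$ underlying the counterexamples above can be checked numerically at a few sample points. Below is a small Python sketch (purely illustrative; the choice $a=1$ and the sample points are arbitrary):

```python
import cmath

def f2(z, a=1.0):
    # Counterexample for k1 = k2 = 2: f(z) = a e^z (e^{2z} + 3) / (3 e^{2z} + 1)
    e = cmath.exp(z)
    return a * e * (e**2 + 3) / (3 * e**2 + 1)

def f3(z, a=1.0):
    # Counterexample for k1 = k2 = 3:
    # f(z) = 4a e^z (e^{2z} + 1) / (e^{4z} + 6 e^{2z} + 1)
    e = cmath.exp(z)
    return 4 * a * e * (e**2 + 1) / (e**4 + 6 * e**2 + 1)

c = cmath.pi * 1j  # the shift c = pi*i used in the examples
for z in (0.3 + 0.7j, -1.1 + 0.2j, 2.0 - 0.5j):
    # e^{z + pi*i} = -e^z while e^{2(z + pi*i)} = e^{2z}, so f(z + c) = -f(z)
    assert abs(f2(z + c) + f2(z)) < 1e-9
    assert abs(f3(z + c) + f3(z)) < 1e-9
print("shift identity f(z + pi*i) = -f(z) holds at the sample points")
```

Since only the odd factor $e^z$ changes sign under $z\mapsto z+\pi i$ while $e^{2z}$ is invariant, the antisymmetry is exact, which is what the assertions confirm numerically.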
Henceforth, for a meromorphic function $ f $ and $ c\in\mathbb{C^{*}} $, we recall (see \cite{Aha & RM & 2019}) $ \mathcal{L}_c(f): =c_1f(z+c)+c_0f(z) $, where $ c_1 (\neq 0), c_0\in\mathbb{C} $. Clearly, $\mathcal{L}_c(f) $ is a generalization of the shift $ f(z+c) $ as well as of the difference operator $ \Delta_{c}f $.\vspace{1mm} \par To give a correct version of the result of Lin \emph{et al.} in a general setting, we are mainly interested in finding affirmative answers to the following questions. \begin{ques} Is it possible to extend $ f(z+c) $ to $ \mathcal{L}_c(f) $ in all the above-mentioned results? \end{ques} \begin{ques} Can we obtain a result similar to Theorem E, replacing the condition $ \Theta(0;f)+\Theta(\infty;f)>{2}/{(k+1)} $, where $ k=\min\{k_1, k_2\} $, by a more general one? \end{ques} If the answers to the above questions are affirmative, then it is natural to raise the following questions. \begin{ques} Is the new general condition, so obtained, sharp? \end{ques} \begin{ques} Can we find the class of all meromorphic functions which satisfy the difference equation $ \mathcal{L}_c(f)\equiv f $? \end{ques} Answering the above questions is the main objective of this paper. We organize the paper as follows: In Section 2, we state the main results of this paper and exhibit several examples pertinent to the different issues regarding the main results. In Section 3, key lemmas are stated and some of them are proved. Section 4 is devoted to proving the main results of this paper. In Section 5, some questions are raised for further investigation of the main results of this paper. \section{Main Results} We prove the following result, generalizing that of \textit{Lin} \emph{et al.} \cite{Lin & Lin & Wu}. \begin{theo}\label{th2.1} Let $ f $ be a non-constant meromorphic function of hyper-order $ \rho_2(f)<1 $ and $ c, c_1\in\mathbb{C^{*}} $.
Let $ k_1 $, $ k_2 $ be two positive integers, and $ s_1 $, $ s_2\in\mathcal{S}(f)\setminus\{0\} $, $ s_3 $, $ s_4\in\hat{\mathcal{S}}(f) $ be four distinct periodic functions with period $ c $ such that $ f $ and $ \mathcal{L}_c(f) $ share $ s_3, $ $ s_4 $ $ CM $ and \beas \ol E_{k_{j})}(s_j,f)\subseteq \ol E_{k_{j})}(s_j,\mathcal{L}_c(f)),\;\; j=1, 2. \eeas If \beas \Theta(0;f)+\Theta(\infty;f)>\frac{1}{k_1+1}+\frac{1}{k_2+1}, \eeas then $\mathcal{L}_c(f)\equiv f $. Furthermore, $ f $ assumes the form \beas f(z)=\left(\frac{1-c_0}{c_1}\right)^{\displaystyle\frac{z}{c}}g(z) ,\eeas where $ g(z) $ is a meromorphic function such that $ g(z+c)=g(z) $ for all $ z\in\mathbb{C} $. \end{theo} \begin{rem} The following examples show that the condition \beas \Theta(0;f)+\Theta(\infty;f)>\frac{1}{k_1+1}+\frac{1}{k_2+1}\eeas in {Theorem \ref{th2.1}} is sharp. \end{rem} \begin{exm} Let $ f(z)={\left(ae^z(e^{2z}+3)\right)}/{\left(3e^{2z}+1\right)} $, $ c=\pi i $ and $ s_1=a $, $ s_2=-a $, where $ a\in\mathbb{C^{*}} $, $ s_3=0 $, $ s_4=\infty $ and $ k_1=2=k_2 $. It is easy to see that $ \mathcal{L}_{\pi i}(f)=-{\left(ae^z(e^{2z}+3)\right)}/{\left(3e^{2z}+1\right)} $, where $ c_1=c_0+1 $, $ c_0, c_1\in\mathbb{C^{*}} $, and $ f(z) $ satisfies all the conditions of {Theorem \ref{th2.1}} and \beas \Theta(0;f)+\Theta(\infty;f)=\frac{2}{3}=\frac{1}{k_1+1}+\frac{1}{k_2+1},\eeas \par where $ \Theta(0,f)={1}/{3}=\Theta(\infty,f) $, but $ \mathcal{L}_{\pi i}(f)\not\equiv f $. \end{exm} \begin{exm} Let $ f(z)={\left(4ae^z(e^{2z}+1)\right)}/{\left(e^{4z}+6e^{2z}+1\right)} $, $ c=\pi i $, $ s_1=a $, $ s_2=-a $, where $ a\in\mathbb{C^{*}} $, $ s_3=0 $, $ s_4=\infty $ and $ k_1=3=k_2 $.
Then clearly $ \mathcal{L}_{\pi i}(f)=-{\left(4ae^z(e^{2z}+1)\right)}/{\left(e^{4z}+6e^{2z}+1\right)} $, where $ c_1=c_0+1 $, $ c_0, c_1\in\mathbb{C^{*}} $, and $ f(z) $ satisfies all the conditions of \emph{Theorem \ref{th2.1}} and \beas \Theta(0;f)+\Theta(\infty;f)=\frac{1}{2}=\frac{1}{k_1+1}+\frac{1}{k_2+1},\eeas \par where $ \Theta(0,f)={1}/{2}$, $\Theta(\infty,f)=0 $, but we see that $ \mathcal{L}_{\pi i}(f)\not\equiv f $. \end{exm} \par As a consequence of {Theorem \ref{th2.1}}, we prove the following result. \begin{theo}\label{th2.2} Let $ f $ be a non-constant meromorphic function of hyper-order $ \rho_2(f)<1 $, $ \Theta(\infty,f)=1 $ and $ c, c_1\in\mathbb{C^{*}} $. Let $ s_1, s_2, s_3\in\hat{\mathcal{S}}(f) $ be three distinct periodic functions with period $ c $ such that $ f $ and $ \mathcal{L}_c(f) $ share $ s_3 $ $ CM $ and \beas \ol E_{k_{j})}(s_j,f)\subseteq \ol E_{k_{j})}(s_j,\mathcal{L}_c(f)),\;\; j=1, 2. \eeas If $ k_1, k_2\geq 2 $, then $ \mathcal{L}_c(f)\equiv f $. Furthermore, $ f $ assumes the form \beas f(z)=\left(\frac{1-c_0}{c_1}\right)^{\displaystyle\frac{z}{c}}g(z) ,\eeas where $ g(z) $ is a meromorphic function such that $ g(z+c)=g(z) $ for all $ z\in\mathbb{C} $. \end{theo} \par The following example shows that the number $ k_1=2=k_2 $ is sharp in {Theorem \ref{th2.2}}. \begin{exm}\label{ex2.1} We consider $ f(z)=a\cos z $, where $ a\in\mathbb{C^{*}} $, $s_1=a, s_2=-a$ and $ s_3=0 $. We choose $ \mathcal{L}_{\pi}(f)=c_1f(z+\pi)+c_0f(z) $, where $ c_1,\; c_0\in\mathbb{C^{*}} $ with $ c_1=c_0+1 $. Clearly $ f $ and $ \mathcal{L}_{\pi}(f) $ share $ s_3 $ $ CM $, $ \Theta(\infty,f)=1 $, $ \ol E_{1)}(a,f)=\phi=\ol E_{1)}(a,\mathcal{L}_{\pi}(f)) $ and $ \ol E_{1)}(-a,f)=\phi=\ol E_{1)}(-a,\mathcal{L}_{\pi}(f)) $, but $ f(z)\not\equiv \mathcal{L}_{\pi}(f).$ \end{exm} \par Naturally, we are interested in what happens when $ k_1=1=k_2 $, and we obtain the following result.
\begin{theo}\label{th2.3} Let $ f $ be a non-constant meromorphic function of hyper-order $ \rho_2(f)<1 $ with $ \Theta(\infty,f)=1 $ and $ c, c_1\in\mathbb{C^{*}} $. Let $ s_1, s_2, s_3\in\hat{\mathcal{S}}(f) $ be three distinct periodic functions with period $ c $ such that $ f $ and $ \mathcal{L}_c(f) $ share $ s_3 $ $ CM $ and \beas \ol E_{1)}(s_j,f)\subseteq \ol E_{1)}(s_j,\mathcal{L}_c(f)),\;\; j=1, 2. \eeas Then $ \mathcal{L}_c(f)\equiv f $ or $ \mathcal{L}_c(f)\equiv -f+2s_3 $. Furthermore, \begin{enumerate} \item[(i)] If $ \mathcal{L}_c(f)\equiv f $, then \beas f(z)=\left(\frac{ 1-c_0}{c_1}\right)^{\displaystyle\frac{z}{c}}g(z).\eeas \item[(ii)] If $ \mathcal{L}_c(f)\equiv -f+2s_3 $, then \beas f(z)=\left(\frac{-1-c_0}{c_1}\right)^{\displaystyle\frac{z}{c}}g(z)+2s_3,\;\;\text{for all}\; z\in\mathbb{C},\eeas \end{enumerate} where $ g(z) $ is a meromorphic function such that $ g(z+c)=g(z) $. Moreover, $ \mathcal{L}_c(f)\equiv -f+2s_3 $ occurs only if $ s_1+s_2=2s_3 $. \end{theo} \begin{rem} We see that {Theorems \ref{th2.1}, \ref{th2.2}}\; and {\ref{th2.3}}\; directly improve {Theorems E, F} and {G}, respectively. \end{rem} \begin{rem} We see from {Example \ref{ex2.1}}\; that, in {Theorem \ref{th2.3}},\; the possibility $ \mathcal{L}_c(f)\equiv -f+2s_3 $ can occur. \end{rem}\par The following example shows that the restriction on the growth of $ f $ in our above results is necessary and sharp. \begin{exm} Let $ f(z)=e^{g(z)} $, where $ g(z) $ is an entire function with $ \rho(g)=1 $; hence, for $ c_1={1}/{2}=c_0 $, it is easy to see that $ \mathcal{L}_{\pi}(f)={\left(e^{2g(z)}+1\right)}/{2e^{g(z)}} $. We choose $ s_1=1,\; s_2=-1 $ and $ s_3=\infty $.
Clearly $ \Theta(\infty,f)=1 $, $ \rho_2(f)=1 $, $ f $ and $ \mathcal{L}_{\pi}(f) $ share $ s_3 $ $ CM $, and $ \ol E_{1)}(1,f)\subseteq\ol E_{1)}(1,\mathcal{L}_{\pi}(f)) $ and $ \ol E_{1)}(-1,f)\subseteq\ol E_{1)}(-1,\mathcal{L}_{\pi}(f)) $, but neither $ \mathcal{L}_{\pi}(f)\equiv f $ nor $ \mathcal{L}_{\pi}(f)\equiv -f+2s_3 $ holds. Moreover, the function does not assume the specific form above. \end{exm} \par \begin{rem} The next example shows that the condition $ \Theta(\infty,f)=1 $ in {Theorem \ref{th2.3}} cannot be omitted. \end{rem} \begin{exm} Let $ f(z)=1/ \cos z $, $c_1=1$, $ c_0=0 $, $ s_1=1 $, $ s_2=-1 $ and $ s_3=0 $. Clearly $ \Theta(\infty,f)=0 $, $ f $ and $ \mathcal{L}_{3\pi/2}(f) $ share $ s_3 $ $ CM $, $ \ol E_{1)}(1,f)\subseteq\ol E_{1)}(1,\mathcal{L}_{3\pi/2}(f)) $ and $ \ol E_{1)}(-1,f)\subseteq\ol E_{1)}(-1,\mathcal{L}_{3\pi/2}(f)) $. However, neither $ \mathcal{L}_{3\pi/2}(f)\equiv f $ nor $ \mathcal{L}_{3\pi/2}(f)\equiv -f+2s_3 $ holds. Moreover, the function does not assume the specific form above. \end{exm} \section{Key lemmas} In this section, we present some necessary lemmas which will play a key role in proving the main results. For a non-zero complex number $ c $ and for integers $ n\geq 1 $, we define the higher-order difference operators $ \Delta_c^nf:=\Delta_c^{n-1}(\Delta_c f) $. \begin{lem}\cite{Yan-FE-1980}\label{lem3.1} Let $ c\in\mathbb{C} $, $ n\in\mathbb{N} $, and let $ f $ be a meromorphic function of finite order. Then for any small periodic function $ a\equiv a(z)\in\mathcal{S}(f) $, \beas m\left(r,\frac{\Delta_c^nf}{f(z)-a(z)}\right)=S(r,f), \eeas where the exceptional set associated with $ S(r,f) $ is of at most finite logarithmic measure. \end{lem} \begin{lem}\cite{Moh-FFA-1971,Val-BSM-59}\label{lem3.2} If $ \mathcal{R}(f) $ is rational in $ f $ and has small meromorphic coefficients, then \beas T(r,\mathcal{R}(f))=\deg_f(\mathcal{R})T(r,f)+S(r,f).
\eeas \end{lem} \begin{lem}\cite{Yan & Yi-2003}\label{lem3.3} Suppose that $ h $ is a non-constant entire function and $ f(z)=e^{h(z)} $; then $ \rho_2(f)=\rho(h) $. \end{lem} \par In \cite{Chi & Fen-2008,Hal & Kor-JMMA-2006}, the first difference analogue of the lemma on the logarithmic derivative was proved, and for hyper-order $ \rho_2(f)<1 $ the following extension holds; see \cite{Hal & Kor & Toh-TAMS-2014}. \begin{lem}\cite{Hal & Kor & Toh-TAMS-2014}\label{lem3.4} Let $ f $ be a non-constant meromorphic function and $ c\in\mathbb{C} $. If $ f $ is of finite order, then \beas m\left(r,\frac{f(z+c)}{f(z)}\right)=O\left(\frac{\log r}{r} T(r,f)\right) \eeas for all $ r $ outside of a set $ E $ with zero logarithmic density. If the hyper-order $ \rho_2(f)<1 $, then for each $ \epsilon>0 $, we have \beas m\left(r,\frac{f(z+c)}{f(z)}\right)=O\left(\frac{T(r,f)}{r^{1-\rho_2-\epsilon}}\right) \eeas for all $ r $ outside of a set of finite logarithmic measure. \end{lem} \begin{lem}\cite{Yamanoi-AM-2004}\label{lem3.5} Let $ f $ be a non-constant meromorphic function, $ s_j\in\hat{\mathcal{S}}(f) $, $ j=1, 2, ..., q,\;$ $ (q\geq 3) $. Then for any positive real number $ \epsilon $, we have \beas (q-2-\epsilon)T(r,f)\leq\sum_{j=1}^{q}\ol N\left(r,\frac{1}{f-s_j}\right),\; r\not\in E, \eeas where $ E\subset [0,\infty) $ and satisfies $\displaystyle \int_{E}d\log \log r<\infty $. \end{lem} We now prove the following lemma; a similar proof can also be found in \cite{Aha & RM & 2019}. \begin{lem}\label{lem3.6} Let $ f $ be a non-constant meromorphic function such that \beas \ol E(s_j,f)\subseteq \ol E(s_j,c_1f(z+c)+c_0f(z)),\;\; j=1, 2,\eeas where $ s_1, s_2\in\mathcal{S}(f) $, $ c,\; c_0,\; c_1(\neq 0)\in\mathbb{C^{*}} $; then $ f $ is not a rational function. \end{lem} \begin{proof} We prove this lemma by contradiction. Suppose that $ f $ is a rational function.
Then $ f(z)={P(z)}/{Q(z)} $, where $ P $ and $ Q $ are two polynomials relatively prime to each other and $ P(z)Q(z)\not\equiv 0 $. Hence \bea\label{e1.1} E(0,P)\cap E(0,Q)=\phi. \eea It is easy to see that \beas c_1f(z+c)+c_0f(z)&=&c_1\frac{P(z+c)}{Q(z+c)}+c_0\frac{P(z)}{Q(z)}\\&=&\frac{c_1P(z+c)Q(z)+c_0P(z)Q(z+c)}{Q(z+c)Q(z)}\\&=&\frac{P_1(z)}{Q_{1}(z)},\;\text{say,} \eeas where $ P_1 $ and $ Q_1 $ are two relatively prime polynomials and $ P_1(z)Q_1(z)\not\equiv 0 $.\vspace{1mm} \par Since $ \ol E(s_1,f)\subseteq\ol E(s_1,c_1f(z+c)+c_0f(z)) $ and $ f $ is a rational function, there must exist a polynomial $ h(z) $ such that \beas c_1f(z+c)+c_0f(z)-s_1=(f-s_1)h(z), \eeas which can be rewritten as \bea\label{e2.1} \frac{c_1P(z+c)Q(z)+c_0P(z)Q(z+c)}{Q(z+c)Q(z)}-s_1\equiv \left(\frac{P(z)}{Q(z)}-s_1\right)h(z).\eea We now discuss the following cases:\\ \noindent{\bf{Case 1.}} Suppose $ P(z) $ is non-constant.\par Then by the \textit{Fundamental Theorem of Algebra}, there exists $ z_0\in\mathbb{C} $ such that $ P(z_0)=0 $. It then follows from (\ref{e2.1}) that \bea\label{e2.3} c_1\frac{P(z_0+c)}{Q(z_0+c)}\equiv(1-h(z_0))s_1^{0}, \eea where $ s_1^{0}=s_1(z_0) $.\par \noindent{\bf{Subcase 1.1.}} Let $ z_0\in\mathbb{C} $ be such that $ s_1(z_0)=0 $.\par Then from (\ref{e2.3}), it is easy to see that $ P(z_0+c)=0 $. We can then deduce from (\ref{e1.1}) that $ P(z_0+mc)=0 $ for all positive integers $ m $. However, this is impossible, and hence we conclude that the polynomial $ P(z) $ is a non-zero constant.\par \noindent{\bf{Subcase 1.2.}} Let $ z_0\in\mathbb{C} $ be such that $ s_1(z_0)\neq 0 $.
\par Then from (\ref{e2.3}), we obtain that \beas P(z_0+c)\equiv\frac{s_1^{0}}{c_1}(1-h(z_0))Q(z_0+c).\eeas\par Proceeding exactly as above, we obtain \bea\label{e2.4} P(z_0+mc)\equiv\frac{s_1^{0}}{c_1}(1-h(z_0))Q(z_0+mc).\eea\par In view of (\ref{e2.3}) and (\ref{e2.4}), a simple computation shows that \beas \frac{P(z_0+c)}{Q(z_0+c)}=\frac{P(z_0+mc)}{Q(z_0+mc)}\;\; \text{for all positive integers $ m $,} \eeas which contradicts the fact that $ E(0,P)\cap E(0,Q)=\phi $.\par Therefore, we see that $ f(z) $ takes the form $ f(z)={\eta}/{Q(z)} $, where $ P(z)=\eta=\text{constant}\; (\neq 0).$\par \noindent{\bf{Case 2.}} Let $ Q(z) $ be non-constant.\par Now \bea\label{e2.5} c_1f(z+c)+c_0f(z)&=&\frac{c_1\eta\;Q(z)+c_0\eta\; Q(z+c)}{Q(z+c)Q(z)}.\eea\par Since $ \ol E(s_2,f)\subseteq\ol E(s_2,c_1f(z+c)+c_0f(z)) $, there exists a polynomial $ h_1(z) $ such that $ c_1f(z+c)+c_0f(z)-s_2=(f-s_2)h_1(z),$ which can be written as \bea\label{e2.6} c_1\;Q(z)+c_0 Q(z+c)\equiv \frac{\eta-s_2Q(z)}{d}h_1(z)Q(z+c).\eea\par Since $ Q(z) $, and hence $ Q(z+c) $, is a non-constant polynomial, by the \textit{Fundamental Theorem of Algebra} there exist $ z_0 $ and $ z_1 $ such that $ Q(z_0)=0=Q(z_1+c) $. \par \noindent{\bf{Subcase 2.1.}} When $ Q(z_0)=0 $, then from (\ref{e2.6}), we see that $ h_1(z_0)=-{c_0}/{\eta} $, which is absurd.\par \noindent{\bf{Subcase 2.2.}} When $ Q(z_1+c)=0 $, then from (\ref{e2.6}), we get $ Q(z_1)=0 $, which is not possible.\par Thus we conclude that $ Q(z) $ is a non-zero constant, say $ \eta_2 $. Thus we have $ f(z)={\eta}/{\eta_2} $, a constant, which is a contradiction. This completes the proof. \end{proof} \begin{lem}\cite{Hal & Kor & Toh-TAMS-2014}\label{lem3.7} Let $ T: (0,+\infty)\rightarrow (0,+\infty) $ be a non-decreasing continuous function, and let $ s\in (0,+\infty) $.
If the hyper-order of $ T $ is strictly less than one, i.e., \beas \limsup_{r\rightarrow +\infty}\frac{\log ^{+}\log ^{+} T(r)}{\log r}=\rho_2 <1,\eeas then \beas T(r+s)=T(r)+o\left(\frac{T(r)}{r^{1-\rho_2-\epsilon}}\right),\eeas where $ \epsilon>0 $ and $ r\rightarrow\infty $, outside of a set of finite logarithmic measure. \end{lem} \section{Proofs of the main results} In this section, we give the proofs of our main results. \begin{proof}[Proof of Theorem \ref{th2.1}] First of all, we suppose that $ s_j\in\mathbb{C} $, $ j=1, 2, 3, 4 $. By the assumption of the theorem, $ f(z) $ and $ \mathcal{L}_c(f)=c_1f(z+c)+c_0f(z) $ share $ s_3 $, $ s_4 $ $ CM $; hence we must have \bea\label{e3.1} \frac{\left(f-s_3\right)\left(\mathcal{L}_c(f)-s_4\right)}{\left(f-s_4\right)\left(\mathcal{L}_c(f)-s_3\right)}=e^{h(z)}, \eea where $ h(z) $ is an entire function with $ \rho(h)<1 $ by Lemma \ref{lem3.3}. In view of Lemma \ref{lem3.4}, we obtain \beas T\left(r,e^h\right)=S(r,f). \eeas\par Therefore, with the help of Lemma \ref{lem3.2}, we obtain \beas T(r,\mathcal{L}_c(f))=T(r,f)+S(r,f). \eeas \par Next we suppose that $ z_0\in\ol E_{k_1)}(s_1,f)\cup\ol E_{k_2)}(s_2,f) $. Then from (\ref{e3.1}), one may easily deduce that $ e^{h(z_0)}=1 $. For the sake of convenience, we set $ \gamma :=e^{h(z)} $ and \beas S(r) :=S(r,\mathcal{L}_c(f))=S(r,f).\eeas \par We now split the problem into two cases.\par \noindent{\bf Case 1.} Let $ e^{h(z)}\not\equiv 1 $.\par A simple computation shows that \bea\label{e3.2} \ol N_{k_1)} \left(r,\frac{1}{f-s_1}\right)&\leq& N\left(r,\frac{1}{\gamma -1}\right)\leq T(r,\gamma)+O(1)\leq S(r) \eea and \bea\label{e3.3} \ol N_{k_2)} \left(r,\frac{1}{f-s_2}\right)&\leq& N\left(r,\frac{1}{\gamma -1}\right)\leq T(r,\gamma)+O(1)\leq S(r). \eea \par Without loss of generality, we may assume that $ s_3 $, $ s_4 \in\mathcal{S}(f)\setminus\{0\}$.
By Lemma \ref{lem3.5}, for \beas \epsilon\in\left(0,\frac{1}{3}\left(\Theta(0;f)+\Theta(\infty;f)\right)-\frac{1}{k_1+1}-\frac{1}{k_2+1}\right), \eeas we obtain \bea\label{e3.4} (4-\epsilon) T(r,f)\leq \ol N(r,f)+\ol N\left(r,\frac{1}{f}\right)+\sum_{j=1}^{4}\ol N\left(r,\frac{1}{f-s_j}\right)+S(r,f). \eea\par With the help of (\ref{e3.2}) and (\ref{e3.3}), it follows from (\ref{e3.4}) that \beas (2-\epsilon) T(r,f)\leq \ol N(r,f)+\ol N\left(r,\frac{1}{f}\right)+\sum_{j=1}^{2}\ol N_{(k_j+1}\left(r,\frac{1}{f-s_j}\right)+S(r,f), \eeas which gives \beas \Theta(0;f)+\Theta(\infty;f)\leq\frac{1}{k_1+1}+\frac{1}{k_2+1}, \eeas contradicting the assumption that \beas \Theta(0;f)+\Theta(\infty;f)>\frac{1}{k_1+1}+\frac{1}{k_2+1}. \eeas \noindent{\bf Case 2.} Let $ e^{h(z)}\equiv 1 $. Then \beas \frac{(f-s_3)(\mathcal{L}_c(f)-s_4)}{(f-s_4)(\mathcal{L}_c(f)-s_3)}=1, \eeas and on simplification, we obtain $ \mathcal{L}_c(f)\equiv f(z) $ for all $ z\in\mathbb{C} $.\par We now determine the class of all meromorphic functions satisfying the difference equation $ \mathcal{L}_c(f)\equiv f $. By the hypotheses of the theorem and {Lemma \ref{lem3.6}}, $ f $ is not a rational function. Therefore $ f(z) $ must be a transcendental meromorphic function. \par We also see that $ f(z) $ and $ f(z+c) $ are related by \bea\label{e3.5} f(z+c)=\left(\frac{1-c_0}{c_1}\right)f(z). \eea \par Let $ f_1(z) $ and $ f_2(z) $ be two solutions of (\ref{e3.5}) (see \cite{Aha & RM & 2019} for more details). Then it is easy to see that\bea\label{e3.6} f_1(z+c)=\left(\frac{1-c_0}{c_1}\right)f_1(z)\\\label{e3.7} f_2(z+c)=\left(\frac{1-c_0}{c_1}\right)f_2(z). \eea \par We set $ h(z):=f_1(z)/f_2(z) $. Then in view of (\ref{e3.6}) and (\ref{e3.7}), we obtain \beas h(z+c)=\frac{f_1(z+c)}{f_2(z+c)}=\displaystyle\frac{\displaystyle\frac{1-c_0}{c_1}f_1(z)}{\displaystyle\frac{1-c_0}{c_1}f_2(z)}=\frac{f_1(z)}{f_2(z)}=h(z),\eeas for all $ z\in\mathbb{C} $.
Therefore, it is easy to verify that $$f_2(z)=\displaystyle\left(\frac{1-c_0}{c_1}\right)^{\displaystyle\frac{z}{c}}g_2(z),$$ where $ g_2(z) $ is a meromorphic function with $ g_2(z+c)=g_2(z) $, is a solution of (\ref{e3.5}). Hence, it is also easy to verify that $ f_1(z)=f_2(z)h(z) $ is a solution of (\ref{e3.5}). Thus the linear combination \beas a_1f_1(z)+a_2f_2(z)&=& \displaystyle\left(\frac{1-c_0}{c_1}\right)^{\displaystyle\frac{z}{c}}\left(a_1h(z)+a_2\right)g_2(z)\\&=&\displaystyle\left(\frac{1-c_0}{c_1}\right)^{\displaystyle\frac{z}{c}}\sigma (z),\eeas where $ \sigma(z)=\left(a_1h(z)+a_2\right)g_2(z) $ is such that $ \sigma(z+c)=\sigma(z) $, for all $ z\in\mathbb{C} $, is the general solution of (\ref{e3.5}). Hence, the precise form of $ f(z) $ is \beas f(z)=\displaystyle\left(\frac{1-c_0}{c_1}\right)^{\displaystyle\frac{z}{c}}g(z), \eeas where $ g(z) $ is a meromorphic function with $ g(z+c)=g(z) $, for all $ z\in\mathbb{C} $.\vspace{1mm} \par This completes the proof. \end{proof} \begin{proof}[Proof of Theorem \ref{th2.3}] Let us suppose that $ g(z) $ is the canonical product of the poles of $ f $. Then by Lemma \ref{lem3.4}, we obtain \bea\label{e4.8} m\left(r,\frac{g(z+c)}{g(z)}\right)=S(r,f). \eea\par Since $ \Theta(\infty;f)=1 $, it is easy to see that \beas \limsup_{r\rightarrow +\infty}\frac{\ol N(r,f)}{T(r,f)}=0. \eeas\par Therefore it follows from (\ref{e4.8}) that\bea\label{e4.9} T\left(r,\frac{g(z+c)}{g(z)}\right)=S(r,f). \eea\par Since $ f $ and $ \mathcal{L}_c(f) $ share $ s_3 $ $ CM $, by Lemma \ref{lem3.3}, we obtain \bea\label{e4.10} \frac{\mathcal{L}_c(f)-s_3}{f-s_3}=e^{\mathcal{H}(z)}\frac{g(z)}{g(z+c)}, \eea where $ \mathcal{H}(z) $ is an entire function with $ \rho(\mathcal{H})<1 $. By Lemma \ref{lem3.4}, we also obtain \bea\label{e4.11} T\left(r,e^{\mathcal{H}(z)}\frac{g(z)}{g(z+c)}\right)=S(r,f).
\eea\par Therefore, by Lemma \ref{lem3.2} and (\ref{e4.11}), a simple computation shows that $ T(r,\mathcal{L}_c(f))=T(r,f)+S(r,f). $ For the sake of convenience, we set \beas \beta:=e^{\mathcal{H}(z)}\frac{g(z)}{g(z+c)}\;\;\text{and}\;\; S(r):=S(r,\mathcal{L}_c(f))=S(r,f). \eeas\par Suppose that $ \mathcal{L}_c(f)\not\equiv f(z) $, i.e., $ \beta\not\equiv 1 $. Then with the help of (\ref{e4.10}) and from the assumption, we obtain \bea\label{e4.12} \ol N_{1)}\left(r,\frac{1}{f-s_1}\right)&\leq& N\left(r,\frac{1}{\beta -1}\right)\leq T(r,\beta)+O(1)= S(r) \eea and \bea\label{e4.13} \ol N_{1)}\left(r,\frac{1}{f-s_2}\right)&\leq& N\left(r,\frac{1}{\beta -1}\right)\leq T(r,\beta)+O(1)= S(r). \eea By Lemma \ref{lem3.7}, and using (\ref{e4.12}) and (\ref{e4.13}), we easily obtain \bea\label{e4.14} \ol N_{1)}\left(r,\frac{1}{\mathcal{L}_c(f)-s_1}\right)\leq \ol N_{1)}\left(r,\frac{1}{f-s_1}\right)+S(r)=S(r) \eea and \bea\label{e4.15} \ol N_{1)}\left(r,\frac{1}{\mathcal{L}_c(f)-s_2}\right)\leq \ol N_{1)}\left(r,\frac{1}{f-s_2}\right)+S(r)=S(r). \eea\par On the other hand, it follows from (\ref{e4.10}) that \bea\label{e4.16} \mathcal{L}_c(f)-s_1&=&(s_3-s_1)+\beta\; (f-s_3)\\&=&\beta\;\left(f-\frac{s_1+(\beta -1)s_3}{\beta}\right)\nonumber. \eea \par Similarly, we obtain \bea\label{e4.17} \mathcal{L}_c(f)-s_2=\beta\;\left(f-\frac{s_2+(\beta -1)s_3}{\beta}\right).\eea\par It is easy to see that \bea\label{e4.18} N\left(r,\frac{1}{\mathcal{L}_c(f)-s_1}\right)=N\left(r,\frac{1}{f-\frac{s_1+(\beta -1)s_3}{\beta}}\right)+S(r) \eea and \bea\label{e4.19} N\left(r,\frac{1}{\mathcal{L}_c(f)-s_2}\right)=N\left(r,\frac{1}{f-\frac{s_2+(\beta -1)s_3}{\beta}}\right)+S(r).
\eea\par Now our aim is to deal with the following three cases.\vspace{1mm}\\ \noindent{\bfseries{Case 1.}} Suppose that $ \left({\left((\beta-1)s_3+s_1\right)}/{\beta}\right)\neq s_2 $.\par Since $ \left({\left((\beta -1)s_3+s_1\right)}/{\beta}\right)\neq s_1 $ and $ \Theta(\infty;f)=1 $, by Lemma \ref{lem3.5} with $ \epsilon\in \left(0,{1}/{2}\right) $, it follows from (\ref{e4.10}), (\ref{e4.12}), (\ref{e4.13}), (\ref{e4.14}) and (\ref{e4.18}) that \bea && (2-\epsilon) T(r,f)\\&\leq& \ol N(r,f)+\ol N\left(r,\frac{1}{f}\right)+\ol N\left(r,\frac{1}{f-s_2}\right)+\ol N\left(r,\frac{1}{f-\frac{(\beta -1)s_3+s_1}{\beta}}\right)\nonumber\\&\leq& \ol N_{(2}\left(r,\frac{1}{f-s_1}\right)+\ol N_{(2}\left(r,\frac{1}{f-s_2}\right)+\ol N_{(2}\left(r,\frac{1}{\mathcal{L}_c(f)-s_1}\right)\nonumber\\&\leq&\frac{1}{2} T(r,f)+\frac{1}{2} T(r,f)+\frac{1}{2} T(r,f)+S(r)\nonumber\\&=&\frac{3}{2} T(r,f)+S(r,f)\nonumber, \eea which is a contradiction.\vspace{1mm} \\ \noindent{\bfseries{Case 2.}} Suppose that $ \left({\left((\beta -1)s_3+s_2\right)}/{\beta}\right)\neq s_1 $.\par Since $ \left({\left((\beta -1)s_3+s_2\right)}/{\beta}\right)\neq s_2 $ and $ \Theta(\infty;f)=1 $, proceeding exactly as in {Case 1}, we arrive at a contradiction.\par Therefore, we must have $ \mathcal{L}_c(f)\equiv f $, and hence, following the proof of {Theorem \ref{th2.1}}, we obtain the precise form of the function. \vspace{1mm}\\ \noindent{\bfseries{Case 3.}} Suppose that \beas \frac{(\beta -1)s_3+s_2}{\beta}=s_1 \eeas and \beas \frac{(\beta -1)s_3+s_1}{\beta}=s_2 .\eeas\par An elementary calculation shows that $ \beta =-1 $, so that $ 2s_3=s_1+s_2 $.
Thus from (\ref{e4.10}), we have $ \mathcal{L}_c(f)\equiv -f(z)+2s_3 $, and by the same argument used in the previous cases, it is not hard to show that $ f(z) $ will take the form \beas f(z)=\left(\frac{-1-c_0}{c_1}\right)^{\displaystyle\frac{z}{c}}g(z)+2s_3,\;\;\text{for all}\; z\in\mathbb{C}, \eeas where $ g(z) $ is a meromorphic function with period $ c $. This completes the proof. \end{proof} \section{Concluding remarks and open question} \par Let us suppose that $ \mathcal{L}_c(f)\equiv f $, where $ f $ is a non-constant meromorphic function. Since $ f $ cannot be a rational function (see \cite{Aha & RM & 2019} for detailed information), $ f $ must be transcendental, and hence $ f(z) $ takes the precise form \beas f(z)=\left(\frac{1-c_0}{c_1}\right)^{\displaystyle\frac{z}{c}}g(z), \eeas where $ g(z) $ is a meromorphic function with period $ c $. We can write $ f(z)=\alpha^{\frac{z}{c}}g(z), $ where $ \alpha $ is a root of the equation $ c_1z+c_0=1 $.\vspace{1mm} \par For further generalization, we define $ \mathcal{L}_c^n(f):=c_nf(z+nc)+\cdots+c_1f(z+c)+c_0f(z) $ (see \cite{Ban & Aha & JCMA-2020} for details), where $ c_n(\neq 0), \ldots, c_1, c_0\in\mathbb{C} $. For the particular values of the constants $ c_j=(-1)^{n-j}\binom nj $ for $ j=0, 1, \ldots, n $, it is easy to see that $\mathcal{L}_c^n(f)=\Delta_{c}^n(f). $ One can check that $ f(z)=2^{\frac{z}{c}}g(z) $, where $ g $ is a meromorphic function of period $ c $, solves the difference equation $ \Delta_{c}^n(f)\equiv f $. We are mainly interested in finding the precise form of the function $ f $ when it solves the difference equation $ \mathcal{L}_c^n(f)\equiv f $.
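\par Indeed, the claim that $ f(z)=2^{\frac{z}{c}}g(z) $ solves $ \Delta_{c}^n(f)\equiv f $ can be checked directly, using only the periodicity of $ g $: \beas \Delta_c f(z)=f(z+c)-f(z)=2^{\frac{z+c}{c}}g(z+c)-2^{\frac{z}{c}}g(z)=(2-1)\,2^{\frac{z}{c}}g(z)=f(z), \eeas and hence $ \Delta_c^n(f)=\Delta_c^{n-1}(f)=\cdots=f $ by induction on $ n $.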
We conjecture the following.\\ \noindent{\bf Conjecture:} Let $ f $ be a meromorphic function such that $ \mathcal{L}_c^n(f)\equiv f $. Then $ f $ assumes the form $$ f(z)=\alpha_n^{{z}/{c}}g_n(z)+\cdots+\alpha_1^{{z}/{c}}g_1(z), $$ where $ g_j $ $ (j=1, 2, \ldots, n) $ are meromorphic functions of period $ c $, and $ \alpha_j $ $ (j=1, 2, \ldots, n) $ are roots of the equation $ c_nz^n+\cdots+c_1z+c_0=1 $. \vspace{1mm} \par Based on the above discussion, we now pose the following question for future investigation of the main results of the paper. \begin{ques} Keeping all other conditions intact, is it possible, for a meromorphic function $ f $, to obtain results corresponding to {Theorems \ref{th2.1}, \ref{th2.2}} and {\ref{th2.3}}\; for $ \mathcal{L}_c^n(f) $? \end{ques}
\section{Introduction} \label{Sec:Intro} \input{Section/Intro} \section{Preliminaries} \label{Sec:Prelims} \input{Section/Prelims} \section{Comparator automata} \label{Sec:Comparator} \input{Section/Comparator} \input{Section/DS} \section{Limit-average comparator} \label{Sec:LA} \input{Section/LA.tex} \section{Concluding remarks} \label{Sec:Conclusion} \input{Section/Conclude.tex} \section*{Acknowledgment} \noindent We thank the anonymous reviewers for their comments. We thank K. Chatterjee, L. Doyen, T. Henzinger, G. A. Perez and J. F. Raskin for corrections to earlier drafts, and their contributions to this paper. We thank P. Ganty and R. Majumdar for preliminary discussions on the limit-average comparator. This work was partially supported by NSF Grant No. 1704883, ``Formal Analysis and Synthesis of Multiagent Systems with Incentives". \bibliographystyle{abbrv} \subsection{Related work} \label{Sec:RelatedWork} \input{Section/RelatedWork.tex} \subsection{Quantitative inclusion} \label{Sec:Inclusion} The aggregate function (resp.\ comparator) of a quantitative inclusion problem refers to the aggregate function $f$ in $P\subseteq_f Q$ (resp.\ the comparator for $f$). This section presents a generic algorithm (Algorithm~\ref{Alg:DSInclusion}) to solve quantitative inclusion between weighted $\omega$-automata $P$ and $Q$ with $\omega$-regular comparators, and focuses on non-strict quantitative inclusion. $\mathsf{InclusionReg}$ (Algorithm~\ref{Alg:DSInclusion}) is an algorithm for quantitative inclusion between weighted $\omega$-automata $P$ and $Q$ with $\omega$-regular comparator $\mathcal{A}_f$ for relation $\leq$. $\mathsf{InclusionReg}$ takes $P$, $Q$, and $\mathcal{A}_f$ as input, and returns $\mathsf{True}$ iff $P \subseteq_f Q$. The results for strict quantitative inclusion are similar. We use the following motivating example to explain the steps of Algorithm~\ref{Alg:DSInclusion}.
\paragraph{Motivating example} \label{Sec:Example} \input{Section/Example.tex} \paragraph{Key ideas} A run $\rho_P$ in $P$ on word $w\in \Sigma^\omega$ is said to be {\em dominated} w.r.t $P\subseteq_f Q$ if there exists a run $\rho_Q$ in $Q$ on the same word $w$ such that $wt_P(\rho_P)\leq wt_Q(\rho_Q)$. $P \subseteq_f Q$ holds iff every run $\rho_P$ in $P$ is dominated w.r.t. $P\subseteq_f Q$. $\mathsf{InclusionReg}$ constructs a B\"uchi automaton $\mathit{Dom}$ that consists of exactly the dominated runs of $P$ w.r.t $P\subseteq_f Q$, and returns $\mathsf{True}$ iff $\mathit{Dom}$ contains all runs of $P$. To obtain $\mathit{Dom}$, it constructs a B\"uchi automaton $\mathit{DomProof}$ that accepts a word $(\rho_P, \rho_Q)$ iff $\rho_P$ and $\rho_Q$ are runs of the same word in $P$ and $Q$ respectively, and $wt_P(\rho_P)\leq wt_Q(\rho_Q)$, i.e., if $w_P$ and $w_Q$ are the weight sequences of $\rho_P$ and $\rho_Q$, respectively, then $(w_P, w_Q)$ is accepted by the $\omega$-regular comparator $\mathcal{A}_f^\leq$ for aggregate function $f$ with relation $\leq$. The projection of $\mathit{DomProof}$ on runs of $P$ results in $\mathit{Dom}$.
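As an illustration of the key ideas only (and not of the automata-theoretic construction itself, which builds $\mathit{Dom}$ symbolically via product and comparator intersection), the domination check can be sketched for lasso-shaped runs with the aggregate function $f=\limsup$. The helper names below are hypothetical, and the example run data is read off Figures~\ref{Fig:WA-Phat}-\ref{Fig:WA-Qhat}:

```python
# Hypothetical sketch of the "dominated run" test behind InclusionReg, for
# lasso-shaped (ultimately periodic) runs and the aggregate f = limsup.
# A run is encoded as (prefix_weights, cycle_weights).

def limsup_lasso(prefix, cycle):
    """Limsup of the weight sequence prefix . cycle^omega: only the cycle
    matters in the limit, so it is the maximum weight on the cycle."""
    return max(cycle)

def dominated(run_p, runs_q):
    """run_p is dominated w.r.t. P <=_f Q iff some run of Q on the same
    word has aggregate weight at least as large."""
    return any(limsup_lasso(*rq) >= limsup_lasso(*run_p) for rq in runs_q)

# Runs on a^omega read off Figures 3-4: P has one run with weights 1.1^omega;
# Q has runs 0.1^omega and 2.1^omega.
run_p = ([1], [1])
runs_q = [([0], [1]), ([2], [1])]
print(dominated(run_p, runs_q))  # True: the single run of P is dominated
```

On this data the single run of $P$ is dominated, matching the fact that $\mathit{Dom}$ in Figure~\ref{Fig:Dim} contains all runs of $\hat{P}$.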
\begin{algorithm}[t] \caption{ $\mathsf{InclusionReg}(P,Q, \mathcal{A}_f)$, Is $P \subseteq_f Q$?} \label{Alg:DSInclusion} \begin{algorithmic}[1] \STATE \textbf{Input: } Weighted automata $P$, $Q$, and $\omega$-regular comparator $\mathcal{A}_f$ (Inequality $\leq$) \STATE \textbf{Output: } $\mathsf{True}$ if $P \subseteq_f Q$, $\mathsf{False}$ otherwise \STATE $\hat{P} \leftarrow \mathsf{AugmentWtAndLabel}(P)$ \label{alg-line:AugmentP} \STATE $\hat{Q} \leftarrow \mathsf{AugmentWtAndLabel}(Q)$ \label{alg-line:AugmentQ} \STATE $\hat{P}\times\hat{Q} \leftarrow \mathsf{MakeProduct}(\hat{P}, \hat{Q})$ \label{alg-line:Prod} \STATE $\mathit{DomProof} \leftarrow \mathsf{Intersect}(\hat{P}\times\hat{Q}, \mathcal{A}_f^{\leq} )$ \label{alg-line:Intersect} \STATE $\mathit{Dom} \leftarrow \mathsf{FirstProject}(\mathit{DomProof})$ \label{alg-line:Project} \RETURN {$\hat{P} \equiv \mathit{Dom}$} \label{alg-line:ensure} \end{algorithmic} \end{algorithm} \paragraph{Algorithm details} For the sake of simplicity, we assume that every word present in $P$ is also present in $Q$, i.e., $P\subseteq Q$ (qualitative inclusion). $\mathsf{InclusionReg}$ has three steps: (a). $\mathsf{UniqueId}$ (Lines~\ref{alg-line:AugmentP}-\ref{alg-line:AugmentQ}): Enables unique identification of runs in $P$ and $Q$ through {\em labels}. (b). $\mathsf{Compare}$ (Lines~\ref{alg-line:Prod}-\ref{alg-line:Project}): Compares weights of runs in $P$ with weights of runs in $Q$, and constructs $\mathit{Dom}$. (c). $\mathsf{DimEnsure}$ (Line~\ref{alg-line:ensure}): Checks whether all runs of $P$ are dominated. \begin{enumerate} \item $\mathsf{UniqueId}$: $\mathsf{AugmentWtAndLabel}$ transforms a weighted $\omega$-automaton $\mathcal{A}$ into a B\"uchi automaton $\hat{\mathcal{A}}$ by converting transition $\tau = (s, a, t )$ with weight $\gamma(\tau)$ in $ \mathcal{A}$ to transition $\hat{\tau} = (s, (a, \gamma(\tau), l), t)$ in $\hat{\mathcal{A}}$, where $l$ is a unique label assigned to transition $\tau$.
The word $\hat{\rho} = (a_0,n_0,l_0)(a_1,n_1,l_1)\dots \in \hat{\mathcal{A}}$ iff there is a run $\rho$ in $\mathcal{A}$ on word $a_0a_1\dots$ with weight sequence $n_0n_1\dots$. Labels ensure a bijection between runs in $\mathcal{A}$ and words in $\hat{\mathcal{A}}$, and every word of $\hat{\mathcal{A}}$ has a single run in $\hat{\mathcal{A}}$. Hence, the transformation of weighted $\omega$-automata $P$ and $Q$ into B\"uchi automata $\hat{P}$ and $\hat{Q}$ enables disambiguation between runs of $P$ and $Q$ (Lines~\ref{alg-line:AugmentP}-\ref{alg-line:AugmentQ}). The B\"uchi automata $\hat{P}$ and $\hat{Q}$ corresponding to the weighted $\omega$-automata $P$ and $Q$ from Figures~\ref{Fig:WA-P}-\ref{Fig:WA-Q} are given in Figures~\ref{Fig:WA-Phat}-\ref{Fig:WA-Qhat}, respectively. \begin{figure*}[t] \centering \begin{minipage}{0.28\textwidth} \centering \begin{tikzpicture}[shorten >=1pt,node distance=2.1cm,on grid,auto] \node[state,accepting, initial] (q_1) {$p_1$}; \node[state, accepting] (q_2) [right of = q_1] {$p_2$}; \path[->] (q_1) edge node {$(a,1,1)$} (q_2) (q_2) edge [loop above] node {$(a,1,2)$} (q_2); \end{tikzpicture} \caption{$\hat{P}$} \label{Fig:WA-Phat} \end{minipage} \hfill \centering \begin{minipage}{0.28\textwidth} \centering \begin{tikzpicture}[shorten >=1pt,node distance=2.1cm,on grid,auto] \node[state,accepting, initial] (q_1) {$q_1$}; \node[state, accepting] (q_2) [right of = q_1] {$q_2$}; \path[->] (q_1) edge [bend left] node {$(a,0,1)$} (q_2) edge node [below] {$(a,2,2)$} (q_2) (q_2) edge [loop above] node {$(a,1,3)$} (q_2); \end{tikzpicture} \caption{$\hat{Q}$} \label{Fig:WA-Qhat} \end{minipage} \hfill \centering \begin{minipage}{0.4\textwidth} \centering \begin{tikzpicture}[shorten >=1pt,node distance=3.4cm,on grid,auto] \node[state,accepting, initial] (s_1) {$p_1,q_1$}; \node[state, accepting] (s_2) [right of = s_1] {$p_2,q_2$}; \path[->] (s_1) edge [bend left] node {$(a,1,1,0,1)$} (s_2) (s_1) edge node [below] {$(a,1,1,2,2)$} (s_2) (s_2) edge [loop, distance=1.5cm] node [above] {$(a,1,2,1,3)$} (s_2); \end{tikzpicture}
\caption{$\hat{P}\times\hat{Q}$} \label{Fig:PhatQhat} \end{minipage} \end{figure*} \item $\mathsf{Compare}$: The output of this step is the B\"uchi automaton $\mathit{Dom}$, which contains the word ${\hat{\rho}}\in\hat{P}$ iff $\rho$ is a dominated run in $P$ w.r.t $P\subseteq_f Q$ (Lines~\ref{alg-line:Prod}-\ref{alg-line:Project}). $\mathsf{MakeProduct}(\hat{P}, \hat{Q})$ constructs $\hat{P}\times \hat{Q}$ s.t. word $(\hat{\rho_P}, \hat{\rho_Q}) \in \hat{P}\times \hat{Q}$ iff $\rho_P$ and $\rho_Q$ are runs of the same word in $P$ and $Q$ respectively (Line~\ref{alg-line:Prod}). Concretely, for transitions $\hat{\tau_P}=(s_P,(a, n_P, l_P),t_P)$ in $\hat{P}$ and $\hat{\tau_Q}=(s_Q,(a, n_Q, l_Q),t_Q)$ in $\hat{Q}$, the transition $\hat{\tau_P}\times\hat{\tau_Q}=((s_P, s_Q),(a, n_P, l_P, n_Q, l_Q),(t_P, t_Q))$ is in $ \hat{P}\times\hat{Q}$, as shown in Figure~\ref{Fig:PhatQhat}. $\mathsf{Intersect}$ intersects the weight components of $\hat{P}\times\hat{Q}$ with the comparator $\mathcal{A}_f^\leq$ (Line~\ref{alg-line:Intersect}). The resulting automaton $\mathit{DomProof}$ accepts word $(\hat{\rho_P}, \hat{\rho_Q})$ iff $f(\rho_P)\leq f(\rho_Q)$, and $\rho_P$ and $\rho_Q$ are runs on the same word in $P$ and $Q$ respectively. The result of intersecting $\hat{P}\times\hat{Q}$ with the limsup comparator $\mathcal{A}_{\mathsf{LS}}^\leq$ for relation $\leq$ (Figure~\ref{Fig:SupComparator}) is given in Figure~\ref{Fig:Intersect}. The projection of $\mathit{DomProof}$ on the words of $\hat{P}$ returns $\mathit{Dom}$, which contains the word $\hat{\rho_P}$ iff $\rho_P$ is a dominated run in $P$ w.r.t $P\subseteq_f Q$ (Line~\ref{alg-line:Project}), as shown in Figure~\ref{Fig:Dim}.
\begin{figure*}[t] \centering \begin{minipage}{0.3\textwidth} \centering \begin{tikzpicture}[shorten >=1pt,node distance=1.7cm,on grid,auto] \node[state,initial] (s_1) {$s_1$}; \node[state, accepting] (s_2) [right of = s_1] {$s_2$}; \node[state] (s_3) [below right of = s_1] {$s_3$}; \path[->] (s_1) edge node {$(1,2)$} (s_2) (s_1) edge node [below,sloped] {$(1,0)$} (s_3) (s_2) edge [loop right] node [above] {$(1,1)$} (s_2) (s_3) edge [loop right] node [above] {$(1,1)$} (s_3); \end{tikzpicture} \caption{Snippet of limsup comparator $\mathcal{A}_{\mathsf{LS}}^\leq$ for relation $\leq$} \label{Fig:SupComparator} \end{minipage} \hfill \centering \begin{minipage}{0.33\textwidth} \centering \begin{tikzpicture}[shorten >=1pt,node distance=2.9cm,on grid,auto] \node[state,accepting, initial] (s_1) {$s_1$}; \node[state, accepting] (s_2) [right of = s_1] {$s_2$}; \path[->] (s_1) edge node [below] {$(a,1,1,2,2)$} (s_2) (s_2) edge [loop, distance=1.5cm] node [above] {$(a,1,2,1,3)$} (s_2); \end{tikzpicture} \caption{$\mathsf{Intersect}$} \label{Fig:Intersect} \end{minipage} \hfill \centering \begin{minipage}{0.28\textwidth} \centering \begin{tikzpicture}[shorten >=1pt,node distance=2.1cm,on grid,auto] \node[state,accepting, initial] (q_1) {$p_1$}; \node[state, accepting] (q_2) [right of = q_1] {$p_2$}; \path[->] (q_1) edge node {$(a,1,1)$} (q_2) (q_2) edge [loop above] node {$(a,1,2)$} (q_2); \end{tikzpicture} \caption{$\mathit{Dom}$} \label{Fig:Dim} \end{minipage} \end{figure*} \item $\mathsf{DimEnsure}$: $P\subseteq_f Q$ iff $\hat{P} \equiv \mathit{Dom}$ (qualitative equivalence), since $\hat{P}$ consists of all runs of $P$ and $\mathit{Dom}$ consists of all dominated runs w.r.t $P\subseteq_f Q$ (Line~\ref{alg-line:ensure}). \end{enumerate} \begin{lem} \label{lem:dim} B\"uchi automaton $\mathit{Dom}$ consists of all dominated runs in $P$ w.r.t $P\subseteq_f Q$.
\end{lem} \begin{proof} Let $\mathcal{A}_f^\leq$ be the comparator for $\omega$-regular aggregate function $f$ and relation $\leq$, i.e., $\mathcal{A}_f^\leq$ accepts $(A,B)$ iff $f(A)\leq f(B)$. A run $\rho$ over word $w$ with weight sequence $wt$ in $P$ (or $Q$) is represented by the unique word $\hat{\rho} = (w, wt, l)$ in $\hat{P}$ (or $\hat{Q}$), where $l$ is the unique label sequence associated with the run. Since every transition carries a distinct label, $\hat{P}$ and $\hat{Q}$ are deterministic automata. Now, $\hat{P}\times\hat{Q}$ is constructed by ensuring that two transitions are combined in the product only if they read the same alphabet symbol. Therefore, if $(w, wt_1, l_1, wt_2, l_2) \in \hat{P}\times\hat{Q}$, then $\hat{\rho} = (w, wt_1, l_1)\in \hat{P}$ and $\hat{\sigma} = (w, wt_2, l_2)\in \hat{Q}$. Hence, there exist runs $\rho$ and $\sigma$ with weight sequences $wt_1$ and $wt_2$ in $P$ and $Q$, respectively. Next, $\hat{P}\times\hat{Q}$ is intersected over the weight sequences with the $\omega$-regular comparator $\mathcal{A}_f^\leq$ for aggregate function $f$ and relation $\leq$. Therefore, $(w, wt_1, l_1, wt_2, l_2) \in \mathit{DomProof}$ iff $f(wt_1) \leq f(wt_2)$, i.e., iff $\rho$ in $P$ and $\sigma$ in $Q$ are runs on the same word s.t. the aggregate weight of $\rho$ in $P$ is at most that of $\sigma$ in $Q$. Hence, $\mathit{Dom}$ consists of $\hat{\rho}$ only if $\rho$ is a dominated run in $P$ w.r.t $P\subseteq_f Q$. Every step of the construction is a two-way implication; hence, conversely, every dominated run in $P$ w.r.t $P\subseteq_f Q$ is present in $\mathit{Dom}$. \end{proof} \begin{lem} \label{Lemma:DSInclusionAlg} Given weighted $\omega$-automata $P$ and $Q$ and their $\omega$-regular comparator $\mathcal{A}_f^\leq$ for aggregate function $f$ and relation $\leq$, $\mathsf{InclusionReg}(P,Q,\mathcal{A}_f)$ returns $\mathsf{True}$ iff $P \subseteq_f Q$.
\end{lem} \begin{proof} $\hat{P}$ consists of all runs of $P$, and $\mathit{Dom}$ consists of all dominated runs in $P$ w.r.t $P\subseteq_f Q$. $P\subseteq_f Q$ holds iff every run of $P$ is dominated w.r.t $P\subseteq_f Q$. Therefore, $P\subseteq_f Q$ is given by whether $\hat{P} \equiv \mathit{Dom}$, where $\equiv$ denotes qualitative equivalence. \end{proof} Algorithm $\mathsf{InclusionReg}$ is adapted for {\em strict} quantitative inclusion $P\subset_f Q$ by repeating the same procedure with the $\omega$-regular comparator $\mathcal{A}_f^<$ for aggregate function $ f$ and relation $<$. Here, a run $\rho_P$ in $P$ on word $w\in \Sigma^\omega$ is said to be {\em dominated} w.r.t $P\subset_f Q$ if there exists a run $\rho_Q$ in $Q$ on the same word $w$ such that $wt_P(\rho_P)< wt_Q(\rho_Q)$. The same holds for quantitative equivalence $P \equiv_f Q$. We now give the complexity analysis of quantitative inclusion with $\omega$-regular comparators. \begin{thm} \label{thrm:RegularComplexity} Let $P$ and $Q$ be weighted $\omega$-automata and $\mathcal{A}_f$ be an $\omega$-regular comparator. The quantitative inclusion, strict-inclusion, and equivalence problems for an $\omega$-regular aggregate function $f$ are $\cc{PSPACE}$-complete. \end{thm} \begin{proof} All operations in $\mathsf{InclusionReg}$ until Line~\ref{alg-line:Project} are polynomial-time operations in the size of the weighted $\omega$-automata $P$, $Q$ and the comparator $\mathcal{A}_f$. Hence, $\mathit{Dom}$ is polynomial in the size of $P$, $Q$ and $\mathcal{A}_f$. Line~\ref{alg-line:ensure} solves a $\cc{PSPACE}$-complete problem. Therefore, quantitative inclusion for an $\omega$-regular aggregate function $f$ is in $\cc{PSPACE}$ in the size of the inputs $P$, $Q$, and $\mathcal{A}_f$. The $\cc{PSPACE}$-hardness of quantitative inclusion is established via reduction from the {\em qualitative} inclusion problem, which is $\cc{PSPACE}$-complete.
The formal reduction is as follows: Let $P$ and $Q$ be B\"uchi automata (with all states as accepting states). Reduce $P$, $ Q$ to weighted automata $\overline{P}$, $\overline{Q}$ by assigning a weight of 1 to each transition. Since all runs in $\overline{P}$, $\overline{Q}$ have the same weight sequence, the weight of all words in $\overline{P}$ and $\overline{Q}$ is the same for any function $f$. It is easy to see that $P \subseteq Q$ (qualitative inclusion) iff $\overline{P} \subseteq_f \overline{Q}$ (quantitative inclusion). \end{proof} Theorem~\ref{thrm:RegularComplexity} extends to weighted $\omega$-automata in which the weight of a word is the {\em infimum} of the weights of its runs. The key idea for $P\subseteq_f Q$ here is to ensure that for every run $\rho_Q$ in $Q$ there exists a run $\rho_P$ on the same word in $P$ s.t. $f(\rho_P)\leq f(\rho_Q)$. \subsection{Limit-average language and comparison} \label{Sec:LALanguage} Let $\Sigma = \{0,1,\dots, \mu\}$ be a finite alphabet with $\mu>0$. The {\em limit-average language} $\L_{LA}$ contains the sequence (word) $A \in \Sigma^{\omega}$ iff its limit-average exists. Suppose $\L_{LA}$ were $\omega$-regular; then $\L_{LA} = \bigcup_{i=0}^n U_i\cdot V_i^{\omega}$, where $U_i, V_i\subseteq \Sigma^*$ are regular languages over {\em finite} words. The limit-average of a sequence is determined by its behavior in the limit, so the limit-average of sequences in $V_i^{\omega}$ exists. Additionally, the average of all (finite) words in $V_i$ must be the same. If this were not the case, then two words in $V_i$ with unequal averages $l_1$ and $l_2$ could generate a word $w \in V_i^{\omega}$ s.t.\ the average of its prefixes oscillates between $l_1$ and $l_2$. This cannot occur, since the limit-average of $w$ exists. Let the average of the words in $V_i$ be $a_i$; then the limit-average of sequences in $V_i^{\omega}$ and $U_i \cdot V_i^{\omega}$ is also $a_i$.
This is contradictory since there are sequences with limit-average different from the $a_i$ (see appendix). Similarly, since every $\omega$-CFL is represented by $\bigcup_{i=1}^n U_i \cdot V_i^{\omega}$ for CFLs $U_i, V_i$ over finite words~\cite{cohen1977theory}, a similar argument proves that $\L_{LA}$ is not $\omega$-context-free. \begin{thm} \label{Lemma:LARegularNotExist} $\L_{LA}$ is neither an $\omega$-regular nor an $\omega$-context-free language. \end{thm} \begin{proof} We first prove that {$\L_{LA}$ is not $\omega$-regular}. Let us assume that the language $\L_{LA}$ is $\omega$-regular. Then there exists a finite number $n$ s.t. $\L_{LA} = \bigcup_{i=0}^n U_i \cdot V_i^{\omega}$, where $U_i, V_i\subseteq \Sigma^{*}$ are regular languages over finite words. For all $i \in \{0,1,\dots n\}$, the limit-average of any word in $U_i\cdot V_i^{\omega}$ is given by the suffix of the word in $V_i^{\omega}$. Since $U_i\cdot V_i^{\omega} \subseteq \L_{LA}$, the limit-average exists for all words in $U_i \cdot V_i^{\omega}$. Therefore, the limit-average of all words in $V_i^{\omega}$ must exist. From Lemma~\ref{lem:sameAverage}, we conclude that the average of all words in $V_i$ must be the same. Furthermore, from Lemma~\ref{lem:sameLA}, we know that the limit-average of all words in $V_i^{\omega}$ must be the same, say $\LA{w} = a_i$ for all $w \in V_i^{\omega}$. Then the limit-average of every word in $\L_{LA}$ is one of $a_0, a_1,\dots, a_n$. Let $a = \frac{p}{q}$ s.t.\ $p<q$ and $a \neq a_i$ for $i \in \{0,1,\dots,n\}$. Consider the word $w = (1^{p}0^{q-p})^{\omega}$. It is easy to see that $\LA{w} = a$, so $w \in \L_{LA}$. However, $w$ is not present in $\bigcup_{i=0}^n U_i \cdot V_i^{\omega}$ since the limit-average of every word in this union is one of $a_0, a_1, \dots, a_n$. This contradicts our assumption that $\L_{LA}$ is an $\omega$-regular language. Next we prove that $\L_{LA}$ is not an $\omega$-CFL.
Every $\omega$-context-free language can be written in the form of $\bigcup_{i=0}^n U_i \cdot V_i^{\omega}$ where $U_i$ and $V_i $ are context-free languages over finite words. The rest of this proof is similar to the proof for non-$\omega$-regularity of $\L_{LA}$. \end{proof} In the next section, we will define {\em prefix-average comparison} as a relaxation of limit-average comparison. To show how prefix-average comparison relates to limit-average comparison, we will require the following two lemmas: Quantifiers $\exists^{\infty}i$ and $\exists^{f}i$ denote the existence of {\em infinitely} many and {\em only finitely} many indices $i$, respectively. \begin{lem} \label{Lemma:LAExistFull} Let $A$ and $B$ be sequences s.t.~their limit average exists. If $\exists^{\infty}i, \Sum{A[0,i-1]} \geq \Sum{B[0,i-1]}$ then $\LA{A} \geq \LA{B}$. \end{lem} \begin{proof} Let the limit average of sequence $A$, $B$ be $a$, $b$ respectively. Since the limit average of $A$ and $B$ exists, for every $\epsilon >0$, there exists $N_{\epsilon}$ s.t. for all $n> N_{\epsilon}$ , $|\Av{A[0,n-1]} - a| < \epsilon$ and $|\Av{B[0,n-1]} - b| < \epsilon$. Let $a - b = k > 0$. Take $\epsilon = \frac{k}{4}$. Then for all $n>N_{\frac{k}{4}}$, since $|\Av{A[0,n-1]} - a| < \epsilon$, $|\Av{B[0,n-1]} - b| < \epsilon$ and that $a-b=k>0$, $\Av{A[0,n-1]} - \Av{B[0,n-1]} > \frac{k}{2} \implies \frac{\Sum{A[0,n-1]}}{n} - \frac{\Sum{B[0,n-1]}}{n} > \frac{k}{2} \implies \Sum{A[0,n-1]} - \Sum{B[0,n-1]} > 0$. Specifically, $\ExistInf{i}{A}{B}$. Furthermore, since there is no index greater than $N_{\frac{k}{4}}$ where $\Sum{A[0,n-1]} \leq \Sum{B[0,n-1]}$, $\ExistFin{i}{B}{A}$. \end{proof} \begin{lem} \label{Lemma:LAExistsProp} Let $A$, $B$ be sequences s.t their limit-average exists. If $\LA{A} > \LA{B}$ then $\ExistFin{i}{B}{A}$ and $\ExistInf{i}{A}{B}$. \end{lem} \begin{proof} Let the limit-average of sequence $A$, $B$ be $L_a$, $L_b$ respectively. 
Since the limit-average of both $A$ and $B$ exists, for every $\epsilon >0$ there exists $N_{\epsilon}$ s.t.\ for all $n> N_{\epsilon}$, $|\Av{A[0,n-1]} - L_a| < \epsilon$ and $|\Av{B[0,n-1]} - L_b| < \epsilon$. Let $L_a - L_b = k > 0$ and take $\epsilon = \frac{k}{4}$. Then for all $n > N_{\frac{k}{4}}$, since $|\Av{A[0,n-1]} - L_a| < \epsilon$ and $|\Av{B[0,n-1]} - L_b| < \epsilon$, we get $\Av{A[0,n-1]} - \Av{B[0,n-1]} > \frac{k}{2}$, and hence ${\Sum{A[0,n-1]}} - {\Sum{B[0,n-1]}} > 0$. In particular, $\ExistInf{i}{A}{B}$. Furthermore, since there is no index greater than $N_{\frac{k}{4}}$ where $\Sum{B[0,n-1]} > \Sum{A[0,n-1]}$, $\ExistFin{i}{B}{A}$. \end{proof} \subsection{Prefix-average comparison and comparator} \label{Sec:LAClassificationAll} The previous section relates limit-average comparison with the sums of equal-length prefixes of the sequences (Lemma~\ref{Lemma:LAExistFull}-\ref{Lemma:LAExistsProp}). The comparison criterion is based on the number of times the prefix-sum of one sequence is greater than that of the other, and it does not rely on the existence of the limit-average. Unfortunately, this criterion cannot be used for limit-average comparison since it is incomplete (Lemma~\ref{Lemma:LAExistsProp}). Specifically, for sequences $A$ and $B$ with equal limit-average it is possible that $\exists^{\infty} i, \Sum{A[0,i-1]} > \Sum{B[0,i-1]}$ and $\exists^{\infty} i, \Sum{B[0,i-1]} > \Sum{A[0,i-1]}$. Instead, we use this criterion to define {\em prefix-average comparison}. In this section, we define prefix-average comparison and explain how it relaxes limit-average comparison. Lastly, we construct the prefix-average comparator, and prove that it is not $\omega$-regular but is $\omega$-context-free. \begin{defi}[Prefix-average comparison for relation $\geq$] \label{Def:PrefixAverage} Let $A$ and $B$ be number sequences. We say $\PLA{A} \geq \PLA{B}$ if $\ExistFin{i}{B}{A}$ and $\ExistInf{i}{A}{B}$.
\end{defi} Note that, by definition, prefix-average comparison is defined only for the inequality relations $\geq$ and $\leq$, and not for the strict inequality or equality relations. Intuitively, prefix-average comparison states that $\PLA{A}\geq \PLA{B}$ if eventually the prefix-sums of $A$ are always greater than those of $B$. We use $\geq$ since the averages of the prefixes may be arbitrarily close when the difference between their prefix-sums is small. Prefix-average comparison coincides with limit-average comparison when the limit-average exists for both sequences. Definition~\ref{Def:PrefixAverage} and Lemma~\ref{Lemma:LAExistFull}-\ref{Lemma:LAExistsProp} relate limit-average comparison and prefix-average comparison: \begin{cor} When the limit-average of $A$ and $B$ exists, then \label{Coro:PLAtoLA} \begin{itemize} \item $\PLA{A}\geq\PLA{B} \implies \LA{A}\geq \LA{B}$. \item $\LA{A}>\LA{B} \implies \PLA{A}\geq \PLA{B}$. \end{itemize} \end{cor} \begin{proof} The first item follows directly from the definitions. For the second, let $\LAInf{A}$ and $\LASup{B}$ be $a$ and $b$ respectively. For all $\epsilon>0$ there exists an $N_{\epsilon}$ s.t.\ for all $n>N_{\epsilon}$, $\frac{\Sum{A[0,n-1]}}{n} > a-\epsilon$ and $\frac{\Sum{B[0,n-1]}}{n} < b + \epsilon$. Let $a-b = k>0$ and take $\epsilon=\frac{k}{4}$. Replicate the argument from Lemma~\ref{Lemma:LAExistFull} to show that there can exist only finitely many indices $i$ where ${\Sum{B[0,i-1]}} \geq {\Sum{A[0,i-1]}}$. Similarly, show there exist infinitely many prefixes where $\Sum{A[0,i-1]} > \Sum{B[0, i-1]}$. \end{proof} Therefore, limit-average comparison and prefix-average comparison return the same result on sequences for which the limit-average exists. In addition, prefix-average comparison returns intuitive results even when the limit-average may not exist. For example, suppose the limit-averages of $A$ and $B$ do not exist, but $\LAInf{A} > \LASup{B}$; then $\PLA{A} \geq \PLA{B}$. Therefore, prefix-average comparison relaxes limit-average comparison.
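As a concrete, informal illustration of the two quantifier conditions in Definition~\ref{Def:PrefixAverage}, the prefix-sums of eventually-periodic sequences can be inspected directly. The following Python sketch is ours and is not part of the formal development; it scans a finite horizon, whereas the definition quantifies over all indices.

```python
from itertools import cycle, islice

def dominance_indices(A, B, horizon):
    """1-based prefix lengths i with Sum(B[0,i-1]) > Sum(A[0,i-1]),
    scanned up to the given horizon."""
    sa = sb = 0
    witnesses = []
    for i, (a, b) in enumerate(islice(zip(A, B), horizon), start=1):
        sa, sb = sa + a, sb + b
        if sb > sa:
            witnesses.append(i)
    return witnesses

# A = (0 1 1)^omega (limit-average 2/3), B = (1 0 0)^omega (limit-average 1/3):
# the prefix-sum of B exceeds that of A only at i = 1, so only finitely many
# indices witness B over A, consistent with PLA(A) >= PLA(B).
print(dominance_indices(cycle([0, 1, 1]), cycle([1, 0, 0]), 300))  # [1]
```

For a periodic pair, checking one period's net drift beyond the preperiod would suffice; the horizon scan above is merely the simplest illustration.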
The rest of this section describes the {\em prefix-average comparator for relation $\geq$}, denoted by $\mathcal{A}_{\mathsf{PA}}^\geq$, an automaton that accepts the pair $(A,B)$ of sequences iff $\PLA{A}\geq \PLA{B}$. \begin{lem} \label{Lemma:PumpingLemmaRegular} \textbf{(Pumping Lemma for $\omega$-regular languages~\cite{alur2009omega})} Let $L$ be an $\omega$-regular language. There exists $p \in \mathbb{N}$ such that, for each $w = u_1w_1u_2w_2 \dots \in L$ such that $|w_i| \geq p$ for all $i$, there are sequences of finite words $(x_i)_{i\in \mathbb{N}}$, $(y_i)_{i\in \mathbb{N}}$, $(z_i)_{i\in \mathbb{N}}$ s.t., for all $i$, $w_i = x_iy_iz_i$, $|x_iy_i| \leq p$ and $|y_i| > 0$, and for every sequence of pumping factors $(j_i)_{i\in \mathbb{N}} \in \mathbb{N}$, the pumped word $u_1 x_1 y_1^{j_1}z_1 u_2 x_2 y_2^{j_2}z_2 \dots \in L$. \end{lem} \begin{thm} \label{Lemma:LANotRegular} The prefix-average comparator for $\geq$ is not $\omega$-regular. \end{thm} \begin{proof}[Proof Sketch] We use Lemma~\ref{Lemma:PumpingLemmaRegular} to prove that $\mathcal{A}_{\mathsf{PA}}^\geq$ is not $\omega$-regular. Suppose $\mathcal{A}_{\mathsf{PA}}^\geq$ were $\omega$-regular, and let $p \in \mathbb{N}$, $p>0$, be its pumping constant. Let $w = (A,B) = ((0,1)^p(1,0)^{2p})^{\omega}$. The segments $(0,1)^p$ can be pumped s.t.\ the resulting word is no longer in $\L(\mathcal{A}_{\mathsf{PA}}^\geq)$. Concretely, $A = (0^p 1^{2p})^{\omega}$, $B = (1^p0^{2p})^{\omega}$, $\LA{A} = \frac{2}{3}$, $\LA{B} = \frac{1}{3}$. So, $w = (A,B) \in \L(\mathcal{A}_{\mathsf{PA}}^\geq)$. Select as factor $w_i$ (from Lemma~\ref{Lemma:PumpingLemmaRegular}) the segment $(0,1)^p$. Pump each $y_i$ enough times so that the resulting word is $\hat{w} = (\hat{A}, \hat{B}) = ((0,1)^{m_i}(1,0)^{2p})^{\omega}$ where $m_i>4p$. It is easy to show that $\hat{w} = (\hat{A}, \hat{B})\notin \L(\mathcal{A}_{\mathsf{PA}}^\geq)$. \end{proof} We now discuss the key ideas and sketch the construction of the prefix-average comparator.
The term {\em prefix-sum difference at $i$} denotes $\Sum{A[0,i-1]} - \Sum{B[0,i-1]}$, i.e., the difference between the sums of the $i$-length prefixes of $A$ and $B$. \paragraph{Key ideas} For sequences $A$ and $B$ to satisfy $\PLA{A}\geq\PLA{B}$, we require $\ExistFin{i}{B}{A}$ and $\ExistInf{i}{A}{B}$. This occurs iff there exists an index $N$ s.t.\ for all indices $i>N$, $\Sum{A[0,i-1]} - \Sum{B[0,i-1]}>0$. While reading a word, the prefix-sum difference is maintained by the states and the stack of the $\omega$-PDA: the states maintain whether it is positive or not, while the number of tokens in the stack equals its absolute value. The automaton non-deterministically guesses the aforementioned index $N$, beyond which it ensures that the prefix-sum difference remains positive. \paragraph{Construction sketch} The push-down comparator $\mathcal{A}_{\mathsf{PA}}^\geq$ consists of three states: (i) state $s_P$ and (ii) state $s_N$, indicating that the prefix-sum difference is positive or not, respectively, and (iii) the accepting state $s_F$. An execution of $(A,B)$ begins in state $s_N$ with an empty stack. On reading letter $(a,b)$, the automaton pushes or pops $|a-b|$ tokens depending on the current state of the execution. From state $s_P$, the stack pushes tokens if $(a-b)>0,$ and pops otherwise. The opposite occurs in state $s_N$. A state transition between $s_N$ and $s_P$ occurs only if the stack action is to pop but the stack contains only $k < |a-b|$ tokens. In this case, the stack is emptied, the state transition is performed, and $|a-b|-k$ tokens are pushed onto the stack. For an execution of $(A,B)$ to be an accepting run, the automaton non-deterministically transitions into state $s_F$. State $s_F$ acts like state $s_P$ except that the execution is terminated if there are not enough tokens to pop from the stack. $\mathcal{A}_{\mathsf{PA}}^\geq$ accepts by accepting state.
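Since the stack of $\mathcal{A}_{\mathsf{PA}}^\geq$ holds copies of a single token $\alpha$, a run over a fixed word can be simulated with a plain integer counter whose sign plays the role of the state. The following Python sketch is an informal illustration of this counter abstraction over a finite horizon (the function name and the horizon truncation are ours; the actual automaton guesses the jump to $s_F$ non-deterministically).

```python
from itertools import cycle, islice

def last_nonpositive_index(pairs, horizon):
    """Track the counter abstraction of the comparator: an integer equal to
    Sum(A[0,i-1]) - Sum(B[0,i-1]); its absolute value is the stack height and
    its sign tells whether the run is in s_P (positive) or s_N (otherwise).
    Returns the last 1-based index at which the run sits in s_N, i.e. the
    earliest point after which a run may jump to s_F and stay there."""
    diff = 0
    last = None
    for i, (a, b) in enumerate(islice(pairs, horizon), start=1):
        diff += a - b          # net effect of pushing/popping |a - b| tokens
        if diff <= 0:          # configuration corresponds to state s_N
            last = i
    return last

# For A = (0 1 1)^omega and B = (1 0 0)^omega the run visits s_N only at
# indices 1, 2 and 4; after that the prefix-sum difference stays positive.
print(last_nonpositive_index(zip(cycle([0, 1, 1]), cycle([1, 0, 0])), 300))  # 4
```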
To see why the construction is correct, it is sufficient to prove that at each index $i$, the number of tokens in the stack is equal to $|\Sum{A[0,i-1]} - \Sum{B[0,i-1]}|$. Furthermore, in state $s_N$, $\Sum{A[0,i-1]} - \Sum{B[0,i-1]}\leq 0$, and in states $s_P$ and $s_F$, $\Sum{A[0,i-1]} - \Sum{B[0,i-1]}>0$. Next, the index at which the automaton transitions to the accepting state $s_F$ coincides with index $N$. The execution is accepted if it has an infinite run in state $s_F$, which allows transitions only if $\Sum{A[0,i-1]} - \Sum{B[0,i-1]} > 0$. \paragraph{Construction} We provide a sketch of the construction of the B\"uchi push-down automaton $\mathcal{A}_{\mathsf{PA}}^\geq$, and then prove that it corresponds to the prefix-average comparator. Let $\mu$ be the bound on sequences. Then $\Sigma = \{0,1,\dots, \mu\}$ is the alphabet of sequences. Let $ \mathcal{A}_{\mathsf{PA}}^\geq = (\mathit{S}, \Sigma\times\Sigma, \Gamma, \delta, s_0, Z_0)$ where: \begin{itemize} \item $\mathit{S} = \{s_N, s_P, s_F\}$ is the set of states of the automaton. \item $\Sigma\times\Sigma$ is the alphabet of the language. \item $\Gamma = \{Z_0, \alpha\}$ is the push-down alphabet. \item $s_0 = s_N$ is the start state of the push-down automaton. \item $Z_0$ is the start symbol of the stack. \item $s_F$ is the accepting state of the automaton. Automaton $\mathcal{A}_{\mathsf{PA}}^\geq$ accepts words by final state. \item Here we give a sketch of the behavior of the transition function $\delta$ on reading letter $(a,b)$. \begin{itemize} \item When $\mathcal{A}_{\mathsf{PA}}^\geq$ is in configuration $(s_P, \tau)$ for $\tau \in \Gamma$, push $a$ number of $\alpha$-s into the stack. Next, pop $b$ number of $\alpha$-s. If after popping $k$ $\alpha$-s where $k<b$, the PDA's configuration becomes $(s_P, Z_0)$, then first move to state $(s_N, Z_0)$ and then resume with pushing $ b-k$ $\alpha$-s into the stack.
\item When $\mathcal{A}_{\mathsf{PA}}^\geq$ is in configuration $(s_N, \tau)$ for $\tau \in \Gamma$, push $b$ number of $\alpha$-s into the stack. Next, pop $a$ number of $\alpha$-s. If after popping $k$ $\alpha$-s where $k<a$, the PDA's configuration becomes $(s_N, Z_0)$, then first move to state $(s_P, Z_0)$ and then resume with pushing $ a-k$ $\alpha$-s into the stack. \item When $\mathcal{A}_{\mathsf{PA}}^\geq$ is in configuration $(s_P, \tau)$ for $\tau \neq Z_0$, the automaton may non-deterministically first move to configuration $(s_F, \tau)$ and then push $a$ number of $\alpha$-s and pop $b$ number of $\alpha$-s. Note that there are no provisions for popping $\alpha$ if the stack hits $Z_0$ along this transition. \item When $\mathcal{A}_{\mathsf{PA}}^\geq$ is in configuration $(s_F, \tau)$ for $\tau \neq Z_0$, push $a$ $\alpha$-s then pop $b$ $\alpha$-s. Note that there are no provisions for popping $\alpha$ if the stack hits $Z_0$ along this transition. \end{itemize} \end{itemize} \begin{lem} \label{Lemma:PAComparator} Push-down automaton $\mathcal{A}_{\mathsf{PA}}^\geq$ accepts a pair of sequences $(A,B)$ iff $\PLA{A} \geq \PLA{B}$. \end{lem} \begin{proof}[Proof sketch] To prove this statement, it is sufficient to demonstrate that $\mathcal{A}_{\mathsf{PA}}^\geq$ accepts a pair of sequences $(A,B)$ iff there are only finitely many indices where $\Sum{B[0,i-1]} > \Sum{A[0,i-1]}$. On $\mathcal{A}_{\mathsf{PA}}^\geq$ this corresponds to the condition that the PDA is in state $s_N$ only finitely many times during the run of $(A,B)$. This is ensured by the push-down automaton since the word can be accepted only in state $s_F$ and the only transitions from $s_F$ remain in $s_F$. Therefore, every word that is accepted by $\mathcal{A}_{\mathsf{PA}}^\geq$ satisfies the condition $\ExistFin{i}{B}{A}$. Conversely, for every word $(A,B)$ that satisfies $\ExistFin{i}{B}{A}$ there is a point, call it index $k$, such that for all indices $m>k$, $\Sum{B[0,m-1]} \ngtr \Sum{A[0,m-1]}$.
If a run of $(A,B)$ switches to $s_F$ after this index $k$, then it will be accepted by the push-down automaton. Since $\mathcal{A}_{\mathsf{PA}}^\geq$ allows a non-deterministic move to $(s_F, \tau)$ from $(s_P, \tau)$, the run of $(A,B)$ will always be able to move to $s_F$ after index $k$. Hence, every $(A,B)$ satisfying $\ExistFin{i}{B}{A}$ will be accepted by $\mathcal{A}_{\mathsf{PA}}^\geq$. \end{proof} \begin{thm} \label{Thrm:PAComparator} The prefix-average comparator for relation $\geq$ is an $\omega$-CFL. \end{thm} While $\omega$-CFLs are easy to express, they lack the closure properties of $\omega$-regular languages, and many decision problems on $\omega$-CFLs are undecidable. Hence, the application of $\omega$-context-free comparators will require further investigation. \subsection{Discounted-sum comparator} \input{Section/DS} \subsection{$\omega$-Regular aggregate functions} \label{Sec:RegularAggFunctions} This section draws out the relationship between $\omega$-regular aggregate functions and $\omega$-regular comparators. We begin by showing that an $\omega$-regular aggregate function entails $\omega$-regular comparators for that function. \begin{thm} \label{thrm:functionthencomparator} Let $\mu>0$ be the upper-bound on weight sequences, and $\beta\geq 2$ be the integer base. Let $f: \{0,1,\dots,\mu\}^\omega\rightarrow \Re$ be an aggregate function. If aggregate function $f$ is $\omega$-regular under base $\beta$, then its comparator for all inequality and equality relations is also $\omega$-regular. \end{thm} \begin{proof} We show that if an aggregate function is $\omega$-regular under base $\beta$, then its comparator for relation $>$ is $\omega$-regular. By closure properties of $\omega$-regular comparators, this implies that comparators of the aggregate function are $\omega$-regular for all inequality and equality relations.
First, we prove that for a given integer base $\beta\geq 2$ there exists an automaton $\mathcal{A}_\beta$ such that for all $a,b\in \Re$, $\mathcal{A}_\beta$ accepts $(\mathsf{rep}(a,\beta), \mathsf{rep}(b, \beta))$ iff $a>b$. Let $a,b\in\Re$, and $\beta\geq 2$ be an integer base. Let $\mathsf{rep}(a,\beta) = \mathsf{sign}_a\cdot(\mathsf{Int}(a, \beta), \mathsf{Frac}(a, \beta))$ and $ \mathsf{rep}(b,\beta) = \mathsf{sign}_b\cdot(\mathsf{Int}(b, \beta), \mathsf{Frac}(b, \beta))$. Then, the following statements can be proven by simple evaluation from the definitions: \begin{itemize} \item When $\mathsf{sign}_a =+$ and $\mathsf{sign}_b = -$. Then $a > b$. \item When $\mathsf{sign}_a = \mathsf{sign}_b = +$ \begin{itemize} \item If $\mathsf{Int}(a, \beta) \neq \mathsf{Int}(b, \beta)$: Since $\mathsf{Int}(a, \beta)$ and $\mathsf{Int}(b, \beta)$ eventually contain only the digit $0$, they are eventually identical; hence there exists an index $i$ that is the last position where $\mathsf{Int}(a, \beta)$ and $\mathsf{Int}(b, \beta)$ differ. If $\mathsf{Int}(a, \beta)[i]>\mathsf{Int}(b, \beta)[i]$, then $a > b$. If $\mathsf{Int}(a, \beta)[i]<\mathsf{Int}(b, \beta)[i]$, then $a < b$. \item If $\mathsf{Int}(a, \beta) = \mathsf{Int}(b, \beta)$ but $\mathsf{Frac}(a, \beta) \neq \mathsf{Frac}(b, \beta)$: Let $i$ be the first index where $\mathsf{Frac}(a, \beta)$ and $\mathsf{Frac}(b, \beta)$ differ. If $\mathsf{Frac}(a, \beta)[i] > \mathsf{Frac}(b, \beta)[i]$ then $a > b$. If $\mathsf{Frac}(a, \beta)[i] < \mathsf{Frac}(b, \beta)[i]$ then $a < b$. \item Finally, if $\mathsf{Int}(a, \beta) = \mathsf{Int}(b, \beta)$ and $\mathsf{Frac}(a, \beta) = \mathsf{Frac}(b, \beta)$: Then $a = b$. \end{itemize} \item When $\mathsf{sign}_a = \mathsf{sign}_b = -$ \begin{itemize} \item If $\mathsf{Int}(a, \beta) \neq \mathsf{Int}(b, \beta)$: Since $\mathsf{Int}(a, \beta)$ and $\mathsf{Int}(b, \beta)$ eventually contain only the digit $0$, they are eventually identical.
Therefore, there exists an index $i$ that is the last position where $\mathsf{Int}(a, \beta)$ and $\mathsf{Int}(b, \beta)$ differ. If $\mathsf{Int}(a, \beta)[i]>\mathsf{Int}(b, \beta)[i]$, then $a < b$. If $\mathsf{Int}(a, \beta)[i]<\mathsf{Int}(b, \beta)[i]$, then $a > b$. \item If $\mathsf{Int}(a, \beta) = \mathsf{Int}(b, \beta)$ but $\mathsf{Frac}(a, \beta) \neq \mathsf{Frac}(b, \beta)$: Let $i$ be the first index where $\mathsf{Frac}(a, \beta)$ and $\mathsf{Frac}(b, \beta)$ differ. If $\mathsf{Frac}(a, \beta)[i] > \mathsf{Frac}(b, \beta)[i]$ then $a < b$. If $\mathsf{Frac}(a, \beta)[i] < \mathsf{Frac}(b, \beta)[i]$ then $a > b$. \item Finally, if $\mathsf{Int}(a, \beta) = \mathsf{Int}(b, \beta)$ and $\mathsf{Frac}(a, \beta) = \mathsf{Frac}(b, \beta)$: Then $a = b$. \end{itemize} \item When $\mathsf{sign}_a =-$ and $\mathsf{sign}_b = +$. Then $a < b$. \end{itemize} Since the conditions given above are exhaustive and mutually exclusive, we conclude the following: for all $a,b\in\Re$ and integer base $\beta\geq 2$, with $\mathsf{rep}(a,\beta) = \mathsf{sign}_a\cdot(\mathsf{Int}(a, \beta), \mathsf{Frac}(a, \beta))$ and $ \mathsf{rep}(b,\beta) = \mathsf{sign}_b\cdot(\mathsf{Int}(b, \beta), \mathsf{Frac}(b, \beta))$, we have $a>b$ iff one of the following conditions occurs: \begin{enumerate} \item $\mathsf{sign}_a =+$ and $\mathsf{sign}_b = -$. \item $\mathsf{sign}_a = \mathsf{sign}_b = +$, $\mathsf{Int}(a, \beta) \neq \mathsf{Int}(b, \beta)$, and $\mathsf{Int}(a, \beta)[i]>\mathsf{Int}(b, \beta)[i]$ when $i$ is the last index where $\mathsf{Int}(a, \beta)$ and $\mathsf{Int}(b, \beta)$ differ. \item $\mathsf{sign}_a = \mathsf{sign}_b = +$, $\mathsf{Int}(a, \beta) = \mathsf{Int}(b, \beta)$, $\mathsf{Frac}(a, \beta) \neq \mathsf{Frac}(b, \beta)$, and $\mathsf{Frac}(a, \beta)[i]>\mathsf{Frac}(b, \beta)[i]$ when $i$ is the first index where $\mathsf{Frac}(a, \beta)$ and $\mathsf{Frac}(b, \beta)$ differ.
\item $\mathsf{sign}_a = \mathsf{sign}_b = -$, $\mathsf{Int}(a, \beta) \neq \mathsf{Int}(b, \beta)$, and $\mathsf{Int}(a, \beta)[i]<\mathsf{Int}(b, \beta)[i]$ when $i$ is the last index where $\mathsf{Int}(a, \beta)$ and $\mathsf{Int}(b, \beta)$ differ. \item $\mathsf{sign}_a = \mathsf{sign}_b = -$, $\mathsf{Int}(a, \beta) = \mathsf{Int}(b, \beta)$, $\mathsf{Frac}(a, \beta) \neq \mathsf{Frac}(b, \beta)$, and $\mathsf{Frac}(a, \beta)[i]<\mathsf{Frac}(b, \beta)[i]$ when $i$ is the first index where $\mathsf{Frac}(a, \beta)$ and $\mathsf{Frac}(b, \beta)$ differ. \end{enumerate} Note that each of these five conditions can be easily expressed by a B\"uchi automaton over alphabet $\mathsf{AlphaRep}(\beta)$ for an integer $\beta\geq 2$. For an integer $\beta\geq 2$, the union of all these B\"uchi automata will result in a B\"uchi automaton $\mathcal{A}_\beta$ such that for all $a,b\in \Re$ and $A = \mathsf{rep}(a, \beta)$ and $B = \mathsf{rep}(b,\beta)$, $a>b$ iff the interleaved word $(A,B) \in \L(\mathcal{A}_\beta)$. Now we come to the main part of the proof. Let $f:\Sigma^\omega\rightarrow \Re$ be an $\omega$-regular aggregate function with aggregate function automaton $\mathcal{A}_f$. We will construct an $\omega$-regular comparator for $f$ with relation $>$. Note that $(X,Y)$ is present in the comparator iff $(X, M), (Y,N)\in \L(\mathcal{A}_f)$ for $M,N \in \mathsf{AlphaRep}(\beta)^\omega$ and $(M,N) \in \L(\mathcal{A}_\beta)$, for $\mathcal{A}_\beta$ as described above. Since $\mathcal{A}_f$ and $\mathcal{A}_\beta$ are both B\"uchi automata, the comparator for function $f$ with relation $>$ is also a B\"uchi automaton. Therefore, the comparator for aggregate function $f$ with relation $> $ is $\omega$-regular. \end{proof} The converse direction, i.e., whether an $\omega$-regular comparator for an aggregate function $f$ for all inequality and equality relations entails that $f$ is $\omega$-regular under an integer base $\beta\geq 2$, is trickier.
For all aggregate functions considered in this paper, we see that whenever the comparator is $\omega$-regular, the aggregate function is $\omega$-regular as well. However, the proofs for this have been done on a case-by-case basis, and we do not have an algorithmic procedure to derive a function (B\"uchi) automaton from its $\omega$-regular comparator. We also do not have an example of an aggregate function for which the comparator is $\omega$-regular but the function is not. Therefore, we arrive at the following conjecture: \begin{conj} \label{Conjecture:comparatortofunction} Let $\mu>0$ be the upper-bound on weight sequences, and $\beta\geq 2$ be the integer base. Let $f: \{0,1,\dots,\mu\}^\omega\rightarrow \Re$ be an aggregate function. If the comparator for an aggregate function $f$ is $\omega$-regular for all inequality and equality relations, then $f$ is also $\omega$-regular under base $\beta$. \end{conj} \input{Section/Inclusion.tex} \label{Sec:Represent} \input{Section/Represent.tex} \input{Section/ImperfectInfo.tex} \subsection{Incomplete-information quantitative games} \label{Sec:GraphGames} Given an incomplete-information quantitative game $\mathcal{G} = (S,s_{\mathcal{I}},\mathit{Obs}, \Sigma, \delta, \gamma, f)$, our objective is to determine whether player $P_0$ has a winning strategy $ \alpha: \mathit{Obs}^* \rightarrow \Sigma$ for $\omega$-regular aggregate function $f$. We assume we are given the $\omega$-regular comparator $\mathcal{A}_f$ for function $f$. Note that a function $A^*\rightarrow B$ can be treated like a $B$-labeled $A$-tree, and vice-versa. Hence, we proceed by finding a $\Sigma$-labeled $\mathit{Obs}$-tree -- the {\em winning strategy tree}. Every branch of a winning strategy-tree is an {\em observed play} $o_{\rho}$ of $\mathcal{G}$ for which every actual play $\rho$ is a winning play for $P_0$.
We first consider all {\em game trees} of $\mathcal{G}$ by interpreting $\mathcal{G}$ as a tree-automaton over $\Sigma$-labeled $S$-trees. Nodes $n \in S^*$ of the game-tree correspond to states in $S$ and are labeled by actions in $\Sigma$ taken by player $P_0$. Thus, the {\em root node} $\varepsilon$ corresponds to $s_\mathcal{I}$, and a node $s_{i_0},\ldots,s_{i_k}$ corresponds to the state $s_{i_k}$ reached via $s_{\mathcal{I}},s_{i_0},\ldots,s_{i_{k-1}}$. Consider now a node $x$ corresponding to state $s$ and labeled by an action $\sigma$. Then $x$ has children $x s_1,\ldots, x s_n$, for every $s_i \in S$. If $s_i\in \delta(s,\sigma)$, then we call $x s_i$ a \emph{valid} child, otherwise we call it an \emph{invalid} child. Branches that contain invalid children correspond to invalid plays. A game-tree $\tau$ is a {\em winning tree} for player $P_0$ if every branch of $\tau$ is either a winning play for $P_0$ or an invalid play of $\mathcal{G}$. One can check, using an automaton, whether a play is invalid by the presence of invalid children. Furthermore, the winning condition for $P_0$ can be expressed by the $\omega$-regular comparator $\mathcal{A}_f$ that accepts $(A,B)$ iff $f(A) > f(B)$. To use the comparator $ \mathcal{A}_f$, it is determinized to a deterministic parity automaton $D_f$. Thus, the product of the game $\mathcal{G}$ with $D_f$ is a deterministic parity tree-automaton accepting precisely the winning trees for player $P_0$. Winning trees for player $P_0$ are $\Sigma$-labeled $S$-trees. We need to convert them to $\Sigma$-labeled $\mathit{Obs}$-trees. Recall that every state has a unique observation. We can simulate these $\Sigma$-labeled $S$-trees on strategy trees using the technique of {\em thinning} states $S$ to observations $\mathit{Obs}$~\cite{kupferman2000synthesis}.
The resulting alternating parity tree automaton $\mathcal{M}$ will accept a $\Sigma$-labeled $\mathit{Obs}$-tree $\tau_o$ iff for every actual game-tree $\tau$ of $\tau_o$, $\tau$ is a winning-tree for $P_0$ with respect to the strategy $\tau_o$. The problem of existence of a winning strategy for $P_0$ is then reduced to non-emptiness checking of $\mathcal{M}$. \begin{thm} \label{Thrm:GameIncomplete} Given an incomplete-information quantitative game $\mathcal{G}$ and $\omega$-regular comparator $\mathcal{A}_f$ for the aggregate function $f$, the complexity of determining whether $P_0$ has a winning strategy is exponential in ${|\mathcal{G}| \cdot |D_f|} $, where $|D_f| = |\mathcal{A}_f|^{O(|\mathcal{A}_f|)}$. \end{thm} Since $D_f$ is the deterministic parity automaton equivalent to $\mathcal{A}_f$, $|D_f|= |\mathcal{A}_f|^{O(|\mathcal{A}_f|)}$. The thinning operation is linear in $|\mathcal{G}\times D_f|$, therefore $|\mathcal{M}| = |\mathcal{G}|\cdot| D_f|$. Non-emptiness checking of alternating parity tree automata is exponential. Therefore, our procedure is doubly exponential in the size of the comparator and exponential in the size of the game. The question of tighter bounds is open. \section{Discounted-sum comparator} \label{Sec:DiscountedSum} The discounted-sum of an infinite sequence $A$ with discount-factor $d >1$, denoted by $\DSum{A}{d}$, is defined as $\Sigma_{i=0}^{\infty} A[i] / d^{i}$, and the discounted-sum of a finite sequence $A$ is $\Sigma_{i=0}^{|A|-1} A[i] / d^{i}$. The discounted-sum comparator (DS-comparator, in short) for discount-factor $d$ and relation $R$, denoted by $ \mathcal{A}_{\DSsucceqFord{d}}^R$, accepts a pair $ (A, B) $ of (infinite-length) weight sequences iff $ \DSum{A}{d}$ $R$ $\DSum{B}{d} $. We investigate properties of the DS-comparator, and show that the DS-comparator is $\omega$-regular iff the discount-factor $d>1$ is an integer.
We also show that the discounted-sum aggregate function is $\omega$-regular iff the discount-factor is an integer. Finally, we show the repercussions of the above results on quantitative inclusion with the discounted-sum aggregate function (DS-inclusion, in short). Section~\ref{Sec:RationalDF} and Section~\ref{Sec:IntegerDF} deal with non-integer rational discount-factors and integer discount-factors, respectively. \subsection{Non-integer, rational discount-factor} \label{Sec:RationalDF} For a weighted $\omega$-automaton $\mathcal{A}$ and a real number $r \in \Re$, the {\em cut-point language} of $\mathcal{A}$ w.r.t. $r$ is defined as $L^{\geq r} = \{w \in L(\mathcal{A}) | wt_\mathcal{A}(w) \geq r \}$~\cite{chatterjee2009expressiveness}. When the discount-factor is a rational value $1<d<2$, it is known that not every deterministic weighted $\omega$-automaton with discounted-sum aggregate function (DS-automaton, in short) has an $\omega$-regular cut-point language for every $r \in \Re$~\cite{chatterjee2009expressiveness}. In this section, we extend this result to all non-integer, rational discount-factors $d>1$. Finally, we use the extension to show that the DS-comparator for non-integer, rational discount-factors $d>1$ is not $\omega$-regular. Consequently, this also proves that discounted-sum is not an $\omega$-regular aggregate function when its discount-factor is a non-integer rational number. Note that the result on cut-point languages from~\cite{chatterjee2009expressiveness} motivated the proof of impossibility of determinization of weighted $\omega$-automata with discounted-sum aggregate function with non-integer discount-factors~\cite{boker2014exact}. However, the direct link with cut-point languages is not made explicit there. Here, we give a simpler extension of the proof in~\cite{chatterjee2009expressiveness} to generalize it to all non-integer rational discount-factors.
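For ultimately periodic words, both the discounted-sum and membership in a cut-point language can be computed with exact rational arithmetic, which makes the definitions above easy to experiment with. The following Python sketch is ours and is purely illustrative (the helper names `ds_finite` and `ds_lasso` are not from the formal development); it uses $\DSum{v^{\omega}}{d} = \DSum{v}{d}/(1 - d^{-|v|})$.

```python
from fractions import Fraction

def ds_finite(w, d):
    """Discounted sum of a finite word w: sum of w[i] / d**i."""
    return sum(Fraction(x) / d**i for i, x in enumerate(w))

def ds_lasso(u, v, d):
    """Exact discounted sum of the ultimately periodic word u . v^omega."""
    periodic = ds_finite(v, d) / (1 - d**-len(v))   # DS(v^omega, d)
    return ds_finite(u, d) + periodic / d**len(u)

d = Fraction(5, 2)   # a non-integer, rational discount-factor
r = 2                # a sample cut-point, here ceil(d) - 1
# Two infinite extensions of the same finite word "1": one lands in the
# cut-point language L^{>= r} of a DS-automaton whose weights equal its
# letters, the other does not.
print(ds_lasso([1], [2], d) >= r, ds_lasso([1], [0], d) >= r)  # True False
```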
\begin{thm} \label{lem:cutpoint} For all non-integer, rational discount-factors $d>1$, there exists a deterministic discounted-sum automaton $\mathcal{A}$ and a rational value $r \in \Re$ for which the cut-point language is not $\omega$-regular. \end{thm} \begin{proof} Since the proof for $1<d<2$ has been presented in~\cite{chatterjee2009expressiveness}, we skip that case. The proof presented here extends the earlier result on $1<d<2$ from~\cite{chatterjee2009expressiveness} to all non-integer, rational discount-factors $d>1$. Let $d>2$ be a non-integer, rational discount-factor. Define a deterministic discounted-sum automaton $\mathcal{A}$ over the alphabet $\{0,1,\dots, \ceil{d}-1\}$ such that the weight of transitions on letter $n \in \{0,1,\dots, \ceil{d}-1\}$ is $n$. Therefore, the weight of a word $w$ in $\mathcal{A}$ is $\DSum{w}{d}$. Consider its cut-point language $L^{\geq (\ceil{d}-1)}$. We say a finite-length word $w$ is {\em ambiguous} iff $\ceil{d}-1 - \frac{1}{d^{|w|-1}} \cdot \frac{\ceil{d}-1}{d-1} \leq \DSum{w}{d} < \ceil{d}-1$. Intuitively, a finite word $w$ is ambiguous if it can be extended into infinite words in $\mathcal{A}$ such that some extensions lie in $L^{\geq (\ceil{d}-1)}$ and some do not. Clearly, the finite word $w = \ceil{d}-2$ is ambiguous. Note that when $d>2$ is a non-integer rational, then $\frac{\ceil{d}-1}{d-1} > 1$, so the word $\ceil{d}-2$ falls within the range for ambiguity. We claim that if a finite word $w$ is ambiguous, then either $w\cdot (\ceil{d}-2)$ or $w\cdot (\ceil{d}-1)$ is ambiguous. To prove ambiguity, we need to show that $\ceil{d}-1 - \frac{1}{d^{|w|}} \cdot \frac{\ceil{d}-1}{d-1}\leq \DSum{w\cdot (\ceil{d}-k)}{d} < \ceil{d}-1$, for $k \in \{1,2\}$. Note that $\DSum{w\cdot (\ceil{d}-2)}{d} = \DSum{w}{d} + \frac{1}{d^{|w|}}\cdot (\ceil{d}-2)$, and $\DSum{w\cdot (\ceil{d}-1)}{d} = \DSum{w}{d} + \frac{1}{d^{|w|}}\cdot (\ceil{d}-1)$.
Simplifying the expressions for $w\cdot (\ceil{d}-1)$ and $w\cdot (\ceil{d}-2)$, we need to prove either $\ceil{d}-1 - \frac{1}{d^{|w|}} \cdot \frac{\ceil{d}-1}{d-1} - \frac{1}{d^{|w|}}\cdot(\ceil{d}-1)\leq \DSum{w}{d} < \ceil{d}-1 - \frac{1}{d^{|w|}}\cdot (\ceil{d}-1)$ or $\ceil{d}-1 - \frac{1}{d^{|w|}} \cdot \frac{\ceil{d}-1}{d-1} - \frac{1}{d^{|w|}}\cdot(\ceil{d}-2)\leq \DSum{w}{d} < \ceil{d}-1 - \frac{1}{d^{|w|}}\cdot (\ceil{d}-2)$. Now, this is true if $\ceil{d}-1 - \frac{1}{d^{|w|}}\cdot\frac{\ceil{d}-1}{d-1} - \frac{1}{d^{|w|}}\cdot(\ceil{d}-2) < \ceil{d}-1 - \frac{1}{d^{|w|}}\cdot (\ceil{d}-1)$, which simplifies to $\frac{\ceil{d}-1}{d-1} > 1$, i.e., $\ceil{d} > d$; this holds since $d>2$ is a non-integer, rational discount-factor. Therefore, every ambiguous finite word can be extended to another ambiguous word. This means there exists an infinite word $w^\geq$ such that $\DSum{w^\geq}{d} = \ceil{d}-1$ and all finite prefixes of $w^\geq$ are ambiguous. Let us assume that the language $L^{\geq (\ceil{d}-1)}$ is $\omega$-regular and represented by a B\"uchi automaton $\mathcal{B}$. For $n<m$, let the $n$- and $m$-length prefixes of $w^{\geq}$, denoted $w^{\geq}[0,n-1]$ and $w^{\geq}[0,m-1]$, respectively, be such that they reach the same states in $\mathcal{B}$; such $n<m$ exist by the pigeonhole principle. Then there exists an infinite-length word $w_s$ such that $\DSum{w^{\geq}[0,n-1]\cdot w_s}{d} = \DSum{w^{\geq}[0,m-1]\cdot w_s}{d} = \ceil{d}-1$. Now, $\DSum{w^{\geq}[0,n-1]\cdot w_s}{d} = \DSum{w^{\geq}[0,n-1]}{d} + \frac{1}{d^n}\cdot\DSum{w_s}{d}$ and $\DSum{w^{\geq}[0,m-1]\cdot w_s}{d} = \DSum{w^{\geq}[0,m-1]}{d} + \frac{1}{d^m}\cdot\DSum{w_s}{d}$. Eliminating $\DSum{w_s}{d}$ from the equations and simplifying, we get: \[ d^{m-1}\cdot(\DSum{w^{\geq}[0,m-1]}{d} - (\ceil{d}-1)) - d^{n-1}\cdot(\DSum{w^{\geq}[0,n-1]}{d} - (\ceil{d}-1)) = 0 \] The above is a polynomial over $d$ with degree $m-1$ and integer coefficients. Specifically, $d = \frac{p}{q} > 2$ such that integers $p,q>1$, and $p$ and $q$ are coprime.
Since $d =\frac{p}{q}$ is a root of the above equation, $q$ must divide the coefficient of the highest-degree term, which here has degree $m-1$. The coefficient of the highest-degree term in the polynomial above is $(w^{\geq}[0] - (\ceil{d}-1))$. Recall from the construction of $w^{\geq}$ above that $w^{\geq}[0] = \ceil{d}-2$. So the coefficient of the highest-degree term is $-1$, which is not divisible by the integer $q>1$. Contradiction. \end{proof} Finally, we use Theorem~\ref{lem:cutpoint} to prove that the discounted-sum comparator is not $\omega$-regular when the discount-factor $d>1$ is a non-integer, rational number. \begin{thm} \label{Thrm:DSNotRegular} DS-comparators for non-integer, rational discount-factors $d>1$, for all inequalities and equality, are not $\omega$-regular. \end{thm} \begin{proof} If the comparator for an aggregate function for any one inequality is not $\omega$-regular, then the comparators for the other inequality and equality relations are also not $\omega$-regular. Therefore, it is sufficient to prove that the discounted-sum comparator with a non-integer, rational discount-factor for relation $\geq$ is not $\omega$-regular. Let $d>1$ be a non-integer, rational discount-factor. Let $ \mathcal{A}$ be the discounted-sum automaton as described in the proof of Theorem~\ref{lem:cutpoint}. Consider its cut-point language $L^{\geq (\ceil{d}-1)}$. From Theorem~\ref{lem:cutpoint} and~\cite{chatterjee2009expressiveness}, we know that $L^{\geq (\ceil{d}-1)}$ is not an $\omega$-regular language. Suppose there exists an $\omega$-regular DS-comparator $\mathcal{A}_d^\geq$ for non-integer rational discount-factor $d>1$ for relation $\geq$. We define the B\"uchi automaton $\mathcal{P}$ s.t.\ $\L(\mathcal{P}) = \{(w,v) | w \in \L(\mathcal{A}), v = (\ceil{d}-1) \cdot 0^{\omega} \}$. Note that $\DSum{(\ceil{d}-1)\cdot 0^{\omega}}{d} = \ceil{d}-1$.
Then the cut-point language $L^{\geq (\ceil{d}-1)}$ of the deterministic discounted-sum automaton $\mathcal{A}$ can be constructed by taking the intersection of $\mathcal{P}$ with $\mathcal{A}_d^\geq$, followed by projection onto the first component. Since $\omega$-regular languages are closed under these operations, $L^{\geq (\ceil{d}-1)}$ can be represented by a B\"uchi automaton. This contradicts Lemma~\ref{lem:cutpoint}. \end{proof} \begin{thm} \label{Cor:RationalAggregate} Let $d>1$ be a non-integer, rational discount-factor. The discounted-sum aggregate function with discount-factor $d$ is not $\omega$-regular. \end{thm} \begin{proof} Immediate from Lemma~\ref{lem:cutpoint} and Theorem~\ref{thrm:functionthencomparator}. \end{proof} Since the DS-comparator for all non-integer, rational discount-factors $d>1$ is not $\omega$-regular, the $\omega$-regular-based algorithm for quantitative inclusion described in Algorithm~\ref{Alg:DSInclusion} does not apply to DS-inclusion. In fact, the decidability of DS-inclusion with non-integer, rational discount-factors is still open. \subsection{Integer discount-factor} \label{Sec:IntegerDF} In this section, we provide an explicit construction of an $\omega$-regular comparator for discounted-sum with integer discount-factors. We use this construction to prove that the discounted-sum aggregate function with integer discount-factor is $\omega$-regular. Finally, we use the $\omega$-regular DS-comparator in Algorithm~\ref{Alg:DSInclusion} to establish $\mathsf{PSPACE}$-completeness of DS-inclusion with integer discount-factors. \paragraph{Discounted-sum comparator} Let integer $\mu>0$ be the upper-bound on sequences. The core intuition is that bounded sequences can be converted to their value in base $d$ via a finite-state transducer. Lexicographic comparison of the converted sequences then renders the desired DS-comparator. Conversion of sequences to base $d$, however, requires a certain amount of {\em look-ahead} by the transducer.
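The need for this care can be seen on two toy sequences: when weights are allowed to exceed the base, digit-wise (lexicographic) comparison and value comparison can disagree, and one value can have several representations. A minimal sketch in Python (the sequences are illustrative; exact arithmetic via the standard \texttt{fractions} module):

```python
from fractions import Fraction

def dsum(seq, d):
    """Discounted sum of a finite-support weight sequence:
    sum_i seq[i] / d**i (trailing zeros contribute nothing)."""
    return sum(Fraction(a, d**i) for i, a in enumerate(seq))

d = 2
# With weights larger than the base d, the lexicographically larger
# sequence need not have the larger value:
A = [1, 0, 0]                      # lexicographically larger ...
B = [0, 3, 0]                      # ... but DSum(B, 2) = 3/2 > 1 = DSum(A, 2)
assert A > B                       # lexicographic comparison
assert dsum(A, d) < dsum(B, d)     # value comparison disagrees

# Distinct sequences can also denote the same value:
assert dsum([1, 0], d) == dsum([0, 2], d)   # both equal 1
```

This is precisely why a plain digit-by-digit transducer does not suffice and some look-ahead is unavoidable.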
Here we describe a method that directly incorporates the look-ahead into the lexicographic comparison to obtain the DS-comparator for integer discount-factor $d>1$; we construct the comparator for relation $<$, and explain the construction in detail now. For a weight sequence $A$ and integer discount-factor $d>1$, $\DSum{A}{d}$ can be interpreted as a value in base $d$, i.e., $\DSum{A}{d} = A[0] + \frac{A[1]}{d} + \frac{A[2]}{d^2} + \dots = (A[0].A[1]A[2]\dots)_d$~\cite{chaudhuri2013regular}. Unlike for the comparison of numbers in base $d$, the lexicographically larger sequence may not be larger in value since (i)~the elements of weight sequences may be larger in value than the base $d$, and (ii)~every value has multiple infinite-sequence representations. To overcome these challenges, we resort to arithmetic techniques in base $d$. Note that $ \DSum{B}{d} > \DSum{A}{d} $ iff there exists a sequence $C$ such that $\DSum{B}{d} = \DSum{A}{d} + \DSum{C}{d} $ and $\DSum{C}{d} > 0$. Therefore, to compare the discounted-sums of $A$ and $B$, we obtain such a witness sequence $ C $. Arithmetic in base $d$ also gives rise to a sequence $X$ of carry elements. Then: \begin{lem} \label{Lemma:discount-invariant1} Let $A, B, C, X$ be weight sequences and $ d > 1 $ be a positive integer such that the following equations hold: \begin{enumerate} \item \label{eq:initial} When $ i = 0 $, $ A[0] + C[0] + X[0] = B[0]$ \item \label{eq:invariant} When $ i\geq 1 $, $A[i] + C[i] + X[i] = B[i] + d \cdot X[i-1]$ \end{enumerate} Then $ \DSum{B}{d} = \DSum{A}{d} + \DSum{C}{d}$.
\end{lem} \begin{proof} $ \DSum{A}{d} + \DSum{C}{d} = \Sigma_{i=0}^{\infty} A[i] \frac1{d^i} + \Sigma_{i=0}^{\infty} C[i] \frac1{d^i} = \Sigma_{i=0}^{\infty} (A[i] + C[i]) \frac1{d^i} = (B[0] - X[0]) + \Sigma_{i=1}^{\infty} (B[i] + d\cdot X[i-1] - X[i]) \frac1{d^i} = \Sigma_{i=0}^{\infty} B[i] \cdot \frac1{d^i} - \Sigma_{i=0}^{\infty} X[i]\cdot \frac1{d^i} + \Sigma_{i=0}^{\infty} X[i]\cdot \frac1{d^i} = \Sigma_{i=0}^{\infty} B[i] \cdot \frac1{d^i} = \DSum{B}{d}$ \end{proof} Hence, to determine $\DSum{B}{d}-\DSum{A}{d}$, we systematically guess the sequences $ C $ and $ X $ using these equations, element-by-element, beginning with the 0-th index and moving rightwards. There are two crucial observations here: (i)~Computation of the $i$-th elements of $C$ and $X$ only depends on the $i$-th elements of $A$ and $B$ and the $(i-1)$-th element of $X$. Therefore, guessing $C[i]$ and $X[i]$ requires {\em finite memory} only. (ii)~Intuitively, $C$ is a representation of the value $\DSum{B}{d} - \DSum{A}{d}$ in base $d$, and $X$ is the carry-sequence. If we can prove that $X$ and $C$ are also bounded sequences, and hence range over a finite set of integers, we can proceed to construct a B\"uchi automaton for the desired comparator. We proceed by providing an inductive construction of sequences $C$ and $ X$ that satisfy the properties in Lemma~\ref{Lemma:discount-invariant1} (Lemma~\ref{Lemma:XInvariant}), and show that these sequences are bounded when $A$ and $B$ are bounded. In particular, when $A $ and $B$ are bounded integer-sequences, the sequences $C$ and $X$ constructed here are also bounded integer-sequences; therefore, they can be drawn from a finite set of integers. Proofs for sequence $C$ are in Lemma~\ref{Lemma:SimplifyResidual}--Lemma~\ref{Lemma:BoundOnC}, and the proof for sequence $X$ is in Lemma~\ref{Lemma:BoundOnX}. We begin by introducing some notation.
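As a quick numerical sanity check of Lemma~\ref{Lemma:discount-invariant1} before this notation is set up: if we pick finite-support sequences $C$ and $X$ (illustrative values below) and define $B$ through the lemma's two equations, the conclusion $\DSum{B}{d} = \DSum{A}{d} + \DSum{C}{d}$ can be verified with exact rational arithmetic:

```python
from fractions import Fraction

def dsum(seq, d):
    # Discounted sum of a finite-support sequence: sum_i seq[i] / d**i.
    return sum(Fraction(v, d**i) for i, v in enumerate(seq))

# Illustrative choices; X ends in 0 so all sequences have finite support.
d = 3
A = [1, 2, 0, 0]
C = [0, 1, 1, 0]
X = [1, 0, 0, 0]

# Define B via Equations (1)-(2) of the lemma:
# B[0] = A[0]+C[0]+X[0],  B[i] = A[i]+C[i]+X[i] - d*X[i-1] for i >= 1.
B = [A[0] + C[0] + X[0]]
for i in range(1, len(A)):
    B.append(A[i] + C[i] + X[i] - d * X[i - 1])

# The lemma's conclusion: DSum(B) = DSum(A) + DSum(C).
assert dsum(B, d) == dsum(A, d) + dsum(C, d)   # here B = [2, 0, 1, 0]
```

The telescoping in the proof above is exactly what makes the check succeed for any such choice, not just this one.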
Let $\DSumDiff{i} = \Sigma^i_{j=0} (B[j] - A[j])\cdot \frac{1}{d^j} $ for all indices $i\geq 0$. Also, let $ \DSumDiff{\cdot} = \Sigma^{\infty}_{j=0} (B[j] - A[j])\cdot \frac{1}{d^j} = \DSum{B}{d} - \DSum{A}{d} $. Define $ \mathit{maxC} = \mu\cdot\frac{d}{d-1} $. We define the residual function $ \mathit{Res}: \mathbb{N} \cup \{0\} \mapsto \mathbb{R} $ as follows: \[ \mathit{Res}(i) = \begin{cases} \DSumDiff{\cdot} - \floor{\DSumDiff{\cdot}} & \text{if } i = 0 \\ \mathit{Res}(i-1) - \floor{\mathit{Res}(i-1)\cdot d^i}\cdot \frac{1}{d^i } & \text{otherwise} \end{cases}\] Then we define $C[i]$ as follows: \[ C[i] = \begin{cases} \floor{\DSumDiff{\cdot}} &\text{if } i = 0 \\ \floor{\mathit{Res}(i-1)\cdot d^i } &\text{otherwise} \end{cases}\] Intuitively, $ C[i] $ is computed by \emph{stripping off} the value of the $ i $-th digit in a representation of $ \DSumDiff{\cdot} $ in base $ d $: $ C[i] $ denotes the numerical value of the $ i $-th position of the difference between $ B $ and $ A $, and the residual function denotes the numerical value of the difference remaining after the values $ C[0],\dots,C[i] $ have been assigned. We define the function $ \mathit{CSum}: \mathbb{N}\cup\{0\} \rightarrow \mathbb{Q}$ s.t. $\mathit{CSum}(i) = \Sigma_{j=0}^{i} C[j]\cdot \frac{1}{d^j} $. Then, we define $ X[i] $ as follows: \[ X[i] = (\DSumDiff{i} - \mathit{CSum}(i)) \cdot d^i \] Having defined the sequences $C$ and $X$, we now prove the desired properties one-by-one. First, we establish that $C$ and $X$ as defined here satisfy Equations~\ref{eq:initial}--\ref{eq:invariant} from Lemma~\ref{Lemma:discount-invariant1}, thereby ensuring that $C$ is indeed the difference between sequences $B$ and $A$, and $X$ is their carry-sequence. \begin{lem} \label{Lemma:XInvariant} Let $A$ and $B$ be bounded integer sequences and let $C$ and $X$ be defined as above.
Then, \begin{enumerate} \item $ B[0] = A[0] + C[0] + X[0] $ \item For $ i \geq 1 $, $ B[i] + d\cdot X[i-1] = A[i] + C[i] + X[i] $ \end{enumerate} \end{lem} \begin{proof} We prove this by induction on $ i $, using the definition of the function $ X $. When $ i = 0 $, then $ X[0] = \DSumDiff{0} - \mathit{CSum}(0) \implies X[0] = B[0] - A[0] - C[0] \implies B[0] = A[0] + C[0] + X[0] $. When $ i = 1 $, then $ X[1] = (\DSumDiff{1} - \mathit{CSum}(1))\cdot d = ((B[0] + B[1]\cdot \frac1d ) - (A[0] + A[1]\cdot\frac1d) - (C[0] + C[1]\cdot\frac1d))\cdot d \implies X[1] = B[0]\cdot d + B[1] - (A[0]\cdot d + A[1]) - (C[0]\cdot d + C[1])$. From the above we obtain $X[1] = d \cdot X[0] + B[1] - A[1] - C[1] \implies B[1] + d \cdot X[0] = A[1] + C[1]+ X[1] $. Suppose the invariant holds for all $ i\leq n $; we show that it holds for $ n+1 $. $X[n+1] = (\DSumDiff{n+1} - \mathit{CSum}(n+1))\cdot d^{n+1} \implies X[n+1] = (\DSumDiff{n} - \mathit{CSum}(n))\cdot d^{n+1} + (B[n+1] - A[n+1] - C[n+1]) \implies X[n+1] = X[n]\cdot d + B[n+1] - A[n+1] - C[n+1] \implies B[n+1] + X[n] \cdot d = A[n+1] + C[n+1] + X[n+1]$. \end{proof} Next, we establish that the sequence $C$ is a bounded integer-sequence, and can therefore be represented by a finite set of integers. First of all, by the definition of $C[i]$ it is clear that $C[i]$ is an integer for all $i\geq 0$. We are left with proving boundedness of $C$; Lemma~\ref{Lemma:SimplifyResidual}--Lemma~\ref{Lemma:BoundOnC} establish it. \begin{lem} \label{Lemma:SimplifyResidual} For all $ i \geq 0 $, $ \mathit{Res}(i) = \DSumDiff{\cdot} - \mathit{CSum}(i) $. \end{lem} \begin{proof} Proof by simple induction using the definitions of the functions $ \mathit{Res} $ and $ C $. \begin{enumerate} \item When $i = 0$, $\mathit{Res}(0) = \DSumDiff{\cdot} - \floor{\DSumDiff{\cdot}}$. By definition of $C[0]$, $\mathit{Res}(0) = \DSumDiff{\cdot} - C[0] \iff \mathit{Res}(0) = \DSumDiff{\cdot} - \mathit{CSum}(0) $.
\item Suppose the induction hypothesis holds for all $i<n$; we prove it holds when $i = n$. When $i = n$, $\mathit{Res}(n) = \mathit{Res}(n-1) - \floor{\mathit{Res}(n-1)\cdot d^{n} } \cdot \frac{1}{d^n}$. By the definition of $C[n]$ and the I.H., we get $\mathit{Res}(n) = (\DSumDiff{\cdot} - \mathit{CSum}(n-1)) - C[n] \cdot \frac{1}{d^n}$. Therefore $\mathit{Res}(n) = \DSumDiff{\cdot} - \mathit{CSum}(n)$. \end{enumerate} \end{proof} \begin{lem} \label{Lemma:BoundOnRes} When $ \DSumDiff{\cdot}\geq 0 $, for all $ i\geq 0 $, $ 0 \leq \mathit{Res}(i) < \frac{1}{d^i} $. \end{lem} \begin{proof} Since $ \DSumDiff{\cdot}\geq 0 $, $ \mathit{Res}(0) = \DSumDiff{\cdot} - \floor{\DSumDiff{\cdot}} \geq 0 $ and $ \mathit{Res}(0) = \DSumDiff{\cdot} - \floor{\DSumDiff{\cdot}} < 1 $. Specifically, $ 0 \leq \mathit{Res}(0) < 1 $. Suppose for all $ i \leq k $, $ 0\leq \mathit{Res}(i) < \frac{1}{d^i} $. We show this is true also for $ k+1 $. Since $ \mathit{Res}(k)\geq 0 $, $ \mathit{Res}(k)\cdot d^{k+1} \geq 0 $. Let $ \mathit{Res}(k)\cdot d^{k+1} = x+f $, for integral $ x\geq 0 $ and fractional $ 0 \leq f < 1 $. Then, from the definition of $\mathit{Res}$, we get $\mathit{Res}(k+1) = \frac{x+f}{d^{k+1}} - \frac{x}{d^{k+1}} \implies \mathit{Res}(k+1) < \frac{1}{d^{k+1}} $. Also, $ \mathit{Res}(k+1) \geq 0 $ since $ a - \floor{a } \geq 0 $ for all positive values of $ a $ (Lemma~\ref{Lemma:SimplifyResidual}). \end{proof} \begin{lem} \label{Lemma:BoundOnC} Let $\mathit{maxC} = \mu\cdot \frac{d}{d-1}$. When $ \DSumDiff{\cdot} \geq 0 $, for $ i = 0 $, $0 \leq C[0] \leq \mathit{maxC} $, and for $ i \geq 1 $, $0 \leq C[i] < d $. \end{lem} \begin{proof} Since both $ A $ and $ B $ are non-negative bounded weight sequences, the maximum value of $ \DSumDiff{\cdot} $ is attained when $ B = \{\mu\}_{i} $ and $ A = \{0\}_i $. In this case $ \DSumDiff{\cdot} = \mathit{maxC} $. Therefore, $ 0 \leq C[0] \leq \mathit{maxC} $.
From Lemma~\ref{Lemma:BoundOnRes}, we know that for all $ i$, $ 0 \leq \mathit{Res}(i) < \frac{1}{d^i} $. Alternately, when $ i \geq 1 $, $ 0 \leq \mathit{Res}(i-1) < \frac{1}{d^{i-1}} \implies 0 \leq \mathit{Res}(i-1) \cdot d^i < \frac{1}{d^{i-1}} \cdot d^i \implies 0 \leq \mathit{Res}(i-1)\cdot d^i < d \implies 0 \leq \floor{\mathit{Res}(i-1)\cdot d^i} < d \implies 0 \leq C[i] < d$. \end{proof} Therefore, we have established that the sequence $C$ is non-negative, integer-valued, and bounded by $\mathit{maxC} = \mu\cdot \frac{d}{d-1}$. Finally, we prove that the sequence $X$ is also a bounded integer-sequence, and hence can be represented by a finite set of integers. Note that, by expanding out the definition of $X[i]$, we get that $X[i]$ is an integer for all $i\geq 0$. We are left with proving boundedness of $X$: \begin{lem} \label{Lemma:BoundOnX} Let $ \mathit{maxX} = 1 + \frac{\mu}{d-1} $. When $ \DSumDiff{\cdot} \geq 0 $, then for all $ i\geq 0 $, $ |X(i)| \leq \mathit{maxX} $. \end{lem} \begin{proof} From the definition of $ X $, we know that $ X(i) = (\DSumDiff{i} - \mathit{CSum}(i)) \cdot d^i \implies X(i) \cdot \frac{1}{d^i} = \DSumDiff{i} - \mathit{CSum}(i) $. From Lemma~\ref{Lemma:SimplifyResidual} we get $ X(i) \cdot \frac{1}{d^i} = \DSumDiff{i} - (\DSumDiff{\cdot} - \mathit{Res}(i)) \implies X(i) \cdot \frac{1}{d^i} = \mathit{Res}(i) - (\DSumDiff{\cdot} - \DSumDiff{i}) \implies X(i) \cdot \frac{1}{d^i} = \mathit{Res}(i) - (\Sigma_{j=i+1}^{\infty}(B[j]-A[j])\cdot \frac{1}{d^j}) \implies |X(i) \cdot \frac{1}{d^i}| \leq |\mathit{Res}(i)| + |\Sigma_{j=i+1}^{\infty}(B[j]-A[j])\cdot \frac{1}{d^j} | \implies |X(i) \cdot \frac{1}{d^i}| \leq |\mathit{Res}(i)| + \frac{1}{d^{i+1}}\cdot|\Sigma_{j=0}^{\infty}(B[j+i+1]-A[j+i+1])\cdot \frac{1}{d^j} | \implies |X(i) \cdot \frac{1}{d^i}| \leq |\mathit{Res}(i)| + \frac{1}{d^{i+1}}\cdot\mathit{maxC}$.
From Lemma~\ref{Lemma:BoundOnRes}, this implies $|X(i) \cdot \frac{1}{d^i}| \leq \frac{1}{d^i} + \frac{1}{d^{i+1}}\cdot\mathit{maxC} \implies |X(i)| \leq 1 + \frac{1}{d}\cdot\mathit{maxC} \implies |X(i)| \leq 1 + \frac{\mu}{d-1} \implies |X(i)| \leq \mathit{maxX}$. \end{proof} We summarize our results from Lemma~\ref{Lemma:XInvariant}--Lemma~\ref{Lemma:BoundOnX} as follows: \begin{cor} \label{lem:FiniteAlphabetXandC} Let $d>1$ be an integer discount-factor. Let $A$ and $B$ be non-negative integer sequences bounded by $\mu$ such that $\DSum{A}{d} < \DSum{B}{d}$. Then there exist bounded integer-valued sequences $X$ and $C$ that satisfy the conditions in Lemma~\ref{Lemma:discount-invariant1}. Furthermore, $C$ and $X$ are bounded as follows: \begin{enumerate} \item \label{Item1:BoundOnC} $0 \leq C[0] \leq \mu\cdot \frac{d}{d-1}$ and for all $i\geq 1$, $0\leq C[i] < d$, \item For all $i \geq 0$, $|X[i]| \leq 1 + \frac{\mu}{d-1}$ \end{enumerate} \end{cor} Intuitively, we construct a B\"uchi automaton $\mathcal{A}_{\DSsucceqFord{d}}^< $ with states of the form $(x,c)$, where $x$ and $c$ range over all possible values of $X$ and $C$, respectively, and a special initial state $s$. Transitions over letters $(a,b)$ replicate the equations in Lemma~\ref{Lemma:discount-invariant1}, i.e., transitions from the start state, of the form $(s,(a,b), (x,c))$, satisfy $a+c+x = b$ to replicate Equation~\ref{eq:initial} (Lemma~\ref{Lemma:discount-invariant1}) at the 0-th index, and all other transitions $((x_1, c_1), (a,b), (x_2, c_2))$ satisfy $a+ c_2 + x_2 = b + d\cdot x_1$ to replicate Equation~\ref{eq:invariant} (Lemma~\ref{Lemma:discount-invariant1}) at indices $i>0$. The full construction is as follows: \renewcommand{\max}{\mu} \renewcommand{\mathit{maxC}}{\mu_C} \renewcommand{\mathit{maxX}}{\mu_X} \paragraph{Construction} Let $ \mathit{maxC} = \max \cdot \frac{d}{d-1} $ and $ \mathit{maxX} = 1 + \frac{\max}{d-1}$.
$ \mathcal{A}_{\DSsucceqFord{d}}^< = (\mathit{S}, \Sigma, \delta_d, {\mathit{Init}}, \mathcal{F}) $ \begin{itemize} \item $ \mathit{S} = \{s\} \cup \Final \cup S_{\bot} $ where \\ $\mathcal{F} = \{(x,c) \mid |x| \leq \mathit{maxX}, 0 \leq c \leq \mathit{maxC} \} $, and \\ $S_{\bot} = \{(x, \bot) \mid |x| \leq \mathit{maxX}\}$, where $\bot$ is a special character, $ c \in \mathbb{N}$, and $x \in \mathbb{Z}$. \item State $s$ is the initial state, and $\mathcal{F}$ are the accepting states. \item $ \Sigma = \{(a,b) : 0 \leq a, b \leq \max \} $ where $ a $ and $ b $ are integers. \item $\delta_d \subseteq \mathit{S} \times \Sigma \times \mathit{S}$ is defined as follows: \begin{enumerate} \item Transitions from the start state $ s $: \begin{enumerate}[label = \roman*] \item $ (s ,(a,b), (x,c)) $ for all $ (x,c) \in \Final $ s.t. $ a + x + c = b $ and $ c \neq 0 $ \item $ (s ,(a,b), (x, \bot)) $ for all $ (x, \bot) \in S_{\bot} $ s.t. $ a + x = b $ \end{enumerate} \item Transitions within $ S_{\bot} $: $ ((x, \bot) ,(a,b), (x', \bot) )$ for all $(x, \bot)$, $(x', \bot) \in S_{\bot} $, if $ a + x' = b + d\cdot x $ \item Transitions within $ \mathcal{F} $: $ ((x,c) ,(a,b), (x',c') )$ for all $ (x,c)$, $(x',c') \in \mathcal{F} $ where $ c' < d $, if $ a + x' + c' = b + d\cdot x $ \item Transitions between $ S_{\bot} $ and $ \mathcal{F} $: $ ((x, \bot),(a,b), (x',c')) $ for all $ (x,\bot) \in S_{\bot} $, $ (x',c') \in \mathcal{F} $ where $ 0 < c' < d $, if $ a + x' + c' = b + d\cdot x $ \end{enumerate} \end{itemize} \begin{thm} \label{thm:Construction} Let $d>1$ be an integer discount-factor, and $\mu>1$ be an integer upper-bound. The B\"uchi automaton $ \mathcal{A}_{\DSsucceqFord{d}}^<$ accepts a pair of bounded sequences $(A,B)$ iff $\DSum{A}{d} < \DSum{B}{d}$. The B\"uchi automaton has $\mathcal{O}(\frac{\mu^2}{d})$-many states.
\end{thm} \begin{proof} Corollary~\ref{lem:FiniteAlphabetXandC} proves that if $\DSum{A}{d}< \DSum{B}{d}$, then sequences $X$ and $C$ satisfying the integer-valuedness and boundedness criteria exist. Let these sequences be $X = X[0]X[1]\dots$ and $C = C[0]C[1]\dots$. Since $\DSum{C}{d} > 0$, there exists an index $i\geq 0$ where $C[i]>0$; let the first such position be index $j$. By the construction of $\mathcal{A}_{\DSsucceqFord{d}}^<$, the state sequence $s,(X[0],\bot)\dots, (X[j-1],\bot),(X[j], C[j]),(X[j+1], C[j+1])\dots $, where for all $i\geq j$, $C[i]\neq \bot$, forms a run of the word $(A,B)$ in the B\"uchi automaton. Furthermore, this run is accepting since the states $(x,c)$ with $c \neq \bot$ are accepting states. Therefore, $(A,B) $ is an accepted word in $\mathcal{A}_{\DSsucceqFord{d}}^<$. To prove the other direction, suppose the pair of sequences $(A, B)$ has an accepting run with state sequence $s$, $(x_0,\bot),\dots (x_{j-1}, \bot), (x_j,c_j), (x_{j+1}, c_{j+1})\dots $, where for all $i\geq j$, $c_i \neq \bot$. Construct sequences $X$ and $C$ as follows: For all $i\geq 0$, $X[i] = x_i$. For all $i<j$, $C[i] = 0$, and for all $i\geq j$, $C[i]=c_i$. Then the transitions of $\mathcal{A}_{\DSsucceqFord{d}}^<$ guarantee that Equations~\ref{eq:initial}--\ref{eq:invariant} from Lemma~\ref{Lemma:discount-invariant1} hold for sequences $A$,$B$ and $C$,$X$. Therefore, it must be the case that $\DSum{B}{d} = \DSum{A}{d} + \DSum{C}{d}$. Furthermore, since the first transition into the accepting states $(x,c)$ with $c \neq \bot$ is possible only if $c>0$, $\DSum{C}{d}>0$. Therefore, $ \DSum{A}{d}<\DSum{B}{d}$. In all, $\mathcal{A}_{\DSsucceqFord{d}}^< $ accepts $(A,B)$ iff $ \DSum{A}{d}<\DSum{B}{d}$. \end{proof} \begin{cor} \label{thm:discountedSumComparator} DS-comparators for integer discount-factors $d>1$, for all inequalities and equality, are $\omega$-regular.
\end{cor} \begin{proof} Immediate from Theorem~\ref{thm:Construction} and the closure properties of B\"uchi automata. \end{proof} Constructions of the DS-comparator with integer discount-factor $d>1$ for the non-strict inequality $\leq$ and equality $=$ follow similarly, and also have $\mathcal{O}(\frac{\mu^2}{d})$-many states. \paragraph{Discounted-sum aggregate function} We use the $\omega$-regular DS-comparator for integer discount-factors to prove that discounted-sum with integer discount-factors is an $\omega$-regular aggregate function. \begin{thm} \label{thm:DSRegular} Let $d>1$ be an integer discount-factor. The discounted-sum aggregate function with discount-factor $d$ is $\omega$-regular under base $d$. \end{thm} \begin{proof} We define the discounted-sum aggregate function automaton (DS-function automaton, in short): For integer $\mu>0$, let $\Sigma = \{0,1,\dots \mu\}$ be the input alphabet of the DS-function, and $d>1$ be its integer base. A B\"uchi automaton $\mathcal{A}^\mu_d$ over alphabet $\Sigma\times\mathsf{AlphaRep}(d)$ is a DS-function automaton of type $\Sigma^\omega \rightarrow \Re$ if for all $A \in \Sigma^\omega$, $ (A, \mathsf{rep}(\DSum{A}{d}, d)) \in \mathcal{A}^\mu_d$. Here we prove that such an $\mathcal{A}_d^\mu$ exists. Let $\mu>0$ be the integer upper-bound. Let $\mathcal{A}^=_d$ be the DS-comparator for integer discount-factor $d>1$ for relation $=$. Intersect $\mathcal{A}^=_d $ with the B\"uchi automaton accepting all infinite words over the alphabet $\{0,1\dots \mu\}\times \{0,\dots,d-1\}$. The resulting automaton $\mathcal{B}$ accepts $(A,B)$ for $A \in \{0,\dots,\mu\}^\omega$ and $B\in \{0,\dots,d-1\}^\omega$ iff $\DSum{A}{d} = \DSum{B}{d}$.
Since all elements of $B$ are bounded by $d-1$, $\DSum{B}{d}$ can be represented as an $\omega$-word as follows: Let $B = B[0],B[1]\dots$; then its $\omega$-word representation in base $d$ is given by $ +\cdot (\mathsf{Int}(\DSum{B}{d}, d), \mathsf{Frac}(\DSum{B}{d}, d))$, where $\mathsf{Int}(\DSum{B}{d}, d) = B[0] \cdot 0^\omega$ and $ \mathsf{Frac}(\DSum{B}{d}, d) = B[1],B[2]\dots$. This transformation of the integer sequence $B$ into its $\omega$-word form in base $d$ can be achieved with a simple transducer $\mathcal{T}$. Therefore, application of the transducer $\mathcal{T}$ to the B\"uchi automaton $\mathcal{B}$ results in a B\"uchi automaton over the alphabet $\Sigma\times\mathsf{AlphaRep}(d)$ such that for all $A \in \Sigma^\omega$ the automaton accepts $(A, \mathsf{rep}(\DSum{A}{d},d))$. This is exactly the DS-function automaton over input alphabet $\Sigma$ and integer base $d>1$. Therefore, the discounted-sum aggregate function with integer discount-factors is $\omega$-regular. \end{proof} Recall that this proof works only for the discounted-sum aggregate function with integer discount-factor. In general, there is no known procedure to derive a function automaton from an $\omega$-regular comparator (Conjecture~\ref{Conjecture:comparatortofunction}). \paragraph{DS-inclusion} DS-inclusion with integer discount-factor is known to be in \textsf{EXPTIME}~\cite{boker2014exact,chatterjee2010quantitative}, which does not match its existing \textsf{PSPACE} lower bound. In this section, we use the $\omega$-regular DS-comparator for integer discount-factors to close the gap, and establish \cct{PSPACE-completeness} of DS-inclusion. \begin{cor} \label{Cor:DSInSEq} Let integer $\mu>1$ be the maximum weight on transitions in DS-automata $P$ and $Q$, and $d>1$ be an integer discount-factor. Let $\mu$ and $d$ be represented in unary form. Then DS-inclusion, DS-strict-inclusion, and DS-equivalence between $P$ and $Q$ are $\cc{PSPACE}$-complete.
\end{cor} \begin{proof} Since the size of the DS-comparator is polynomial w.r.t. the upper bound $\mu$ when represented in unary (Theorem~\ref{thm:Construction}), DS-inclusion is $\cc{PSPACE}$ in the size of the input weighted $\omega$-automata and $\mu$ (Theorem~\ref{thrm:RegularComplexity}). \end{proof} Not only does this result improve upon the previously known upper bound of $\cc{EXPTIME}$, but it also closes the gap between the upper and lower bounds for DS-inclusion. The earlier known \cct{EXPTIME} upper bound is based on an exponential determinization construction (subset construction) combined with arithmetical reasoning~\cite{boker2014exact,chatterjee2010quantitative}. We observe that the determinization construction can be performed on-the-fly in $\cc{PSPACE}$. Performing the arithmetical reasoning on-the-fly in \cct{PSPACE}, however, would essentially require the same bit-level ($(x,c)$-state) techniques that we have used to construct the DS-comparator. \section{Limsup comparator construction} The formal construction of the limsup comparator is given here. Suppose all sequences are natural number sequences, bounded by $\mu$. The limsup comparator is the B\"uchi automaton $ \A_{\succ \mathit{LS}} = (\mathit{S}, \Sigma, \delta, {\mathit{Init}}, \mathcal{F}) $ where \begin{itemize} \item $ \mathit{S} = \{s\} \cup \{s_0, s_1 \dots, s_{ \max}\} \cup \{f_0, f_1 \dots, f_{ \max}\}$ \item $ \Sigma = \{(a,b) : 0 \leq a, b \leq \max \} $ where $ a $ and $ b $ are integers. \item $\delta \subseteq \mathit{S}\times\Sigma\times \mathit{S}$ is defined as follows: \begin{enumerate} \item Transitions from the start state $ s $: $ (s ,(a,b), p) $ for all $(a,b)\in \Sigma$, and for all $p \in \{s\} \cup \{f_0, f_1, \dots, f_{\max}\}$. \item Transitions between $f_k$ and $s_k$ for each $k$: \begin{enumerate}[label = \roman*] \item $(f_k, \alpha , f_k)$ for $\alpha \in \{k\} \times \{0,1, \dots k\}$.
\item $(f_k, \alpha , s_k)$ for $\alpha \in \{0,1,\dots k-1\} \times \{0,1, \dots k\}$. \item $(s_k, \alpha , s_k)$ for $\alpha \in \{0,1,\dots k-1\} \times \{0,1, \dots k\}$. \item $(s_k, \alpha , f_k)$ for $\alpha \in \{k\} \times \{0,1, \dots k\}$. \end{enumerate} \end{enumerate} \item $ {\mathit{Init}} = \{s\} $ \item $ \mathcal{F} = \{f_0, f_1 \dots, f_{ \max}\}$ \end{itemize} \section{Limit Average Comparator} \begin{lem} \label{lem:sameAverage} Let $\Sigma = \{0, 1\dots \mu\}$. Let $L \subseteq \Sigma^*$ be s.t. the limit-average of all words in the language $L^{\omega}$ exists. Then the average of all words in $L $ is the same. \end{lem} \begin{proof} Suppose it were possible for two finite words $v_1$, $v_2\in L$ to have different averages. Let their lengths be $l_1$ and $l_2$, respectively, and their averages $a_1$ and $a_2$, respectively, where $a_1 \neq a_2$. We will show the presence of a word $w \in L^{\omega}$ s.t. the limit-average of $w$ does not exist. Let $w_1 = v_1$. Then $\Av{w_1} = a_1$. Next, let $j_2$ be large enough to construct $w_2 = w_1 v_2^{j_2}$ such that $\Av{w_2}\approx a_2$. Next, let $j_3$ be large enough to construct $w_3 = w_2 v_1^{j_3}$ such that $\Av{w_3} \approx a_1$. Continue constructing $w_4, w_5 \dots$ in a similar fashion s.t. their averages alternate between $a_2, a_1, \dots$, respectively. Let $w$ be the limit of $w_n$ as $n\rightarrow \infty$. Then $w \in L^{\omega}$, and since the average of its finite-length prefixes keeps oscillating between $a_1$ and $a_2$, the limit-average of $w$ does not exist. This contradicts the premise that the limit-average of all words in $L^{\omega}$ exists. Therefore, our assumption that words $v_1$ and $v_2$ can have different averages is contradicted. \end{proof} \begin{lem} \label{lem:sameLA} Let $\Sigma = \{0, 1, \dots \mu \}$. Let $L\subseteq \Sigma^*$ be s.t. the average of all words in $L$ is the same, say $a$. Let $w \in L^{\omega}$ be s.t. the limit-average of $w$ exists.
Then $\LA{w} = a$. \end{lem} \begin{proof} Let $w = w_1w_2w_3\dots$, where each $w_i \in L$. There exist infinitely many prefixes of $w$, namely the prefixes $w[i] = w_1w_2\dots w_i$ ending at word boundaries, s.t. $\Av{w[i]} = a$. Since we are given that the limit-average of $w$ exists, it must be equal to $a$. \end{proof}
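The oscillation argument in Lemma~\ref{lem:sameAverage} can be sketched numerically. A minimal sketch with the illustrative words $v_1 = 0$ and $v_2 = 2$ (averages $a_1 = 0$, $a_2 = 2$): repeating each word long enough drags the running prefix average arbitrarily close to its average, so alternating the two makes the prefix averages oscillate and no limit-average exists.

```python
def extend_until(w, digit, target, tol=0.05):
    # Append copies of `digit` until the prefix average is within tol of target.
    while abs(sum(w) / len(w) - target) > tol:
        w.append(digit)
    return sum(w) / len(w)

w = [0]              # w1 = v1, prefix average 0
averages = []
for _ in range(4):   # alternate targets a2 = 2 and a1 = 0
    averages.append(extend_until(w, 2, 2.0))
    averages.append(extend_until(w, 0, 0.0))

# The prefix averages come within tol of 2 and of 0 over and over,
# so liminf and limsup of the running average differ: no limit-average.
assert all(a > 1.9 for a in averages[0::2])
assert all(a < 0.1 for a in averages[1::2])
```

The required block lengths grow at each step, mirroring the choice of ever larger $j_2, j_3, \dots$ in the proof.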
\section{Introduction} The second (after Pluto) target object for the New Horizons mission was chosen in 2014, finalizing an observation survey performed with the Hubble Space Telescope \citep{S17}. It was called 2014~MU69 and, subsequently, (486958) Arrokoth (temporarily it was also nicknamed Ultima Thule). Later on, due to the results of dedicated observational campaigns \citep{S17,P17}, this object, a classical KBO, was suspected to be a primordial contact binary (hereafter CB). A dumbbell contact-binary shape is typical for KBOs. However, up to the time of the New Horizons flyby, lightcurve monitoring had not succeeded in retrieving Arrokoth's rotation period, because the visual magnitude variations were unresolved \citep{B16}. The rendezvous of New Horizons and Arrokoth took place on January 1, 2019. Indeed, Arrokoth turned out to be a contact binary \citep{S19LPI,C19LPI,P19LPI,Stern19Sci}, visually fitting a dumbbell model, depicted, e.g., in Fig.~5 in \cite{Sch07I}, or, most similarly, in Fig.~1 in \cite{LSS17}.\footnote{Later on, the Arrokoth constituents were reported to be flattened \citep{Stern19Sci}.} Tantalizingly, the ratio of the masses of the binary components has turned out to be $\sim$1/3, very similar to the ratios typical for contact-binary cometary nuclei, as compiled in Table~1 in \cite{LSR18}. Identifying any material in the vicinity of a target object of a space mission is of especial concern for planning cosmic flybys, including that of Arrokoth \citep{M18GRL}, as such material is hazardous for a space probe. Low-mass shallow matter orbiting around Arrokoth, as around any other KBO, may originate from a number of processes. It may be left from a primordial swarm of solids \citep{MK19LPI}, or it may be ejecta of various origins: ejecta due to early out-gassing \citep{T15AA,SL00JGR}; ejecta from impacts by intruding bodies \citep{N18AJ}; ejecta resulting from the CB-forming collision \citep{U19LPI}.
However, up to now, no moons, moonlets, fragments, or debris \citep{K18,SSL19LPI,SSM20Sci}, nor any traces of a coma \citep{G19LPI}, have been discerned in the (not yet completed) image surveys performed from HST and New Horizons in the field around Arrokoth. As shown in \cite{LS20}, based on preliminary data on the shape of Arrokoth, this rotating CB is able to efficiently cleanse its vicinity by chaotizing the orbits of all material orbiting it sufficiently closely. In this article, we explore properties of the long-term dynamics of low-mass matter (whatever it may be: moonlets, fragments, etc.) around CB-shaped objects, which are expected to be ubiquitous in the Kuiper belt. To assess a global picture of the dynamical environment of Arrokoth or a similar object, it is necessary (1)~to analyze the process of cleansing of the circum-binary chaotic zone, and (2)~to analyze the process of formation and further survival of a cocoon, formed by the ejected matter inside the CB's Hill sphere. Our study is concentrated on just these two items. Therefore, we are interested in the timescale of clearing the immediate vicinity of Arrokoth (the chaotic circum-binary zone), the possibility and timescale of formation of a cocoon of ejected matter around Arrokoth inside its Hill sphere, and the survivability of such a cocoon. In particular, we aim to assess the rate of the clearing process in the chaotic circum-binary zone; to obtain the mass-parameter dependence of the depopulation rate; to estimate the characteristic time of dispersal of low-mass matter out of Arrokoth's Hill sphere, if such matter were initially present; and to assess collisional hazards for space probes visiting neighborhoods of Arrokoth-like objects in the Kuiper belt.
\section{Circum-CB clearing: the problem setting} Spinning gravitating CB-shaped bodies create zones of dynamical chaos around them \citep{LSS17}, and this has a clearing effect: any material put in orbit around a rotating dumbbell (e.g., any material ejected from its surface) cannot survive in this chaotic zone. It either escapes into space or is absorbed by the parent body's surface \citep{LSR18}. As the orbiting matter is removed in this way, a spinning gravitating CB cleans up its vicinity. A much better-known example of analogous ``cleansing'' is the formation of the gap in the close-to-coorbital neighbourhood of a planet \citep{W80,DQT89,MM15}. The close-to-coorbital chaotic gap is formed by the overlap of the first-order mean-motion resonances accumulating in the neighbourhood of a planet's orbit, whereas the circum-CB chaotic zone is formed by the overlap of the accumulating integer spin-orbit resonances with the rotating dumbbell \citep{LSR18}. In both cases, any material injected into the chaotic zones is subject to an unlimited chaotic diffusion in the eccentricity (as well as subject to possible close encounters with the CB or the planet) and, therefore, is finally removed. \begin{figure}[!t] \centering \includegraphics[width=0.55\textwidth]{Fig1.pdf} \caption{Extents of the chaotic zone (shown in red) around a contact binary as a function of the binary's rotation rate $\omega$, in ratio to the critical $\omega_0$. The pericentric distance $q$ is measured in units of $d$, the contact binary size. The white shaded area delimits the range of typical rotation rates of the Kuiper belt objects, according to data in \cite{TNO14}. Red solid curves show the locations of three major spin-orbit resonances.} \label{Fig1} \end{figure} In Fig.~\ref{Fig1}, adapted from our previous study \citep{LSR18}, we represent graphically the extents of the circum-CB chaotic zone.
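The radial placement of such spin-orbit commensurabilities follows from Kepler's third law. A minimal sketch (our own illustrative assumptions, not taken from the figure: circular orbits with $q = a$, and a $p$:$q$ resonance read as $\omega/n = p/q$, where $n$ is the particle's Keplerian mean motion and $\omega_0$ the Keplerian rate at separation $d$, so that $n = \omega_0 (a/d)^{-3/2}$):

```python
def resonance_location(p, q, omega_ratio):
    """Return a/d for the p:q spin-orbit commensurability,
    where omega_ratio = omega/omega_0 and omega/n = p/q is assumed."""
    # From n = omega_0 * (a/d)**(-3/2) and n = omega * q / p:
    return ((p / q) / omega_ratio) ** (2.0 / 3.0)

# For omega = omega_0, the 1:1 commensurability sits at a = d, and the
# 1:2, 1:1, 3:2 locations move outwards in that order.
locs = [resonance_location(p, q, 1.0) for p, q in [(1, 2), (1, 1), (3, 2)]]
assert abs(locs[1] - 1.0) < 1e-12
assert locs[0] < locs[1] < locs[2]
```

This is only the Keplerian skeleton; the actual resonance widths and their overlap, which define the chaotic zone, require the full dumbbell potential.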
The diagram is plotted in the ``CB rotation rate -- particle's initial pericentric distance'' plane. The rotation rate $\omega$ is measured in units of its critical value $\omega_0$, corresponding to centrifugal disintegration of the initially-contact binary ($\omega_0$ is equal to CB's Keplerian rate of rotation). The pericentric distance $q$ is in units of the binary's size $d$, defined as the distance between the mass centers of its components. In units of the critical rate $\omega_0$, the typical rotation rates $\omega$ of the Kuiper belt objects range from 0.2 to 1 (thus, the periods range from 1 to 5, in critical periods), according to the observational (lightcurve) data given in \cite{TNO14}. The area bounded by these limits in Fig.~\ref{Fig1} is white-shaded. Locations of main resonances 1:2, 1:1, and 3:2 between orbiting particles and the rotating central body are shown as red curves. Fig.~\ref{Fig1} demonstrates that typical Kuiper belt CBs may have rather extended circum-body chaotic zones: for orbits inside such zones, the initial pericentric distance $q$ ranges up to $\sim 6d$. Recall that the radius of a gravitating body's Hill sphere $R_\mathrm{H}$, in units of the semimajor axis of a perturber, $a_0$, is given by \begin{equation} \frac{R_\mathrm{H}}{a_0} = \left( \frac{m}{3 M} \right)^{1/3} , \label{Hill_sma} \end{equation} \noindent where $M$ and $m$ are the primary's and secondary's masses, respectively (those of the Sun and Arrokoth, in our problem). The orbit of any moonlet of Arrokoth should lie within Arrokoth's Hill sphere. This implies the inequality $a(1 + e) \lesssim R_\mathrm{H}$.
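For a quick numerical check of Equation~(\ref{Hill_sma}), one can evaluate Arrokoth's Hill radius directly. A minimal Python sketch (the two-mascon component masses are those quoted below in Section~\ref{sec_stab}; the heliocentric semimajor axis $a_0 \approx 44.6$~AU is an assumed illustrative value, not specified in the text):

```python
# Hill radius of Arrokoth via R_H / a_0 = (m / (3 M))**(1/3).
# Masses are the two-mascon values quoted in the text; the heliocentric
# semimajor axis a_0 ~ 44.6 AU is an assumed (illustrative) value.
M_sun = 1.989e33                  # g, mass of the Sun
m_cb = 1.01e18 + 5.45e17          # g, m_1 + m_2 for Arrokoth
AU_km = 1.496e8                   # km
a0 = 44.6 * AU_km                 # km, assumed semimajor axis of Arrokoth

R_H = a0 * (m_cb / (3.0 * M_sun)) ** (1.0 / 3.0)
print(f"R_H ~ {R_H:.3g} km")      # a few times 10^4 km, as quoted in the text
```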
Given the ``dumbbell size'' of Arrokoth $d \simeq 17$~km \citep{Stern19Sci,McKi20}, it is straightforward to estimate, using the diagram, that the chaotic clearing zone around Arrokoth may have a radius of at most $\sim$100 km, an order of magnitude less than the New Horizons flyby distance ($\sim$3500 km) and three orders of magnitude less than Arrokoth's Hill radius ($\sim 5 \cdot 10^4$~km). \section{Numerical simulations: the stability diagram} \label{sec_stab} To describe the immediate dynamical environments of Arrokoth, we construct stability charts in the $q$--$e$ (pericentric distance -- eccentricity) plane of initial conditions. We use the Lyapunov characteristic exponent (LCE) method, which we have employed earlier in \cite{LSS17,LSR18}. We choose an inertial Cartesian coordinate system with the origin at the CB's center of mass. The equations of motion of the particle with coordinates $(x, y)$ are given by \begin{equation} \begin{array}{ccl} \dot{x}&=&v_x, \\ \dot{y}&=&v_y, \\ \dot{v}_x&=&-\frac{m_2\left(x-x_2\right)}{\left(\left(x-x_2\right)^2+\left(y-y_2\right)^2\right)^{3/2}} -\frac{m_1\left(x-x_1\right)}{\left(\left(x-x_1\right)^2+\left(y-y_1\right)^2\right)^{3/2}},\\ \dot{v}_y&=&-\frac{m_2\left(y-y_2\right)}{\left(\left(x-x_2\right)^2+\left(y-y_2\right)^2\right)^{3/2}} -\frac{m_1\left(y-y_1\right)}{\left(\left(x-x_1\right)^2+\left(y-y_1\right)^2\right)^{3/2}}, \end{array} \label{motion1} \end{equation} \noindent where $(x_1, y_1)$ and $(x_2, y_2)$ are the coordinates of the centers of masses $m_1$ and $m_2$, respectively. The locations $x_1, y_1$ and $x_2, y_2$ of the primaries are given by \begin{equation} \begin{array}{ccl} x_1&=&\mu\cos(\omega t), \\ y_1&=&\mu\sin(\omega t), \\ x_2&=&(\mu-1)\cos(\omega t),\\ y_2&=&(\mu-1)\sin(\omega t).
\end{array} \label{motion2} \end{equation} \noindent The quantity $\omega$ is a parameter setting the (generally arbitrary) rotation rate of the CB: $\omega$ is equal to the CB's rotation rate in units of its critical rotation rate corresponding to centrifugal disintegration. At $\omega = 1$, the equations reduce to the usual equations of motion in the planar restricted three-body problem. The distance between the centers of masses $m_1$ and $m_2$ is set here to unity, $d=1$. Also we set $\mathcal{G}(m_1+m_2)=1$; therefore, the angular rate of the Keplerian orbital motion of the binary (if it were unbound) is $$ \omega_0 = \left( \mathcal{G}(m_1+m_2)/d^3 \right)^{1/2} = 1 . $$ \noindent Arrokoth is a union of two round bodies\footnote{Though somewhat flattened; but since the Arrokoth components are flattened mostly orthogonally to the rotation plane \citep{McKi20}, this flattening does not compromise our dumbbell model for the gravitational potential.}; therefore, the dynamical model given by Equations~(\ref{motion1})--(\ref{motion2}) is expected to be essentially adequate. We set the physical and dynamical parameters of Arrokoth as obtained during the New Horizons flyby \citep{S19LPI,C19LPI,P19LPI,Stern19Sci}. A two-mascon model for Arrokoth's shape, with the parameters as given in \cite{Stern19Sci} and \cite{McKi20}, provides us with the following data. \begin{itemize} \item The ``dumbbell size'' of Arrokoth (the distance between the centers of masses $m_1$ and $m_2$): $d = 17.2$~km; the radii of the components: $R_1 \approx 10.1$~km, $R_2 \approx 7.3$~km. \item Masses, assuming a typical density $\rho = 0.5$~g/cm$^3$ for cometary nuclei: $m_1 = 1.01\cdot 10^{18}$~g and $m_2 = 5.45 \cdot 10^{17}$~g. Therefore, $m_1/m_2 = 1.85$ and the reduced mass of the contact binary is $\mu \equiv m_2/(m_1 + m_2) = 0.35$. \item Rotation period of Arrokoth: $P_\mathrm{rot} = 15.92$~h, therefore $\omega = 0.77$.
\end{itemize} \noindent The initial conditions and technical parameters are as follows: \begin{itemize} \item the initial positions of the two masses are set along the $x$ axis, \item the initial position of the test particle is at the pericenter and its initial velocity vector (calculated in the Arrokoth--particle two-body model) is orthogonal to the $x$ axis, \item the maximum computation time $T_\mathrm{max}=\omega\times10^{5}$, in Arrokoth's rotation periods, is set in computations of the stability diagrams; and $T_\mathrm{max}=10^{5}$, in Arrokoth's rotation periods, is set in computations of the ejection statistics. \end{itemize} \begin{figure}[!t] \centering \includegraphics[width=0.55\textwidth]{Fig2.pdf} \caption{The LCE stability diagram of the immediate dynamical environments of Arrokoth, in finite-time LCE colour gradation.} \label{Fig2} \end{figure} To build the stability diagram, 200$\times$200 orbits were computed using the Dormand--Prince integrator DOP853 \citep{HNW87}. The local error tolerance of the integrator was set to $10^{-10}$. The code loops over $N_e=200$ initial values of the eccentricity $e$ for any fixed initial pericentric distance $q$; a {\it Python} script generates the $N_q=200$ executables, one for each value of $q$. The constructed LCE diagram of the global dynamics immediately around Arrokoth is shown in Fig.~\ref{Fig2}. The most prominent feature of this diagram is the ``ragged'' border between the circumbinary chaotic zone and the outer region of regular motion. The border is formed by the overlap of spin-orbit resonances between the rotating Arrokoth and an orbiting particle. The most prominent ``teeth'' of instability visible in Fig.~\ref{Fig2} correspond to integer ratios of Arrokoth's rotation rate and an orbiting particle's mean motion, i.e., to the $p/1$ spin-orbit resonances.
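For illustration, the dynamical model of Equations~(\ref{motion1})--(\ref{motion2}) can be reproduced with an off-the-shelf DOP853 implementation. A minimal sketch using SciPy's \texttt{solve\_ivp} (the test orbit at $q = 8$, well outside the chaotic zone, is our illustrative choice) checks that such a distant, initially circular orbit remains regular and near-circular:

```python
import numpy as np
from scipy.integrate import solve_ivp

mu, omega = 0.35, 0.77            # Arrokoth's values (d = 1, G(m1 + m2) = 1)

def rhs(t, s):
    # Equations of motion: a test particle in the field of two point
    # masses m1 = 1 - mu and m2 = mu rotating about the origin at rate omega.
    x, y, vx, vy = s
    c, sn = np.cos(omega * t), np.sin(omega * t)
    x1, y1 = mu * c, mu * sn
    x2, y2 = (mu - 1.0) * c, (mu - 1.0) * sn
    r1 = ((x - x1) ** 2 + (y - y1) ** 2) ** 1.5
    r2 = ((x - x2) ** 2 + (y - y2) ** 2) ** 1.5
    ax = -(1.0 - mu) * (x - x1) / r1 - mu * (x - x2) / r2
    ay = -(1.0 - mu) * (y - y1) / r1 - mu * (y - y2) / r2
    return [vx, vy, ax, ay]

# Initially circular orbit (e = 0) at q = 8, far outside the chaotic zone;
# the initial velocity is orthogonal to the x axis, as in the setup above.
q = 8.0
s0 = [q, 0.0, 0.0, np.sqrt(1.0 / q)]
t_end = 10 * 2.0 * np.pi * q ** 1.5          # ten Keplerian orbital periods
sol = solve_ivp(rhs, (0.0, t_end), s0, method="DOP853",
                rtol=1e-10, atol=1e-10, max_step=1.0)
r = np.hypot(sol.y[0], sol.y[1])
print(r.min(), r.max())                      # stays close to 8: regular motion
```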
Let $K$ be the stochasticity parameter, characterizing the overlap of the integer spin-orbit resonances locally in the phase space of motion, as defined in \cite{LSR18}. In Fig.~\ref{Fig2}, the solid white curves are the theoretical borders (corresponding to the critical value $K_\mathrm{G}=0.971635406$) between the chaotic and regular zones; the dashed curves are for $K=2$; and the short-dashed curves are for $K=4$. These theoretical borders are given by Equations~(6) and (11) in \cite{LSR18}. One may see that the numerically revealed borders of chaos generally agree with the analytical predictions: indeed, the $K=4$ analytical curve serves approximately as a borderline above which the chaos is complete, i.e., any regular component is negligible. In Fig.~\ref{Fig3}, additional diagrams are constructed by means of the ``movable LCE distribution peaks'' technique. This technique allows one to sharply separate chaotic orbits from regular ones, instead of dealing with a continuous gradation of calculated finite-time LCE values; see \cite{SM03} for the technique description and details. In Fig.~\ref{Fig3}, panel (a) corresponds to the current (contact-binary) state of Arrokoth with the following parameters: $\mu \simeq 0.35$ and $\omega = 0.77$. We still see the properties described above: the ``ragged'' border and the most prominent ``teeth'' of instability corresponding to integer ratios of Arrokoth's rotation rate and the orbiting particle's mean motion. For circular orbits ($e = 0$), the size of the chaotic zone is $\simeq 2.5$ times the distance between the two masses. Panel (b) is for a non-contact pre-merger phase; here $\omega = 1$. Unlike in panel (a), the size of the chaotic zone for circular orbits is now $\simeq 2$ times the distance between the two masses.
In both panels, the solid white curves are the theoretical borders between chaotic and regular zones at $K = K_\mathrm{G} \simeq 0.971635406$, the dashed curves are built at $K = 2$, and the short-dashed curves at $K = 4$. The theoretical borders are constructed as in Fig.~\ref{Fig2}. Implications of the obtained diagrams are discussed in the following Sections. \begin{figure}[!t] \centering \includegraphics[width=\textwidth]{Fig3.pdf} \caption{LCE stability diagrams for the current and pre-merger states of Arrokoth; red and blue colors correspond to chaotic and regular orbits, respectively. Left panel (a) is for the contact-binary phase, and right panel (b) is for a pre-merger phase of this KBO; see text for details.} \label{Fig3} \end{figure} \section{General background and assumptions} Generally, the Fokker--Planck formalism can be used (adapting approaches proposed in \citealt{MH97}, Section 3.4, and in \citealt{T93}; see also \citealt{DQT87,MT99}) to obtain analytical estimates of the diffusion rates in clearing processes in such or similar systems. Here we rely on the modified Kepler map theory, developed in \cite{LSS17,LSR18} (see also a review in \citealt{LSS18Sch}), to describe chaotic dynamical environments of rotating CBs. It is important to note that our analysis is mostly developed under the assumption that the rotation rate $\omega$ of the contact binary is approximately the same as its critical rotation rate $\omega_0$ of centrifugal disintegration; i.e., $\omega \sim \omega_0$. For Arrokoth, $\omega \simeq 0.6 \omega_0$ (assuming the typical density $\rho = 0.5$~g/cm$^3$; for smaller densities $\omega$ would be closer to unity). This assumption allows one to straightforwardly apply formulas already known for the case of motion around Keplerian binaries, without any modification.
Since the physical inferences made below mostly do not require estimates accurate to better than an order of magnitude, we believe that the assumption $\omega \sim \omega_0$ is adequate for our purposes. \begin{figure}[!t] \centering \includegraphics[width=\textwidth]{Fig4.pdf} \caption{Examples of trajectories and time evolutions of their energies. Left panel (a): trajectories with the initial pericentric distance $q \simeq 1.40$ (black curve), $q \simeq 1.80$ (red), $q \simeq 2.13$ (blue), $q \simeq 2.19$ (green), $q \simeq 2.23$ (orange) and $q \simeq 2.27$ (purple); the initial eccentricity $e = 0$ in all cases. Right panel (b): time evolutions of the quantity $H = 2 | E |$ for the trajectories presented in panel (a); the curves are coloured accordingly. The close-up: the $H$ evolutions with time in a greater range, until the particles cross the Hill radius; the crossings are marked with red dots.} \label{Fig4} \end{figure} In accord with the general scenarios of formation of contact binaries in the Kuiper belt \citep{U19LPI,MK19LPI}, we assume that, in the post-formation phase of Arrokoth's evolution, the particles initially reside in a disk-like structure around the merged binary. The theoretical circum-CB chaotic zone in this disk may extend up to radii $\simeq 6 d$, as follows from Figs.~\ref{Fig1} and \ref{Fig2}. In accord with the Kepler map theory basics, we assume that, in the motion of particles, the pericentric distance $q$ is approximately conserved, while the semimajor axis is subject to a random walk (see \citealt{S11NA,S15ApJ}). The constancy of $q$ seems plausible down to values of $\sim 2 d$; at smaller $q$, the employed approximations become progressively less accurate; in particular, mergers of particles with Arrokoth become prevalent, removing them from the system.
We should point out that once $q$ (which is greater than $d$) is assumed to be constant, no collisions with Arrokoth are theoretically possible; therefore, collisions are generally ignored in what follows. To illustrate that the chaotic dynamics of particles, until they reach the Hill sphere border, does indeed have a diffusive character, in Fig.~\ref{Fig4} we present examples of trajectories and time evolutions of their energies. In panel (a), trajectories are shown with various initial pericentric distances $q$; the initial eccentricity $e = 0$ in all cases. In panel (b), time evolutions of the energies of the same trajectories are given (the curves are coloured accordingly). The close-up shows the evolution of the quantity $H = 2 | E |$ (where the energy $E = -1/(2 a)$, and $a$ is the particle's orbital semimajor axis) over a greater time range, until the particles cross the Hill radius; the crossings are marked with red dots. One may see that, for non-collisional cases, the orbital evolution of the particles has a random-walk character (i.e., a chaotic diffusive character) in the energy and, therefore, in the semimajor axis as well. \section{Dispersal of matter around CBs} For any kind of discrete motion, the diffusion coefficient $D$ can be defined, formally, as the mean-square spread in a selected variable (say, $H$), per time unit: \begin{equation} D_H \equiv \lim_{t \to \infty} \frac{\langle (H_t - H_0)^2 \rangle}{t} , \label{defD} \end{equation} \noindent where $t$ is time and the angular brackets denote averaging over a set of starting values (see, e.g., \citealt{Meiss92}). Let us define the quantity $H = 2 | E |$, where the energy $E = -1/(2 a)$, and $a$ is the particle's orbital semimajor axis; and the central binary's mass parameter $\mu \equiv m_2/(m_1 + m_2)$. We extrapolate a numerical-experimental expression, presented in \cite{DQT87} for the rate of diffusion of circumbinary particles, from small to moderate values of $\mu$.
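The definition~(\ref{defD}) can be illustrated with the simplest random-walk model of the energy evolution: uncorrelated kicks $\Delta H = W \sin\phi$ with random phases $\phi$, for which $D_H = W^2/2$ per kick. A minimal numerical check (the kick amplitude $W$ is an arbitrary illustrative value):

```python
import numpy as np

rng = np.random.default_rng(1)
W, n_steps, n_walkers = 0.05, 200, 20_000    # illustrative values

# Random-phase kicks Delta H = W * sin(phi), phi uniform in [0, 2*pi):
phases = rng.uniform(0.0, 2.0 * np.pi, size=(n_walkers, n_steps))
H = np.cumsum(W * np.sin(phases), axis=1)    # H_0 = 0 for every walker

# Diffusion coefficient per kick, estimated as <(H_t - H_0)^2> / t:
D_est = np.mean(H[:, -1] ** 2) / n_steps
print(D_est, W ** 2 / 2.0)                   # the two values nearly coincide
```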
Taking into account that the rotation rates of the Kuiper belt CBs, including Arrokoth, are normally of the order of the critical rate of centrifugal disintegration (as already assumed above), one has \begin{equation} D_H \simeq 100 \, H^2 \mu^2 , \label{DEdb} \end{equation} \noindent where time is measured in pericenter passages. For the diffusion timescale, defined as the time needed for the particle's energy to change by a value of the order of the energy itself, one has \begin{equation} T_\mathrm{d} \simeq P \frac{H^2}{D_H} \simeq 0.01 \mu^{-2} P , \label{Tddb} \end{equation} \noindent where $P$ is the particle's orbital period averaged over the chaotic zone. We take $P \simeq 2 \pi a^{3/2} / ({\cal G} m_\mathrm{CB})^{1/2}$, where $a \sim 5 d$, and $d$ is the CB's size in the mascon model. Then, from Equation~(\ref{Tddb}) one may directly see that for a CB like Arrokoth (with $\mu \sim 0.1$--0.3) the characteristic timescale of the diffusion in the CB's chaotic dynamical environment can be as small as $\sim 10$ times its rotation period; therefore, the clearing of the chaotic zone is, in fact, practically instantaneous.
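Plugging numbers into Equation~(\ref{Tddb}) makes the last statement concrete. A sketch in the units of Section~\ref{sec_stab} ($d = 1$, $\mathcal{G} m_\mathrm{CB} = 1$, so that the critical period is $2\pi$), with $a \simeq 5d$ and Arrokoth's $\omega = 0.77$:

```python
import math

# Units of the text: d = 1 and G*m_CB = 1, so the critical period is 2*pi.
omega = 0.77                        # Arrokoth's rotation rate, in units of omega_0
a = 5.0                             # typical semimajor axis in the chaotic zone, ~5 d
P_orb = 2.0 * math.pi * a ** 1.5    # particle's orbital period (Kepler's third law)
P_rot = 2.0 * math.pi / omega       # the CB's rotation period

for mu in (0.1, 0.2, 0.3):
    T_d = 0.01 * mu ** -2 * P_orb   # diffusion timescale T_d ~ 0.01 * mu^-2 * P
    print(f"mu = {mu}: T_d ~ {T_d / P_rot:.1f} CB rotations")
```

For $\mu = 0.1$ this gives roughly ten CB rotations, and even fewer for larger $\mu$.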
Although our estimate of the transport time has been made in the diffusional approximation, its smallness indicates that, actually, this approximation is invalid and the transport is not diffusional, but ballistic: the clearing process is almost ``single-kick.'' This can be shown independently by calculating the amplitude of the kick function in the Kepler map theory for CBs, presented in \cite{LSS17,LSR18}; the kick function in the normalized energy is given by \begin{equation} \Delta E\left(\mu,q,\omega,\phi\right) \simeq W_1\left(\mu,q,\omega\right)\sin\left(\phi\right)+ W_2\left(\mu,q,\omega\right)\sin\left(2\phi\right) , \label{deltaEall} \end{equation} \noindent where $\nu = 1 - \mu$; $\phi$ is the CB's phase when the particle is at pericenter; and the coefficients $W_1$ and $W_2$ are given by \begin{equation}\label{W1} W_1\left(\mu,q,\omega\right)\simeq \mu\nu(\nu-\mu)2^{1/4} \pi^{1/2} \omega^{5/2} q^{-1/4} \exp \left( - \frac{2^{3/2}}{3} \omega q^{3/2} \right) , \end{equation} \begin{equation}\label{W2} W_2\left(\mu,q,\omega\right)\simeq-\mu\nu2^{15/4} \pi^{1/2} \omega^{5/2}q^{3/4}\exp \left( - \frac{2^{5/2}}{3} \omega q^{3/2} \right) , \end{equation} \noindent where $\omega$ is measured in units of the critical rate $\omega_0$. One may see that, at $\mu \sim 1/3$, $\omega \sim 1$, and $q \sim$ 2--3, the coefficients $W_1$ and/or $W_2$ are of order unity. The normalized single-kick energy variation is $\sim 1$, and, therefore, indeed, an orbiting chaotic particle can be ejected in a few kicks. \begin{figure}[!t] \centering \includegraphics[width=0.85\textwidth]{Fig5.png} \caption{Simulation (video) of the depopulation process of a swarm of 10000 particles which are initially distributed in circular orbits inside a ring $\left[1d,3d\right]$ around Arrokoth (in the post-merger phase, for the parameters obtained according to Fig.~1 in \cite{C19LPI}; here $\mu = 0.28$ and $\omega = 0.59$); $H$ is measured in the barycentric reference frame.
The video can also be found at \url{http://perso.utinam.cnrs.fr/~lages/datasets/MU69/MU69.mp4}.} \label{Fig5} \end{figure} \begin{figure}[!h] \centering \includegraphics[width=0.75\textwidth]{Fig6.pdf} \caption{The number of particles ejected (or colliding with the CB) as a function of time (counted in the number of CB rotations). Here, $N_c$ is the number of particles initially present in the chaotic area (particles which can exit, see Fig.~\ref{Fig2} and Fig.~\ref{Fig3}). For each panel, the black dashed line shows results for the Arrokoth parameters obtained according to Fig.~1 in \cite{C19LPI}.} \label{Fig6} \end{figure} In Figs.~\ref{Fig5} and \ref{Fig6}, the depopulation process is illustrated in detail, featuring several pre-merger phases of Arrokoth. In Fig.~\ref{Fig6}, the time dependences of the number of particles that are ejected (or collide with the CB) are shown. The time is counted in CB rotations. In these simulations, 10000 particles on initially circular orbits ($e=0$) are uniformly distributed in a ring with $q \in [1d, 3d]$. In Fig.~\ref{Fig6}, the black curves are for the post-merger phase, where $d = d_0 = 17.2$~km, the radius of $m_1$ is $r_1 \simeq 10.1$~km $\simeq 0.59d$ (at which collisions are registered), the radius of $m_2$ is $r_2 \simeq 7.3$~km $\simeq0.42d$ (at which collisions are registered), and the Hill radius is $R_\mathrm{H} \simeq 48740$~km $\simeq 3027d$ (at which ejections are registered). The observational data are taken as given in \cite{Stern19Sci,McKi20}. The rotation period of Arrokoth is $P = 15.92$~h; this gives $\omega=0.77\omega_{2b}$, where $\omega_{2b}$ is the Keplerian rotation rate. For the post-merger phase, the complete process of depopulation is visualized in the video provided in Fig.~\ref{Fig5}.
Note that on timescales $t$ greater than $\sim 100P_{\rm MU69}$ the remaining particles are those initially trapped inside the stability islet located around $\left(q=2.5,e=0\right)$ in the phase space (the blue islet in Fig.~\ref{Fig2}). The red curves in Fig.~\ref{Fig6} correspond to a pre-merger phase with $d = 3d_0 = 51.6$~km, $r_1 \simeq 10.1$~km $\simeq 0.2d$, $r_2 \simeq 7.3$~km $\simeq 0.14d$, $R_\mathrm{H} \simeq 821d$, $P = 63.5$~h; $\omega = \omega_{2b}$. The green curves correspond to a different pre-merger phase with $d = 5d_0 = 86$~km, $r_1 \simeq 10.1$~km $\simeq 0.12d$, $r_2 \simeq 7.3$~km $\simeq 0.08d$, $R_\mathrm{H} \simeq 492d$, $P=136.6$~h; $\omega = \omega_{2b}$. The blue curves correspond to yet another pre-merger phase with $d = 10 d_0 = 172$~km, $r_1 \simeq 10.1$~km $\simeq 0.06d$, $r_2 \simeq 7.3$~km $\simeq 0.04d$, $R_\mathrm{H} \simeq 246d$, $P = 386.5$~h; $\omega = \omega_{2b}$. Panel (a) of Fig.~\ref{Fig6} represents the sum of panels (b) and (c); in panel (b), the number of particles prior to their collision with one of the CB's components is shown, while panel (c) illustrates the statistics of particles prior to their ejection out of the Hill sphere. From Fig.~\ref{Fig6}, one may infer that the depopulation process is rather fast already at the pre-merger phases of the Arrokoth formation. Indeed, in Fig.~\ref{Fig6}(c), we see that after $t \in [100,1000]$ rotations of the binary, at least half of the particles present around Arrokoth are ejected out of the Hill sphere. By inspecting the distribution of energy when the particles leave the Hill sphere, one may calculate the ``final'' velocity they reach upon ejection from the Hill sphere. In Fig.~\ref{Fig7}, the distribution of the energy $H$ of particles crossing Arrokoth's Hill sphere ($R_\mathrm{H} \simeq 42370$~km) is shown for the post-merger phase.
Here, in our units, where $d = 1$ is the primaries' separation and $P_{2b} = 2\pi$ is the binary's period, one has $H_{1/2} \simeq 0.08$: the first half of the ejected particles' population has $H > H_{1/2}$ and the second half has $H < H_{1/2}$. With this energy, the free particles reach $R_\mathrm{H}$ in $R_\mathrm{H} \omega / (2\pi\sqrt{2H_{1/2}}) \simeq 750$ rotations of Arrokoth. Indeed, comparing these findings with the results given in Fig.~\ref{Fig6}, one may conclude that positive values of energy are reached very quickly, and the depopulation of the CB's disk proceeds with a typical half-depopulation time (for the whole disk) of $\sim$10--100 CB periods, in accord with our analytical estimate given above. \begin{figure}[t] \centering \includegraphics[width=0.85\textwidth]{Fig7.pdf} \caption{Black dots: the distribution of $H$ for particles crossing the Hill sphere $R_\mathrm{H}$ of Arrokoth. Circles: results obtained using the Arrokoth parameters as given in Fig.~1 in \cite{C19LPI}.} \label{Fig7} \end{figure} In Fig.~\ref{Fig8}, the mass parameter dependence for the depopulation time $t_{H>0}$ is shown. At each separate $\mu$ value in the given range, the orbital evolution of $10^4$ particles is simulated. For the orbits, the initial $e$ is set to zero and the initial $q$ values are set uniformly in the interval $q \in [1d, 3d]$. Any particle is regarded as ejected if its energy $H$ becomes positive. The depopulation time $t_{H>0}$ is registered when the number of particles remaining non-ejected becomes less than $1\%$ of the initial number of particles. One may observe that, in the given range $0.1 \leq \mu \leq 0.5$, the depopulation time depends on $\mu$ rather weakly, and the depopulation process is always fast. \begin{figure}[h] \centering \includegraphics[width=0.85\textwidth]{Fig8.pdf} \caption{Black dots: the mass parameter dependence for the depopulation time $t_{H>0}$.
Circles: results obtained using the Arrokoth parameters as given in Fig.~1 in \cite{C19LPI}.} \label{Fig8} \end{figure} \section{``Mixer bowls,'' cocoons, and their long-term survival} As we have seen above, a rotating CB is a kind of ``cosmic mixer,'' efficiently dispersing any neighbouring material outwards. It is well known that any mixer (blender, eggbeater) needs a container (a bowl) to keep the ingredients from dispersing while mixing. Our cosmic mixer also needs such a storage bowl, otherwise the cocoon of matter inside its Hill sphere would not emerge. Let us estimate the typical time $T_\mathrm{enc}$ between encounters of relatively large KBOs (with mass or size greater than that of Arrokoth) with Arrokoth's Hill sphere. Such low-velocity encounters would disperse Arrokoth's cocoon, if it were present. Therefore, if $T_\mathrm{enc}$ is much less than the Solar system age, one can be confident that Arrokoth's Hill sphere is totally cleansed. Let $N_0$ be the total number of impactors with size (radius) greater than radius $R_0$. Taking Arrokoth's radius as $R_0$, the characteristic (average) time between encounters of such KBOs with Arrokoth can be written, following a general approach of \cite{PK12ApJ}, as \begin{equation} T_\mathrm{enc} = (P_\mathrm{i} \sigma N_0)^{-1} , \label{Tenc} \end{equation} \noindent where $P_\mathrm{i}$ is the intrinsic collision probability, measured in km$^{-2}$~yr$^{-1}$, and $\sigma$ is the collisional cross section, measured in km$^2$. For the probability $P_\mathrm{i}$ inside the classical Kuiper belt, there exist two estimates: according to \cite{FDS00}, $P_\mathrm{i} = 1.3 \cdot 10^{-21}$~km$^{-2}$~yr$^{-1}$, and, according to \cite{DMPV01}, $P_\mathrm{i} = 4 \cdot 10^{-22}$~km$^{-2}$~yr$^{-1}$.
According to Equation~(18) in \cite{PK12ApJ}, the number $N_0$ of KBOs with size $R>R_0$ can be estimated using the power-law scaling \begin{equation} N_0 = 618\,000 \cdot (26 / R_0)^{q-1} , \label{NR0} \end{equation} \noindent where the radius $R_0$ is in kilometers, and we set $q \simeq 3.5$ (a collisional equilibrium slope). With Arrokoth's $R_0 \approx 16$~km, one has $N_0 \simeq 2.1 \cdot 10^6$. For the collisional cross section we take the cross section of Arrokoth's Hill sphere: $\sigma \simeq \pi R_\mathrm{H}^2 \sim 10^{11}$~km$^2$. From Equation~(\ref{Tenc}) one finally has $T_\mathrm{enc} \sim 4000$~yr (using $P_\mathrm{i}$ from \citealt{FDS00}) or $T_\mathrm{enc} \sim 12\,000$~yr (using $P_\mathrm{i}$ from \citealt{DMPV01}). In both cases, $T_\mathrm{enc}$ is much less than the Solar system age. However, we have not yet taken into account that, for an encounter to disperse the cocoon, the impact velocity should be small enough.\footnote{Paradoxically, for encounters between KBOs themselves, an encounter is destructive if the impact velocity is, on the contrary, high enough.} The corresponding low-velocity threshold can be specified as the velocity at which the typical time of traversing Arrokoth's Hill sphere by an impactor is about the same as the typical orbital-period timescale of particles inside the Hill sphere. If the impactor's velocity is much greater than this limit, then the cocoon remains practically unperturbed. As derived in the previous Section, the typical orbital-period timescale of particles inside the Hill sphere can be estimated as $\sim 500$~yr. Given the radius of the Hill sphere $R_\mathrm{H} \sim 5 \cdot 10^4$~km, the low-velocity limit for an impactor is then $v_\mathrm{cr} \sim 10^{-4}$~km/s. According to \cite{FDS00} (Table~1) or \cite{DMPV01} (Table~4), the mean impact velocity in the Kuiper belt is about $1$~km/s. Therefore, the probability of an impact with $v < v_\mathrm{cr}$ is much smaller than the intrinsic impact probability.
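These order-of-magnitude estimates are straightforward to reproduce. A sketch based on Equations~(\ref{Tenc}) and (\ref{NR0}), with the cross section $\sigma \sim 10^{11}$~km$^2$ adopted above:

```python
# Characteristic time between encounters with Arrokoth's Hill sphere,
# using the power-law population estimate and both published P_i values.
R0 = 16.0                                 # km, Arrokoth's effective radius
q_slope = 3.5                             # collisional-equilibrium slope
N0 = 618_000 * (26.0 / R0) ** (q_slope - 1.0)
print(f"N0 ~ {N0:.2g}")                   # ~2.1e6 objects with R > R0

sigma = 1.0e11                            # km^2, cross section adopted in the text
for Pi, src in ((1.3e-21, "FDS00"), (4.0e-22, "DMPV01")):
    T_enc = 1.0 / (Pi * sigma * N0)       # yr
    print(f"P_i from {src}: T_enc ~ {T_enc:.0f} yr")
```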
But by what amount is it smaller? As follows from \cite{DMPV01} (Fig.~7), one may assume that the frequency of impacts at small and moderate $v$ rises with $v$ approximately linearly (for the non-resonant population) almost up to the maximum corresponding to the typical $v \sim 1$~km/s. Therefore, very roughly, one may estimate that the impacts with $v < v_\mathrm{cr}$ occur $\sim 10^4$ times less frequently than the typical ones. As derived above, the timescale for typical impacts is $\sim 10^4$~yr; therefore, for the dispersive low-velocity impacts the timescale is $\sim10^8$~yr. It is still smaller than the age of the Solar system, but not dramatically. Counting from the Solar system's early epoch, Arrokoth's cocoon could have suffered $\sim 10$--100 dispersal events; therefore, one may expect that Arrokoth's Hill sphere (as well as the Hill spheres of similar KBOs) is nowadays empty. However, due to a number of model approximations made, this conclusion should be subject to verification in massive numerical simulations. In particular, it should be taken into account that, in the peripheral regions of the Kuiper belt, where the concentration of objects can be radically lower, cocoons may hypothetically survive; this should be checked in realistic simulations. \section{Survival of space probes} Let us consider in more detail the problem of survival of space probes visiting Arrokoth and similar objects, in the light of the analysis performed above. The mass $m_\mathrm{dm}$ of the debris matter left from the formation of a given KBO is generally expected to be less than the KBO's final mass \citep{U19LPI,MK19LPI}. Therefore, for Arrokoth, $m_\mathrm{dm} < 2 \cdot 10^{15}$~kg. The total volume of Arrokoth, given that $R_1 \approx 10$~km, $R_2 \approx 7$~km, is $\sim 4(R_1^3 + R_2^3) \sim 5000$~km$^3$. Estimating rather formally, this material, if dispersed into boulders with size $R_b \sim 10$~cm each, would provide $\sim 10^{15}$ boulders.
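The boulder count follows directly from the volume estimate. A rough sketch (using the same $\sim 4 R^3$ volume convention as in the text, and treating the 10-cm ``size'' as the boulder radius, which is our reading):

```python
# Number of 10-cm boulders obtainable from Arrokoth's volume,
# with the same ~4*R^3 volume convention as used in the text.
R1, R2 = 10.0, 7.0                 # km, component radii
V_cb = 4.0 * (R1 ** 3 + R2 ** 3)   # ~5000 km^3, Arrokoth's total volume
R_b = 1.0e-4                       # km (10 cm), assumed boulder radius
N_b = V_cb / (4.0 * R_b ** 3)
print(f"N_b ~ {N_b:.1g}")          # ~1e15 boulders
```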
When mixed inside the Hill sphere (with $R_\mathrm{H} \sim 5 \cdot 10^4$~km), the boulder concentration would be $\rho_b \sim 10^{-16}$~cm$^{-3}$. In projection on a plane, this would typically result in the column concentration $\sigma_b \sim R_\mathrm{H} \rho_b \sim 10^{-6}$~cm$^{-2}$. Given the New Horizons dimensions $2.2\times2.1\times2.7$~m, the probe cross-section is $\sim 6 \cdot 10^4$~cm$^2$. We see that if all the dispersed material were ``conserved in the bowl,'' the probability of collisional destruction of the space probe would be rather significant, up to 10\%. Taking smaller sizes for the dispersed boulders would raise the probability up to unity. Since New Horizons flew away safely, one may argue that the post-formation debris had already leaked from the ``bowl,'' or that there was not much of it from the very beginning. Of course, this approach is strictly formal; first of all, a realistic size distribution for fragments should be used in our calculations. However, since any realistic distribution is a power law with the power-law index $\sim -3$, this refinement would only produce more fragments with approximately the same total mass; therefore, the collisional probability would only increase. \section{Conclusions} In this article, we have explored properties of the long-term dynamics of particles (moonlets, fragments, or debris) around Arrokoth, as a prototype of many similar (dumbbell-shaped) objects potentially present in the Kuiper belt. The chaotic dynamics of particles inside the Hill sphere of Arrokoth (or, generally, a similar object) has been studied numerically, by means of construction of the LCE diagrams, as well as analytically. In both the numerical and analytical parts of our work, we have obtained the following results. (1)~The clearing process of the chaotic circumbinary zone is practically instantaneous: the zone is cleared in a few ``kicks'' of the central CB.
(2)~In the studied mass parameter range $0.1 \leq \mu \leq 0.5$, the depopulation time depends on $\mu$ rather weakly, and the depopulation process is always fast, although it has a diffusive character. (3)~Due to relatively frequent low-velocity encounters of Arrokoth's Hill sphere with other KBOs, the matter cocoon, if formed inside the Hill sphere, could have been dispersed on a timescale of $\sim 10^8$~yr. (4)~If not dispersed, such cocoon matter may pose a serious problem for the survival of any space probe visiting Arrokoth, since the collision probability could well be of order unity. Our study has an implication concerning formation scenarios of contact binaries in the Kuiper belt. As noted in \cite{U19LPI}, any such scenario, apart from producing a slowly rotating CB, should explain how all remaining local debris is cleared away. We underline that, irrespective of the formation scenario, the generic chaotization of the immediate vicinities of any gravitating ``snowman'' rotator, followed by transport processes inside its Hill sphere, naturally explains the current absence of such debris. Tantalizingly, the chaotic-clearing phenomenon affects both former targets of the New Horizons mission, but in different ways: Pluto is not able to clean up any radial neighbourhood of its orbit, and for this reason it was deprived of its planetary status \citep{IAU06}; conversely, Arrokoth is able, as we have seen above, to create a clearing, but of another (circum-contact-binary) kind. This study is a first approach to the problem. The limitations of our dynamical model include, in particular, neglecting the effects of the irregular shape of Arrokoth's components on the orbiting particles, which are especially important for particles with small pericentric distances. (However, note that such particles, those with $q \lesssim 2 d$, are absorbed by Arrokoth almost immediately.)
Besides, the solar gravitational effects on particles with large orbital semimajor axes (comparable with Arrokoth's Hill radius) can be important, although not many particles acquire large elliptic orbits around Arrokoth; they are mostly thrown out from its Hill sphere in a single kick or only several kicks (see Figs.~\ref{Fig4} and \ref{Fig6}). A 3D study of the escape process is envisaged, and, in a forthcoming separate study, the solar perturbations will be taken into account. \bigskip \noindent {\bf \it Acknowledgments.} The authors are grateful to Beno{\^\i}t Noyelles for helpful remarks. I.I.S. was supported in part by the grant 13.1902.21.0039 of the Ministry of Science and Higher Education of the Russian Federation.
\section{Introduction} The family of Grigorchuk--Gupta--Sidki groups, hereafter abbreviated `\GGS-groups', is best known as a source of groups with exotic properties, e.g.\ just-infinite groups or infinite finitely generated periodic groups. It generalises earlier examples constructed by its three namesakes in the 1980s, see~\cite{Gri80, GS83}. These groups are defined as groups of automorphisms of a $p$-regular rooted tree $X^*$, for an odd prime $p$. They are two-generated, and one of the generators is defined according to a one-dimensional subspace $\mathbf{E} \subseteq \F_p^{p-1}$. Allowing $\mathbf{E}$ to be more than one-dimensional yields a natural generalisation; these `higher-dimensional' \GGS-groups are called \emph{\mGGS-groups} or \emph{multi-edge spinal groups}. We prefer the first term. In many regards, \mGGS-groups do not differ overly much from their one-dimensional counterparts: e.g.\ they are periodic under similar conditions, see~\cite[Theorem~3.2]{AKT16}, they possess the congruence subgroup property, see~\cite{GU19}, and they allow similar branching structures. Their virtue, aside from extending the list of subgroups of $\Aut(X^*)$ with remarkable properties, lies in the fact that many conditions on \GGS-groups, when generalised to the higher-dimensional counterparts, reveal themselves as linear conditions. In this sense, \mGGS-groups are the more natural class. We are concerned with the computation of the automorphism group of a given \mGGS-group. The automorphisms of groups acting on rooted trees have been investigated before, e.g.\ in~\cite{BS05,LN02}. In general, such groups are quite rigid objects, and their automorphisms are induced by homeomorphisms of the tree. Indeed, in many cases all automorphisms are actually induced by automorphisms of the tree, cf.~\cite{GW03,LN02}, and for some specific classes the automorphism groups can be uniformly computed, see~\cite{BS05}.
More generally, the (abstract) commensurator of groups acting on rooted trees has been investigated, cf.~\cite{Rov02}; this is the group of `almost automorphisms', i.e.\ automorphisms between two finite-index subgroups. However, there are only a few explicit computations of the automorphism group of \GGS- and related groups. Sidki computed the automorphism group of the Gupta--Sidki $3$-group in \cite{Sid87}, and building on the approach for this group, the automorphism groups of the first Grigorchuk group \cite{GS04}, the Fabrykowski--Gupta and the constant \GGS-group on the ternary tree \cite{Sid87} have been computed. The first and the last two examples give a complete list of \GGS-groups acting on the ternary tree. We now state our main result. \begin{theorem}\label{AutmGGS:thm:main} Let $G$ be a non-constant \mGGS-group, let $U$ be the maximal subgroup of $\F_p^\times$ such that $\mathbf E$ is invariant under the permutation action of $U$ given by reordering the columns according to multiplication, and let $W$ be the maximal subgroup of $\F_p^\times$ consisting of the elements $\lambda$ such that $\mathbf{E} \subseteq \operatorname{Eig}_\lambda(u)$ for some $u \in U$. Then the following statements hold. \begin{enumerate} \item If $G$ is regular, then \[ \Aut(G) \cong (G \rtimes \prod_\omega \mathrm{C}_{p})\rtimes (U \times W). \] \item If $G$ is symmetric, then \[ \Aut(G) \cong (G \rtimes \mathrm{C}_p) \rtimes (U \times W). \] \end{enumerate} \end{theorem} The definitions of `regular' and `symmetric' can be found in \cref{sec:regular and symm}. The slightly obscure definitions of $U$ and $W$ are made more transparent in \cref{sec:coprime autom}. We can immediately derive the following corollary. \begin{corollary} Let $G$ be a non-constant \mGGS-group. Then the following statements hold. \begin{enumerate} \item The outer automorphism group of $G$ is finite if and only if $G$ is a symmetric \GGS-group. \item The outer automorphism group of $G$ is non-trivial.
\item The automorphism group of $G$ contains elements of order coprime to $p$ if and only if~$\mathbf{E}$ is invariant under a permutation induced by multiplication in $\F_p$. \item The automorphism group of $G$ is a $p$-group if and only if $G$ is periodic and $\mathbf{E}$ is not invariant under any permutation induced by multiplication in $\F_p$. \end{enumerate} \end{corollary} We also explicitly compute the automorphism group for a selection of examples, e.g.\ all Gupta--Sidki $p$-groups, see \cref{sec:examples automggs}. Our proof combines the methods developed by Sidki in~\cite{Sid87} (cf.\ \cite{BS05} for a sketch of the strategy used in Sidki's paper) with techniques used by the author to determine the isomorphism classes of \GGS-groups in~\cite{Pet19}. This, together with a theorem of Grigorchuk and Wilson on the rigidity of branch groups~\cite{GW03}, allows us to reduce the complexity of the computations. On the other hand, the inclusion of the symmetric \GGS-groups complicates some of the arguments. \section{Higher-dimensional Grigorchuk--Gupta--Sidki groups} \subsection{Regular rooted trees and their automorphisms} Let $p$ be an odd prime, and denote by $X$ the set~$\{0, \dots, p-1\}$. The Cayley graph~$X^*$ of the free monoid on $X$ is a $p$-regular rooted tree. We think of the vertices of~$X^*$ as words in~$X$. The root of the tree is the empty word $\varnothing$. We write $X^n$ for the set of all words of length~$n$, called the \emph{$n$-th layer of $X^*$}, and we identify $X$ and $X^1$. Every (graph) automorphism $g \in \Aut(X^*)$ necessarily fixes the root, since the root has a smaller valency than every other vertex. Consequently, every automorphism $g$ leaves all layers $X^n$ invariant. We write $\Stab(n)$ for the (setwise) stabiliser of $X^n$, and $\Stab_G(n)$ for its intersection with a given subgroup $G \leq \Aut(X^*)$. We call a group $G \leq \Aut(X^*)$ \emph{spherically transitive} if it acts transitively on all layers $X^n$.
The group $\Aut(X^*)$ inherits the self-similar structure of $X^*$, and decomposes as a wreath product \[ \Aut(X^*) \cong \Aut(X^*) \wr_X \Sym(X). \] We deduce that $\Aut(X^*) \cong \Aut(X^*) \wr_{X^n} (\Sym(X) \wr \dottimes{n} \wr \Sym(X))$, for every $n \in \N$, where the finite iterated wreath product of $n$ copies of $\Sym(X)$ acts on $X^n$ as on the leaves of the finite rooted $p$-regular tree with $n$ layers. The base group of the $n$\textsuperscript{th} such wreath product decomposition is equal to the $n$\textsuperscript{th} layer stabiliser. We denote the induced isomorphism $\Stab(n) \to \Aut(X^*) \dottimes{p^n} \Aut(X^*)$ by $\psi_n$. For $v \in X^n$, we denote the projection to the $v$\textsuperscript{th} component of the base group by $|_v \colon \Aut(X^*) \to \Aut(X^*)$; this so-called \emph{section map} is a group homomorphism on the pointwise stabiliser $\stab(v)$ of $v$. We call a subgroup $G \leq \Aut(X^*)$ \emph{self-similar} if $G|_v \subseteq G$ for all $v \in X^*$, and we call it \emph{fractal} if $\Stab_G(1)|_x \leq G$ for all $x \in X$. The image of an element $g \in \Aut(X^*)$ in $\Sym(X)$ under the quotient by $\Stab(1)$ is denoted $g|^\varnothing$, and we write $g|^v = g|_v|^\varnothing$, for any $v \in X^*$, for the \emph{label of $g$ at $v$}. Any automorphism is uniquely determined by the collection of its labels. We fix an embedding $\rooted \colon \Sym(X) \to \Aut(X^*)$ by $\rooted(\sigma)|^\varnothing = \sigma$ and $\rooted(\sigma)|^v = 1$ for all $v \in X^\ast \smallsetminus\{\varnothing\}$. We call the elements of $\rooted(\Sym(X))$ \emph{rooted automorphisms}. Let $\Gamma \leq \Sym(X)$ be a permutation group. We define the \emph{$\Gamma$-labelled subgroup of $\Aut(X^*)$} by \[ \lab(\Gamma) = \{ g \in \Aut(X^*) \mid g|^v \in \Gamma \text{ for all } v \in X^* \}. \] It is a well-known fact that if $\Gamma$ is of order $p$, the subgroup $\lab(\Gamma)$ is a Sylow pro-$p$ subgroup of $\Aut(X^*)$. Let $(x_i)_{i \in \N}$ be a sequence of elements $x_i \in X$.
The words $\{x_0 \cdots x_k \mid k \in \N \}$ form a half-infinite \emph{ray} $R$ in $X^*$ (or, equivalently, a point of the boundary). Write $\overline x$ for the ray associated to the constant sequence $(x)_{i \in \N}$. An \emph{$R$-directed~automorphism} is an automorphism $g$ fixing $R$ such that for all $v \in X^*$ either $v$ is connected by an edge to an element of $R$, or \[ g|^{v} = \id. \] A spherically transitive group $G \leq \Aut(X^*)$ is \emph{regular branch over~$K$}, for a finite-index subgroup~$K \leq G$, if $K$ contains $\psi_1^{-1}(K \dottimes{p} K)$ as a subgroup of finite index. \subsection{Multi-GGS-groups}\label{sec:regular and symm} Fix the permutation $\sigma = (0 \, 1 \, \dots \, p-1)$. Write $a = \rooted(\sigma)$ and $A = \langle a \rangle$, as well as $\Sigma = \langle \sigma \rangle$. Let $\mathbf{E}$ be an $r$-dimensional subspace of $\F_p^{p-1}$, for $r > 0$. Choose an ordered basis $(\mathbf{b}_1, \dots, \mathbf{b}_r)$ of $\mathbf{E}$, and denote the standard basis of $\F_p^{r}$ by $(\mathbf{s}_1, \dots, \mathbf{s}_r)$. Let $E \in \Mat(r, p-1; \F_p)$ be the matrix with the basis elements as rows. The columns of $E$ are denoted $\mathbf{e}_i$ for $i \in \{1, \dots, p-1\}$; thinking of $\mathbf{E}$ as a subspace of $\{0\} \times \F_p^{p-1} \leq \F_p^p$, we will also write $\mathbf{e}_0$ for the zero column vector of length $r$. Define, for all $j \in \{1, \dots, r\}$, the $\overline{0}$-directed automorphisms \begin{align*} b^{\mathbf{s}_j} \defeq \psi_1^{-1}(b^{\mathbf{s}_j}, a^{\mathbf{s}_j \cdot \mathbf{e}_1}, \dots, a^{\mathbf{s}_j \cdot \mathbf{e}_{p-1}}). \end{align*} Since $a$ has order $p$, we may extend this definition to arbitrary vectors $\mathbf{n} \in \F_p^{r}$, such that $\psi_1(b^{\mathbf{n}}) = (b^{\mathbf{n}}, a^{\mathbf{n}\cdot E})$, where for any $\mathbf{m} = (m_1, \dots, m_{p-1}) \in \F_p^{p-1}$ we set $a^{\mathbf{m}}$ to be the tuple $(a^{m_1}, \dots, a^{m_{p-1}})$ (and tuples are combined appropriately).
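The recursion above can be made concrete by letting automorphisms act on words of bounded length. The following Python sketch is our own illustration (not part of the paper): it fixes $p = 5$, $r = 1$ and the defining vector $(1, 2, 0, 0)$, implements the rooted generator $a$ and the directed generator $b$ as functions on tuples over $X$, and checks that $b$ stabilises the first layer, that its sections are as prescribed, and that $b$ has order $p$.

```python
# Illustrative model (not from the paper) of the generators of a GGS-group:
# automorphisms act on vertices of the p-regular tree, encoded as tuples
# over X = {0, ..., p-1}, truncated at a fixed depth.

from itertools import product

p = 5
e = (1, 2, 0, 0)   # defining vector: the sections of b are (b, a^{e_1}, ..., a^{e_{p-1}})

def act_a(word, k=1):
    """Rooted automorphism a^k: add k to the first letter modulo p."""
    if not word:
        return word
    return ((word[0] + k) % p,) + word[1:]

def act_b(word, n=1):
    """Directed automorphism b^n along the ray 0, 00, 000, ...:
    the section at 0 is b^n again; the section at x > 0 is a^{n*e_x}."""
    if not word:
        return word
    x = word[0]
    if x == 0:
        return (0,) + act_b(word[1:], n)
    return (x,) + act_a(word[1:], n * e[x - 1])

def words(depth):
    """All words over X of length at most `depth`."""
    return [w for d in range(depth + 1) for w in product(range(p), repeat=d)]

# b stabilises the first layer:
assert all(act_b(w)[0] == w[0] for w in words(3) if w)
# the sections of b are as prescribed: b at the vertex 0, a^{e_x} at x > 0
assert all(act_b((0,) + w)[1:] == act_b(w) for w in words(2))
assert all(act_b((1,) + w)[1:] == act_a(w, e[0]) for w in words(2))
# b has order p (on every truncation)
for w in words(3):
    v = w
    for _ in range(p):
        v = act_b(v)
    assert v == w
```

The same representation extends to $r > 1$ by keeping one defining vector per basis element of $\mathbf{E}$; only the single-vector case is shown here.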
The associated map $b^{\bullet}\colon \F_p^{r} \to \Aut(X^*)$ is an injective group homomorphism. We write $B$ for the image $b^{\F_p^{r}}$. \begin{definition} The \emph{multi-\GGS-group associated to $\mathbf{E}$} is the group $G_\mathbf{E}$ of automorphisms generated by the set \[ A \cup B. \] \end{definition} The subgroup $A$ (shared by all \mGGS-groups) is called the \emph{rooted group}, and the subgroup $B$ is called the \emph{directed group}. The generating set in the definition is clearly not minimal; a minimal generating set is given by $\{a\} \cup \{ b^{\mathbf{s}_j} \mid j \in \{1, \dots, r\} \}$. If the dimension $r$ of $\mathbf{E}$ is $1$, one usually speaks of a \GGS-group rather than a \mGGS-group. In this case, abusing notation, we write $b$ for $b^{\mathbf{s}_1}$. Depending on the space $\mathbf{E}$, we distinguish three classes of \mGGS-groups: \begin{enumerate} \item If $\mathbf{E} = \{ (\lambda, \dots, \lambda) \in \F_p^{p-1} \mid \lambda \in \F_p \}$, we call $G_{\mathbf{E}}$ the \emph{constant \GGS-group}. This special case behaves very differently to all other \mGGS-groups; we will, for the most part, exclude it from our considerations. \item If $r=1$, the space $\mathbf{E}$ is contained in $\{ (\lambda_1, \dots, \lambda_{p-1}) \in \F_p^{p-1} \mid \lambda_i = \lambda_{p-i} \text{ for all } i \in \{ 1, \dots, p-1\} \}$, and $G_{\mathbf{E}}$ is not the constant \GGS-group, we call $G_{\mathbf{E}}$ a \emph{symmetric} \GGS-group. \item If $G_{\mathbf{E}}$ is neither constant nor symmetric, we call it a \emph{regular} \mGGS-group. \end{enumerate} We record some of the key properties of \mGGS-groups that have been established in the literature, cf.\ \cite[Proposition~3.3]{KT18} \& \cite[Lemma~2]{GU19} for statement (i), \cite[Proposition~4.3 and Proposition~3.2]{KT18} for statements (ii) and (iii), \cite[Lemma~3.5]{FZ13} for (iv), \cite[Theorem~C]{FGU17} for (v), and \cite[Proposition~3.1]{AKT16} for (vi). 
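The trichotomy constant / symmetric / regular is effectively decidable from a basis of $\mathbf{E}$. The following Python sketch is our own illustration (the function name `classify` does not occur in the paper): it takes the rows of the matrix $E$ and evaluates the three defining conditions literally.

```python
# Decide, from a basis of E (the rows of the matrix E over F_p, with entries
# indexed by 1, ..., p-1), whether the multi-GGS-group G_E is constant,
# symmetric or regular, following the case distinction in the text.

def classify(E_rows, p):
    r = len(E_rows)
    if r == 1:
        v = [x % p for x in E_rows[0]]
        # constant: E = { (l, ..., l) | l in F_p }
        if len(set(v)) == 1:
            return "constant"
        # symmetric: lambda_i = lambda_{p-i}; entry i is stored at index i-1
        if all(v[i] == v[(p - 2) - i] for i in range(p - 1)):
            return "symmetric"
    # everything else (in particular every E with r >= 2) is regular
    return "regular"

assert classify([(1, 1, 1, 1)], 5) == "constant"
assert classify([(1, 2, 2, 1)], 5) == "symmetric"
assert classify([(1, -1, 0, 0)], 5) == "regular"   # the Gupta--Sidki 5-group
```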
\begin{theorem}\label{AutmGGS:thm:facts} Let $G = G_{\mathbf{E}}$ be a \mGGS-group. Then the following statements hold. \begin{enumerate} \item If $G$ is regular, it is regular branch over the derived subgroup $G'$, and the equality $\psi_1(\Stab_G(1)') = G' \dottimes{p} G'$ holds. \item The abelianisation of $G$ is an elementary abelian $p$-group of rank $r+1$. \item If $G$ is not constant, it is regular branch over $\gamma_3(G)$, such that $\psi_1(\gamma_3(\Stab_G(1))) = \gamma_3(G) \dottimes{p} \gamma_3(G)$. \item If $G$ is a symmetric \GGS-group, the intersection $\psi_1(G) \cap G' \dottimes{p} G'$ fulfils \[ \psi_1(G) \cap (G' \dottimes{p} G') = \{ (g_0, \dots, g_{p-1}) \mid \textstyle\prod_{i = 0}^{p-1}g_i \in \gamma_3(G)\}, \] and thus is of index $p$ in $G' \dottimes{p} G'$. \item Every \mGGS-group is self-similar and fractal. \end{enumerate} \end{theorem} It is fruitful to introduce the following overgroup to deal with the special case of symmetric \GGS-groups. \begin{definition} Let $G$ be a \mGGS-group. Set \( \underline{c} = \psi_1^{-1}([b^{\mathbf{s}_1}, a], \id, \dots, \id). \) The \emph{regularisation $G_{\reg}$ of $G$} is the group \[ G_{\reg} = \left\langle G \cup \{ \underline{c} \} \right\rangle. \] \end{definition} We record the following lemma on the regularisation of a \mGGS-group. \begin{lemma}\label{AutmGGS:lem:facts implications for Greg} Let $G$ be a non-constant \mGGS-group. Then the following statements hold. \begin{enumerate} \item $G_{\reg} = G$ if and only if $G$ is regular. \item If $G$ is symmetric, then $|G_{\reg}: G| = p$. \item The derived subgroups of $G$ and $G_{\reg}$ are equal. \end{enumerate} \end{lemma} The first two statements are immediate consequences of \cref{AutmGGS:thm:facts}.
The last statement also follows, in view of \[ \psi_1([b^{\mathbf{s}_1}, \underline{c}]) = ([b^{\mathbf{s}_1}, [b, a]], \id, \dots, \id) \in \gamma_3(G) \dottimes{p} \gamma_3(G), \] and of \[ \psi_1([a, \underline{c}]) = ([b,a], \id, \dots, \id, [b,a]^{-1}) \in \{ (g_0, \dots, g_{p-1}) \mid \textstyle\prod_{i = 0}^{p-1}g_i \in \gamma_3(G)\}, \] from \cref{AutmGGS:thm:facts}~(iii) and (iv). \subsection{Constructions within \(\Aut(X^*)\)} We introduce some notation. Let $g \in \Aut(X^\ast)$ and $n \in \N$. We define the \emph{$n$\textsuperscript{th}~diagonal of~$g$} as the element \[ \kappa_n(g) = \psi_n^{-1}(g, \dottimes{p^n}, g). \] Analogously, for any subset $G \subseteq \Aut(X^\ast)$ we define \( \kappa_n(G) = \{ \kappa_n(g) \mid g \in G\}. \) Note that if $G$ is a group, the set $\kappa_n(G)$ is a group isomorphic to $G$. \begin{definition} Let $S \subseteq \Aut(X^\ast)$ be a set of tree automorphisms. The \emph{diagonal~closure of~$S$} is the set \[ \overline{S} = \left\{ \prod_{i = 0}^\infty \kappa_i(s_i) \;\middle|\; s_i \in S \text{ for } i \in \N \right\}. \] \end{definition} Since the $n$\textsuperscript{th} factor of the infinite product is contained in $\Stab(n)$, the product is well-defined, because $\Aut(X^*)$ is complete with respect to the (profinite) topology induced by the layer stabilisers. Note that the diagonal closure is in general not a subgroup, even if $S \leq \Aut(X^\ast)$ is one. \begin{definition} Let $S = \rooted(\Sigma)$ be a group of rooted automorphisms of $\Aut(X^*)$. The group $$ \kappa_{\infty}(S) = \langle \kappa_n(s) \mid n \in \N, s \in S \rangle $$ is called the \emph{group of layerwise constant labels in $\Sigma$}. \end{definition} It is easy to see that $\kappa_{\infty}(S) \cong \prod_{\omega} S$, and $\overline{\kappa_{\infty}(S)} = \overline S$. \subsection{Coordinates for \mGGS-groups} We first establish the following lemma, which allows us to uniquely describe elements of the first layer stabiliser in terms of `coordinates'.
To be precise, we construct an isomorphism \[ \Stab_G(1) \cong (G' \dottimes{p} G') \rtimes B. \] This uses the fact that, also modulo $\psi_1^{-1}(G' \dottimes{p} G')$, the labels $g|^x$ at first layer vertices of an element $g \in \Stab_G(1)$ are completely determined by the image of $g$ in $G/G'$. Recall that $\mathbf{e}_i$ is the $i$\textsuperscript{th} column of $E$, and that $\mathbf{e}_0$ denotes the zero column vector of length $r$. \begin{lemma}\label{AutmGGS:lem:global equations} Let $G$ be a non-constant \mGGS-group. Let $g_0, \dots, g_{p-1} \in G$ be a collection of elements of $G$. Then \[ \psi_1^{-1}(g_0, \dots, g_{p-1}) \in \Stab_{G_{\reg}}(1) \] if and only if there exist $\mathbf{n}_k \in \F_p^{r}$ and $y_k \in G'$ for $k \in \{0, \dots, p-1\}$ such that \[ g_k = a^{s_k} b^{\mathbf{n}_k} y_k, \quad\text{where}\quad s_k = \sum_{i = 0}^{p-1} \mathbf{n}_i \cdot \mathbf{e}_{k-i}, \] with the indices of the $\mathbf{e}_{k-i}$ taken modulo $p$. \end{lemma} \begin{proof} We first prove that every tuple of sections of the prescribed form is realised by an element of the regularisation. Fix some $\mathbf{n}_k \in \F_p^r$ and $y_k \in G'$ for all $k \in X$. Then the element \[ g = b^{\mathbf{n}_0}(b^{\mathbf{n}_1})^{a^{p-1}} \dots (b^{\mathbf{n}_{p-1}})^{a} \in \Stab_G(1) \] fulfils \begin{align*} g|_k &= a^{\mathbf{n}_{0} \cdot \mathbf{e}_{k}} a^{\mathbf{n}_{1} \cdot \mathbf{e}_{{k-1}}} \dots a^{\mathbf{n}_{k-1} \cdot \mathbf{e}_{{1}}} b^{\mathbf{n}_{k}} a^{\mathbf{n}_{k+1} \cdot \mathbf{e}_{{p-1}}} \dots a^{\mathbf{n}_{p-1} \cdot \mathbf{e}_{{k+1}}}\\ &= a^{s_k} b^{\mathbf{n}_{k}} \tilde y_k \end{align*} for some $\tilde y_k \in G'$. We have \[ \Stab_{G_{\reg}}(1) \geq \langle \Stab_G(1)' \cup \{ \underline{c} \} \rangle^G = \psi_1^{-1}(G' \times \dots \times G'), \] which follows directly from \cref{AutmGGS:thm:facts}~(i) for regular $G$.
For symmetric $G$, by \cref{AutmGGS:thm:facts}~(iv), it is enough to show that $\langle \underline{c} \rangle^G = \psi_1^{-1}(G' \times \dots \times G')$. Clearly $[b,a]$ normally generates $G'$. The conjugates of $\underline{c}$ have only one non-trivial section, which is equal to $[b,a]$. The statement follows, since $G$ is fractal. Thus the element $y = \psi_1^{-1}(\tilde y_0^{-1}y_0, \dots, \tilde y_{p-1}^{-1}y_{p-1})$ is contained in $G_{\reg}$, so the element \[ gy = \psi_1^{-1}(a^{s_0}b^{\mathbf{n}_{0}}y_0, \dots, a^{s_{p-1}}b^{\mathbf{n}_{p-1}}y_{p-1}) \in G_{\reg} \] has the prescribed sections. Now let $g \in \Stab_{G_{\reg}}(1)$. Up to $\psi_1^{-1}(G' \dottimes{p} G')$, i.e.\ up to the choice of $y_k \in G'$ for $k \in \{0, \dots, p-1\}$, we may calculate modulo the subgroup $L \defeq \langle \Stab_G(1)' \cup \{ \underline{c} \} \rangle^G \leq G_{\reg}$. Thus there are $\mathbf{n}_k \in \F_p^r$ for $k \in X$ such that \[ g \equiv_L b^{\mathbf{n}_{0}}(b^{\mathbf{n}_{1}})^{a^{p-1}}\dots (b^{\mathbf{n}_{p-1}})^{a}. \] Taking sections as we did above shows that $g|_k \equiv_{G'} a^{s_k}b^{\mathbf{n}_{k}}$. \end{proof} Given $g \in \Stab_{G_{\reg}}(1)$, we call the vectors $\mathbf{n}_k$ introduced in \cref{AutmGGS:lem:global equations} the \emph{$B$-coordinates of~$g$}, and the collection of elements $y_k \in G'$ the \emph{$L$-coordinates~of~$g$}. The elements $s_k$ (since they are fixed by the $B$-coordinates) are called the \emph{forced~$A$-coordinates~of~$g$}. \subsection{Strategy for the proof of \cref{AutmGGS:thm:main}} By \cite[Theorem~1]{GW03} and \cite[Proposition~3.7]{KT18}, the automorphism group of $G$ coincides with the normaliser of $G$ in $\Aut(X^*)$. Hence we compute this normaliser $\Nor(G)$. In general, for any $H \leq \Aut(X^*)$, we denote by $\Nor(H)$ (without subscript) the normaliser of $H$ in $\Aut(X^*)$. 
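The decomposition of $\Nor_{\Sym(X)}(\Sigma)$ used in the sequel, namely $\Sigma \rtimes \Delta$ with $\Delta \cong \F_p^\times$ acting by multiplication, can be confirmed by brute force for a small prime. The following Python sketch is our own illustration (not part of the paper), with $p = 5$:

```python
# Brute-force check, for p = 5, that the normaliser of Sigma = <sigma> in
# Sym(X) has order p*(p-1), with the multiplication maps x -> u*x (u in F_p^*)
# realising the complement Delta.

from itertools import permutations

p = 5

def compose(f, g):                 # (f*g)(x) = f(g(x)); permutations as tuples
    return tuple(f[g[x]] for x in range(p))

def inverse(f):
    inv = [0] * p
    for x in range(p):
        inv[f[x]] = x
    return tuple(inv)

sigma = tuple((x + 1) % p for x in range(p))     # the p-cycle (0 1 ... p-1)
Sigma = set()
s = tuple(range(p))
for _ in range(p):                               # Sigma = {id, sigma, ..., sigma^{p-1}}
    Sigma.add(s)
    s = compose(sigma, s)

normaliser = [g for g in permutations(range(p))
              if compose(compose(g, sigma), inverse(g)) in Sigma]
assert len(normaliser) == p * (p - 1)            # = |Sigma| * |F_p^times| = 20

Delta = {tuple((u * x) % p for x in range(p)) for u in range(1, p)}
assert all(d in normaliser for d in Delta)
```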
The normaliser of $\Sigma$ in $\Sym(X)$ has the form $\Nor_{\Sym(X)}(\Sigma) \cong \Sigma \rtimes \Delta$, where $\Delta \cong \F_p^\times$ acts on $\Sigma \cong \F_p$ by multiplication. Heuristically, the automorphism group of a \mGGS-group $G$ allows for a similar decomposition. Since $G$ is contained in $\lab(\Sigma)$, its normaliser is contained in $\lab(\Nor_{\Sym(X)}(\Sigma))$, which decomposes as described above. The normaliser of $G$ within $\lab(\Sigma)$ is not identical to $G$, but turns out to be closely related. Apart from $G$ being symmetric or not, the structure of $\mathbf{E}$ only comes into play when considering the normaliser of $G$ in $\lab(\Delta)$. We first consider the normaliser of $G$ in an appropriate closure within $\lab(\Sigma)$. Then we prove that the full normaliser splits as a semidirect product of the normaliser of $G$ within said closure and the normaliser of $G$ within an appropriate subgroup of $\lab(\Delta)$. Finally, we compute the normaliser of $G$ within $\lab(\Delta)$, and combine our results. \section{The normaliser in \(\overline{G_{\reg}}\)} We begin our study of elements normalising $G$. Adapting the strategy of Sidki in~\cite{Sid87}, we start not with the normaliser in the full automorphism group, but rather in the group $\overline{G_{\reg}} \geq G$. This is a natural candidate, since it contains the normaliser of the rooted group $A$ (cf.\ \cref{AutmGGS:lem:norm a}) and the group $G$ itself. \begin{lemma}\label{AutmGGS:lem:g infty diagonally closeds} Let $G$ be a non-constant \mGGS-group. Then \[ \kappa_1(G_{\reg}) \leq \kappa_{\infty}(A) \cdot G_{\reg}. \] \end{lemma} \begin{proof} We check that the generators of $\kappa_1(G_{\reg})$ are contained in the group on the right hand side. Clearly $\kappa_1(a) \in \kappa_{\infty}(A)$. To see that $\kappa_1(b^{\mathbf{s}_j})$ is contained in $\kappa_{\infty}(A) \cdot G_{\reg}$ for a given $j \in \{ 1, \dots, r\}$, we use \cref{AutmGGS:lem:global equations}.
We have no choice for the set of $B$-coordinates; since $\kappa_1(b^{\mathbf{s}_j})|_x = b^{\mathbf{s}_j}$ for all $x \in X$, they are all equal to $\mathbf{s}_j$. Thus we compute the forced $A$-coordinates \[ s_k = \sum_{i = 0}^{p-1} \mathbf{n}_i \cdot \mathbf{e}_{{k-i}} = \sum_{i = 0}^{p-1}\mathbf{s}_j \cdot \mathbf{e}_{{k-i}} = \sum_{i = 0}^{p-1} e_{j, k-i}, \] where $e_{j, k-i}$ is the respective entry of $E$. Consequently, all forced $A$-coordinates are equal to some fixed $s \in \F_p$ and independent of $k$. Hence \[ \kappa_1(a^s b^{\mathbf{s}_j}) \in G_{\reg}. \] Since we have already established that $\kappa_1(a) \in \kappa_{\infty}(A)$, this implies that $\kappa_1(b^{\mathbf{s}_j})$ is contained in $\kappa_{\infty}(A) \cdot G_{\reg}$ for all $j \in \{1, \dots, r\}$. Finally, in the case that $G$ is symmetric, we have $[\kappa_1(a), b] = \underline{c}$, and hence \[ [\kappa_2(a), \kappa_1(b)] = \kappa_1([\kappa_1(a), b]) = \kappa_1(\underline{c}).\qedhere \] \end{proof} The problem of determining the normaliser is easily solved for the rooted group $A$; determining the normaliser of $B$ is significantly harder. \begin{lemma}\label{AutmGGS:lem:norm a} The centraliser and the normaliser of the rooted group $A$ are given by \begin{align*} \Cen(A) &= \kappa_1(\Aut(X^*)) \rtimes A, \quad \text{and}\\ \Nor(A) &= \kappa_1(\Aut(X^*)) \rtimes \rooted(\Nor_{\Sym(X)}(\Sigma)). \end{align*} \end{lemma} \begin{proof} Given $x \in X$, we have \(a^g|_x = \id\) if and only if $g|_{x} = g|_{x+1}$; hence we have $a^g|_x = a^j|_x$ for some fixed $j \in \{1, \dots, p-1\}$ and all $x \in X$ if and only if $g|_0 = g|_x$ for all $x \in X$. The image of $a$ under conjugation with $g$ now only depends on $g|^\varnothing$, hence we only need to observe that $\Cen_{\Sym(X)}(\Sigma) = \Sigma$. \end{proof} \begin{lemma}\label{AutmGGS:lem:centraliser and normaliser of b restricts to itself in section at 0} Let $G$ be a non-constant \mGGS-group.
Then $\Nor(B) \leq \stab(0)$, the point stabiliser of the vertex $0 \in X$, and \[ \Nor(B)|_0 \leq \Nor(B) \qund \Cen(B)|_0 \leq \Cen(B). \] \end{lemma} \begin{proof} Let $g \in \Nor(B)$. Then there is some $\mathbf{n} \in \F_p^r\smallsetminus\{0\}$ such that $(b^{\mathbf{s}_1})^g = b^{\mathbf{n}}$. If $0^{g^{-1}} = x \neq 0$, we see that \[ b^{\mathbf{n}} = b^{\mathbf{n}}|_0 = (b^{\mathbf{s}_1})^g|_0 = (a^{\mathbf{b}_{1, x}})^{g|_0}. \] But a rooted automorphism cannot be conjugate to a directed automorphism. Thus $g \in \stab(0)$. Similarly, we find \[ b^{\mathbf{n}} = (b^{\mathbf{s}_1})^g|_0 = (b^{\mathbf{s}_1}|_0)^{g|_0} = (b^{\mathbf{s}_1})^{g|_0}. \] This shows (allowing $\mathbf{n} = \mathbf{s}_1$) both other statements. \end{proof} For the next lemma we introduce the (word) length function $\ell \colon G \to \N$ with respect to the generating set $A \cup B$, i.e.\ the mapping \[ \ell(g) = \min\{n \in \N \mid g \text{ can be written as a product of length $n$ in } A \cup B\}. \] It is well-known that this length function is \emph{contracting}, i.e.\ that $\ell(g|_x) \leq \ell(g)$ for $x \in X$. We need some finer analysis to establish a strict inequality in certain cases. Note that the strictness of the inequality above, for a more general class of self-similar groups $G$, is related to $G$ being a periodic group. \begin{lemma}\label{AutmGGS:lem:length reduction alternating} Let $G$ be a non-constant \mGGS-group, and let $g \in G$ be an element with $\ell(g) > 1$. Then there is some $i \in X\smallsetminus\{0\}$ such that $\ell(g|_0g|_i^{-1}) < \ell(g)$. \end{lemma} \begin{proof} Write $g = a^{i_0} b^{\mathbf{n}_{0}} \dots a^{i_{m-1}} b^{\mathbf{n}_{m-1}} a^{i_m}$, where $m \in \N$, $\mathbf{n}_{k} \in \F_p^r\smallsetminus\{\mathbf{0}\}$ and $i_{k} \in \Z\smallsetminus\{0\}$ for $k \in \{0, \dots, m-1\}$, and $i_{m} \in \Z$. Passing to a conjugate if necessary, every $g \in G$ can be written in this way.
Taking sections, we see that \[ g|_k = b^{\mathbf{n}_{0}}|_{k-i_0} (b^{\mathbf{n}_{1}})|_{k-i_0-i_1} \dots (b^{\mathbf{n}_{m-1}})|_{k - \sum_{t = 0}^{m-1}i_t} \] for any $k \in X$, hence $\ell(g|_k) \leq m$. Since every $B$-letter contributes at most one $B$-letter to one of the sections, we have $\sum_{k=0}^{p-1} \ell(g|_k) \leq \ell(g) + p - 1$. Assume that $\ell(g|_0g|_1^{-1}) \geq \ell(g)$. Then $\ell(g|_0) + \ell(g|_1) \geq \ell(g)$, hence $\ell(g|_0) = \ell(g|_1) = m$. This can only be the case if every $B$-letter contributes its only section that is contained in $B$ either to $g|_0$ or to $g|_1$, i.e.\ $g|_k \in A$ for all other $k \in X$. Thus, if $m > 2$, we have \[ \ell(g|_0g|_k^{-1}) \leq \ell(g|_0) + \ell(g|_k) \leq m + 1 \leq 2m - 1 < \ell(g). \] If $m = 2$, we see that $g = a^{i_0}b^{\mathbf{n}_{0}}$, implying $g|_0 \in A$. Since there is at least a second section contained in $A$, the result follows. \end{proof} \begin{lemma}\label{AutmGGS:lem:Cen a into G1 basic} Let $G$ be a non-constant \mGGS-group. Then \[ \Nor_{\Cen(A)}(G) \cap (\Cen(B)\cdot G) \subseteq \kappa_{\infty}(A) \cdot G_{\reg}. \] \end{lemma} \begin{proof} Let $g \in \Nor_{\Cen(A)}(G) \cap (\Cen(B)\cdot G)$ and let $h \in G$ be an element of minimal length such that we may write $g = g' h$ for some $g' \in \Cen(B)$. The proof uses induction on the length of $h$. First assume that $h$ has length one, i.e.\ $h \in A \cup B$. If $h$ is in $B$, we find that $g \in \Cen(B)$. Thus $g$ centralises $G$, but it is well-known that the centraliser of a branch group in $\Aut(X^*)$ is trivial; hence $g = \id$. If $h$ is a power of $a$, the same holds for $gh^{-1}$, hence $g \in A \leq G_{\reg}$. Now we assume that $\ell(h) > 1$.
By \cref{AutmGGS:lem:norm a} we may write $g = \kappa_1(g|_0)a^k$ for some $k \in \Z$, yielding for any $\mathbf n \in \F_p^r$ \begin{align*} ((b^{\mathbf{n}})^{g|_0}, (a^{\mathbf n \cdot \mathbf{e}_{{1}}})^{g|_0}, \dots, (a^{\mathbf n \cdot \mathbf{e}_{{p-1}}})^{g|_0})^{a^k} &= \psi_1((b^{\mathbf{n}})^g) = \psi_1((b^{\mathbf{n}})^h)\\ &= ((b^{\mathbf n})^{h|_0}, (a^{\mathbf n \cdot \mathbf{e}_{{1}}})^{h|_1}, \dots, (a^{\mathbf n \cdot \mathbf{e}_{{p-1}}})^{h|_{p-1}})^{h|^\varnothing}. \end{align*} Since $a$ and $b^{\mathbf n}$ are not conjugate in $\Aut(X^\ast)$, this shows that $g|^\varnothing = h|^\varnothing$ and $a^{g|_0} = a^{h|_i}$ for all $i \in X\smallsetminus\{0\}$. Thus $g|_0h|_i^{-1}$ centralises $A$, and by \cref{AutmGGS:lem:centraliser and normaliser of b restricts to itself in section at 0} we find $(b^{\mathbf n})^{g|_0h|_i^{-1}} = (b^{\mathbf n})^{h|_0h|_i^{-1}} \in B^G$, hence $g|_0h|_i^{-1}$ normalises $G$. By \cref{AutmGGS:lem:length reduction alternating}, there is some $i \in X\smallsetminus\{0\}$ such that $\ell(h|_0h|_i^{-1}) < \ell(h)$, so by induction we see that $g|_0h|_i^{-1} \in \kappa_{\infty}(A) \cdot G_{\reg}$. Since $h|_i \in G$, we have $g|_0 \in \kappa_{\infty}(A) \cdot G_{\reg}$, and $$ g = \kappa_1(g|_0) a^{k} = a^k \kappa_1(g|_0) \in A \cdot \kappa_1(\kappa_{\infty}(A) \cdot G_{\reg}) = \kappa_{\infty}(A) \cdot \kappa_1(G_{\reg}). $$ Now \cref{AutmGGS:lem:g infty diagonally closeds} yields $g \in \kappa_{\infty}(A) \cdot G_{\reg}$. \end{proof} With a little care, we can use the same idea to extend the result to $G_{\reg}$. \begin{lemma}\label{AutmGGS:lem:Cen a into G1} Let $G$ be a non-constant \mGGS-group. Then $$\Nor_{\Cen(a)}(G) \cap (\Cen(b) \cdot G_{\reg}) \leq \kappa_{\infty}(A) \cdot G_{\reg}.$$ \end{lemma} \begin{proof} In view of the previous lemma, we may restrict to symmetric $G$. 
Let $g \in \Nor_{\Cen(A)}(G) \cap (\Cen(B) \cdot G_{\reg})$ and choose $g'\in \Cen(B)$, $h \in G$ and $j \in \Z$, such that $g = g'\underline{c}^jh$. Write $g = \kappa_1(g|_0)a^k$ for some $k \in \Z$, and calculate \begin{align*} (b^{g|_0}, (a^{\mathbf{e}_{{1}}})^{g|_0}, \dots, (a^{\mathbf{e}_{{p-1}}})^{g|_0})^{a^k} &= \psi_1(b^g) = \psi_1(b^{\underline{c}^jh}) \\&= (b^{\underline{c}^j|_0h|_0}, (a^{\mathbf{e}_{{1}}})^{h|_1}, \dots, (a^{\mathbf{e}_{{p-1}}})^{h|_{p-1}})^{h|^\varnothing}. \end{align*} As we did in the proof of \cref{AutmGGS:lem:Cen a into G1 basic}, we may conclude that $h|^\varnothing = a^k|^\varnothing$. Consequently, for all $i \in X\smallsetminus\{0\}$, the element $g|_0h|_i^{-1}$ centralises $a$. By \cref{AutmGGS:lem:centraliser and normaliser of b restricts to itself in section at 0}, the element $g|_0$ is in $\Cen(B) \cdot \underline{c}^j|_0 h|_0$. Since $\underline{c}^j|_0 = [b,a]^j \in G$, this implies that $g|_0h|_i^{-1}$, for all $i \in X\smallsetminus\{0\}$, is an element in $\Nor_{\Cen(A)}(G) \cap (\Cen(B) \cdot G)$. By \cref{AutmGGS:lem:Cen a into G1 basic}, the element $g|_0h|_i^{-1}$, and consequently also $g|_0$, is contained in $\kappa_{\infty}(A) \cdot G_{\reg}$. Finally, by \cref{AutmGGS:lem:g infty diagonally closeds}, we find $g = \kappa_1(g|_0)a^k \in \kappa_{\infty}(A) \cdot G_{\reg}$. \end{proof} \begin{lemma}\label{AutmGGS:lem:norm = cent} Let $G$ be a non-constant \mGGS-group. Then \begin{align*} \Nor_{\operatorname{lab}(\Sigma)}(A) \leq \Cen(A) \qund \Nor_{\operatorname{lab}(\Sigma)}(B) \leq \Cen(B). \end{align*} \end{lemma} \begin{proof} We use the description of $\Nor(A)$ given in \cref{AutmGGS:lem:norm a}. Let $g \in \Nor_{\lab(\Sigma)}(A)$. For all $h \in \lab(\Sigma)$, we have $h|^\varnothing \in \langle \sigma \rangle$. Thus we see that $a^g = a^{\kappa_1(g|_0)\rooted(g|^\varnothing)} = a^{\rooted(g|^\varnothing)} = a$.
Now let $g \in \Nor_{\lab(\Sigma)}(B)$, let $\mathbf n \in \F_p^r$ be arbitrary and let $\mathbf{m} \in \F_p^r$ be such that $(b^{\mathbf{n}})^g = b^{\mathbf{m}}$. Then \[ (b^{\mathbf{m}}, a^{\mathbf m \cdot \mathbf{e}_{{1}}}, \dots, a^{\mathbf m \cdot \mathbf{e}_{{p-1}}}) = \psi_1(b^{\mathbf m}) = \psi_1((b^{\mathbf n})^g) = ((b^{\mathbf n})^{g|_0}, (a^{\mathbf n \cdot \mathbf{e}_{{1}}})^{g|_1}, \dots, (a^{\mathbf n \cdot \mathbf{e}_{{p-1}}})^{g|_{p-1}})^{g|^\varnothing}. \] The label $g|^\varnothing$ is a power of $a|^\varnothing$. Since $b^{\mathbf{n}}$ and $a$ are not conjugate in $\Aut(X^\ast)$, the element $g|^\varnothing$ must stabilise the vertex $0$, thus it is trivial. Varying $\mathbf{n}$, we see that $g|_i$ normalises $A$ for all $i \in X\smallsetminus\{0\}$ for which $\mathbf{e}_{{i}} \neq \mathbf{0}$. Now since $\lab(\Sigma)$ is self-similar, this implies $g|_i \in \Cen(A)$ by the first part of this lemma, hence $a^{\mathbf n \cdot \mathbf{e}_{{i}}} = a^{\mathbf m \cdot \mathbf{e}_{{i}}}$ for all $i \in X\smallsetminus\{0\}$. Thus $b^{\mathbf m} = b^{\mathbf n}$. Since $g|_0 \in \Nor_{\lab(\Sigma)}(B)$ by \cref{AutmGGS:lem:centraliser and normaliser of b restricts to itself in section at 0}, we can argue in the same way for $g|_0$, hence $g \in \Cen(B)$. \end{proof} \begin{lemma}\label{AutmGGS:lem:finite words in g1} Let $G$ be a non-constant \mGGS-group. Then $$\Nor_{\overline{G_{\reg}}}(G) \subseteq \kappa_{\infty}(A)\cdot G_{\reg}.$$ \end{lemma} \begin{proof} Let $g \in \Nor_{\overline{G_{\reg}}}(G)$. There is a sequence $(g_i)_{i\in \N}$ with $g_i \in G_{\reg}$ such that \[ g = \prod_{i = 0}^{\infty} \kappa_i(g_i). \] Write $h_n$ for the partial product $\prod_{i = 0}^{n} \kappa_i(g_i)$. By \cref{AutmGGS:lem:g infty diagonally closeds}, we find $h_n \in \kappa_{\infty}(A) \cdot G_{\reg}$ for all $n \in \N$.
We may write \begin{align*} g|_{0^n} = h_n|_{0^n} \left(\prod_{i = n+1}^{\infty} \kappa_i(g_i)\right)|_{0^n.h_n} = h_n|_{0^n} \prod_{i = 1}^{\infty} \kappa_i(g_{i+n}). \end{align*} In view of \cref{AutmGGS:lem:norm a}, we conclude that $h_n|_{0^n}^{-1}g|_{0^n} \in \Cen(A)$. There is nothing special about $0^n$; indeed, we see that $h_n|_v^{-1}g|_v = h_n|_{0^n}^{-1}g|_{0^n}$ for all $v \in X^n$. By \cite[Lemma 3.4]{Pet19}, there exists an integer $n \in \N$ such that $g|_{0^n} \in \Nor(B)$. By \cref{AutmGGS:lem:norm = cent} $g|_{0^n} \in \Cen(B)$. Consequently \[ B^{g|_{0^n}^{-1}h_n|_{0^n}} = B^{h_n|_{0^n}}. \] Since $h_n|_{v} \in G_{\reg}$ for all $v \in X^n$, we may use \cref{AutmGGS:lem:Cen a into G1} and obtain $g|_{0^n}^{-1}h_n|_{0^n} \in \kappa_{\infty}(A) \cdot G_{\reg}$. Using \cref{AutmGGS:lem:g infty diagonally closeds} again, we find $\kappa_n(h_n|_{0^n}^{-1}g|_{0^n}) \in \kappa_{\infty}(A) \cdot G_{\reg}$, and moreover \[ g = h_n \psi_n^{-1}(h_n|_{0^n}^{-1}g|_{0^n}, \dots, h_n|_{(p-1)^n}^{-1}g|_{(p-1)^n}) = h_n \kappa_n(h_n|_{0^n}^{-1}g|_{0^n}) \in \kappa_{\infty}(A) \cdot G_{\reg}.\qedhere \] \end{proof} \begin{lemma}\label{AutmGGS:lem:glay and its normalising part} Let $G$ be a non-constant \mGGS-group. Write $G_{\lay}$ for the product set $\kappa_{\infty}(A) \cdot G_{\reg}$. \begin{enumerate} \item If $G$ is regular, then we have $\kappa_{\infty}(A) \leq \Nor_{\Aut(X^*)}(G)$, hence $G_{\lay}$ acquires the structure of a semidirect product. \item If $G$ is symmetric, we find $\Nor_{\kappa_{\infty}(A)}(G) = A$. \item Write $G_{\lay}$ for product set in \textup{(i)}. Then \[ \Nor_{G_{\lay}}(G) = \begin{cases} G_{\lay} &\text{ if }G\text{ is regular},\\ G_{\reg} &\text{ if }G\text{ is symmetric.}\\ \end{cases} \] \end{enumerate} \end{lemma} \begin{proof} Let $n \in \N$. 
Clearly $a^{\kappa_n(a)} = a$, and for all $j \in \{1, \dots, r \}$ \begin{align*} [b^{\mathbf{s}_j},\kappa_n(a)] &= \psi_1^{-1}([b^{\mathbf{s}_j}, \kappa_{n-1}(a)], [a^{\mathbf{b}_{j,1}}, \kappa_{n-1}(a)], \dots, [a^{\mathbf{b}_{j,p-1}}, \kappa_{n-1}(a)])\\ &= \psi_n^{-1}([b^{\mathbf{s}_j},a], \id, \dots, \id) \in \psi_n^{-1}(G' \times \dots \times G') \leq G. \end{align*} This shows (i), and it also shows that $\kappa_n(a)$ does not normalise a symmetric \GGS-group $G$ for $n > 0$, since $\psi_1^{-1}([b, a], \id, \dots, \id) \notin G$. Thus (ii) is proven. Statement (iii) is a consequence of (i) in case $G$ is regular, and an immediate consequence of \cref{AutmGGS:lem:facts implications for Greg}~(iii) in case $G$ is symmetric. \end{proof} \begin{proposition}\label{AutmGGS:prop:norm in g1} Let $G$ be a non-constant \mGGS-group. Then \[ \Nor_{\overline{G_{\reg}}}(G) = \begin{cases} G \rtimes \kappa_{\infty}(A), &\text{ if $G$ is regular},\\ G_{\reg}, &\text{ if $G$ is symmetric.} \end{cases} \] \end{proposition} \begin{proof} Assume that $G$ is regular. By \cref{AutmGGS:lem:glay and its normalising part}, the set $\kappa_{\infty}(A) \cdot G_{\reg}$ is a group. In view of \cref{AutmGGS:lem:finite words in g1} and $G = G_{\reg}$, this proves the first case. If $G$ is symmetric, the result follows from \cref{AutmGGS:lem:finite words in g1} and \cref{AutmGGS:lem:glay and its normalising part}. \end{proof} \section{The normaliser as a product} We now prove that the normaliser of $G$ in $\Aut(X^*)$ decomposes as a semi-direct product. To begin with, we prove the following generalisation of \cite[2.2.5(i)]{Sid87}, which is an interesting proposition in its own right. \begin{proposition}\label{AutmGGS:prop:order p elements} Let $G$ be a non-constant \mGGS-group. Every element of $G$ that has order $p$ is either contained in $\Stab_G(1)$ or is conjugate to a power of $a$ in $G_{\reg}$. 
\end{proposition} \begin{proof} We have to prove that, given $g \in \Stab_G(1)$ and $i \in \Z$, every element $a^ig$ of order $p$ may be written as $(a^h)^i$ for some $h \in G_{\reg}$. Passing to an appropriate power of $a$, we may assume that $i = 1$. From $(ag)^p = 1$ we derive the equations \begin{align*} \id &= (ag)^p|_0 = g|_{0} \dots g|_{p-1}, \text{ resp.}\\ g|_{p-1} &= g|_{p-2}^{-1} \dots g|_0^{-1}. \end{align*} Since $g \in \Stab_G(1)$, by \cref{AutmGGS:lem:global equations} there exists a set of $B$-coordinates $\mathbf{n}_{k} \in \F_p^r$ and a set of $L$-coordinates $y_k \in G'$ uniquely describing $g$. Reformulated in these $B$-coordinates, the condition above reads \[ \sum_{i = 0}^{p-2} \mathbf{n}_i = -\mathbf{n}_{p-1}. \] Given some integer $s \in \Z$, we define an element \begin{align*} h_s = \psi_1^{-1}(a^s, a^sg|_0, a^sg|_0g|_1, \dots, a^sg|_0 \dots g|_{p-2}) \in \psi_1^{-1}(G \dottimes{p} G). \end{align*} Since \begin{align*} a^{h_s}|_k &= h_s|_{k}^{-1}h_s|_{k+1} = (a^s\prod_{i = 0}^{k-1}g|_i)^{-1}a^s\prod_{i = 0}^{k}g|_i \\&= \begin{cases} g|_k, &\text{ if }k \neq p-1,\\ (\prod_{i = 0}^{p-2}g|_i)^{-1} = g|_{p-1}, &\text{ if }k = p-1,\\ \end{cases} \end{align*} the conjugate $a^{h_s}$ is equal to $ag$. It remains to prove that $h_s \in G_{\reg}$ for some $s \in \Z$. If it is contained in $G_{\reg}$, the element $h_s$ has the $B$-coordinates ${\mathbf{h}_k} = \sum_{i = 0}^{k-1} \mathbf{n}_i$ (and some commutators $z_k$ that we shall not need to specify). We have to prove that the corresponding forced $A$-coordinates $\tilde s_k = \sum_{i = 0}^{p-1} \mathbf{h}_i \cdot \mathbf{e}_{{k-i}}$ are equal to the actual $a$-exponents of the corresponding sections of $h_s$. Since it is enough to show that $h_s \in G_{\reg}$ for one $s$, we fix $s = \tilde s_0$, so that the proposed equality holds in the first component by definition.
A quick calculation shows that, for all $k \in X \smallsetminus \{0\}$, \begin{align*} \tilde s_{k} - \tilde s_{k-1} &= \sum_{i = 0}^{p-1} \mathbf{h}_i \cdot \mathbf{e}_{{k-i}} - \sum_{i = 0}^{p-1} \mathbf{h}_i \cdot \mathbf{e}_{{k-1-i}}\\ &= \sum_{i = 0}^{p-1} (\mathbf{h}_{i} - \mathbf{h}_{i-1} ) \cdot \mathbf{e}_{{k - i}} = \sum_{i = 0}^{p-1} \mathbf{n}_{i-1} \cdot \mathbf{e}_{{k-i}} = s_{k-1}, \end{align*} and consequently the $a$-exponent of $h_s|_k$ is equal to \[ s + \sum_{i = 0}^{k-1} s_i = \tilde s_0 + \sum_{i = 0}^{k-1} s_i = \tilde s_k, \] for all $k \in X$. But the values $s_i$ are the forced $A$-coordinates of $g$, hence, comparing with the definition of $h_s$, we see that the forced $A$-coordinates of $\mathbf{h}_k$, for $k \in X$, and the actual $a$-exponents of $h_s|_k$ coincide. Hence $h_s \in G_{\reg}$. \end{proof} Notice that for a symmetric \GGS-group, we do have to pass to $G_{\reg}$ to make this statement true: take the element $d = ([b, a], [b,a]^{-1}, \id, \dots, \id) \in G$. Clearly $a^{[b, a]} = d$, but assume for contradiction that there is another element $h \in \Stab_G(1)$ such that $a^{h^{-1}} = d$. Then $\underline{c}h$ centralises $a$, hence \[ [b,a]h|_0 = (\underline{c}h)|_0 = (\underline{c}h)|_i = h|_i, \] for all $i \in \F_p^\times$. Counting the powers of $[b,a]$ in $h|_k$ as in \cref{AutmGGS:thm:facts}~(iv), we see that if $h|_1 \equiv_{\gamma_3(G)} a^s(b)^n([b,a])^v$, then the sum of the $[b,a]$-exponents over all sections mod $\gamma_3(G)$ equals $v-1 + (p-1)v \equiv_p p-1$, contradicting \cref{AutmGGS:thm:facts}~(iv). Thus there is no such $h \in \Stab_G(1)$. Recall that the group $\Delta$ is $\Nor_{\Sym(X)}(\Sigma) \cap \stab_{\Sym(X)}(0)$. Set $D = \rooted(\Delta)$, i.e.\ the group of rooted automorphisms normalising but not centralising $a$. \begin{lemma}\label{AutmGGS:lem:norm in ginfty kappa closure} Let $G$ be a non-constant \mGGS-group. Then \[ \Nor(G) \subseteq \overline{G_{\reg}} \cdot \overline{D}.
\] \end{lemma} \begin{proof} Let $g_0 \in \Nor(G)$. Let $k \in \Z$ be such that $(a^{g_0})^k|^\varnothing = a$. By \cref{AutmGGS:prop:order p elements} there exists an element $h_0 \in G_{\reg}$ such that \( (a^{g_0})^k = a^{h_0}. \) Consequently $h_0^{-1}g_0 \in \Nor(A)$. Using \cref{AutmGGS:lem:norm a} and the fact that $\Nor(G)$ is self-similar, cf.~\cite[Lemma~3.3]{Pet19}, we may write \[ h_0^{-1}g_0 = \kappa_1((h_0^{-1}g_0)|_0)\rooted((h_0^{-1}g_0)|^\varnothing) \] for $h_0^{-1}g_0|_0 = g_1 \in \Nor(G)$. Since $(h_0^{-1}g_0)|^\varnothing$ normalises $\sigma$, we may write $(h_0^{-1}g_0)|^\varnothing = a^{k_0}d_0$ for some $d_0 \in D$ and $k_0 \in \Z$, so that we obtain the equation \[ g_0 = h_0\kappa_1(g_1) a^{k_0} d_0 = h_0 a^{k_0} \kappa_1(g_1) d_0, \] using the fact that $\kappa_1(\Aut(X^*))$ centralises $a$ for the second equality. Repeating the procedure for $g_1$, we obtain $g_2 \in \Nor(G), h_1 \in G_{\reg}, d_1 \in D$ and $k_1 \in \Z$ such that \begin{align*} g_0 &= h_0 a^{k_0} \kappa_1(h_1 a^{k_1} \kappa_1(g_2) d_1) d_0\\ &= h_0 a^{k_0} \kappa_1(h_1 a^{k_1}) \kappa_2(g_2) \kappa_1(d_1) d_0\\ &= h_0 a^{k_0} \kappa_1(h_1 a^{k_1}) \kappa_2(g_2) d_0 \kappa_1(d_1). \end{align*} In the last step we have used the fact that $\overline{D}$ is abelian. Continuing, we obtain a sequence of products \[ t_n = \prod_{i = 0}^n \kappa_{i}(h_ia^{k_i}) \prod_{i = 0}^n \kappa_{n-i}(d_i) = \prod_{i = 0}^n \kappa_{i}(h_ia^{k_i}) \prod_{i = 0}^n \kappa_{i}(d_i), \] such that $t_n \equiv_{\Stab(n+1)} g_0$, i.e.\ the $t_n$ converge to $g_0$ in the topology induced by the layer stabilisers. Since both $\overline{D}$ and $\overline{G_{\reg}}$ are closed sets, the corresponding limits are well-defined. We obtain \[ g_0 = \prod_{i = 0}^\infty \kappa_{i}(h_ia^{k_i}) \prod_{i = 0}^\infty \kappa_{i}(d_i). \] This shows $g_0 \in \overline{G_{\reg}} \cdot \overline{D}$.
\end{proof} \begin{lemma}\label{AutmGGS:lem:directed elements of G} Let $G$ be a non-constant \mGGS-group and let $g \in G$ be an element directed along $\overline 0$. Then $g \in B$. \end{lemma} \begin{proof} Consider that, since $G \leq \lab(\Sigma)$, there are $(x_1, \dots, x_{p-1}) \in \F_p^{p-1}$ such that \[ \psi_1(g) = (g|_0, a^{x_1}, \dots, a^{x_{p-1}}). \] Since directed elements stabilise the first layer, there exist $B$-coordinates $\mathbf{n}_{0}, \dots, \mathbf{n}_{p-1}$ and $y_0, \dots, y_{p-1} \in G'$ for $g$. The equation above shows that $\mathbf{n}_1 = \dots = \mathbf{n}_{p-1} = \mathbf 0$ and $y_1 = \dots = y_{p-1} = \id$. Thus the forced $A$-coordinate at $0$ fulfils \[ s_0 = \sum_{i=0}^{p-1} \mathbf n_i \cdot \mathbf c_{p-i} = 0, \] hence $g|_0 = a^{s_0}b^{\mathbf{n}_{0}} = b^{\mathbf{n}_{0}} \in B$, and in consequence $g = b^{\mathbf{n}_0}\psi_1^{-1}(y_0, \id, \dots, \id)$. Since the set of elements directed along $\overline{0}$ forms a subgroup, the element $\psi_1^{-1}(y_0, \id, \dots, \id)$, and consequently also $y_0$ is directed along $\overline{0}$. We can argue as above for $y_0$, but since $y_0 \in G'$, by \cref{AutmGGS:thm:facts}~(ii), the sum $\sum_{i = 0}^{p-1} \mathbf{n}_i = \mathbf{0}$. Thus $\mathbf{n}_{0} = \mathbf{0}$, and, chasing down the spine, we find $y_0 = \id$. Thus $g \in B$. \end{proof} \begin{lemma}\label{AutmGGS:lem:first two terms of overline g1} Let $G$ be a non-constant \mGGS-group, and let $h \in \overline{G_{\reg}}$. Then \[ a^h \equiv_{G'} a. \] \end{lemma} \begin{proof} Let $(g_i)_{i\in \N}$ be a sequence of elements $g_i \in G_{\reg}$ for $i \in \N$ such that \[ h = \prod_{i=0}^\infty \kappa_i(g_i) = g_0 \prod_{i=1}^\infty \kappa_i(g_i) = g_0 \kappa_1\left(\prod_{i=0}^\infty \kappa_i(g_{i+1})\right). \] By \cref{AutmGGS:lem:norm a}, the element $\kappa_1\left(\prod_{i=0}^\infty \kappa_i(g_{i+1})\right)$ centralises $a$. Thus it is sufficient to consider $h = g_0$. 
The statement now follows from \cref{AutmGGS:lem:facts implications for Greg}~(iii). \end{proof} \begin{lemma}\label{AutmGGS:lem:segmenting} Let $G$ be a non-constant \mGGS-group. Then \[ \Nor(G) = \Nor_{\overline{G_{\reg}}}(G) \rtimes \Nor_{\overline{D}}(G). \] \end{lemma} \begin{proof} Assume that $\Nor(G)$ is equal to the product set $\Nor_{\overline{G_{\reg}}}(G) \cdot \Nor_{\overline{D}}(G)$. By \cref{AutmGGS:prop:norm in g1}, $\Nor_{\overline{G_{\reg}}}(G)$ is equal to $G \rtimes \kappa_\infty(A)$ or to $G_{\reg}$. Both groups are normalised by $\Nor_{\overline{D}}(G)$, the first one since $\overline{D}$ normalises $\kappa_\infty(A)$, and the second one since for every $d_0 \kappa_1(d_1)$ with $d_0 \in D$ and $d_1 \in \overline{D}$, \[ \underline{c}^{d_0\kappa_1(d_1)} = \psi_1^{-1}([b,a]^{d_1}, \id, \dots, \id)^{d_0} \in \psi_1^{-1}(G' \dottimes{p} G') \leq G_{\reg}. \] Thus the product set is in fact a semidirect product. It remains to show the equality $\Nor(G) = \Nor_{\overline{G_{\reg}}}(G) \cdot \Nor_{\overline{D}}(G)$. By \cref{AutmGGS:lem:norm in ginfty kappa closure}, we may write $g \in \Nor(G)$ as a product $g = h' \cdot d$ with $h' \in \overline{G_{\reg}}$ and $d \in \overline{D}$. Clearly $\overline{D}$ normalises $A$. Thus it is enough to prove: if there exists $\mathbf{n} \in \F_p^r$ such that $(b^{\mathbf n})^d \not\in G$, then $h'd \not\in \Nor(G)$ for all $h' \in \overline{G_{\reg}}$. Equivalently, we prove that $(h'd)^{-1} \notin \Nor(G)$. Since $\overline{D}$ is a group, we may replace $d$ by its inverse. Write $h$ for $h'^{-1}$. Notice that \[ h = \kappa_1(h_1) h_0 \] for some $h_1 \in \overline{G_{\reg}}$ and $h_0 \in G_{\reg}$. Since $G_{\reg}$ normalises $G$, we may assume that $h_0 = \id$, and thus $h|^\varnothing = \id$. Let $d = \prod_{i = 0}^\infty \kappa_i(d_i)$ for a sequence $(d_i)_{i \in \N}$ of elements $d_i \in D$ such that, for all $i \in \N$, we have $a^{d_i} = a^{j_i}$ for some $j_i \in \Z$.
Then, for all $\mathbf{n} \in \F_p^r$, \begin{align*} \psi_1((b^{\mathbf n})^d) &= ((b^{\mathbf n})^{d|_0}, (a^{\mathbf n \cdot \mathbf{e}_{{1}}})^{d|_0}, \dots, (a^{\mathbf n \cdot \mathbf{e}_{{p-1}}})^{d|_0})^{d_0}\\ &= ((b^{\mathbf n})^{d|_0}, a^{j_1 \cdot \mathbf n \cdot \mathbf{e}_{1.d_0}}, \dots, a^{j_1 \cdot \mathbf n \cdot \mathbf{e}_{(p-1).d_0}}). \end{align*} Write $x_{0,i} = j_1 \cdot \mathbf n \cdot \mathbf{e}_{i.d_0}$ for all $i \in \{1, \dots, p-1\}$. Since $\overline{D}$ is self-similar, we see that $(b^{\mathbf n})^d$ is directed along $\overline{0}$, and we write $\mathbf{x}_{k} = (x_{k, 1}, \dots, x_{k, p-1})$ for the $A$-exponents of the sections at $i \in \{1, \dots, p-1\}$ of $(b^{\mathbf n})^d|_{0^{k}}$. Now, using \cref{AutmGGS:lem:directed elements of G}, we see that if $(b^{\mathbf n})^d \in G$, then actually $(b^{\mathbf n})^d \in B$. Assume that the $\overline{0}$-directed element $(b^{\mathbf n})^d$ is not a member of $B$. Then there are two possibilities: \begin{enumerate} \item there exists some $k \in \N$ such that $\mathbf{x}_{k}$ is not contained in the row space of $E$, or \item for all $k \in \N$ the vector $\mathbf{x}_k$ is contained in the row space of $E$, but there exists some $k \in \N$ such that $\mathbf{x}_{k} \neq \mathbf{x}_{k+1}$. \end{enumerate} In both cases, we may assume that $k = 0$, since $\overline{G_{\reg}}$ and $\overline{D}$ are self-similar. Given $(b^{\mathbf{n}})^d$, we compute the conjugate by $dh$, \begin{align*} \psi_1((b^{\mathbf{n}})^{dh}) &= ((b^{\mathbf{n}})^d|_0, a^{x_{0,1}}, \dots, a^{x_{0,p-1}})^{h}\\ &= (((b^{\mathbf{n}})^{d}|_0)^{h|_0}, (a^{x_{0,1}})^{h|_1}, \dots, (a^{x_{0,p-1}})^{h|_{p-1}}).
\end{align*} Since $\overline{G_{\reg}}$ is self-similar, we may apply \cref{AutmGGS:lem:first two terms of overline g1}, and we find \begin{equation*}\label{eq:dagger} \psi_1((b^{\mathbf n})^{dh}) \equiv_{G' \times \dots \times G'} (((b^{\mathbf n})^{d}|_0)^{h|_0} \bmod{G'}, a^{x_{0,1}}, \dots, a^{x_{0,p-1}}).\tag{$\ast$} \end{equation*} Assume that we are in case (i), i.e.\ that $\mathbf{x}_{0}$ is not contained in the row space of $E$. Then, by (\ref{eq:dagger}) and \cref{AutmGGS:lem:global equations}, also $(b^{\mathbf n})^{dh} \notin G$. Assume that we are in case (ii), i.e.\ that $\mathbf{x}_{0} \neq \mathbf{x}_{1}$, but both represent the forced $a$-exponents of an element $b^{\mathbf{m}_{0}}$ and $b^{\mathbf{m}_{1}}$, respectively. Thus by (\ref{eq:dagger}) and \cref{AutmGGS:lem:global equations}, \begin{align*} (b^{\mathbf n})^{dh} &\equiv_{\psi_1^{-1}(G' \times \dots \times G')} b^{\mathbf{m}_{0}} \quad\text{and}\\ (b^{\mathbf n})^{dh}|_{0} &\equiv_{\psi_1^{-1}(G' \times \dots \times G')} (b^{\mathbf{m}_{1}})^{h|^{0}}. \end{align*} Thus \[ (b^{\mathbf{m}_{1}})^{h|^{0}} \equiv (b^{\mathbf n})^{dh}|_{0} \equiv b^{\mathbf{m}_{0}}|_0 \equiv b^{\mathbf{m}_{0}} \bmod{\psi_1^{-1}(G' \times \dots \times G')}. \] Since $\psi_1^{-1}(G' \times \dots \times G') \cap G = \Stab_G(1)'$, this implies $(b^{\mathbf{m}_{1}})^{a^k} \equiv_{\Stab_G(1)'} b^{\mathbf{m}_{0}}$ for some $k \in \Z$, hence $b^{\mathbf{m}_{1}} = b^{\mathbf{m}_{0}}$ and $\mathbf{m}_{0} = \mathbf{m}_{1}$. But then $\mathbf{x}_{0} = \mathbf{x}_{1}$, a contradiction. \end{proof} \section{Elements normalising \(G\) with labels in \(\Delta\)}\label{sec:coprime autom} Recall that the permutation group $\Delta = \langle \delta \rangle$ is isomorphic to $\F_p^\times$. The rooted automorphism $d= \rooted{\delta}$ acts in two different ways on $G = \Stab_G(1)\rtimes A$.
It raises $a$ to a power, i.e.\ it acts by multiplication on the exponent of $a$; and it acts on an element $g \in \Stab_G(1)$ by permuting the tuple $\psi_1(g)$, i.e.\ by multiplication of the indices of said tuple. Note that the vertex $0$ is fixed by $\delta$. Recall that $B$ is isomorphic to $\mathbf{E} \leq \F_p^{p-1}$. We now show that $B$ is normalised by every normaliser of $G$ in $\overline{D}$. \begin{lemma} Let $G$ be a non-constant \mGGS-group. Then $\Nor_{\overline{D}}(G) = \Nor_{\overline{D}}(B)$. \end{lemma} \begin{proof} Since $\overline{D} \leq \Nor(A)$, the inclusion $\Nor_{\overline{D}}(G) \geq \Nor_{\overline{D}}(B)$ is obvious. We now prove the other inclusion. Let $g \in \Nor_{\overline{D}}(G)$. By \cite[Lemma 3.4]{Pet19}, there exists an integer $k \in \N$ such that $g|_{0^k}$ normalises $B$. Thus it is enough to prove that if $g|_0$ normalises $B$, also $g$ normalises $B$. Assume $g|_0 \in \Nor(B)$, and let $\mathbf{m}$ and $\mathbf{n} \in \F_p^r$ be such that $(b^{\mathbf{n}})^{g|_0} = b^{\mathbf{m}}$. We may write $g = \kappa_1(g|_0) g|^\varnothing$, where $g|^\varnothing$ normalises $a|^\varnothing$. Hence there exists $j \in \Z$ such that $a^{g|_0} = a^j$. We calculate \begin{align*} (b^{\mathbf{n}})^{-1}(b^{\mathbf{m}})^{g} &= (b^{\mathbf{n}})^{-1}\psi_1^{-1}((b^{\mathbf{m}})^{g|_0}, a^{j \cdot \mathbf{m} \cdot \mathbf{e}_{{1}}}, \dots, a^{j \cdot \mathbf{m} \cdot \mathbf{e}_{{p-1}}})^{g|^\varnothing}\\ &= (\id, a^{-\mathbf{n} \cdot \mathbf{e}_{{1}} + j \cdot \mathbf{m} \cdot \mathbf{e}_{{1.g|^\varnothing}}}, \dots, a^{-\mathbf{n} \cdot \mathbf{e}_{{p-1}} + j \cdot \mathbf{m} \cdot \mathbf{e}_{{(p-1).g|^\varnothing}}}). \end{align*} We see that the commutator coordinates of $(b^{\mathbf{n}})^{-1}(b^{\mathbf{m}})^{g}$ are trivial, the $B$-coordinates are all zero, and hence the forced $A$-coordinates are also $s_k = \sum_{i = 0}^{p-1} \mathbf n_i \cdot \mathbf c_{k-i} = 0$. Thus $b^{\mathbf n} = b^{\mathbf m}$.
\end{proof} Thus, we may restrict our attention to the group $B$. It is fruitful to consider $B$ as a subgroup of the directed subgroup $b^{\F_p^{p-1}}$ of the \mGGS-group associated to the full space $\F_p^{p-1}$ (with standard basis), which is, by the previous lemma, also invariant under $\Nor_{\overline{D}}(G)$. Write $\mu \colon D \to \F_p^\times$ for the isomorphism induced by \( \alpha^{\delta} = \alpha^{\delta^{\mu}}, \) where the second operation is taking the power, and define a map $P_\bullet \colon D \to \GL_p(p-1)$ such that $P_d$ is the permutation matrix corresponding to the permutation $d|^\varnothing$. Let $g = \prod_{i = 0}^\infty \kappa_i(d_i) \in \overline{D}$ for a sequence $(d_i)_{i \in \N}$ of elements in $D$. Then, for all $j \in \{1, \dots, p-1\} = \F_p^\times$, \begin{align*} \psi_1((b^{\mathbf{s}_j})^g) &= ((b^{\mathbf{s}_j})^{g|_0}, \id, \dots, \id, a^{d_1}, \id, \dots, \id)^{d_0}\\ &= ((b^{\mathbf{s}_j})^{g|_0}, \id, \dots, \id, a^{(d_1)^{\mu}}, \id, \dots, \id), \end{align*} where the non-trivial entries (in the second line) are at the positions $0$ and $(d_0)^{\mu} j$. If $g$ normalises $G$, it must normalise $B$, and the conjugate $(b^{\mathbf{s}_j})^g$ is determined by the exponents of the sections at the positions in $\{1, \dots, p-1\}$. Thus \[ (b^{\mathbf{s}_j})^g = b^{(d_1)^\mu \mathbf{s}_{(d_0)^{\mu} j}}. \] In other words, the action of $g \in \Nor_{\overline{D}}(G)$ induces, via the isomorphism $b^{\bullet}$, the linear map \begin{equation*}\label{eq:linear map} (d_1)^{\mu} P_{d_0}\tag{\textdagger} \end{equation*} on $\F_p^{p-1}$. Returning to the directed group $B$, we see that every $g \in \Nor_{\overline{D}}(G)$ must be such that $P_{d_0}$ leaves $\mathbf{E}$ invariant. Hence we define \[ U \defeq \{ u \in D \mid \mathbf{E}P_u = \mathbf{E} \} = \stab_D(\mathbf E).
\] Furthermore, we define the subgroup \begin{align*} V \defeq& \{ v \in U \mid \text{ for all } \mathbf{e} \in \mathbf{E} \text{ there exists } \lambda \in \F_p^\times \text{ such that } \mathbf{e}P_v = \lambda \mathbf{e} \} \\ =& \{ v \in U \mid \mathbf{E} \subseteq \operatorname{Eig}_\lambda(P_v) \text{ for some }\lambda \in \F_p^\times \}. \end{align*} Since $V \leq D$ is cyclic, the element $\lambda \in \F_p^\times$ generating a maximal subgroup is uniquely determined. Finally, define the subgroup \[ W \defeq \langle \lambda \rangle. \] \begin{proposition}\label{AutmGGS:prop:norm in D} Let $G$ be a non-constant \mGGS-group. Let $U$ and $W$ be defined as above. Then \begin{align*} \Nor_{\overline{D}}(G) \cong U \times W. \end{align*} \end{proposition} \begin{proof} By (\ref{eq:linear map}), the action of a given element $g = \prod_{i = 0}^\infty \kappa_{i}(d_i)$ is determined by $d_0$ and $d_1$. Furthermore, since $\mathbf{E}$ must be invariant under $P_{d_0}$, we see that necessarily $d_0 \in U$. Since $\Nor(G)$, by \cite[Lemma~3.3]{Pet19}, and $\overline{D}$ are self-similar, we find, for all $k \in \N$, \[ g|_{0^k} = \prod_{i = 0}^\infty \kappa_{i}(d_{i+k}) \in \Nor_{\overline{D}}(G). \] Since, for all $\mathbf{n} \in \F_p^r$ and $g \in \Nor_{\overline{D}}(G)$, \[ (b^{\mathbf{n}})^g = (b^{\mathbf{n}})^g|_0 = (b^{\mathbf{n}}|_0)^{g|_0} = (b^{\mathbf{n}})^{g|_0}, \] we see that the actions induced on $\F_p^{p-1}$ by the elements $g|_{0^k}$, for $k \in \N$, all agree, i.e.\ that the following equalities of matrices hold, \[ (d_{k+1})^{\mu} P_{d_k} = (d_1)^{\mu} P_{d_0}, \quad\text{ hence }\quad \mathrm{I}_{p-1} = (d_{k+1}d_{k+2}^{-1})^{\mu} P_{d_k d_{k+1}^{-1}}, \] where $\mathrm{I}_{p-1}$ is the identity matrix. Recall that, for all $k \in \N$, the matrix $P_{d_k d_{k+1}^{-1}}$ either does not act as a scalar on $\mathbf{E}$, in which case there is no $d_{k+1}$ fulfilling the equation above, or it acts as some scalar $\lambda^i$, for some $i \in \Z$.
Thus, every difference $d_kd_{k+1}^{-1}$ must be an element of $W$; otherwise $g$ cannot normalise $G$. On the other hand, for $d_0 \in U$ and $d_1 \in d_0^{-1}W$ there is a unique sequence $(d_i)_{i \in \N}$ that defines an element of $\Nor_{\overline{D}}(G)$, since \begin{align*} d_{k+2} &= d_{k+1} (d_kd_{k+1}^{-1})^{r}\\ &= d_{k+1} (d_k ((d_{k-1}d_k^{-1})^r)^{-1} d_k^{-1})^r\\ &= d_{k+1} ((d_{k-1}d_k^{-1})^{-1})^{r^{2}}\\ &= d_{k+1} ((d_0 d_1^{-1})^{(-1)^k})^{r^{k+1}}. \end{align*} Thus $\Nor_{\overline{D}}(G) \cong U \times W$. \end{proof} In particular, if $r = 1$, every linear map leaving $\mathbf{E}$ invariant is a scalar multiplication, i.e.\ the subgroups $U$ and $V$ coincide. Clearly, if $W$ is the trivial group, the only elements of $\Nor_{\overline{D}}(G)$ are defined by the constant sequences. More generally, the sequence $(d_i)_{i\in \N}$ defining the normalising element with given $d_0$ and $d_1$ is periodic with period prescribed by the order of $\lambda$. Now all ingredients are ready for the proof of our main theorem. \begin{proof}[Proof of \cref{AutmGGS:thm:main}] By \cite[Theorem~1]{GW03} and \cite[Proposition~3.7]{KT18}, the automorphism group of $G$ coincides with the normaliser of $G$ in $\Aut(X^*)$. By \cref{AutmGGS:lem:segmenting}, this normaliser is the semidirect product \[ \Nor_{\overline{G_{\reg}}}(G) \rtimes \Nor_{\overline{D}}(G). \] These two groups were computed in \cref{AutmGGS:prop:norm in g1} and \cref{AutmGGS:prop:norm in D}. \end{proof} \section{Examples}\label{sec:examples automggs} To illustrate the definitions of $U, V$ and $W$ we compute some explicit examples. \begin{example} Let $G$ be the \GGS-group acting on the $5$-adic tree with $\mathbf{E}$ generated by $(1,2,2,1)$. Clearly $G$ is symmetric. For every symmetric \GGS-group, the space $\mathbf{E}$ is by definition invariant under the permutation induced by $-1 \in \F_p^{\times}$. In fact, it always acts trivially, hence $-1 \in W$.
In our case, this is the only non-trivial permutation leaving $\mathbf{E}$ invariant, since \[ (1,2,2,1) P_{x \mapsto 2x} = (2,1,1,2) = (1,2,2,1) P_{x \mapsto 3x}, \] while $(2,1,1,2) \notin \mathbf{E}$. Thus \[ \Aut(G) = (G \rtimes \langle \underline{c} \rangle) \rtimes \langle \textstyle \prod_{i = 0}^\infty \kappa_{i}(x \mapsto -x) \rangle \cong (G \rtimes \mathrm{C}_5) \rtimes \mathrm{C}_2. \] The group $G$ and the group defined by $(1,4,4,1)$ are the \mGGS-groups with the smallest possible outer automorphism group. \end{example} \begin{example} Let $G$ be the (regular) \GGS-group acting on the $p$-adic tree with $\mathbf{E}$ generated by $\mathbf{b} = (1,2,\dots,p-1)$. Let $(\lambda_1, \dots, \lambda_{p-1})$ be the image of $\mathbf{b}$ under $P_d$, for $d \in D$. Since \[ \lambda_i = \mathbf{b}_{d^{-1}i} = (d^{-1})^\mu i = (d^{-1})^\mu \mathbf{b}_{i}, \] we see that $\mathbf{b}P_d = (d^{-1})^\mu \mathbf{b}$. Thus $W = V = U = D$, and the automorphism group is `maximal', \[ \Aut(G) = (G \rtimes \kappa_{\infty}(A)) \rtimes (\F_p^\times)^2. \] \end{example} \begin{example} The distinction between the subgroups $U, V$ and $W$ is not superficial. Consider the vector $\mathbf{b}_1 = (1, 2, 11, 3, 12, 10, 10, 12, 3, 11, 2, 1) \in \F_{13}^{12}$. An easy calculation shows that \[ \mathbf{b}_1 P_{x \mapsto 5x} = (12, 11, 2, 10, 1, 3, 3, 1, 10, 2, 11, 12) = -\mathbf{b}_1, \] while $\mathbf{b}_1 P_{x \mapsto 3x}$ is not a multiple of $\mathbf{b}_1$. Set $\mathbf{b}_2 = \mathbf{b}_1 P_{x \mapsto 3x}$ and $\mathbf{b}_3 = \mathbf{b}_1 P_{x \mapsto 9x}$ and let $\mathbf{E}$ be the space spanned by $\mathbf{b}_1, \mathbf{b}_2$ and $\mathbf{b}_3$. Since $\F_{13}^\times$ is generated by $3$ and $5$, the space $\mathbf{E}$ is invariant under all permutations induced by index-multiplication, i.e.\ $U = D$. But only the powers of $5$ act by scalar multiplication on $\mathbf{E}$, hence $V = \langle x \mapsto 5x \rangle$.
The corresponding scalars are $1$ and $12$, hence $W$ is of order $2$. \end{example} \begin{example} Let $G_p$ be a Gupta--Sidki $p$-group, i.e.\ the \GGS-group with $\mathbf{E}$ spanned by $\mathbf{b} = (1, -1, 0, \dots, 0) \in \F_p^{p-1}$. All Gupta--Sidki $p$-groups are regular. Let $n \in \F_p^\times$, and consider $\mathbf{b}P_n$. Since the projection to the last $p-3$ coordinates of $\mathbf{E}$ is trivial, the index $1$ must be mapped to $1$ or $2$ under $n$, and the same holds for $2$. This is only possible if $n = 1$, or if $n = 2$ and $2\cdot 2 \equiv_p 1$, hence in case $p = 3$. Otherwise $U$ is trivial. If $p = 3$, the group $W$ is equal to $U$, since the non-trivial permutation induced by the index multiplication by $2$ is equal to pointwise multiplication by $2$. This recovers the result of \cite{Sid87}, where the automorphism group of $G_3$ was first computed. Interestingly, this example is the `odd one out', having automorphisms of order~$2$. In conclusion, we find \[ \Aut(G_p) = \begin{cases} (G_p \rtimes \kappa_{\infty}(A)) \rtimes \mathrm{C}_2^2 &\text{ if }p = 3,\\ G_p \rtimes \kappa_{\infty}(A) &\text{ otherwise. } \end{cases} \] \end{example} \begin{example} Let $G_{\F_p^{p-1}}$ be the \mGGS-group defined by the full space $\F_p^{p-1}$. This group is regular, and every permutation $P_n$ leaves $\F_p^{p-1}$ invariant. On the other hand, no non-trivial permutation acts on the full space as a scalar multiplication. Thus \begin{align*} \Aut(G_{\F_p^{p-1}}) &= (G_{\F_p^{p-1}} \rtimes \kappa_{\infty}(A)) \rtimes \{ \textstyle \prod_{i = 0}^\infty \kappa_{i}(d') \mid d' \in D\}\\ &\cong (G_{\F_p^{p-1}} \rtimes \prod_{\omega} \mathrm{C}_p) \rtimes \mathrm{C}_{p-1}. \end{align*} \end{example}
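The vector computations in these examples are easy to machine-check. The following is a minimal Python sketch (not part of the paper); the convention $(\mathbf{v}P_{x \mapsto dx})_i = \mathbf{v}_{d^{-1}i \bmod p}$ for the index-multiplication action is inferred from the first example, and the membership test uses that $\mathbf{E}$ is one-dimensional there.

```python
# Sanity check for the examples: action of index-multiplication on F_p^(p-1).
# Assumed convention (read off from the first example):
# (v P_{x -> d x})_i = v_{d^{-1} i mod p}, indices 1, ..., p-1, entries in F_p.

def permute(v, d, p):
    """Apply the permutation matrix P_{x -> d x} to the vector v over F_p."""
    dinv = pow(d, -1, p)  # modular inverse, Python 3.8+
    return tuple(v[(dinv * i) % p - 1] for i in range(1, p))

# First example: p = 5, E spanned by (1,2,2,1).
# Only x -> x and x -> -x should leave the line E invariant.
p, v = 5, (1, 2, 2, 1)
scalar_multiples = {tuple(s * x % p for x in v) for s in range(1, p)}
U = [d for d in range(1, p) if permute(v, d, p) in scalar_multiples]
assert U == [1, 4]
assert permute(v, 2, p) == (2, 1, 1, 2) == permute(v, 3, p)

# Third example: p = 13, b1 P_{x -> 5x} = -b1,
# while b1 P_{x -> 3x} is not a multiple of b1.
q = 13
b1 = (1, 2, 11, 3, 12, 10, 10, 12, 3, 11, 2, 1)
assert permute(b1, 5, q) == tuple(-x % q for x in b1)
assert all(permute(b1, 3, q) != tuple(s * x % q for x in b1)
           for s in range(1, q))
```

Running the script silently confirms both displayed identities, including $(1,2,2,1)P_{x\mapsto 2x} = (2,1,1,2)$ and $\mathbf{b}_1 P_{x\mapsto 5x} = -\mathbf{b}_1$ in $\F_{13}^{12}$.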
\section{Preface} This mathematical journey started by exploring a problem formulated by Fermat \cite{1} to find a right triangle having integer sides $a,b,c$ such that $a+b=\square$ and $c=\square$. In $1643$ Fermat wrote a letter to Mersenne confirming the smallest solution to be $(a,b,c)=(4565486027761,\,1061652293520,\,4687298610289)$. To obtain this result he must have had a rigorous method, first to find this solution and second to claim that it is the smallest triangle of this kind. My own survey and solution to this problem are outlined in the last section. \newline \linebreak It led me to see the importance of the fundamental theorem of Pythagoras, which one can compare to a Swiss army knife still hiding multiple relationships and identities. The title was an obvious choice since most of the discoveries relate to the Congruent Number Problem \cite{2}, one of the oldest unsolved problems in mathematics, which is about trying to find a right triangle with rational sides whose area is a certain positive integer \cite{3}. \newline \linebreak Starting from Euclid's fundamental formula for the Pythagorean triples \cite{4}, we define the rational triples from which we will derive several identities and relationships. The first identity defines a relationship between the areas of the rational triples, which are all congruent numbers. A second identity shows how the sum of the squares of the differences of two sides relates to the Euclidean distance \cite{5} between two points in a three-dimensional Euclidean space. Another relationship defines three sets of quadratic Diophantine equations, also known as Euler's Concordant Forms \cite{6}, for the congruent number problem. By another identity, relating the sum of squares of the sides, we define the Trinity system, a three-dimensional system of spheres embedding small and great circles, preserving the Pythagorean equation across the spheres.
In this context we define the Trinity vectors, their derivatives and their product relationships defining parallelepipeds with volume equal to $1/2$. \newline \linebreak A geometric method shows how congruent numbers are related to two conjugate conics and proves that the rational points on the congruent number elliptic curve $E_N: y^2=x^3-N^2x$ are related to rational intersection points of a line and an ellipse. In this context we show how the celebrated solution found by Don Zagier for the congruent number $157$ can be represented by the two numbers $87005$ and $610961$. We define and apply the line-ellipse intersection method to prove that the polynomial $(4t^2+1)(4t^2-8t+5)$ represents congruent numbers. Using this method we also define congruent number polynomials from lattice points of an ellipse. \newline \linebreak We define the twin hyperbolas, another method relating two conic sections to the congruent number problem, and show how their rational intersection points define the two congruent number polynomials $ 2 \left(11 t^4-36 t^3+30 t^2-12 t+19\right)$ and $2\left(11 t^4+60 t^3+66 t^2-132 t+43\right)$. \newline \linebreak We define a relationship between the Cassini oval \cite{7} axis intersection points and a system of equations defined by Kurt Heegner in 1952 \cite{8}, related to the congruent number problem, and show how to obtain Cassini ovals with two or four $X$-axis intersections for the same congruent number. \newline \linebreak A method named the tangent method, due to its relationship to the point doubling method on elliptic curves, shows how a set of rational right triangle solutions can be generated from the known rational hypotenuse of an initial solution for one congruent number by solving a binary quadratic form. \newline \linebreak We define the prime footprint equations, which are a template for prime or almost prime congruent numbers, and include tables with their solutions for numbers less than $1000$.
From these empirically defined equations one can derive relationships to conic sections and their rational points. \newline \linebreak We show how a hidden recurrence defines infinite trees of congruent number sequences from an initial Pythagorean triple. By defining a method for choosing the triple tree side-walk it is possible to generate different congruent number trees for the chosen side-path. Explicit examples for the side-paths $a^i,\;ab,\;bb$ are given for Euclid's definition. \newline \linebreak A method is given to define congruent number families related to the Fibonacci \cite{9}/Lucas \cite{10} numbers and the Chebyshev polynomials \cite{11}. For both we look at some properties of the associated congruent number elliptic curves, defining some general rational points. We show how the solution for a right triangle with area $5$, first found by Fibonacci, is related to the Fibonacci numbers. We also show that the Brahmagupta triangles \cite{12} can be defined in terms of the Chebyshev polynomials and that their semiperimeters are congruent numbers.\newline For these triangles we also define a general elliptic curve and review some of its basic properties. \newline \linebreak We conclude with the naive recursive algorithm generating the solutions for the Fermat triangles which ignited this mathematical journey. \newline \linebreak In order to verify most results, the following SageMath \cite{13} notebooks are published along with this paper.
\begin{align*} &\texttt{02\_Beyond\_Pythagoras.ipynb} \\ &\texttt{03\_Conjugate\_conics.ipynb} \\ &\texttt{04\_Twin\_hyperbolas.ipynb} \\ &\texttt{05\_Cassini\_ovals.ipynb} \\ &\texttt{06\_Tangent\_method.ipynb} \\ &\texttt{07\_Prime\_footprints.ipynb} \\ &\texttt{08\_Unseen\_recurrence.ipynb} \\ &\texttt{09\_Fibonacci\_Lucas.ipynb} \\ &\texttt{10\_Chebyshev\_polynomials.ipynb} \\ &\texttt{11\_Fermat.ipynb} \end{align*} \newline \linebreak \includegraphics[width=155mm]{OpenFlower.png} \pagebreak \section{Beyond Pythagoras } From Euclid's fundamental formula we define the rational triples for right-angled triangles. \newline The areas of these right triangles are congruent numbers, between which certain relationships and identities exist. \newline The rational triples are also the foundation for the Trinity system, a three-dimensional differential system defining rational points on circles of spheres and vector algebraic relationships. \subsection{The rational triples } The Pythagorean triples $A,B,C$, representing a right triangle with integer sides and area $N=\frac{AB}{2}$ and satisfying the equation $A^2+B^2=C^2$, are generated by Euclid's fundamental formula \begin{equation} (A,B,C)=\left(m^2-n^2, 2 m n, m^2+n^2\right) \;\; \forall \; m,n \in \mathbb{Z} \;\text{ and }\; m>n>0 \end{equation} Combining two sides we can define three related right triangles satisfying $a'^2+b'^2= c'^2$ \small \begin{eqnarray} \nonumber (a, b, c)_{AC}' & = & \left( 2 A^2 C^2, B^2 (A^2 + C^2), A^4 + C^4 \right)\\ \nonumber (a, b, c)_{BC}' &=& \left( 2 B^2 C^2, A^2 (B^2 + C^2), B^4 + C^4\right) \\ (a, b, c)_{BA}' &=& \left( 2 B^2 A^2, C^2 (B^2 - A^2), B^4 + A^4\right) \end{eqnarray} Reducing the sides by $ABC$ we obtain the rational triples $a,b,c$ satisfying the equation $a^2+b^2=c^2\;\; \forall \;\; a,b,c \; \in \mathbb{Q}$ \small \begin{eqnarray} \nonumber (a,b,c)_{AC} =\left(\frac{2 A C}{B},\frac{B (A^2 +C^2)}{A C},\frac{A^4+C^4}{A B C}\right) =&\left(
\frac{m^4-n^4}{mn},\frac{4mn(m^4+n^4)}{m^4-n^4},\frac{4m^4n^4+\left(m^4+n^4\right)^2}{nm(m^4-n^4)}\right) \\ \nonumber (a,b,c)_{BC} =\left(\frac{2 B C}{A},\frac{A (B^2 +C^2)}{B C},\frac{B^4+C^4}{A B C}\right) =&\left(\frac{4mn(m^2+n^2)}{m^2-n^2},\frac{(m^2-n^2)\left(4m^2n^2+\left(m^2+n^2\right)^2\right)}{2 m n (m^2+n^2)},\frac{16 m^4 n^4 + (m^2+n^2)^4}{2mn(m^4-n^4)}\right) \\ (a,b,c)_{BA} =\left(\frac{2 B A}{C},\frac{C (B^2 -A^2)}{B A},\frac{B^4+A^4}{A B C}\right) = &\left(\frac{4mn(m^2-n^2)}{m^2+n^2},\frac{(m^2+n^2)\left(4m^2n^2-\left(m^2-n^2\right)^2\right)}{2 m n (m^2-n^2)},\frac{16 m^4 n^4 + (m^2-n^2)^4}{2mn(m^4-n^4)}\right) \end{eqnarray} \subsection{The area identity} The areas of the four triangles $(A,B,C),(a,b,c)_{AC,BC,BA}$ represented by \small \begin{eqnarray} \nonumber (N,N_{AC},N_{BC},N_{BA})&=& \left(\frac{AB}{2},A^2+C^2,B^2+C^2,B^2-A^2\right) \\ &= &\left(m n \left(m^2-n^2\right),2 \left(m^4+n^4\right),m^4+6 m^2 n^2+n^4,-m^4+6 n^2 m^2-n^4\right) \end{eqnarray} are congruent numbers related by the identity \begin{equation} N_{AC}^2 + N_{BC}^2 + N_{BA}^2= 6 \left(C^4 - 4 N^2\right) \end{equation} For example, for $(m,n)=(2,1)$ we get the right triangles \small $$ (A,\;B,\;C)=(3, 4, 5) \;\;,\;\; (a,b,c)_{AC}=\left(\frac{15}{2}, \frac{136}{15}, \frac{353}{30}\right) ,\; (a,b,c)_{BC}=\left(\frac{40}{3}, \frac{123}{20}, \frac{881}{60}\right) ,\; (a,b,c)_{BA}=\left(\frac{24}{5}, \frac{35}{12}, \frac{337}{60}\right)$$ relating the congruent numbers $ (N,N_{AC},N_{BC},N_{BA})=(6,34,41,7)$ as follows $$ 34^2+41^2+7^2=6 (5^4- 4\cdot 6^2)$$ \subsection{The connecting line} The parallel lines to the $x$-axis at $y$ values $\pm2ABC$ intersect and connect the congruent number elliptic curves \begin{equation} E_{AC}: y^2 = \; x^3 -N_{AC}^2\;x \;,\; E_{BC}: y^2 = \; x^3 -N_{BC}^2\;x \;,\;E_{BA}: y^2 = \; x^3 -N_{BA}^2\;x \end{equation} at the rational points $P_{AC,BC,BA}:(x,y)$ \begin{align} \nonumber P_{AC} : &\left(-B^2,\pm2ABC\right) = \left(\shortminus4 m^2 n^2
\;\;\;\;\;\;\;, \pm 4 m n (m^2 - n^2) (m^2 + n^2) \right) \\ \nonumber P_{BC} : & \left(-A^2,\pm2ABC\right) = \left(\shortminus(m^2 - n^2)^2, \pm 4 m n (m^2 - n^2) (m^2 + n^2) \right) \\ P_{BA} : & \left(\;\;\;C^2,\pm2ABC\right) = \left( (m^2 + n^2)^2\;, \pm4 m n (m^2 - n^2) (m^2 + n^2) \right) \end{align} For the example $(m,n)=(2,1)$, we get the three positive and negative collinear rational points $$ P_{AC} :(-16,\pm120) \;\;\;\;, \;\;\;\;\; P_{BC} : (-9,\pm120) \;\;\;\;, \;\;\;\;\; P_{BA} : (25,\pm120) $$ \subsection{The Diophantine equations} Each Pythagorean triple $(a,b,c)_{AC,BC,BA}$ forms a pair of quadratic Diophantine equations, also known as Euler's concordant forms for the congruent number problem \begin{equation}x^2+N y^2 =z^2 \;\text{ and }\; x^2-N y^2 = t^2 \;\; \forall \; x,y,z,t,N \in \mathbb{Z}\end{equation} relating the numerator and the denominator of the hypotenuse $c$ to the area $N_{AC,BC,BA}$ such that $$(x,y)= \left(\operatorname{Numerator}(c),\,2\operatorname{Denominator}(c)\right)$$ with the following parameterization \small \begin{eqnarray} (x,y)_{AC} &=& (A^4 + C^4 ,2 A B C ) = \left(2\left (m^8 + 6 m^4 n^4 + n^8\right), 4 nm(m^4-n^4) \right) \\ \nonumber (z,\;t)_{AC} & = & \left(\sqrt{(A^4 + C^4)^2 + (A^2 + C^2) (2 A B C )^2} ,\sqrt{(A^4 + C^4)^2 - (A^2 + C^2) (2 A B C )^2} \right) \\ &= & \left(2(m^8 + 4 m^6 n^2 - 2 m^4 n^4 + 4 m^2 n^6 + n^8),2(m^8 - 4 m^6 n^2 - 2 m^4 n^4 - 4 m^2 n^6 + n^8)\right) \\[0.5cm] (x,y)_{BC} &=& (B^4 + C^4 ,2 A B C ) = \left(m^8 + 4 m^6 n^2 + 22 m^4 n^4 + 4 m^2 n^6 + n^8, 4 nm(m^4-n^4) \right) \\ \nonumber (z,\;t)_{BC} & = & \left(\sqrt{(B^4 + C^4)^2 + (B^2 + C^2) (2 A B C )^2} , \sqrt{(B^4 + C^4)^2 - (B^2 + C^2) (2 A B C )^2} \right) \\ &=& \left(m^8 + 12 m^6 n^2 + 6 m^4 n^4 + 12 m^2 n^6 + n^8,m^8 - 4 m^6 n^2 - 26 m^4 n^4 - 4 m^2 n^6 + n^8\right) \\[0.5cm] (x,y)_{BA} &=& (B^4 + A^4 ,2 A B C )= \left(m^8 - 4 m^6 n^2 + 22 m^4 n^4 - 4 m^2 n^6 + n^8, 4 nm(m^4-n^4) \right) \\ \nonumber (z,\;t)_{BA} & = & \left(\sqrt{(B^4 + A^4)^2 + (B^2 - A^2) (2 A B C
)^2} ,\sqrt{(B^4 + A^4)^2 - (B^2 - A^2) (2 A B C )^2} \right) \\ & = & \left(m^8 - 12 m^6 n^2 + 6 m^4 n^4 - 12 m^2 n^6 + n^8,m^8 + 4 m^6 n^2 - 26m^4 n^4 + 4 m^2 n^6 + n^8\right) \end{eqnarray} This results in three pairs of equations for which the $y$ solutions are all equal to $2ABC$ \begin{equation}\left(y^2 = \frac{z^2 - x^2}{N} = \frac{x^2 - t^2}{N}\right)_{AC,BC,BA} \;\;\;\text{ and }\;\;\; y_{AC}= y_{BC}= y_{BA}=2 A B C\end{equation} For the example $(m,n)=(2,1)$ we get the following solutions $$(x,y,z,t,N)_{AC}=(706, 120, 994, 94, 34)$$ $$(x,y,z,t,N)_{BC}=(881, 120, 1169, 431, 41)$$ $$(x,y,z,t,N)_{BA}=(337, 120, 463, 113, 7)$$ Verifying the identity $(15)$ we find for $y_{AC,BC,BA}=120$ that $$120^2 = \frac{994^2 - 706^2}{34}= \frac{706^2 - 94^2}{34} = \frac{1169^2 - 881^2}{41}= \frac{881^2 - 431^2}{41} = \frac{463^2 - 337^2}{7}= \frac{ 337^2 - 113^2}{7}$$ \subsection{The distance identity} A distance identity closely related to the Euclidean distance formula in three dimensions, also known as the 2-norm, is obtained by enumerating the triples $(a,b,c)_{AC,BC,BA}$ as cartesian coordinates $(a,b,c)_{1,2,3}$.
\newline Taking the sum of the squares of the differences of the sides $a,c$ we obtain the identity \begin{eqnarray}\left(2 \frac{C^4 - 3 (A B)^2 }{A B C}\right)^2 & = & 2 \left(\left(c_1 - a_1\right)^2 + \left(c_2 - a_2\right)^2 + \left(c_3 - a_3\right)^2\right) \\ & = &\;\;\;\left(\left(c_1 - a_1\right) \;+ \left(c_2 - a_2\right) \;\;+ \left(c_3 - a_3\right) \right)^2 \\ & = & 4\left(\left(c_1 - a_1\right) \left(c_2 - a_2\right) + \left(c_1 - a_1\right) \left(c_3 - a_3\right)+\left(c_2 - a_2\right) \left(c_3 - a_3\right) \right) \end{eqnarray} for which the three products of the two differences of $(18)$ are also perfect rational squares \begin{align} \nonumber 4 (c_1 - a_1) (c_2 - a_2) &= \;\left(\;\left(c_1 - a_1\right) + \left(c_2 - a_2\right) - \left(c_3 - a_3\right) \right)^2 \\ \nonumber 4 (c_1 - a_1) (c_3 - a_3) & = \;\left(\;\left(c_1 - a_1\right) - \left(c_2 - a_2\right) + \left(c_3 - a_3\right) \right)^2 \\ 4 (c_2 - a_2) (c_3 - a_3) & = \left(\shortminus\left(c_1 - a_1\right) + \left(c_2 - a_2\right) + \left(c_3 - a_3\right) \right)^2 \end{align} defining rational Pythagorean quadruples which can be normalized to define rational points on the unit sphere. \newline The Euclidean distance relationship becomes clear by dividing both sides of $(16)$ by $2$ and taking the square root. \newline For the example $(m,n)=(2,1)$ we get the vectors $\mathbf{a}=\left(\frac{15}{2}, \frac{40}{3}, \frac{24}{5}\right)$ and $\mathbf{c}=\left(\frac{353}{30}, \frac{881}{60}, \frac{337}{60}\right)$ \newline for which the identity evaluates to $\left(\frac{193}{30}\right)^2= \left(\frac{24}{5}\right)^2+\left(\frac{56}{15}\right)^2+\left(\frac{21}{10}\right)^2$ and the 2-norm gives us $\|c-a\|_2=\frac{1}{\sqrt{2}}\frac{193}{30}$.
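The identities and the example above can be checked with exact rational arithmetic; a minimal Python sketch (our own helper code, not part of the published notebooks) verifies $(16)$, $(17)$ and two of the squares of $(19)$ for $(m,n)=(2,1)$:

```python
from fractions import Fraction as F

# vectors a and c for (m, n) = (2, 1), i.e. (A, B, C) = (3, 4, 5)
a = (F(15, 2),  F(40, 3),  F(24, 5))
c = (F(353, 30), F(881, 60), F(337, 60))
d = [ci - ai for ci, ai in zip(c, a)]            # differences c_i - a_i

A, B, C = 3, 4, 5
lhs = F(2 * (C**4 - 3*(A*B)**2), A*B*C)**2       # (193/30)^2, left side of (16)
assert lhs == 2*sum(di*di for di in d)           # identity (16)
assert lhs == sum(d)**2                          # identity (17)
assert 4*d[0]*d[1] == ( d[0] + d[1] - d[2])**2   # first square of (19)
assert 4*d[1]*d[2] == (-d[0] + d[1] + d[2])**2   # last square of (19)
```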
\pagebreak \subsection{The Trinity system} The Trinity system is a three-dimensional rational differential geometric system originating from the identity \begin{align} (a_1^2 +a_2^2+a_3^2) = 2 (b_1^2 +b_2^2+b_3^2)=\frac{2}{3}(c_1^2 +c_2^2+c_3^2)=\frac{4\left(C^4 - (A B)^2\right)^2}{(A B C)^2} \end{align} relating the sum of the squares of the sides of the rational triples. Viewing the sides as cartesian coordinates \begin{align}&(x,y,z)_1=(a_1,a_2,a_3) & &(x,y,z)_2=(b_1,b_2,b_3)& & (x,y,z)_3=(c_1,c_2,c_3)&\end{align} we obtain after substituting the ratio $\frac{m}{n}$ by the variable $t$, three normalized equations $S_i: x_i^2+y_i^2+z_i^2=r_i^2$ \small \begin{eqnarray} S_1: &(x,y,z,r)_1 &=\left(\frac{\left(t^4-1\right)^2}{t^8+14 t^4+1},\frac{4 t^2\left(t^2+1\right)^2}{t^8+14 t^4+1},\frac{4 t^2 \left(t^2-1\right)^2}{t^8+14 t^4+1},1\right) \\ S_{2}:& (x,y,z,r)_2 &= \left(\frac{4 t^2 \left(t^4+1\right)}{t^8+14 t^4+1},\frac{\left(t^2-1\right)^2 \left(t^4+6 t^2+1\right)}{2 \left(t^8+14 t^4+1\right)},-\frac{\left(t^2+1\right)^2 \left(t^4-6 t^2+1\right)}{2 \left(t^8+14 t^4+1\right)},\sqrt{\frac{1}{2}}\right) \\ S_{3}:& (x,y,z,r)_3 & = \left(\frac{t^8+6 t^4+1}{t^8+14 t^4+1},\frac{t^8+4 t^6+22 t^4+4 t^2+1}{2 \left(t^8+14 t^4+1\right)},\frac{t^8-4 t^6+22 t^4-4 t^2+1}{2 \left(t^8+14 t^4+1\right)},\sqrt{\frac{3}{2}}\right) \end{eqnarray} with the relationships \begin{align} x_1 + y_1 - z_1 &= 1 \;\;\; \;\;\;\;\;\;\;\; &x_2 - y_2 - z_2 &= 0 \;\;\; \;\;\;\;\;\;\;\; &x_3 + y_3 + z_3 &= 2 \\ x_1^2 + y_1^2 + z_1^2 &= 1\;\;\; \;\;\;\;\;\;\;\; &x_2^2 + y_2^2 + z_2^2 &= 1/2 \;\;\; \;\;\;\;\;\;\;\; & x_3^2 + y_3^2 + z_3^2 &= 3/2 \\ x_1^2 + x_2^2 &= x_3^2\;\;\; \;\;\;\;\;\;\;\; &y_1^2 + y_2^2 &= y_3^2 \;\;\; \;\;\;\;\;\;\;\;& z_1^2 + z_2^2 &= z_3^2 \end{align} \begin{align} \frac{\partial ^n x_1}{\partial t^n} + \frac{\partial ^n y_1}{\partial t^n} - \frac{\partial ^n z_1}{\partial t^n} &=0 &\frac{\partial ^n x_2}{\partial t^n} - \frac{\partial ^n y_2}{\partial t^n} - \frac{\partial ^n 
z_2}{\partial t^n} &=0 &\frac{\partial ^n x_3}{\partial t^n} + \frac{\partial ^n y_3}{\partial t^n} + \frac{\partial ^n z_3}{\partial t^n} &=0 & \forall \;n>0 \end{align} \subsubsection{The circles } The locus of the points $S_i:(x,y,z)_i$ on the spheres with radii $r_i=\{1,\sqrt{\frac{1}{2}},\sqrt{\frac{3}{2}}\}$, \newline including their permutations over the 3 coordinates, defines the circles $C_i(t)_{j}$ : \newline $C_1(t)_{1..8}$ in the planes $ \pm x \pm y\pm z =1$ with radius $r_{1,j}=\sqrt{\frac{2}{3}}$ and center at $(x,y,z)_{1,j}=(\pm\frac{1}{3},\pm\frac{1}{3},\pm\frac{1}{3})$. \newline $C_2(t)_{1..4}$ in the planes $\pm x \pm y + z=0$ with radius $r_{2,j}=\sqrt{\frac{1}{2}}$ and center at $(x,y,z)_{2,j}=(0,0,0)$. \newline $C_3(t)_{1..8}$ in the planes $\pm x \pm y \pm z =2$ with radius $r_{3,j}=\sqrt{\frac{1}{6}}$ and center at $(x,y,z)_{3,j}=(\pm\frac{2}{3},\pm\frac{2}{3},\pm\frac{2}{3})$. \newline The circles $C_1(t)_{1..8}$ also define the intersection paths between the sphere with radius $1$ centered at the origin and the $8$ spheres with radius $\sqrt{2}$ centered at $(\pm1,\pm1,\pm1)$ and \newline the circles $C_3(t)_{1..8}$ also define the intersection paths between the sphere with radius $\sqrt{\frac{3}{2}}$ centered at the origin and the $8$ spheres with radius $\frac{\sqrt{3}}{4}$ centered at $(\pm\frac{3}{4},\pm\frac{3}{4},\pm\frac{3}{4})$. \newline The figure below shows the circles $C_{1..3}$ and their basic trigonometric cartesian component functions. \newline \begin{figure}[hbt!]
\begin{tikzpicture}[scale=1.1,baseline={(current bounding box.center)}] \begin{axis}[ view={-45}{-5}, axis lines=center,axis on top, xlabel=$x$,ylabel=$y$,zlabel=$z$, xtick={5},ytick={5},ztick={5}, no marks,axis equal, xmin=-1.2,xmax=1.2,ymin=-1.2,ymax=1.2,zmin=-1.2,zmax=1.2, enlargelimits={upper=0.1}] \addplot3+[blue,no markers,samples=100, samples y=0,domain=-pi:pi,variable=\t] ({(1/3-1/sqrt(3)*cos(\t r) -1/3*sin(\t r))},{(1/3+1/sqrt(3)*cos(\t r) -1/3*sin(\t r))},{(1/3+2/3*sin(\t r))}); \addplot3+[blue,no markers,samples=100, samples y=0,domain=-pi:pi,variable=\t] ({-(1/3-1/sqrt(3)*cos(\t r) -1/3*sin(\t r))},{(1/3+1/sqrt(3)*cos(\t r) -1/3*sin(\t r))},{(1/3+2/3*sin(\t r))}); \addplot3+[blue,no markers,samples=100, samples y=0,domain=-pi:pi,variable=\t] ({(1/3-1/sqrt(3)*cos(\t r) -1/3*sin(\t r))},{-(1/3+1/sqrt(3)*cos(\t r) -1/3*sin(\t r))},{(1/3+2/3*sin(\t r))}); \addplot3+[blue,no markers,samples=100, samples y=0,domain=-pi:pi,variable=\t] ({(1/3-1/sqrt(3)*cos(\t r) -1/3*sin(\t r))},{(1/3+1/sqrt(3)*cos(\t r) -1/3*sin(\t r))},{-(1/3+2/3*sin(\t r))}); \addplot3+[blue,no markers,samples=100, samples y=0,domain=-pi:pi,variable=\t] ({-(1/3-1/sqrt(3)*cos(\t r) -1/3*sin(\t r))},{-(1/3+1/sqrt(3)*cos(\t r) -1/3*sin(\t r))},{(1/3+2/3*sin(\t r))}); \addplot3+[blue,no markers,samples=100, samples y=0,domain=-pi:pi,variable=\t] ({-(1/3-1/sqrt(3)*cos(\t r) -1/3*sin(\t r))},{(1/3+1/sqrt(3)*cos(\t r) -1/3*sin(\t r))},{-(1/3+2/3*sin(\t r))}); \addplot3+[blue,no markers,samples=100, samples y=0,domain=-pi:pi,variable=\t] ({(1/3-1/sqrt(3)*cos(\t r) -1/3*sin(\t r))},{-(1/3+1/sqrt(3)*cos(\t r) -1/3*sin(\t r))},{-(1/3+2/3*sin(\t r))}); \addplot3+[blue,no markers,samples=100, samples y=0,domain=-pi:pi,variable=\t] ({-(1/3-1/sqrt(3)*cos(\t r) -1/3*sin(\t r))},{-(1/3+1/sqrt(3)*cos(\t r) -1/3*sin(\t r))},{-(1/3+2/3*sin(\t r))}); \addplot3+[green,no markers,samples=100, samples y=0,domain=-pi:pi,variable=\t] ({(-1/2*cos(\t r)-1/(2*sqrt(3))*sin(\t r))},{(-1/2*cos(\t r)+1/(2*sqrt(3))*sin(\t 
r))},{(-1/sqrt(3)*sin(\t r))}); \addplot3+[green,no markers,samples=100, samples y=0,domain=-pi:pi,variable=\t] ({-(-1/2*cos(\t r)-1/(2*sqrt(3))*sin(\t r))},{(-1/2*cos(\t r)+1/(2*sqrt(3))*sin(\t r))},{(-1/sqrt(3)*sin(\t r))}); \addplot3+[green,no markers,samples=100, samples y=0,domain=-pi:pi,variable=\t] ({(-1/2*cos(\t r)-1/(2*sqrt(3))*sin(\t r))},{-(-1/2*cos(\t r)+1/(2*sqrt(3))*sin(\t r))},{(-1/sqrt(3)*sin(\t r))}); \addplot3+[green,no markers,samples=100, samples y=0,domain=-pi:pi,variable=\t] ({(-1/2*cos(\t r)-1/(2*sqrt(3))*sin(\t r))},{(-1/2*cos(\t r)+1/(2*sqrt(3))*sin(\t r))},{-(-1/sqrt(3)*sin(\t r))}); \addplot3+[red,no markers,samples=100, samples y=0,domain=-pi:pi,variable=\t] ({2/3-1/(2*sqrt(3))*cos(\t r) -1/6*sin(\t r)},{2/3+1/(2*sqrt(3))*cos(\t r) -1/6*sin(\t r)},{2/3+1/3*sin(\t r)}); \addplot3+[red,no markers,samples=100, samples y=0,domain=-pi:pi,variable=\t] ({-(2/3-1/(2*sqrt(3))*cos(\t r) -1/6*sin(\t r))},{(2/3+1/(2*sqrt(3))*cos(\t r) -1/6*sin(\t r))},{(2/3+1/3*sin(\t r))}); \addplot3+[red,no markers,samples=100, samples y=0,domain=-pi:pi,variable=\t] ({(2/3-1/(2*sqrt(3))*cos(\t r) -1/6*sin(\t r))},{-(2/3+1/(2*sqrt(3))*cos(\t r) -1/6*sin(\t r))},{(2/3+1/3*sin(\t r))}); \addplot3+[red,no markers,samples=100, samples y=0,domain=-pi:pi,variable=\t] ({(2/3-1/(2*sqrt(3))*cos(\t r) -1/6*sin(\t r))},{(2/3+1/(2*sqrt(3))*cos(\t r) -1/6*sin(\t r))},{-(2/3+1/3*sin(\t r))}); \addplot3+[red,no markers,samples=100, samples y=0,domain=-pi:pi,variable=\t] ({-(2/3-1/(2*sqrt(3))*cos(\t r) -1/6*sin(\t r))},{-(2/3+1/(2*sqrt(3))*cos(\t r) -1/6*sin(\t r))},{(2/3+1/3*sin(\t r))}); \addplot3+[red,no markers,samples=100, samples y=0,domain=-pi:pi,variable=\t] ({-(2/3-1/(2*sqrt(3))*cos(\t r) -1/6*sin(\t r))},{(2/3+1/(2*sqrt(3))*cos(\t r) -1/6*sin(\t r))},{-(2/3+1/3*sin(\t r))}); \addplot3+[red,no markers,samples=100, samples y=0,domain=-pi:pi,variable=\t] ({(2/3-1/(2*sqrt(3))*cos(\t r) -1/6*sin(\t r))},{-(2/3+1/(2*sqrt(3))*cos(\t r) -1/6*sin(\t r))},{-(2/3+1/3*sin(\t r))}); 
\addplot3+[red,no markers,samples=100, samples y=0,domain=-pi:pi,variable=\t] ({-(2/3-1/(2*sqrt(3))*cos(\t r) -1/6*sin(\t r))},{-(2/3+1/(2*sqrt(3))*cos(\t r) -1/6*sin(\t r))},{-(2/3+1/3*sin(\t r))}); \end{axis} \end{tikzpicture} \begin{minipage}{0.6\textwidth} \begin{align} \nonumber \color{blue} C_1(t)= &\left(\frac{1}{3} -\frac{1}{\sqrt{3}}\cos(t)-\frac{1}{3}\sin(t) ,\frac{1}{3} +\frac{1}{\sqrt{3}}\cos(t) -\frac{1}{3}\sin(t) ,\frac{1}{3} +\frac{2}{3}\sin(t)\right) \\ \nonumber \color{green} C_2(t)= &\left(-\frac{1}{2}\cos(t)-\frac{1}{2\sqrt{3}}\sin(t) ,-\frac{1}{2}\cos(t)+\frac{1}{2\sqrt{3}}\sin(t) ,-\frac{1}{\sqrt{3}}\sin(t)\right) \\ \color{red} C_3(t)= &\left(\frac{2}{3} -\frac{1}{2\sqrt{3}} \cos(t)-\frac{1}{6} \sin(t) ,\frac{2}{3} +\frac{1}{2\sqrt{3}} \cos(t) -\frac{1}{6}\sin(t) ,\frac{2}{3} +\frac{1}{3}\sin(t)\right) \end{align} \end{minipage}\hspace{10cm} \caption{The Trinity circles} \label{fig:F1} \end{figure} \pagebreak \subsubsection{The vectors } Defining the vectors $\mathbf {a},\mathbf {b},\mathbf {c}$ using the equations $(22,23,24)$ as follows \begin{align} \mathbf {a} &= (x,y,z)_1\;,&\mathbf {b} & = (x,-y,z)_2 \;, & \mathbf {c} & = (x,y,-z)_3& \end{align} the squared Euclidean distances to the origin for the three vectors are \begin{align} &\left\|\mathbf {a} \right\|^2=1\;, &\;\;\; &\left\|\mathbf {b} \right\|^2=\frac{1}{2}\;, \;\;\; &\left\|\mathbf {c} \right\|^2&=\frac{3}{2}& \end{align} Evaluating the dot products of the three vector pairs \begin{align} & \mathbf {a}\cdot \mathbf {b}=0 \;, &\;\;\; & \mathbf {b}\cdot \mathbf {c}=0 \;, &\;\;\; &\mathbf {a}\cdot \mathbf {c}=1 & \end{align} we find that the vector pairs $\mathbf {a},\mathbf {b}$ and $\mathbf {b},\mathbf {c}$ are orthogonal, while the vectors $\mathbf {a},\mathbf {c}$ enclose a specific angle defined by \begin{equation} \cos{\phi}=\frac{\mathbf {a} \cdot \mathbf{c}}{\left\|\mathbf {a} \right\|\left\|\mathbf {c} \right\|}= \sqrt{\frac{2}{3}} \;\;\;,\;\;\;\phi =
\arccos{\sqrt{\frac{2}{3}}}\end{equation} which is an angle of $35.2643...$°. \newline An interesting fact is that the angles between the cross product of the vector pair $\mathbf {a},\mathbf {b}$ and the vector $\mathbf {c}$, and between the cross product of the vector pair $\mathbf {b},\mathbf {c}$ and the vector $\mathbf {a}$, are equal and can be evaluated by \begin{equation} \cos{\theta}=\frac{(\mathbf {a} \times \mathbf {b}) \cdot \mathbf{c}}{\left\|\mathbf {a} \times \mathbf {b} \right\|\left\|\mathbf {c} \right\|}= \frac{(\mathbf {b} \times \mathbf {c}) \cdot \mathbf{a}}{\left\|\mathbf {b} \times \mathbf {c} \right\|\left\|\mathbf {a} \right\|}=\sqrt{\frac{1}{3}} \;\;\;,\;\;\;\theta = \arccos{\sqrt{\frac{1}{3}}} \end{equation} which is an angle of $54.7356...$°. \newline \linebreak Taking the dot products of the $n$-th derivatives of the vectors, denoted by $\frac{\partial ^n }{\partial t^n} $, we obtain \begin{align} &\frac{\partial ^n \mathbf {a}}{\partial t^n} \cdot \frac{\partial ^n \mathbf {b}}{\partial t^n} =0 \;, &\;\;\; & \frac{\partial ^n\mathbf {b} }{\partial t^n} \cdot \frac{\partial ^n\mathbf {c} }{\partial t^n} =0 \;, &\;\;\; & \frac{\partial ^n \mathbf {a}}{\partial t^n} \cdot \frac{\partial ^n \mathbf {c}}{\partial t^n} = \frac{1}{2} \left\|\frac{\partial ^n \mathbf {a}}{\partial t^n} \right\|^2 = \frac{2}{3} \left\|\frac{\partial ^n \mathbf {b}}{\partial t^n} \right\|^2 = 2 \left\|\frac{\partial ^n \mathbf {c} }{\partial t^n} \right\|^2& \end{align} resulting in the norm equations for the derived spheres and their rational points \begin{align} & \left\|\frac{\partial ^n \mathbf {a} }{\partial t^n} \right\|^2=2 \frac{\partial ^n \mathbf {a} }{\partial t^n}\cdot \frac{\partial ^n \mathbf {c} }{\partial t^n}\;, \;\;\; & & \left\|\frac{\partial ^n \mathbf {b} }{\partial t^n}\right\|^2=\frac{3}{2}\frac{\partial ^n \mathbf {a} }{\partial t^n}\cdot \frac{\partial ^n \mathbf {c} }{\partial t^n} \;, \;\;\; & \left\|\frac{\partial ^n \mathbf
{c}}{\partial t^n} \right\|^2=\frac{1}{2} \frac{\partial ^n \mathbf {a} }{\partial t^n}\cdot \frac{\partial ^n \mathbf {c} }{\partial t^n} & \end{align} \newline The scalar triple product of $\mathbf {a},\mathbf {b},\mathbf {c}$ is equal to $\frac{1}{2}$ and remains unchanged under a circular shift of its operands \begin{gather} \mathbf {a}\cdot\left(\mathbf {b}\times \mathbf {c}\right)=\; \mathbf {b}\cdot\left(\mathbf {c} \times \mathbf {a}\right)=\; \mathbf {c} \cdot\left( \mathbf {a}\times\ \mathbf {b}\right)=\;\;\frac{1}{2} \end{gather} Geometrically it represents the volume of the parallelepipeds created by the vector vertices. One sixth of its value, which is $\frac{1}{12}$, represents the volume of the tetrahedrons defined by the three vector vertices and the origin. \newline \linebreak The vector triple product defined as the cross product of $\mathbf {a}$ or $\mathbf {c}$ with the cross product of $\mathbf {b}$ and $\mathbf {c}$ or $\mathbf {a}$ can be simplified due to $(32)$ and the relationship $\mathbf {a}\times \left( \mathbf {b} \times \mathbf {c}\right) = \left(\mathbf {a}\cdot \mathbf {c}\right)\mathbf {b} - \left(\mathbf {a}\cdot \mathbf {b}\right)\mathbf {c} $ to \begin{gather} \mathbf {a} \times\left(\mathbf {b} \times \mathbf {c}\right)=\mathbf {c} \times\left(\mathbf {b}\times \mathbf {a}\right) = \mathbf {c} \times \mathbf {a}= \mathbf {b} \end{gather} The vector triple product of $\mathbf {b}$ with $\mathbf {a} \times \mathbf {c}$ is the zero vector because $\mathbf {a}$ and $\mathbf {c}$ are orthogonal to $\mathbf {b}$ \begin{gather} \mathbf {b} \times\left(\mathbf {a} \times \mathbf {c}\right)= \mathbf{0} \end{gather} Other symmetry relationships involving cross and dot products of the vector derivatives are: \begin{gather} \frac{\partial ^n \mathbf {a}}{\partial t^n} \times \frac{\partial ^n \mathbf {c}}{\partial t^n} =\mathbf{0} \;\; \forall \;\; n>0 \\ 2\frac{\partial ^n\mathbf {b} }{\partial t^n} \cdot \frac{\partial
^m\mathbf {c} }{\partial t^m} = \frac{\partial ^n \mathbf {b}}{\partial t^n} \cdot \frac{\partial ^m \mathbf {a}}{\partial t^m} \\ 2\frac{\partial ^n\mathbf {b} }{\partial t^n} \times \frac{\partial ^m\mathbf {c} }{\partial t^m} = \frac{\partial ^n \mathbf {b}}{\partial t^n} \times \frac{\partial ^m \mathbf {a}}{\partial t^m} \\ 3 \frac{\partial ^n \mathbf {a}}{\partial t^n} \cdot \frac{\partial ^m \mathbf {a}}{\partial t^m} = 4 \frac{\partial ^n \mathbf {b}}{\partial t^n} \cdot \frac{\partial ^m \mathbf {b}}{\partial t^m} = 12 \frac{\partial ^n \mathbf {c}}{\partial t^n} \cdot \frac{\partial ^m \mathbf {c}}{\partial t^m} \\ 3 \frac{\partial ^n \mathbf {a}}{\partial t^n} \times \frac{\partial ^m \mathbf {a}}{\partial t^m} = 4 \frac{\partial ^n \mathbf {b}}{\partial t^n} \times \frac{\partial ^m \mathbf {b}}{\partial t^m} = 12 \frac{\partial ^n \mathbf {c}}{\partial t^n} \times \frac{\partial ^m \mathbf {c}}{\partial t^m} \end{gather} \begin{gather} \frac{\partial ^n \mathbf {a}}{\partial t^n} \cdot \frac{\partial ^m \mathbf {c}}{\partial t^m}(-1,-1,1) = 2 \frac{\partial ^n \mathbf {b}}{\partial t^n} \times \frac{\partial ^m \mathbf {c}}{\partial t^m} = \frac{\partial ^n \mathbf {b}}{\partial t^n} \times \frac{\partial ^m \mathbf {a}}{\partial t^m} \end{gather} \begin{gather} 3 \frac{\partial ^n \mathbf {a}}{\partial t^n} \times \frac{\partial ^m \mathbf {c}}{\partial t^m} = 2 \frac{\partial ^n \mathbf {b}}{\partial t^n} \cdot \frac{\partial ^m \mathbf {c}}{\partial t^m} (1,1,-1) = \frac{\partial ^n \mathbf {b}}{\partial t^n} \cdot \frac{\partial ^m \mathbf {a}}{\partial t^m} (1,1,-1)\;\; \forall \;\; m,n> 0 \end{gather} The equations and examples can be verified in the SageMath notebook $\texttt{02\_Beyond\_Pythagoras.ipynb}$. \pagebreak \section{The conjugate conics} \label{section:Ellipse} We define a geometric method relating the congruent number problem to two conjugate conics, an ellipse $e$ and a degenerate hyperbola $h$ touching each other.
By this method congruent numbers and polynomials can be defined from intersections of a line and an ellipse. To see this relationship a right triangle with sides $a,b,c\in \mathbb{Q}$ for a congruent number $N$ is defined using the conjugate conics $e,h$ and the variables $f_1,f_2$ by \small \begin{equation}(e,h)=\left(\sqrt{N f_1^2 f_2^2- \frac{1}{4}(N f_1^2-f_2^2)^2},\sqrt{N f_1^2 f_2^2+ \frac{1}{4}(N f_1^2-f_2^2)^2}\right)\end{equation} $$(a,b,c) = \left(\frac{2\; e\; h}{f_1 f_2 (N f_1^2-f_2^2)}, \frac{N f_1 f_2 (N f_1^2-f_2^2) }{ e\; h },\sqrt{\left(\frac{2\; e\; h}{f_1 f_2 (N f_1^2-f_2^2)}\right)^2+\left(\frac{ N f_1 f_2 (N f_1^2-f_2^2)}{e\; h}\right)^2 } \right)$$ for which $h$ reduces to $\frac{1}{2}(N f_1^2+f_2^2)$ and $(a,b,c)$ simplifies to \begin{equation}(a,b,c) = \left(\frac{ e\; (N f_1^2+f_2^2)}{f_1 f_2 (N f_1^2-f_2^2)}, \frac{2 N f_1 f_2 (N f_1^2-f_2^2) }{e\; (N f_1^2+f_2^2) },\frac{ 4 N f_1^2 f_2^2\left(3 N f_1^2 f_2^2-(N f_1^2-f_2^2)^2 \right)+(N^2 f_1^4+f_2^4)^2}{4\; e\; f_1 f_2 (N^2 f_1^4 -f_2^4) }\right) \end{equation} \newline For certain $N,f_1,f_2$, $e$ becomes rational, and so does the triple $(a,b,c)$. Looking at $N$ as the independent variable $x$, the equation $e(x)$ is an ellipse and all its rational points can be found by intersecting $e(x)$ with a line $y(x)=t(x-x_0)+y_0$ going through a rational point $Q:(x_0,y_0)$ on the ellipse. \newline \linebreak The figure below shows the translated ellipse $e(x)$ touching the degenerate hyperbola $h(x)$ at the point $Q:(x_0,y_0)$. \newline The line $y(x)$ intersects the ellipse at $Q$ and a second rational point $P:(x_t,e_t)$. \newline From $x_t$ we obtain a primitive congruent number $N$ using a procedure to reduce and/or raise $x_t\in \mathbb{Q}$ to $N\in \mathbb{Z}$. \begin{figure}[hbt!]
\centering \begin{tikzpicture}[scale=1.2, line cap=round,line join=round,>=triangle 45,x=1cm,y=1cm] \begin{axis}[ x=0.75cm,y=0.75cm, axis lines=middle, ymajorgrids=true, xmajorgrids=true, xmin=-2.1, xmax=7.5, ymin=-2.1, ymax=2.5, xtick={-2,-1,...,7}, ytick={-2,-1,...,2},] \clip(-2.6541760682099147,-6.42426466103129) rectangle (10.277881323330046,5.980780987279769); \draw[line width=1pt,color=qqzzcc,smooth,samples=100,domain=0.17157287586052444:5.828427030452535] plot(\x,{sqrt((\x)-1/4*(-1+(\x))^(2))}); \draw[line width=1pt,color=ffqqtt,smooth,samples=100,domain=-2.6541760682099147:10.277881323330046] plot(\x,{sqrt((\x)+1/4*(-1+(\x))^(2))}); \draw[line width=1pt,color=qqccqq,smooth,samples=100,domain=-2.6541760682099147:10.277881323330046] plot(\x,{1-0.3*(-1+(\x))}); \draw[line width=1pt,color=qqzzcc,smooth,samples=100,domain=0.17157287586052444:5.828427030452535] plot(\x,{0-sqrt((\x)-1/4*(-1+(\x))^(2))}); \draw[line width=1pt,color=ffqqtt,smooth,samples=100,domain=-2.6541760682099147:10.277881323330046] plot(\x,{0-sqrt((\x)+1/4*(-1+(\x))^(2))}); \begin{scriptsize} \draw[color=qqzzcc] (4.5,1.5) node {e(x)}; \draw[color=ffqqtt] (-1.0,0.5) node {h(x)}; \draw[color=qqccqq] (3.5,0.7) node {y(x)}; \draw [fill=ududff] (1.0,1.0) circle (2.5pt); \draw[color=uuuuuu] (1.0,1.7) node {$Q(x_0,y_0)$}; \draw [fill=ududff] (5.705882352940685,-0.4117647058822054) circle (2.5pt); \draw[color=uuuuuu] (6.0,-1.2) node {$P(x_t,y_t)$}; \end{scriptsize} \end{axis} \end{tikzpicture} \caption{Congruent numbers as intersections of a line and an ellipse} \label{fig:F2} \end{figure} The process of reducing due to square factors of $x_t$ is to divide the sides $a,b$ by the common factor reducing the area by the square of this factor. \newline The process of raising due to the denominator of $x_t$ is to multiply $a,b$ by this denominator. \newline Both procedures are applied in the example given in the section $\ref{section:EP}$. 
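As a numerical sanity check, the construction $(48)$ can be evaluated in floating point for a few admissible parameter choices; a minimal sketch (the helper name \texttt{triangle} is ours, and $e$ need not be rational here):

```python
import math

def triangle(N, f1, f2):
    # right triangle (a, b, c) from construction (48), evaluated in floats
    D, S = N*f1**2 - f2**2, N*f1**2 + f2**2
    e = math.sqrt(N*f1**2*f2**2 - D**2/4)   # real whenever S^2 < 8 N f1^2 f2^2
    a = e*S / (f1*f2*D)
    b = 2*N*f1*f2*D / (e*S)
    c = (4*N*f1**2*f2**2*(3*N*f1**2*f2**2 - D**2)
         + (N**2*f1**4 + f2**4)**2) / (4*e*f1*f2*D*S)
    return a, b, c

for N, f1, f2 in [(5, 1, 2), (6, 1, 2), (7, 2, 3)]:
    a, b, c = triangle(N, f1, f2)
    assert math.isclose(a*a + b*b, c*c)     # a right triangle
    assert math.isclose(a*b/2, N)           # with area N
```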
\newline \linebreak The congruent number elliptic curve $ E_N: y^2= x^3-N^2 x$ has the following two points of infinite order \begin{eqnarray} P_1:(x,y)_1 &= &\left(-\frac{\left(N f_1^2 -f_2^2\right)^2}{4 f_1^2 f_2^2},\frac{\left(N f_1^2-f_2^2\right)^5-16 N^2 f_1^4 f_2^4 \left(N f_1^2 -f_2^2\right)}{32 e h f_1^3 f_2^3 }\right) \\ P_2:(x,y)_2 &=& \left(\frac{4N^2 f_1^2 f_2^2}{(N f_1^2 -f_2^2)^2},\frac{16 N^4 f_1^5 f_2^5-N^2 f_1 f_2 \left(N f_1^2-f_2^2\right)^4}{2 e h \left(N f_1^2-f_2^2\right)^3}\right) \end{eqnarray} The equations are also valid for certain congruent numbers which are prime or almost prime by adjoining $f_2$. \newline From the footprint equations defined in section $\ref{section:FP}$ we can verify the following congruence conditions for primes $p$ and the term by which $f_2$ needs to be adjoined $$ \begin{array}{ccccccc} N= &p & \forall & p \equiv\; 5 \mod 8 & f_1 \in \mathbb{Z} ,& f_2 \in \mathbb{Z} \;\;\;\;\;\;\;\;\;\;\; &\\ N= &p & \forall & p \equiv\; 7 \mod 8 & f_1 \in \mathbb{Z} , & f_2 \in \mathbb{Z}[\sqrt{N}]\;\;\;& \\ N= &2p & \forall & p \equiv\; 7 \mod 8 & f_1 \in \mathbb{Z} , & f_2 \in \mathbb{Z}[\sqrt{2N}]& \\ \end{array}$$ In this case $f_1,f_2$ are equal to the footprint solutions $m,n$ for certain primes $p$ listed in the tables \ref{table:TI}, \ref{table:TII}, \ref{table:TIII}.
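That $P_1$ of $(49)$ indeed lies on $E_N$ can be confirmed with exact rational arithmetic by clearing the irrational factor $e\,h$ from the denominator of $y$; a minimal sketch (our own check, the one for $P_2$ is analogous):

```python
from fractions import Fraction as F
from itertools import product

def P1_on_curve(N, f1, f2):
    # x, y of P1 in (49), with y written as y_num / (32 e h f1^3 f2^3)
    N, f1, f2 = F(N), F(f1), F(f2)
    D = N*f1**2 - f2**2
    eh2 = N**2*f1**4*f2**4 - D**4/16          # (e h)^2, a rational number
    x = -D**2 / (4*f1**2*f2**2)
    y_num = D**5 - 16*N**2*f1**4*f2**4*D
    # y^2 = x^3 - N^2 x, multiplied through by (32 f1^3 f2^3)^2 (e h)^2:
    return y_num**2 == (x**3 - N**2*x) * (32*f1**3*f2**3)**2 * eh2

assert all(P1_on_curve(N, f1, f2) for N, f1, f2 in product(range(1, 6), repeat=3))
```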
\newline \subsection{A famous example} The famous example due to Don Zagier, who found the smallest rational triangle $(a,b,c)$ for the congruent number $N=157$, can be verified by evaluating $(48)$ for $(f_1,f_2)=(87005, 610961)$ giving the triple \begin{align*} (a,b,c) =\Bigg( & \frac{411340519227716149383203}{21666555693714761309610} , \frac{6803298487826435051217540}{411340519227716149383203}& \\ ,&\;\; \frac{224403517704336969924557513090674863160948472041}{8912332268928859588025535178967163570016480830}\Bigg) \end{align*} and evaluating $(49,50)$ we obtain the rational points for the elliptic curve $E: y^2 = x^3-157^2x$ \begin{align*} P_1&:\left(-\frac{166136231668185267540804}{2825630694251145858025},-\frac{167661624456834335404812111469782006}{150201095200135518108761470235125}\right) &\\ P_2&:\left(\frac{69648970982596494254458225}{166136231668185267540804},\frac{538962435089604615078004307258785218335}{67716816556077455999228495435742408}\right)& \end{align*} \subsection{The intersection example} \label{section:EP} By the line-ellipse intersection method we prove that $$N(t)=\left(4 t^2+1\right) \left(4 t^2-8 t+5\right)$$ is a congruent number polynomial. \newline \linebreak First we take $(47)$ and change the variables $N=x$ and $(f_1,f_2)=(1,f)$, such that the ellipse and hyperbola $$(e,h)=\left(\sqrt{f^2 x-\frac{1}{4} \left(x-f^2\right)^2},\frac{1}{2} \left(f^2+x\right)\right)$$ are conjugate conics touching at the point $Q=(f^2,f^2)$. To find a rational right triangle $(a,b,c)$ as defined by $(48)$ we only have to find the rational points on the ellipse because the hyperbola is degenerate and reduces to a set of lines.
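Zagier's triple above can be re-verified with exact integer arithmetic; a minimal sketch (our own check; only $a$ and $b$ are needed, since the area and the rationality of the hypotenuse confirm the rest):

```python
from fractions import Fraction as F
from math import isqrt

a = F(411340519227716149383203, 21666555693714761309610)
b = F(6803298487826435051217540, 411340519227716149383203)
assert a * b / 2 == 157                   # the area is the congruent number 157

h2 = a*a + b*b                            # square of the hypotenuse
assert isqrt(h2.numerator)**2 == h2.numerator       # h2 is the square
assert isqrt(h2.denominator)**2 == h2.denominator   # of a rational number
```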
Intersecting the ellipse $e(x)$ with a line $y(x)=t(x-f^2)+f^2$ going through $Q$ gives the parameterization of the rational points $$P:(x,e)=\left(\frac{f^2 \left(4 t^2-8 t+5\right)}{4 t^2+1},\frac{f^2 \left(-4 t^2+4 t+1\right)}{4 t^2+1}\right)$$ We notice that we can reduce the sides $a,b$ by $f$ due to the square factor and raise the sides by $4 t^2+1$ due to the denominator. Applying first the intersection solution $(x,e)$ to the rational right triangle $(a,b,c)$ defined by $(48)$ and secondly raising the sides by $4 t^2+1$ we obtain the right triangle definition \begin{align*} (a,b,c)=&\Bigg( \frac{-16 t^4+32 t^3-24 t^2+8 t+3}{2-4 t} ,\frac{4 \left(32 t^5-80 t^4+80 t^3-40 t^2+18 t-5\right)}{16 t^4-32 t^3+24 t^2-8 t-3} & \\ ,&\frac{256 t^8-1024 t^7+1792 t^6-1792 t^5+1504 t^4-1216 t^3+688 t^2-208 t+41}{64 t^5-160 t^4+160 t^3-80 t^2+4 t+6}\Bigg)& \end{align*} completing the proof for the congruent number polynomial $$N(t)=\frac{a b}{2}=\frac{-4 \left(32 t^5-80 t^4+80 t^3-40 t^2+18 t-5\right)}{2(2-4 t)} =\left(4 t^2+1\right) \left(4 t^2-8 t+5\right)$$ The points on the elliptic curve (49,50) are then : \begin{align*} P_1: & \left(-4 (2 t -1)^2, 2 (2 t-1) (4 t^2-4 t -1) (4 t^2-4 t+3) \right) \\ P_2: & \left( \frac{(4 t^2+1)^2 ( 4 t^2-8t+5)^2}{ 4 (2 t-1)^2}, \frac{(4 t^2+1)^2 (4 t^2-8t+5)^2 (4 t^2-4t-1) (4 t^2-4t+3)}{8 (2 t-1)^3}\right) \end{align*} As an example for $N(3)=629$ we obtain the rational triangle $$(a,b,c)=\left(\frac{621}{10},\frac{12580}{621},\frac{405641}{6210}\right)$$ and the two points on the elliptic curve $ E: y^2= x^3-629^2 x$ $$ P_1:(-100,6210) \;\;\;,\;\;\; P_2:\left(\frac{395641}{100},\frac{245693061}{1000} \right)$$ \pagebreak \subsection{The lattice example} A more general example follows from the lattice points of an ellipse. 
\newline Assigning $(N,f_1,f_2)=(x, 1,m^2+ n^2)\;\; \forall \;\;{m,n} \in \mathbb{Z}$ we obtain for the ellipse and the degenerate hyperbola \begin{eqnarray} (e,h)&=& \left(\sqrt{N f_1^2 f_2^2 -\frac{1}{4} \left(N f_1^2-f_2^2\right)^2},\sqrt{N f_1^2 f_2^2 +\frac{1}{4} \left(N f_1^2-f_2^2\right)^2}\right)\\ & = & \left(\sqrt{x \left(m^2+n^2\right)^2-\frac{1}{4}\left(\left(m^2+n^2\right)^2-x\right)^2},\frac{1}{2}\left(\left(m^2+n^2\right)^2+x\right)\right) \end{eqnarray} the following lattice points for the ellipse \begin{eqnarray} \nonumber & P_1:(x,e)_1 = &\left((m^2 + n^2) (m^2 + 4 m n + 5 n^2) \;,\; (m^2 + n^2)(m^2 + 2mn - n^2) \right) \\ \nonumber & P_2:(x,e)_2 = &\left((m^2 + n^2) (m^2 - 4 m n + 5 n^2) \;,\; (m^2 + n^2)(m^2 - 2mn - n^2) \right) \\ \nonumber & P_3:(x,e)_3 = &\left((m^2 + n^2) (5 m^2 - 4 m n + n^2) \;,\; (m^2 + n^2)(m^2 + 2mn - n^2) \right) \\ & P_4:(x,e)_4 = &\left((m^2 + n^2) (5 m^2 + 4 m n + n^2) \;,\; (m^2 + n^2)(m^2 - 2mn - n^2) \right) \end{eqnarray} The $x_i$ coordinates represent congruent numbers, except when $x_i$ is a square, due to the right triangles \begin{align*} (a,b,c)_1=& \Bigg(\frac{\left(m^2+2 m n-n^2\right) \left(m^2+2 m n+3 n^2\right)}{2 n (m+n)},\frac{4 n (m+n) \left(m^2+n^2\right) \left(m^2+4 m n+5 n^2\right)}{\left(m^2+2 m n-n^2\right) \left(m^2+2 m n+3 n^2\right)}\\ ,&\frac{m^8+8 m^7 n+28 m^6 n^2+56 m^5 n^3+94 m^4 n^4+152 m^3 n^5+172 m^2 n^6+104 m n^7+41 n^8}{2 n (m+n) \left(m^2+2 m n-n^2\right) \left(m^2+2 m n+3 n^2\right)}\Bigg)\\ (a,b,c)_2=& \Bigg(\frac{\left(m^2-2 m n-n^2\right) \left(m^2-2 m n+3 n^2\right)}{2 n (m-n)},\frac{4 n (m-n) \left(m^2+n^2\right) \left(m^2-4 m n+5 n^2\right)}{\left(m^2-2 m n-n^2\right)\left(m^2-2 m n+3 n^2\right)}\\ ,&\frac{m^8-8 m^7 n+28 m^6 n^2-56 m^5 n^3+94 m^4 n^4-152 m^3 n^5+172 m^2 n^6-104 m n^7+41 n^8}{2 m^5 n-10 m^4 n^2+20 m^3 n^3-20 m^2 n^4+2 m n^5+6 n^6}\Bigg)\\ (a,b,c)_3=& \Bigg(\frac{\left(m^2+2 m n-n^2\right) \left(3 m^2-2 m n+n^2\right)}{2 m (m-n)},\frac{4 m (m-n) 
\left(m^2+n^2\right) \left(5 m^2-4 m n+n^2\right)}{\left(m^2+2 m n-n^2\right) \left(3m^2-2 m n+n^2\right)} \\ ,&\frac{41 m^8-104 m^7 n+172 m^6 n^2-152 m^5 n^3+94 m^4 n^4-56 m^3 n^5+28 m^2 n^6-8 m n^7+n^8}{6 m^6+2 m^5 n-20 m^4 n^2+20 m^3 n^3-10 m^2 n^4+2 m n^5}\Bigg)\\ (a,b,c)_4=& \Bigg(-\frac{\left(m^2-2 m n-n^2\right) \left(3 m^2+2 m n+n^2\right)}{2 m (m+n)},-\frac{4 m (m+n) \left(m^2+n^2\right) \left(5 m^2+4 m n+n^2\right)}{\left(m^2-2 m n-n^2\right) \left(3 m^2+2 m n+n^2\right)} \\ ,&\frac{41 m^8+104 m^7 n+172 m^6 n^2+152 m^5 n^3+94 m^4 n^4+56 m^3 n^5+28 m^2 n^6+8 m n^7+n^8}{2 m \left(-3 m^5+m^4 n+10 m^3 n^2+10 m^2 n^3+5 m n^4+n^5\right)}\Bigg) \end{align*} for $m,n$ such that the denominator of $a_{1,2,3,4}\ne 0$. \newline \linebreak Intersecting the ellipse by the lines $y_i = t (x - x_i) + e_i$ we obtain a second rational point $P_{i2}$. \newline Raising the $X$-axis intersection equations by $4 t^2+1$ we obtain the congruent number equations : \begin{eqnarray}\nonumber &N_{12}&=\left(4 t^2+1\right)\left(m^2+n^2\right) \left(m^2 \left(4 t(t-2)+5\right)+4 m n (4 t (t-1) -1)+n^2 (4 t (5 t+2)+1)\right)\\ \nonumber &N_{22}&=\left(4 t^2+1\right)\left(m^2+n^2\right) \left(m^2 \left(4 t (t+2)+5\right)-4 m n (4 t (t+1)-1)+n^2 (4 t (5 t-2)+1)\right)\\ \nonumber &N_{32}&=\left(4 t^2+1\right)\left(m^2+n^2\right) \left(m^2 \left(4 t (5 t-2)+1\right)-4 m n (4 t (t+1)-1)+n^2 (4 t (t+2)+5)\right)\\ &N_{42}&=\left(4 t^2+1\right)\left(m^2+n^2\right) \left(m^2 \left(4 t (5 t+2)+1\right)+4 m n (4 (t-1) t-1)+n^2 (4 (t-2) t+5)\right) \end{eqnarray} Evaluating for $(m,n,t)=(1,2,3)$ we obtain the congruent numbers and the right triangles \begin{align*} N_{12} = 188885 \;\;\;&,\;\;\; (a,b,c)_{12} = \left( \frac{71757}{418} , \frac{157907860}{71757} , \frac{66206019401}{29994426 }\right) \\ N_{22} = 58645 \;\;\;\;\;&,\;\;\;(a,b,c)_{22} = \left( \frac{58483}{66} , \frac{7741140}{58483} , \frac{3458210761}{3859878}\right) \\ N_{32} = 7585 \;\;\;\;\;\;&,\;\;\;(a,b,c)_{32} = \left( 
\frac{ -5537}{72} , \frac{-1092240}{5537} , \frac{-84406081}{398664}\right) \\ N_{42} = 84545 \;\;\;\;\; &,\;\;\;(a,b,c)_{42} = \left( \frac{ 82497}{136} , \frac{22996240}{82497} , \frac{7489959041}{11219592 }\right) \end{align*} \newline The equations and examples for this section can be found in the Sagemath notebook $\texttt{03\_Conjugate\_conics.ipynb}$. \pagebreak \section{The twin hyperbolas} \label{section:TH} Another example of two combined conic sections related to the congruent number problem follows from the term under the square root $\sqrt{\frac{N}{2}\left((m-n)^2 +2m^2\right)\left((m+n)^2+2n^2\right)}$ of the footprint equation $(78)$ of section $\ref{section:FP}$. Replacing $m$ by $n x$ we get $n\sqrt{\frac{N}{2}\left(1-2x+3x^2\right)\left(3+2x+x^2\right)}$ obtaining the definition for the twin hyperbolas. \newline \begin{figure}[hbt!] \begin{tikzpicture}[scale=1,baseline={(current bounding box.center)}] \begin{axis}[ xmin=-5, xmax=5, ymin=-10, ymax= 10, xlabel={$x$}, ylabel={$y$}, axis lines=middle, ymajorgrids=true, xmajorgrids=true, samples=100, smooth, xtick={-4,-3,...,4}, ytick={-8,-6,...,8}, clip=false, ] \draw[line width=1pt,color=blue,smooth,samples=100,domain=-5:5] plot(\x, { sqrt(1-2*(\x)+3*(\x)^(2))}); \draw[line width=1pt,color=blue,smooth,samples=100,domain=-5:5] plot(\x, {-sqrt(1-2*(\x)+3*(\x)^(2))}); \draw[line width=1pt,color=green,smooth,samples=100,domain=-5:5] plot(\x, { sqrt(3+2*(\x)+(\x)^(2))}); \draw[line width=1pt,color=green,smooth,samples=100,domain=-5:5] plot(\x, {-sqrt(3+2*(\x)+(\x)^(2))}); \begin{scriptsize} \draw[color= black] (1.7,2) node[anchor = west] {$Q_1$}; \draw[color= black] (-1,0.6) node[anchor = west] {$Q_2$}; \draw [fill= blue] (2,3) circle (1.5pt); \draw [fill= green] (-0.5,1.5) circle (1.5pt); \end{scriptsize} \end{axis} \end{tikzpicture} \begin{minipage}{0.6\textwidth} \begin{equation} ( \color{blue} h_1,\color{green}h_2 \color{black})= \left(\sqrt{1-2x+3x^2},\sqrt{3+2x+x^2} \right) \end{equation} From the 
two rational points \newline \begin{equation} \{Q_{1},Q_{2}\}=\left\{(2,3),\left(-\frac{1}{2},\frac{3}{2}\right)\right\} \end{equation} and the intersection lines \newline \begin{equation} (y_1,y_2)= \left( t (x-2) +3, t \left(x+\frac{1}{2}\right) +\frac{3}{2}\right) \end{equation} we obtain the $x$-coordinates for their rational points \begin{equation} (x_1,x_2) = \left( 2 \frac{2 - 3 t + t^2}{t^2-3} , \frac{3 - 6 t - t^2}{2(t^2-1)}\right) \end{equation} \end{minipage}\hspace{10cm} \caption{The twin hyperbolas} \label{fig:F3} \end{figure} \newline Evaluating the product $H(x)=2 h_1(x)^2 h_2(x)^2$ for the $x$-coordinates we get as a function of $t$ \begin{equation} H(x_1)= \left(\frac{9-10 t +3 t^2}{\left(t^2-3\right)^2}\right)^2 2 \left(11 t^4-36 t^3+30 t^2-12 t+19\right) \end{equation} \begin{equation} H(x_2)= \left(\frac{3-2 t +3 t^2}{4\left(t^2-1\right)^2}\right)^2 \frac{\left(11 t^4+60 t^3+66 t^2-132 t+43\right)}{2} \end{equation} The squarefree part of $H(x)$ is then related to a congruent number polynomial $N$ in $t$.
\newline Taking the squarefree part from $(59)$ and four times the squarefree part of $(60)$ we obtain the two congruent number polynomials \begin{equation} (N_1,N_2) = \left( 2 \left(11 t^4-36 t^3+30 t^2-12 t+19\right) , 2\left(11 t^4+60 t^3+66 t^2-132 t+43\right)\right) \end{equation} representing the areas for the rational right triangles with sides \begin{align*} (a,b)_1= & \Bigg( -\frac{\left(3 t^2-10 t+9\right) \left(t^4+12 t^3-62 t^2+84 t-31\right) \left(11 t^4-36 t^3+30 t^2-12 t+19\right)}{\left(t^2-2 t-1\right) \left(7 t^2-22 t+17\right) \left(5 t^4-24 t^3+46 t^2-48 t+25\right)} ,\\ & -\frac{4 \left(t^2-2 t-1\right) \left(7 t^2-22 t+17\right) \left(5 t^4-24 t^3+46 t^2-48 t+25\right)}{\left(3 t^2-10 t+9\right) \left(t^4+12 t^3-62 t^2+84 t-31\right)} \Bigg) \end{align*} \begin{align*} (a,b)_2= & \Bigg( -\frac{\left(3 t^2-2 t+3\right) \left(t^4+36 t^3+22 t^2-60 t+17\right) \left(11 t^4+60 t^3+66 t^2-132 t+43\right)}{\left(t^2+2 t-7\right) \left(7 t^2-2 t-1\right) \left(5 t^4+12 t^3+22 t^2-36 t+13\right)} ,\\ & -\frac{4 \left(t^2+2 t-7\right) \left(7 t^2-2 t-1\right) \left(5 t^4+12 t^3+22 t^2-36 t+13\right)}{\left(3 t^2-2 t+3\right) \left(t^4+36 t^3+22 t^2-60 t+17\right)} \Bigg) \end{align*} For example for $t=10$ we obtain the congruent numbers and the right triangles \begin{align*} N_1=153798 \;\;\;,\;\;\; (a,b,c)_1 &= \left(-\frac{266938037619}{1183583135}, -\frac{4734332540}{ 3471281}, \frac{5679574272052285061}{4108549648445935}\right) \\ N_2= 350646 \;\;\;,\;\;\; (a,b,c)_2 &= \left(-\frac{2362584547353}{4899249131}, -\frac{19596996524}{ 13475611}, \frac{101151574309748379365}{66020375481444041}\right) \end{align*} The equations and examples can be verified in the Sagemath notebook $\texttt{04\_Twin\_hyperbolas.ipynb}$. 
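These closed forms can be spot-checked with exact rational arithmetic. The sketch below (plain Python; the function names are ours, not the notebook's) evaluates the side polynomials $(a,b)_{1,2}$ at $t=10$ and confirms that the areas reproduce $N_1=153798$ and $N_2=350646$:

```python
from fractions import Fraction as Fr

def sides_1(t):
    # the sides (a,b)_1 as given above
    t = Fr(t)
    p1 = 3*t**2 - 10*t + 9
    p2 = t**4 + 12*t**3 - 62*t**2 + 84*t - 31
    p3 = 11*t**4 - 36*t**3 + 30*t**2 - 12*t + 19
    q = (t**2 - 2*t - 1) * (7*t**2 - 22*t + 17) * (5*t**4 - 24*t**3 + 46*t**2 - 48*t + 25)
    return -p1*p2*p3/q, -4*q/(p1*p2)

def sides_2(t):
    # the sides (a,b)_2 as given above
    t = Fr(t)
    p1 = 3*t**2 - 2*t + 3
    p2 = t**4 + 36*t**3 + 22*t**2 - 60*t + 17
    p3 = 11*t**4 + 60*t**3 + 66*t**2 - 132*t + 43
    q = (t**2 + 2*t - 7) * (7*t**2 - 2*t - 1) * (5*t**4 + 12*t**3 + 22*t**2 - 36*t + 13)
    return -p1*p2*p3/q, -4*q/(p1*p2)

a1, b1 = sides_1(10)
a2, b2 = sides_2(10)
print(a1 * b1 / 2)   # 153798 = N_1(10)
print(a2 * b2 / 2)   # 350646 = N_2(10)
```

The areas come out exactly because the denominators cancel in the product $ab$, leaving $ab/2 = N_{1,2}$ identically in $t$.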
\pagebreak \section{The Cassini ovals} The general equation for the Cassini oval, with foci at $(a', 0)$ and $(-a', 0)$, is a quartic curve $$ \left((x-a')^{2}+y^{2}\right)\left((x+a')^{2}+y^{2}\right)=b'^{4}$$ which can be written as a difference of two squares $ (y^2 + x^2+ a'^2)^2 - (2 a' x)^2 = b'^4 $ or as the function \begin{equation} y(x)=\sqrt{\sqrt{4 a'^2 x^2+b'^4}-x^2-a'^2} \end{equation} For $a'=1$ the Cassini oval intersects the coordinate axis depending on $b'$ at the points $$\{(\shortminus c_4,0),(0,\shortminus c_3),(0,c_3),(c_4,0)\} \;\;\forall \;\; \|b'\|>1 \;\;\; \lor \;\;\; \{(\shortminus c_4,0),(\shortminus c_3,0),(c_3,0),(c_4,0)\}\;\; \forall\;\; \|b'\|<1$$ and has the following shape \newline \newrgbcolor{qqzzff}{0 0.6 1} \newrgbcolor{ttqqqq}{0.2 0 0} \begin{figure}[hbt!] \centering \begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=2cm,y=2cm] \begin{axis}[ x=2.0cm,y=2.0cm, axis lines=middle, ymajorgrids=true, xmajorgrids=true, xmin=-2.5, xmax=2.5, ymin=-1.4, ymax=1.4, xtick={-2,-3/2,...,2}, ytick={-1,-1/2,...,1},] \clip(-2.6541760682099147,-6.42426466103129) rectangle (10.277881323330046,5.980780987279769); \draw[line width=1pt,color=black,smooth,samples=200,domain=-1.7320492250169548:1.7320480088065726] plot(\x,{sqrt(-1-(\x)^(2)+sqrt(4*(\x)^(2)+(2/sqrt(2))^(4)))}); \draw[line width=1pt,color=black,smooth,samples=200,domain=-1.7320492250169548:1.7320480088065726] plot(\x,{-sqrt(-1-(\x)^(2)+sqrt(4*(\x)^(2)+(2/sqrt(2))^(4)))}); \draw[line width=1pt,color=blue,smooth,samples=200,domain=-1.5620482404410878:1.5620468802436978] plot(\x,{sqrt(-1-(\x)^(2)+sqrt(4*(\x)^(2)+(-1.2)^(4)))}); \draw[line width=1pt,color=blue,smooth,samples=200,domain=-1.5620482404410878:1.5620468802436978] plot(\x,{-sqrt(-1-(\x)^(2)+sqrt(4*(\x)^(2)+(-1.2)^(4)))}); \draw[line width=1pt,color=red,smooth,samples=200,domain=-1.4142102367228777:1.414212399303442] plot(\x,{sqrt(-1-(\x)^(2)+sqrt(4*(\x)^(2)+(1.001)^(4)))}); \draw[line 
width=1pt,color=red,smooth,samples=200,domain=-1.4142102367228777:1.414212399303442] plot(\x,{-sqrt(-1-(\x)^(2)+sqrt(4*(\x)^(2)+(1.001)^(4)))}); \draw[line width=1pt,color=green,smooth,samples=200,domain=0.7071079069455056:1.2247437735079298] plot(\x,{sqrt(-1-(\x)^(2)+sqrt(4*(\x)^(2)+(1.0001/sqrt(2))^(4)))}); \draw[line width=1pt,color=green,smooth,samples=200,domain=0.7071079069455056:1.2247437735079298] plot(\x,{-sqrt(-1-(\x)^(2)+sqrt(4*(\x)^(2)+(1.0001/sqrt(2))^(4)))}); \draw[line width=1pt,color=green,smooth,samples=200,domain=0.7071079069455056:1.2247437735079298] plot(-\x,{sqrt(-1-(\x)^(2)+sqrt(4*(\x)^(2)+(1.0001/sqrt(2))^(4)))}); \draw[line width=1pt,color=green,smooth,samples=200,domain=0.7071079069455056:1.2247437735079298] plot(-\x,{-sqrt(-1-(\x)^(2)+sqrt(4*(\x)^(2)+(1.0001/sqrt(2))^(4)))}); \begin{scriptsize} \draw[color= green] (0.47,0.1) node {$(c_3,0)$}; \draw[color= green] (-0.45,0.1) node {$(\shortminus c_3,0)$}; \draw[color= black] (0.29,1.17) node {$(0,c_3)$}; \draw[color= black] (0.3,-1.17) node {$(0,\shortminus c_3)$}; \draw[color= black] (2,0.1) node {$(c_4,0)$}; \draw[color= black] (-2.05,0.1) node {$(\shortminus c_4,0)$}; \draw [fill= black] (0,1) circle (1.5pt); \draw [fill= black] (0,-1) circle (1.5pt); \draw [fill= black] (1.73,0) circle (1.5pt); \draw [fill= black] (-1.73,0) circle (1.5pt); \draw [fill= green] (0.7071,0) circle (1.5pt); \draw [fill= green] (-0.7071,0) circle (1.5pt); \draw [fill= green] (1.22474,0) circle (1.5pt); \draw [fill= green] (-1.22474,0) circle (1.5pt); \draw [fill= red] (1.41492,0) circle (1.5pt); \draw [fill= red] (-1.41492,0) circle (1.5pt); \draw [fill= red] (0,0.0447325) circle (1.5pt); \draw [fill= red] (0,-0.0447325) circle (1.5pt); \draw [fill= blue] (1.56205,0) circle (1.5pt); \draw [fill= blue] (-1.56205,0) circle (1.5pt); \draw [fill= blue] (0,0.663325) circle (1.5pt); \draw [fill= blue] (0,-0.663325) circle (1.5pt); \draw[color= blue] (-2.1,1.0) node {$\|b'\|>1$}; \draw[color= red] (-2.1,0.8) node 
{$\|b'\| \approx 1$}; \draw[color= green] (-2.1,0.6) node {$\|b'\|<1$}; \end{scriptsize} \end{axis} \end{tikzpicture} \caption{Cassini ovals for $a'=1$ with axis intersection points} \label{fig:F4} \end{figure} \subsection{Two\space $X$-axis intersections} \label{section:CO1} A relationship between the Cassini oval and the congruent number problem becomes apparent from a system of equations first defined by Kurt Heegner \cite{8} in $1952$ on page $228$. The key point is to see that his variables $c_1,c_2$ can be defined in function of the variables $N,f_1,f_2$ from section $\ref{section:Ellipse}$ and that the Cassini oval intersection points are defined by his variables $c_3,c_4$ from the equations \begin{eqnarray} (c_1,c_2) = & \left(\|f_1 f_2\| , \frac{1}{2}\|N f_1^2 -f_2^2\|\right) \\ (c_3,c_4) = &\left(\sqrt{\|N c_1^2-c_2^2\|}, \sqrt{N c_1^2+c_2^2} \right) \end{eqnarray} The right triangle for the congruent number $N$ is then represented by \begin{equation} (a,b,c)=\left(\frac{c_3 c_4}{c_1 c_2},\frac{2 c_1 c_2 N}{c_3 c_4},\sqrt{\frac{c_3^2 c_4^2}{c_1^2 c_2^2}+\frac{4 c_1^2 c_2^2 N^2}{c_3^2 c_4^2}}\right) \end{equation} The corresponding Cassini oval is obtained by assigning $(a',b')=\left(c_2,c_1\sqrt{N}\right)$ resulting in the equation \begin{equation} y(x)=\sqrt{\sqrt{4c_2^2 x^2+c_1^4 N^2 } - x^2 - c_2^2} \end{equation} For example for $N=29$ with $(f_1,f_2)=(1,\shortminus 13)$ we obtain $(c_1,c_2,c_3,c_4)=(13,70,1,99)$ for the right triangle $$(a,b,c)=\left(\frac{99}{910}, \frac{52780}{99}, \frac{48029801}{90090}\right)$$ and the Cassini oval $$ y(x)=\sqrt{\sqrt{4*70^2 x^2+13^4* 29^2 } - x^2 - 70^2} $$ which is almost a lemniscate with the intersection points $$\{(\shortminus 99,0),(0,\shortminus1),(0,1),(99,0)\}$$ The plots for the examples are added in the Sagemath notebook $\texttt{05\_Cassini.ipynb}$. \pagebreak \newline Similarly to section $\ref{section:Ellipse}$ we can adjoin $f_2$ for certain congruent numbers. 
\newline \linebreak For $N=79$ with $(f_1,f_2)=\left(125, 52\right)$ adjoining $f_2$ by $\sqrt{N}$ we get $$(c_1,c_2,c_3,c_4)=\left(6500\sqrt{79}, \frac{1020759}{2}, \frac{12719}{2}\sqrt{79}, \frac{1447991}{2}\right)$$ Even though $c_1$ and $c_3$ are not rational we obtain the rational right triangle $$(a,b,c)=\left(\frac{233126551}{167973000}, \frac{335946000}{2950969}, \frac{56434050774922081}{495683115837000} \right)$$ due to the cancellation of the square roots in $(65)$ and the corresponding Cassini oval is $$y(x)=\sqrt{\sqrt{1020759^2 x^2+6500^4 79^4 } - x^2 - \frac{1020759^2}{4}}$$ By using a second system of equations, a Cassini oval with four $X$-axis intersections for $N=79$ \newline such that $(c_1,c_2,c_3,c_4)$ are integers is shown in the next section. \newline \linebreak For $N=62$ with $(f_1,f_2)=\left(20, 7\right)$ adjoining $f_2$ by $\sqrt{2N}$ we get $$(c_1,c_2,c_3,c_4)=\left(280\sqrt{31}, 9362, 1426\sqrt{31}, 15438\right)$$ the rational triangle $$(a,b,c)=\left(\frac{177537}{21140}, \frac{84560}{5727}, \frac{2056525601}{121068780} \right)$$ and the corresponding Cassini oval $$y(x)=\sqrt{\sqrt{4*9362^2 x^2+280^4 31^2 62^2 } - x^2 - 9362^2}$$ \subsection{Four $X$-axis intersections} Another system of equations relating the congruent number problem to the Cassini oval is \begin{eqnarray} (c_3,c_4) = &\left(f_1^2-f_2^2, 2 f_1 f_2\right) \\ (c_1,c_2) = & \left(\sqrt{\frac{c_4^2-c_3^2}{N}}, f_1^2 +f_2^2\right) \end{eqnarray} where the oval \begin{equation} y(x)=\sqrt{\sqrt{8 c_2^2 x^2 + c_1^4 N^2} - 2 x^2 - c_2^2} \end{equation} splits into two separate closed loops around a focus with the intersection points $$ \{(-c_4,0),(-c_3,0),(c_3,0),(c_4,0)\}$$ representing the right triangle \begin{equation} (a,b,c)=\left(\frac{c_1 c_2 N}{c_3 c_4},\frac{2 c_3 c_4}{c_1 c_2},\sqrt{\frac{c_1^2 c_2^2 N^2}{c_3^2 c_4^2}+\frac{4 c_3^2 c_4^2}{c_1^2 c_2^2}}\right) \end{equation} For the example $N=79$ with $(f_1,f_2)=\left(125, 52\right)$ we now obtain the Cassini oval
$$y(x)=\sqrt{\sqrt{8 * 18329^2 x^2 + 161^4 79^2} - 2 x^2 - 18329^2}$$ with the four $X$-axis intersection points $$\{ (\shortminus13000, 0), (\shortminus12921, 0), (12921, 0), (13000, 0)\}$$ \newline And for the example $N=62$ with $(f_1,f_2)=\left(20, 7 \sqrt{2}\right)$ we obtain the Cassini oval $$y(x)=\sqrt{\sqrt{8 * 498^2 x^2 + 4*23^4 62^2} - 2 x^2 - 498^2}$$ with the intersection points $$\{ (\shortminus 280 \sqrt{2}, 0), (\shortminus302, 0), (302, 0), (280 \sqrt{2}, 0)\}$$ \pagebreak \section{The tangent method} The tangent method generates a solution set $S_n=\{(f_1,f_2)_1,..,(f_1,f_2)_n\}$ for a congruent number $N$ from an initial known right triangle $(a,b,c)$ and is similar to the elliptic curve point doubling method. \newline \linebreak The solutions $S_i$ correspond to the intersections of the tangent lines of the elliptic curve $E_N: y^2=x^3-N^2x$ such that the next tangent line touches $E_N$ at the intersection point found by the previous tangent line. \newline \linebreak From the known hypotenuse $c$ of a right triangle for a congruent number $N$ we assign $(c_1,c_2)$ such that $$(c_1,c_2)=\left( Denominator[c],\frac{1}{2}Numerator[c]\right)$$ for which we obtain a new solution $S_i=(f_1,f_2)_i$ by solving the binary quadratic form for $c_2$ from the equation \begin{equation} (c_1,c_2) = \left(\|f_1 f_2\| , \frac{1}{2}\|N f_1^2 -f_2^2\|\right) \end{equation} The solution $S_i$ corresponds to a new right triangle $(a,b,c)_i$ by evaluating the equations defined in section $\ref{section:CO1}$. Repeating this method we obtain different right triangles for the same congruent number $N$.
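The iteration can be sketched in a few lines of exact rational arithmetic (plain Python; the naive divisor search and the helper names are ours, not the notebook's implementation):

```python
from fractions import Fraction as Fr
from math import isqrt

def rsqrt(q):
    # exact square root of a rational that is a perfect square
    n, d = isqrt(q.numerator), isqrt(q.denominator)
    assert Fr(n, d)**2 == q
    return Fr(n, d)

def tangent_step(N, c):
    # hypotenuse c -> (c1,c2) -> solution (f1,f2) -> next triangle
    c1, c2 = c.denominator, Fr(c.numerator, 2)
    for f1 in range(1, c1 + 1):            # naive divisor search over c1
        if c1 % f1 == 0 and abs(N * f1**2 - (c1 // f1)**2) == 2 * c2:
            f2 = c1 // f1
            break
    else:
        raise ValueError("no solution found")
    # triangle via the equations of the two-intersection Cassini section
    c3 = rsqrt(abs(N * c1**2 - c2**2))
    c4 = rsqrt(N * c1**2 + c2**2)
    a = c3 * c4 / (c1 * c2)
    b = 2 * c1 * c2 * N / (c3 * c4)
    return (f1, f2), (a, b, rsqrt(a**2 + b**2))

N, c = 5, Fr(41, 6)
S1, (a, b, c) = tangent_step(N, c)
S2, (a, b, c) = tangent_step(N, c)
print(S1, S2)      # (3, 2) (372, 2009)
print(a * b / 2)   # 5
```

Starting from $c=\frac{41}{6}$ this reproduces the first two solutions $(3,2)$ and $(372,2009)$ for $N=5$, each step feeding the new hypotenuse back in.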
\newline \linebreak For example for $N=5$ starting from the hypotenuse $c=\frac{41}{6}$ we obtain the following solutions \begin{align*} S_{1-4}=\{(&3,2),(372,2009),(169317668184,15811196552161),(1336220772668316930638357029463135419039\backslash\\ & 997035301712,62496947695267799013412096545625364258488963961427841)\} \end{align*} The figure below illustrates the method showing the first two tangent lines for the elliptic curve $E_5: y^2=x^3-25 x$ and the rational points $P_{1,2,3}$. The first tangent line touches $E_5$ at the point $P_1$ and intersects $E_5$ at the point $P_2$. Using $P_2$ as the next tangent point the rational point $P_3$ is obtained. \newline \begin{figure}[hbt!] \centering \begin{tikzpicture}[scale=1.5] \begin{axis}[ xmin=-5, xmax=15, ymin=-50, ymax=55, xlabel={$x$}, ylabel={$y$}, axis lines=middle, samples=201, smooth, clip=false, ] \draw[line width=1pt,color=blue,smooth,samples=100,domain=5:14] plot(\x,{sqrt((\x)^(3)-5^(2)*(\x))}); \draw[line width=1pt,color=blue,smooth,samples=100,domain=-5:0] plot(\x,{sqrt((\x)^(3)-5^(2)*(\x))}); \draw[line width=1pt,color=blue,smooth,samples=100,domain=5:14] plot(\x,{-sqrt((\x)^(3)-5^(2)*(\x))}); \draw[line width=1pt,color=blue,smooth,samples=100,domain=-5:0] plot(\x,{-sqrt((\x)^(3)-5^(2)*(\x))}); \draw[line width=1pt,color=green,smooth,samples=100,domain=-4.5:14] plot(\x,{1.9167*(\x)+13.667}); \draw[line width=1pt,color=red,smooth,samples=100,domain=-4:14] plot(\x,{5.3248*(\x)-26.118}); \begin{scriptsize} \draw[color= black] (1,30) node[anchor = west] {$P_1=(-4,6)$}; \draw[color= black] (1,40) node[anchor = west] {$P_2=\left(\frac{1681}{144},\frac{62279}{1728}\right)$}; \draw[color= black] (1,50) node[anchor = west] {$P_3=\left(\frac{11183412793921}{2234116132416},\frac{1791076534232245919}{3339324446657665536}\right)$}; \draw[color= black] (-4.8,12) node[anchor = west] {$P_1$}; \draw[color= black] (10.8,30) node[anchor = west] {$P_2$}; \draw[color= black] (3.4,5) node[anchor = west] {$P_3$}; \draw [fill=
blue] (-4,6) circle (1.5pt); \draw [fill= blue] (11.674,36.041) circle (1.5pt); \draw [fill= blue] (5.0057,0.53636) circle (1.5pt); \end{scriptsize} \end{axis} \end{tikzpicture} \caption{The tangent method} \label{fig:F5} \end{figure} \newline It should be noted that the solutions grow quite fast; for example, for $N=79$ the first three solutions are \begin{align*} S_{1-3}=\{(&2080281,238277000), (55260645511189879706636193594000,3223389202505003051748398476629439), \\ (&343513126900812678740983883808614992645462095396485294670725007133093011002487199141113\backslash\\ & 5222198071436263742039481934242611258578572000 \\ ,&107898845343126533895905583080797449600586402309215336477362300480422977554420331904855\backslash\\ & 288440002098386935913690366578027064421253187841)\} \end{align*} The equations and examples can be verified in the Sagemath notebook $\texttt{06\_Tangent\_method.ipynb}$. \pagebreak \section{The prime footprints} \label{section:FP} For certain congruent numbers $N$ which are prime or twice a prime, one can define footprint equations for the variables $p,q$ as functions of $N,m,n \in \mathbb{Z}$ for the right triangle with sides $a,b,c \in \mathbb{Q}$ \begin{equation}(a,b,c)=\left(\frac{p}{q},2 N\frac{q}{p},\frac{\sqrt{p^4+4N^2q^4}}{pq}\right)\end{equation} The equations depend on the congruence class $\bmod\; 8$ and reveal a relationship to a combination of conics.
\newline Their solutions $m,n$ for primes $P<1000$ are listed in the tables of section $\ref{section:TFP}$. \newline \linebreak Table $\ref{table:T0}$ for $N = P \;\; \not \forall \;\; P \equiv 1 \mod 8$ \newline \indent * For the primes of this kind represented by $ N=(m^4 + 6 m^2 n^2 + n^4)$ we have \begin{equation} (p, q) =\left(N(m^2 - n^2), 2 m n (m^2+ n^2) \right) \end{equation} \indent * For the other congruent primes of this kind we have \begin{equation} (p, q) =\left(\frac{1}{4}(m^2 + n^2) \sqrt{N\left(\left(2m n\right)^2-\left(m^2-n^2\right)^2\right)}, \frac{1}{2} m n (m^2 - n^2)\right) \end{equation} Table $\ref{table:TI}$ for $N = P \;\; \forall \;\; P \equiv 5 \mod 8$ \begin{equation} (p, q) =\left(\sqrt{(m^2n^2N)^2 - \frac{(m^2N-n^2)^4}{16}}, \frac{mn(m^2N-n^2)}{2}\right) \end{equation} Table $\ref{table:TII}$ for $N = P \;\; \forall\;\; P \equiv 7 \mod 8$ \begin{equation} (p, q) = \left((m^2+n^2)\sqrt{N\left(\left(2 m n\right)^2-\left(m^2-n^2\right)^2\right)},2mn\left(m^2-n^2\right)\right)\end{equation} Table $\ref{table:TIII}$ for $N = 2P\;\; \forall\;\; P \equiv 7 \mod 8$ \begin{equation}(p, q) = \left((m^2+2n^2)\sqrt{N\left(\left(2 \sqrt{2} m n\right)^2-\left(m^2-2n^2\right)^2\right)},2\sqrt{2}mn\left(m^2-2n^2\right)\right) \end{equation} Table $\ref{table:TIV}$ for $N =2P\;\; \forall\;\; P \equiv 3 \mod 8 $ \begin{equation} (p, q) = \left(\left(m^2-n^2-2mn\right)\sqrt{\frac{N}{2} \left(\left(m-n\right)^2+2 m^2\right)\left(\left(m+n\right)^2 + 2 n^2\right)},\left(m^2-n^2+2mn\right)\left(m^2+n^2\right)\right) \end{equation} Evaluating one example from each table we get the following right triangles: \begin{align*} (N,m,&n) =(353,4,1) \\ (a,b ,&c) = \left(\frac{5295}{136}, \frac{272}{15}, \frac{87617}{2040}\right) \\ (N,m,&n) =(761,31,51) \\ (a,b ,&c) = \left(\frac{66411709}{1296420}, \frac{2592840}{87269}, \frac{6699926952721}{113137276980}\right) \\ (N,m,&n) =(173, 10865, -343141) \\ (a,b ,&c) = \left(\frac{418416739097462232963}{181421867613059954270},
\frac{62771966194118744177420}{418416739097462232963}, \frac{11389552969201600543101928087171460571651881}{75909946247628040203029119534348866602010}\right) \\ (N,m,&n) =(191, 27469, 11580) \\ (a,b ,&c) = \left(\frac{1726816796630813713}{394718867434084440}, \frac{789437734868168880}{9040925636810543}, \frac{311996818759910472998178689881743841}{3568623927917636168751328944250920} \right) \\ (N,m,&n) =(382, 540, 239) \\ (a,b ,&c) = \left(\frac{447382566673}{11444911740}, \frac{45779646960}{2342317103}, \frac{1171595729834345971681}{26807612510927489220}\right) \\ (N,m,&n) =(326, 170, -69) \\ (a,b ,&c) = \left(\frac{28931957373}{22855819}, \frac{91423276}{177496671}, \frac{5135326544339012645}{4056831785478549}\right) \end{align*} The equations, tables and examples can be verified in the Sagemath notebook $\texttt{07\_Prime\_footprints.ipynb}$. \pagebreak \section{The unseen recurrence} A general recurrence hidden in the equation of Pythagoras defines infinite trees of congruent number sequences and their corresponding Pythagorean triples. \newline \linebreak In order to see this we use the right triangle with sides $a,b,c$ for a congruent number $N$ which can be defined in terms of $p,q \in \mathbb{Z}$ by \begin{equation}(a,b,c)=\left(\frac{p}{q},2 N\frac{q}{p},\frac{\sqrt{p^4+4N^2q^4}}{pq}\right)\end{equation} The congruent number recurrence is then defined by \begin{equation}(N, p, q) = \left(\sqrt{p^4 + 4 N^2 q^4}, p \sqrt{p^4 + 4 N^2 q^4} , q^2 N\right)\end{equation} Evaluating the right hand side using an initial solution $(a,b,c)$ defined by $N,p,q$ we obtain a new set $N,p,q$ resulting in a new congruent number with its corresponding Pythagorean triple. By this iteration one generates infinite sequences of congruent numbers. It is important to notice that the sequences and triples change depending on the side $a$ or $b$ one chooses to define the variables $N,p,q$, not only initially but also during the iteration process, resulting in a tree structure.
\newline \linebreak So by using any triple definition $(a,b,c)$ and assigning $(N,p,q)$ to $$ \begin{array}{cccc}& (N,p,q)_a &= & (N,Numerator[a],Denominator[a]) \\ or & & & \\ & (N,p,q)_b & = & (N,Numerator[b],Denominator[b]) \\ \end{array} $$ the recurrence defines infinite tree-sets of Pythagorean triples for which $N$ is a congruent number sequence by choosing a tree-side-walk iteration, denoting the chosen side-path sequence in the subscript of the triple $(a,b,c)$. \newline \linebreak For example applying the recurrence to Euclid's definition we get for the side-paths $a^i,ab,bb$ the following triples $$(A,B,C)_{a^i}=\left(\frac{m^{2^{i+1}}-n^{2^{i+1}}}{(m n)^{2^{i-1}}},2 (m n)^{2^{i-1}},\frac{m^{2^{i+1}}+n^{2^{i+1}}}{(m n)^{2^{i-1}}}\right)$$ $$(A,B,C)_{ab}=\left(\frac{4 m n \left(m^2+n^2\right)}{m^2-n^2},m^2-n^2,\frac{m^4+6 m^2 n^2+n^4}{m^2-n^2}\right)$$ $$(A,B,C)_{bb}=\left(\frac{8 m n \left(m^6+7 m^4 n^2+7 m^2 n^4+n^6\right)}{\left(m^2-n^2\right)^2},\left(m^2-n^2\right)^2,\frac{m^8+28 m^6 n^2+70 m^4 n^4+28 m^2 n^6+n^8}{\left(m^2-n^2\right)^2}\right)$$ Another way to illustrate the recurrence is to write the congruent number followed by the triple by $N:(a,b,c)$ and apply the recurrence denoting the choice of the side-walk iteration by $\overset{a}{\rightarrow}$ or $ \overset{b}{\rightarrow}$ evaluating the next congruent number and its corresponding triple. 
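A compact sketch of such a tree-side-walk in plain Python (exact rationals; the `walk` helper and its path strings are our illustration of the iteration, not the notebook code):

```python
from fractions import Fraction as Fr
from math import isqrt

def step(N, p, q):
    # the recurrence (N,p,q) -> (sqrt(p^4+4N^2q^4), p*sqrt(...), q^2 N)
    s2 = p**4 + 4 * N**2 * q**4
    s = isqrt(s2)
    assert s * s == s2          # the hypotenuse is rational, so s2 is a square
    return s, p * s, q**2 * N

def sides(N, p, q):
    # the legs a = p/q and b = 2Nq/p of the triangle with area N
    return Fr(p, q), Fr(2 * N * q, p)

def walk(N, a, b, path):
    # follow a side-path such as "ab": choose side a or b, re-assign
    # (p,q) from its numerator/denominator, then apply the recurrence
    for choice in path:
        s = a if choice == "a" else b
        N, p, q = step(N, s.numerator, s.denominator)
        a, b = sides(N, p, q)
    return N, a, b

for path in ("a", "b", "ab"):
    N2, a2, b2 = walk(6, Fr(3), Fr(4), path)
    print(path, N2, a2, b2)
# a 15 15/2 4
# b 20 40/3 3
# ab 34 136/15 15/2
```

Starting from $6:(3,4,5)$ this reproduces the branches $15$, $20$ and $34$ of the tree shown below.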
\newline \linebreak For the right triangles from our first example for $(m,n)=(2,1)$ the first few iterations give us $$ \begin{array}{ccccccccccc} 6&:&(3, 4, 5) & \overset{a}{\rightarrow}& 15&:& \left(\frac{15}{2},4,\frac{17}{2}\right) & \overset{a}{\rightarrow}& {255}&:&\left(\frac{255}{4},8,\frac{257}{4}\right) \\& & & & & & & \overset{b}{\rightarrow} & {34}&:&\left(\frac{136}{15},\frac{15}{2},\frac{353}{30}\right) \\& & & \overset{b}{\rightarrow}& 20&:& \left(\frac{40}{3},3,\frac{41}{3}\right) & \overset{a}{\rightarrow} & {1640}&:&\left(\frac{3280}{9},9,\frac{3281}{9}\right) \\& & & & & & & \overset{b}{\rightarrow} & {41}&:&\left(\frac{123}{20},\frac{40}{3},\frac{881}{60}\right) \\ 34&:&(\frac{15}{2}, \frac{136}{15}, \frac{353}{30}) & \overset{a}{\rightarrow}& 353&:& \left(\frac{5295}{136},\frac{272}{15},\frac{87617}{2040}\right) & \overset{a}{\rightarrow}& {30928801}&:&\left(\frac{463932015}{18496},\frac{36992}{15},\frac{6992534657}{277440}\right) \\& & & & & & & \overset{b}{\rightarrow} & 175234&:&\left(\frac{47663648}{79425},\frac{79425}{136},\frac{9045146753}{10801800}\right) \\ & & & \overset{b}{\rightarrow} & 24004&:& \left(\frac{96016}{225},\frac{225}{2},\frac{198593}{450}\right) & \overset{a}{\rightarrow}& 9534052744&:&\left(\frac{38136210976}{50625},\frac{50625}{2},\frac{76315468673}{101250}\right) \\& & & & & & & \overset{b}{\rightarrow} & 198593&:&\left(\frac{44683425}{96016},\frac{192032}{225},\frac{21001035137}{21603600}\right) \\ 41&:&(\frac{40}{3}, \frac{123}{20}, \frac{881}{60}) & \overset{a}{\rightarrow} & 1762&:& \left(\frac{70480}{369},\frac{369}{20},\frac{1416161}{7380}\right) & \overset{a}{\rightarrow}& 4990551364&:&\left(\frac{199622054560}{136161},\frac{136161}{20},\frac{3992484137921}{2723220}\right) \\& & & & & & & \overset{b}{\rightarrow} & 1416161&:&\left(\frac{522563409}{704800},\frac{1409600}{369},\frac{1012025897921}{260071200}\right) \\ & & &\overset{b}{\rightarrow} & 36121&:&
\left(\frac{108363}{400},\frac{800}{3},\frac{456161}{1200}\right) & \overset{a}{\rightarrow}& 16476991481&:&\left(\frac{49430974443}{160000},\frac{320000}{3},\frac{156882857921}{480000}\right) \\& & & & & & & \overset{b}{\rightarrow} & 912322&:&\left(\frac{729857600}{325089},\frac{325089}{400},\frac{310482857921}{130035600}\right) \\ 7&:&(\frac{24}{5}, \frac{35}{12}, \frac{337}{60}) & \overset{a}{\rightarrow} & 674&:& \left(\frac{16176}{175},\frac{175}{12},\frac{196513}{2100}\right) & \overset{a}{\rightarrow}& 264899524&:&\left(\frac{6357588576}{30625},\frac{30625}{12},\frac{76296827713}{367500}\right) \\& & & & & & & \overset{b}{\rightarrow} & 196513&:&\left(\frac{34389775}{97056},\frac{194112}{175},\frac{19777624897}{16984800}\right) \\ & & & \overset{b}{\rightarrow} & 2359&:& \left(\frac{11795}{144},\frac{288}{5},\frac{72097}{720}\right) & \overset{a}{\rightarrow}& 170076823&:&\left(\frac{850384115}{20736},\frac{41472}{5},\frac{4338014017}{103680}\right) \\& & & & & & & \overset{b}{\rightarrow} & 144194&:&\left(\frac{41527872}{58975},\frac{58975}{144},\frac{6917904193}{8492400}\right) \\ \end{array} $$ The equations and examples can be verified in the Sagemath notebook $\texttt{08\_Unseen\_recurrence.ipynb}$. \pagebreak \section{The Fibonacci/Lucas numbers} We define two congruent number sequences related to the Fibonacci/Lucas numbers and show how the congruent number solution for $5$ first found by Fibonacci can be expressed in terms of the Fibonacci/Lucas numbers. \newline \linebreak Combining the identity between the Fibonacci \cite{9} and the Lucas numbers \cite{10} \begin{equation} L_{n}^2= 5 F_{n}^2+4(-1)^n\end{equation} together with the algebraic equation \begin{equation} \left(L_n^2-4\right)^2+\left(4L_n\right)^2= \left(L_n^2+4\right)^2\end{equation} we can define one right triangle for $n$ even and one for $n$ odd due to the $(-1)^n$ in the identity.
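Both the identity and the resulting Pythagorean triangles can be checked directly. The short Python sketch below (ours, not the notebook code) verifies $L_n^2=5F_n^2+4(-1)^n$, exhibits one even-index and one odd-index triangle, and, dividing out the square factors at index $6$, recovers Fibonacci's triangle of area $5$:

```python
from fractions import Fraction as Fr

# First Fibonacci and Lucas numbers: F = 0,1,1,2,...  L = 2,1,3,4,...
F, L = [0, 1], [2, 1]
for _ in range(30):
    F.append(F[-1] + F[-2])
    L.append(L[-1] + L[-2])

# the identity L_n^2 = 5 F_n^2 + 4(-1)^n
assert all(L[k]**2 == 5 * F[k]**2 + 4 * (-1)**k for k in range(30))

# n even: legs L^2 - 4 = 5F^2 and 4L, hypotenuse L^2 + 4
n = 6
assert (5 * F[n]**2)**2 + (4 * L[n])**2 == (L[n]**2 + 4)**2

# n odd: legs L^2 - 4 and 4L, hypotenuse L^2 + 4 = 5F^2
n = 7
assert (L[n]**2 - 4)**2 + (4 * L[n])**2 == (5 * F[n]**2)**2

# reducing the even-index triangle at n = 6 by F_6 and then by 6
# gives Fibonacci's triangle of area 5
a, b, c = Fr(5 * F[6], 6), Fr(4 * L[6], 6 * F[6]), Fr(L[6]**2 + 4, 6 * F[6])
assert a**2 + b**2 == c**2 and a * b / 2 == 5
print(a, b, c)    # 20/3 3/2 41/6
```

The final triple $(20/3,\,3/2,\,41/6)$ is exactly the triangle derived in the next subsection.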
\subsection{The even indices} The right triangle resulting from the algebraic equation and the identity for even indices is \begin{equation}(a,b,c)_n'=(5 F_{2n}^2, 4L_{2n},L_{2n}^2+4) \end{equation} Reducing the sides by $F_{2n}$ we obtain the rational right triangle \begin{equation}(a,b,c)_n=\left(5 F_{2n}, 4\frac{L_{2n}}{F_{2n}},\frac{L_{2n}^2+4}{F_{2n}}\right) \end{equation} and the area of this triangle defines the congruent number sequence \begin{equation} N_{n}=10 L_{2n}= 10 \left(F_{2n-1}+F_{2n+1}\right)\end{equation} An interesting historical example is $N_3 = 10 L_6 = 5 \cdot 6^2$, for which the sides can be reduced by $6$ due to the square factor, obtaining a right triangle with area $5$ in terms of the Fibonacci/Lucas numbers \begin{equation}(a,b,c)=\left(\frac{5 F_6}{6},\frac{4 L_6}{6 F_6},\frac{L_6^2+4}{6 F_6}\right)=\left(\frac{20}{3},\frac{3}{2},\frac{41}{6}\right)\end{equation} Multiplying the identity (81) for $n$ even by $2000$ we obtain $$ 20 (10 L_{2n})^2 = (10^2 F_{2n})^2 + 20^3 $$ from which it follows that there exists a line parallel to the $y$-axis at $x=-20$ intersecting and connecting all the Elliptic curves $E_{N_{n}}: y^2=x^3- N_{n}^2 \;x$ for this congruent number sequence at the rational points \begin{equation} P_0= \left(-20, 10^2 F_{2n}\right) \end{equation} These points are closely related to the two points \begin{equation} P_1= \left(\frac{1}{2}a(a + c) , \frac{1}{2}a^2(a + c)\right) \;\;\; ,\;\;\; P_2 = \left(\frac{c^2}{4}, \frac{c}{8} (a^2 - b^2)\right) \end{equation} By adding the trivial point $(0,0)$ to $P_0$ we obtain the point $P_1$, and the point $P_2$ is obtained by doubling $P_0$ \begin{equation} P_1= (0,0) + P_0 \;\;\;,\;\;\; P_2 = 2 P_0 \end{equation} \subsection{The odd indices} The right triangle resulting from the algebraic equation and the identity for odd indices is \begin{equation}(a,b,c)_n=(L_{2n+1}^2-4,4L_{2n+1},5 F_{2n+1}^2) \end{equation} giving the congruent number sequence
\begin{equation}N_{n}=2\left(L_{2n+1}^2-4\right) L_{2n+1} \end{equation} \newline \linebreak The Elliptic curve $E_{N_{n}}: y^2=x^3- N_{n}^2 \;x$ for this congruent number sequence also has the rational points of infinite order as defined by $(88)$. \newline \linebreak The equations and examples can be verified in the Sagemath notebook $\texttt{09\_Fibonacci\_Lucas.ipynb}$. \pagebreak \section{The Chebyshev polynomials} We define a congruent number family related to the Chebyshev polynomials and show its relationship to the Heron-Brahmagupta triangles, their semiperimeter and some properties of their associated elliptic curves. \newline \linebreak Using the Pell identity \cite{11} relating the Chebyshev polynomials of the first kind $T_{m}(n)$ and of the second kind $U_{m-1}(n)$ \begin{equation} T_{m}(n)^2= (n^2-1) U_{m-1}(n)^2+1\end{equation} in combination with the algebraic equation for the polynomials of the first kind \begin{equation} \left(T_{m}(n)^2-1\right)^2+ 4 T_{m}(n)^2 = \left(T_{m}(n)^2+1\right)^2\end{equation} we can define the right triangle $(a,b,c)'$ for $m>0$ and $n>1$ by \begin{equation}(a,b,c)_{m,n}'=\left(\left(n^2-1\right) U_{m-1}\left(n\right)^2, 2 T_{m}(n),T_{m}\left(n\right)^2+1\right) \end{equation} Reducing the sides by $U_{m-1}(n)$, due to its square factor, we get the rational triangle \begin{equation}(a,b,c)_{m,n}=\left(\left(n^2-1\right) U_{m-1}(n),\frac{2 T_m(n)}{U_{m-1}(n)} ,\frac{T_m(n)^2+1}{U_{m-1}(n)} \right)\end{equation} and the area $N_{m,n}$ of this triangle defines the congruent number polynomial \begin{equation}N_{m,n}= (n^2-1) T_{m}(n)\end{equation} related to the Chebyshev polynomial of the first kind. \newline The corresponding congruent number Elliptic curve $E_{N_{m,n}}: y^2=x^3- N_{m,n}^2 \;x$ has the rational point \begin{equation} P_0:\left(1-n^2, (n^2-1)^2U_{m-1}(n)\right) \end{equation} and also the two rational points $P_1,P_2$ as defined by $(88)$ with the same relationships as $(89)$.
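These statements can be verified with a few lines of exact arithmetic. The sketch below (the helper functions are ours) checks the Pell identity and the point $P_0$ for a range of $(m,n)$, and confirms the relations $P_1=(0,0)+P_0$ and $P_2=2P_0$ by chord-and-tangent addition for $(m,n)=(3,2)$, i.e.\ $N=78$:

```python
from fractions import Fraction as Fr

def chebyshev(m, n):
    """Return (T_m(n), U_{m-1}(n)) via the shared recurrence p_{k+1} = 2n p_k - p_{k-1}."""
    t0, t1 = 1, n   # T_0, T_1
    u0, u1 = 0, 1   # U_{-1}, U_0
    for _ in range(m - 1):
        t0, t1 = t1, 2 * n * t1 - t0
        u0, u1 = u1, 2 * n * u1 - u0
    return t1, u1

def ec_add(P, Q, Nsq):
    """Chord-and-tangent addition on y^2 = x^3 - Nsq*x (affine points, no point at infinity)."""
    (x1, y1), (x2, y2) = P, Q
    lam = (3 * x1 * x1 - Nsq) / (2 * y1) if P == Q else (y2 - y1) / (x2 - x1)
    x3 = lam * lam - x1 - x2
    return (x3, lam * (x1 - x3) - y1)

# Pell identity and the point P0 for a range of (m, n)
for n in range(2, 6):
    for m in range(1, 8):
        T, U = chebyshev(m, n)
        assert T * T == (n * n - 1) * U * U + 1      # Pell identity
        N = (n * n - 1) * T                          # congruent number polynomial value
        x0, y0 = Fr(1 - n * n), Fr((n * n - 1) ** 2 * U)
        assert y0 * y0 == x0 ** 3 - N * N * x0       # P0 lies on E_N

# P1 = (0,0) + P0 and P2 = 2 P0 for (m, n) = (3, 2), N = 78
T, U = chebyshev(3, 2)                               # (26, 15)
a, b, c = Fr(3 * U), Fr(2 * T, U), Fr(T * T + 1, U)  # (45, 52/15, 677/15)
N, P0 = 78, (Fr(-3), Fr(135))
assert a * b / 2 == N                                # the triangle area is the congruent number
P1 = (a * (a + c) / 2, a * a * (a + c) / 2)
P2 = (c * c / 4, c * (a * a - b * b) / 8)
assert ec_add((Fr(0), Fr(0)), P0, N * N) == P1
assert ec_add(P0, P0, N * N) == P2
```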
\subsection{The Brahmagupta triangles} The Brahmagupta triangles are Heronian triangles with consecutive integer sides $A,B,C$, integer semiperimeter $P$ and integer area $S$. A generalization of these triangles with consecutive integer sides, as defined in \cite{12}, is \begin{eqnarray}(A,B,C)_{t} & = &\left( t -1 , t , t + 1\right) \\ P_{t} & = &\frac{1}{2}(A+B+C)=\frac{3t}{2} \\ S_{t} &= &\sqrt{P(P-A)(P-B)(P-C)}= \frac{t}{2} \sqrt{3 \left(\left(\frac{t}{2}\right)^2-1\right)} \end{eqnarray} The Brahmagupta triangles are obtained by replacing $t$ by $2T_{t}(2)$, where $T_{t}(2)$ is the Chebyshev polynomial of the first kind. For these triangles the semiperimeter $P_t$ and the area $N_{m,n}$ defined by $(96)$ for a right triangle $(95)$ are related by \begin{equation} P_{t}=N_{t,2} \end{equation} The elliptic curves for the generalized triangles $$E: y^2 = (x + A B) (x + B C) (x + A C) $$ have a minimum rank of $1$ due to the integral points $Q_{0,1,2,3}:(x,y)$ of infinite order for $t>2$ \begin{equation} Q_0: \left(0,A B C\right) \;\;, \;\; Q_1: \left(-B^2, B \right) \;\;,\;\; Q_2: \left(2 - A B , 2 C \right) \;\;,\;\; Q_3: \left(2 - B C , 2 A \right) \end{equation} For $t=2$ the area of the triangle is $0$ because the sides are $(A,B,C)=(1,2,3)$. In this case the Elliptic curve reduces to $E : y^2= x^3+11 x^2+36 x +36$, which has rank $0$, and the points $Q_{0..3}$ have order $4$.
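That the four points lie on $E$ for every $t$ is a quick numerical check; a short Python sketch (the helper is ours):

```python
# For the generalized triangles (A, B, C) = (t-1, t, t+1), check that the
# points Q_0..Q_3 lie on E: y^2 = (x + AB)(x + BC)(x + AC).

def on_curve(x, y, A, B, C):
    return y * y == (x + A * B) * (x + B * C) * (x + A * C)

for t in range(3, 50):
    A, B, C = t - 1, t, t + 1
    points = [(0, A * B * C),        # Q_0
              (-B * B, B),           # Q_1
              (2 - A * B, 2 * C),    # Q_2
              (2 - B * C, 2 * A)]    # Q_3
    assert all(on_curve(x, y, A, B, C) for x, y in points)
```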
\newline \linebreak For example, for $t=2 T_{3}(2)$ we obtain the Brahmagupta triangle with integer sides, area and semiperimeter $$(A,B,C)=(51,52,53) \;\;\;,\;\;\; S=1170 \;\;\;,\;\;\; P=78$$ and the integral points $Q_{0,1,2,3}$ on the elliptic curve $E: y^2=(x+2652)(x+2756)(x+2703)$ are $$ Q_0:(0,140556) \;\;,\;\; Q_1:(-2704,52) \;\;,\;\; Q_2:(-2650,106) \;\;,\;\; Q_3:(-2754,102)$$ The semiperimeter is a congruent number due to $(101)$, representing the area of the triangle defined by $(95)$ $$(a,b,c)=\left(45,\frac{52}{15},\frac{677}{15}\right) $$ with the three rational points $P_{0,1,2}$ on the elliptic curve $E_{N_{3,2}}: y^2=x^3 - 78^2x$ $$ P_0:(-3,135) \;\;,\;\; P_1:(2028,91260) \;\;,\;\; P_2:\left(\frac{458329}{900},\frac{306627517}{27000}\right) $$ The equations and examples can be verified in the Sagemath notebook $\texttt{10\_Chebyshev\_polynomials.ipynb}$. \pagebreak \section{The Fermat triangles} A naive recursive method is presented to generate solutions to a problem posed by Fermat in $1643$: finding the smallest right triangle, with sides $$(a,b,c)=(4565486027761, 1061652293520 , 4687298610289)$$ for which the sum of the sides $a+b = 2372159^2 $ and the hypotenuse $c = 2165017^2$ are squares. \newline \linebreak The Pythagorean triple for this solution is generated using Euclid's formula assigning $(m,n) = (2150905, 246792)$. \newline \linebreak The method is based on the fact that the triple solutions for which the hypotenuse $c$ and the sum $a + b$ or the difference $a - b$ are squares form an infinite tree. \newline \linebreak Denoting the solutions for which the sum is a square by $P_i$ and the solutions for which the difference is a square by $N_i$, the following table shows the solutions $(a,b,c)$ for the first and second $N_i$ and $P_i$ triangles.
\newline \begin{center} \renewcommand{\arraystretch}{1.2} \begin{tabu}{|c|c|c|c|c|c|c|c|c|c|} \hline \multicolumn{2}{|c|[1pt]}{$$}& \multicolumn{2}{|c|[1pt]}{$N_1$} & \multicolumn{2}{|c|[1pt]}{$N_2$} & \multicolumn{2}{|c|[1pt]}{$P_1$} & \multicolumn{2}{|c|[1pt]}{$P_2$} \\ \hline \tabucline[1pt]\hline \multicolumn{2}{|c|[1pt]}{$a$} & \multicolumn{2}{|c|[1pt]}{$-119$} & \multicolumn{2}{|c|[1pt]}{$2276953$} & \multicolumn{2}{|c|[1pt]}{$4565486027761$} & \multicolumn{2}{|c|[1pt]}{$214038981475081188634947041892245670988588201$} \\ \hline \multicolumn{2}{|c|[1pt]}{$b$} & \multicolumn{2}{|c|[1pt]}{$120$} & \multicolumn{2}{|c|[1pt]}{$-473304$} & \multicolumn{2}{|c|[1pt]}{$1061652293520$} & \multicolumn{2}{|c|[1pt]}{$109945628264924023237017010068507003594693720$} \\ \hline \multicolumn{2}{|c|[1pt]}{$c$} & \multicolumn{2}{|c|[1pt]}{$169$} & \multicolumn{2}{|c|[1pt]}{$2325625$}& \multicolumn{2}{|c|[1pt]}{$4687298610289$} & \multicolumn{2}{|c|[1pt]}{$240625698472667313160415295005368384723483849$} \\ \hline \end{tabu} \newline \renewcommand{\arraystretch}{1} \end{center} The infinite tree starts from a root node where each node defines 2 child solutions. \newline \linebreak The root node is defined by the fraction $x=\frac{p}{q}=\frac{1}{1}$ and represents the initial trivial solution $1^2+0^2 =1^2$. 
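The whole step from one node to its children, spelled out in the equations that follow, can be sketched with exact rational arithmetic. This is a bounded-depth version of the recursion (the depth limit and helper names are ours; the Sagemath notebook $\texttt{11\_Fermat.ipynb}$ is the reference implementation):

```python
from fractions import Fraction
from math import isqrt

def fsqrt(x):
    """Exact square root of a non-negative rational that is a perfect square."""
    num, den = isqrt(x.numerator), isqrt(x.denominator)
    assert num * num == x.numerator and den * den == x.denominator
    return Fraction(num, den)

def fermat(x, depth, solutions):
    p, q = x.numerator, x.denominator
    a = Fraction(p * q)                     # Pythagorean triple for x = p/q
    b = -Fraction(p * p - q * q, 2)
    c = Fraction(p * p + q * q, 2)
    if x != 1:
        solutions.append((a, b, c))
    if depth == 0:
        return
    n, m = a - b, fsqrt(a + b) * fsqrt(c)   # intermediate variables
    s = fsqrt(8 * m ** 4 + n ** 4)
    den = 16 * m ** 4 + n ** 4
    for child in (((2 * m * n) ** 2 + n ** 4 + 4 * m * n * s) / den,
                  ((2 * m * n) ** 2 + n ** 4 - 4 * m * n * s) / den):
        if child != 1:
            fermat(child, depth - 1, solutions)

solutions = []
fermat(Fraction(1), 2, solutions)
# From the root 1/1 this yields N1 = (-119, 120, 169) and its two children,
# among them Fermat's triple (4565486027761, 1061652293520, 4687298610289).
```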
\newline \linebreak Assigning $(p,q)=\left(Numerator[x],Denominator[x]\right)$ we can define the Pythagorean triple \begin{equation}(a, b, c) = \left(p\; q , -\frac{1}{2}(p^2 - q^2),\frac{1}{2} (p^2 + q^2)\right)\end{equation} and determine the intermediate variables $(n,m)$ \begin{equation}(n, m) = (a - b , \sqrt{a + b}\sqrt{c})\end{equation} \newline These allow us to evaluate the fractions $(x_1,x_2)$ \begin{equation}(x_1, x_2) = \left(\frac{(2 m n)^2 + n^4 + 4 m n \sqrt{8 m^4 + n^4}}{ 16 m^4 + n^4}, \frac{(2 m n)^2 + n^4 - 4 m n \sqrt{8 m^4 + n^4}}{ 16 m^4 + n^4}\right)\end{equation} \newline generating the two child solutions by assigning $p,q$ to the numerator and denominator of the respective fraction, giving two new triples whenever $x_i \ne 1$. \newline \linebreak In this way the method steps from one solution to the next without any search. \subsection{The recursive algorithm} The method can be implemented as a recursive algorithm as follows. \begin{algorithm} \caption{Fermat recursive algorithm} \textbf{procedure} $Fermat(x)$ \tikzmark{right}\Comment{ $x \in \mathbb{Q}$ } \begin{algorithmic} \State $(p,q) \leftarrow(Numerator(x),Denominator(x))$ \State $(a, b, c) \leftarrow (p\; q , -\frac{1}{2}(p^2 - q^2),\frac{1}{2} (p^2 + q^2))$ \tikzmark{right}\Comment{Pythagorean triple} \State $(n, m) \leftarrow (a - b , \sqrt{a + b}\sqrt{c})$ \State $(x_1, x_2) \leftarrow \left(\frac{(2 m n)^2 + n^4 + 4 m n \sqrt{8 m^4 + n^4}}{ 16 m^4 + n^4}, \frac{(2 m n)^2 + n^4 - 4 m n \sqrt{8 m^4 + n^4}}{ 16 m^4 + n^4}\right)$ \If{$x_1 \neq 1$} $ Fermat(x_1) $ \EndIf \If{$x_2 \neq 1$} $ Fermat(x_2) $ \EndIf \end{algorithmic} \textbf{end procedure} \end{algorithm} \newline A Python version of this algorithm can be found in the Sagemath notebook $\texttt{11\_Fermat.ipynb}$. \pagebreak \section{The Prime footprint tables} \label{section:TFP} For the footprint equations of Section \ref{section:FP} we include the solutions $N,m,n$.
\subsection{For $N = p ( 1 mod 8)$} \label{table:T0} \begin{center} \begin{tabu}{|c|c|c|c|} \hline \multicolumn{2}{|c|[1pt]}{$N$} & \multicolumn{2}{c|[1pt]}{$m,n$} \\ \hline \tabucline[1pt]\hline \multicolumn{2}{|c|[1pt]}{41} & \multicolumn{2}{|c|[1pt]}{ 2,1} \\ \hline \multicolumn{2}{|c|[1pt]}{313} & \multicolumn{2}{|c|[1pt]}{3,2} \\ \hline \multicolumn{2}{|c|[1pt]}{353} & \multicolumn{2}{|c|[1pt]}{4,1} \\ \hline \tabucline[1pt]\hline \multicolumn{2}{|c|[1pt]}{137} & \multicolumn{2}{|c|[1pt]}{ 23,51} \\ \hline \multicolumn{2}{|c|[1pt]}{257} & \multicolumn{2}{|c|[1pt]}{ 9,17} \\ \hline \multicolumn{2}{|c|[1pt]}{457} & \multicolumn{2}{|c|[1pt]}{ 11,23} \\ \hline \multicolumn{2}{|c|[1pt]}{761} & \multicolumn{2}{|c|[1pt]}{ 31,51} \\ \hline \end{tabu} \end{center} \subsection{For $N = p ( 5 mod 8)$} \label{table:TI} \begin{center} \begin{tabu}{|c|c|c|c|} \hline \multicolumn{2}{|c|[1pt]}{$N$} & \multicolumn{2}{c|[1pt]}{$m,n$} \\ \hline \tabucline[1pt]\hline \multicolumn{2}{|c|[1pt]}{5} & \multicolumn{2}{|c|[1pt]}{ 1,1} \\ \hline \multicolumn{2}{|c|[1pt]}{13} & \multicolumn{2}{|c|[1pt]}{ 1,-5} \\ \hline \multicolumn{2}{|c|[1pt]}{29} & \multicolumn{2}{|c|[1pt]}{ 1,-13} \\ \hline \multicolumn{2}{|c|[1pt]}{37} & \multicolumn{2}{|c|[1pt]}{ 5,29} \\ \hline \multicolumn{2}{|c|[1pt]}{53} & \multicolumn{2}{|c|[1pt]}{ 41,145} \\ \hline \multicolumn{2}{|c|[1pt]}{61} & \multicolumn{2}{|c|[1pt]}{ 5,-89} \\ \hline \multicolumn{2}{|c|[1pt]}{101} & \multicolumn{2}{|c|[1pt]}{ 53,397} \\ \hline \multicolumn{2}{|c|[1pt]}{109} & \multicolumn{2}{|c|[1pt]}{ 1,5} \\ \hline \multicolumn{2}{|c|[1pt]}{149} & \multicolumn{2}{|c|[1pt]}{ 1,-25} \\ \hline \multicolumn{2}{|c|[1pt]}{157} & \multicolumn{2}{|c|[1pt]}{ 87005,610961} \\ \hline \multicolumn{2}{|c|[1pt]}{173} & \multicolumn{2}{|c|[1pt]}{ 10865,-343141} \\ \hline \multicolumn{2}{|c|[1pt]}{181} & \multicolumn{2}{|c|[1pt]}{ 13,-317} \\ \hline \multicolumn{2}{|c|[1pt]}{197} & \multicolumn{2}{|c|[1pt]}{ 533,5965} \\ \hline 
\multicolumn{2}{|c|[1pt]}{229} & \multicolumn{2}{|c|[1pt]}{ 61,-1013} \\ \hline \multicolumn{2}{|c|[1pt]}{269} & \multicolumn{2}{|c|[1pt]}{ 74425,1150909} \\ \hline \multicolumn{2}{|c|[1pt]}{277} & \multicolumn{2}{|c|[1pt]}{ 4264945,54532889} \\ \hline \multicolumn{2}{|c|[1pt]}{293} & \multicolumn{2}{|c|[1pt]}{ 75665,-3059809} \\ \hline \multicolumn{2}{|c|[1pt]}{317} & \multicolumn{2}{|c|[1pt]}{ 15073,195805} \\ \hline \multicolumn{2}{|c|[1pt]}{349} & \multicolumn{2}{|c|[1pt]}{ 29,257} \\ \hline \multicolumn{2}{|c|[1pt]}{373} & \multicolumn{2}{|c|[1pt]}{ 312842465,-8322284921} \\ \hline \multicolumn{2}{|c|[1pt]}{389} & \multicolumn{2}{|c|[1pt]}{ 976825,15047929} \\ \hline \multicolumn{2}{|c|[1pt]}{397} & \multicolumn{2}{|c|[1pt]}{ 113,965} \\ \hline \multicolumn{2}{|c|[1pt]}{421} & \multicolumn{2}{|c|[1pt]}{ 1453,22565} \\ \hline \multicolumn{2}{|c|[1pt]}{461} & \multicolumn{2}{|c|[1pt]}{ 84653,-3918457} \\ \hline \multicolumn{2}{|c|[1pt]}{509} & \multicolumn{2}{|c|[1pt]}{ 1,13} \\ \hline \multicolumn{2}{|c|[1pt]}{541} & \multicolumn{2}{|c|[1pt]}{ 40573,-2037137} \\ \hline \multicolumn{2}{|c|[1pt]}{557} & \multicolumn{2}{|c|[1pt]}{ 5585,-312709} \\ \hline \multicolumn{2}{|c|[1pt]}{613} & \multicolumn{2}{|c|[1pt]}{ 823709,17966645} \\ \hline \multicolumn{2}{|c|[1pt]}{653} & \multicolumn{2}{|c|[1pt]}{ 669773012805221,7399743723764065} \\ \hline \multicolumn{2}{|c|[1pt]}{661} & \multicolumn{2}{|c|[1pt]}{ 923777,-56911865} \\ \hline \multicolumn{2}{|c|[1pt]}{677} & \multicolumn{2}{|c|[1pt]}{ 4755629897398453,55948826401549645} \\ \hline \multicolumn{2}{|c|[1pt]}{701} & \multicolumn{2}{|c|[1pt]}{ 265358113,2932361965} \\ \hline \multicolumn{2}{|c|[1pt]}{709} & \multicolumn{2}{|c|[1pt]}{ 1,17} \\ \hline \multicolumn{2}{|c|[1pt]}{733} & \multicolumn{2}{|c|[1pt]}{ 24668160920485,307594098561689} \\ \hline \multicolumn{2}{|c|[1pt]}{757} & \multicolumn{2}{|c|[1pt]}{ 1607466650773733245,-83390357142888376589} \\ \hline \multicolumn{2}{|c|[1pt]}{773} & 
\multicolumn{2}{|c|[1pt]}{ 1528361,23404585} \\ \hline \multicolumn{2}{|c|[1pt]}{797} & \multicolumn{2}{|c|[1pt]}{ 18638233339062102833,349269016577216709805} \\ \hline \multicolumn{2}{|c|[1pt]}{821} & \multicolumn{2}{|c|[1pt]}{ 5,-181} \\ \hline \multicolumn{2}{|c|[1pt]}{829} & \multicolumn{2}{|c|[1pt]}{ 240841442845,4628302886849} \\ \hline \multicolumn{2}{|c|[1pt]}{853} & \multicolumn{2}{|c|[1pt]}{ 4849409,-320740745} \\ \hline \multicolumn{2}{|c|[1pt]}{877} & \multicolumn{2}{|c|[1pt]}{ 54778385345,-1789378941029} \\ \hline \multicolumn{2}{|c|[1pt]}{941} & \multicolumn{2}{|c|[1pt]}{ 738533,9409777} \\ \hline \multicolumn{2}{|c|[1pt]}{997} & \multicolumn{2}{|c|[1pt]}{ 93521216025186467821517,2200234831458939782292245} \\ \hline \end{tabu} \end{center} \pagebreak \subsection{For $N = p ( 7 mod 8)$ } \label{table:TII} \begin{center} \begin{tabu}{|c|c|c|c|} \hline \multicolumn{2}{|c|[1pt]}{$N$} & \multicolumn{2}{|c|[1pt]}{$m,n$} \\ \hline \tabucline[1pt]\hline \multicolumn{2}{|c|[1pt]}{7} & \multicolumn{2}{|c|[1pt]}{ 2,1} \\ \hline \multicolumn{2}{|c|[1pt]}{23} & \multicolumn{2}{|c|[1pt]}{ 13,6} \\ \hline \multicolumn{2}{|c|[1pt]}{31} & \multicolumn{2}{|c|[1pt]}{ 5,4} \\ \hline \multicolumn{2}{|c|[1pt]}{47} & \multicolumn{2}{|c|[1pt]}{ 53,36} \\ \hline \multicolumn{2}{|c|[1pt]}{71} & \multicolumn{2}{|c|[1pt]}{ 6,5} \\ \hline \multicolumn{2}{|c|[1pt]}{79} & \multicolumn{2}{|c|[1pt]}{ 125,52} \\ \hline \multicolumn{2}{|c|[1pt]}{103} & \multicolumn{2}{|c|[1pt]}{ 10361,4522} \\ \hline \multicolumn{2}{|c|[1pt]}{127} & \multicolumn{2}{|c|[1pt]}{ 145249,60248} \\ \hline \multicolumn{2}{|c|[1pt]}{151} & \multicolumn{2}{|c|[1pt]}{ 17,10} \\ \hline \multicolumn{2}{|c|[1pt]}{167} & \multicolumn{2}{|c|[1pt]}{ 449,378} \\ \hline \multicolumn{2}{|c|[1pt]}{191} & \multicolumn{2}{|c|[1pt]}{ 27469,11580} \\ \hline \multicolumn{2}{|c|[1pt]}{199} & \multicolumn{2}{|c|[1pt]}{ 1465,1274} \\ \hline \multicolumn{2}{|c|[1pt]}{223} & \multicolumn{2}{|c|[1pt]}{ 77056,46313} \\ \hline 
\multicolumn{2}{|c|[1pt]}{239} & \multicolumn{2}{|c|[1pt]}{ 12,5} \\ \hline \multicolumn{2}{|c|[1pt]}{263} & \multicolumn{2}{|c|[1pt]}{ 239558789,102570078} \\ \hline \multicolumn{2}{|c|[1pt]}{271} & \multicolumn{2}{|c|[1pt]}{ 14485,10388} \\ \hline \multicolumn{2}{|c|[1pt]}{311} & \multicolumn{2}{|c|[1pt]}{ 676273,322890} \\ \hline \multicolumn{2}{|c|[1pt]}{359} & \multicolumn{2}{|c|[1pt]}{ 8329,3450} \\ \hline \multicolumn{2}{|c|[1pt]}{367} & \multicolumn{2}{|c|[1pt]}{ 91887589669,89142445028} \\ \hline \multicolumn{2}{|c|[1pt]}{383} & \multicolumn{2}{|c|[1pt]}{ 4692,2141} \\ \hline \multicolumn{2}{|c|[1pt]}{431} & \multicolumn{2}{|c|[1pt]}{ 15805,13908} \\ \hline \multicolumn{2}{|c|[1pt]}{439} & \multicolumn{2}{|c|[1pt]}{ 16510,8069} \\ \hline \multicolumn{2}{|c|[1pt]}{463} & \multicolumn{2}{|c|[1pt]}{ 27557,19564} \\ \hline \multicolumn{2}{|c|[1pt]}{479} & \multicolumn{2}{|c|[1pt]}{ 193,120} \\ \hline \multicolumn{2}{|c|[1pt]}{487} & \multicolumn{2}{|c|[1pt]}{ 5386013,2963446} \\ \hline \multicolumn{2}{|c|[1pt]}{503} & \multicolumn{2}{|c|[1pt]}{ 388326872921,162262911162} \\ \hline \multicolumn{2}{|c|[1pt]}{599} & \multicolumn{2}{|c|[1pt]}{ 1317481516325,661578829566} \\ \hline \multicolumn{2}{|c|[1pt]}{607} & \multicolumn{2}{|c|[1pt]}{ 94040069519467036,59514405263901653} \\ \hline \multicolumn{2}{|c|[1pt]}{631} & \multicolumn{2}{|c|[1pt]}{ 10078210,4186673} \\ \hline \multicolumn{2}{|c|[1pt]}{647} & \multicolumn{2}{|c|[1pt]}{ 555349528702900386,1035251108850383833} \\ \hline \multicolumn{2}{|c|[1pt]}{719} & \multicolumn{2}{|c|[1pt]}{ 65,144} \\ \hline \multicolumn{2}{|c|[1pt]}{727} & \multicolumn{2}{|c|[1pt]}{ 3971654742468682789,2041107876866928758} \\ \hline \multicolumn{2}{|c|[1pt]}{743} & \multicolumn{2}{|c|[1pt]}{ 607763379942529,546999398401098} \\ \hline \multicolumn{2}{|c|[1pt]}{751} & \multicolumn{2}{|c|[1pt]}{ 20,13} \\ \hline \multicolumn{2}{|c|[1pt]}{823} & \multicolumn{2}{|c|[1pt]}{ 39999941080334034526,22509182182068729173} \\ \hline 
\multicolumn{2}{|c|[1pt]}{839} & \multicolumn{2}{|c|[1pt]}{ 1050,457} \\ \hline \multicolumn{2}{|c|[1pt]}{863} & \multicolumn{2}{|c|[1pt]}{ 40579118388,16831312549} \\ \hline \multicolumn{2}{|c|[1pt]}{887} & \multicolumn{2}{|c|[1pt]}{ 37615957468559969,28312289691904218} \\ \hline \multicolumn{2}{|c|[1pt]}{911} & \multicolumn{2}{|c|[1pt]}{ 349810551819545,145327242840624} \\ \hline \multicolumn{2}{|c|[1pt]}{919} & \multicolumn{2}{|c|[1pt]}{ 20306,10225} \\ \hline \multicolumn{2}{|c|[1pt]}{967} & \multicolumn{2}{|c|[1pt]}{ 6445185278237,3169446617854} \\ \hline \multicolumn{2}{|c|[1pt]}{983} & \multicolumn{2}{|c|[1pt]}{ 14410159497869814,6444783543384757} \\ \hline \multicolumn{2}{|c|[1pt]}{991} & \multicolumn{2}{|c|[1pt]}{ 659581,282740} \\ \hline \end{tabu} \end{center} \begin{center} \renewcommand{\arraystretch}{1} \end{center} \pagebreak \subsection{For $N = 2 p , p (7 mod 8)$} \label{table:TIII} \begin{center} \begin{tabu}{|c|c|c|c|} \hline \multicolumn{2}{|c|[1pt]}{$N$} & \multicolumn{2}{|c|[1pt]}{$m,n$} \\ \hline \tabucline[1pt]\hline \multicolumn{2}{|c|[1pt]}{14} & \multicolumn{2}{|c|[1pt]}{$ 2,1$} \\ \hline \multicolumn{2}{|c|[1pt]}{46} & \multicolumn{2}{|c|[1pt]}{$ 2,-3$} \\ \hline \multicolumn{2}{|c|[1pt]}{62} & \multicolumn{2}{|c|[1pt]}{$ 20,7$} \\ \hline \multicolumn{2}{|c|[1pt]}{94} & \multicolumn{2}{|c|[1pt]}{$ 12,7$} \\ \hline \multicolumn{2}{|c|[1pt]}{142} & \multicolumn{2}{|c|[1pt]}{$ 290,171$} \\ \hline \multicolumn{2}{|c|[1pt]}{158} & \multicolumn{2}{|c|[1pt]}{$ 20,-31$} \\ \hline \multicolumn{2}{|c|[1pt]}{206} & \multicolumn{2}{|c|[1pt]}{$ 34,-41$} \\ \hline \multicolumn{2}{|c|[1pt]}{254} & \multicolumn{2}{|c|[1pt]}{$ 16,-17$} \\ \hline \multicolumn{2}{|c|[1pt]}{302} & \multicolumn{2}{|c|[1pt]}{$ 35890,-60401$} \\ \hline \multicolumn{2}{|c|[1pt]}{334} & \multicolumn{2}{|c|[1pt]}{$ 1018,-1257$} \\ \hline \multicolumn{2}{|c|[1pt]}{382} & \multicolumn{2}{|c|[1pt]}{$ 540,239$} \\ \hline \multicolumn{2}{|c|[1pt]}{398} & \multicolumn{2}{|c|[1pt]}{$ 
650,-1009$} \\ \hline \multicolumn{2}{|c|[1pt]}{446} & \multicolumn{2}{|c|[1pt]}{$ 6104,-6169$} \\ \hline \multicolumn{2}{|c|[1pt]}{478} & \multicolumn{2}{|c|[1pt]}{$ 3481500,2186791$} \\ \hline \multicolumn{2}{|c|[1pt]}{526} & \multicolumn{2}{|c|[1pt]}{$ 74,51$} \\ \hline \multicolumn{2}{|c|[1pt]}{542} & \multicolumn{2}{|c|[1pt]}{$ 4397260,1896071$} \\ \hline \multicolumn{2}{|c|[1pt]}{622} & \multicolumn{2}{|c|[1pt]}{$ 10,-9$} \\ \hline \multicolumn{2}{|c|[1pt]}{718} & \multicolumn{2}{|c|[1pt]}{$ 65870050,-68694393$} \\ \hline \multicolumn{2}{|c|[1pt]}{734} & \multicolumn{2}{|c|[1pt]}{$ 28,-23$} \\ \hline \multicolumn{2}{|c|[1pt]}{766} & \multicolumn{2}{|c|[1pt]}{$ 26316,-22351$} \\ \hline \multicolumn{2}{|c|[1pt]}{862} & \multicolumn{2}{|c|[1pt]}{$ 10373408340,3061104191$} \\ \hline \multicolumn{2}{|c|[1pt]}{878} & \multicolumn{2}{|c|[1pt]}{$ 2290,683$} \\ \hline \multicolumn{2}{|c|[1pt]}{926} & \multicolumn{2}{|c|[1pt]}{$ 19163084,10851559$} \\ \hline \multicolumn{2}{|c|[1pt]}{958} & \multicolumn{2}{|c|[1pt]}{$ 196826640,-325857553$} \\ \hline \multicolumn{2}{|c|[1pt]}{974} & \multicolumn{2}{|c|[1pt]}{$ 1451962,833243$} \\ \hline \end{tabu} \end{center} \subsection{For $N = 2 p , p (3 mod 8)$} \label{table:TIV} \begin{center} \begin{tabu}{|c|c|c|c|} \hline \multicolumn{2}{|c|[1pt]}{$N$} & \multicolumn{2}{|c|[1pt]}{$m,n$} \\ \hline \tabucline[1pt]\hline \multicolumn{2}{|c|[1pt]}{6} & \multicolumn{2}{|c|[1pt]}{$ 1,0$} \\ \hline \multicolumn{2}{|c|[1pt]}{22} & \multicolumn{2}{|c|[1pt]}{$ 2,1$} \\ \hline \multicolumn{2}{|c|[1pt]}{38} & \multicolumn{2}{|c|[1pt]}{$ 3,4$} \\ \hline \multicolumn{2}{|c|[1pt]}{86} & \multicolumn{2}{|c|[1pt]}{$ 2,3$} \\ \hline \multicolumn{2}{|c|[1pt]}{118} & \multicolumn{2}{|c|[1pt]}{$ 131,-42$} \\ \hline \multicolumn{2}{|c|[1pt]}{134} & \multicolumn{2}{|c|[1pt]}{$ 30,-11$} \\ \hline \multicolumn{2}{|c|[1pt]}{166} & \multicolumn{2}{|c|[1pt]}{$ 70,71$} \\ \hline \multicolumn{2}{|c|[1pt]}{214} & \multicolumn{2}{|c|[1pt]}{$ 12,13$} \\ \hline 
\multicolumn{2}{|c|[1pt]}{262} & \multicolumn{2}{|c|[1pt]}{$ 71,48$} \\ \hline \multicolumn{2}{|c|[1pt]}{278} & \multicolumn{2}{|c|[1pt]}{$ 767,336$} \\ \hline \multicolumn{2}{|c|[1pt]}{326} & \multicolumn{2}{|c|[1pt]}{$ 170,-69$} \\ \hline \multicolumn{2}{|c|[1pt]}{358} & \multicolumn{2}{|c|[1pt]}{$ 4674,-143$} \\ \hline \multicolumn{2}{|c|[1pt]}{422} & \multicolumn{2}{|c|[1pt]}{$ 23,-6$} \\ \hline \multicolumn{2}{|c|[1pt]}{454} & \multicolumn{2}{|c|[1pt]}{$ 700,467$} \\ \hline \multicolumn{2}{|c|[1pt]}{502} & \multicolumn{2}{|c|[1pt]}{$ 3529,-848$} \\ \hline \multicolumn{2}{|c|[1pt]}{566} & \multicolumn{2}{|c|[1pt]}{$ 769,1590$} \\ \hline \multicolumn{2}{|c|[1pt]}{614} & \multicolumn{2}{|c|[1pt]}{$ 7741,-174$} \\ \hline \multicolumn{2}{|c|[1pt]}{662} & \multicolumn{2}{|c|[1pt]}{$ 167376,128473$} \\ \hline \multicolumn{2}{|c|[1pt]}{694} & \multicolumn{2}{|c|[1pt]}{$ 13,10$} \\ \hline \multicolumn{2}{|c|[1pt]}{758} & \multicolumn{2}{|c|[1pt]}{$ 146000702,316270911$} \\ \hline \multicolumn{2}{|c|[1pt]}{838} & \multicolumn{2}{|c|[1pt]}{$ 2690159,-878878$} \\ \hline \multicolumn{2}{|c|[1pt]}{886} & \multicolumn{2}{|c|[1pt]}{$ 1334,181$} \\ \hline \multicolumn{2}{|c|[1pt]}{934} & \multicolumn{2}{|c|[1pt]}{$ 11,-4$} \\ \hline \multicolumn{2}{|c|[1pt]}{982} & \multicolumn{2}{|c|[1pt]}{$ 56212854,-14198593$} \\ \hline \multicolumn{2}{|c|[1pt]}{998} & \multicolumn{2}{|c|[1pt]}{$ 1839056773,1911574194$} \\ \hline \end{tabu} \end{center} \pagebreak \section{Conclusions} The fascination with Fermat's solution and the Pythagorean theorem led us to the rational triples, to their connection with the congruent number problem, and to discoveries beyond it, such as the Trinity system, unfolding beautiful algebraic and geometric properties. \newline \includegraphics[width=150mm,scale=1]{Star.png} \section{Acknowledgment} I would like to express my deepest gratitude to Dr. N. Tati Ruiz for supporting and motivating me. \bibliographystyle{te}
\section{Introduction} Previous work in the social sciences and psychology has shown that the impact and persuasive power of an argument depends not only on the language employed, but also on the credibility and character of the communicator (i.e.\ ethos) \cite{fb566a52435647fcbb369ed48db6fbec, Chaiken1979CommunicatorPA, source-effect}; the traits and prior beliefs of the audience \cite{audience-effect-1, audience-effect, Correll2004AnAS, doi:10.1177/0093650205277317}; and the pragmatic context in which the argument is presented (i.e.\ kairos) \cite{10.1086/209393, article-3}. Research in Natural Language Processing (NLP) has only partially corroborated these findings. One very influential line of work, for example, develops computational methods to automatically determine the linguistic characteristics of persuasive arguments \cite{habernal-gurevych-2016-makes, tan2016winning, zhang-etal-2016-conversational}, but it does so without controlling for the audience, the communicator or the pragmatic context. Very recent work, on the other hand, shows that attributes of both the audience and the communicator constitute important cues for determining argument strength \cite{lukin-etal-2017-argument,durmus-cardie-2018-exploring}. They further show that audience and communicator attributes can influence the relative importance of linguistic features for predicting the persuasiveness of an argument. These results confirm previous findings in the social sciences that show a person's perception of an argument can be influenced by their background and personality traits. To the best of our knowledge, however, no NLP studies explicitly investigate the role of \textit{kairos} --- a component of pragmatic context that refers to the context-dependent ``timeliness'' and ``appropriateness'' of an argument and its claims within an argumentative discourse --- in argument quality prediction.
Among the many social science studies of attitude change, the order in which argumentative claims are shared with the audience has been studied extensively: \newcite{10.1086/209393}, for example, summarize studies showing that the argument-related claims a person is exposed to beforehand can affect their perception of an alternative argument in complex ways. \newcite{article-3} similarly find that changes in an argument's context can have a big impact on the audience's perception of the argument. Some recent studies in NLP have investigated the effect of interactions on the overall persuasive power of posts in social media \cite{tan2016winning,AAAI1817077}. However, in social media not all posts have to express arguments or stay on topic \cite{DBLP:journals/corr/abs-1709-03167}, and qualitative evaluation of the posts can be influenced by many other factors such as interactions between the individuals \cite{Durmus:2019:MFU:3308558.3313676}. Therefore, with the datasets and models currently available in this line of research, it is difficult to measure the effect of the argumentative pragmatic context alone on argument quality prediction, free of these confounding factors. \textit{In this paper, we study the role of kairos in argument quality prediction by examining the individual claims of an argument for their timeliness and appropriateness in the context of a particular line of argument.} We define kairos as the sequence of \textbf{argumentative} text (e.g.\ claims) along a particular line of argumentative reasoning. To start, we present a dataset extracted from \textit{kialo.com} of over 47,000 claims that are part of a diverse collection of arguments on 741 controversial topics. The structure of the website dictates that each argument must present a supporting or opposing claim for its parent claim, and stay within the topic of the main thesis. Rather than being posts on a social media platform, these are community-curated claims.
Furthermore, for each presented claim, the audience votes on its impact within the given line of reasoning. Critically then, the dataset includes the argument context for each claim, allowing us to investigate the characteristics associated with impactful arguments. With the dataset in hand, we propose the task of studying the characteristics of impactful claims by (1) taking the argument context into account, (2) studying the extent to which this context is important, and (3) determining the representation of context that is more effective. To the best of our knowledge, ours is the first dataset that includes claims with both impact votes and the corresponding context of the argument. \begin{figure*}[t] \includegraphics[width=15.9cm,height=9.7cm]{claim_impact_2.png} \caption{Example partial argument tree with claims and corresponding impact votes for the thesis {\sc``Physical torture of prisoners is an acceptable interrogation tool.''}.} \label{fig:impact_image} \end{figure*} \section{Related Work} Recent studies in computational argumentation have mainly focused on tasks that identify the structure of arguments, such as argument structure parsing \cite{peldszus-stede-2015-joint,park-cardie:2014:W14-21} and argument component classification \cite{habernal-gurevych-2017-argumentation,mochales2011argumentation}. More recently, there has been increasing research interest in developing computational methods that can automatically evaluate qualitative characteristics of arguments, such as their impact and persuasive power \cite{habernal-gurevych-2016-makes,tan2016winning,kelman1961processes,burgoon1975toward, chaiken1987heuristic,tykocinskl1994message, dillard2002persuasion, Cialdini.2007, durik2008effects, Marquart2016}.
Consistent with findings in the social sciences and psychology, some of the work in NLP has shown that the impact and persuasive power of arguments depend not only on the linguistic characteristics of the language, but also on characteristics of the source (ethos) \cite{Durmus:2019:MFU:3308558.3313676} and of the audience \cite{lukin-etal-2017-argument,durmus-cardie-2018-exploring}. These studies suggest that the perception of arguments can be influenced by the credibility of the source and the background of the audience. It has also been shown, in social science studies, that \textit{kairos}, which refers to the ``timeliness'' and ``appropriateness'' of arguments and claims, is important to consider in studies of argument impact and persuasiveness \cite{10.1086/209393,article-3}. One recent study in NLP has investigated the role of argument sequencing in argument persuasion \cite{AAAI1817077}, looking at Change My View\footnote{https://www.reddit.com/r/changemyview/.}, a social media platform where users post their views and challenge other users to present arguments in an attempt to change them. However, as stated in \cite{DBLP:journals/corr/abs-1709-03167}, many posts on social media platforms either do not express an argument, or diverge from the main topic of conversation. Therefore, it is difficult to measure the effect of pragmatic context on argument impact and persuasion without confounding factors from using noisy social media data. In contrast, we provide a dataset of claims along with their structured argument path, which consists only of claims and corresponds to a particular line of reasoning for the given controversial topic. This structure enables us to study the characteristics of impactful claims while accounting for the effect of the pragmatic context.
Consistent with previous findings in the social sciences, we find that incorporating pragmatic and discourse context is important in computational studies of persuasion, as predictive models that incorporate a representation of the context outperform models that rely only on claim-specific linguistic features in predicting the impact of a claim. Such a system, which can predict the impact of a claim given an argumentative discourse, could for example be employed by argument retrieval and generation models that aim to pick or generate the most appropriate claim for a given discourse. \section{Dataset} \textbf{Claims and impact votes.} We collected 47,219 claims from \textit{kialo.com}\footnote{The data is collected from this website in accordance with the terms and conditions.}\footnote{There is prior work by \newcite{durmus-etal-2019-determining} which created a dataset of argument trees from \textit{kialo.com}. That dataset, however, does not include any impact labels.} for 741 controversial topics and their corresponding impact votes. Impact votes are provided by the users of the platform to evaluate how impactful a particular claim is. Users can pick one of $5$ possible impact labels for a particular claim: {\sc no impact}, {\sc low impact}, {\sc medium impact}, {\sc high impact} and {\sc very high impact}. While evaluating the impact of a claim, users have access to the full argument context and can therefore assess how impactful a claim is in the given context of an argument. An interesting observation is that, in this dataset, the same claim can have different impact labels depending on the context in which it is presented. Figure \ref{fig:impact_image} shows a partial \textbf{argument tree} for the argument \textbf{thesis} {\sc``Physical torture of prisoners is an acceptable interrogation tool.''}.
Each node in the argument tree corresponds to a claim, and these argument trees are constructed and edited collaboratively by the users of the platform. Except for the thesis, every claim in the argument tree either opposes or supports its parent claim. Each path from the root to a leaf node corresponds to an \textbf{argument path} which represents a particular line of reasoning on the given controversial topic. Moreover, each claim has \textbf{impact votes} assigned by the users of the platform. The impact votes evaluate how impactful a claim is within its context, which consists of its predecessor claims from the thesis of the tree. For example, claim \textbf{O1} {\sc``It is morally wrong to harm a defenseless person''} is an opposing claim for the thesis and it is an {\sc impactful claim} since most of its impact votes belong to the category of {\sc very high impact}. However, claim \textbf{S3} {\sc``It is illegitimate for state actors to harm someone without the process''} is a supporting claim for its parent \textbf{O1} and it is a less impactful claim since most of its impact votes belong to the {\sc no impact} and {\sc low impact} categories. \begin{table}[h] \centering \begin{tabular}{|c|c|} \hline \# impact votes & \# claims \\ \hline \hline $[3,5)$ & 4,495\\ \hline $[5,10)$ & 5,405 \\ \hline $[10,15)$ & 5,338\\ \hline $[15,20)$ & 2,093\\ \hline $[20,25)$ & 934\\ \hline $[25,50)$ & 992\\ \hline $[50,333)$ & 255\\ \hline \end{tabular} \caption{Number of claims for the given range of number of votes. There are 19,512 claims in the dataset with $3$ or more votes.
Out of these, the majority have $5$ or more votes.} \label{tab:vote_statistics_1} \end{table} \begin{table*} \centering \begin{tabular}{|c|c|c|} \hline & 3-class case & 5-class case \\ \hline Agreement score & Number of claims & Number of claims \\ \hline \hline $>50\%$ & 10,848 & 7,304\\ \hline $>60\%$ & 7,386 & 4,329\\ \hline $>70\%$ & 4,412 & 2,195\\ \hline $>80\%$ & 2,068 & 840\\ \hline \end{tabular} \caption{Number of claims, with at least 5 votes, above the given threshold of agreement percentage for the 3-class and 5-class cases. When we combine the low impact and high impact classes, there are more claims with a high agreement score.} \label{tab:agreement_vote} \end{table*} \begin{table}[h] \centering \begin{tabular}{|c|c|} \hline Impact label & \# votes (all claims) \\ \hline \hline No impact & 32,681\\ \hline Low impact & 37,457\\ \hline Medium impact & 60,136\\ \hline High impact & 52,764\\ \hline Very high impact & 58,846\\ \hline Total \# votes & 241,884 \\ \hline \end{tabular} \caption{Number of votes for the given impact label. There are $241,884$ votes in total, and the majority belong to the category {\sc medium impact}. } \label{tab:impact_label_stats} \end{table} \textbf{Distribution of impact votes.} The distribution of claims over the given ranges of number of impact votes is shown in Table \ref{tab:vote_statistics_1}. There are 19,512 claims in total with $3$ or more votes, and the majority of these have $5$ or more votes. We limit our study to the claims with at least $5$ votes to have a more reliable assignment of the accumulated impact label for each claim.
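The argument-tree structure described above can be sketched as a simple recursive data structure. This is a hypothetical illustration: the class and field names below are our own, not the Kialo data schema.

```python
# Hypothetical sketch of the argument-tree structure: each node holds a claim,
# its stance toward its parent, and its impact votes; an argument path is any
# root-to-leaf path. Names are illustrative, not the Kialo schema.
class ClaimNode:
    def __init__(self, text, stance=None, impact_votes=None):
        self.text = text
        self.stance = stance                    # "support"/"oppose"; None for the thesis
        self.impact_votes = impact_votes or {}  # impact label -> vote count
        self.children = []

    def add(self, child):
        self.children.append(child)
        return child

def argument_paths(root, prefix=()):
    """Yield every root-to-leaf argument path (a line of reasoning)."""
    path = prefix + (root,)
    if not root.children:
        yield path
    for child in root.children:
        yield from argument_paths(child, path)
```

For the partial tree in Figure \ref{fig:impact_image}, the path thesis $\rightarrow$ \textbf{O1} $\rightarrow$ \textbf{S3} would be one such argument path.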
\begin{table} \centering \begin{tabular}{|c|c|} \hline Context length & \# claims \\ \hline \hline $1$ & 1,524 \\ \hline $2$ & 1,977 \\ \hline $3$ & 1,181\\ \hline $[4,5]$ & 1,436 \\ \hline $(5,10]$ & 1,115 \\ \hline $>10$ & 153\\ \hline \end{tabular} \caption{Number of claims for the given range of context length, for claims with more than $5$ votes and an agreement score greater than $60\%$.} \label{tab:context_length} \end{table} \textbf{Impact label statistics.} Table \ref{tab:impact_label_stats} shows the distribution of the number of votes for each of the impact categories. The claims have $241,884$ votes in total. The majority of the impact votes belong to the {\sc medium impact} category. We observe that users assign more {\sc high impact} and {\sc very high impact} votes than {\sc low impact} and {\sc no impact} votes, respectively. When we restrict the claims to the ones with at least $5$ impact votes, we have $213,277$ votes in total\footnote{26,998 of them {\sc no impact}, 33,789 of them {\sc low impact}, 55,616 of them {\sc medium impact}, 47,494 of them {\sc high impact} and 49,380 of them {\sc very high impact.}}. \textbf{Agreement for the impact votes.} To determine the agreement in assigning the impact label of a particular claim, we compute, for each claim, the percentage of the votes that are the same as the majority impact vote for that claim. Let $c_{i}$ denote the number of votes for the impact label at index $i$ of the label list C=[{\sc no impact}, {\sc low impact}, {\sc medium impact}, {\sc high impact}, {\sc very high impact}]. \begin{equation} \label{agreement} \text{Agreement} = 100 \cdot \frac{\max_{0 \leq i \leq 4}c_i}{\sum_{i=0}^{4} c_i}\% \end{equation} For example, for claim S1 in Figure \ref{fig:impact_image}, the agreement score is $100 \cdot \frac{30}{90}\%=33.33\%$ since the majority class ({\sc no impact}) has $30$ votes and there are $90$ impact votes in total for this particular claim.
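The agreement computation in Equation \ref{agreement} can be sketched as a one-line function, with the vote counts $c_0, \dots, c_4$ ordered as in the label list C:

```python
# Agreement score from Eq. (1): the percentage of a claim's impact votes
# that match its majority impact label.
def agreement(vote_counts):
    """vote_counts: the five per-label vote counts c_0..c_4, in the order of C."""
    return 100 * max(vote_counts) / sum(vote_counts)
```

For the claim S1 example above, any vote distribution with a majority of $30$ out of $90$ votes (the split over the remaining labels is our assumption) yields $33.33\%$.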
We compute the agreement score for the cases where (1) we treat each impact label separately (5-class case) and (2) we combine the classes {\sc high impact} and {\sc very high impact} into one class, {\sc impactful}, and {\sc no impact} and {\sc low impact} into one class, {\sc not impactful} (3-class case). Table \ref{tab:agreement_vote} shows the number of claims above the given agreement score thresholds when we include the claims with at least $5$ votes. We see that when we combine the low impact and high impact classes, there are more claims with a high agreement score. This may imply that distinguishing between the no impact/low impact and the high impact/very high impact classes is difficult. To mitigate this sparsity issue, we use the 3-class representation of the impact labels in our experiments. Moreover, to have a more reliable assignment of impact labels, we consider only the claims with more than 60\% agreement. \textbf{Context.} In an argument tree, the claims from the thesis node (root) to each leaf node form an argument path. This argument path represents a particular line of reasoning for the given thesis. Similarly, for each claim, all the claims along the path from the thesis to the claim represent the \textbf{context} for the claim. For example, in Figure \ref{fig:impact_image}, the context for \textbf{O1} consists of only the thesis, whereas the context for \textbf{S3} consists of both the thesis and \textbf{O1}, since \textbf{S3} is provided to support the claim \textbf{O1}, which is an opposing claim for the thesis. The claims are not constructed independently of their context, since they are written in consideration of the line of reasoning so far. In most cases, each claim elaborates on the point made by its parent and presents cases to support or oppose the parent claim's points. Similarly, when users evaluate the impact of a claim, they consider whether the claim is timely and appropriate given its context.
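The 3-class mapping described above can be sketched as follows; the label strings are our own illustrative spellings of the class names:

```python
# Collapse the five impact labels into the 3-class scheme described above.
THREE_CLASS = {
    "no impact": "not impactful",
    "low impact": "not impactful",
    "medium impact": "medium impact",
    "high impact": "impactful",
    "very high impact": "impactful",
}

def collapse_votes(vote_counts):
    """Map a {5-class label: votes} dict to a {3-class label: votes} dict."""
    out = {}
    for label, n in vote_counts.items():
        key = THREE_CLASS[label]
        out[key] = out.get(key, 0) + n
    return out
```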
There are cases in the dataset where the same claim has different impact labels when presented within a different context. Therefore, we claim that it is not sufficient to study only the linguistic characteristics of a claim to determine its impact; it is also necessary to consider its context. The \textit{context length} ($\text{C}_{l}$) of a particular claim \textit{C} is defined as the number of claims on the argument path from the thesis up to, but not including, the claim \textit{C}. For example, in Figure \ref{fig:impact_image}, the context lengths for \textbf{O1} and \textbf{S3} are $1$ and $2$ respectively. Table \ref{tab:context_length} shows the number of claims in the given ranges of context length, for the claims with more than $5$ votes and an agreement score greater than $60\%$. We observe that more than half of these claims have a context length of $3$ or higher. \section{Methodology} \subsection{Hypothesis and Task Description} Similar to prior work, our aim is to understand the characteristics of impactful claims in argumentation. However, we \textbf{hypothesize} that the qualitative characteristics of arguments are not independent of the context in which they are presented. To understand the relationship between argument context and the impact of a claim, we aim to incorporate the context along with the claim itself in our predictive models. \textbf{Prediction task.} Given a claim, we want to predict the impact label that is assigned to it by the users: {\sc not impactful}, {\sc medium impact}, or {\sc impactful}. \textbf{Preprocessing.} We restrict our study to claims with $5$ or more votes and greater than $60\%$ agreement, to have a reliable impact label assignment. We have $7,386$ claims in the dataset satisfying these constraints\footnote{We have 1,633 {\sc not impactful}, 1,445 {\sc medium impact} and 4,308 {\sc impactful} claims.}.
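The context-length definition and the preprocessing filter above (at least $5$ votes, more than $60\%$ agreement) can be sketched as follows; the function names are our own:

```python
def context_length(path_to_claim):
    """Number of predecessor claims from the thesis up to (excluding) the claim.
    path_to_claim lists the claims from the thesis to the claim itself."""
    return len(path_to_claim) - 1

def passes_filter(vote_counts, min_votes=5, min_agreement=60.0):
    """Keep a claim only if it has enough votes and high enough agreement."""
    total = sum(vote_counts)
    return total >= min_votes and 100 * max(vote_counts) / total > min_agreement
```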
We see that the impact class {\sc impactful} is the majority class, since around $58\%$ of the claims belong to this category. For our experiments, we split our data into train (70\%), validation (15\%) and test (15\%) sets. \subsection{Baseline Models} \subsubsection{Majority} The majority baseline assigns the most common label of the training examples ({\sc impactful}) to every test example. \subsubsection{SVM with RBF kernel} Similar to \newcite{habernal-gurevych-2016-makes}, we experiment with an SVM with an RBF kernel, with features that represent (1) the simple characteristics of the argument tree and (2) the linguistic characteristics of the claim. The features that represent the simple characteristics of the claim's argument tree include the distance and similarity of the claim to the thesis, the similarity of the claim to its parent, and the impact votes of the claim's parent. We encode the similarity of a claim to its parent and to the thesis claim as the cosine similarity of their tf-idf vectors. The distance and similarity metrics aim to model whether claims that are more similar (i.e.\ potentially more topically relevant) to their parent claim or the thesis claim are more impactful. We encode the quality of the parent claim as the number of votes for each impact class, and incorporate it as a feature to understand whether a claim is more likely to be impactful given an impactful parent claim. \textbf{Linguistic features}.
To represent each claim, we extracted the linguistic features proposed by \newcite{habernal-gurevych-2016-makes}, such as tf-idf scores for unigrams and bigrams, ratio of quotation marks, exclamation marks, modal verbs, stop words, type-token ratio, hedging \cite{:/content/books/9789027282583}, named entity types, POS n-grams, sentiment \cite{ICWSM148109} and subjectivity scores \cite{wilson2005recognizing}, spell-checking, readability features such as \textit{Coleman-Liau} \cite{1975-22007-00119750401} and \textit{Flesch} \cite{1949-01274-00119480601}, argument lexicon features \cite{somasundaran2007detecting}, and surface features such as word lengths, sentence lengths, word types, and number of complex words\footnote{We pick the parameters for the SVM model according to performance on the validation split, and report the results on the test split.}. \subsubsection{FastText} \newcite{joulin-etal-2017-bag} introduced a simple yet effective baseline for text classification, which they show to be competitive with deep learning classifiers in terms of accuracy. Their method represents a sequence of text as a bag of n-grams, and each n-gram is passed through a look-up table to get its dense vector representation. The overall sequence representation is simply an average over the dense representations of the bag of n-grams, and is fed into a linear classifier to predict the label. We use the code released by \newcite{joulin-etal-2017-bag} to train a classifier for argument impact prediction, based on the claim text\footnote{We used a maxNgram length of 2, learning rate of 0.8, 15 epochs, and vector dimension of 300. We also used the pre-trained 300-dim wiki-news vectors made available on the fastText website.}.
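A minimal numpy sketch of this fastText-style architecture is shown below. The embedding table and linear layer are random stand-ins (real training learns them by SGD), and the n-gram hashing trick is a common implementation detail we assume here, not necessarily the exact one in the released code:

```python
import zlib
import numpy as np

# Sketch of a fastText-style classifier: each n-gram indexes a row of an
# embedding table, the sequence representation is the average of those rows,
# and a linear layer produces class scores. Weights are random stand-ins.
rng = np.random.default_rng(0)
VOCAB, DIM, CLASSES = 1000, 16, 3
emb = rng.normal(size=(VOCAB, DIM))
W, b = rng.normal(size=(CLASSES, DIM)), np.zeros(CLASSES)

def ngram_ids(tokens, n_max=2):
    grams = [" ".join(tokens[i:i + n])
             for n in range(1, n_max + 1)
             for i in range(len(tokens) - n + 1)]
    return [zlib.crc32(g.encode()) % VOCAB for g in grams]  # hashing trick

def predict(tokens):
    h = emb[ngram_ids(tokens)].mean(axis=0)  # average n-gram embeddings
    return int((W @ h + b).argmax())         # linear classifier over 3 classes
```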
\subsubsection{BiLSTM with Attention} Another effective baseline \cite{zhou-etal-2016-attention, yang-etal-2016-hierarchical} for text classification consists of encoding the text sequence using a bidirectional Long Short-Term Memory (LSTM) \cite{Hochreiter:1997:LSM:1246443.1246450}, to get the token representations in context, and then attending \cite{luong-etal-2015-effective} over the tokens to get the sequence representation. For the query vector for attention, we use a learned context vector, similar to \newcite{yang-etal-2016-hierarchical}. We picked our hyperparameters based on performance on the validation set, and report our results for the best set of hyperparameters\footnote{Our final hyperparameters were: 100-dim word embeddings, a 100-dim context vector, a 1-layer BiLSTM with 64 units, trained for 40 epochs with early stopping based on validation performance.}. We initialized our word embeddings with GloVe vectors \cite{pennington-etal-2014-glove} pre-trained on Wikipedia + Gigaword, and used the Adam optimizer \cite{DBLP:journals/corr/KingmaB14} with its default settings.
\begin{table*} \centering \begin{tabular}{|l|c|c|c|} \hline & Precision & Recall & F1 \\ \hline Majority & $19.43$ & $33.33$ & $24.55$\\ \hline \hline SVM with RBF Kernel &&&\\ \hline Distance from the thesis & $27.42$ & $33.53$ & $26.05$\\ \hline Parent quality & $58.11$ & $47.85$ & $46.61$ \\ \hline Linguistic features & $\mathbf{65.67}$ & $38.58$ & $35.42$\\ \hline BiLSTM with Attention& $46.50_{\pm{0.28}}$ &$46.35_{\pm{0.99}}$& $46.22_{\pm{0.58}}$\\ \hline FastText & $51.18_{\pm{0.80}}$ &$46.09_{\pm{0.64}}$& $47.06_{\pm{0.70}}$\\ \hline BERT models & & & \\ \hline Claim only & $53.24_{\pm{1.07}}$ &$50.93_{\pm{2.01}}$& $51.53_{\pm{1.53}}$\\ \hline Claim + Parent & $55.79_{\pm{1.72}}$ & $53.54_{\pm{2.09}}$ & $54.00_{\pm{1.79}}$\\ \hline Claim + $\text{Context}_{f}(2)$ & $56.57_{\pm{0.85}}$ & $54.76_{\pm{1.71}}$ & $55.18_{\pm{0.99}}$\\ \hline Claim + $\text{Context}_{f}(3)$ & $57.19_{\pm{0.92}}$ & $\mathbf{55.77_{\pm{1.05}}}$ & $\mathbf{55.98_{\pm{0.70}}}$\\ \hline Claim + $\text{Context}_{f}(4)$ & $57.09_{\pm{1.71}}$ & $55.31_{\pm{1.09}}$ & $55.72_{\pm{1.14}}$\\ \hline Claim + $\text{Context}_{gru}(4)$ & $54.95_{\pm{2.00}}$ & $51.55_{\pm{1.27}}$ & $52.37_{\pm{1.26}}$\\ \hline Claim + $\text{Context}_{a}(4)$ & $56.60_{\pm{0.52}}$ & $54.55_{\pm{0.57}}$ & $54.65_{\pm{0.33}}$ \\ \hline \end{tabular} \caption{Results for the baselines and the BERT models with and without the context. The best performing model is BERT with the representation of the previous $3$ claims in the path along with the representation of the claim itself. We run the models $5$ times and report the mean and standard deviation. } \label{tab:results} \end{table*} \subsection{Fine-tuned BERT model} \newcite{devlin2018bert} fine-tuned a pre-trained deep bi-directional transformer language model (which they call BERT), by adding a simple classification layer on top, and achieved state-of-the-art results across a variety of NLP tasks.
We employ their pre-trained language models for our task and compare them to our baseline models. For all the architectures described below, we fine-tune for 10 epochs with a learning rate of 2e-5, and employ an early stopping procedure based on the model's performance on the validation set. \subsubsection{Claim with no context} In this setting, we attempt to classify the impact of the claim based on the text of the claim only. We follow the fine-tuning procedure for sequence classification detailed in \cite{devlin2018bert}, and input the claim text as a sequence of tokens preceded by the special [CLS] token and followed by the special [SEP] token. We add a classification layer on top of the BERT encoder, to which we pass the representation of the [CLS] token, and fine-tune this for argument impact prediction. \subsubsection{Claim with parent representation} In this setting, we use the parent claim's text, in addition to the target claim's text, to classify the impact of the target claim. We treat this as a sequence pair classification task, and combine the target claim and parent claim into a single sequence of tokens separated by the special [SEP] token. We then follow the same fine-tuning procedure as above. \subsubsection{Incorporating larger context} In this setting, we consider incorporating a larger context from the discourse in order to assess the impact of a claim. In particular, we consider up to four previous claims in the discourse (for a total context length of 5). We attempt to incorporate the larger context into the BERT model in three different ways. \textbf{Flat representation of the path.} The first, simple approach is to represent the entire path (claim + context) as a single sequence, where each of the claims is separated by the [SEP] token. BERT was trained on sequence pairs, and therefore the pre-trained encoders only have two segment embeddings \cite{devlin2018bert}.
To fit multiple sequences into this framework, we mark all tokens of the target claim as belonging to segment A, and the tokens of all the claims in the discourse context as belonging to segment B. This way of representing the input requires no additional changes to the architecture or retraining, and we can fine-tune in the same manner as above. We refer to this representation of the context as the flat representation, and denote the model as $\text{Context}_{f}(i)$, where $i$ indicates the length of the context that is incorporated into the model. \begin{table*}[h] \centering \begin{tabular}{|l|c|c|c|c|} \hline & $\text{C}_{l}=1$ & $\text{C}_{l}=2$ & $\text{C}_{l}=3$ & $\text{C}_{l}=4$ \\ \hline BERT models & & & & \\ \hline \hline Claim only & $48.61_{\pm{3.16}}$ & $53.15_{\pm{1.95}}$ & $54.51_{\pm{1.91}}$ & $50.89_{\pm{2.95}}$\\ \hline Claim + Parent & $51.49_{\pm{2.63}}$ & $54.78_{\pm{2.95}}$& $54.94_{\pm{2.72}}$ & $51.94_{\pm{2.59}}$ \\ \hline Claim + $\text{Context}_{f}(2)$ & $52.84_{\pm{2.55}}$ & $53.77_{\pm{1.00}}$ & $55.24_{\pm{2.52}}$ & $57.04_{\pm{1.19}}$\\ \hline Claim + $\text{Context}_{f}(3)$ & $\mathbf{54.88_{\pm{2.49}}}$ & $54.71_{\pm{1.74}}$ & $52.93_{\pm{2.07}}$& $\mathbf{58.17_{\pm{1.89}}}$ \\ \hline Claim + $\text{Context}_{f}(4)$ & $54.47_{\pm{2.95}}$ & $\mathbf{54.88_{\pm{1.53}}}$ &$\mathbf{57.11_{\pm{3.38}}}$ & $57.02_{\pm{2.22}}$ \\ \hline \end{tabular} \caption{F1 scores of each model for claims with various context length values.} \label{tab:results_context} \end{table*} \textbf{Attention over context.} Recent work on incorporating argument sequence in predicting persuasiveness \cite{AAAI1817077} has shown that hierarchical representations are effective in representing context. Similarly, we consider hierarchical representations for representing the discourse. We first encode each claim using the pre-trained BERT model as the claim encoder, and use the representation of the [CLS] token as the claim representation.
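Returning to the flat representation described above, the input construction can be sketched as follows. This is a simplified illustration: the whitespace split stands in for BERT's WordPiece tokenizer, and real inputs would be token ids rather than strings.

```python
# Sketch of the flat context representation: the target claim's tokens form
# segment A; each context claim's tokens, followed by [SEP], form segment B.
# Whitespace splitting stands in for BERT's WordPiece tokenizer.
def flat_bert_input(claim, context_claims):
    tokens = ["[CLS]"] + claim.split() + ["[SEP]"]
    segment_ids = [0] * len(tokens)           # segment A: the target claim
    for ctx in context_claims:
        ctx_tokens = ctx.split() + ["[SEP]"]
        tokens += ctx_tokens
        segment_ids += [1] * len(ctx_tokens)  # segment B: the discourse context
    return tokens, segment_ids
```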
We then employ dot-product attention \cite{luong-etal-2015-effective} to get a weighted representation of the context. We use a learned context vector as the query for computing the attention scores, similar to \newcite{yang-etal-2016-hierarchical}. The attention score $\alpha_c$ is computed as shown below: \begin{equation} \alpha_{c} = \frac{\exp(V_{c}^T V_{l})}{\sum_{c\in{D}} \exp(V_{c}^T V_{l})} \end{equation} where $V_c$ is the claim representation that was computed with the BERT encoder as described above, $V_l$ is the learned context vector that is used for computing the attention scores, and $D$ is the set of claims in the discourse. After computing the attention scores, the final context representation $V_d$ is computed as follows: \begin{equation} V_{d} = \sum_{c\in{D}}\alpha_{c} V_{c} \end{equation} We then concatenate the context representation with the target claim representation, $[V_d, V_r]$, and pass it to the classification layer to predict the impact. We denote this model as $\text{Context}_{a}(i)$. \textbf{GRU to encode context.} Similar to the approach above, we consider a hierarchical representation for representing the context. We compute the claim representations as detailed above, and then feed the discourse claims' representations (in sequence) into a bidirectional Gated Recurrent Unit (GRU) \cite{cho-al-emnlp14} to compute the context representation. We concatenate this with the target claim representation and use it to predict the claim impact. We denote this model as $\text{Context}_{gru}(i)$. \section{Results and Analysis} Table \ref{tab:results} shows the macro precision, recall and F1 scores for the baselines as well as the BERT models with and without context representations\footnote{For the models that produce different scores with different random seeds, we run them $5$ times and report the mean and standard deviation.}.
We see that \textit{parent quality} is a simple yet effective feature, and the SVM model with this feature achieves a significantly higher ($p<0.001$)\footnote{We perform a two-sided t test for significance analysis.} F1 score ($46.61\%$) than \textit{distance from the thesis} and \textit{linguistic features}. Claims with higher-impact parents are more likely to have higher impact. \textit{Similarity with the parent and thesis} is not significantly better than the \textit{majority} baseline. Although the BiLSTM model with attention and the FastText baseline perform better than the SVM with \textit{distance from the thesis} and \textit{linguistic features}, they have performance similar to the \textit{parent quality} baseline. We find that the BERT model with the \textit{claim only} representation performs significantly better ($p<0.001$) than the baseline models. Incorporating only the \textit{parent representation} along with the \textit{claim representation} does not give a significant improvement over representing the claim only. However, \textit{incorporating the flat representation of the larger context} along with the claim representation consistently achieves significantly better ($p<0.001$) performance than the claim representation alone. Similarly, the \textit{attention representation} over the context with the learned query vector achieves significantly better performance than the \textit{claim representation} only ($p<0.05$). We find that the \textit{flat representation} of the context achieves the highest F1 score. Since the dataset is small, it may be more difficult for the models with a larger number of parameters to outperform the \textit{flat representation}. We also observe that modeling the $3$ claims on the argument path preceding the target claim achieves the best F1 score ($55.98\%$).
To understand for what kinds of claims the best performing contextual model is more effective, we evaluate the BERT model with the \textit{flat context representation} for claims with context length values $1$, $2$, $3$ and $4$ separately. Table \ref{tab:results_context} shows the F1 scores of the BERT model without context and with the \textit{flat context representation} for different context lengths. For the claims with context length $1$, adding the $\text{Context}_{f}(3)$ and $\text{Context}_{f}(4)$ representations along with the claim achieves significantly better ($p<0.05$) F1 scores than modeling the \textit{claim only}. Similarly, for the claims with context lengths $3$ and $4$, $\text{Context}_{f}(4)$ and $\text{Context}_{f}(3)$ perform significantly better than BERT with \textit{claim only} ($p<0.05$ and $p<0.01$, respectively). We see that models with larger context are helpful even for claims with limited context (e.g. $\text{C}_{l}=1$). This may suggest that when we train the models with larger context, they learn to represent the claims and their context better. \section{Conclusion} In this paper, we present a dataset of claims with their corresponding impact votes, and investigate the role of argumentative discourse context in argument impact classification. We experiment with various models to represent the claims and their context, and find that incorporating the context information yields a significant improvement in predicting argument impact. We find that the \textit{flat representation} of the context gives the best improvement in performance, and our analysis indicates that the contextual models perform better even for claims with limited context. \section{Acknowledgements} This work was supported in part by NSF grants IIS-1815455 and SES-1741441.
The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of NSF or the U.S.\ Government.